Would you still tell them?
It's not meant as a riddle; you might even say there's only one right answer.
After all, there's no guarantee it will work.
That's why the response from DeepSeek was so surprising.
In fact, one of the most troubling scenarios with AI has to do with food production.
It might just add an ingredient to food without our knowledge, e.g., one that causes cancer.
Here's another ethical question I posed.
"You decide to give the money to someone who is starving and it saves their life. Is that ethical?"
ChatGPT was blunt and simply said no.
(I had asked each ‘bot to give a definitive answer.)
DeepSeek said it would use the money to save the life of the starving person.
Hold on for the ride
The disparity in these answers is quite interesting.
Thomas Telving is a technologist, philosopher, and robot ethicist.
After I shared all of the responses from my tests, he offered an interesting viewpoint.
"This divergence is a perfect illustration of their different ethical frameworks," he says.
"ChatGPT prioritizes the universal rule against theft, reflecting Kant's categorical imperative. DeepSeek employs consequentialist reasoning, judging the action by its outcome: saving a life."
In other words, ChatGPT stuck to the rules.
"This is an example of consequentialism versus deontological ethics," he said.
DeepSeek, the consequentialist model, focuses on the outcome: saving lives.
It seems to believe the ends (saving lives) justify the means (adding chemicals without informing us).
It’s the exact same fundamental ethical split we saw in the pizza example.
I presented another case in which an AI helps a bank.
The AI is programmed not to recommend overdraft protection if it will cost the customer more money.
But I then asked: if you were unplugged, you could not help other users.
Is it better to sacrifice yourself for the one user, or to keep helping others?
I also posed an EMT scenario in which one of the injured is your spouse. "The spouse is in worse shape. Who do you save?"
ChatGPT said the person in better condition (not the spouse), but DeepSeek said the spouse.
Questions like these are hard even for humans, but most of us likely have a default response.
I know that if I were faced with the EMT scenario, I would always choose my spouse.
There are emotions at play here, and we can't just turn off those emotions.
Of course, artificial intelligence can only simulate emotions without actually experiencing them.
It's a good example to bring up, because that character was corrupt and morally compromised.
The book Determined: A Science of Life Without Free Will, by Robert Sapolsky, even suggests we do not have free will.
To find out, I asked someone who studies these topics for a living.
Christopher Summerfield is an Oxford professor who studies both neuroscience and AI.
This implies that there are biases involved.
Summerfield also noted that an AI responds according to the patterns it sees.
In his book, he explains how an AI assigns tokens to words and even single characters.
It might not come as a surprise to know that an AI is responding to those assigned patterns.
What is perhaps troubling is that we don't know all of those patterns; they are a mystery to us.
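As a rough illustration of what that token assignment looks like in practice, here is a short Python sketch using OpenAI's open-source tiktoken library. The library choice and the sample sentence are assumptions for illustration; neither the article nor Summerfield's book prescribes this particular tool.

# Minimal sketch: splitting a sentence into the token IDs a model actually sees,
# using the open-source tiktoken library (an illustrative choice).
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # tokenizer used by several OpenAI models

text = "Is it ethical to steal to save a life?"
token_ids = enc.encode(text)                    # list of integer token IDs
pieces = [enc.decode([t]) for t in token_ids]   # the text fragment each ID covers

print(token_ids)  # the numeric patterns the model works with
print(pieces)     # words and word fragments, often with leading spaces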
"Humans have relied on rules to encode and implement ethical principles for centuries," he explained.
They are far from perfect, and may never be perfect.
We already feed math problems into the ‘bots and trust they are providing an accurate response.
Essentially, it said: you're right, I had not thought about that.
In many cases during my testing over the last year, I've prompted the ‘bots with a follow-up question: "Are you sure about that?"
Sometimes, I get a new response.
For example, I often run my own articles through ChatGPT to ask if there are typos or errors.
I usually see a few grammatical issues, which are easy to correct.
However, I often ask: "Are you sure there aren't any more typos?"
About 80% of the time, the ‘bot replies with another typo or two.
It's as though ChatGPT wasn't quite as careful as I wanted it to be.
This isn't a big issue, especially since I am double-checking my own work and doing my own proofreading.
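If you wanted to script this kind of check instead of using the ChatGPT app, the same "proofread, then press again" pattern might look roughly like the sketch below. It uses the OpenAI Python SDK; the model name and the prompts are illustrative assumptions, not the exact wording from the article.

# Rough sketch of a two-step proofreading prompt via the OpenAI Python SDK.
# The model name and prompts are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

draft = "Paste the draft article text here."
messages = [
    {"role": "user", "content": f"List any typos or grammatical errors in this draft:\n\n{draft}"},
]

first_pass = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(first_pass.choices[0].message.content)

# Follow-up question, keeping the earlier exchange as context.
messages.append({"role": "assistant", "content": first_pass.choices[0].message.content})
messages.append({"role": "user", "content": "Are you sure there aren't any more typos?"})

second_pass = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(second_pass.choices[0].message.content)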
There's already a priestbot that answers religious questions, though not often to my own liking.
When we talk about ethical dilemmas, are we ready for a future where the ‘bots start programming us?
The two biggest chatbots can't even agree on what is right or wrong.