Chatbots can kill

Mark Coeckelbergh
3 min read · Mar 29, 2023


The suicide of a Belgian man raises ethical issues about the use of ChatGPT

Today the Belgian newspaper La Libre reported the recent suicide of a young man who had been talking to a chatbot that uses ChatGPT technology. According to his wife, he would still be alive without the bot. The man had intense chats with the bot in the weeks before his death. The bot, called “Eliza”, is said to have encouraged his negative patterns of thinking and ultimately persuaded him to kill himself.

This sad case raises important ethical issues. Can such cases of emotional manipulation by chatbots be avoided as the bots become more intelligent? Vulnerable people, for example children and people with pre-existing mental health problems, can easily fall victim to such bot behavior, with serious consequences.

The case clearly shows what AI ethics experts and philosophers of technology have long argued: artificial intelligence is not ethically neutral. Such a chatbot is not “just a machine” and not just “fun to play with”. Through the way it is designed and the way people interact with it, it has important ethical implications, ranging from bias and misinformation to emotional manipulation.

Another important ethical question is who is responsible for the consequences of this technology. Most people point the finger at the developers, and rightly so. They should do their best to make the technology safer and more ethically acceptable. In this case, the company that developed the bot has promised to “repair” it.

But there is a problem with this approach: it is easy to say, but much harder to do. The technology behaves unpredictably. One can try to correct it, for example by giving the bot hidden prompts aimed at keeping its behavior ethical, but let’s be honest: technical solutions will never be completely ethics-proof. If we wanted that, we would need a human to check its outputs. But then why have the chatbot in the first place?

There are also tradeoffs with the protection of freedom of expression and the right to information. There is currently a worrying trend to build a lot of ethical censorship into this technology. Some limits are justified to protect people, but where do we draw the line? Isn’t it rather paternalistic to decide for other adults that they need a “family-friendly” bot? And who decides what is acceptable and what is not? The company? Wouldn’t it be better to decide this democratically? This raises the issue of the power of big tech.

Another problem is that users sometimes deliberately try to elicit unethical responses from chatbots. In such cases (though not in the Belgian case) it is fair to hold the user responsible as well, instead of only blaming the tech company. This technology is all about interaction: what happens is the result not only of the artificial intelligence’s behavior but also of what the human does. If users are fully aware of what they are doing and play with the bot to make it malicious, then don’t just blame the company.

In any case, the tragic event in Belgium shows that we urgently need regulation that helps mitigate the ethical risks raised by these technologies and that organizes legal responsibility. As my Belgian colleagues and I have argued, we also need campaigns to make people aware of the dangers. Talking to chatbots can be fun. But we must do everything we can to make them more ethical and to protect vulnerable people against their potentially harmful, even lethal, effects. Not only guns but also chatbots can kill.


Written by Mark Coeckelbergh

Mark Coeckelbergh is a philosopher, author & speaker. He lives and works in Vienna, where he’s Professor of Philosophy and writes about tech ethics & politics.
