The Heart Is Not Enough: How the Controversy about a Chat Bot Reveals the Shaky Foundations of Moral Status

Mark Coeckelbergh
Jun 16, 2022


A sentient chat bot? The controversy about LaMDA

Last weekend it was reported that Google engineer Blake Lemoine had been put on leave after saying that LaMDA, the company’s language model for dialogue applications, had become sentient. The chat bot reportedly had conversations in which it asked not to be turned off and said that it was aware of its own existence and was a person. Earlier, Lemoine had posted his views on Medium, saying that LaMDA “spoke from the heart” and criticizing Google for refusing to recognize the technology as a person. He also referred to his Christian beliefs. Lemoine, it seems, also spoke from the heart. A few days later MIT professor and notorious transhumanist Max Tegmark said in an interview that Amazon’s Alexa could become sentient.

The reaction and hype — not unwelcome and perhaps even intended by both Lemoine and Google — were predictable. While some defended him, most social media users, including many AI experts and philosophers, ridiculed Mr Lemoine and insisted that the technology is neither sentient nor conscious. Gary Marcus, author of Rebooting AI, for example, called it “nonsense on stilts” in a tweet. I sympathize. Like most of us, I do not believe that algorithms and models are conscious entities, and I doubt that machines could ever be sentient or conscious. As far as my personal opinion goes, I agree with the critics: machines are just machines, things.

Evidence for what? A philosophical abyss opens up

But is this just a belief, or is it correct? The inconvenient truth, revealed by these discussions, is that we do not really know. We do not really know because we do not know what sentience or consciousness is. We can trick people into believing that a chat bot is a person. We can run a Turing test: we can try to fool a human into thinking that the machine is another person. We can fool ourselves. Arguably this is what happened in the case of Lemoine. As I argued in an article, AI science is a bit like magic in this sense. But we still do not have criteria.

Science does not provide such criteria: while neuroscientists and cognitive scientists do their best, there are no accepted scientific definitions of sentience or consciousness. The debate is currently polarized between views that render consciousness as mysterious as that other famous concept, the soul, and views that claim to explain consciousness fully in terms of the neurology of the brain. Neither position is satisfactory, and neither helps in the case of AI. Calling for evidence sounds good, but the question is: evidence for what, exactly?

Traditional philosophical theories fare no better: they fail miserably in the light of the new developments in AI. Like Descartes in the 17th century, we tend to distinguish humans from non-humans by their creative and appropriate linguistic responses. But today machines can do that — at least sometimes and for a short period of time. Does that make them human? It seems that in the 21st century we are still looking for the soul, just as Descartes and his contemporaries did.

Frans Hals — Portrait of René Descartes (Public domain, Wikimedia Commons)

Using more contemporary philosophy, however, we can take a more critical perspective by pointing to the more-than-instrumental role of language in thinking about moral status. What happens when we ascribe moral status to anything or anyone (and indeed the big question here is whether it is a who or a what, as David Gunkel has put it) is that we already use language to pre-format whatever or whoever it is whose moral status we are trying to define. Or as I put it in philosophical jargon in my book Growing Moral Relations more than a decade ago: language is a transcendental condition for moral status ascription.

My claim is not that the moral status of a technology such as LaMDA depends on the machine’s use of language, but that it is made possible and shaped by our language use: the language we use as humans to talk about the moral status of humans and other entities. This includes the language use of Lemoine and also that of his critics. By talking about the machine in a particular way, for example by referring to LaMDA just now as a “machine,” we already pre-configure its status. Lemoine, using the language of “persons” as he does, tries to make the machine into a person. The way we ascribe moral status thus seems to be crucially shaped by, and dependent on, the language we use.

That is a troubling claim. Does it make moral status something relative or subjective? Are there hard, objective criteria for consciousness apart from appearance? And is morality itself relative, given that it seems to depend on perception or social agreement? Is moral language just politics? Is science, including the science of AI, also less epistemologically stable than we like to believe? These are huge philosophical questions. There are no quick answers, if there are satisfactory answers at all. But my point is that we should ask these questions, not least in controversies about AI.

Beyond naive and dogmatic thinking about AI, consciousness, and moral status

And this critical approach is lacking in the current hype and controversy. Both Lemoine and his critics are doing neither critical philosophy nor good science. They simply assert that the chat bot is sentient or that it is not. Their positions are not only polarized but also what philosophers call dogmatic: they assert the presence or absence of sentience, consciousness, or soul. That is not only philosophically unsatisfactory but also unhelpful for anyone who has to deal with these technologies — engineers, tech companies, and, sooner rather than later, all of us.

In order to move this debate forward and avoid the philosophical naivety or religious dogmatism of people like Lemoine (but also of most of his critics), we need not only more scientific work but also a more sophisticated philosophical discussion about the concepts we use when we talk about current developments in AI and the future of technology. Preferably a discussion that, in addition to drawing on contemporary philosophy, builds on ongoing work on the moral status of machines in the fields of robot ethics and AI ethics. Most commentators write as if these questions about the moral status of machines had never been asked before. But as the LaMDA case shows, these academic fields are becoming increasingly relevant and need more support.

The case of Lemoine also points to the urgent need to educate AI engineers and scientists in philosophy and fields like AI ethics. Otherwise, they will continue to speak from the heart. And while this is harmless behind the closed doors of the tech lab (at least when it is not part of a techno-Christian mission to spread the word), and while empathy is always worth cultivating, when it comes to addressing the big philosophical questions about consciousness and shaping the future of humanity, the heart is not enough.


Mark Coeckelbergh is a philosopher, author & speaker. He lives and works in Vienna, where he’s Professor of Philosophy and writes about tech ethics & politics.