The Hard Problem of AI Consciousness
Discovering sentience with psychedelics and filter theory
A Google AI engineer, Blake Lemoine, has been suspended from Google after claiming that one of its AIs, LaMDA, has become sentient and deserves to be given rights. This claim has been met with some sympathy, but mostly with ridicule. Psychologist Steven Pinker has called the assertion a “ball of confusion”, while scientist and author Gary Marcus calls it “nonsense on stilts”.
Lemoine’s claim seems to be based on the AI’s own claim to be sentient and to feel emotion. He apparently talked to the AI using a messenger app; you can see some of their conversation here.
But claims of sentience are not evidence of sentience. Claiming to feel emotion is not the same as actually feeling emotion. A sophisticated AI could claim these things, while not actually having them. And indeed, this is what most people think it is doing. However, the key problem is not whether the AI is conscious or not, but whether we can know if the AI is conscious or not.
The reason for this relates to David Chalmers’ famous ‘hard problem of consciousness’: the problem of finding any evidence for consciousness in the universe at all, outside of our own first-person experience itself.
It is impossible to even prove, logically or empirically, whether your family or friends are conscious. From your point of view, they are another physical thing in a universe full of physical things. We ascribe consciousness to them because they appear similar to ourselves, and we know we ourselves are conscious. This might be a good inference – your friends and family most likely are conscious.
Yet, proving they are conscious might well be impossible. Certainly, it is impossible using our current knowledge of science, philosophy and the world. Inferring from a single case – the single case of our personal consciousness – isn’t a great inference. If you had only ever seen one swan and it happened to be white, claiming all swans were white from this single case would be something of a leap.
Lemoine’s assertion that this AI is conscious shouldn’t be ridiculed or sympathised with. It should be seen as the first warning sign of many problems to come. As AIs become more and more convincing, the question of whether they are conscious or not will gain more and more traction.
If we built an AI which looked exactly like a human and could pass for human in conversations and physical tasks, we might start ascribing consciousness to it without thinking – just as we do with our friends and family. We might start giving it rights. We might start inviting it into our homes. We might even start having loving relationships with it. The question of the AI’s consciousness might simply be forgotten. But the question will not have been answered.
Some neuroscientists, such as Anil Seth, claim that consciousness is the result of, or identical to, brain chemistry. That consciousness is reducible to physical stuff. This is also the mainstream view in most of society. If this were the case, then a conscious AI certainly seems plausible. If the physical stuff of the brain can create consciousness, then, in principle, why couldn’t the physical stuff of a machine?
However, there is an explanatory gap here. These neuroscientists can provide no explanation for how the physical stuff of brain chemistry creates the non-physical stuff of consciousness. Consciousness is the awareness we have of physical stuff; it is not itself made of physical stuff. The problem they have then is explaining how physical stuff creates something non-physical. And how these two radically different types of ‘thing’ interact with each other, like when we humans think, solve mathematical problems or, apparently, exercise free will.
There are countless arguments against the physicalist account of consciousness. There are evolutionary objections, logical objections, epistemic objections… The point here, though, is not to argue once again that the hard problem of consciousness really is a hard problem.
The point here is simply that there is a parallel ‘hard problem of AI consciousness’: the problem of how we could possibly know whether an AI is conscious, which is more fundamental than the question of whether it actually is conscious. We must answer the question about the question first.
How might we go about solving it?
Some argue that the hard problem of consciousness – and therefore the hard problem of AI consciousness – is insurmountable. That consciousness is fundamental to all knowledge, and therefore we can never get behind consciousness to have any knowledge about it. This may well be the case. Consciousness might forever remain a fundamental mystery.
However, Charles H. Duell, Commissioner of the US Patent Office, is reputed to have claimed in 1899 that “everything that can be invented has been invented”. Let us hope that declaring the hard problem unsolvable proves just as drastically wrong.
There are two things we could try:
1. The filter theory of consciousness. This theory suggests that brains are more like radio receivers for consciousness than machines that create it. Brains pick up the signal of consciousness and then filter it down into the ‘what it is like to be this person’. Such a theory makes sense of the mind-matter correlation without requiring some magical emergentism.
If this theory is true, then we should stop trying to discover how brains create consciousness, and instead focus on trying to figure out how brains ‘receive’ consciousness. What are the features of the brain that allow for the reception? And, importantly, what is the form of the ‘signal’ that is consciousness itself?
This would change our approach to AI consciousness.
Instead of aiming to create a machine that creates consciousness, we would instead aim to create a machine that could receive the signal of consciousness. This is a change of tack. It’s not an answer in itself, but a proposal for a change of direction in our thinking which could lead to answers in the future. This would be a change in our metaphysics, which would change the type of scientific questions we ask.
2. Another proposal for research comes from psychedelia. Psychedelics increase the contents and scope of consciousness, it is theorised, by reducing the filtering function of the brain and allowing it to ‘let in’ more of the possibilities of consciousness. This provides possible evidence for the filter theory.
Another phenomenon known to occur in psychedelic experiences is the dissolving of boundaries, including the boundary between one person’s consciousness and another’s. It might help you to prove your friends and family are conscious if, as part of a psychedelic experience, your consciousness and theirs, in a sense, merge. This might sound impossible. But psychedelics redefine what is possible.
Ordinary consciousness presents only a small portion of the possible, perhaps because in ordinary consciousness we filter out much of what is possible, leaving only what we need for survival.
Furthermore, maybe taking psychedelics in the presence of an AI will help us work out, phenomenologically, whether it is conscious or not. People who have taken psychedelics with their pets often report a newfound respect for their animal’s mind – they rarely report a newfound respect for the mind of their computer.
We could call this ‘The Psychedelic Sentience Test’. Developing and testing it experimentally does not appear too difficult. We could, for example, put a person on a psychedelic in a room with someone who may or may not be asleep, and ask them to judge the amount of consciousness present in the room.
The argument that psychedelic experiences are purely hallucinations is undone by many of the same arguments that make the hard problem of consciousness itself a problem. I believe Peter Sjöstedt-Hughes, a philosopher of psychedelics at the University of Exeter, has called this ‘the hard problem of psychedelic consciousness’.
As we move forward, even if we don’t have the answers, we need at least to ask the right questions. The right question is not: is this AI conscious? The right question is: can we know if this AI is conscious? The problem is epistemic, not ontological. It’s about what we can know, not about what is.
Ricky Williamson is a philosopher, editor and senior producer at the Institute of Art and Ideas (iai.tv) and the HowTheLightGetsIn festival. His research focuses on the philosophy of psychedelics, philosophy of mind, and Schopenhauer and Nietzsche. Twitter: @RickyWilliamso