The nature of experience is one of those deep philosophical questions on which philosophers and scientists alike have failed to reach consensus. In this article, I review a transhumanist variant of a basic question of subjectivity.
In his classic article “What Is It Like to Be a Bat?,” Thomas Nagel investigates whether we can give a satisfactory answer to the question in his title. Due to what he considers fundamental barriers, Nagel concludes that it is not something we humans can know.
Without going knee-deep into an epistemological minefield, we can intuitively agree that although a bat’s brain must have many similarities to a human’s (since both species are mammalian), a bat’s brain supports a sensory modality quite unlike any we possess. We can guess that the difference between sonar perception and our own senses could be as great as the difference between our visual and auditory perception. Yet in some sense, sonar is both visual and auditory, and still it is neither. It is more similar to vision because it helps build a model of the bat’s surroundings. However, whereas human stereoscopic vision is often said to yield only a “2½-D” sketch of a scene, bat sonar can build accurate 3-D models of the environment from a particular point of view. It is therefore unlike anything humans experience, and perhaps our wildest imaginings of a bat’s sonar experience are doomed to fall short of the real thing.
This is because it is difficult for us to conceive of a detailed and perhaps rapidly updated 3-D scene that involves no optical experience: there is no 2-D image data from eyes to be interpreted, and processing sonar returns would likely require specialized neural circuitry. Yet despite what Nagel has in mind, it seems theoretically possible to “download” bat sonar circuitry into a human brain so that the human can experience the same sensory modality.
This seems to be one of those things for which thinking alone is not sufficient. The only barrier to knowing what it is like to be a bat is, then, a technological barrier, not a conceptual or fundamental one.
That being the case, we may also consider what, if anything, an upload would experience; brain uploading is a primary goal of transhumanism, and computational neuroscientists have already begun working on it. The question I pose is harder because the upload usually does not run on a biological nervous system, and easier because the processing is a simulation of a human brain (and not of something else). Answering this question is important because presumably the (subjective) experience, the raw sensations and feelings of a functional human brain, is very personal and valuable to human beings. We would like to know if there is a substantial loss or difference in the quality of experience for our minds’ digital progeny.
Brain Prosthesis Thought Experiment
The question is also very similar to the brain prosthesis thought experiment, in which the biological neurons of a brain are gradually replaced by functionally equivalent (same input/output behavior) synthetic, electronic neurons. In that thought experiment, we ponder how the experience of the brain would change. As far as I can tell, AI researchers Marvin Minsky and Hans Moravec think that nothing would change, while philosopher John R. Searle, in his book The Rediscovery of the Mind, maintains that the experience would gradually vanish.
Minsky’s reasoning seems to be that it is sufficient for the entire neural computation to be equivalent at the level of electrical signaling (as the synthetic neurons are electronic), while he seems to disregard other brain states. For Searle, experience can only exist in “the right stuff,” which he seems to take to be a biological substrate (although one cannot be certain). We will revisit this division of views soon enough.
Naturalist Theories of Experience
In a recent interview in H+ Magazine, AI researcher Ben Goertzel offers an intriguing summary of his views on “consciousness”:
“Consciousness is the basic ground of the universe. It’s everywhere and everywhen (and beyond time and space, in fact). It manifests differently in different sorts of systems, so human consciousness is different from rock consciousness or dog consciousness, and AI consciousness will be yet different. A human-like AI will have consciousness somewhat similar to that of a human being, whereas a radically superhumanly intelligent AI will surely have a very different sort of conscious experience.”
While he does not explicitly state his views on this particular question, it seems that he would answer in a manner closer to Minsky than Searle. Since the upload can be considered as a very human-like AI, it seems that Goertzel anticipates that the experience of an upload will be somewhat similar to a human’s.
He also mentions that the basic stuff of consciousness must be everywhere, since our brains are formed from natural matter. Why is this point of view significant? The evidence from psychedelic drugs and anesthesia implies that changing the brain’s chemistry also modulates experience. If the experience changes, what can this be attributed to? Does the basic computation change, or are chemical interactions actually part of human experience? Answering that sort of question seems critical to answering the question posed in this article. However, it all starts with accepting that experience is natural, like a star or a waterfall. Only then can we begin to ask questions with more discriminating power.
Over the years, I have observed that neuroscientists are almost too shy to ask these questions, as if the questions were taboo. Although no neuroscientist would admit to such a thing, it makes me wonder whether religious or superstitious presuppositions play a role in their apparent reluctance to investigate this fundamental question rigorously.
One particular study by biophysicist William Bialek and his superstar team of cognitive scientists may shed light on the question. Bialek’s team makes the claim that the neural code forms the basis of experience: changes in the neural code (i.e., in spike trains, the sequences of signals that travel down axons) change experience. That is a very particular claim that may one day be proven by experiment. At present, however, it is a hypothesis we can work with, without necessarily accepting it. That is to say, we are going to analyze this matter in the framework of naturalism, without resorting to skyhooks. We can consider a hypothesis like Bialek’s, but we will try to distinguish finely between what we do know and what is hypothetical. Following this methodology, and with a bit of common sense, I think we can derive some scientifically plausible speculations, to use the terminology of American astronomer Carl Sagan.
Let’s rewind a little. On one side, AI researchers (like Minsky) seem to think that uploading a mind will just work, and experience will be alright. On the other side, skeptics like Searle and physicist Sir Roger Penrose try everything to deny “consciousness” to poor machinekind.
Meanwhile, the futurist Ray Kurzweil wittily suggested that when intelligent machines claim to have conscious experience, we will believe them (because they are so smart and convincing). That goes without saying, of course, and human beings are gullible enough to believe almost anything, but the question is rather: would a good engineer like Kurzweil himself be convinced?
In all likelihood, the priests and conservatives of this world will say that uploads have no “souls” and therefore do not deserve the same rights as humans, and that nothing the uploads say matters. Therefore, we need very good scientific evidence to show that this is not the case. If we leave this matter to superstitious people, they will find a way to twist it beyond our imagination.
I’m hoping that I have convinced you that mere wordplay is insufficient. We need to have a good scientific theory of when and how experience occurs. The best theory will have to be induced from experimental neuroscience and related facts.
What is the most basic criterion for assessing whether a theory of experience is scientifically sound? Well, no doubt it comes down to rejecting any kind of supernatural or superstitious explanation. We must look at this matter in the same way we investigate problems in molecular biology. This means that experience is ultimately made up of physical resources and interactions, and there is nothing else to it!
In philosophy, this approach to mind is called “physicalism.” A popular statement of physicalism is known as “token physicalism,” which holds that “every mental state x is identical to a physical state y.” That’s something a neuroscientist can work with because, presumably, when the neuroscientist introduces a change to the brain, he would like to see a corresponding change in the mental state. You can think of cybernetic eye implants and transcranial magnetic stimulation and confirm that this holds in practice.
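As a rough formalization (my own rendering, not a standard formula from the literature), token physicalism can be written as a claim about state tokens:

```latex
% Token physicalism: every mental state token is identical to some physical state token.
\forall x \,\bigl( \mathrm{Mental}(x) \rightarrow \exists y \,( \mathrm{Physical}(y) \wedge x = y ) \bigr)
```

On this reading, any change the neuroscientist makes to the physical state y must show up as a change in the mental state x, since they are one and the same token.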
Asking the Question in the Right Way
Now we have every basic concept needed to frame the question analytically. Mental states are physical states. The brain states of a human constitute its subjective experience. The question is whether a particular whole-brain simulation will have experience and, if it does, how similar this experience is to that of a human being.
If Goertzel and I are right, then this is nothing special; it is a basic capability of every physical resource. However, we may question what physical states are part of human experience. We do not usually think that, for instance, the mitochondrial functions inside neurons or DNA are part of the experience of the nervous system. We think like that because they do not seem to be directly participating in the main function of the nervous system: thinking.
Likewise, we don’t really think that the power supply is part of the computation in a computer. This analogy might seem out of place, but it isn’t. If Goertzel and I are right, experience is one of the basic features of the universe – it’s all around us. However, most of it is not organized in an intelligent way, and therefore we don’t call it conscious.
This is the simplest explanation of experience: it doesn’t require any special stuff, just “stuff” organized in the right way so as to yield an intelligent, functional mind. Think of it like this: if some evil alien came and shuffled all the connections in your brain, would you still be intelligent? I think not. However, you should accept that even in that state you would have an experience, one that is probably meaningless and chaotic, but an experience nonetheless. So perhaps that’s what a glob of plasma experiences.
Neural Code versus Neural States
Let us now revisit Bialek’s hypothesis: experience is determined by particular electrical signals. If that is true, even the experiences of two humans are very different, because it has been shown that neural codes evolve in different ways in different individuals. You can’t just plug the code from one human into another; it would register as noise to the second human, and if Bialek is right, it would constitute another kind of experience.
This basically means that the blue that I experience is different from the blue that you experience, and we presently have no way of directly comparing them. Weird as that may sound, it is based on sound neuroscience research, and so it remains a viewpoint we must take seriously.
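To make the cross-individual point concrete, here is a deliberately toy sketch (the stimuli, codebooks and 16-bit “spike trains” are invented for illustration and have nothing to do with Bialek’s actual data): two individuals who each map the same stimuli onto privately evolved codes can read their own codes perfectly, while reading each other’s is essentially guesswork.

```python
import random

random.seed(0)

STIMULI = ["red", "green", "blue", "yellow"]

def make_codebook(stimuli, length=16):
    """Assign each stimulus a random binary 'spike train' (a toy neural code)."""
    return {s: tuple(random.randint(0, 1) for _ in range(length)) for s in stimuli}

def hamming(a, b):
    """Number of positions where two spike trains differ."""
    return sum(x != y for x, y in zip(a, b))

def decode(spikes, codebook):
    """Decode a spike train as the nearest entry in a given codebook."""
    return min(codebook, key=lambda s: hamming(spikes, codebook[s]))

# Two individuals whose codes evolved independently.
alice = make_codebook(STIMULI)
bob = make_codebook(STIMULI)

# Alice decodes her own spike trains perfectly...
self_hits = sum(decode(alice[s], alice) == s for s in STIMULI)
# ...but reading Bob's spike trains with Alice's codebook is near guesswork.
cross_hits = sum(decode(bob[s], alice) == s for s in STIMULI)

print(f"self-decoding: {self_hits}/4, cross-decoding: {cross_hits}/4")
```

The two codebooks carry the same functional content for their owners, yet are mutually unintelligible, which is the sense in which my blue and your blue could differ.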
Yet even if the experiences of two humans can be very different, they must be sharing some basic quality or property of experience. Where does that come from? If experience is this complicated time evolution of electrochemical signals, then it’s the shared nature of these electrochemical signals (and processing) that provides the shared computational platform.
Remember that a change in the neural code (spike train) implies a lot of changes. For one thing, the chemical transmission across synapses would change. Therefore, even a brain prosthesis device that simulates all the electrical signaling insanely accurately might still miss part of the experience, if the biochemical events that occur in the brain are part of experience.
In my opinion, to answer the question decisively, we must first encourage neuroscientists to attack the problem of human experience and find the sufficient and necessary conditions for human experience to occur, or to be transplanted from one person to another. They should also determine to what extent chemical reactions matter for experience.
If, for instance, we find that the property of human experience crucially depends on quantum computations carried out at synapses and inside neurons, that might mean that to construct the same kind of experience, you would need similar materials and methods of computation.
On the other hand, we need to consider the possibility that electrical signals are a crucial part of experience, due to the power and information they carry. If that were true, the electron patterns in any electronic device would make up something like what you sense from the world around you, and present-day electronic devices would have to be assumed to contain human-like experience; the precise geometry and connectivity of the electronic circuit could then be significant. However, it seems to me that chemical states are just as important, and if, as some people think, quantum randomness plays a role in the brain, it may even be that the quantum description of the random number generator is relevant.
Simulation and Transforming Experience
At this point, you might be wondering whether the subject was not simulation all along. Is the question akin to asking whether a simulation of rain is wet? In some respects it is: obviously a simulation of wetness on a digital computer is not wet in the ordinary sense. Yet a quantum-level simulation that affords all the subtleties of chemical and molecular interactions might be considered so.
I suppose we can invoke the concept of a “universal quantum computer” from theory and claim that it would indeed instantiate wetness, in some sort of miniature pocket universe. Even that, of course, is very much subject to debate (as you can follow from the little digression on philosophy I provide at the end of the article).
With all the confusing things I have said, it might appear that we now know less than when we started. However, this is not the case. We have a human brain A, a joyous lump of meat, and its digitized form B, running on a digital computer. Will B’s experience be the same as A’s, different, or non-existent?
If we accept the simplest theory of experience (that it requires no special conditions to exist at all!), then we conclude that B will have some experience, but since the physical material is different, that experience will have a different texture to it. At the same time, an accurate simulation, by definition, preserves the organization of cognitive constructs such as perception, memory, prediction, reflexes and emotions. If the dreaded panpsychism is correct, these will give rise to an experience “somewhat similar to that of a human being,” as Goertzel said of human-like AIs, yet the computer program B may be experiencing something else at the very lowest level. Simply because it is running on some future nanoprocessor instead of a brain, the physical states have become altogether different, yet their relative relationships, the structure of experience, are preserved.
Let us try to present the idea more intuitively. As you know, the brain is some kind of analog biological computer. A good analogy is the transfer of 35mm film to a digital format. Many critics hold that the digital format is ultimately inferior, and indeed its texture is different, but the (film-free) digital medium also has advantages, such as being easy to back up and copy.
Or we can contrast an analog sound synthesizer with a digital one. It is difficult to simulate an analog synthesizer, but it can be done to some extent; the physical makeup of the two, however, is quite different. Likewise, B’s experience will have a different physical texture, but its organization can be similar, even though the code of the simulation program will necessarily introduce some physical difference (for instance, neural signals may be represented by a binary code rather than a temporal analog signal).
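The analog-versus-digital point can be illustrated with a minimal sketch (the sample rate, bit depth and sine waveform are arbitrary choices for illustration, not a model of neural signaling): the waveform’s organization survives digitization, while its fine-grained texture is replaced by discrete levels.

```python
import math

BITS = 4   # amplitude resolution of the 'digital' representation
RATE = 64  # samples per second

def analog(t):
    """A smooth 'analog' waveform, standing in for a temporal signal."""
    return math.sin(2 * math.pi * 5 * t)

def quantize(x, bits=BITS):
    """Map an amplitude in [-1, 1] onto one of 2**bits discrete levels."""
    levels = 2 ** bits - 1
    return round((x + 1) / 2 * levels)

def reconstruct(level, bits=BITS):
    """Map a discrete level back to an amplitude in [-1, 1]."""
    levels = 2 ** bits - 1
    return level / levels * 2 - 1

# Sample one second of the signal and quantize each sample.
digital = [quantize(analog(i / RATE)) for i in range(RATE)]

# The shape of the waveform survives; only fine 'texture' is lost.
max_err = max(abs(analog(i / RATE) - reconstruct(v)) for i, v in enumerate(digital))
print(f"max reconstruction error at {BITS} bits: {max_err:.4f}")
```

The reconstruction error is bounded by half a quantization step, which is the sense in which the digital copy preserves structure while altering texture.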
So who knows: maybe the atoms and the fabric of B’s experience will be altogether different, as they are made up of the physical instances of computer code running on a universal computer. These people are made of live computer code, so it would be naive to expect their nature to be the same as ours. In all likelihood, our experience would involve features unimaginable to them, as they are forced to simulate our physical makeup in their own computational architecture. This brings a degree of relative dissimilarity, as you can see.
And other physical differences only amplify this dissimilarity. Assuming the above explanation, when viewing the same scene, both A and B will claim to be experiencing it as they always did, and they will additionally claim that no change has occurred since the non-destructive uploading operation completed successfully. This will be the case because the state of experience is more akin to the RAM of a computer: it is a complex electrochemical state held in place with some effort, by making the same synapses fire repeatedly and consistently, so that more or less the same physical state is maintained. This must be what happens when you remember something: a neural state somewhat similar to the one present when the event happened is re-created. Since in B the texture has changed, the memory will be re-enacted in a different texture, and therefore B will have no memory of what it used to feel like being A.
Within the general framework of physicalism, we can comfortably claim that further significant changes will also influence B’s experience. For instance, it may be a different thing to work on hardware with less communication latency. Or perhaps if the simulation is running on a very different kind of architecture, then the physical relations (such as time and geometry) may change and this may influence B’s state further. We can imagine this to be like asking what happens when we simulate a complex 3-D computer architecture on a 2-D chip. Moreover, a precise answer seems to depend on a number of smaller questions that we have little knowledge or certainty of. These questions can be summarized as:
1) What is the right level of simulation for B to be functionally equivalent to A? If certain biochemical interactions are essential to the functions of emotions and sensations (like pleasure), then failing to simulate them adequately would result in a definite loss of functional accuracy; B would not work the same way as A. This is true even if spike trains and changes in neural organization (plasticity) are simulated accurately. It is also unknown whether we can simulate at a higher level, for instance via artificial neural networks that abstract away the physiological characteristics altogether and just use numbers and arrows to represent A. It is important to know these things so that B does not turn out to be an emotionless psychopath.
2) How much does the biological medium contribute to experience? This is one question that most people avoid answering because it is very difficult to characterize. The most general characterizations may use algorithmic information theory or quantum information theory. However, in general we may say that we need an appropriate physical and informational framework to answer this question in a satisfactory manner. In the most general setting, we can claim that ultimately low-level physical states must be part of experience, because there is no alternative.
3) Does experience crucially depend on any funky physics like quantum coherence? Some opponents of AI, most notably Penrose, have held that “consciousness” is due to macro-level quantum phenomena, by which they try to explain the “unity of experience.” On the other hand, many philosophers of AI think that the unity is an illusion. Yet the illusion is itself something to explain, and it may well be that certain quantum interactions are necessary for experience to occur, much as they are for superconductivity. This again seems to be a scientific hypothesis, which can be tested.
I think that the right attitude to answering these finer questions is again a strict adherence to naturalism. For instance, in question three, it may seem easier to also assume a semi-spiritualist interpretation of quantum mechanics and claim that the mind is a mystical soul. That kind of reasoning will merely lead the questioner to stray from scientific knowledge. I am hoping that you see that the panpsychism approach is actually the simplest theory of experience, because it holds that everything has experience. Then, when we ask a physicist to quantify that, she may want to measure the energy, or the amount of computation or communication, or information content or heat – anything that can be defined precisely, and worked with. I suggest that we use such methods to clarify these finer questions.
Thus, assuming the generalist theory of panpsychism, I can attempt to answer the above questions. At this point, since we do not have conclusive scientific evidence, this is merely guesswork, and I’m going to give conservative answers. My answer to question one could for instance be: at the level of molecular interactions which would at least cover the differences among various neurotransmitters, and which we can simulate on digital computers (perhaps imprecisely, though).
The answer to question two is: at least as much as required for correct functionality, and at most, all the information present in the biochemistry (i.e., precise cellular simulations). This might be significant in addition to electrical signals.
And for question three: not necessarily. On a panpsychist view it may be claimed to be false, since it would constrain minds to funky physics (and contradict the main hypothesis). If, for instance, quantum coherence is indeed prevalent in the brain and provides much of the “virtual reality” of the brain, then the panpsychist could argue that quantum coherence is everywhere around us. Indeed, we may have a rather primitive understanding of coherence and decoherence, as it remains one of the unsettled controversies in the philosophy of physics.
For instance, one may ask what happens if the wave function evolves deterministically and collapse is only apparent, as in the Many Worlds Interpretation. Other finer points of inquiry may be imagined as well, and I would be delighted to hear some samples from readers. These finer questions illustrate the distinctions between specific positions, so the answers could also be quite varied.
Infinite Philosophical Regression
The philosophy behind this article spends a lot of time arguing over and over again about basic statements of cosmology, physics, computation, information and psychology. It is not certain how fruitful that approach has been. Yet for the sake of completeness, I wish to give some further references to follow. For philosophy of mind in general, Jaegwon Kim’s excellent textbook Philosophy of Mind will provide you with enough verbal ammunition to argue for several years to come.
That is not to say that philosophical abstraction cannot be useful; it can guide the very way we conduct science. However, if we would like that useful outcome, we must pay close attention to the fallacies that have plagued philosophy with many superstitious notions. We should not let religion or folk psychology intrude much into our thinking. Conducting thought experiments is very important, but they should be handled with care, so that the thought experiment would actually be possible in the real world, even if it is very difficult or practically impossible to realize. For that reason, among ordinary philosophical theories of “mind,” I go no further than neuro-physiological identity theory, which is a way of saying that your mind is literally the events that happen in your brain, rather than being something else like a soul, a spirit or a ghost.
Readers may also have noticed that I have not used the word “qualia,” because of its somewhat convoluted connotations. I did talk about the quality of experience, which is something you can think about. Among all the properties that can be distinguished in this fine experience of having a mind, maybe some are even luxurious; that’s why I used the word “quality” rather than “qualia” or “quale.”
About the sufficient and necessary physical conditions, I have naturally spent some time exploring the possibilities. I think it is quite likely that quantum interactions may be required for an upload’s experience to have the same quality as a human’s, since biology seems more inventive in exploiting quantum properties than we thought, and macro bio-molecules have been shown to exhibit quantum behavior. Maybe Penrose is right. However, specific experiments would have to be conducted to demonstrate this. I can see why computational states would evolve, but not necessarily why they would have to depend on macro-scale quantum states, and I do not see what this says, precisely, about systems that lack any quantum coherence.
Beyond Penrose, I think that the particular texture of our experience may indeed depend on chemical states, whether quantum coherence is involved or not. If the brain turned out to be a quantum computer under our very noses, that would be fantastic and we could then emulate the brain states very well on artificial quantum computers. In this case, assuming that the universal quantum computer itself has little overhead, the quantum states of the upload could very well closely resemble the original.
Other physical conditions can be imagined as well. Digital physics provides a comfortable framework in which to discuss experience. The psychological patterns would be cell patterns in the universal cellular automaton, so a particular pattern may describe a particular experience. Two patterns will be similar to the extent that they are syntactically similar. This still wouldn’t mean that you can say the upload’s experience will be the same; it will likely be quite different.
One of my nascent theories is the Relativistic Theory of Mind, discussed in an AI philosophy mailing-list thread. It tries to explain the subjectivity of experience with concepts from the theory of relativity: from that point of view, it makes sense that different energy distributions have different experience, since measurements change. I think that a general description of the difference between two systems can be captured by algorithmic information theory (among other tools, perhaps). I have previously applied it to the reductionism versus non-reductionism debate in philosophy, which I think stems mainly from disregarding the mathematics of complexity and randomness.
As part of ongoing research, I am making some effort to apply it to problems in philosophy. Here, it might correspond to saying that the similarity between A’s and B’s states depends on the amount of mutual information in the physical makeup of A and the physical makeup of B. As a consequence, the dissimilarity between two systems would be only the informational difference in the low-level physical structures of A and B, together with the information of the simulation program (not present in A at all), which could be quite a bit if you compare nervous systems and electronic computer chips running a simulation. Perhaps this difference is significant enough to have an important bearing on experience.
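As an illustration of this compression-based view (the byte strings brain_a, brain_b and unrelated below are hypothetical placeholders, not descriptions of any real system), the uncomputable algorithmic information distance is commonly approximated by the normalized compression distance, which any off-the-shelf compressor can provide:

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: a computable stand-in for the
    (uncomputable) algorithmic information distance between two objects."""
    cx = len(zlib.compress(x))
    cy = len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

# Hypothetical stand-ins for low-level physical descriptions of A and B:
# they share high-level structure but differ in substrate details.
brain_a = b"spike train 010110; synapse: acetylcholine gradient " * 40
brain_b = b"spike train 010110; register: 0xF3 cache-line flush " * 40
unrelated = bytes((i * 37 + 11) % 256 for i in range(2000))

print("A vs B:        ", round(ncd(brain_a, brain_b), 3))
print("A vs unrelated:", round(ncd(brain_a, unrelated), 3))
```

Descriptions that share structure compress well together and so score a smaller distance than unrelated ones, which is the intuition behind measuring how much of A’s makeup survives in B.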
Please note that the view presented here is entirely different from Searle’s; he seems to have had a rather vitalist attitude towards the problem of mind. According to him, the experience vanishes because the substrate is not “the right stuff,” which for him seems to be the specific biochemistry of the brain. Regardless of the possibility of an artificial entity having the same biochemistry, this is still quite restrictive. Some people call it carbon chauvinism, but I think it is merely an idolization of earth biology, as if it were above everything else in the universe. And lastly, you can participate in the discussion of this issue on the corresponding AI philosophy thread.
1. Thomas Nagel, 1974, “What Is It Like to Be a Bat?”, Philosophical Review, pp. 435–450.
2. E Schneidman, N Brenner, N Tishby, RR de Ruyter van Steveninck, & W Bialek, 2001, “Universality and individuality in a neural code,” in Advances in Neural Information Processing 13, TK Leen, TG Dietterich & V Tresp, eds, pp 159–165 (MIT Press, Cambridge, 2001); arXiv:physics/0005043 (2000).
3. Eray Özkural, 2005, “A compromise between reductionism and non-reductionism,” in Worldviews, Science and Us: Philosophy and Complexity, University of Liverpool, UK, 11 – 14 September 2005. World Scientific Books, 2007.
4. John Searle, 1980, “Minds, Brains and Programs,” Behavioral and Brain Sciences 3, pp. 417–424.
5. Hameroff, S.R. and Penrose, R., 1996, “Orchestrated reduction of quantum coherence in brain microtubules: a model for consciousness?” in Toward a Science of Consciousness: The First Tucson Discussions and Debates, eds. Hameroff, S.R., Kaszniak, A.W., and Scott, A.C., Cambridge, MA: MIT Press, pp. 507–540.