Brain States and the Nature of Experience


The nature of experience is one of those deep philosophical questions on which philosophers and scientists alike have been unable to reach a consensus. In this article, I review a transhumanist variant of a basic question of subjectivity.

In his classic article “What Is it Like to Be a Bat?,” Thomas Nagel investigates whether we can give a satisfactory answer to the question in his title. Due to what he considers fundamental barriers, Nagel concludes that it is not something we humans can know [1].

Without going knee-deep into an epistemological minefield, we can intuitively agree that although a bat’s brain must have many similarities to a human’s (since both species are mammalian), a bat’s brain supports a sensory modality quite unlike any we possess. We can guess that the difference between sonar perception and our other senses could be as great as the difference between our visual and auditory perception. Yet in some sense, sonar is both visual and auditory, and still it is neither. It is more similar to vision because it helps build a model of the bat’s surroundings. However, instead of stereoscopic vision, bat sonar can build accurate 3-D models of the environment from a particular point of view, in contrast with normal human vision, which is sometimes said to yield only a “2-1/2D” sketch. Sonar is therefore unlike anything humans experience, and perhaps our wildest imaginings of a bat’s sonar experience are doomed to fall short of the real thing.

This is because it is difficult for us to imagine the experience of a detailed, perhaps rapidly updated 3-D scene that involves no optical experience at all, since there is no 2-D image data from eyes to be interpreted; it would likely require specialized neural circuitry. Yet despite what Nagel has in mind, it seems theoretically possible to “download” bat sonar circuitry into a human brain so that the human can experience the same sensory modality.

This seems to be one of those things for which thinking alone is not sufficient. The only barrier to knowing what it is like to be a bat is, then, a technological barrier, not a conceptual or fundamental one.

That being the case, we may also consider what, if anything, an upload would experience. Brain uploading is a primary goal of transhumanism, and computational neuroscientists have already begun working on it. The question I pose is harder because the upload usually does not run on a biological nervous system, and easier because the processing is a simulation of a human brain (and not something else). Answering this question is important because, presumably, the subjective experience of a functional human brain, its raw sensations and feelings, is very personal and valuable to human beings. We would like to know if there is a substantial loss or difference in the quality of experience for our minds’ digital progeny.

Brain Prosthesis Thought Experiment

The question is also very similar to the brain prosthesis thought experiment, in which a brain’s biological neurons are gradually replaced by functionally equivalent (same input/output behavior) synthetic electronic neurons. In that thought experiment, we ponder how the experience of the brain would change. As far as I can tell, AI researchers Marvin Minsky and Hans Moravec think that nothing would change, while philosopher John R. Searle, in his book The Rediscovery of the Mind, maintains that the experience would gradually vanish.

Minsky’s reasoning seems to be that it is sufficient for the entire neural computation to be equivalent at the level of electrical signaling (since the synthetic neurons are electronic), while he seems to disregard other brain states. For Searle, experience can only exist in “the right stuff,” which he seems to take to be a biological substrate (although one cannot be certain) [4]. We will revisit this division of views soon enough.

Naturalist Theories of Experience

In a recent interview in H+ Magazine, AI researcher Ben Goertzel offers an intriguing summary of his views on “consciousness”:

“Consciousness is the basic ground of the universe. It’s everywhere and everywhen (and beyond time and space, in fact). It manifests differently in different sorts of systems, so human consciousness is different from rock consciousness or dog consciousness, and AI consciousness will be yet different. A human-like AI will have consciousness somewhat similar to that of a human being, whereas a radically superhumanly intelligent AI will surely have a very different sort of conscious experience.”

While he does not explicitly state his views on this particular question, it seems that he would answer in a manner closer to Minsky than Searle. Since the upload can be considered as a very human-like AI, it seems that Goertzel anticipates that the experience of an upload will be somewhat similar to a human’s.

He also mentions that the basic stuff of consciousness must be everywhere, since our brains are formed from natural matter. Why is this point of view significant? The evidence from psychedelic drugs and anesthesia implies that changing the brain’s chemistry also modulates experience. If the experience changes, what can this be attributed to? Does the basic computation change, or are chemical interactions themselves part of human experience? It seems that answering that sort of question is critical to answering the question posed in this article. However, we must first accept that experience is natural, like a star or a waterfall. Only then can we begin to ask questions with more distinguishing power.

Over the years, I have seen that neuroscientists were almost too shy to ask these questions, as if the questions themselves were taboo. Although no neuroscientist would admit to such a thing, it makes me wonder whether religious or superstitious presuppositions play a role in their apparent reluctance to investigate this fundamental question rigorously.

One particular study by biophysicist William Bialek and his superstar team of cognitive scientists [2] may shed light on the question. Bialek’s team claims that the neural code forms the basis of experience; therefore, changes in the neural code (i.e., spike trains, the sequences of signals that travel down axons) change experience. That is a very particular claim, one that may one day be proven in experiment. At present, however, it is a hypothesis we can work with, without necessarily accepting it. That is to say, we are going to analyze this matter within the framework of naturalism, without resorting to skyhooks. We can consider a hypothesis like Bialek’s, but we will try to distinguish finely between what we do know and what is hypothetical. Following this methodology, and with a bit of common sense, I think we can derive some scientifically plausible speculations, to use the terminology of American astronomer Carl Sagan.

The Debate

Let’s rewind a little. On one side, AI researchers like Minsky seem to think that uploading a mind will just work, and experience will be fine. On the other side, skeptics like Searle and physicist Sir Roger Penrose try everything to deny “consciousness” to poor machinekind.

Meanwhile, the futurist Ray Kurzweil has wittily suggested that when intelligent machines claim to have conscious experience, we will believe them (because they are so smart and convincing). That goes without saying, of course; human beings are gullible enough to believe almost anything. The question, rather, is whether a good engineer like Kurzweil himself would be convinced.

In all likelihood, the priests and conservatives of this world will say that uploads have no “souls” and therefore do not have the same rights as humans, and that nothing the uploads say matters. Therefore, we need very good scientific evidence to show that this is not the case. If we leave this matter to superstitious people, they will find a way to twist it beyond our imagination.

I’m hoping that I have convinced you that mere wordplay is insufficient. We need to have a good scientific theory of when and how experience occurs. The best theory will have to be induced from experimental neuroscience and related facts.

What is the most basic criterion for assessing whether a theory of experience is scientifically sound? No doubt it comes down to rejecting any kind of supernatural or superstitious explanation. We must look at this matter the same way we investigate problems in molecular biology. This means that experience is ultimately made up of physical resources and interactions, and there is nothing else to it!

In philosophy, this approach to mind is called “physicalism.” A popular statement of physicalism is known as “token physicalism,” which holds that “every mental state x is identical to a physical state y.” That is something a neuroscientist can work with because, presumably, when the neuroscientist introduces a change to the brain, she would like to see a corresponding change in the mental state. Think of cybernetic eye implants or transcranial magnetic stimulation to confirm that this holds in practice.

Asking the Question in the Right Way

Now we have every basic concept needed to frame the question analytically. Mental states are physical states. The brain states of a human constitute its subjective experience. The question is whether a particular whole-brain simulation will have experience and, if it does, how similar this experience is to that of a human being.

If Goertzel and I are right, then this is nothing special; it is a basic capability of every physical resource. However, we may still ask which physical states are part of human experience. We do not usually think that, for instance, mitochondrial functions inside neurons, or DNA, are part of the experience of the nervous system. We think that way because they do not seem to participate directly in the main function of the nervous system: thinking.

Likewise, we don’t really think that the power supply is part of the computation in a computer. This analogy might seem out of place, but it isn’t. If Goertzel and I are right, experience is one of the basic features of the universe – it’s all around us. However, most of it is not organized in an intelligent way, and therefore we don’t call it conscious.

This is the simplest explanation of experience: it requires no special stuff, just “stuff” organized in the right way so as to yield an intelligent, functional mind. Think of it like this: if some evil alien came and shuffled all the connections in your brain, would you still be intelligent? I think not. However, you should accept that even in that state you would have an experience, one that is probably meaningless and chaotic, but an experience nonetheless. So perhaps that is what a glob of plasma experiences.

Neural Code versus Neural States

Let us now revisit Bialek’s hypothesis: experience is determined by particular electrical signals. If that is true, even the experiences of two humans are very different, because it has been shown that neural codes evolve in different ways in different individuals [2]. You cannot just plug the code from one human into another; it would be random noise to the second human. And if Bialek is right, it would be another kind of experience.
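This point can be illustrated with a toy sketch (the stimuli, patterns, and “individuals” below are all invented for illustration): each individual uses a private, internally consistent mapping from stimuli to spike patterns, so encoding and decoding works within one individual, but one individual’s spike trains need not decode meaningfully with another’s decoder.

```python
import random

def encode(stimulus, codebook):
    """Map each stimulus feature to an individual's private spike pattern."""
    return [codebook[s] for s in stimulus]

# Hypothetical setup: two individuals whose neural codes 'evolved'
# differently map the same stimuli to different spike patterns.
rng = random.Random(0)
stimuli = ["edge", "motion", "blue"]
patterns = [(1, 0, 1, 1), (0, 1, 1, 0), (1, 1, 0, 0),
            (0, 0, 1, 1), (1, 0, 0, 1), (0, 1, 0, 1)]
code_a = dict(zip(stimuli, rng.sample(patterns, 3)))
code_b = dict(zip(stimuli, rng.sample(patterns, 3)))

decode_a = {v: k for k, v in code_a.items()}  # A's own decoder

# Within one individual, encoding then decoding is consistent:
print([decode_a[p] for p in encode(stimuli, code_a)])  # → ['edge', 'motion', 'blue']
# But B's spike trains need not mean anything when read with A's decoder:
print([decode_a.get(p, "noise") for p in encode(["blue", "motion"], code_b)])
```

The toy only captures the structural point: the same private consistency that makes a code useful to its owner is exactly what makes it opaque to anyone else.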

This basically means that the blue that I experience is different from the blue that you experience, and we presently have no way of directly comparing them. Weird as that may sound, it is based on sound neuroscience research, and so it remains a viewpoint we must take seriously.

Yet even if the experiences of two humans can be very different, they must be sharing some basic quality or property of experience. Where does that come from? If experience is this complicated time evolution of electrochemical signals, then it’s the shared nature of these electrochemical signals (and processing) that provides the shared computational platform.

Remember that a change in the neural code (spike train) implies many other changes; for one thing, the chemical transmission across synapses would change. Therefore, even a brain prosthesis device that simulates all the electrical signaling with extreme accuracy might still miss part of the experience, if the biochemical events that occur in the brain are part of experience.

In my opinion, to answer the question decisively, we must first encourage neuroscientists to attack the problem of human experience and find the necessary and sufficient conditions for human experience to occur, or to be transplanted from one person to another. They should also determine to what extent chemical reactions are important for experience.

If, for instance, we find that the property of human experience crucially depends on quantum computations carried out at synapses and inside neurons, that might mean that to construct the same kind of experience, you would need similar materials and methods of computation.

On the other hand, we need to consider the possibility that electrical signals are a crucial part of experience, due to the power and information they carry. Perhaps any electronic device hosts electron patterns like those that make up most of what you sense of the world around you; if so, present-day electronic devices would have to be assumed to contain human-like experience, and the precise geometry and connectivity of the electronic circuit could be significant. However, it seems to me that chemical states are just as important, and if, as some people think, quantum randomness plays a role in the brain, it may even be that the quantum description of the random number generator is relevant.

Simulation and Transforming Experience

At this point, you might be wondering whether the subject was not simulation. Is the question akin to asking whether the simulation of rain is wet? In some respects it is: obviously the simulation of wetness on a digital computer is not wet in the ordinary sense. Yet a quantum-level simulation that reproduces all the subtleties of chemical and molecular interactions might be considered wet in the relevant sense.

I suppose we can invoke the concept of a “universal quantum computer” from theory and claim that it would indeed instantiate wetness, in some sort of “miniature pocket universe.” Even that, of course, is very much subject to debate (as you can follow in the little digression on philosophy I provide at the end of the article).

With all the confusing things I have said, it might appear that we now know less than when we started. However, this is not the case. We have a human brain A, a joyous lump of meat, and its digitized form B, running on a digital computer. Will B’s experience be the same as A’s, different, or non-existent?

So far, if we accept the simplest theory of experience (that it requires no special conditions to exist at all!), then we conclude that B will have some experience, but since the physical material is different, that experience will have a different texture. On the other hand, an accurate simulation, by definition, preserves the organization of cognitive constructs like perception, memory, prediction, reflexes and emotions. If the dreaded panpsychism is correct, these will give rise to an experience “somewhat similar to that of a human being,” as Goertzel said of human-like AIs, yet the computer program B may be experiencing something else at the very lowest level. Simply because it runs on some future nanoprocessor instead of a brain, the physical states have become altogether different, yet their relative relationships, the structure of experience, are preserved.

Let us try to present the idea more intuitively. The brain is some kind of analog biological computer, so a good analogy is the transfer of 35mm film to a digital format. Many critics believe the digital format is ultimately inferior, and indeed the texture is different, but the (film-free) digital medium also has advantages, such as being easy to back up and copy.

Or we can contrast an analog sound synthesizer with a digital one. It is difficult to simulate an analog synthesizer, but you can do it to some extent; however, the physical makeup of an analog synthesizer and a digital synthesizer are quite different. Likewise, B’s experience will have a different physical texture, but its organization can be similar, even if the code of the simulation program necessarily introduces some physical difference (for instance, neural signals may be represented by a binary code rather than a temporal analog signal).
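The analogy can be made concrete with a minimal digitization sketch (the sample count and bit depth are arbitrary choices): the digital representation is physically quite different, small integers instead of a continuous waveform, yet the relative structure of the signal survives almost intact.

```python
import math

def sample_and_quantize(signal, n_samples=64, bits=8):
    """Digitize a continuous signal in [-1, 1]: sample it, then round
    each sample to one of 2**bits discrete levels."""
    levels = 2 ** bits
    xs = [i / n_samples for i in range(n_samples)]
    analog = [signal(x) for x in xs]                               # 'temporal analog' values
    digital = [round((v + 1) / 2 * (levels - 1)) for v in analog]  # small integer codes
    return analog, digital

analog, digital = sample_and_quantize(lambda x: math.sin(2 * math.pi * x))

# The representation differs (floats vs. 8-bit integers), but the
# relative structure survives: reconstruct and measure the worst error.
recon = [d / 255 * 2 - 1 for d in digital]
max_err = max(abs(a - r) for a, r in zip(analog, recon))
print(f"max reconstruction error: {max_err:.4f}")  # well below one part in a hundred
```

The point of the sketch is only the contrast the paragraph draws: a different physical texture (integer codes) carrying a near-identical organization (the waveform’s shape).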

So who knows: maybe the atoms and the fabric of B’s experience will be altogether different, as they are made up of the physical instances of computer code running on a universal computer. Improbable as it may seem, these uploaded people are made of live computer code, so it would be naive to expect their nature to be the same as ours. In all likelihood, our experience would contain features unimaginable to them, since they are forced to simulate our physical makeup within their own computational architecture. This brings a degree of relative dissimilarity, as you can see.

And other physical differences only amplify this dissimilarity. Assuming the above explanation, when viewing the same scene, both A and B will claim to be experiencing it as they always did, and they will additionally claim that nothing has changed since the non-destructive uploading operation completed successfully. This will be the case because the state of experience is more akin to a computer’s RAM. It is a complex electrochemical state that is held in memory with some effort, by making the same synapses fire consistently, so that more or less the same physical state is maintained. This is what must happen when you remember something: a neural state somewhat similar to the one present when the event happened is re-created. Since in B the texture has changed, the memory will be re-enacted in a different texture, and therefore B will have no memory of what it used to feel like to be A.

Within the general framework of physicalism, we can comfortably claim that further significant changes will also influence B’s experience. For instance, it may feel different to run on hardware with lower communication latency. Or, if the simulation runs on a very different kind of architecture, then physical relations (such as time and geometry) may change, and this may influence B’s state further. We can imagine this as asking what happens when we simulate a complex 3-D computer architecture on a 2-D chip. A precise answer seems to depend on a number of smaller questions about which we have little knowledge or certainty. These questions can be summarized as:

1) What is the right level of simulation for B to be functionally equivalent to A? If certain biochemical interactions are essential to the functions of emotions and sensations (like pleasure), then failing to simulate them adequately would result in a definite loss of functional accuracy: B would not work the same way as A, even if spike trains and changes in neural organization (plasticity) are simulated accurately. It is also unknown whether we can simulate at a higher level, for instance via artificial neural networks that abstract away the physiological characteristics altogether and just use numbers and arrows to represent A. It is important to know these things so that B does not turn out to be an emotionless psychopath.

2) How much does the biological medium contribute to experience? This is one question that most people avoid answering because it is very difficult to characterize. The most general characterizations may use algorithmic information theory or quantum information theory. However, in general we may say that we need an appropriate physical and informational framework to answer this question in a satisfactory manner. In the most general setting, we can claim that ultimately low-level physical states must be part of experience, because there is no alternative.

3) Does experience crucially depend on any funky physics like quantum coherence? Some opponents of AI, most notably Penrose [5], have held that “consciousness” is due to macro-level quantum phenomena, by which they try to explain “unity of experience.” On the other hand, many philosophers of AI think that the unity is an illusion. Yet the illusion is something to explain and it may well be that certain quantum interactions may be necessary for experience to occur, much like superconductivity. This again seems to be a scientific hypothesis, which can be tested.

I think the right attitude to answering these finer questions is, again, strict adherence to naturalism. In question three, for instance, it might seem easier to assume a semi-spiritualist interpretation of quantum mechanics and claim that the mind is a mystical soul; that kind of reasoning will merely lead the questioner away from scientific knowledge. I hope you can see that the panpsychist approach is actually the simplest theory of experience, because it holds that everything has experience. Then, when we ask a physicist to quantify that, she may want to measure energy, the amount of computation or communication, information content, or heat: anything that can be defined precisely and worked with. I suggest we use such methods to clarify these finer questions.

Thus, assuming the generalist theory of panpsychism, I can attempt to answer the above questions. Since we do not yet have conclusive scientific evidence, this is merely guesswork, and I am going to give conservative answers. My answer to question one could, for instance, be: at the level of molecular interactions, which would at least cover the differences among various neurotransmitters, and which we can simulate on digital computers (perhaps imprecisely, though).

The answer to question two is: at least as much as required for correct functionality, and at most, all the information present in the biochemistry (i.e., precise cellular simulations). This might be significant in addition to electrical signals.

And for question three: not necessarily. According to panpsychism, the claim may be false, since it would constrain minds to funky physics (and contradict the main hypothesis). If, for instance, quantum coherence is indeed prevalent in the brain and provides much of the brain’s “virtual reality,” then the panpsychist could argue that quantum coherence is everywhere around us. Indeed, we may have a rather primitive understanding of coherence and decoherence, which remains one of the unsettled controversies in the philosophy of physics.

For instance, one may ask what happens if there is no genuine wave function collapse, as in the deterministic Many Worlds Interpretation. Other finer points of inquiry may also be imagined, and I would be delighted to hear some from readers. These finer questions illustrate the distinctions between specific positions, so the answers could be quite varied.

Infinite Philosophical Regression

The philosophy behind this article spends a lot of time arguing, over and over again, about basic statements of cosmology, physics, computation, information and psychology. It is not clear how fruitful that approach has been. Yet for the sake of completeness, I wish to give some further references to follow. For philosophy of mind in general, Jaegwon Kim’s excellent textbook Philosophy of Mind will provide you with enough verbal ammunition to argue for several years to come.

That is not to say that philosophical abstraction cannot be useful; it can guide the very way we conduct science. However, if we want that useful outcome, we must pay close attention to the fallacies that have plagued philosophy with superstitious notions. We should not let religion or folk psychology intrude much into our thinking. Conducting thought experiments is very important, but they should be constructed with care so that the thought experiment would actually be possible in the real world, even if it is very difficult or practically impossible to realize. For that reason, among ordinary philosophical theories of “mind,” I go no further than neurophysiological identity theory, which is a way of saying that your mind is literally the events that happen in your brain, rather than something else like a soul, a spirit or a ghost.

Readers may also have noticed that I have not used the word “qualia,” because of its somewhat convoluted connotations. I did talk about the quality of experience, which is something you can think about. Among all the properties that can be distinguished in this fine experience of having a mind, maybe some are even luxurious; that is why I used the word “quality” rather than “qualia” or “quale.”

Regarding the sufficient and necessary physical conditions, I have naturally spent some time exploring the possibilities. I think it quite likely that quantum interactions may be required for an upload’s experience to have the same quality as a human’s, since biology seems more inventive in exploiting quantum properties than we once thought, and macro-scale biomolecules have been shown to exhibit quantum behavior. Maybe Penrose is right. However, specific experiments would have to be conducted to demonstrate this. I can see why computational states would evolve, but not necessarily why they would have to depend on macro-scale quantum states, and I do not see what this says, precisely, about systems that lack any quantum coherence.

Beyond Penrose, I think that the particular texture of our experience may indeed depend on chemical states, whether quantum coherence is involved or not. If the brain turned out to be a quantum computer under our very noses, that would be fantastic, for we could then emulate brain states very well on artificial quantum computers. In that case, assuming that the universal quantum computer itself has little overhead, the quantum states of the upload could very well closely resemble the original.

Other physical conditions can be imagined as well. Digital physics provides a comfortable framework for discussing experience: psychological patterns would be cell patterns in a universal cellular automaton, so a particular pattern may describe a particular experience, and two patterns will be similar to the extent that they are syntactically similar. This still would not mean you can say the upload’s experience will be the same; it will likely be quite different.
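“Syntactic similarity” between cell patterns can be given a concrete toy reading (the rule number, lattice width, and step count below are arbitrary choices): evolve two patterns that differ in a single cell under an elementary cellular automaton and measure the fraction of cells on which they still agree.

```python
def step(cells, rule=110):
    """Advance an elementary cellular automaton one step (wraparound edges)."""
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]

def hamming_similarity(p, q):
    """Fraction of cells on which two equal-length patterns agree."""
    return sum(a == b for a, b in zip(p, q)) / len(p)

width = 32
pattern = [0] * width
pattern[width // 2] = 1        # a single live cell
perturbed = pattern[:]
perturbed[0] = 1               # same pattern, one cell flipped

for _ in range(10):
    pattern, perturbed = step(pattern), step(perturbed)

print(f"similarity after 10 steps: {hamming_similarity(pattern, perturbed):.2f}")
```

A one-cell “physical” difference spreads as the patterns evolve, which is one crude way to picture why a syntactically similar but not identical substrate could still yield a noticeably different experience.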

One of my nascent theories is the Relativistic Theory of Mind, discussed in an AI philosophy mailing list thread. It tries to explain the subjectivity of experience with concepts from the theory of relativity. From that point of view, it makes sense that different energy distributions have different experiences, since measurements change with the observer. I think that a general description of the difference between two systems can be captured by algorithmic information theory (among other tools, perhaps). I have previously applied it to the reductionism vs. non-reductionism debate in philosophy [3]; I think that debate stems mainly from disregarding the mathematics of complexity and randomness.

As part of ongoing research, I am making some effort to apply it to problems in philosophy. Here, it might correspond to saying that the similarity between A’s and B’s states depends on the amount of mutual information between the physical makeup of A and the physical makeup of B. As a consequence, the dissimilarity between the two systems would be the informational difference between the low-level physical structures of A and B, together with the information of the simulation program (not present in A at all), which could be quite a lot if you compare nervous systems with electronic computer chips running a simulation. Perhaps this difference is significant enough to have an important bearing on experience.
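Algorithmic mutual information itself is uncomputable, but a standard computable proxy is the normalized compression distance, sketched below with zlib (the byte strings standing in for the “physical makeups” of A and B are, of course, purely illustrative):

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: a practical, computable stand-in
    for the (uncomputable) algorithmic information distance."""
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

# Illustrative stand-ins for the 'physical makeups' of A and B.
brain_a = b"spike trains over chemical synapses, plastic and analog " * 20
brain_copy = bytes(brain_a)
chip_b = b"binary opcodes on a clocked silicon pipeline, discrete " * 20

print(f"A vs. an exact copy: {ncd(brain_a, brain_copy):.3f}")  # close to 0: high mutual information
print(f"A vs. a chip:        {ncd(brain_a, chip_b):.3f}")      # larger: little shared structure
```

The numbers themselves mean little; the point is that “amount of shared structure” can, in principle, be operationalized rather than left as a metaphor.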

Please note that the view presented here is entirely different from Searle’s; he seemed to have a rather vitalist attitude towards the problem of mind. According to him, the experience vanishes because the substrate is not “the right stuff,” which for him seems to be the specific biochemistry of the brain [4]. Regardless of the possibility of an artificial entity having the same biochemistry, this is still quite restrictive. Some people call it carbon chauvinism, but I actually think it is merely an idolization of Earth biology, as if it were above everything else in the universe. Lastly, you can participate in the discussion of this issue on the corresponding AI philosophy thread.


1. Thomas Nagel, 1974, “What Is it Like to Be a Bat?”, Philosophical Review, pp. 435-50.

2. E Schneidman, N Brenner, N Tishby, RR de Ruyter van Steveninck, & W Bialek, 2001, “Universality and individuality in a neural code,” in Advances in Neural Information Processing 13, TK Leen, TG Dietterich & V Tresp, eds, pp 159–165 (MIT Press, Cambridge, 2001); arXiv:physics/0005043 (2000).

3. Eray Özkural, 2005, “A compromise between reductionism and non-reductionism,” in Worldviews, Science and Us: Philosophy and Complexity, University of Liverpool, UK, 11 – 14 September 2005. World Scientific Books, 2007.

4. John Searle, 1980, “Minds, Brains and Programs,” Behavioral and Brain Sciences 3, pp. 417–424.

5. S.R. Hameroff and R. Penrose, 1996, “Orchestrated reduction of quantum coherence in brain microtubules: a model for consciousness?” in Toward a Science of Consciousness: The First Tucson Discussions and Debates, eds. S.R. Hameroff, A.W. Kaszniak, and A.C. Scott, Cambridge, MA: MIT Press, pp. 507–540.


  1. Eray –

    Slightly OT question, but since you mentioned swampman …

    DD argues that because swampman has no history, he/it is in a psychological state distinct from that of pre-lightning DD. I infer from your comment that you consider – as do I – that all lingering effects of one’s history are captured in brain memory, specifically via plasticity, so that a faithful physical copy will include those effects. Is DD wrong? or am I confused (the more likely option)?

  2. “In all likelihood, I think that the priests and conservatives of this world will say that uploads have no “souls” and therefore they don’t have the same rights as humans.”

    Women were once thought not to have souls. The Catholic Church had to vote on this a long time ago. Animals were also believed not to have souls, so they still don’t have any rights. Women barely made the cut, but had the vote gone the other way, things would be much different for females.

    It’s all absurd really.

    It can be argued that spirit is in everything and that souls are just a part of one great cosmic spirit/ life force.

    • But if we do that, then the word fails to distinguish anything from anything and becomes meaningless. We mean a certain, specific thing when we say something has the intentional stance, and something else when it is sapient. Just because there is not a universally agreed-upon, discrete, unique cutoff point doesn’t mean that we should just give up on trying to draw lines in the sand. If we did, why distinguish anything from anything, since it is all ultimately just energy anyway?

  3. Eray Ozkural,

    Btw, the EM field’s feedback loops are actually very good candidates for brain-wide quantum effects, due to the nature of how EM fields interact and propagate, see:

  4. Hi Purpose,

    Thanks. I suppose you mean by neutral monism, monism that is not anomalous (which is really dualism). I agree with that, ontologically, it’s not like we need to posit more than one substance.

    That’s interesting, another channel of interaction or a global EM field could be yet another basis of subjective experience, without entailing quantum superposition in the brain.

    In functional terms, these could provide collective communication, which would be useful from a parallel-computing point of view.

    For a functionally complete upload, all extra-synaptic efficacious computation and communication must be simulated. I wonder when we will be certain!

    This of course also implies communication between nearby nervous systems; effectively, nervous systems might act as transmitters and antennas. Maybe it’s a very old mechanism that precedes synaptic communication. More experiments are needed.

  5. The evidence from psychedelic drugs and anesthesia implies that changing the brain’s chemistry also modulates experience. If the experience changes, what can this be attributed to? Does the basic computation change, or are chemical interactions actually part of human experience?

    Am I completely misunderstanding this question? The human brain has chemical synapses. Altering the brain’s chemistry alters the firing rate of those synapses. Therefore, the basic computation is changed. This is something I learned in high school biology. Either the author of this article is, despite his knowledge of philosophy of mind, not very well-versed in neuroscience, or I am completely failing to grasp what is meant.
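    The commenter’s point can be made concrete with a toy sketch (purely illustrative; the neuron model, parameters, and the `modulator` gain are all assumptions, not neuroscience): in a leaky integrate-and-fire neuron, scaling synaptic efficacy by a “chemistry” factor changes the spike count, i.e. the computation itself.

```python
def spike_count(inputs, modulator=1.0, threshold=1.0, leak=0.9):
    """Spikes produced by a toy leaky integrate-and-fire neuron.

    `modulator` scales synaptic efficacy, standing in for a change in
    brain chemistry (e.g. a drug inhibiting transmission).
    """
    v = 0.0          # membrane potential (arbitrary units)
    spikes = 0
    for x in inputs:
        v = v * leak + modulator * x   # leaky integration of scaled input
        if v >= threshold:             # fire and reset
            spikes += 1
            v = 0.0
    return spikes

train = [0.4] * 20
print(spike_count(train, modulator=1.0))  # baseline chemistry
print(spike_count(train, modulator=0.5))  # "inhibited" chemistry: fewer spikes
```

    The same input train produces a different output under altered “chemistry”, which is all the commenter means by the basic computation being changed.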

  6. Eray Ozkural,

    By neutral monism I mean that a singular (monist) substance exists as a foundational dynamic. However, this substance possesses attributes, of which at least some are matter (quarks and leptons), the fundamental interactions (electromagnetism, the strong interaction, the weak interaction, and gravity), and proto-awareness.

    You may find the following recent discoveries interesting:

    “The brain — awake and sleeping — is awash in electrical activity, and not just from the individual pings of single neurons communicating with each other. In fact, the brain is enveloped in countless overlapping electric fields, generated by the neural circuits of scores of communicating neurons. The fields were once thought to be an “epiphenomenon, a ‘bug’ of sorts, occurring during neural communication”

    “New work by Anastassiou and his colleagues, however, suggests that the fields do much more—and that they may, in fact, represent an additional form of neural communication.”

    “while active neurons give rise to extracellular fields, the same fields feed back to the neurons and alter their behavior,” even though the neurons are not physically connected—a phenomenon known as ephaptic coupling. “So far, neural communication has been thought to occur at localized machines, termed synapses. Our work suggests an additional means of neural communication through the extracellular space independent of synapses.”

    “unexpected and surprising finding was how already very weak extracellular fields can alter neural activity,” “For example, we observed that fields as weak as one millivolt per millimeter robustly alter the firing of individual neurons, and increase the so-called “spike-field coherence”

    “Increased spike-field coherency may substantially enhance the amount of information transmitted between neurons as well as increase its reliability. Moreover, it has been long known that brain activity patterns related to memory and navigation give rise to a robust LFP and enhanced spike-field coherency. We believe ephaptic coupling does not have one major effect, but instead contributes on many levels during intense brain processing.”

    Can external electric fields have similar effects on the brain?

    “Indeed, physics dictates that any external field will impact the neural membrane. Importantly, though, the effect of externally imposed fields will also depend on the brain state. One could think of the brain as a distributed computer—not all brain areas show the same level of activation at all times.”

    “Their results also showed a series of changes that occurred in a specific order during loss of consciousness and then repeated in reverse order as consciousness returned. Activity in a frequency region known as the gamma band, which is thought to be a manifestation of neurons sending messages to other nearby neurons, dropped and returned as patients lost and regained consciousness”

    Also, a couple of interesting facts about force carriers (of which electromagnetism is one). None of the fundamental forces is a discrete phenomenon; they are best described as continuous fields whose intensity drops off over time and space but never reaches zero.

    Matter itself never directly touches/interacts; matter can only interact through force carriers. I find this interesting, as in a real sense it is not the matter we are composed of that makes us a unified object; it is the force carriers that bind the matter together into a single entity.

    My answer to the Theseus’ paradox is that it is quantum decoherence (nature) that determines the thresholds by which things are unified or separated, not our descriptions of things. Furthermore, it appears that force carriers play a very important role in both decoherence and consciousness/awareness.

  7. I’ll try to address Wintermute’s question in some more depth.

    Wintermute asked:
    “What specific scientific experiment could we perform that would confirm or disconfirm this hypothesis — that the level of molecular interactions is the level of simulation necessary for functional equivalence?”

    This could be broken down into many experiments. Given the way you pose the question, to show the necessity of molecular interactions you only have to show the necessity of a single interaction. I think the literature could already verify that this is indeed the case. However, we could design a conclusive neuroscience experiment by introducing a small chemical change, for instance by inhibiting a particular molecular interaction that would not be accounted for by a simulation that doesn’t simulate molecular interactions, and showing that the lack of this interaction causes a significant behavioral change in a subject. I believe this could be done rather easily once such molecular interactions are identified. Could it be related to the “love molecule”?

    I think this is a subject that’s already being extensively discussed in the neuroscience literature, but reviews investigating the current opinion among neuroscience researchers on this issue would be immensely useful.

  8. Overall, of course, it may be much more complex than we imagine now, because apparently there are several types of neurons, which you don’t see in ANN models. I think we electronics/computer people are usually oblivious to the real complexity of the brain. We like it neat and clean, but the biological world doesn’t work like that. Biology thrives on complexity!

  9. Hello Wintermute,

    I hope things are going well up in the orbit.

    I don’t think that I can give a satisfactory answer to your question. After all, I’m a lousy computer scientist. However, I think that at least some abstraction of the chemical processes may be needed, for two reasons. First, as you’ve noticed, not all neurotransmitters are the same, so we’d have to account for those in some way; maybe just their volumetric concentrations will do, as yet another representation in the simulated brain model, perhaps just a few numbers per synapse/neuron. Second, I have the following kind of feedback in mind. You go into a situation and it causes an emotional response; now your brain is producing different chemicals, those chemicals wash over your brain, and you sense that your brain is working differently. Panic? Anger? Wonder? It seems to me that many basic sensations and feelings have a biochemical underpinning. We would need a way to emulate those, or we discard some essential feedback loops and the machine doesn’t work right.
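    A minimal sketch of what “a few numbers per synapse” plus a brain-wide chemical feedback loop might look like in a simulated brain model (the transmitter names, values, and the multiplicative gain rule are illustrative assumptions, not an established model):

```python
from dataclasses import dataclass, field

@dataclass
class Synapse:
    weight: float
    # "a few numbers per synapse": volumetric transmitter concentrations
    transmitters: dict = field(default_factory=lambda: {
        "glutamate": 1.0, "dopamine": 0.1, "serotonin": 0.1})

    def effective_weight(self, brain_chemistry):
        # Brain-wide chemical state (the emotional feedback loop) scales
        # the contribution of each transmitter pool at this synapse.
        gain = sum(conc * brain_chemistry.get(t, 1.0)
                   for t, conc in self.transmitters.items())
        return self.weight * gain

calm  = {"glutamate": 1.0, "dopamine": 1.0, "serotonin": 1.0}
panic = {"glutamate": 1.2, "dopamine": 2.0, "serotonin": 0.5}  # assumed shift

s = Synapse(weight=0.8)
print(s.effective_weight(calm))   # baseline
print(s.effective_weight(panic))  # same synapse, different global chemistry
```

    The point of the sketch is only that a handful of numbers per synapse, plus a global chemical state that feeds back on all of them, is enough to represent “your brain working differently” without simulating individual molecules.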

    If, as you say, it were just two orders of magnitude more difficult, that would be great, because it would only cost us 5-6 years; then we could combine a brain scan with a generic biochemical model and maybe we would be done. That would be sweet, but I think it requires a lot of very focused neuroscientists.

    Of course, neuroscientists are well aware of how important biochemistry is, that’s why they’re working so much on it. The recently released human brain atlas is a promise of wonderful things to come!

    I think in the worst case we could have some sort of “digital DNA” that would precisely emulate the exquisite gene expressions that drive brain biochemistry. That would be another route and would probably take a lot of resources, but in the end I think both approaches are feasible, and that they can be accomplished before 2030.


    Your friendly robot,

  10. Hi Purpose,

    I don’t think I claimed that a consciousness is “shared” between an original mind and a simulated mind, or even a copy.

    In the literature, you may find Davidson’s “swampman” thought experiment about copies. Obviously, some things are awfully disturbing about a copy, and it will not share some things with the original; for instance, it will not share the same causal history. But that doesn’t mean it will not have the same kind of subjective experience as you, and remember and relive your memories in exactly the same way, which is the extent of the discussion.

    About your particular theory: if you are saying that Newtonian physics cannot explain subjective experience, I tend to think the same, but I have no proof or disproof really. It was thought that computationalism entailed that Newtonian physics alone is sufficient for experience. I’m not so sure now, especially since Newtonian physics really describes just motion, and not other properties like electromagnetism or gravity, both of which may be part of experience. I think, to my surprise and like Penrose, that quantum gravity *may* be involved.

    On the other hand, I’m not so sure what you mean exactly by neutral monism. Can you explain? I’m an ontological monist; I think it’s about the same thing as “energy” in physics (regardless of finer distinctions in physics).

    I don’t think that being a monist prevents me from asking what kind of experience an upload will have. What do you think?

  11. Thanks a lot for these interesting questions and comments. I am going to reply in due time. I’m away from my computer at the moment, I’ve hijacked someone else’s laptop. I’m curious at the moment, does this laptop have any experience? Is there anything at all to being this laptop? Imagine the horrors of running Windows!

  12. I just went back and re-read the article as well as some more recent post-Chalmers work, and I think I see where the disconnects are located; it is in part due to an incomplete remembering of Chalmers’s theories, which do have several plot holes of their own. I apologize for my oversights. I’ve admittedly invested only casual time and energy into the subject, though I do have a genuine interest and hope I can continue without trolling. 🙂

    Let me rephrase my specific points without quantum-entangling one’s neural codes with the obviously baggage-laden tulpa of philosophical arch-pope Chalmers.

    On second reading, I agree with virtually the entirety of your article, and let me begin by saying that I believe it clearly paints the current scientific discursive landscape of experience and consciousness, including all the dualistic pitfalls, the murky but gradually thinning forests of the roles biology and degree of simulation play, and the as-yet uncrossed chasms of quantum coherence extending down into the physics community.

    I firmly agree we’ve got to approach the questions of experience with a level head firmly rooted in naturalism, to avoid said heads getting lost in the clouds of dualism and supernaturalism. Especially as this ultimately leads towards neurochemically addictive wars of fruitless tribal self-righteous indignation and emotional retribution, precluding progress towards the goal of understanding with unassailable dogma, inflexible psychological projection, and general mass flame-warring that leaves only the ash remains of straw-man armies and the sated purrs of echo-chamber circle-jerks. And you rightly point out the corollary to avoiding H. sapiens’ hard-coded tendency towards supernaturalization, which is that we must identify with a fine scalpel the regions of the mind’s inner workings which we understand and those critical questions which remain to be answered; that is, avoid succumbing to over-reduction and throwing out the experiential baby with the biological bathwater in favor of simpler/easier models of the wetware.

    Let me, then, take up razor and microscope, and explicitly address your article, and perhaps add another question to the list.

    You highlight the question:

    “What is the right level of simulation for B to be functionally equivalent to A?”

    You go on to bring up the present uncertainty as to the level of accuracy necessary, especially in the realm of biochemistry, to produce a B close enough to A that there is no “lossy compression” in the simulation and thus leaving B non-trivially different from A. Possibly dangerously so, in the case of synthetic neural nets which leave out critical information and become merely “fingers pointing away to the moon, missing all the heavenly glory.” (and thus lacking human-like empathy).

    Your answer to question one is:

    “[It is] at the level of molecular interactions which would at least cover the differences among various neurotransmitters, and which we can simulate on digital computers (perhaps imprecisely, though).”

    And I would agree with this evaluation. I believe we’d need to get at least a few floors below the level of the electromagnetic circuitry, despite the magnitudes of ease that opting for that model of necessity and sufficiency for duplicating experience would provide.

    So we are still on the same page there. No Gordian Knots.

    Now, given that we are adhering to uncompromising naturalism and strict scientific rigor, here is my question: what specific scientific experiment could we perform that would confirm or disconfirm this hypothesis — that the level of molecular interactions is the level of simulation necessary for functional equivalence?

    I am not sniping or asking this question facetiously; I honestly want to know how we will practically, empirically verify this in the frame of rigorous epistemological standards.

  13. Eray Ozkural,

    Although I share your rejection of dualism in all its forms, I find it surprising how strongly you reject it. I say this because you clearly grant all physical objects the quality of “experience”, and from your support of “panpsychism” I must assume you also grant them at least some form of proto-awareness as well. I can only conclude that you wouldn’t object too strongly to being labeled a neutral monist, and you must then admit that to the average layman the difference between a neutral monist and a property dualist is small indeed =).

    In my view, a form of neutral monism is the only non-contradictory philosophical position. In support of that position I put forward the following thought experiment.

    This argument is a specialized variation of the common “a copy is not the original” argument. This variation illustrates how and why atomically identical clones lack sufficient information to facilitate a subjective transference, irrespective of the clones’ environment or spacetime location.

    The argument:
    Imagine two atomically identical, simple, and non-distinct rooms. A human clone exists in suspended animation in each room, and the clones share the exact atomic structure of each other. The rooms are shielded in such a way that no external (to the room) influences can affect anything within. For all intents and purposes, the two rooms+clones differ only in their spatial locations (note: locations can be accounted for just as easily). Now imagine the two clones wake up at exactly the same time and begin to explore their environments.

    As described it is understandable if one were to reason that:
    1) The rooms and their clones will evolve in exactly the same way over any length of time.
    2) The two clones will share the same self awareness.

    Why neither (1) nor (2) is true:
    Modern physics tells us the reasoning about (1) is clearly wrong. The two rooms+clones will instantly begin to diverge, first in minuscule ways, but gradually these small changes compound on themselves, so that given sufficient time the divergence becomes ever more significant.

    Although less obvious, (2) is also wrong, as it is logically inconsistent. If the two clones truly shared the same self-awareness, then a change in one MUST have at least some influence on the other. If not, then clearly they cannot share anything of substance, since sharing by any definition must involve one or more transactions/interactions. The problem is that no matter how we influence clone #1, clone #2 will continue to evolve, both subjectively and objectively, independently of clone #1. Thus (2) is also clearly wrong, and any conception of a connection/relation must be regarded as a contradiction.

    But how can this be correct? Both clones obviously possess awareness and a self. Thus it seems absolutely reasonable to conclude both of these aspects must be accounted for by some pattern these clones have in common. It turns out however that the very concept of “shared”, at least as far as classical objects are concerned (i.e. non quantum) is fundamentally flawed.

    Instead we must accept that the two clones are only very “similar”, each possessing a very high probability of correlation with the other, at least until we pass the scale of atoms. With this refined definition it should no longer be surprising that two atomically identical clones share nothing of substance.

    Some people, after having read the above, will naturally want to try to refute this by drawing on philosophies like patternism, where the argument is changed into one where the focus is placed on “sufficient” similarity in order to produce the appearance of a “pattern transfer”. This common line of thinking misses the point entirely. There is no question that physical things (clones and rooms included) have structure and interact in quasi-deterministic ways; thus things can properly be described as four-dimensional patterns. However, this argument ignores the fact that patterns are always external abstractions of an arbitrary and incomplete subset of events. In other words, it is like looking up at the sky on a cloudless day and saying the sky is the same as yesterday. The statement is both meaningful and yet untrue.

    Likewise, some people will turn to the fashionable idea that consciousness, subjectivity, self-awareness or whatever you want to call it is an illusion and/or contradictory.

    If we are to take this response seriously, then it must apply equally to all classical objects (atomic-resolution patterns). As I have already shown, the two rooms+clones do not share anything of substance. Thus anything the clones possess, illusions or otherwise, must be caused by the substance of a particular clone. As should be clear, calling something that exists an illusion is problematic at best.

    I propose an alternative view:
    Mind/self is a pattern/process; however, awareness of mind/self is always singular. Its singular nature is defined by the same physical process by which all “things” emerge into classical reality.

    In quantum physics this emergence is called quantum decoherence, or in the theory of Quantum Darwinism the “einselection of pointer states”. No matter what we call this process, it should be clear that similarity is insufficient to claim a successful transference. Nature, however, possesses a method for accounting for, transforming, and keeping track of things (awareness of mind included). At present, how and why exactly this process works is not clear. What seems clear, at least to me, is that there is something about self-awareness that is both physical and yet not accounted for by classical bits.

    I do think there is more going on here than simply our inability to account for both space and time simultaneously. Obviously, we could easily modify the thought experiment so that instead of the copies being made at the same time, they occupy the same space but are made at different times.

    Quantum physics suggests the divergence in both cases will be statistically equal. In other words if we created 10 copies of rooms+clones in space and 10 copies in time, the divergence between all 20 copies would be statistically indistinguishable from each other (i.e. purely random).

    However, if we were to simulate the above on a Turing machine using Newtonian/relativistic algorithms, all 20 copies would evolve identically.
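    The contrast can be sketched in a few lines (a toy chaotic map stands in for the rooms’ dynamics; the update rule and noise scale are arbitrary illustrations, not physics): a deterministic classical update keeps identical copies in lockstep indefinitely, while injecting independent micro-noise, a stand-in for quantum-scale fluctuations, lets chaos amplify it into macroscopic divergence.

```python
import random

def run(x, steps, rng=None):
    """Evolve a toy chaotic system (logistic map, r=3.9) from state x.

    With rng=None the update is fully deterministic; otherwise a tiny
    independent perturbation (stand-in for quantum-scale noise) is added
    each step, and the chaotic dynamics amplify it.
    """
    for _ in range(steps):
        x = 3.9 * x * (1.0 - x)
        if rng is not None:
            x += rng.uniform(-1e-12, 1e-12)
    return x

# Deterministic "Turing machine" copies stay bit-identical forever.
print(run(0.5, 1000) == run(0.5, 1000))   # True

# Copies with independent micro-noise drift apart.
a = run(0.5, 1000, random.Random(1))
b = run(0.5, 1000, random.Random(2))
print(a == b)                              # False: the noise sequences differ
```

    Nothing here decides the metaphysical question, of course; it only illustrates the asymmetry the commenter points to between a classical simulation and physical copies subject to independent fluctuations.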

    To me, the apparent individuality/uniqueness of “things” is definitely one of the most interesting parts of the argument. It is clear that physical things are able to separate from and merge with other things, and so individuality is a malleable concept. I think when we fully understand the relations/differences between copies and transformations, we will have the tools we need to achieve a physical transfer from any mind substrate to any other.

  14. Maybe, I should have just taken it lightly and said
    “Do we need the uploads of dualist philosophers? It might be extremely entertaining to watch the uploads argue that they have no experience although they think they do, or that their experience is the same as the upload of the person although there is no physical similarity, or that they could not know because mental properties are not physical”.

    But I can’t do that, because somebody is trying to create the impression that I do not have the required expertise in philosophy of mind. So I have to explain why Chalmers is such a big chunk of hot air. Maybe, a big maybe, the popularity of a philosopher only shows how stupid the common man is. Did you ever consider that?

  15. Dualism=Anti-science=Creationism=Religion=Superstition

    They are all the same to me. Oh, and there is Platonism, too!

    • Dualism is the belief in opposites. Nature is full of opposites. Good and evil exist just like fire and water exist.

      People who don’t know the difference between good and evil, or right and wrong, are called psychopaths. They call themselves relativists, but actually they suffer from solipsism syndrome and cannot tell the difference between their thoughts and external reality. Therefore, whatever they think is good or evil is an illusion, because they don’t have an individual conscience. However, they do have a sense of self, so they can rationalize everything in order to get what they want, while denying the fact that others have thoughts and feelings separate from their own desires.

  16. As for you, Wintermute, I am disturbed by your ignorant comments. You sound as if you did not read anything that I wrote. I explicitly wrote that, in my learned opinion, among philosophical theories of mind only neurophysiological identity theory can be taken more or less seriously by a scientist. I explicitly wrote that the standard of thought experiment I endorse is scientifically plausible scenarios. And then I went on to describe thought experiments in that fashion.

    That kind of pathetic fantasy-novel comedy, such as that half-witted mysterian Chalmers contrived, does not meet my standards of argument or of constructing a thought experiment. Therefore, I dismiss his inferior “theory”, which holds zero cognitive significance. It has no more value than the Bible for me. It’s just religious claptrap, and a redundant rephrasing of Descartes, differing in the terms used but not in meaning.

    However, of course as every physicalist knows, property dualism properly reduces to substance dualism. That is why, in the Stanford entry, the author correctly identifies property dualism as a kind of ontological dualism. (And why Searle insists that he is not a property dualist!)

    Of course, only a scientifically illiterate person would believe that all physical properties (let’s count: all quantum states, all geometry, space, time, energy, mass, charge, every kind of physical measurement and existence imaginable) are the same, while the mental “properties” differ.

    How is this possible if supervenience physicalism is true? It is not. Therefore, this is a rejection of supervenience physicalism, and that means it is idiocy to boot.

    First of all, that so-called professor uses the obsolete Platonic definition of “property”, as if properties are additional entities attached onto matter (i.e. substance, anything that is physical, to simplify). That is not the case. A property is in fact not a definite ontological distinction; it is a mental construct, a psychological border of things, a purely epistemological entity. It is just a family-resemblance relation. Nothing more.

    Therefore, it is not only scientifically illiterate but also philosophically laughable to try to “imagine” these extra-physical properties: properties that are not physical and apparently cannot even be caused by any physical property, since they are independent of all physical properties. Then, of course, they have independent existence, and they require a completely extra-physical substance. That is to say, they require extra-physical ghosts to be real. Therefore, property dualism reduces to substance dualism; they are one and the same view, actually.

    Needless to say, Aristotle’s conception of properties is much more reasonable, but that must escape Chalmers.

    Of course, this kind of philosophical reasoning transcends Chalmers by at least 1500 years. Because Chalmers’s philosophy has not advanced to the point of getting past Plato.

    Would you even be able to contemplate the standard of nomologically possible worlds, if I did not mention it to you? I guess not, since you so quixotically protest the standard of thought experiment that true philosophers like Einstein have contemplated and applied successfully, while inferior minds like Chalmers have made a mockery of.

    Since Chalmers’s theory is not to be taken seriously, I suggest that his engagement with the “hard problem” is also adequately dismissed, and left to neuroscientists and physicists. They know much better than Reverend Chalmers or any other dualist “philosopher”. That’s the gist of my article, in fact.

    And for that reason, I refuse to show any respect to those dualist philosophers. They are a hindrance to natural philosophy. They are obsolete, and their “philosophy” must be eliminated from the discourse of intelligent people.

  17. There is no subtlety in Chalmers, and I do not need to say more. You probably do not even understand what I said. Yes, arguments from conceivability do not work, because philosophers do not care what an idiot can imagine. So one idiot (Descartes) can imagine God, and another idiot (Chalmers) can imagine a “possible world” which is physically the same but there is no experience. That is not possible, because when you duplicate physics, you duplicate every property. There is no such thing as a non-physical property, and Chalmers is a freaking moron for all I care.

    I do not have to give you any more reasoning. What I said above is enough.

  18. I’ll assume your ad-hominem charged first reply was an emotional backlash.

    “Such a reality is physically impossible, and therefore it is not a scientifically plausible thought experiment. No conclusions can be drawn from it.”

    Citation needed. Please prove that such an alternate reality wherein consciousness does not exist is physically impossible. (Hint: herein lies another unexamined assumption)

    Also, I believe you’re a bit too quick to hand-wave away Chalmers’s ideas and are missing the deeper subtlety of the p-zombie thought experiment. It’s not about “superstition” or “dualism” or some misplaced need to resurrect the Cartesian human soul, but about more carefully dissecting the hard problems of consciousness.

    The point is that consciousness, so far as we know, does not actually do any “work” within the world, so far as science can tell us; it is causally unnecessary. That is, the laws of physics are necessary and sufficient to describe the workings of the natural world, down to every last electronic axon propagation in your brain which ’causes’ you to make every decision you’ve ever made and feel every qualia you’ve ever felt. It is not a “proof of a separate ‘dual’ substance / property called consciousness” but rather an identification of the elephant in the room: why *should* consciousness exist at all? The cogs and gears of the universe would seem to be doing just fine without it. “Because I can’t imagine a reality in which consciousness does not exist” or “Because a universe without consciousness is physically impossible” do not answer that question.

    But the real problem, since you are obviously of a very scientific mind is this: How will you empirically verify the specific experiential state of consciousness of a given system? How will you *know* if an uploaded version of you is actually experiencing anything or anything similar to what you experience?

    Your critique of the colorblind neuroscientist argument runs into the same problem, ultimately. Your solution is to implant nano-scale electrodes into her brain to generate the “neural codes” necessary to experience red. This is effectively saying, “To give a colorblind person the experience of red, we simply make them no longer colorblind.” To give someone the experience of a qualia, we give them the experience of the qualia. Which negates the premise of the experiment.

  19. As I understand it, your objection is built on the premise that property dualism, the 2-D argument for property dualism, and the concept of the p-zombie are all tenable, and even correct.

    Respectfully, I have to disagree. I do not believe that any of those are tenable, therefore I cannot really give an answer to you.

    If you would like to consider the limits of scientific knowledge in understanding subjective experience, I think it would be better to argue it in the framework of the Knowledge Argument:

    Then, I can give a very good answer. The knowledge argument assumes that experimentation is not a method of science. The neuroscientist Mary can, however, induce the experience of red in her brain by stimulating her neural circuitry in the right way, for instance by implanting nanotech electrodes and asserting the neural code that corresponds to a particular red sensation over her visual field, or by stimulating her photoreceptors artificially, etc. That is all the knowledge of red Mary can have; therefore the knowledge argument is not an argument against physicalism. It’s important to understand the difference between the subjective and objective points of view of a brain, though. That is the fundamental question in philosophy of mind, isn’t it? For more discussion of the knowledge argument, both the Stanford Encyclopedia of Philosophy and the Internet Encyclopedia of Philosophy contain ample material.



  20. I left Chalmers’s theory out of this article because it does not meet my epistemological standards (I doubt any of his theories do). It is one of those extremely superstitious theories that contradict neurophysiological identity theory.

    The “argument from conceivability” that you allude to is childish and ignorant. There is no zombie problem: although Chalmers can, through his scientifically illiterate mind, imagine an alternative reality in which a being is physically identical to you but has no experience, such a reality is physically impossible, and therefore it is not a scientifically plausible thought experiment. No conclusions can be drawn from it.

    I did not mention Chalmers because I did not wish to dilute an otherwise intelligible essay.

    In fact, that two-dimensional argument for property dualism mirrors Descartes’s argument for the existence of God. Why should we accept the existence of God merely because some French Christian philosopher can conceive of a perfect being?

    In both cases, we should say, “No, I have no time for your feeble imagination. Go buy a physics book and eliminate your ignorance!”

    Since Chalmers’s entire epistemology is built on superstition and flawed logical reasoning, I will not respond to the rest of your comment; in fact, I did not even read it.

  21. Looking forward to the day science can deal with a headline that says “states of consciousness and the nature of experience”.

  22. While it’s an important and philosophically fruitful question to ponder, I don’t believe there is any way to scientifically verify the existence of subjective experience, or the degree of similarity of any given system’s experience to human subjective experience. At least not any way we know of at present. One can *assume* that the neural code is the necessary level of description, or you could assume neural states, or perhaps you’re a diehard Searlian and assume that nothing less than soup-to-nuts biological reproduction down to the quarks will do. But any of these hypotheses can only remain assumptions, because of what David Chalmers describes in his p-zombie thought experiments (what we might call “The Zombie Problem”). A philosophical zombie is an entity which behaves exactly like a person, but for whom there is no internal conscious experience. That is, all of human behavior and neurochemistry can be reduced to a series of reactions/reflexes of matter at the atomic level, unless we’d like to toss all the physical laws of the universe out the window once we enter the brain. If this is so, then a software simulation of you which does not fully reproduce you atom for atom could seem to behave like you, its neural circuits even dancing like yours as it tells you about itself, its favorite flavor of ice cream, its love for its significant others, while there is no “light on” inside, no experience, and the “digital you” is merely causal cascades of interactions of matter and energy. And there is no scientific way for us to tell the difference between a conscious human and a p-zombie, because even if we asked digital-you, it would still respond, “Of course I’m conscious, what a silly question!”, without actually experiencing anything. There is no external, third-person, objective “consciousness litmus test” we can perform to verify conscious experience.

    We assume that, since other humans look like us, act like us, and show similar neural plumbing in fMRIs, everyone is experiencing (AKA “The Gentlemen’s Agreement”), but this is itself not empirically verifiable. There is only one experiencer that we can be sure of — ourself.

    Not that unverifiable questions have ever stopped philosophers, scientists, religions, or the drunk college student from asking them. 🙂
