Against Naive Uploadism: Memory, Consciousness and Synaptic Homeostasis

Part 1 – The Assumptions

Uploading one’s consciousness into a computer is seen by many as an inevitable step in human evolution. It’s considered a means of achieving immortality and freeing our consciousnesses from the meaty, bone-bound prison currently encasing our substrate-neutral souls. This vision seems prima facie logically consistent, and I am not disputing its possibility for the far distant future of humanity. However, I would like to counter some of the naïve assumptions that let people claim, often profitably, that such an eventuality is close at hand, or even that we have some idea of what needs to be accomplished scientifically to achieve it.

I think the most pernicious and obviously wrong assumption about the human brain is any comparison of it to a digital, serial computer. One line of reasoning, which asserts that we are approaching computing power comparable to the human brain’s, is almost absurd and displays a complete lack of understanding of how the brain operates. It is voiced most clearly by the futurist Ray Kurzweil in his book “The Age of Spiritual Machines.” On page 103, paragraph 2, he states:

“The human brain has about 100 billion neurons. With an estimated average of one thousand connections between each neuron and its neighbors, we have about 100 trillion connections, each capable of a simultaneous calculation … (but) only 200 calculations per second…. With 100 trillion connections, each computing at 200 calculations per second, we get 20 million billion calculations per second. This is a conservatively high estimate…. In 1997, $2,000 of neural computer chips using only modest parallel processing could perform around 2 billion calculations per second…. This capacity will double every twelve months. Thus by the year 2020, it will have doubled about twenty-three times, resulting in a speed of about 20 million billion neural connection calculations per second, which is equal to the human brain.”
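Kurzweil’s arithmetic itself is easy to check. Here is a minimal sketch reproducing his figures; every number in it is his assumption, quoted from the passage above, not established neuroscience:

```python
# Kurzweil's back-of-the-envelope estimate, reproduced for inspection.
# Every figure here is his assumption, quoted from the passage above.
neurons = 100e9                  # "about 100 billion neurons"
connections_per_neuron = 1_000   # "one thousand connections" each
calcs_per_second = 200           # "200 calculations per second" per connection

brain_rate = neurons * connections_per_neuron * calcs_per_second
print(f"claimed brain rate: {brain_rate:.1e} calc/s")   # 2.0e+16, i.e. 20 million billion

# The hardware extrapolation: 2 billion calc/s in 1997, doubling yearly.
doublings = 2020 - 1997          # "doubled about twenty-three times"
print(f"extrapolated 2020 chips: {2e9 * 2**doublings:.1e} calc/s")  # ~1.7e+16
```

The arithmetic checks out; the problem, as we will see, is with what the numbers are assumed to mean.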

A Spike is not a FLOP

To discredit this foolish notion once and for all, let’s begin by looking at what’s actually being claimed and at some of the unspoken assumptions inherent in this argument.

Assumption 1: A spike is in some way comparable to a FLOP.

This is the most fundamental error any theorist can make when prognosticating about the computational power of a biological brain. A FLOP, a floating-point operation as technically defined by the Institute of Electrical and Electronics Engineers’ (IEEE) Standard for Floating-Point Arithmetic (IEEE 754), is a single mathematical operation performed on floating-point values. The power of computer processors is commonly measured in FLOPS, the number of such operations the processor can perform per second. For example, the computer I’m writing this article on has four processing cores, each capable of operating at 2.4 GHz. A performance analysis program just told me my CPU is capable of reaching 38.4 GFLOPS. This is a modest figure compared to the peaks of computing power that now exist in the world’s supercomputer facilities.
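That peak figure follows from simple bookkeeping. The sketch below reproduces the 38.4 GFLOPS number under the assumption of 4 floating-point operations per core per cycle; that per-cycle figure is my assumption, chosen because it is consistent with the reported benchmark, not something the analysis program stated:

```python
# Theoretical peak FLOPS for the machine described above.
# flops_per_cycle = 4 is an assumption (e.g. one 4-wide SIMD op per cycle)
# chosen because it is consistent with the reported 38.4 GFLOPS.
cores = 4
clock_hz = 2.4e9
flops_per_cycle = 4

peak = cores * clock_hz * flops_per_cycle
print(f"peak: {peak / 1e9:.1f} GFLOPS")   # 38.4 GFLOPS
```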

The crucial assumption in this line of reasoning is that an action potential (spike) fired by one neuron, as it branches into every axonal terminal, equals one calculation at each synapse. Even a cursory examination reveals this doesn’t make sense. A spike is the result of upstream network activity: for a given neuron to spike, a huge amount of contingent information must have flowed through the earlier stages of the network before arriving at the neuron in question, which then fires or not depending on the state of the network prior to receiving its input. In a way, each spike is a report of that neuron’s tuned appraisal of all previous network activity. Rather than being an elemental calculation, the spike is a report of an enormous amount of previous calculation. Just how much, for any given unit, is unknown, but we can look at some simple examples to get an appreciation of how much information must be conveyed by the lonely spike.

In the retina, light is collected by specialized cells called rods and cones. Cones detect both the presence and the wavelength of light, while rods detect only presence; rods are sensitive in low-light conditions, while cones require more light to be active. Each of these is connected to a ganglion cell through bipolar cells. The bipolar cell signals the state of the light receptor by firing spikes into the ganglion cell’s dendritic arbors. The three types of retinal ganglion cell receive inputs from either a few, many or very many rod and cone cells. Each time one of these cells sends a spike, that single spike represents the collective activity of a few to hundreds of other cells: rods and cones that detect the presence and type of light; bipolar cells that collect and transform this signal into something the rest of the retina can understand; horizontal and amacrine cells that add a layer of processing to amplify differences in contrast and in the rate of change of illumination; and the ganglion cells themselves, which send their reports on to the thalamus for further processing before the signal reaches the very large vision-processing regions of the cortex.

Even at this retinal level, one spike represents a very large number of elemental detection events. The physiology of the cells themselves adds information to the content of the spikes. Complex interactions between the membrane and its receptors, the detailed morphology of the cytoskeleton, which soluble proteins occupy which parts of the cytoplasm, the enormous amount of information in the nucleus stored as DNA and written into RNA as needed to control how the cell responds to its environment: all of this goes into whether or not there will be a spike, and the spike represents all of this information.
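To put rough numbers on this convergence, here is an illustrative sketch. Both pooling ratios are hypothetical stand-ins, since actual convergence varies enormously across the retina:

```python
# Toy convergence arithmetic: how many elemental detection events a
# single ganglion-cell spike can stand for. Both ratios are hypothetical.
photoreceptors_per_bipolar = 20   # rods/cones pooled by one bipolar cell
bipolars_per_ganglion = 15        # bipolar cells converging on one ganglion cell

events_per_spike = photoreceptors_per_bipolar * bipolars_per_ganglion
print(f"one spike ~ a report on {events_per_spike} photoreceptors")  # 300
```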

And this is at the most basic primary sensory level. What happens when the spike arrives in the cortex? It meets countless other spikes, constant and ongoing network activity that its presence modifies. As cells further and further downstream from our sensory spike receive their inputs and spike in turn, their spikes in some way represent all the previous ones. The information-dense spike from the ganglion cell now explodes into spikes laden with context and memory, an enormous amount of information embodied in the physical matter of the cells themselves.

What does a spot of light mean? If you are underground in a cave and want to escape, it means something very different than it does if you are in a desert looking for shade. There has to be memory, so the organism knows what the event means for it: one must remember being trapped in a cave, or being stranded in a desert. This information-dense context, a galaxy of spikes in its own right, takes the single ganglion spike, itself representing hundreds of other cells, as a meaningful stimulus with relevant implications for the organism.

One can make a counterargument that each individual neuron, however complex, only knows if it receives a spike or not and does not need to know the previous state of the whole network for it to respond in a meaningful way. This is true, at least for a physical system like a brain. To simulate this in a computer would require more information than just an abstract spiking function at every synapse at an average rate. The computer would need to hold a physical model of each neuron and their activity constantly in memory because each unit is constantly being used, modifying the function of a huge number of the network’s other units, or potentially all of them.
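A rough sense of what holding such a model constantly in memory would cost can be had from a back-of-the-envelope sketch. All the per-neuron figures below are illustrative assumptions of mine; real requirements depend entirely on the level of biophysical detail chosen:

```python
# Back-of-the-envelope memory cost of keeping biophysical state for every
# neuron live at once. The per-neuron figures are illustrative assumptions.
neurons = 100e9          # the commonly quoted estimate
compartments = 1_000     # spatial compartments per neuron (assumption)
state_vars = 10          # state variables per compartment (assumption)
bytes_per_var = 8        # one double-precision float each

total_bytes = neurons * compartments * state_vars * bytes_per_var
print(f"~{total_bytes / 1e15:.0f} PB of live state")   # ~8 petabytes
```

And this counts only the state that must be resident, before a single time-step of dynamics is computed.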

One could object that a digital computer’s FLOP also needs to know the results of prior FLOPs in order to make its calculation; after all, the processor operates in a serial fashion. But this is not the same. The results of each operation are stored in a central addressed memory until they are needed again, so the history lives in a memory bank separate from the processing units.

A neuron is both processor and memory. Its changeable physical structure is a record of its previous activity. Therefore, for a spike to be meaningful, it must take into account the complete biophysical configuration of the neuron that fired it. Since this configuration is dynamic and constantly changing, a digital computer must model all of this complex physics just to decide whether to fire a spike, adding enormously to the computational burden.
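Even the crudest stateful neuron model makes the point. In the leaky integrate-and-fire sketch below (a toy with illustrative parameters, nothing like a real biophysical model), the very same input pulse does or does not produce a spike depending on the membrane’s recent history:

```python
import numpy as np

# Minimal leaky integrate-and-fire neuron. The membrane potential is a
# running record of past input, and it, not the instantaneous input,
# decides whether a spike fires. All parameters are illustrative.
def lif(drive, dt=1e-3, tau=0.02, v_rest=-65.0, v_thresh=-50.0):
    v, spike_times = v_rest, []
    for i, inp in enumerate(drive):
        v += dt * (-(v - v_rest) / tau + inp)   # leaky integration
        if v >= v_thresh:                       # threshold crossing
            spike_times.append(round(i * dt, 3))
            v = v_rest                          # reset after the spike
    return spike_times

pulse = 8000.0                                     # brief strong input at t = 50 ms
quiet = np.zeros(100); quiet[50] = pulse           # pulse on a silent background
primed = np.full(100, 500.0); primed[50] += pulse  # same pulse, ongoing drive

print(lif(quiet))    # []      -> identical pulse, no spike
print(lif(primed))   # [0.05]  -> history decides
```

A single state variable already makes the unit’s output history-dependent; a real neuron carries incomparably more state than this.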

We can now see that comparing a spike to a FLOP is a spurious argument, based on a hasty, superficial analogy between neural action potentials and serial processor operations. A CPU operation is an elemental calculation in which a binary number (64-bit, in my machine’s case) is pulled from memory, subjected to a rule that transforms it into another 64-bit number, and sent back to memory. The FLOP needs only enough information to address the memory and hold the number itself. The spike depends on the entire biophysical configuration of the neuron, as well as the entire history of the network activity that led to its being fired.

We therefore have good reason to doubt the claim that a spike is in any way equivalent to an elemental arithmetic calculation, a FLOP.

Assumption 2: The spike is the unitary, constant unit of information transmission in the brain.

There are a variety of subtle neurobiological facts that render this assumption unsound, but let’s take the most glaringly obvious first. The brain is not composed entirely of neurons. Astrocytes are a critical cell type, absolutely necessary for brain function. Many, if not most, synapses in the brain are abutted by an astrocytic process. These cells take up excess glutamate to protect neurons from the toxic effects of too high a concentration of the neurotransmitter in the synapse. They bring nutrients to every neuron in the brain by reaching out to blood vessels, shuttling in energy molecules and removing waste. They also communicate with each other through beautiful curtains of calcium waves that propagate through astrocytic gap junctions, modulating their duties to the neurons. There is even some evidence that neurons can communicate directly with astrocytes and vice versa through neurotransmitter release, although this is still not widely agreed upon. Astrocytes also underlie the BOLD signal detected in functional Magnetic Resonance Imaging (fMRI) research: they bring oxygenated blood to highly active neurons and remove the deoxygenated blood that forms the basis of the signal. The dynamics of this process are constrained by the physiology of the astrocyte, adding a layer of complexity to the interpretation of fMRI results.

Astrocytes have long been said to outnumber neurons in the human brain by as much as an order of magnitude, though more recent cell counts suggest a ratio closer to one-to-one. Either way, the kind and quantity of information processing they add to the network is largely unknown at this point. Any putative comparison between a computer’s speed and the brain’s “speed” that does not take into account the contribution of the astrocytes is meaningless.

Next, neurons constantly signal to each other without spiking. A spike occurs only if the membrane potential reaches a threshold value at the initial segment of the axon. Below that threshold, neurons continuously send each other transmitters such as peptides, growth factors, steroid hormones, paracrine and autocrine signals and small molecules, which cause subthreshold changes in the downstream neurons. These changes can manifest as an oscillation in the subthreshold membrane potential, allowing temporal gating of a continuous input signal: only when the intrinsic oscillation reaches its peak coincident with an incoming signal will the neuron spike. This process takes energy and must be actively maintained, and it takes a great deal of information to describe such interactions. Subthreshold signaling can also lead to rearrangement of the internal environment of the cell: movement of proteins within the membrane, removal of proteins from the membrane, changes in their relative positions, or reconfiguration of the organelles supporting this complex physiology. All of this constitutes detailed, information-dense events that a single spike is taken to represent.
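A toy sketch of this gating, with made-up numbers (an 8 Hz subthreshold oscillation and a fixed-size depolarization per input, both illustrative), shows how identical inputs arriving at different phases produce different outcomes:

```python
import numpy as np

# Toy model of subthreshold temporal gating: a neuron whose membrane
# potential oscillates below threshold spikes only when an input arrives
# near the oscillation's peak. All numbers are illustrative.
dt = 1e-3
t = np.arange(0.0, 1.0, dt)
v_osc = -60 + 4 * np.sin(2 * np.pi * 8 * t)   # 8 Hz oscillation around -60 mV
threshold = -52.0                              # spike threshold (mV)
epsp = 6.0                                     # depolarization per input (mV)

for t_in in (0.100, 0.155, 0.210):             # same input, three phases
    v = v_osc[int(round(t_in / dt))] + epsp
    print(f"input at {t_in * 1000:.0f} ms -> {v:.1f} mV, spike: {v >= threshold}")
# Only the input coincident with the oscillation peak (155 ms) fires.
```

The same input, carrying the same “message,” is either reported or silently absorbed depending on continuous, non-spiking state.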

There are, then, a number of fairly well understood processes in the brain that exchange information in the absence of spikes. Ignoring them just to grab the low-hanging fruit of one spike = one FLOP is superficial and unrealistic. If we are ever to understand the brain well enough to simulate it, we must above all be realistic.

Part 2 – A Detailed Look at a Non-Spike Information Process

“Whoever has will be given more, and he will have an abundance. Whoever does not have, even what he has will be taken from him.” – Matthew 13:12

The Matthew Principle holds just as true in the brain as it does on the internet or in a social network. The rich get richer, and websites with huge numbers of links attract the most attention, gaining even more links. Likewise, an already potentiated synapse is more likely to be activated and therefore to potentiate again. This may be pleasant for the wealthy and excellent business for large websites, which gain material advantages from such uneven, self-perpetuating resource allocation. But were it to happen unchecked in the brain, the whole delicate dance of synaptic transmission would come crashing to a halt as out-of-control positive feedback pushed the neural tissue into a runaway, epileptic, excitotoxic death spiral. Indeed, this very basic theory of how the brain learns, from the days of neuropsychology pioneer Donald Hebb to today, holds within it the seeds of its own destruction. Yet, as you and I can attest from our very existence, sitting here now, reading or writing these words, this does not happen.

To be sure, the brain can get out of control, and some people do experience periods of runaway excitation. Epilepsy is a serious, devastating disease that affects millions of humans and animals throughout the world. But this is not the normal state. Somehow the brain compensates for this mechanism of learning and memory in a way that undoes the destructive positive feedback. Yet this very compensation leads to difficult questions about the nature of memory and consciousness in the brain, questions that must be recognized and addressed as we continue to develop our understanding of how the brain works.

The basic mechanism of long-term potentiation follows Hebb’s rule, at least schematically: cells that fire together wire together. When one neuron synapsing on a second sends a strong enough signal to activate the second neuron, the connection between them becomes stronger. The second (post-synaptic) neuron is aware of its own firing through somatic mechanisms involving calcium concentration, mediated by L-type voltage-gated calcium channels. This signal, embodied by the calcium, is transduced by members of the calcium-calmodulin kinase family, which trigger a variety of cellular metabolic processes. The binding of transmitter onto receptors at the particular synapses that were active enough to cause the somatic depolarization produces local changes that increase the depolarization upon subsequent binding of the same amount of transmitter. In a later phase, substances produced by the metabolic processes triggered by increased somatic calcium are thought to be detected and incorporated by the activated synapse, leading to a longer-term strengthening of that synapse.

This means a subsequent signal from the first neuron to the second, at the same strength, will be more likely to activate the second. The actual physiological mechanisms behind this are legion: the balance between the fluid number of AMPA receptors and the stately, discriminating NMDA receptors; the forest of post-synaptic active-zone material; the dynamic cytoskeleton; dendritic polyribosome availability; synaptic mRNA; and spine apparatus configurations, among others. All of these work together to underlie the potentiation, in ways we are learning more about every day.

One might think the way to prevent this positive feedback from getting out of control would be to depress the potentiated synapses, or at least to depress other synapses onto the same neuron. A little thought reveals this is no answer. Depressing the potentiated synapses would erase the differences gained through potentiation, erasing a memory or an association and defeating the brain’s purpose entirely. Depressing other synapses on the same neuron would not help either: you can only depress something to zero, but an out-of-control positive feedback loop can grow without bound, or at least until the system destroys itself.
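The runaway is easy to demonstrate numerically. In the toy Hebbian loop below (a bare multiplicative rule with an illustrative learning rate, not a model of any real synapse), a synapse that keeps driving its target grows exponentially, which no finite amount of depression elsewhere can cancel:

```python
# Toy demonstration of Hebbian runaway. A synapse strengthened whenever
# it drives its target grows without bound. Values are illustrative.
w = 1.0            # initial synaptic weight (arbitrary units)
rate = 0.1         # Hebbian learning rate

for _ in range(50):
    pre = 1.0               # presynaptic activity
    post = w * pre          # postsynaptic response it evokes
    w += rate * pre * post  # Hebb: co-activity strengthens the synapse

print(f"weight after 50 pairings: {w:.1f}")   # ~117.4, exponential growth
```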

The method the brain seems to employ to prevent this destructive feedback is called synaptic homeostasis. We said earlier that the cell is aware of its own activity, which it detects through the calcium concentration in the soma. Beyond this, it can also detect its overall level of activity: when activity runs too high, somatic calcium concentration climbs too high as well. This triggers a negative feedback process leading to the removal of excitatory receptors, and even of whole dendritic spines, across the entire cell.

One might imagine that only the synapses that had strengthened too much would have their efficacy lowered, but this is not the case. Where a dendritic spine had 10 receptors before (a hypothetical example), after downscaling it would have 5, while a neighboring spine that had 4 receptors now has 2. In other words, the cell scales its overall sensitivity to incoming information from the network up or down while preserving the ratios between its synapses. But once we have a mechanism that prevents synapses from becoming too strong despite their continued co-activation, we are left with a series of difficult conceptual problems.
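The hypothetical numbers above already describe the rule: scaling is multiplicative, so every synapse shrinks by the same factor and the ratios between them survive. A minimal sketch:

```python
# Multiplicative synaptic scaling, using the hypothetical receptor counts
# from the text. Every synapse is scaled by the same factor, so relative
# strengths are preserved while total excitatory drive falls.
receptor_counts = [10, 4]          # two spines on the same neuron
scale = 0.5                        # global downscaling factor

scaled = [n * scale for n in receptor_counts]
print(scaled)                                    # [5.0, 2.0]
print(receptor_counts[0] / receptor_counts[1],   # 2.5 before
      scaled[0] / scaled[1])                     # 2.5 after: ratio preserved
```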

The relative strengths of all the synapses on a given neuron are now held constant. The strength of a given synapse can still move up or down with activity; what the scaling preserves is the current pattern of relative strengths across all synapses. This leaves us with the input-to-output problem. If input at synapse X leads to activation pattern (spike train) X1, and input at synapse Y leads to spike train Y1, a compensatory synaptic scaling might cause pattern Y1 to look like X1, destroying the neuron’s ability to report differences in its input through its output. And different regions of the brain probably encode information differently: in one region the presence of a spike may be the meaningful event, while in another it may be some rate of spikes, or a change in the rate of an ongoing spike train.
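The worry can be made concrete with a toy linear rate model (my own illustration, not a biophysical claim): after a global downscaling, the output evoked through the strong synapse Y can be numerically identical to what the weak synapse X used to evoke, so an observer of the output alone cannot tell which input produced it:

```python
# Toy illustration of the input-to-output ambiguity. A linear "neuron"
# whose output rate is weight * input rate; all values are illustrative.
def output_rate(weight, input_rate):
    return weight * input_rate

w_x, w_y = 1.0, 2.0          # weak synapse X, strong synapse Y
scale = 0.5                  # homeostatic downscaling factor

before_x = output_rate(w_x, 20.0)           # X before scaling -> 20.0
after_y = output_rate(w_y * scale, 20.0)    # Y after scaling  -> 20.0
print(before_x, after_y)     # identical outputs from different inputs
```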

If a process exists that can destroy the unique mapping of some input onto some output, what does that mean for cognition and memory? If memory is a differential ratio between strong and weak synapses that the neuron scales itself to preserve, then any information encoded in the absolute strengths of those connections is lost from the output. How, then, can we think of memory as merely the strength of some synapses?

Really understanding which processes within the brain are correlated with, lead to, or actually are cognition, memory, and consciousness is absolutely necessary before any fantasy of uploading or simulation can hope to be realized. Not only will we have to know how each of these conceivably separate processes underlies aspects of cognition or consciousness, or indeed whether it does at all; we must also understand how physical changes, at least down to the scale of individual proteins, lead to changes in network activity and onward to higher-order mental abstractions, including consciousness and the self.

We cannot simply declare the brain a computer and try to reconcile poorly understood facts about it with an old-fashioned concept of a digital process. One cannot, in any meaningful way, claim the brain is a processor that computes at X operations per second. People like Kurzweil have attempted this, and such phantasms become gaping holes in their arguments, because intellectual weight is placed on their being true when in fact there is no way to evaluate the truth value of these propositions.

True, the human brain is said to have about 100 billion neurons, but this is far from certain; no one has ever counted them all, and it remains an estimate. There are a great many cells of other kinds within the skull besides neurons, astrocytes for example. Nor can one take this number, multiply by the average number of synapses per cell, multiply again by the average spiking rate, and claim the result describes a digital process performing that many events per second. A spike is not a FLOP. What one spike represents is almost always unknown. In a very few instances a single spike can be said to lead to the detection of some external event, as with certain touch receptors in the skin. But even there, the single spike represents a huge amount of information: pressure, location and duration. And once it arrives at the brain, it explodes into contextual meaning modified by the local environment and the entire history of the organism.

Thinking about how these complex metabolic and homeostatic processes interact with the network is essential if we truly wish to understand how cognition, memory, and consciousness are supported by brain activity. Which components map to what? What parts of the brain map to which parts of the mind? Can we even think of cognition and mind this way? What is the role of the shape of a neuron’s plasma membrane? Where is the information stored? In the kind and density of molecules in a synapse, or the configuration of receptors and scaffolding in a post-synaptic density? In the shape and extent of organelles like the endoplasmic reticulum? In the metabolic pathways connecting the genetic material in the nucleus to the environment? Any of it? All of it? Any hope of uploading or simulation is a mere mirage until we know which parts of the brain lead to which parts of the mind.
