Against Naive Uploadism: Memory, Consciousness and Synaptic Homeostasis

Part 1 – The Assumptions

Uploading one’s consciousness into a computer is seen by many as an inevitable step in human evolution. It’s considered a means of achieving immortality and freeing our consciousnesses from the meaty, bone-bound prison currently encasing our substrate-neutral souls. This seems prima facie logically consistent, and I am not disputing the possibility of this vision for the far distant future of humanity. However, I would like to counter some of the naïve assumptions that allow anyone to profit from claiming such an eventuality is close, or even to claim that we have some idea of what needs to be accomplished scientifically to achieve it.

I think the most pernicious and obviously wrong assumption about the human brain is any comparison of it to a digital, serial computer. One line of reasoning, which asserts that we are approaching computing power comparable to the human brain’s, is almost absurd and displays a complete lack of understanding of how the brain operates. It is voiced most clearly by the futurist Ray Kurzweil in his book “The Age of Spiritual Machines.” On page 103, paragraph 2, he states:

The human brain has about 100 billion neurons. With an estimated average of one thousand connections between each neuron and its neighbors, we have about 100 trillion connections, each capable of a simultaneous calculation … (but) only 200 calculations per second…. With 100 trillion connections, each computing at 200 calculations per second, we get 20 million billion calculations per second. This is a conservatively high estimate…. In 1997, $2,000 of neural computer chips using only modest parallel processing could perform around 2 billion calculations per second…. This capacity will double every twelve months. Thus by the year 2020, it will have doubled about twenty-three times, resulting in a speed of about 20 million billion neural connection calculations per second, which is equal to the human brain.
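Kurzweil’s arithmetic itself is easy to reproduce. A quick sketch, using his figures (not mine), shows the two numbers he is comparing:

```python
# Kurzweil's back-of-envelope estimate, reproduced step by step.
neurons = 100e9          # his figure: 100 billion neurons
connections_per = 1_000  # average connections per neuron
calcs_per_sec = 200      # claimed "calculations" per connection per second

total_connections = neurons * connections_per   # 100 trillion connections
brain_rate = total_connections * calcs_per_sec  # 2e16 "calculations"/second

# The doubling-every-twelve-months extrapolation from a 1997 chip:
chip_1997 = 2e9                       # 2 billion calc/s for $2,000 in 1997
doublings = 23                        # 1997 -> 2020
chip_2020 = chip_1997 * 2**doublings  # ~1.7e16, roughly matching the brain figure

print(f"brain: {brain_rate:.1e}  chip in 2020: {chip_2020:.1e}")
```

The multiplication is fine as arithmetic; the argument of this article is that the quantities being multiplied do not mean what the comparison needs them to mean.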

A Spike is not a FLOP

To thoroughly discredit this foolish notion once and for all, let’s begin by looking at what’s actually being claimed and what some of the unspoken assumptions inherent in this argument are.

Assumption 1: A spike is in some way comparable to a FLOP.

This is the most fundamental error any theorist can make when prognosticating about the computational power of a biological brain. A FLOP, a floating-point operation as technically defined by the Institute of Electrical and Electronics Engineers’ (IEEE) Standard for Floating-Point Arithmetic (IEEE 754), is a single mathematical operation, generally an arithmetic operation performed on floating-point (not integer) values. The power of computer processors is commonly measured in FLOPS, the number of such operations the processor can perform per second. For example, the computer I’m writing this article on has four processing cores, each operating at 2.4 GHz. A performance analysis program just told me my CPU is capable of reaching 38.4 GFLOPS. This is a modest figure compared to the peaks of computing power that now exist in the world’s supercomputer facilities.
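As an aside, that 38.4 GFLOPS figure is consistent with each core retiring four floating-point operations per clock cycle, presumably via SIMD. The four-per-cycle figure is my inference from the benchmark number, not something the tool reported:

```python
# Back-of-envelope check of the benchmark figure.
cores = 4
clock_hz = 2.4e9      # 2.4 GHz per core
flops_per_cycle = 4   # assumed SIMD throughput, inferred to match 38.4 GFLOPS

peak_flops = cores * clock_hz * flops_per_cycle
print(f"{peak_flops / 1e9:.1f} GFLOPS")  # 38.4 GFLOPS
```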

The crucial assumption in this line of reasoning is that an action potential (spike) fired by one neuron, upon branching into every axonal terminal, equals one calculation at each synapse. Even a cursory examination reveals this doesn’t make sense. A spike is the result of upstream network activity. For a given neuron to spike, a huge amount of contingent information must have flowed through the earlier stages of the network before arriving at the neuron in question, which will then fire or not depending on the state of the network prior to receiving its input. In a way, each spike is a report of that neuron’s tuned appraisal of all previous network activity. Rather than being an elemental calculation, the spike is a report of an enormous amount of previous calculation. Just how much, for any given spike, is unknown, but we can look at some simple examples to appreciate how much information the lonely spike must convey.

In the retina, light is collected by specialized cells called rods and cones. Cones detect both the wavelength and the presence of light, while rods detect only presence; rods are sensitive to low-light conditions, whereas cones require more light to be active. Each of these photoreceptors is connected to a ganglion cell through bipolar cells, which signal the state of the photoreceptor onto the ganglion cell’s dendritic arbor. The different types of retinal ganglion cell receive inputs from a few, many, or very many rod and cone cells. Each time one of these cells sends a spike, that single spike represents the collective activity of a few to hundreds of other cells: rods and cones detecting the presence and type of light; bipolar cells collecting and transforming this signal into something the rest of the retina can understand; horizontal and amacrine cells adding a layer of processing that amplifies differences in contrast and in the rate of change of illumination; and the ganglion cells themselves, which send their reports on to the thalamus for further processing before the signal reaches the very large vision-processing regions of the cortex. Even at this retinal level, one spike represents a very large number of elemental detection events.

The physiology of the cells themselves adds information to the content of the spikes. Complex interactions between the membrane and its receptors, the detailed morphology of the cytoskeleton, which soluble proteins exist in which parts of the cytoplasm, the enormous amount of information stored as DNA in the nucleus and written into RNA as needed to control how the cell responds to its environment: all of this goes into whether or not there will be a spike, and the spike represents all of it.

And this is at the most basic primary sensory level. What happens when the spike arrives in the cortex? It meets countless other spikes, constant and ongoing network activity that its presence modifies. As cells further and further downstream from our sensory spike receive their inputs and spike in turn, these represent in some way all the previous spikes. The information-dense spike from the ganglion cell now explodes into spikes laden with context and memory, an enormous amount of information that is embodied by the physical matter of the cells themselves.

What does a spot of light mean? If you are underground in a cave and want to escape, it means something much different than if you are in a desert looking for shade. There has to be memory, so one knows what this event means for the organism. One must remember being trapped in a cave or being stranded in a desert. This information-dense context, a galaxy of spikes in its own right, takes the single ganglion spike, representing hundreds of other cells, as a meaningful stimulus that has relevant implications for the organism.

One can make a counterargument that each individual neuron, however complex, only knows whether it receives a spike or not and does not need to know the previous state of the whole network to respond in a meaningful way. This is true, at least for a physical system like a brain. To simulate this in a computer, however, would require more information than an abstract spiking function applied at every synapse at some average rate. The computer would need to hold a physical model of each neuron and its activity constantly in memory, because each unit is constantly in use, modifying the function of a huge number of the network’s other units, or potentially all of them.

One could object that a digital computer’s FLOP also needs to know the results of prior FLOPs in order to make its calculation; after all, it operates in a serial fashion. But this is not the same. The results of each operation are stored in a central, addressed memory until they are needed again, so the history is kept in a memory bank separate from the processing units.

A neuron is both processor and memory. Its changeable physical structure is a record of its previous activity. Therefore, for a spike to be meaningful, it must take into account the complete biophysical configuration of the neuron that fired it. Since this is a dynamic and constantly changing process, a digital computer must model all this complex physics to decide to fire a spike, adding enormously to the computational complexity.
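The contrast can be caricatured in a few lines of code. The toy neuron below is in no way a biological model; it only illustrates the point that a stateless operation always maps the same inputs to the same output, while a unit whose structure changes with use does not:

```python
# Contrast: a FLOP is stateless (output depends only on its operands),
# while a cartoon neuron carries its history in its own structure.
def flop(a, b):
    return a + b  # same inputs, same output, every time

class ToyNeuron:
    """A caricature: the 'weight' is both memory and processing state,
    updated by every input the cell has ever received."""
    def __init__(self):
        self.weight = 1.0

    def receive(self, x):
        out = self.weight * x
        self.weight += 0.1 * x  # the structure changes with use
        return out

n = ToyNeuron()
print(flop(2, 3), flop(2, 3))          # 5 5: identical
print(n.receive(2.0), n.receive(2.0))  # 2.0 then 2.4: history matters
```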

We can now see that comparing a spike to a FLOP is a spurious argument, based on a hasty, superficial analogy between neural action potentials and serial processor operations. A CPU operation is an elemental calculation in which a binary number (in my case, 64 bits) is pulled from memory and subjected to a rule which transforms it into another 64-bit number and sends it back to memory. The FLOP needs only enough information to address the memory and hold the actual number. The spike depends on the entire biophysical configuration of the neuron as well as the entire history of the network and its activity that led to its being fired.

Therefore, there is good reason to doubt the claim that a spike is in any way equivalent to an elemental arithmetic calculation, a FLOP.

Assumption 2: The spike is the unitary, constant unit of information transmission in the brain.

There are a variety of subtle neurobiological facts that render this assumption unsound, but let’s take the most glaringly obvious first. The brain is not composed entirely of neurons. Astrocytes are a critical cell type, absolutely necessary for brain function. Many, if not most, synapses in the brain are abutted by an astrocytic process. These cells take up excess glutamate, protecting neurons from the toxic effects of too high a concentration of the neurotransmitter in the synapse. They also supply every neuron in the brain, reaching out to blood vessels to shuttle in energy molecules and remove waste. They communicate with each other through beautiful curtains of calcium waves that propagate through astrocytic gap junctions, modulating their duties to the neurons. There is even some evidence that neurons can communicate directly with astrocytes, and vice versa, through neurotransmitter release, although this is still not widely agreed upon. Astrocytes also underlie the BOLD signal detected in functional magnetic resonance imaging (fMRI) research: they bring oxygenated blood to highly active neurons and remove the deoxygenated blood that forms the basis of the signal. The dynamics of this process are constrained by the physiology of the astrocyte, adding a layer of complexity to the interpretation of fMRI results.

The number of astrocytes in a human brain is commonly said to outstrip the number of neurons by as much as an order of magnitude, ten astrocytes for every neuron, and the kind and quantity of information processing they add to the network is largely unknown at this point. Any putative comparison between a computer’s speed and the brain’s “speed” that does not take into account the contribution of the astrocytes is meaningless.

Next, neurons are constantly signaling to each other without spiking. A spike occurs only if the membrane potential reaches a threshold value at the initial segment of the axon. Neurons continuously send each other transmitters such as peptides, growth factors, steroid hormones, paracrine and autocrine signals and small molecules, all of which cause subthreshold changes in downstream neurons. These changes can manifest as an oscillation in the subthreshold potential, allowing temporal gating of a continuous input signal: only when the intrinsic oscillation reaches its peak coincident with an incoming signal will the neuron spike. This process takes energy and must be maintained, and it takes a great deal of information to describe such interactions. Subthreshold signaling can also lead to rearrangement of the internal environment of the cell: movement of proteins within the membrane, removal of proteins from the membrane, rearrangement of their relative positions, or reconfiguration of the organelles supporting this complex physiology. All of this constitutes detailed, information-dense activity that is represented by the presence of a spike.
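This kind of temporal gating can be sketched as a toy model, with entirely arbitrary numbers: a subthreshold drive produces a spike only when it arrives at the peak of the cell’s intrinsic oscillation.

```python
import math

def gated_response(input_times, period=0.1, drive=0.6, osc_amp=0.5, threshold=1.0):
    """Toy temporal gating: a subthreshold input (drive < threshold) elicits
    a spike only when it arrives near the peak of the cell's intrinsic
    membrane-potential oscillation. All parameters are arbitrary."""
    spikes = []
    for t in input_times:
        phase = (t % period) / period                  # position in the cycle, 0..1
        osc = osc_amp * math.cos(2 * math.pi * phase)  # oscillation peaks at phase 0
        if drive + osc >= threshold:                   # coincidence -> spike
            spikes.append(t)
    return spikes

print(gated_response([0.0, 0.05, 0.1, 0.15]))  # only the on-peak arrivals spike
```

The same input, delivered at a different moment, produces no spike at all; the timing of the intrinsic oscillation is itself information that a one-spike-equals-one-FLOP accounting ignores.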

Therefore, there are a number of fairly well understood processes within the brain that exchange information in the absence of spikes. Ignoring these just to take the low-hanging fruit of one spike = one FLOP is superficial and unrealistic. If we are really to be able to understand the brain in order to simulate it one day, we must above all be realistic.

Part 2 – A Detailed Look at a Non-Spike Information Process

“Whoever has will be given more, and he will have an abundance. Whoever does not have, even what he has will be taken from him.” – Matthew 13:12

The Matthew Principle holds just as true in the brain as it does on the internet or in a social network. The rich get richer and websites with huge numbers of links attract the most attention, adding even more links. An already potentiated synapse is more likely to become activated and therefore potentiate again. Though this is pleasant for the wealthy and excellent business for large websites as they gain material advantages from such uneven and self-perpetuating resource allocation, were this to happen in the brain, the whole delicate dance of synaptic transmission would come quickly crashing to a halt as out-of-control positive feedback pushes the neural tissue into a runaway, epileptic, excitotoxic death spiral. Indeed, this very basic theory for how the brain learns, from the days of neuropsychology pioneer Donald Hebb to today, holds within it the seeds of its own destruction. Yet, as you and I can attest from our very existence, sitting here now, reading or writing these words, this does not happen.
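The instability is easy to demonstrate with a minimal sketch of unconstrained Hebbian learning. This is a caricature on a linear unit, not a model of real tissue:

```python
# Unconstrained Hebbian learning: with a constantly co-active input
# pattern, every weight grows without bound (runaway positive feedback).
inputs = [1.0, 1.0, 1.0]       # a constantly co-active input pattern
weights = [0.1, 0.2, 0.3]      # initial synaptic strengths (arbitrary)
eta = 0.05                     # learning rate (arbitrary)

for _ in range(100):
    post = sum(w * x for w, x in zip(weights, inputs))  # postsynaptic activity
    # Hebb's rule, dw = eta * pre * post, with no bound or normalization
    weights = [w + eta * x * post for w, x in zip(weights, inputs)]

print(f"total synaptic strength after 100 steps: {sum(weights):.3g}")
```

After a hundred steps the total strength has grown by several orders of magnitude; a real neuron driven this way would long since have destroyed itself.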

Sure, the brain can get out of control and some people can experience periods of runaway excitation. Epilepsy is a serious, devastating disease that affects millions of humans and animals throughout the world. But this is not the normal state. Somehow the brain compensates for this mechanism of learning and memory in a way that undoes this destructive positive feedback. Yet this very compensation leads to difficult questions about the very nature of memory and consciousness in the brain, questions that must be recognized and addressed as we continue to develop our understanding of how the brain works.

The basic mechanism of long-term potentiation follows Hebb’s rule, at least schematically: those that fire together, wire together. When one neuron synapsing on a second sends a strong enough signal to activate the second neuron, the connection between them becomes stronger. The second neuron (we call it the post-synaptic neuron) is aware of its own firing through somatic mechanisms involving calcium concentration, mediated by L-type voltage-gated calcium channels. This signal, embodied by the calcium, is transduced through members of the calcium-calmodulin kinase family, leading to a variety of cellular metabolic processes. The binding of transmitters onto receptors in the particular synapses that were activated enough to cause this somatic depolarization causes local changes that increase the depolarization upon subsequent binding of the same amount of transmitter. In a later phase, substances produced by the metabolic processes triggered by increased somatic calcium are thought to be detected by the activated synapse and incorporated, leading to a longer-term strengthening of the target synapse.

This means a subsequent signal from the first neuron to the second at the same strength will be more likely to activate the second. The actual physiological mechanisms behind this are legion: the balance between the fluid number of AMPA receptors, the stately and discriminating NMDA receptors, the forest of post-synaptic active zone material, the dynamic cytoskeleton, dendritic polyribosome availability, synaptic mRNA and spine apparatus configurations, among others. All of these work together to underlie the potentiation, in ways we are learning about every day.

One might think the way to prevent positive feedback from getting out of control would be to depress the potentiated synapses, or at least depress other synapses onto the same neuron. A little thought reveals this is no answer because to depress the potentiated synapses would erase the differences gained through potentiation, erasing a memory or an association. This would defeat the brain’s purpose entirely. Depressing other synapses on the same neuron would also not help. You can only depress something to zero, but an out-of-control positive feedback loop can go to infinity or at least until the system destroys itself.

The method the brain seems to employ to prevent this destructive feedback is called synaptic homeostasis. Earlier, we said the cell is aware of its own activity, which it detects through calcium concentration in the soma. Beyond this, it can also gauge its overall level of activity: when somatic calcium concentration climbs too high, the cell registers excessive activity, triggering a negative feedback process that leads to the elimination of excitatory receptors, and even of whole dendritic spines, throughout the entire cell.

One would imagine that only the synapses that had strengthened too much would have their efficacy lowered, but this is not the case. Where a dendritic spine had 10 receptors before (a hypothetical example), after the downscaling it would have 5, while a neighboring spine that had 4 receptors now has 2. The entire cell thus scales its sensitivity to incoming information from the network up and down. However, once we have a mechanism that prevents synapses from becoming too strong despite their continued co-activation, we are left with a series of difficult conceptual problems.
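The hypothetical example above is exactly multiplicative scaling, which can be sketched in a few lines (illustrative numbers only):

```python
def scale_synapses(weights, target_total):
    """Multiplicative synaptic scaling: every synapse on the cell is
    multiplied by the same factor, so relative strengths (the putative
    memory trace) are preserved while total excitatory drive returns
    to a set point."""
    factor = target_total / sum(weights)
    return [w * factor for w in weights]

# The text's hypothetical spines: 10 receptors and 4 receptors.
spines = [10, 4]
scaled = scale_synapses(spines, target_total=7)  # cut total drive in half
print(scaled)  # [5.0, 2.0]; the 10:4 ratio survives as 5:2
```

The ratios survive, but the absolute strengths do not, and it is this loss of absolute information that drives the problems discussed next.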

Now we have as a constant the relative strengths of all the synapses on a given neuron. We say constant even though the strength of a given synapse can move up or down with activity, because the scaling takes the current relative strengths of all synapses and preserves them. We are now left with the input-to-output problem. If synapse X leads to activation pattern (spike train) X1, and synapse Y leads to spike train Y1, compensatory synaptic scaling might make pattern Y1 come to look like X1, destroying the ability of the neuron to report differences in its input through its output. Indeed, different regions of the brain probably encode information differently: the presence of a spike may signify something in one region, while in another the rate of spikes, or a change in the rate of some ongoing spike train, might be the meaningful event.

If there is a process that can destroy the unique mapping of some input onto some output, what does that mean for cognition and memory? If memory is this differential ratio between strong and weak synapses that the neuron scales itself to preserve, outputs should lose a lot of information encoded as the absolute strength of these connections. How can we think memory is merely the strength of some synapses?

Really understanding which processes within the brain are correlated with, lead to, or actually are cognition, memory, and consciousness is absolutely necessary before any fantasy of uploading or simulation can ever hope to be realized. Not only will we have to know how each of these conceivably separate processes underlies aspects of cognition or consciousness, or indeed whether they do at all; we must also understand how physical changes, at a level at least as fine as individual proteins, lead to changes in network activity and onward to higher-order mental abstractions including consciousness and self.

We cannot say the brain is a computer and try to reconcile poorly understood facts about it with an old-fashioned concept of a digital process. For example, one cannot in any meaningful way claim the brain is a processor that computes at X operations per second. People like Kurzweil have attempted this, and these phantasms become gaping holes in their arguments because intellectual weight is placed upon their being true, when in fact there is no way to evaluate the truth value of such propositions.

The human brain is said to have about 100 billion neurons, but this is far from certain: no one has ever counted, and it remains an estimate. There are far more cells of other kinds within the skull than neurons, astrocytes for example. Nor can one extrapolate from this number, multiply by the average number of synapses per cell, then multiply by the average rate of spiking and claim this is a digital process equal to that many events per second. A spike is not a FLOP. What one spike represents is almost always unknown. There are very few instances in which it can be claimed that a single spike leads to the detection of some external event. This is the case for some touch receptors in the skin, but even here a single spike represents a huge amount of information: pressure, location and duration. Once it arrives at the brain, it explodes into contextual meaning modified by the local environment and the entire history of the organism.

Thinking of how these complex metabolic and homeostatic processes work with the network is informative if we truly wish to understand how cognition, memory, and consciousness are supported by brain activity. Which components map to what? What parts of the brain map to which parts of the mind? Can we even think of cognition and mind like this? What is the role of the shape of the plasma membrane of a neuron? Where is the information stored? Is it the kind and density of molecules in a synapse, the configuration of receptors and scaffolding skeleton in a post-synaptic density? The shape and extent of organelles like the endoplasmic reticulum? The metabolic pathways connecting the genetic material in the nucleus to the environment? Any of it? All of it? Any hope of uploading or simulation is a mere mirage until we know which parts of the brain lead to which parts of the mind.


  1. Funny to read back this old article; everything seems to have changed now:

  2. Weisberg: The FLOP needs only enough information to address the memory and hold the actual number. The spike depends on the entire biophysical configuration of the neuron as well as the entire history of the network and its activity that led to its being fired.

    what i think: of course the spike is the product of something that is found at the end of a chain, but, at the end of the day, it has no mind of its own, and it is forced to either fire or not fire based on physical, simplifiable rules. To claim anything else while dealing with a (relatively simple) biological construct is to claim magic.
    is the water coming out of the faucet indicative of the entire structure of the waterways system?!
    I was not convinced.

    Weisberg: Therefore, there are a number of fairly well understood processes within the brain that exchange information in the absence of spikes. Ignoring these just to take the low-hanging fruit of one spike = one FLOP is superficial and unrealistic.

    what i think: okay, that “exchange of information” in the absence of spikes can still be codified by using flops. There is no “ghost in the machine”.
    I was not convinced.

    Even if we accept Weisberg’s “the brain is too complicated and we are never going to get it” argument, because the brain is 10 times, 1000 times, 1 billion times more complex than what we currently think it is — with the exponential growth in computer resolution, we will eventually get there in 4, 7, 30 additional years (respectively).

    The Blue Brain Project, running on IBM’s Blue Gene hardware, is simulating every single part of the brain. In about 7 years’ time they will have finished. If you build something accurately, from the ground up, unit by unit, while paying attention to all of the internal structures, you end up getting something either identical or very similar to what you started out to achieve. There’s no way around it, I think.

  3. I’ve never been an advocate of brain uploading. I prefer the Stem Cell nano-repair-augmentation route to immortality. Within 20 to 30 years computers will supersede human brain capability. An artificial brain does not need to be as complex as a human brain, but despite the lacking complexity the AI brain can be more powerful with greater capacity and capability. Consider a mechanical digger; it doesn’t have the complexity of a human arm but despite the simplicity of a mechanical digger, the digging arm is vastly stronger than a human arm based on flesh, DNA.

    We can create more powerful brains (AI) via a more efficient and simpler process than the construction of the human brain.

  4. I like my meaty, bone-bound prison. What’s wrong with it?

    Probably only a minority of humans will transfer themselves to computers, where they won’t have consciousness. I like the flesh. What’s wrong with it? You don’t like the bodies here? You’d prefer them to be androids and machines? You’re like one of those guys that has sex with cars, if that story on the web was true.

    I think the bodies are perfect.

    And this word “uploadism”: what the hell does that mean? A Bing search pulls up this article immediately, so you invented it. But did you even think about the word? Does it make sense?


    Is there a downloadism?

  5. The cochlear implant, the Dobelle eye, planar arrays that serve as retinas, cameras that pump signals directly to the visual cortex, and other direct inorganic implanted sensory stimulus generators have already crossed the line from “meat-bag” tissue to engineered material substrate — and the rest of the “meat-bag” brain cannot tell the difference.

    Why then should the chemical or electrical signals from an engineered-material-substrate hypothalamus be perceived as different from those originating from an original “meat-bag” hypothalamus, or pineal, anterior cingulate nucleus, etc.?

    Taking a reasonable, unemotional view — there is already proof of principle that [parts of] the brain can be converted from flesh to something other than flesh — in what is for all purposes transparent to the consciousness inhabiting said brain.

    Converting the rest of the “meat bag” to something other than flesh is simply an exercise in materials science, chemical engineering, and signal processing — maybe a decidedly non-trivial one — but achievable nonetheless.

    The more likely path towards a “non meat-bag” consciousness is a piecewise transition of the brain from flesh to other — rather than some instant upload.

    • I thought very much the same thing a year or so ago. Replace parts while cordoning the consciousness in its quarters until construction is complete, then move it into the new section while the other half of the mechanical mind is built and connected. Of course that is the simplest way of analogizing it. And as mentioned so many other times here, we just can’t define the mind, so an independent AI with its own consciousness can’t be conceived of in the serial fashion in which we think of all our other computers’ software. I mean, you try (in simplistic, non-programming English) and get this:
      STEP 1: Um… just make your own independent decisions, like a human would, computer.
      Doesn’t exactly work. And it’s a concern of mine that creating a conscious computer is impossible, a paradox (at least by current means), in the fact that commanding a machine to be free-willed contradicts itself. But that’s just my take.

  6. Seth,

    You don’t go far enough. It’s more than the entire history of the organism behind the current behavior of a single neuron, it’s the entire universe, maybe even endless multiverses connected to ours by undetected higher dimensions. Really? A node in a neural net has many inputs feeding it, arbitrarily many. The right combination, or combinations, of inbound data can trigger a pulse. The node needn’t store the history of the entire neural net, or the entire universe. It’s an event-driven device. It triggers, or it doesn’t, involving its few components, representable by a very small circuit diagram if it’s a hardware node, or a small algorithm if it’s a software node. Question: Is an integer operation a Floating Point Operation? I pretty much stopped reading after this display of expertise.

    • Wut? Are you arguing against the IEEE definition of a FLOP? Why feel the need to write that you didn’t read the whole article?

  7. I think a balanced view is called for. Many, if not most, of Kurzweil’s visions will probably come true, to the extent that we are talking about problems solvable by raw processing power. That appears on track, with Intel’s new 3D chip and nanocomputing structures, as well as other nanotech currently on the horizon. Many of us have been skeptical of the idea that the “ghost in the shell” (forgive me…) will ever be so easily captured. Kurzweil’s brain uploading argument misses the premise that there is a human soul in the equation. If you could copy a person’s brain completely, you would not have the same person, but rather a copy.. possibly someone who could appear to be the first person at least superficially, but a copy nonetheless. You don’t have to be a neuroscientist to see this. I believe we will continue to see tremendous advances in computer power, with corresponding tremendous advances in fields computing power affects. But I also believe copying a person with brain uploading, or brain uploading at all, will elude us.

  8. “For every expert there’s an equal and opposite expert.” – Arthur’s Ghost = Best part of this whole discussion. Frodo lives.

  9. Michio Kaku once made the remark that if one could connect all the supercomputers of the world, the integrated system would not have the intelligence or survival skills of a “retarded cockroach”.

    This statement is profoundly true. No inert contraption such as a computer can gain any self-awareness, or feelings, or intuition or deep insight. Binary coding is strictly factoid, sterile, lifeless and should be viewed as such. It is a different matter, however, if inert computer circuitry and software is added to an existing life-form, connecting with and supplementing a living neural network. And this is, indeed, being done today with alarming rates of success…the cyborg concept is being experimented with in unknown labs big-time. We know nothing about the progress.

    I am a proponent of leaving nature alone and do not at all agree with “adding” or altering anything that providence has given to me. To a very great extent, “science” has befooled us all into believing that technology is our savior and we must pursue its plunderings unto infinity. This is a human tragedy and travesty.

    One peek at the so-called “work” of geneticists in their fanatic curiosity is enough to loosen one’s bowels…mixing spider genes with those of a goat, as one published example. I dare say that experiments are being conducted that would not only raise the hair on our heads, but our hackles as well.

    Alas, this is an age of technical and scientific sterility, where humanness is being trampled over by imposturous imps in laboratories, where intellectual cleverness is hailed as the “messiah”, reducing the masses to slaves of gadgets which absorb all one’s precious life-force.

    • I do think the author effectively undermines Kurzweil’s estimate of the human brain’s information processing powers. It could be less, or it could be a great deal more than the “conservative estimate” that Kurzweil calculates, since our understanding of how the brain generates consciousness is still crude.
      But in Kurzweil’s defense, he has consistently stated that he believes we will not succeed in creating true AI until we successfully reverse engineer the human brain, so in that sense, the author and Kurzweil are in general agreement. Kurzweil has been pretty clear that merely throwing vast processing power at the problem is insufficient. Without an understanding of what gives rise to consciousness in the brain, all you’ll get is an increasingly impressive imitation of general AI, which will no doubt be quite useful, but which will nonetheless never have consciousness.
      His prediction for when we will be able to successfully reverse engineer the human brain and implement it is around 2029, and he supports that prediction with statistics on the progress of the study of the human brain… after applying the Law of Accelerating Returns, of course. His unshakable faith in the Law of Accelerating Returns is, however, wide open to criticism, especially as he applies it further and further afield.
      But since Kurzweil’s predicted timetable for true AI (along with the “uploading” potential that would imply) is not based on any particular FLOP target but on his estimate of the progress we’ll have made in reverse engineering the human brain, this article does constitute a bit of a straw man argument if the author’s intent is to debunk this prediction.
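      For what it’s worth, the raw arithmetic behind the estimate quoted at the top of the article is easy to reproduce. The figures below are Kurzweil’s own assumptions, not established facts about the brain:

```python
# Reproducing Kurzweil's back-of-the-envelope estimate
# (his assumptions, not established neuroscience).
neurons = 100e9            # ~100 billion neurons
connections_per = 1000     # assumed connections per neuron
calcs_per_sec = 200        # assumed "calculations" per connection per second

brain_cps = neurons * connections_per * calcs_per_sec
print(f"Brain estimate: {brain_cps:.0e} calc/s")       # 2e+16, i.e. 20 million billion

# Hardware side: $2,000 of neural chips at 2e9 calc/s in 1997,
# doubling every 12 months until 2020 (23 doublings).
hardware_cps = 2e9 * 2 ** 23
print(f"Hardware by 2020: {hardware_cps:.1e} calc/s")  # ~1.7e+16, roughly matching
```

      The numbers do check out as arithmetic; the whole dispute in this thread is about whether the premises (one “calculation” per connection, 200 per second) mean anything.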

  10. Thank you for the post, it was inspiring!

  11. Please do, I would love to see it. I really don’t think there is enough information to even begin to put a number on it.

  12. Who exactly makes assumptions 1 and 2? I actually haven’t heard of them ever being made. I didn’t base my calculations on such naive assumptions. Perhaps it’s time to publish my calculations 🙂

  13. The only problem with your argument is that you assume that the deepest cognitive level that matters (the deepest level that affects anything we care to duplicate) is governed by such details as spikes and (maybe) astrocytes.

    There is simply no argument available to support that assumption: it might turn out to be the case that all that detailed structure is just there to support “software” objects that have a granularity at the cortical (micro-)columnar scale, and if that were the case, we would already have much, much more processing power than we need to upload a complete human mind.

    To reinforce the point: nowhere in your article do you insist that the atomic details of the brain have to be duplicated before uploading becomes feasible. Why not? Well, you (like almost everyone) consider that the specific atoms are actually not relevant — that we could replace that particular functionality with something equivalent, and the upload would be functionally identical to the original. So, okay, we all agree that the cutoff point is somewhere ABOVE the atomic level.

    But… if you are going to impose a cutoff of that sort, you need some kind of justification for it. In the case of cutting off “somewhere above the atomic level” our justifications are fairly weak, but accepted by everyone. However, if you insist that there are reasons to suppose that a duplicate of the mind MUST involve a replication of the spike transmission machinery, I am entitled to ask “Why?”. You give no reason why. All you do is assume that it must be, and then base the rest of your argument on that assumption.

    Me, I make a different assumption (that all the important functionality resides at the level of the micro-columnar structure and above). That gives me the need to duplicate the functionality of somewhere in the region of 1 to 10 million units. You may disagree with that assumption, but it is not self-evident that it is any less valid than the spike-level assumption.

    • It may be true that the information stored at the atomic level (spin, charge, etc.) would be necessary to replicate if one wants to simulate consciousness and other functions of the brain. This seems unlikely to me, however, because at the molecular level, all of the atomic information governs the formation and behavior of each individual molecule, and each type of molecule will behave in its own unique, but specific, way when encountering other molecules. Thus it seems that one could simulate the properties of molecules themselves, without having to simulate the atoms that build them.

      However, I see little reason to work under the assumption that one could simulate the high-level functions of the brain without simulating the structure and interactions of neuron spikes and other events of similar scale. Of course this might be the case; perhaps even microcolumns are unnecessarily granular. But I see no reason to assume this. You state, “Me, I make a different assumption (that all the important functionality resides at the level of the micro-columnar structure and above).” I ask you the same question that you posed to Seth, that is: why? On what grounds do you make this assumption?

      • First, see my more detailed reply to Seth, just below, which partially answers your question.

        BUT … 🙂 … please also bear in mind that I suggested the “column” level partly as a way to make it clear that, even without any arguments marshaled in its favor, that claim is prima facie just as plausible as the claim that spikes (etc.) are the relevant level.

        Case in point: your comment above. You say “I see little reason to work under the assumption that one could simulate the high level functions of the brain without simulating the structure and interactions of neuron spikes and other events of similar scale”. If you look at that statement carefully, it is only a declaration… you are, in effect, saying that you see no reason to abandon the spikes-level assumption. But that just begs the question of what reason you might have had, in the beginning, to *adopt* spikes-and-neurons as the appropriate level.

        It sometimes seems to me that people have been repeating to one another the spikes-and-neurons doctrine (the idea that this is the functional level at which cognition takes place) for so long that they have forgotten where it came from, or what the arguments are in favor of it.

        In truth, that assumption is about as plausible as a naive computer user assuming that one set of logic gates in their CPU is dedicated to Microsoft Word, another set is dedicated to Firefox, and so on.

        • Beautifully to the point!

          From my own reply today to the related argument about consciousness: the only thing we can deduce from temperature changes and shifts in electrical potential between transistors in a computer CPU is that some areas seem to be more responsible for integer operations than others, while other areas seem to be activated during graphics processing. It is very hard to see that the temperature changes are a byproduct, and it is almost impossible to arrive this way at the idea that there is controlling software responsible for those changes of potential, software which might not even be dependent on the underlying hardware. It is harder still to add another layer of logic and suppose that the controlling software can actually change the wiring of the hardware itself, if all you study is changes in potential, structure, and temperature within the same logical construction implemented in an FPGA array.

    • I do disagree about the column as being the fundamental unit of neural activity. Animals exist that do not have columns.

      To me, the column seems like a very high level to assume to be basic. Please lay out some of your reasoning, I am curious.

      My main point in the article was basically that we cannot in any meaningful way try to claim the brain is like a digital computer and go further to commit the silliness of trying to quantify it in terms of a digital processor.

      • The argument in favor of (micro)columns as the main functional unit is a little involved, but it boils down to two main considerations.

        1) Damage can be sustained at the neural level, with little or no impairment to cognition. Clearly, it is difficult to draw the line above which the damage does cause significant impairment (where “significant” means that we might judge that person to have become a different person), but it does appear that quite dramatic amounts of damage can, under some circumstances, change the personality and cognitive capacity in only minor ways, or not at all. That seems to imply that the functionality needed to encode the “person” is not at the level of the spikes.

        2) From cognitive psychology we can look at what is needed to implement a comprehensive model of cognition, and (this is my own work here) it appears that the only way to capture concepts and their interactions properly is to map “concepts” onto some computational entities that are a good deal more sophisticated than single neurons, or small clusters. (Main example is the need to encode rapidly changing configurations of connections …. this is very difficult to do if the low-level neural hardware is what is representing concepts). This implies that larger units are required. The size of columns (perhaps microcolumns, where necessary) is roughly the right order of magnitude for those flexible structures. To really do justice to this line of argument would require a great deal more, but I am trying to convey the main idea.

        If you put these two ideas together, it seems very likely that something at about the column level is what corresponds to a “concept” (which does not just mean the nameable concepts, of course). If that were the case, then all the spikes, different kinds of neurons, astrocytes, etc etc could be seen as implementing that higher level architecture, using the only hardware that Nature could cook up in a hurry.

        You finish your comment by saying that “we cannot in any meaningful way try to claim the brain is like a digital computer” … and yet, to me, that is not supported by any argument that points us to where the functional level actually IS.

        I think what I have just given is at least the beginning of a positive argument in support of the “columns” claim, but as far as I can see all the arguments that treat spikes, or neurons, or other types of fine-grained stuff, as the basic functional units … well, there are not really any *positive* arguments, only a default appeal to how reasonable that assumption sounds. I see no reference to the psychology (where we observe some kind of functional units), telling us that, for example, there are specific reasons why the cognitive level units *must* be something like neurons. Most of the arguments that appear to support that claim are, in the end (I believe), not properly grounded.

        • So your argument against individual neurons/spikes is that large portions of the cortex can be removed without noticeable changes in cognitive ability? Why doesn’t this argument also indicate that cortical columns are unsuitable for representing the mind?

          • Good question. I guess the first answer involves looking at the quantities involved. The kind of damage that would not cause a substantial change to the “person” can take out many thousands, or even millions of neurons. By contrast, the number of columns taken out might be of the order 1 – 10.

            In that context, I would find it credible that the column is the basic functional unit that DOES need to be duplicated in order to make the upload the same person, but that if some of the columns were duplicated badly, the person would be slightly different. But if many thousands or millions of neurons could be lost in the duplication process without affecting the personality of the copy, surely this would imply that the neuron level is just too fine-grained to be considered necessary?
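            A toy calculation makes this granularity argument concrete. The unit counts below are assumptions taken from this discussion (roughly 10^11 neurons, and the 1-million-columns figure proposed above), used purely for orders of magnitude:

```python
# Order-of-magnitude sketch of the damage-tolerance argument
# (assumed figures from the discussion, not measured values).
total_neurons = 1e11   # rough neuron count for a human brain
column_units = 1e6     # assumed count of column-level functional units

neurons_per_unit = total_neurons / column_units
print(neurons_per_unit)            # 100000.0 neurons per column-level unit

# Losing a million neurons is a tiny fraction of the whole brain...
print(1e6 / total_neurons)         # 1e-05, i.e. 0.001% of all neurons
# ...and corresponds to only a handful of whole column-level units,
# consistent with little or no change to the "person".
print(1e6 / neurons_per_unit)      # 10.0 units' worth
```

            Under those assumptions, damage that wipes out millions of neurons can still leave all but a few dozen column-level units intact, which is the shape of the argument being made here.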

            There are more details that could be added to address your question, but I can only sketch them. In the Loosemore & Harley paper, we tried to explain the granularity of representation that seemed to be implied by the Quiroga et al. paper, and what we suggested was that columns (or something at that approximate level) could be functioning as hosts for moveable entities that represented active concepts. Now, if you buy that theory, then the implication would be that dormant concepts (long-term memory) are probably stored in a distributed form across a number of columns. That story would then imply that columns are the basic units for active concepts, but that damage would not cause a lot of trouble until large numbers of columns were hit.

            This does make me think that I need to expand on what we wrote in that paper, to explain the implications of this approach in more detail.

      • Just one quick clarification. When I suggest columns, I am referring to these as the locus of the current workspace (working memory, that is). The question of long-term storage is different, and for that I would assume there is maybe a thousand times as much capacity. However, this does not impact the argument much, for reasons that are too detailed to go into here.

        So all that it means, to say that the columns are the basic functional units, is that only about 1 million concepts are active at any one time, and the rest of the supporting machinery in the brain just serves to feed concepts in and out of that set as required. For more details of how this can be used to explain some real neuroscience data (Quiroga et al., in particular), see the Loosemore & Harley paper.

  14. Actually, recent research has shown that neurons can communicate using electromagnetic fields. Thus even those neurons that are not connected by synapses are able to, and do, communicate with each other. So the figure of 100 trillion connections is an underestimate, to an unknown degree.

  15. It’s true, this article is difficult! The Blue Brain project needs one hundred billion laptops? Can’t they give us a hundred (0.0000001%) for our cosmist school project? Haha, just kidding.

  16. Simulating neurons always reminded me of early inventors that attached mechanical bird wings to their arms and flapped them in an attempt to fly.

    Since we couldn’t make ourselves identical to birds, we instead had to identify the underlying principles of flight and apply them using other engineering methods.

    By the same notion, I think we need to avoid a similar mistake with AI.

    Identify the underlying principles that make it possible, then engineer it “our” way.

    • I totally agree with you. This way of thinking is wonderful.

  17. For every expert there’s an equal and opposite expert.

    • And that is the beauty of this. Maybe, thanks to all the brilliant minds thinking about this problem, we will achieve brain uploading. I understand Kurzweil’s approach to the matter: in my opinion, he isn’t saying that those 20 million billion neural-connection calculations per second are the answer to brain uploading. Perhaps his arguments lack the details necessary for this technology to be developed, but I believe the reason is that those scientific breakthroughs haven’t happened yet. Kurzweil believes this will be achieved in parallel with the growth of computing power, and that this growth will make it possible; I’m sure he doesn’t suppose that a FLOP is a spike. The thing is, to believe in the Singularity as Ray Kurzweil envisions it, one has to, ironically, develop some kind of faith in science. Analyzed purely through known knowledge, his assumptions fail. But analyzed through logic, and assuming several predictable breakthroughs in science, you can believe that within a twenty-year period such a thing may start to become possible. Great article!

  18. I find this to be a bad article.

    For starters, you define a neurosimulation strawman based upon Kurzweil’s ideas and thoughts, as if he were the leader in the field of neurosimulation, which he is not. Ever heard of the Blue Brain project and Henry Markram? They use approximately a laptop’s worth of computing power per neuron to achieve a very detailed simulation.
    The reason they use a very detailed neuron model is not just to get an accurate simulation, but because Henry Markram is not doing his neurosimulation projects with mind uploading as his main goal. He is aiming for simulated neurosurgery and pharmaceutical simulation; that is, he wants his simulation to represent an in vivo brain, which means he needs a physiologically accurate picture of neurons. You argue that physiological accuracy is required for simulating minds too, which you support by lining up a lot of neurophysiology terminology that probably put off a good few readers before they got halfway through the article. Now, of course, if you postulate that real mind simulations have to be ultra-precise, to the point that you need simulated oxygen to not die, then, well, yes, I guess you’re right, although I’d be happy enough with non-real mind simulations.

    If you’re going to have your mind simulated, why would you want to keep the physiological limitations? If my simulated person had a CMOS camera sensor as a retina that puts out a 1080p image 24 times a second, should I insist on transcribing this data stream to cones spiking during daytime and then rods spiking during nighttime (or in high-ISO, low-light conditions), while maintaining the layout of rods on the retina and thus ending up with reduced central vision? Or would it not be more sensible to find out what preprocessing is required to feed the data directly to the terminal synapses of the optic nerve? Should I also insist on simulating insoluble protein deposits, to enable me to suffer neurodegenerative disease, or would it be enough to keep the genetic heritage of daltonism and similar defects? Why not use ideal neurons and optimize away defects, speed up thought processes, and expand memory? If you’re not in a human body anymore, why insist on keeping it human, unless you’re a romantic and would like to remain the person you were in real life forever?

    As for your argument that the entire history of the organism would have to be remembered: do you actually live in the past, or are you, like everyone else, only existing right now, in this particular configuration, but able to remember thanks to the current state of your memory?

    • Markram doesn’t make claims like Kurzweil. This article seems targeted at the sympathizers of the latter. Also, if you think Blue Brain is actually a good enough representation of even a single cortical column to allow pharmaceutical testing or practice neurosurgery, you are too far gone on the Markram Kool-Aid. How many papers doing this have come from his group? Besides, simulating mouse or rat cortical columns does not have much to do with transferring consciousness into a computer.

      Since we don’t know what parts of the brain constitute consciousness or mind, we don’t know what must be simulated to achieve it.

  19. What about education? When you teach a kid your experience and your theories, it can be viewed as a sort of mind uploading, though at a much higher level of abstraction than the details of a synapse. And you will be able to do the same with an AGI (strong AI).



    Remember the myth of Dante as well? Eternal life, yet he is not happy in his perpetual quest for knowledge: it is a bargain with the devil… and he loses it all at once. Whether Dante ends up happy or unhappy, the devil won: Dante LOST HIS LIFE!

    But your weapon will not save you. Quite the contrary, it will hurl you into disaster; even if five or ten or twenty billion of you are pedaling, you cannot make it fly. It has been in freefall since the beginning, and this fall will soon be complete. Daniel Quinn, Ishmael (“With man gone, will there be hope for gorilla?”)

    We all met, we finished ALL KNOW of any power

    IF you continue in the same direction you die, all without exception; even the captains and oligarchs die.

    You must change direction, or flee

    You must share and take society back in hand, or flee.

    In a world where everything is done by machines, this is not the time to argue AGAIN AND AGAIN.

    You must change direction, or flee

    A large part of the transhumanists and (pseudo)scientists are fools (not me)… they want to seize the matter of the universe to create beautiful supercomputers where billions upon billions of humans can live in a simulated world… they do not even know what that means, and they are UNABLE TO DEFINE WHAT IT IS TO LIVE; they do not know how one lives. Nor do they see that humans are becoming useless, something politics and economics know very well: the human is as useless in his body as in his mind. If you continue, you will die: it is as simple as that. It will not happen as in this nightmarish, CRAZY transhumanist utopia.


    These little transhumanist and (pseudo)scientific Dantes are STRICTLY unable to define their economic or political utopias… and they are strictly unable to achieve them. If there is a change in society, it will be brutal, and it will not be carried out by gentle Care Bears.

    “The definition of insanity is doing the same thing over and over and over again and expecting different results” (Albert Einstein)… Ship of Fools… still more growth, full employment predicted, when we have destroyed everything, absolutely everything, human beings included… predicting further growth in computronium, when the reality of that growth means the total destruction of human beings, BODIES AND MINDS: ALL OF THEM.

    You must change direction, or flee

    TO ESCAPE is to take the tangent, now, at time T.
