Ghost in the Shell: Why Our Brains Will Never Live in the Matrix
When surveying the goals of transhumanists, I found it striking how heavily many of them favor conventional engineering. This seems inefficient and inelegant, since such engineering reproduces slowly, clumsily and imperfectly what biological systems have fine-tuned for eons, from nanobots (enzymes and miRNAs) to virtual reality (lucid dreaming). Recently, I was reading an article about memory chips (see Resources). In it, the primary researcher makes two statements that fall in the “not even wrong” category: “Brain cells are nothing but leaky bags of salt solution,” and “I don’t need a grand theory of the mind to fix what is essentially a signal-processing problem.”
And it came to me in a flash that many transhumanists are uncomfortable with biology and would rather bypass it altogether for two reasons, each exemplified by these sentences. The first is that biological systems are squishy — they exude blood, sweat and tears, which are deemed proper only for women and weaklings. The second is that, unlike silicon systems, biological software is inseparable from hardware. And therein lies the major stumbling block to personal immortality.
The analogy du siècle equates the human brain with a computer — a vast, complex one performing dizzying feats of parallel processing, but still a computer. However, that is incorrect for several crucial reasons that bear directly upon mind portability. A human is not born as a tabula rasa, but with a brain that’s already wired and functioning as a mind. Furthermore, the brain forms as the embryo develops. It cannot be inserted after the fact, like an engine in a car chassis or software programs in an empty computer box.
Theoretically speaking, how could we manage to live forever while remaining recognizably ourselves to us? One way is to ensure that the brain remains fully functional indefinitely. Another is to move the brain into a new and/or indestructible “container,” whether carbon, silicon, metal or a combination thereof. Not surprisingly, these notions have received extensive play in science fiction, from the messianic angst of The Matrix to Richard Morgan’s Takeshi Kovacs trilogy.
To give you the punch line up front, the first alternative may eventually become feasible but the second one is intrinsically impossible. Recall that a particular mind is an emergent property (an artifact, if you prefer the term) of its specific brain — nothing more, but also nothing less. Unless the transfer of a mind retains the brain, there will be no continuity of consciousness. Regardless of what the post-transfer identity may think, the original mind with its associated brain and body will still die — and be aware of the death process. Furthermore, the newly minted person/ality will start diverging from the original the moment it gains consciousness. This is an excellent way to leave a detailed memorial or a clone-like descendant, but not to become immortal.
What I just mentioned essentially takes care of all versions of mind uploading, if by uploading we mean recreation of an individual brain by physical transfer rather than a simulation that passes Searle’s Chinese room test. However, even if we ever attain the infinite technical and financial resources required to scan a brain/mind 1) non-destructively and 2) at a resolution that will indeed recreate the original, several additional obstacles still loom.
The act of placing a brain into another biological body, à la Mary Shelley’s Frankenstein, could arise as the endpoint extension of appropriating blood, sperm, ova, wombs or other organs in a heavily stratified society. Besides being de facto murder of the original occupant, it would also require that the incoming brain be completely intact, and be able to rewire for all physical and mental functions. After electrochemical activity ceases in the brain, neuronal integrity deteriorates in a matter of seconds. The slightest delay in preserving the tissue seriously skews in vitro research results, which tells you how well this method would work in maintaining details of the original’s personality.
To recreate a brain/mind in silico, whether in a cyborg body or a computer frame, is equally problematic. Large portions of the brain process and interpret signals from the body and the environment. Without a body, these functions will flail around and can result in the brain… well, losing its mind. Without corrective “pingbacks” from the environment that are filtered by the body, the brain can easily misjudge to the point of hallucination, as seen in phenomena like phantom limb pain or fibromyalgia. Additionally, processing at light speed will probably result in madness, as everything will appear to happen simultaneously or will change order arbitrarily.
Finally, without context we may lose the ability for empathy, as is shown in Bacigalupi’s disturbing story “The People of Sand and Slag.” Empathy is as instrumental to high-order intelligence as it is to survival: without it, we are at best idiot savants, at worst psychotic killers. Of course, someone can argue that the entire universe can be recreated in VR. At that point, we’re in god territory… except that even if some of us manage to live the perfect Second Life, there’s still the danger of someone unplugging the computer or deleting the noomorphs. So there go the Star Trek transporters, there go the Battlestar Galactica Cylon resurrection tanks.
Let’s now discuss the possible: in situ replacement. Many people argue that replacing brain cells is not a threat to identity because we change cells rapidly and routinely during our lives — and that, in fact, this is imperative if we’re to remain capable of learning throughout our lifespan.
It’s true that our somatic cells recycle, each type on a slightly different timetable, but there are two prominent exceptions. The germ cells are one, which is why both genders — not just women — are progressively likelier to have children with congenital problems as they age. Our neurons are another. We’re born with as many of these as we’re ever going to have and we lose them steadily during our life. There is a tiny bit of novel neurogenesis in the olfactory system and the hippocampus, but the rest of our 100 billion microprocessors neither multiply nor divide. What changes are the neuronal processes (axons and dendrites) and their contacts with each other and with other cells (synapses).
These tiny processes make and unmake us as individuals. We are capable of learning as long as we live, though with decreasing ease and speed, because our axons and synapses are plastic as long as the neurons that generate them last. But although many functions of the brain are diffuse, they are organized in localized clusters (which can differ from person to person, sometimes radically). Removal of a large portion of a brain structure results in irreversible deficits, unless it happens in very early infancy. We know this from watching people go through transient or permanent personality and ability changes after head trauma, stroke, extensive brain surgery or during the agonizing process of various neurodegenerative diseases, dementia in particular.
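The plasticity described above can be caricatured in a toy Hebbian learning sketch (my own illustration, not anything from the article; real neurons are far richer than a weight matrix). The point it encodes: the neuron count stays fixed for life, and learning lives entirely in the changing synaptic strengths.

```python
import numpy as np

rng = np.random.default_rng(0)

n_neurons = 8  # fixed population: neurons neither multiply nor divide
weights = np.zeros((n_neurons, n_neurons))  # synapses, initially unformed

def hebbian_step(weights, activity, lr=0.1):
    """Hebb's rule caricature: strengthen the synapse between every
    pair of co-active neurons, leaving the neurons themselves alone."""
    return weights + lr * np.outer(activity, activity)

# Repeated "experience" reshapes the synaptic matrix, never the neurons.
for _ in range(100):
    activity = rng.integers(0, 2, n_neurons)  # which neurons fire together
    weights = hebbian_step(weights, activity)

# All that has changed is connection strength; the cells are the same.
```

In this cartoon, “who you are” is the weight matrix, which is why losing neurons (and the synapses they anchor) is irreversible while learning is not.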
However, intrepid immortaleers need not give up. There’s real hope on the horizon for renewing a brain and other body parts: embryonic stem cells (ESCs). Depending on the stage of isolation, ESCs are truly totipotent — something, incidentally, not true of adult stem cells, which can only differentiate into a small set of related cell types. If neuronal precursors can be introduced to the right spot and coaxed to survive, differentiate and form synapses, we will gain the ability to extend the lifespan of a brain and its mind.
It will take an enormous amount of fine-tuning to induce ESCs to do the right thing. Each step that I casually listed in the previous sentence (localized introduction, persistence, differentiation, synaptogenesis) is still barely achievable in the lab with isolated cell cultures, let alone the brain of a living human. Primary neurons live about three weeks in the dish, even though they are fed better than most children in developing countries — and if cultured as precursors, they never attain full differentiation. The ordeals of Christopher Reeve and Stephen Hawking illustrate how hard it is to solve even “simple” problems of either grey or white brain matter.
The technical hurdles will eventually be solved. A larger obstacle is that each round of ESC replacement will have to be very slow and small-scale, to fulfill the requirement of continuous consciousness and guarantee the recreation of pre-existing neuronal and synaptic networks. As a result, renewal of large brain swaths will require such a lengthy lifespan that the replacements may never catch up. Not surprisingly, the efforts in this direction have begun with such neurodegenerative diseases as Parkinson’s, whose causes are not only well defined but also highly localized: the dopaminergic neurons in the substantia nigra.
Renewing the hippocampus or cortex of an Alzheimer’s sufferer is several orders of magnitude more complicated, and in stark contrast to the “black box” assumption of the memory chip researcher, we will need to know exactly what and where to repair. To go through the literally mind-altering feats shown in Whedon’s Dollhouse would be the brain equivalent of insect metamorphosis. It would take a very long time — and the person undergoing the procedure would resemble Terri Schiavo at best, if not the interior of a pupating larva.
Dollhouse gets one fact right: if such rewiring is too extensive or too fast, the person will have no memory of their prior life, desirable or otherwise. But as is typical in Hollywood science (an oxymoron, but we’ll let it stand), it gets a more crucial fact wrong: such a person is unlikely to function like a fully aware human or even a physically well-coordinated one for a significant length of time — because her brain pathways will need to be validated by physical and mental feedback before they stabilize. Many people never recover full physical or mental capacity after prolonged periods of anesthesia. Having brain replacement would rank way higher in the trauma scale.
The most common ecological, social and ethical argument against individual quasi-eternal life is that the resulting overcrowding will mean certain and unpleasant death by other means unless we are able to access extraterrestrial resources. Also, those who visualize infinite lifespan invariably think of it in connection with themselves and those whom they like — choosing to ignore that others will also be around forever, from genocidal maniacs to cult followers, to say nothing of annoying in-laws or predatory bosses. At the same time, long lifespan will almost certainly be a requirement for long-term crewed space expeditions, although such longevity will have to be augmented by sophisticated molecular repair of somatic and germ mutations caused by cosmic radiation. So if we want eternal life, we had better first have the Elysian fields and chariots of the gods that go with it.
This is my understanding of the human race.
The anti-singularity argument based on the supposed impossibility of duplicating the parallel processing mode of biological brains will be invalid with the appearance of quantum computing within a few years. The additional advent of ambient-temperature superconductors means, imho, human mind boosts to the virtual cloud within 50 years.
Does not count unless the person’s consciousness is present and aware during and after transfer. Without that “continuity qualified by awareness and consciousness during transfer”, only a facsimile results.
Direct replacement/re-colonisation of an individual’s cells may be possible, but the loss of the non-quantifiable and unrecordable soul (unless such technology exists but is not yet released) is another matter, entirely neglected.
While satellite neurotech is already present, the question of material, neuron-based thought as opposed to the spiritual body is the other important matter that most ‘non-psychic’ individuals fail to see.
The last thing we need is perfected nanotech beings who were replaced neuron by neuron or cell by cell, only to die or go insane from loss of their souls and later be resurrected in the form of digital automatons without spirit. If there already is such ‘soul-holding’ technology, return the souls to their rightful owners rather than holding them in waiting for their deaths; it is too cruel.
Honest treatment of the work on this before the rolling out of test cases, PLEASE. Also INFORMED CONSENT. Where is the space for TRUE human freedom then? Educate and release the information on the nature of reality via official channels, like in Social Studies, and BE HONEST about governing your fellow man.
Any response will be appreciated on these above concerns and those on the link below:
Various Sites On Mental Autonomy Or the Electronic Infringement Thereof
http://www.facebook.com/topic.php?uid=318515515322&topic=15792
I love all this stuff, but I agree an upload wouldn’t be you, only a copy. And how that disproves a soul is simply pathetic, as it’s a copy and not a human, so soul or no soul hasn’t been proven at all. Twins come from the same egg and they’re still two separate people, with or without a soul.
I love this whole idiot’s argument of god or no god. It’s like saying evolution disproves a creator, when in fact it does no such thing; it only makes it look like Earth and its life are an evolving experiment or game, like ourselves playing a Sims-type game on a computer. Who’s to say we aren’t all software? LOL…
I like the idea that we couldn’t live in a Matrix-like setup. WHO’S TO SAY WE ARE NOT ALREADY?
Prove it. Just like proving the Big Bang: a theory isn’t proof.
This is an awesome article. I’ve always viewed the possibility of “uploading” a mind to a machine with skepticism. And I agree that the “original” you will still experience old age and death — the one uploaded will be but a copy, not the “real” you.
On the other hand, if such an experiment were ever to be performed successfully, I’d be very interested to hear what Christians and other religious people would say to this — since it would deny the existence of a soul, and confirm that the human mind is essentially a product of the (physical) human brain, not some mystical and immortal “soul”.
As someone who has done some advanced work in neuroscience and computer science, I have some good news and bad news.
The science that still has to be done for an upload scenario is humongous, not to mention the engineering work, which is almost as bad.
The good news is that it can be done, but it will be very difficult, expensive and long. Also, the supercomputers of around 2050 will be a real big help. And nothing in biology or fundamental science makes it impossible, so it is a game of practicality.
“Without a body, these functions will flail around and can result in the brain… well, losing its mind.”
If the brain problem has been solved, designing ad-hoc body models will be a WALK IN THE PARK.
“Finally, without context we may lose the ability for empathy”
As we are doing a direct replica here, that empathy lies somewhere in the neural structure, which our new intern hopefully included… Obviously the environment should be familiar and comfortable.
Part of the brain is feedback from the body. As for empathy, it’s not coded in the neurons; it arises from interactions with the environment (broadly defined).
Seriously? Why are there smart people who STILL insist mind-uploading isn’t possible, because “it’d only be a copy”… -_-
Advanced enough technology CAN overcome that concern. There is no doubt in my mind that advanced enough nanobots could accomplish the goal, and the uploaded mind would NOT be a copy at all! The solution is to accomplish a very intricate, live-networked transfer of the “network activity” of the brain: by transferring, gradually, not all at once, the functions of individual brain cells, one by one if need be, while still keeping them flawlessly linked to and communicating with all their neighboring connected brain cells.
I’ve written a few posts on the process, and the only real leap of faith is whether we can ever create advanced enough molecular nanotech, to accomplish the goal.
Here are my posts on the possible process:
The process would be a gradual replacement, along the lines of cell by cell, just as the body naturally replaces cells. But that’s only the first step…
This gradual replacement method would most likely preserve the original consciousness and experience-line of that person. There would be no sudden moment of transfer or copying, and the vast majority of cell-to-cell communication would continue completely unharmed. To the person experiencing it, they would simply be living daily life, although with eventual cognitive enhancements. But even at this point, it is not yet an “upload”: a physical brain still exists. So after this step is done…
After the replacement process, each nano-brain-cell would then transfer its function to a virtually run emulation of itself, still kept in continual communication with all other “cells” it is linked to. This too would have to be a gradual process, done in small numbers at a time, but likely much faster than the first process.
By preserving communication with all neighboring “cells”, and between the physical and virtual, using a high speed network communication of some sort, the “uploadee” would be unharmed.
In order to upload a person’s mind, in a way that the original consciousness will experience first hand, you first need a simulation. A precise simulation of every tiniest detail, likely almost on the molecular scale or smaller.
You will also likely need true Nanobots, ridiculously advanced, custom made to be able to function as true brain cells, behave as such, and communicate with all the neighbors a real brain cell can. As well, they will need to have far more advanced features, possibly such as wireless communication, and an advanced ability to manipulate that communication and the features of their surrounding network.
Now I’m just guessing based on my loose knowledge of these things; more or less may be necessary… But I’d wager a process along these lines would need to take place:
Keep in mind, this would be a gradual process, not an instant one.
Basically, almost cell for cell, the nanobots would have to couple with existing brain cells, monitoring, mimicking, and gradually taking on the functions/responsibilities of each cell, AS WELL AS taking on and maintaining all of the communications of each cell to their neighboring cells. As each original brain cell is taken over, killed off by its respective nanobot equivalent, its responsibilities and communications are flawlessly maintained by these nanobots. Overall communications, and the consciousness-result of the brain’s “network”, remain unharmed. Your brain cells die all the time, and are even replaced by new ones(?), so this isn’t much of a stretch.
During this gradual process, to the remaining original brain cells it would be as if nothing is happening, business as usual. And the person experiencing this wouldn’t likely notice either, ’cept maybe for some cognitive enhancements, or possibly a headache. Heh.
Eventually the person would be carrying around a mass of completely converted nanobot-brain in their skull, but not for long. Things could end here; this result alone would give you amazing enhancements and a far more durable, long-living brain. But we’re talking about how to upload a mind out of the meatbody completely. 😉
It is at this point that an advanced form of wireless networking would likely be necessary. These nanobots now have a complete map of your brain and all its communications. They send this information to a simulated computer construct of some sort, creating a digital copy of you and your brain, not yet activated. As of now it is just a copy.
Next, each nanobot-brain-cell begins wirelessly communicating with its computer-simulated counterpart, and all its uploaded neighbors as well. This process is not unlike the first process: all existing communication is completely maintained, and kept in constant communication from physical to simulated. One by one, each physically located nanobot-brain-cell’s functions and communications are transferred to their digitally simulated counterparts, and again, back-and-forth communication is maintained between simulated and real neighbors as each transfer is made. One fully functional brain exists throughout the process, just not all in the same physical location. But from the point of view of the brain as a whole, and the conscious person, not much is happening, other than a scene change.
After this point, you now have a fully uploaded conscious mind! And it is not a copy, the original person is experiencing it all, and the original has not died!
All external senses, sight, sound, touch, etc, are all being simulated to the uploaded brain, so that they feel normal.
Of course, this means a full simulated brain, cell for cell, would need to be constantly simulated… Maybe even with multiple backups of each cell, also kept in communication, ready at a moment’s notice to have functions offloaded to them in case of emergencies, if something like a physical hardware issue somewhere occurs. This all takes incredible computer hardware, management software and safeguards, and insane bandwidth.
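The gradual, communication-preserving replacement sketched in the comment above can be caricatured as a graph operation: swap in one node at a time, copying its connections exactly, and a global measure of the network’s behaviour never jumps. Here is a minimal sketch under that generous assumption; the `replace_node` helper and the weight-sum “behaviour” are illustrative inventions, not a real protocol.

```python
# Toy model: a "brain" as a weighted directed graph whose observable
# behaviour is some fixed function of its wiring. Replacing nodes one
# at a time, while copying each node's connections verbatim, leaves
# that behaviour unchanged at every step. Purely illustrative.

def network_output(graph):
    # Stand-in for "global behaviour": the sum of all connection weights.
    return sum(w for nbrs in graph.values() for w in nbrs.values())

def replace_node(graph, node):
    # Install a "nanobot" copy that inherits every connection verbatim.
    replacement = f"nano_{node}"
    graph[replacement] = dict(graph[node])       # copy outgoing edges
    for nbrs in graph.values():
        if node in nbrs:
            nbrs[replacement] = nbrs.pop(node)   # redirect incoming edges
    del graph[node]                              # the original cell "dies"

brain = {
    "a": {"b": 0.5, "c": 1.5},
    "b": {"c": 2.0},
    "c": {"a": 1.0},
}

before = network_output(brain)
for node in ["a", "b", "c"]:                 # gradual: one cell at a time
    replace_node(brain, node)
    assert network_output(brain) == before   # no functional discontinuity

print(sorted(brain))  # ['nano_a', 'nano_b', 'nano_c']
```

The sketch quietly assumes the crux of the whole debate: that a replacement part really is functionally identical to the cell it displaces, which is precisely what the article argues cannot be taken for granted.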
Your post is too silly to read as fact, too boring to read as science fiction and too long either way.
The original author brings up some good points worth reading.
So, shut up woman and get on your horse.
Now I know what Aibo (Robodog) was about, and what all that ‘furry’ culture is about. Animals of limited intelligence are being ‘soul’-transferred already. Probably in some hidden lab, animal intelligences and animal souls are already powering someone’s nanotech-brained sex toy. The issue is whether ‘kidnapping’ of human intelligences and souls is being carried out, and how long it has been happening. Try an HONEST and spiritually fair consideration of land and wealth distribution first:
Utopia – Capitalism with Socialist Caps on Personal Wealth
http://www.facebook.com/group.php?gid=36665503866
…and make sure that INFORMED CONSENT and the right to choose traditional ‘generational’ lifestyles rather than immortality continue to exist as a CONSCIOUS option. Nice to know we are so advanced; NOT NICE that everyone is being kept out of the loop and even preyed upon.
Stop it and just lift humanity from its suffering; all that tech is being abused and not put to good use. Stop playing god, and drop that god complex too. It’s like having a gun and shooting defenceless wildlife that have as much right to the world as you. Intelligence, if limited by neural structure, could be equalled for any animal.
The value of their soul, if already measurable, could be far greater while simply in transition in another form. Give an intelligence-enhanced animal reason to drop its instincts; give mankind an environment and social paradigm to foster sharing and sympathy toward his fellow man.
Three comments:
1) I agree that AI is much easier than uploading
2) Uploading is theoretically possible. Athena argues that uploading can only produce a “fuzzy copy.” We are in fact fuzzy copies of the people we used to be.
Physical continuity is not what people care about; it’s the continuation of someone who acts, thinks, and remembers the things we do. In fact, we wouldn’t know if we were more similar to our past selves than merely sharing those things. This issue, I think, shares a logical structure with the P-zombie argument, which similarly proposes an extra property (p-con) that does no explanatory work.
To put it another way: the mind is a functional/informational phenomenon. It might be hard to reproduce the one in our brains, but it’s not impossible. If that were done, it’d be another you. There’s no paradox in there being two of you, both continuations of your identity.
3) VR need not be navel-gazing if it’s shared and looks out on reality (see Egan’s Diaspora for a relatively thorough VR think-through). And we’d have a huge advantage in getting to the stars if we shed our bulky meat-bodies. Unfortunately, I’m betting that we’ll only be able to do that with the help of a self-improving AI, something like Yudkowsky’s Transition Guide:
http://www.singinst.org/upload/CFAI//guidesysop.html
Actually, I never used the word “fuzzy” because that’s irrelevant. Nor did I say it would create a paradox, because it would not. Uploading could be as “sharp” as you can possibly achieve, but it would still not be the original. This is obviously something you agree with, so it’s unclear what you’re arguing against — certainly not the conclusions of my article.
VR is navel-gazing insofar as it encourages activities that prevent action in real life, where they would make a real difference. Planting real trees and real kisses is far better than doing so in Second Life. Besides how real versus ersatz actions affect humans, the immense production costs and heat output of VR servers worsen anthropogenic climate problems, whereas planting real trees ameliorates them. And so forth for most ersatz activities. Think of the humans in WALL-E; that’s the best extrapolation I’ve seen.
I am not sure the good Professor is correct in the analysis that this supertask of uploading is manifestly impossible. I do sense that many transhumanists gloss over the difficulties that exist. As for the feedback system causing the uploaded person to “lose their mind”, I would suggest that the designers of the network would provide brain/mind emulations that supply the brain or mind with the feedback it needs to function properly. It could be autonomic imitation, sleep simulation, digestion, breathing: whatever the individual would need to function as a living body.
How difficult is this? I will say: very. But progress may be made if the newly formulated study of quantum computing proves valid. If quantum computing proves itself, we will have a ferocious potential at our disposal, something truly exponential in its crunching capacity. This would seemingly do the trick, if this ‘trick’ truly proves to be something we really will desire.
I suggest you read this before you make bold statements about the nature of self and personal identity:
http://www.scribd.com/doc/26828344/A-casual-analysis-of-Personal-Identity
I agree that contiguous identity is a fundamental problem with mind uploading, unless a way can be found to download back into the brain, but there are several problems with the article:
“Besides being de facto murder of the original occupant, it would also require that the incoming brain be completely intact, and be able to rewire for all physical and mental functions.”
Who says there must be an “original occupant”? I don’t see how it is fundamentally impossible to inhibit the growth of the brain while growing the rest of the body. And while I do see the need to rewire for physical functions (something that doesn’t strike me as fundamentally impossible), I don’t see why this same obstacle should involve mental functions, like reasoning. Are we to suppose that every mental faculty is inextricably joined to our various limbs and organs? Where’s the evidence for that? And are we to imagine that keeping the brain alive and intact during the transplant is also impossible?
Also: “Without corrective “pingbacks” from the environment that are filtered by the body, the brain can easily misjudge to the point of hallucination, as seen in phenomena like phantom limb pain or fibromyalgia.”
I don’t see why accomplishing this is fundamentally impossible either. Nature has shown that reconstructing neural pathways is possible. I understand the delicacy of the brain itself, but surely it is possible to graft the outlying circuitry of the spinal column to a host brain.
In summary, I think the article takes impossibility altogether too lightly. Why is it so common for scientists to look at the things nature has already done and say, “impossible!”? History has taught us that such assumptions are usually wrong.
I’m a bit surprised. Someone who knows the term “transhumanism” thinks that “it would be just a copy, ‘real you’ would still remain with your original brain”. I thought it was basics of basics in this field to understand what ‘self’ is and how it reacts to copying or deleting.
Other than that misconception, it was weird how you bash attempts to do something more than just reverse-engineer the solutions our blind watchmaker has planted on us. Sure, there’s a lot to be gained from studying the brain, but biological stuff simply lacks programmability. Evolution works by adding an insane net of hacks on top of each other, and even reverse-engineering that is insanely difficult. Trying to alter the workings of that net is far beyond our current reach. As those engineering solutions, in spite of being less elegant, actually manage to promise some sort of function within the foreseeable future, there really isn’t even any sort of competition. If you want functionality, there is only one choice.
Also, claiming that immortality requires us to move to space is pure speculation at best, but you presented it as some sort of fact. It almost sounded deathist-ish, as in “people should die to make way for new people”.
Also, I got the impression that you claimed that planting brains to a new body would require murdering someone. I’d guess I’m misunderstanding something here though, but I’m not sure what.
No you are not. Athena has repeatedly bashed most transhumanist ideas in this article and others. In her opinion the vast majority of transhumanists are crazy nutcases who she wishes would disappear so that “real” transhumanists could get on with the important stuff, like proving that life extension is impossible, human enhancement is impossible, and that the only hope for humanity is to colonize space asap.
She has some very strong opinions, however, her delivery of those opinions as “facts” and her consistently negative views against most transhumanist ideals tend to make me discount most of what she has to say. So far I have yet to see any commentary from her which has not been negative.
As I said to another “don’t mess up my religion with facts” whiner in the thread: Web Flatunauts and Electronic Tribbles. Or, in Jim Oberg’s immortal words, “Keeping an open mind is a virtue, but not so open that your brains fall out.”
And two illustrations of Valkyrie Ice’s arguments. Someone call a waaahmbulance!
Here, I’ll respond at your level.
My my. A big-name, supposedly intellectual professor, and this is the best refutation you can come up with to my pointing out that you dismiss anything you personally do not believe in with insults and derision, and treat anyone you feel is intellectually not in your peer group as if they are “mindless cooing pigeons”?
I’m crushed. I could have gotten that level of response from a five year old. At least Marvin Minsky had the grace to critique my writing style, even though he also refused to answer my questions.
But that does indeed illustrate quite well how little current academics actually welcome questioning minds. Considering your reaction on the web, I do have to wonder how many of your students you’ve flunked for doing the duty of all scientists, and asking questions.
So sorry you’ve had your feelings hurt by my refusal to accept your opinions as the edicts of god, but I’m not one of your students and have no need to please you to pass your class.
Oh my, Athena, you really embarrass yourself. Who is the whiner here!?!
The sense of self would seem to include the awareness of biological death as its ultimate end.
Assuming that somehow the rest of the persona is maintained through the replacement processes, how will people respond to this new reality? Perhaps I am betraying a lack of insight by asking this question, but it is asked in good faith and all due humility.
This whole discussion is ludicrous, like cavemen discussing the future possibilities of the human race, envisioned entirely in terms of whether stone weapons can ever solve the cave-bear problem. We have literally no idea whatsoever what the lives of our descendants will be like in 10,000 years — not what their problems will be, nor what their capabilities will be.
Yet another web flatunaut! How rare… not. More on the phenomenon here:
Web Flatunauts and Electronic Tribbles
http://www.starshipreckless.com/blog/?p=988
I love that idea. Recently, I read about the bizarre hypnotic sleepwalking behavior of people using Ambien and was wondering about how various states of consciousness are fairly discontinuous. It is an interesting theory that "switching" awareness between the brain and a complex external system (electronic or something else we haven't yet conceived) through some sort of interface may be possible because of this discontinuity of awareness.
In that sense, William Gibson's "flatlining" scenes in 1984's Neuromancer make sense. Case's awareness is trapped in the Matrix, and therefore his brain is shut down without the key pattern to awaken the awareness in the organic system. It's a wild idea that consciousness may be an emergent pattern that transcends the elemental ground or base. Exciting, though.
If human consciousness is an emergent process, it’s fair to ask what the substrate is, and fair to wonder whether you can substitute that substrate. In other words: water is an emergent behaviour of oxygen and hydrogen, and oddly enough, if you replace one of those elements with something else you may get similar behaviour at the micro-level, but the macro-level entity will no longer be water.
Why is this relevant? Well, the very same question can be leveled at replacing, say, neurons with memristors. They may to all intents and purposes behave identically on the individual level, but that’s not to say that their emergent behaviours will not be different. We don’t really know how emergent properties work (an ensemble of twenty atoms that form a solid looks pretty much like one that forms a gas); reality, unlike fantasy, is substrate-dependent.
This problem is even worse with simulation and representation. The idea, for instance, that by simulating the individual actions of 100 billion neurons, you would get the same emergent result as the real physical system is just ridiculous. By definition, the simulation is not reality, and there are no known physical models whose equations can be solved from the subatomic level all the way up to the macro-level. No one knows if that sort of thing is even possible; a faithful simulation of the behaviour of 100 billion neurons would likely not result in a conscious being, any more than the faithful quantum-mechanical simulation of 100 trillion hydrogen and oxygen atoms is likely to simulate a drop of water. The laws that govern the drop of water seem to exist at the level of the drop of water, rather than at the level of the atoms. (If you doubt me, take up your argument with physicist Robert B. Laughlin, who gives a highly lucid account of this problem in his book A Different Universe.)
Physical theories have restricted domains of applicability, and so do models, both physical and software. The problem with a property like consciousness is that it is not a model; it is therefore physically unrestricted and cuts across physical domains, and unless there exist models that do the same, its properties cannot be imitated except one domain at a time. There is no indication that either physical or software models exist, or could be constructed, that cut across the number of domains that a physical object such as the brain inhabits.
Once again, don’t take my word for it. Read Laughlin.
“The idea, for instance, that by simulating the individual actions of 100 billion neurons, you would get the same emergent result as the real physical system is just ridiculous. . .
No one knows if that sort of thing is even possible; a faithful simulation of the behaviour of 100 billion neurons would likely not result in a conscious being, any more than the faithful quantum-mechanical simulation of 100 trillion hydrogen and oxygen atoms is likely to simulate a drop of water”
Is this really so ridiculous? Simulations certainly work, and they can work for any physical system, including interesting ones with emergent properties. From weather to the internals of a nuclear power plant, we can simulate any system given adequate theoretical knowledge and computing power. In fact, all of modern science and engineering is built on simulation.
Smaller scale simulations of biological neural networks exist in some fashion now and they certainly work – they can perform the same types of information processing as their biological equivalents – recognizing handwriting, or faces, and so on. The success of these small scale models proves that neurons and their networks are computable and simulatable – as probably all physical systems are (although that is beside the point).
So it seems reasonable that in the future large scale simulations of entire brains will exhibit the same properties as their biological equivalents. When this occurs, consciousness should eventually develop in the simulated brain, given an adequate simulation and proper time in a robotic or virtual embodiment. It's certainly a testable hypothesis, and it would be very odd indeed if it turned out to be impossible to create virtual consciousness. It's still a ways out due to the immense computational demands, but the first steps are already underway, such as the Blue Brain Project.
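As a concrete (and heavily simplified) illustration of the kind of small-scale neural simulation mentioned above, here is a toy sketch of my own, not code from Blue Brain or any project discussed in this thread: a single artificial neuron trained with the classic perceptron rule to compute logical AND.

```python
# A single artificial neuron (perceptron) learning logical AND.
# This is a deliberately minimal model: weighted sum, threshold,
# and the perceptron weight-update rule. It only illustrates that
# simple neuron models are computable; it says nothing about the
# fidelity needed to simulate biological networks.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Train weights and bias on (inputs, target) pairs."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            # Perceptron learning rule: nudge weights toward the target
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# AND truth table as training data
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
```

AND is linearly separable, so the perceptron convergence theorem guarantees this converges; whole-brain simulation, by contrast, is a problem of vastly greater scale and biological fidelity.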
Simulation and consciousness are distinct: you can have automata and programs that appear perfectly lifelike but are clearly non-sapient. Nevertheless, I agree that a sufficiently complex system may develop consciousness — though its thinking/feeling processes will be entirely different from ours, due to the differences in substrate (broadly defined).
This, incidentally, is a separate issue from individual immortality, which is the focus of the article. My point here is not that consciousness can never develop in alternative settings, but that we cannot use such settings to prolong our individual consciousnesses.
I’m still curious where you define substrate dependence. In your article you dismiss transfer to a non-biological substrate but believe gradual biological neural replacement could work. Why should there be any difference between that and gradual replacement of neural circuitry with equivalent nanobots?
There must be a level of detail at which the substrate no longer matters, and the entire functional essence of the system can be emulated in any other sufficient substrate. The question is just what level of detail is sufficient. It's already clear that even relatively simple artificial neural networks can perform the same types of learning and pattern recognition tasks as their biological inspirations. The neocortex and hippocampus are undoubtedly far more complex systems, but they are built out of the same basic building blocks.
Why would an emulation think or feel any different? For that to be true, there would have to be some aspect of thinking or feeling which is inherently not based on physical information processing and has some mystical component.
Depends what you mean by nanobots. If your definition includes the cellular machinery, those repair/replacement mechanisms are already operating and highly refined. However, if by nanobots you mean exclusively cogs of metal and/or silicon, no. Carbon neurons and silicon chips operate in fundamentally different ways across scales, which is why the brain=computer analogy is misleading. Nothing mystical of the “disembodied aura” kind about this.
But if the replacement nanobots are functionally equivalent, why would it matter? It's not clear to me whether you oppose the idea as being vastly infeasible or actually philosophically impossible (as in, even if the gradual replacement scenario is technically feasible in the far future, it would always result in some new or different form of consciousness).
Whether carbon neurons or silicon chips operate in the same fashion is irrelevant – any computer of sufficient speed and memory can simulate any other computational system of lesser complexity, and it doesn't matter whether it's made of silicon, carbon neurons, or carbon nanotubes – there is a fundamental embedding principle.
You are fully correct that singularity and uploading enthusiasts are often guilty of undervaluing biology. And I am at least personally skeptical about how quickly or completely nanotechnology can compete or even overtake biology. But even so, from a philosophical point of view, there isn’t any fundamental difference between the different forms of uploading, and it looks like at least destructive scanning and computer recreation could be possible this century.
The problem with a lot of h+ theory is how easily it skips across basic and fundamental problems of mind in order to simplify it for new theories. The assumption that mind is purely objective is an age-old stumbling block for many new theorists and not even close to being solved. This all leads to the Theseus paradox when the discussion turns to the transfer or upgrading of consciousness. Our state of 'being' (as far as I can tell) is created from a sense of continuity of experience and not just the sum of its parts. This sense of continuity arises from a hitherto unknown and highly complex process that is not yet understood, much less replicable.
The core issue here is knowing what consciousness is and how it is created (or how a conscious mind is created). That’s why this article has generated such a debate. Everything else is just technicalities.
If we know what causes consciousness (our feelings, emotions, etc.), then we win.
Why is continuity strictly necessary? If you really think about what happens when you sleep, you "discontinue" (i.e., the 'software' part of conscious you) every single night, as your brain switches over into fundamentally different processes that don't sustain your awareness at all, except later on when REM sleep kicks back in. In the morning, you restart, and piece together weird dreams from whatever random stimuli are left over.
“Software” doesn’t just mean what your brain processes when you’re aware that you’re conscious. Your brain is active during sleep, fainting and anesthesia. There’s no discontinuity. When brain activity ceases for more than a very short time interval, you lose continuity and neuronal integrity — and you’re dead de jure and de facto.
“dead de jure and de facto”
Honey what did I tell you about using big words that you don’t understand?
You just wrote “metaphorically and literally dead”: If you are literally dead, there is nothing metaphoric about it. If you are metaphorically dead, then you are Terri Schiavo (before they let her go).
This must be, like, the most basic form of being a pseudo-intellectual. If this is what the state of higher education is like (outside of having to be cheap and lower the bar for professor admission), I don't think funding is the biggest problem.
-Random passer-by and the author’s mom
To be more precise, during pre-REM sleep the brain is still up to 80% active; it never stops sorting and functioning. If it did, there would be no waking up. The 'rebooting' you experience is an illusion created by your senses, which have been disabled from conscious analysis, responding to the sudden mass input of data.
Just to be clear, I don't think Mr. Cannel's proposition would involve creating a clone or duplicate. As I understand it, from an interview with Hans Moravec, the idea would be more akin to gradual prosthetic replacement of the brain. Essentially, and there are other models, the idea would be that you take a living person's brain and gradually replace each part of it with an electronic prosthetic that perfectly replicates the function of that part of the brain or mind. Moravec talked about doing it practically neuron by neuron, so that you can fine-tune the adjustment so that the person experiences a perfect replication of their consciousness. As in brain surgery, the patient would remain conscious and aware so that the neurosurgeons could be assured that his consciousness was not affected. At the end of the procedure, there is no organic brain, but the consciousness would remain.
The question is whether there is anything inherent to the organism, to the meat, that cannot be replaced by an electronic or technological system. Also, there is the possibility of creating some sort of wetware that would allow a similar sort of transfer between brains. Like artificially grown neurons that could not only graft with a living brain to repair damage, but also be transferred to other bodies and bases.
Very theoretical, of course, and, again, it seems like at that point of advancement, artificial intelligences would be preferable to running human minds on some virtual reality system (as Vinge points out). Even if we transferred a human mind into a machine, it seems unlikely it would remain human for long.
Replacing the brain almost neuron by neuron? Are you serious? Do you have any idea how many neurons are in a human brain? You'd need to replace roughly 30 neurons per second, nonstop, for 100 years. I hope the subject and all those highly skilled surgeons etc weren't planning to do anything else with their lives.
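A quick back-of-envelope check of that replacement rate (my own arithmetic, assuming roughly 10^11 neurons; published estimates run from about 8.6 x 10^10 upward):

```python
# Rough estimate: neurons replaced per second if the whole brain
# is swapped out, one neuron at a time, over a 100-year procedure.
# The neuron count is an assumption; estimates vary.

NEURONS = 1e11                          # assumed total neuron count
SECONDS_PER_YEAR = 365.25 * 24 * 3600   # ~3.156e7 seconds per year
years = 100

rate = NEURONS / (years * SECONDS_PER_YEAR)
print(f"{rate:.0f} neurons per second")  # prints "32 neurons per second"
```

So even at the generous 100-year timescale, the procedure would have to run continuously at tens of neurons per second; shrink the schedule and the required rate grows in proportion.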
Nitpick to the original post: in fact, neurons can and probably do regenerate, just slowly. http://www.ncbi.nlm.nih.gov/bookshelf/br.fcgi?book=dbio&part=A2894#A2901
Possibly a double post (still haven't figured out this system). It seems likely that transferring consciousness from the brain to a computer, since the brain is an emergent biological system, would require computers with processing power and complexity far greater than the human brain. At that point, why bother? You'd basically be transferring a limited, handicapped system – the human personality – into a jungle of much more powerful beings.
Putting aside the fact that uploading would create a clone, you make an interesting point that’s relevant to the interactions between divergent intelligences, whatever their origin.
It's not clear that uploading would require a computer of far greater complexity than the brain – there is an argument that the most efficient route is to build specialized neural net hardware that maps much more directly onto the brain's circuitry, instead of general purpose processors. And it could happen first with computers less powerful than the brain, running slower than real time. Indeed, this is certainly the case for most large scale neural simulations today.
That being said, you may be right that uploading will turn out to be far more costly than super-human AI. Uploading requires huge advances in scanning technology on top of the computational advances, so it would appear that super-human AI will develop first. On the other hand, AIs will need to be raised and educated – trained, if you will – just like humans. So uploading could actually be a shortcut if the scanning advances quickly enough, because uploads will retain all their knowledge, whereas the first generation of AIs will start out as children – expensive long-term investments.
But in the end the economic, social, and political forces will shape these future technologies as much or more than the intrinsic technical obstacles.
Why transfer that 'limited, handicapped system' into the scary jungle? A shot at immortality! Besides that, it's likely that uploads will be able to expand their virtualized minds in myriad ways inaccessible to flesh and blood: expanded consciousness, faster consciousness, more neurons or connections, and direct control over mental states – just to name a few.
A well written and interesting article, even though I admit the title and principal point are what attracted me, for they are off the mark.
"Unless the transfer of a mind retains the brain, there will be no continuity of consciousness. Regardless of what the post-transfer identity may think, the original mind with its associated brain and body will still die – and be aware of the death process"
As several other commenters have pointed out, there are non-destructive uploading scenarios which singularly invalidate your primary point (uploading is impossible). Uploading could proceed by gradual replacement of individual neurons with nanotech equivalents which function identically. Eventually, the entire brain could be replaced, and the subject would have uploaded without forming a clone or killing the original. You seem to posit that consciousness somehow depends on your original, biological neurons, which is a rather strange philosophical conception. All current evidence points to the information patterns encoded in the synapses as the physical (albeit distributed) locus of personal conscious identity. If some process (nanotech, drug, whatever) permanently destroyed all the synaptic connections, but left the neurons perfectly intact, you would literally be erased. Dead. On the other hand, replacing the neurons with equivalents which maintain the exact synaptic information and processing wouldn't change anything.
If you are really hung up on the physical matter of the neurons, consider that they only exist as molecular patterns – the exact molecules changing over time. It's the encoded patterns which are important, and it's hard to see how they are not transferable.
You could argue that during the gradual upload scenario you gradually lose conscious identity, but there is no objective reason to believe this, and there would be no objective evidence to support it, as the person undergoing the gradual therapy could pass all objective tests of identity. To say that at some point they had instantly or gradually been replaced by a doppelgänger (conscious but somehow not the same person) or an unconscious philosophical zombie is unsupported nonsense.
Furthermore, our brains/minds change significantly over longer periods of time. We are not exactly the person we were as a five-year-old child, but we maintain a stream of conscious identity nonetheless, so change in and of itself is not an obstacle to uploading or personal identity.
Copying is certainly the more interesting philosophical problem: an upload could be copied, and from the perspective of an outside observer, the stream of conscious identity would fork. This is really difficult to intuitively grasp. But then again, so is quantum mechanics, and just because this is difficult to understand subjectively does not make it impossible, nor does it in any way raise doubts on the prospects for uploading, which is wholly unrelated to copying.
I won’t spend time debating you, since you’re not commenting on the article I wrote.
Athena: I am slightly disheartened by an omission in your last reply, to Tossrock. There seems to be some merit in his 'Ship of Theseus' method, though we are many generations of scientists away from accomplishing something so wonderfully subtle, if indeed it is ever possible on even a partial basis – as in 'at least part of who I am will live on.' Of the various perspectives offered on this page, this one seems the most relevant (along with the aforementioned axe handle and blade analogy, though plainly the axe is no longer the same tool except in the degree it has absorbed the essence of its owner) to the problem of continuity – which is the core of your essay's thesis. I agree wholeheartedly that even some of the more adept thinkers among popular immortality theorists and experimenters seem incapable of 'fessing up to that one critical point. Without continuity, there is no survival of the individual. One can play about with conjuring parallel existences or mathematical trickery, but the sad fact remains that we are a product of our bodies, not the maker of them. The medium most definitely IS the message in the case of our awareness, our being. To think otherwise is to wax religious, and that way lies only a fictional immortality.
If I have misunderstood, my apologies. But your centipede reference makes it seem otherwise. Plainly some 'centipedes' are capable of learning to regulate each of their numerous leg movements by means of conscious impetus – witness the profound, almost superhuman skill of many great performers of music, dance, martial arts, et cetera. Without very conscious rehearsal, the actions of such individuals would never manifest so splendidly. While such may be too much to ask of a centipede, it is not the case with us, is it? I type almost unconscious of the movements of my fingers. If I use discipline and think about every keystroke, surely my typing will slow to a crawl… but then with persistence it will regain speed until eventually achieving the same or perhaps even increased fluency as before, only now with my awareness keeping pace with every detailed movement and the slipping of tendons beneath skin and so on, at least insofar as I have nerves with which to sense. I'd not expect to monitor the movement of blood cells through tissues in relation to the typing, but doubt that's what was being proposed, and it seems your reaction is a bit of a step to the side of the argument.
Ah well, my main point in commenting is to thank you for this sort of clarity in approaching this subject, one so dear to the hearts of dreamers. I’d love immortality, have always been befuddled by those whose first impulse is ‘but wouldn’t you get bored?’ as that seems fatuous and depressing (really? you wouldn’t want an eternity to explore this amazing place, this life?), and hope people with your sort of focus and expertise are able to move us towards something closer to it while I am alive. Immortality is an inherently selfish topic. Oh sure, we want it for our kids… but even that smacks a little of self interest, as no one wants to outlive their child, it’s just too sad a thought to bear. Your words on the subject lend it rigor, help raise the subject above the usual arguments on both sides.
Thank you for the good words, Gerard! I think that, unless we come to terms with the fact that our bodies and brains are not just passive containers we inhabit, we won’t make significant concrete progress in helping them (us) last longer and, crucially, intact.
In my response to Tossrock I focused on two aspects: autonomous function control (which would overwhelm our executive functions, hence the centipede analogy) and “quantum microtubules”. Regarding the Ship of Theseus paradox, Heraclitus is best known for saying that you never wade into the same river twice, and in Japan all famous buildings are copies that get renewed every few decades — which explains why the Golden Pavilion looks so spiffy compared to, say, the Parthenon. It seems that as long as the image and function of a copy of an object is the same, people are content to keep the original label, especially if their cultural conditioning imbues such objects with an inalienable essence.
But notice that all these items are non-living. I did discuss slow replacement of neurons in the article. In truth, nobody yet knows if such replacement will retain the original consciousness. A priori, it seems that it should, IF (an ironclad if) the substitutes recapitulate the synaptic configuration of their predecessors. But we haven’t done it yet and won’t know until we do. And doing it in non-humans won’t suffice, since they don’t possess language to describe whether their memories and personalities are intact.
“But we haven’t done it yet and won’t know until we do. And doing it in non-humans won’t suffice, since they don’t possess language to describe whether their memories and personalities are intact.”
Then maybe we should do something about it, rather than sitting with our thumbs up our asses and whining, aka “ethicists” and “God lovers”.
Technical barriers aside, there’s your real problem! A child can’t learn if there’s a censor-happy parent nearby.
If we are aiming for immortality then, by definition, we must be planning for a whole lot of future. Given enough time, everything will happen. If we want to live forever then at some point we’re going to fall into a volcano, or a rancor is going to chew on us, or we’ll somehow end up in the North Atlantic ocean in our boxer shorts.
Given the fact that the longer we’re alive the greater the certainty that something will happen to us that will utterly destroy our fragile meat-based bodies, the only reasonable option is to replace them with something mechanical.
We’ll need either a body that can survive a trip through a sun, or some sort of distributed structure that will maintain our consciousness even if it loses some pieces. I don’t think either of those are practically achievable with wet-ware. The environmental and spatial conditions required to maintain biological systems are just too limited. Nothing organic can survive a howitzer to the face, so the only way to hedge against risk would be to spread out copies so that they can’t all die at once. That’s why populations tend to grow whenever they can; but the strategy only perpetuates species, not individuals.
It seems probable that the mechanical structures we come up with to perpetuate individuals indefinitely will end up being similar to biological structures in many ways. They will probably be incredibly subtle and interconnected, but they will still have to give us a second chance if we trip and stumble out of an airlock.
Organic bodies are indeed fragile, but so is every complex structure. Few inorganics can survive a howitzer blast either — and almost nothing beyond certain elementary particles can survive a trip through a star.
I agree that future engineering will have to rely increasingly more on biological structures/paradigms. They’re resilient, flexible — and have been refined for millions of years.
I suppose my point was less about how it’s a challenge no matter which way you approach it, and more about how the only way to actually meet the challenge is with the mechanical approach. Even ignoring every threat external to the body, given enough time a biological system would probably start mutating; nano-machines won’t have that problem; they’ll just need to be kept up to spec.
Anywho,
“Regardless of what the post-transfer identity may think, the original mind with its associated brain and body will still die…”
That would be pretty weird, wouldn’t it? Which copy do you think the original would end up in? I mean, if I went in to get my mind copied into a younger body, would I wake up in the younger body or would I still be in the old body? It seems like that problem could be easily solved by simply euthanizing the old body; just put it to sleep. One in, one out.
A better question, I think, is one of resolution. Let's say we scanned the brain and recreated it in a simulation. What level of resolution would it take to recreate the mind? Or at least, a mind? How much of what goes on in the brain is necessary and how much of it is just noise? If the brain were a JPEG, how much could we compress the raw information without losing the mind? We aren't even sure "where" consciousness is yet. For all we know it will fit on a flash drive.
“The act of placing a brain into another biological body …[would be]… de facto murder of the original occupant…”
Not necessarily. Theoretically a body should be a lot simpler to “grow” than a brain. It would be complicated, but we could just make a body, without a brain, and boot it up for the first time when our brain gets deposited in it. Or, people could save their embryonic stem cells and use them to grow a clone. The process could be “adjusted” (sabotaged) so that the clone is born a vegetable. If it’s never alive, taking its body isn’t murder.
“After electrochemical activity ceases in the brain, neuronal integrity deteriorates in a matter of seconds.”
So, some kind of capacitor is in order?
Why couldn’t we just install “T” junctions on every pipe going into the brain? Then we throw the switches all at the same time and the machines take over providing chemical and electrical signals. The brain won’t even know it’s not connected to the old body anymore. Could the body’s metabolic processes be slowed down far enough that the neurons don’t start to deteriorate right away?
“Without corrective “pingbacks” from the environment that are filtered by the body, the brain can easily misjudge to the point of hallucination…”
Sure, but that won’t necessarily result in destruction of the consciousness. People go through drug and sensory-deprivation fueled hallucinations all the time and come out the other side with nothing more than a crazy story. As long as the brain isn’t actually damaged, so the “I” is preserved in the imagination, it doesn’t really matter what crazy images get sent in. Perhaps only certain people would be able to undergo that process. . .ones with a strong sense of self or something.
“without context we may lose the ability for empathy…Empathy is as instrumental to high-order intelligence as it is to survival: without it, we are at best idiot savants, at worst psychotic killers.”
Without context? That sounds pretty flimsy.
At first it sounded like you said we couldn't have high-order intelligence without empathy, but now it sounds like you carefully skirted that idea and jumped to the idea that it would merely be undesirable. I get that empathy is necessary for successful teamwork, and teamwork is necessary for survival. Why do you think transferring a brain to another body, or simulating the mind, would strip empathy out of someone's personality?
“The technical hurdles will eventually be solved. A larger obstacle is that each round of ESC replacement will have to be very slow and small-scale, to fulfill the requirement of continuous consciousness and guarantee the recreation of pre-existing neuronal and synaptic networks.”
Well, at the moment the process would be slow. Maybe we can inject nanobots along with the ESCs, and the bots can just sort of use them like Legos to recreate the existing structure faster than they would have mimicked it on their own. Actually, I think a more significant question is this: since you seem to think replacing pieces of the brain one at a time will still preserve the same mind, why do we need to do it with squishy things? Why couldn't the neurons just be replaced with electro-mechanical "cells" that mimic the actions of the neurons? Once all the neurons are replaced with "cells" we can monitor, it should be relatively straightforward to download all the activity and simulate the mind in a virtual machine.
Very enjoyable article. Your combination of biological acumen and pithy snark had me giggling several times, but your point is quite well made (no matter the number of commenters who seem to have missed it completely). It's much more likely that an independent artificial mind might someday be created than that consciousness could be "uploaded" from one venue to another – not from a philosophical point of view, at least, and it seems not from a biological one either.
In other words, enjoy it while you have it, because you can’t take it with you.
OTOH the human imagination IS a many-splendored thing – it's the impetus behind every advancement made (as long as the brains don't fall out).
Glad you enjoyed it! Pithy snark? I resemble that remark… And I agree, human imagination is astonishing. That small bundle of “meat” (a term as repulsive as it is inaccurate) can encompass everything, from quarks to universes, within its finite volume.
Athena is dead right about microtubules, and much else. Penrose told me once he posed his view as a “half-serious” idea, and I asked, “Which half?”
Athena’s views parallel mine — the horizon is so far away, it shadows the muddy tracks we must cross, making our technical problems very large.
Glad you enjoyed it, Greg! I wish Penrose hadn’t started the quantum microtubule nonsense, it puts him into the crank category — like many other well-known scientists who wander too far off their domain of expertise and fall in love with neat-o sounding ideas that don’t have a shred of supporting evidence. I can name a slew of names but will refrain, though I touched on this subject in my essay On Being Bitten to Death by Ducks (http://www.starshipnivan.com/blog/?p=196).
As for the technical hurdles to overcome, “We’ll sleep when we’re dead!” — to slightly paraphrase Warren Zevon’s immortal words. It’s not for nothing that Prometheus is such a central figure in my visions. But my goal is to enable us to take to the stars, not to live watching our navels in VR (*snore*).
There is an interesting solution to this problem. If technology evolves to the point where we can create artificial parts of the brain, then theoretically each individual lobe of the brain can be replaced incrementally, thus preserving continuity of consciousness until the entire brain is replaced with synthetic parts. As long as it is done piece by piece and the individual has time to adapt to each upgrade, over time the entire nervous system can be replaced without loss of perspective, memory, or continuity. This would also mitigate the problem of going insane while trying to adapt to a whole brain upgrade all at once.
This is like the old saying, if you have a worn-out axe and replace the handle, and then a few years later you replace the dulled blade, is it still the same axe? Reality exists in such a way that as long as changes happen slowly and incrementally then organisms have time to adapt to new configurations and the underlying continuity of reality is preserved, even though it is always changing piece by piece.
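The piecemeal-replacement idea above can be caricatured in a few lines of code. This is purely my own toy sketch, not a brain model: a signal-processing pipeline whose components are swapped out one at a time, with overall behavior checked for continuity after every swap.

```python
# Toy sketch (illustrative only): a "system" whose components are replaced
# one at a time while its end-to-end behavior is verified at every step.

def run(components, signal):
    """Pass a signal through every component in order."""
    for f in components:
        signal = f(signal)
    return signal

# Original "biological" components: each doubles the signal, then adds an offset.
def make_bio(offset):
    return lambda x: 2 * x + offset

# Synthetic replacement engineered to have the same input/output mapping.
def make_synthetic(offset):
    return lambda x: 2 * x + offset

system = [make_bio(i) for i in range(5)]
baseline = run(system, 1)

# Replace one component per "step"; behavior must match at each step.
for i in range(len(system)):
    system[i] = make_synthetic(i)
    assert run(system, 1) == baseline  # continuity preserved

print(baseline)  # → 58
```

The sketch captures only the functional-equivalence claim, of course; whether identity tracks function is exactly what the thread is arguing about.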
You’re quite right on both your points, James. The pace of replacement is a major hurdle: too fast and it may unhinge the brain/mind; too slow and it may not replace failing parts in time.
I used to think that way too, but now I’m not so sure. How fast is too fast when scanning/replacing the brain, and why is there this notion of ‘too fast’ in the first place? Is losing consciousness and then abruptly regaining conscious awareness as an upload in one shot really worse than slowly replacing your brain in a piecemeal fashion? The philosophical problem of identity still stands with either process (indeed it also stands right now as our brains diverge with each time step of the universe), the only thing in contention is the pragmatic argument over which is more feasible with envisioned probable future engineering. You can argue that limitations of neuroengineering will introduce errors that the piecemeal method would compensate for, sure. But hypothetically speaking, if scanning the entire brain and then re-instantiating it on a different substrate is done with enough precision, should it not result in the same upload as you’d get with the slower process?
Haig, if you read the article you would see that I don’t think uploading will retain the original consciousness. Recreate it, perhaps — given almost fantastic technology. Retain it, no.
So the issue focuses on how to keep the physical brain functioning, which brings up the replacement of neurons in situ. I discuss this in the article and also in my reply to Gerard in the comment thread.
I did read the article, but you still haven’t given a satisfactory reason why ‘consciousness’ is not ‘retained’ vis-à-vis uploading. To preempt any possible interjections regarding the physical feasibility from your end, let’s just say not only the brain, but the entire body is scanned down to the atomic level and then recreated on a quantum computer so that even the Schrödinger wave equation of each particle is simulated and accounted for. Would you still argue that the upload in this thought experiment is not a complete recreation of the original? The original person will still be alive and kicking, and that’s why possibly the most humane (yet very counterintuitive) way to resolve that problem would be to destructively scan the person, in effect killing the biological version (voluntarily, hopefully).
All this being said, I’m still probably closer to your end of thinking on the upload issue. Given the option of scanning or in situ replacement, I’d choose the latter (for now), but I still can’t deny the physics: uploading of the type you deem impossible is feasible, however counterintuitive or far off the actual technology.
((I think the argument is really over identity, neurons or otherwise. Is functional equivalence sufficient (software programs) or is each piece of individual string underlying the matter of spacetime unique in some special way that makes the possibility of uploads prima facie different from their biological counterparts.))
“the entire body is scanned down to the atomic level and then recreated on a quantum computer so that even the Schrödinger wave equation of each particle is simulated and accounted for”
IANAP, but this is an excellent observation! If this were to be carried out under the accepted physical laws, it is *in theory* possible but would *require* destruction of the original to get around the Heisenberg uncertainty limit; this is commonly referred to as the ‘no-cloning theorem.’ Thus this solves the forking of your essence (to borrow a programming term): if you want 100% fidelity, it requires destruction of the original. The state of the art is only applicable to simple particles/molecules but in theory could be scaled somehow.
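For reference, the no-cloning result invoked here follows from linearity alone; a textbook sketch (my own summary, not from the comment):

```latex
% Suppose a unitary $U$ clones arbitrary states: $U(\lvert\psi\rangle\lvert 0\rangle)
% = \lvert\psi\rangle\lvert\psi\rangle$ for all $\lvert\psi\rangle$. Then
\begin{align*}
U(\lvert 0\rangle \lvert 0\rangle) &= \lvert 0\rangle \lvert 0\rangle, \qquad
U(\lvert 1\rangle \lvert 0\rangle) = \lvert 1\rangle \lvert 1\rangle .
\intertext{Linearity forces, for $\lvert +\rangle = \tfrac{1}{\sqrt{2}}(\lvert 0\rangle + \lvert 1\rangle)$,}
U(\lvert +\rangle \lvert 0\rangle) &= \tfrac{1}{\sqrt{2}}\bigl(\lvert 0\rangle\lvert 0\rangle + \lvert 1\rangle\lvert 1\rangle\bigr),
\intertext{whereas cloning would require}
U(\lvert +\rangle \lvert 0\rangle) &= \lvert +\rangle\lvert +\rangle
= \tfrac{1}{2}\bigl(\lvert 0\rangle\lvert 0\rangle + \lvert 0\rangle\lvert 1\rangle + \lvert 1\rangle\lvert 0\rangle + \lvert 1\rangle\lvert 1\rangle\bigr).
\end{align*}
```

The two results differ, so no such universal cloning unitary exists; a perfect quantum-level copy cannot leave the original state intact.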
But as a related example, imagine the case where two perfect copies were made in such a way as to be not 100% perfect (but as close as your specific molecular arrangement is typically preserved over 1 second) and there was no way to tell who the original was (enough time could be taken, oblivious to the original/copy, to complete the process). Both copies would have the same continuity you do when awaking from a nap – yet be separate people. Both could be said to be the same person at the instant both were awakened if the copy process was accurate given the homogeneous nature of particles and physics. Yet both would diverge as time progressed growing into two distinct people similar to identical twins – a forking of your ‘soul’ or ‘essence’. The most puzzling thing to me is why people think there has to be a ‘fixed’ boundary as to your essence when it is obvious it has been in flux from pre-zygote to the time you read this.
If there is nothing but physics, we have to entertain the notion that we are nothing but information run on the operating system of physics. Since the atoms/molecules that make you up have changed many times over your lifetime, what makes you *you*, given that you have retained the same name, identity and ‘soul’? After all, don’t all of us suffer a large discontinuity every sleep/wake cycle? And don’t your neurons rewire themselves every second already, making the notion of inserting synthetic ‘immortal’ neurons that replicate full function a rock-solid case that immortality (as limited by the entropy of the universe, of course!) is not only possible but perhaps not even infeasible, given our current progress and the centuries we may have to work on it?
In addition, while we are on the topic of what makes a person’s identity – I almost never see the concept of immortality approached from the opposite angle. If your essence is information run on the physics of the universe, then every person is already immortal in the sense that your informational (physical) state was already a valid possibility at the instant of the Big Bang and will always be a valid point in the solution space satisfying the physics of our universe and its initial conditions. Your possibility of being a real physical instance is forever immortal, and could be said to be transcendent to the universe in the way abstract concepts such as math are, since presumably math could contain the physical universe as a subset of all possible mathematics if the physical universe is equivalent to comprehensible physical laws. While not immortality in the sense of some religions, it does seem to be a close scientific analog.
An interesting outcome of this is that your specific essence – say, defined as the instantaneous makeup of your body at the time you read this, with fidelity approaching 100% in the limit – is a fantastically small fraction of the possible. Even if you extend the definition of ‘your essence,’ as in the many-worlds interpretation, to cover all possible states from conception to your death, that subset, though fantastically larger than the previous one, is still a fantastically small subset of the possible. With either definition, you and I are nothing more than a subset (albeit a realized subset) of possibility within the universe. What never ceases to amaze me is the impossibly small odds of my existence – even with the latter definition of essence, I have already exceeded the rough equivalent of winning every single lottery in history with a single try at each. Yet this level of improbability is shared by every person, and through the anthropic principle is what any entity would observe if observation took place.
The weak version of the anthropic principle is a tautology; the strong version is religion. Both use circular reasoning. Certainly, insofar as everything in the universe came into existence with the Big Bang, we are indeed “collapsed” probabilities. However, calculating probabilities after the fact is equivalent to placing a bet after the race has been run.
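The “bet after the race” point can be made concrete with a toy calculation of my own (not from the comment): any *specific* ordering of a shuffled deck is astronomically improbable, yet every shuffle necessarily produces one, so post-hoc improbability alone proves nothing.

```python
# Toy illustration of post-hoc probability: any specific shuffled deck
# order is astronomically improbable, yet every shuffle yields one.
import math
import random

orderings = math.factorial(52)   # number of possible deck orders, ~8.07e67
p_specific = 1 / orderings       # prior probability of any one given order

deck = list(range(52))
random.shuffle(deck)
# Whatever `deck` now is, its prior probability was p_specific. The
# "improbability" is only meaningful if the order was specified in advance.
print(f"possible orders: {orderings:.3e}")
```

The same asymmetry applies to existence: computing the odds of the one configuration that actually occurred, after it occurred, is betting after the race.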
These issues are important — and because they are, I addressed them all either in the article itself, or in replies to other comments.
Although, as a social scientist I am not an expert in this field, I am inclined to agree on the technical challenges – it is very difficult, but I would not necessarily say impossible – to adequately copy/recreate the “neuronal configurations” of a person. What I do not agree on are some of the “philosophical” points.
“A human is not born as a tabula rasa, but with a brain that’s already wired and functioning as a mind. Furthermore, the brain forms as the embryo develops. It cannot be inserted after the fact, like an engine in a car chassis or software programs in an empty computer box.”
I agree that the brain is already wired in some way when the child is born, but as I have heard, the majority of connections form within the first years after birth, and the brain rewires constantly as we gain more knowledge, experience, sensory input etc. However, I see no reason why it should be impossible to copy the state of a brain at a given moment (provided the availability of sufficiently advanced technology) and run a simulation of how it may rewire over time given certain stimuli. Whether this really is a hypothetical “copy” of yourself at the given point in time cannot be answered, in my view (due to lacking empirical data). Even worse: it may be the case that one could never find out if the simulation is really identical to you (it may say “yes,” but how to evaluate whether that is true?)
And if you would be really able to model and simulate the signaling processes, chemical reactions etc. that happen in your brain in a computer model, why should there be a difference between what your brain does and what the computer does?
(I mean, I can record my voice on a digital system, and when the recording and output are sufficiently good, you hear no difference between what the computer does and what my vocal cords do – because both are about producing certain frequencies, which is all that counts).
“Furthermore, the newly minted person/ality will start diverging from the original the moment it gains consciousness.”
What does “divergence” mean – “divergence” from what? Every ‘second’ (or “Planck-time” instance if you wish) I somewhat diverge from a hypothetical being I would have become if the environmental conditions/experiences were different the instant before… I mean if I would have grown up somewhere else my “hypothetical alternative identity” might be totally different from what I am now – but both (or all possible) alternatives would (have) be(en) “me”. So I personally do not care much about divergence as such…
Another interesting question: what about dreaming – interestingly I realize myself as “me” in most of my dreams… Also people who “lost consciousness” for a while, still realize themselves as themselves afterwards, although with a time lag, i.e. they continue to perceive the “continuity” as if it started from the point they regain consciousness, although their mental configuration might have changed in the meantime.
“Without a body, these functions will flail around and can result in the brain… well, losing its mind.”
What is the body? In regard to what may be relevant in this context, the body is a sensory system, and things like touch, vision, hearing and smell could be simulated. I mean, we already have robots that can see, hear, touch and sense chemicals (smell), although they still do not have enough ‘brains’ to analyze all this to the fullest.
Also there are people who are severely paralyzed or lack body parts and organs and can think very clearly (e.g. Stephen Hawking)! So I am not sure in how far a real physical body is necessary.
“Finally, without context we may lose the ability for empathy”
May! This is hypothetical.
I have addressed most of your points in the article, although briefly due to length constraints. If H+ decides to continue this collaboration, I may revisit some of them in more detail.
You’re right that most of the brain wiring specific to each of us is done after birth. But this is fundamentally different from inserting a new brain into an already formed body. Dreaming and fainting are irrelevant to continuity of consciousness, since the brain is active during both processes. And changing any bodily component does have repercussions on the brain/mind. The fact that we are able to overcome them (some of them, some of the time) speaks to the plasticity of our neurons.
As I said to earlier commenters, the point I’m making about uploading as a means of attaining personal immortality is not that a computer simulation cannot intrinsically attain consciousness if it becomes complex enough — but that it won’t be the original, which is embedded in the original physical substrate.
The divergence between an “original” and a “clone” (regardless of how the latter is generated — genetic manipulation, “brain copying” or any other technique) is real, not hypothetical in the multiple-universes sense. The two co-exist in this universe. In fact, cloning creates the opposite problem from the one most commonly trotted out. In my opinion, a fully conscious clone is a real person and cannot be “mined” to augment/immortalize the original without creating slavery/autonomy moral issues.
Last but not least, yes, there are hypotheses here. It’s the province of science to make them; otherwise scientists would be reduced to technician status. What makes such theories more than philosophy is the requirement of testable predictions.
Continuity of consciousness is an illusion anyway. It has to be because people are always changing, and these changes ultimately break down into discrete particle interactions. Yes, the uploaded backup copy of me may be a slightly different person than the original; but by the time the upload’s finished, so will the organic copy of me be a slightly different person.
Are you familiar with Nick Bostrom’s hypothesis of substrate independence, and can you give a good reason to reject it when all the successful work in neuroprosthetics seems to support it?