There are two major questions surrounding the concept of mind uploading. The first is feasibility: can we build a model of a brain complete enough for a conscious mind to emerge? The second concerns identity. Some people argue that, if a copy of a conscious mind is identical by every measure (setting aside the fact that one is biological and the other runs on neuromorphic software and hardware), it should be thought of as a continuation of the mind that was mapped and uploaded. Others argue that a copy cannot be considered the same as the original, so the newly awakened consciousness must be a different person.
Various attempts have been made to imagine the benefits of mind uploading. Assuming continuation of the mind, these benefits include indefinite lifespans and the ability to upgrade the mind: when your current brain no longer works well enough, or stops working altogether, you transfer your conscious mind to another (perhaps better) artificial brain. None of these benefits tempt those who see uploads as different people. In The Spike, Damien Broderick declared “copies are not you” and asked, “Would you be prepared to die (sacrifice your current embodiment) in order that an exact copy of yourself be reconstructed elsewhere, or on a different substrate?” He goes on to argue that this is not a procedure he would be willing to undergo. Let’s assume Broderick is right and a copy is indeed “not you.” Does it then follow that mind uploading offers no benefits?
WHAT’S IN IT FOR ME?
Throughout Broderick’s argument, there is the impression that the uploaded consciousness would be a stranger to whom one would have few ties. But I think the copy would have the same status as Hans Moravec’s fourth-generation universal robots. Moravec said, “I consider these future machines to be our progeny, ‘mind children’ built in our image and likeness.” I think “mind children” is an appropriate term for uploads. After all, when you have offspring, some of your genes are duplicated. Each person is the result of two channels of heredity: genetic data encoded in DNA, and culture. Vernor Vinge has suggested humans can be defined as the species that learned to outsource aspects of cognition. This began with the evolution of complex language and the ability to communicate thoughts and intentions to other minds. It continued with the emergence of writing and the preservation of memories in external systems, and now includes technologies that are gradually assuming functions once thought to be exclusively biological. Mind uploading would mark the point at which culture becomes completely independent of biology — when all of a person’s cognition, not just aspects of it, could be duplicated. Can it really be the case that people would treat their upload as a stranger? I would imagine the connection would actually be closer than any that can exist between identical twins.
It would also offer a solution to Terry Grossman’s complaint that death wastes a colossal amount of knowledge. If you assume each person’s life experience amounts to one book, the 52 million deaths that occur annually are equivalent to burning the Library of Congress three times every year. We do attempt to preserve knowledge and pass it on to future generations, but it takes decades for biological children to reach maturity. A “mind child” uploaded from an adult brain, however, would possess a lifetime’s worth of knowledge from day one. As the problems society faces grow more complex, surely it makes sense to have offspring who can contribute to solving them immediately?
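The arithmetic behind the “three Libraries of Congress” comparison can be checked with a back-of-the-envelope calculation. Note one assumption not stated in the text: published figures for the size of the Library of Congress’s book collection vary, so roughly 17 million books is used here purely as a working number.

```python
# Back-of-the-envelope check of the "three Libraries of Congress" claim.
DEATHS_PER_YEAR = 52_000_000   # annual deaths, figure from the text
BOOKS_PER_PERSON = 1           # one "book" of life experience per person
LOC_BOOKS = 17_000_000         # ASSUMED size of the Library of Congress;
                               # published figures vary, so treat this as
                               # an order-of-magnitude estimate only

libraries_lost = (DEATHS_PER_YEAR * BOOKS_PER_PERSON) / LOC_BOOKS
print(f"Equivalent Libraries of Congress lost per year: {libraries_lost:.1f}")
# prints roughly 3.1 under the assumed collection size
```

With the assumed collection size, the result lands close to Grossman’s figure of three per year.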
FROM “DIGITAL INTERMEDIARIES” TO “DIGITAL TWINS”
Must we wait until mind uploading is realized before we get to know our “mind children,” those future progeny who possess our memories and yet grow into different people? Perhaps not. Today, some people use online worlds like Second Life to create and develop “digital people.” This involves emphasizing the roleplaying possibilities of shared online spaces. So, whereas Emily Brontë used the medium of literature to create and develop the character of Heathcliff, now some people create and develop characters who exist exclusively within online spaces. The relationship between a digital person and its creator is best compared not to that between “George Eliot” and “Mary Ann Evans” (the former being a pseudonym of the latter) but to that between “George Eliot” and “Silas Marner” (the latter being a character created by the former).
If you try to argue ‘an upload of your mind would not be you’ with a digital person (which, remember, is a term used to describe a character roleplayed in virtual worlds), you are missing the point. The scanning procedure would not be performed on the brain of the digital person. After all, it does not really have a brain of its own, any more than a literary or movie character does. No, the brain being scanned would belong to the person who actually does the roleplaying. Subjectively, there may already seem to be a degree of separation: he or she may imagine the character to be an individual in its own right. Once the upload is complete and that digital person truly develops independently of the mind that thought it up, perhaps that would not trigger the psychological issues that might arise if one expected the upload to be a continuation of oneself.
A related question is often asked: To what extent is a digital person separate from the person who is roleplaying them? Many consider it impossible to create and sustain a personality that is substantially different from the RL persona. Others argue that current virtual reality is too crude to enable deep immersion into an alternate identity. The first argument may be true, but it hardly rules out the possibility of developing a digital person that is somewhat different from the RL self. As for the second argument, the crudeness of current VR may be an advantage. It enables those who want to play around with alternate identities or character creation, but does not yet raise the kind of existential questions that plagued people in Greg Egan’s Permutation City or the Matrix movies. As online worlds grow in sophistication, they should enable increasingly complex explorations of alternate identities. The end result may seem very strange to us, but perhaps less so to the people who get to use them, especially if they emerge from many small steps from the previous generation of online worlds.
Developments in AI may also allow avatars to become increasingly autonomous. With eye-tracking software, a user could look at a point in the virtual space, and their avatar would head toward it. This might seem like nothing more than a convenient tool, but if an avatar can infer objects of interest from the direction of the user’s gaze, that would represent a basic step toward developing theory of mind.
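A minimal sketch of the gaze-inference step described above, assuming the virtual world exposes object positions and the eye tracker yields an eye position and gaze direction. The function name, the scene objects, and the nearest-to-ray heuristic are all illustrative assumptions, not features of any real system:

```python
import math

# Hypothetical sketch: infer which virtual object the user is looking at
# by finding the object closest to the gaze ray; the avatar could then
# be steered toward that object.

def infer_gaze_target(eye_pos, gaze_dir, objects):
    """Return the name of the object lying nearest the gaze ray.

    eye_pos, gaze_dir: (x, y, z) tuples; gaze_dir need not be normalized.
    objects: dict mapping object name -> (x, y, z) position.
    """
    norm = math.sqrt(sum(c * c for c in gaze_dir))
    d = tuple(c / norm for c in gaze_dir)           # unit gaze direction
    best, best_dist = None, float("inf")
    for name, pos in objects.items():
        v = tuple(p - e for p, e in zip(pos, eye_pos))
        t = sum(vi * di for vi, di in zip(v, d))    # projection along the ray
        if t <= 0:
            continue                                # object is behind the viewer
        closest = tuple(e + t * di for e, di in zip(eye_pos, d))
        dist = math.dist(pos, closest)              # perpendicular distance to ray
        if dist < best_dist:
            best, best_dist = name, dist
    return best

objects = {"fountain": (10.0, 0.0, 0.0), "tree": (0.0, 10.0, 0.0)}
target = infer_gaze_target((0.0, 0.0, 0.0), (1.0, 0.1, 0.0), objects)
print(target)  # -> fountain (it lies almost exactly along the gaze ray)
```

Crude as it is, even this geometric shortcut already attributes an “object of interest” to the user, which is the seed of the theory-of-mind step mentioned above.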
This is something that interests search engine providers. Yahoo’s Usama Fayyad commented, “With more knowledge about where you are, what you are like and what you are doing at the moment…the better we will be able to deliver relevant information when people need it.” This would suggest mind uploading might benefit from progress in search software. After all, to develop as complete a model of a person’s state of mind as possible, eventually you must build something close to a copy of that mind. At Google, Peter Norvig sees us moving away from typing words into a search engine. Instead, “people will discuss their needs with a digital intermediary…the result will not be a list of links, but an annotated report (or a simple conversation) that synthesizes the important points.”
A “digital intermediary” sounds less like a tool and more like a person that collaborates with users, helping to gather and organize information. As storage capacities grow and computers and sensors shrink, some researchers foresee the emergence of computing ecosystems that can automatically capture and store “digital memories” of everything that happens in a person’s daily life. Such a gigantic store of accumulated data would require an AI competent at organizing information. Positive feedback might occur: the better digital intermediaries get at finding meaningful patterns in data, the more they know about the user. The more they know about the user, the better they get at finding meaningful patterns.
Digital intermediaries might develop into what Ben Goertzel has called “digital twins”: “An AI-powered avatar…embodying one’s ideas and preferences and [making] a reasonable emulation of the decisions one would make.” Some have argued that this approach is unlikely to capture enough information about a person to equal a mind upload. I agree. I think “digital twins” are unlikely to convince people that they are the person they claim to be, if subjected to lengthy interrogation by close family and friends. But they are likely to be developed before “mind children,” as is software designed around partial understandings of the structure and functions of the brain. How will regular interactions with digital intermediaries, digital twins, and a clearer understanding of how the mind works affect concepts of “self”?
NO FIXED ESSENCE OF IDENTITY
Traditionally (in the West at least), the self has been attributed to an incorporeal soul, making “I” a fixed essence of identity. But neuroscience is revealing the self as an interplay of cells and chemical processes occurring in the brain — in other words, a transitory dynamic phenomenon arising from certain physical processes. German philosopher Thomas Metzinger’s “Phenomenal Self Model” moves away from a notion of “I” as a substance (incorporeal though it may be) and replaces it with representations of the information that is processed in the brain. The phenomenal self model challenges the “fixed essence of identity” that underlies expressions such as “she is no longer herself.” There isn’t any self in that sense; rather (in Lone Frank’s words) “life is not so much about finding yourself but choosing yourself or molding yourself into the shape you want to be…. The neurotechnology of the future will likewise produce the means for transforming the physical self — be it through various cognitive techniques, targeted drugs, or electronic implants…our individual self will simply be a broad range of possible selves.”
By the time mind uploading is generally available, perhaps people will have forgotten a time when a singular self was “normal.” They will be used to multiple viewpoints, their brains processing information coming not only from their local surroundings, but also from the remote sensors and cyberspaces they are simultaneously linked to. They will have already become familiar with mental concepts migrating from the brain to spawn digital intermediaries within the clouds of smart dust that surround them. Every idea, each inspiration, would give birth to software lifeforms introspecting from many different perspectives before integrating the results of their considerations with the primary consciousness that spawned them. Each and every brain (whether robot, human, or a hybrid) will continually exchange perceptions and thoughts with its personal exocortex, operating within the Dust. Just as computers can already cluster together to create temporary supercomputing platforms, perhaps many exocortices will cluster together to form metacortices within…what?
Well, that’s another question. I will not attempt to answer it, but return instead to the question: “who would want mind uploading?” Perhaps Greg Egan is right, and only the terminally ill who have exhausted all other possibilities for life extension would be prepared to undergo the procedure. Maybe it would appeal only to those who have created and developed “digital people” and who seek to give their “offspring” autonomy by arranging to create a duplicate of their mind, installed in hardware that will enable the digital person to grow in ways its “parent” could scarcely imagine. Or, just maybe, enabling technologies and knowledge gained during the reverse-engineering of living brains will turn current conceptions about “self” upside down. Maybe debates over whether or not “the copy is you” will turn out to have been precisely the wrong kinds of questions to ask.
The right questions can be known only to the future society that must coexist with practical mind-uploading technologies.