
Clearing Up Misconceptions About Mind Uploading

Substrate Independence
The term substrate-independence denotes the philosophical thesis of Functionalism – that what is important about the mind and its constitutive sub-systems and processes is their functional role, not their material composition. If such a function could be recreated using an alternate series of component parts or procedural steps, or could be recreated on another substrate entirely, the philosophical thesis of Functionalism holds that it should be the same as the original, experientially speaking.

However, one rather common and ready-at-hand misinterpretation stemming from the term “Substrate Independence” is the notion that we as personal selves could arbitrarily jump from mental substrate to mental substrate, since mind is software and software can be run on various general purpose machines. The most common form of this notion is exemplified by scenarios laid out in various Greg Egan novels and stories, wherein a given person sends their mind encoded as a wireless signal to some distant receiver, to be re-instantiated upon arrival.

The term substrate-independent minds should denote substrate independence for minds in general – again, the philosophical thesis of Functionalism – and not this second, illegitimate notion. In order to send oneself as such a signal, one would have to put all the processes constituting the mind “on pause” – that is, all causal interaction and thus causal continuity between the software components and processes instantiating our selves would be halted while the software was encoded as a signal, transmitted and subsequently decoded. We could expect this to be equivalent to temporary brain death or to destructive uploading without any sort of gradual replacement, integration or transfer procedure. Each of these scenarios entails the cessation of all causal interaction and causal continuity between the components and processes instantiating the mind. Yes, we would be re-instantiated upon reaching our destination, but we can expect this to be as phenomenally discontinuous as brain death or destructive uploading.

There is talk in philosophical and futurist circles – where Substrate-Independent Minds is a familiar topic and a common point of discussion – suggesting that the mind is software. This sentiment ultimately derives from Functionalism, and the notion that when it comes to mind it is not the material of the brain that matters, but the process(es) emerging therefrom. And because almost all software is designed so as to be implemented on general-purpose (i.e. standardized) hardware, the suggestion is that we should likewise be able to transfer the software of the mind onto a new physical computational substrate with as much ease as we transfer ordinary software. While we would emerge from such a transfer functionally isomorphic with ourselves prior to the jump from computer to computer, we can expect this to be the phenomenal equivalent of brain death or destructive uploading, again, because all causal interaction and continuity between that software’s constitutive sub-processes has been discontinued. We would have been put on “pause” in the time between leaving one computer, whether as a static signal or as static solid-state storage, and arriving at the other.

This is not to say that we couldn’t transfer the physical substrate implementing the “software” of our mind to another body, provided that body were equipped to receive such a physical substrate. But this doesn’t have quite the same advantage as beaming oneself to the other side of Earth – or to Andromeda, for that matter – at the speed of light.

But to transfer a given whole-brain emulation (WBE) to another mental substrate without incurring phenomenal discontinuity may very well involve a second gradual integration procedure, in addition to the one the WBE initially underwent (assuming it isn’t a product of destructive uploading). And indeed, this would be more properly thought of in the context of a new substrate being gradually integrated with the WBE’s existing substrate, rather than the other way around (i.e. portions of the WBE’s substrate being gradually integrated with an external substrate). It is likely to be much easier to simply transfer a given physical mental substrate to another body, or to bypass this need altogether by actuating bodies via tele-operation instead.

In summary, what the term denotes is substrate independence for mind in general, and not for a specific mind in particular (at least not without a gradual integration procedure, like the type underlying the notion of gradual uploading, to transfer such a mind to a new substrate without causing phenomenal discontinuity).

 

Uploading Is a Bad Term

The term “Mind-Uploading” itself has some drawbacks and creates common initial misconceptions. It is based on terminology originating in the context of conventional, contemporary computers – which may lead to the initial impression that we are talking about uploading a given mind into a desktop PC, to be run in the manner that Microsoft Word is run. This makes the notion of WBE appear more fantastic and incredible – and thus improbable – than it actually is.

Another potential misinterpretation of Mind-Uploading is that we seek to upload a mind into a computer – as though it were nothing more than a simple file transfer. This, again, connotes modern paradigms of computation and communications technology that are unlikely to be used for WBE. It also creates the connotation of putting the mind into a computer – whereas a more accurate connotation, at least as far as gradual uploading as opposed to destructive uploading is concerned, would be bringing the computer gradually into the biological mind.

It is easy to see why the term initially came into use. The notion of destructive uploading was the first embodiment of the concept; the notion of gradual uploading – conceived so as to mitigate the philosophical problems pertaining to how much a copy can be considered the same person as the original, especially in contexts where both are simultaneously existent – came afterward. In the context of destructive uploading it makes more connotative sense to think of concepts like uploading and file transfer.

But in the notion of gradual uploading, portions of the biological brain – most commonly single neurons, as in Robert A. Freitas’s and Ray Kurzweil’s versions of gradual uploading – are replaced with in-vivo computational substrate, placed where the neurons they replace were located. Such a computational substrate would be operatively connected to electrical or electrochemical sensors (to translate the biochemical, or more generally biophysical, output of adjacent neurons into computational input that can be used by the computational emulation) and electrical or electrochemical actuators (to likewise translate the computational output of the emulation into biophysical input that can be used by adjacent biological neurons). It is also possible to have this computational emulation reside in a physical substrate existing outside of the biological brain, connected to in-vivo biophysical sensors and actuators via wireless communication (i.e. communicating via electromagnetic signal).
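
To make the sensor/actuator arrangement concrete, here is a minimal sketch in Python. It is purely illustrative: every class, function and parameter here is a hypothetical stand-in of my own invention, not a proposal for actual neural-interface hardware.

```python
# Illustrative sketch only: a hypothetical emulated neuron bridged to
# adjacent biological neurons through a sensor and an actuator.

class EmulatedNeuron:
    """Toy leaky-integrator stand-in for a functional neuron emulation."""

    def __init__(self, threshold=1.0, leak=0.9):
        self.potential = 0.0
        self.threshold = threshold
        self.leak = leak

    def step(self, input_current):
        """Advance the emulation one timestep; return True on a spike."""
        self.potential = self.potential * self.leak + input_current
        if self.potential >= self.threshold:
            self.potential = 0.0
            return True
        return False


def bridge_step(read_sensor, drive_actuator, neuron):
    """One cycle of the sensor -> emulation -> actuator loop.

    read_sensor():       translates the biophysical output of adjacent
                         biological neurons into computational input.
    drive_actuator(out): translates the emulation's computational output
                         back into biophysical input for those neurons.
    """
    biological_input = read_sensor()
    spiked = neuron.step(biological_input)
    drive_actuator(spiked)
```

Whether the emulation sits in place of the replaced neuron or resides outside the brain entirely, communicating wirelessly, the same translation loop applies.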

This notion is I think not brought into the discussion enough. It is an intuitively-obvious notion if you’ve thought a great deal about Substrate-Independent Minds and frequented discussions on Mind-Uploading. But to a newcomer who has heard the term Gradual Uploading for the first time, it is all too easy to think “yes, but then one emulated neuron would exist on a computer, and the original biological neuron would still be in the brain. So once you’ve gradually emulated all these neurons, you have an emulation on a computer, and the original biological brain, still as separate physical entities. Then you have an original and the copy – so where does the gradual in Gradual Uploading come in? How is this any different than destructive uploading? At the end of the day you still have a copy and an original as separate entities.”

This seeming impasse is I think enough to make the notion of Gradual Uploading seem at least initially incredible and infeasible — until people take the time to read the literature and discover how gradual uploading could actually be achieved (i.e. wherein each emulated neuron is connected to biophysical sensors and actuators to facilitate operational connection and causal interaction with existing in-vivo biological neurons). The connotations created by the term I think to some extent make it seem so fantastic (as in the overly-simplified misinterpretations considered above) that people write off the possibility before delving deep enough into the literature and discussion to actually ascertain the possibility with any rigor.

 

The Computability of the Mind

Another common misconception is that the feasibility of Mind-Uploading is premised upon the notion that the brain is a computer or operates like a computer. The worst version of this misinterpretation that I’ve come across is the claim that proponents and supporters of Mind-Uploading believe the mind is similar in operation to current and conventional paradigms of computation.

Before I elaborate on why this is wrong, I’d like to point out a particularly harmful sentiment that can result from this notion. It makes the concept of Mind-Uploading seem dehumanizing, because conventional computers don’t display anything like intelligence or emotion, and it leads people to conflate the possible behaviors of future computers with the behaviors of current computers: “Obviously computers don’t feel happiness or love, and so to say that the brain is like a computer is a farcical claim.”

It also makes people think that advocates and supporters of Mind-Uploading are claiming that the mind is reducible to basic autonomous operations, like cogs in a machine, which constitutes for many people a seeming affront to our privileged place in the universe as humans, in general, and to our culturally-engrained notions of human dignity being inextricably tied to physical irreducibility, in particular. The intuitive notions of human dignity and the ontologically-privileged nature of humanity have yet to catch up with physicalism and scientific materialism (a.k.a. metaphysical naturalism). It is not the proponents of Mind-Uploading that are raising these claims, but science itself – and for hundreds of years, I might add. Man’s privileged and physically-irreducible ontological status has become more and more undermined throughout history, since at least as far back as Darwin’s theory of evolution, which brought the notion of the past and future phenotypic evolution of humanity into scientific plausibility for the first time.

It is also seemingly disenfranchising to many people, in that notions of human free-will and autonomy seem to be challenged by physical reductionism and determinism – perhaps because many people’s notions of free-will are still associated with a non-physical, untouchably-metaphysical human soul (i.e. mind-body dualism) lying outside the purview of physical causality. To compare the brain to a “mindless machine” is still for many people disenfranchising to the extent that it questions the legitimacy of their metaphysically-tied notions of free-will.

That the sheer audacity of experience and the raucous beauty of feeling are ultimately reducible to physical and procedural operations does not take away from them. If they were the result of some untouchable metaphysical property – a sentiment that mind-body dualism promulgated for quite some time – then there would be no way for us to understand them, or to change them (e.g. improve upon them) in any way. There is no reason Man’s urge to discover and determine the underlying causes of the world should not apply to his own self as well.

Moreover, the fact that experience, feeling, being and mind result from the convergence of singly-simple systems and processes makes the mind’s emergence from such simple convergence all the more astounding and amazing, not less! If the complexity and unpredictability of mind were the result of complex and unpredictable underlying causes (as the metaphysical notions of mind-body dualism suggest), then the fact that mind turned out to be complex and unpredictable wouldn’t be much of a surprise. The simplicity of mind’s underlying mechanisms makes mind’s emergence all the more amazing, and should not take away from our human dignity but should instead raise it to heights yet unheralded.

Now that we have addressed such potentially-harmful second-order misinterpretations, we will address their root: the common misinterpretations likely to result from the phrase “the computability of the mind”. Not only does this phrase not say that the mind is similar in basic operation to conventional paradigms of computation – as though a neuron were comparable to a logic gate or transistor – it does not even necessarily make the more credible claim that the mind is like a computer in general. The misinterpretation makes the notion of Mind-Uploading seem dubious because it conflates two different types of physical system – computers and brains.

The kidney is just as computable as the brain. That is to say, the computability of mind denotes the ability to make predictively-accurate computational models (i.e. simulations and emulations) of biological systems like the brain, and is not dependent on anything like a fundamental operational similarity between biological brains and digital computers. We can make a computational model of a given physical system, feed it some typical inputs and get a resulting output that approximately matches the real-world (i.e. physical) output of that system.
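
As a minimal illustration of how modest this claim is (the example system and its parameter values are my own, chosen purely for demonstration), here is a computational model of a cooling object whose predicted output approximately matches the physical system’s known behavior:

```python
import math

def simulate_cooling(T0, T_env, k, dt, steps):
    """Euler-integrate Newton's law of cooling: dT/dt = -k * (T - T_env).

    A predictively-accurate computational model of a simple physical
    system: feed it typical inputs, get outputs that approximately
    match the physical system's real-world output."""
    T = T0
    temps = [T]
    for _ in range(steps):
        T += -k * (T - T_env) * dt
        temps.append(T)
    return temps

# The model's prediction closely tracks the closed-form physical solution
# T(t) = T_env + (T0 - T_env) * exp(-k * t), here evaluated at t = 10:
predicted = simulate_cooling(T0=90.0, T_env=20.0, k=0.1, dt=0.01, steps=1000)
analytic = 20.0 + (90.0 - 20.0) * math.exp(-0.1 * 10.0)
assert abs(predicted[-1] - analytic) < 0.1
```

Nothing about the cooling object “operates like a computer”; the computer merely models it predictively. That is all the computability of the mind asserts of the brain.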

The computability of the mind has very little to do with the mind acting as or operating like a computer, and much, much more to do with the fact that we can build predictively accurate computational models of physical systems in general. This also, advantageously, negates and obviates many of the seemingly dehumanizing and degrading connotations identified above that often result from the claim that the brain is like a machine or like a computer. It is not that the brain is like a computer – it is just that computers are capable of predictively modeling the physical systems of the universe itself.

Uploading Represents the Desire for a Safer and Longer Existence, Not the Wanton Want of Superintelligence or the Eschewing of Our Humanity to Emigrate into Machines

Too often is uploading portrayed as the means to superhuman speed of thought or to transcending our humanity. It is not that we want to become less human, or to become like a machine. For most Transhumanists, and indeed most proponents of Mind-Uploading and Substrate-Independent Minds, meat is machinery anyway – in other words, there is no real (i.e. legitimate) ontological distinction between them to begin with. Too often is uploading seen as the desire for superhuman abilities. Too often is it seen as a bonus, nice but ultimately unnecessary.

I vehemently disagree. Uploading has been, from the start, for me – and I think for many other proponents and supporters of Mind-Uploading – a means of life-extension, of deferring and ultimately defeating untimely, involuntary death, as opposed to an ultimately unnecessary means to greater powers, a more privileged position relative to the rest of humanity, or the eschewing of our humanity in a fit of contempt-of-the-flesh. We do not want to turn ourselves into Artificial Intelligence, which is a somewhat perverse and burlesque caricature associated with Mind-Uploading far too often.

The notion of gradual uploading is implicitly a means of life-extension. Gradual uploading will be significantly harder to accomplish than destructive uploading: it requires a host of technologies and methodologies – brain-scanning, in-vivo locomotive systems such as (but not limited to) nanotechnology, or else extremely robust biotechnology – and a host of precautions to prevent causing phenomenal discontinuity, such as giving each non-biological functional replacement time to causally interact with adjacent biological components before the next biological component that it causally interacts with is likewise replaced. The only advantage gradual uploading has over destructive uploading is preserving the phenomenal continuity of a single specific person. In this way it is implicitly a means of life-extension, rather than a means to the creation of AGI, because its only benefit is the preservation and continuation of a single, specific human life, and that benefit entails a host of added precautions and additional necessitated technological and methodological infrastructures.
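
Schematically – and only schematically; the names below are hypothetical placeholders, not a biomedical protocol – the staging discipline of that precaution looks like this: replace one component, let it causally integrate with its still-biological neighbors, verify, and only then proceed.

```python
# Schematic only: the staging logic of gradual replacement.
# 'components', 'replace' and 'verify_integration' are hypothetical.

def gradual_replacement(components, replace, verify_integration,
                        max_wait_cycles=1000):
    """Replace components one at a time, giving each functional
    replacement time to causally interact with adjacent biological
    components before the next replacement proceeds."""
    for component in components:
        replacement = replace(component)
        # Wait until the replacement demonstrably integrates with its
        # still-biological neighbors before touching the next component.
        for _ in range(max_wait_cycles):
            if verify_integration(replacement):
                break
        else:
            raise RuntimeError(
                f"{component} failed to integrate; halting the procedure")
```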

If we didn’t have to fear the creation of recursively self-improving AI – likely to recursively self-modify at a rate far faster than humans could (or indeed, than humans could safely, that is, gradually enough to prevent phenomenal discontinuity) – then I might favor biotechnological methods of achieving indefinite lifespans over gradual uploading. But with the way things are, I am an advocate of gradual Mind-Uploading first and foremost because I think it may prove necessary to prevent humanity from being left behind by recursively self-modifying superintelligences. I hope that it ultimately will not prove necessary – but at the current time I feel it is somewhat likely that it will be.

Most people who wish to implement or accelerate an intelligence explosion à la I. J. Good, and more recently Vernor Vinge and Ray Kurzweil, wish to do so because they feel that such a recursively self-modifying super-intelligence (RSMSI) could essentially solve all of humanity’s problems – disease, death, scarcity, existential insecurity. I think that the potential benefits of creating an RSMSI are outweighed by the drastic increase in existential risk it would entail in making any one entity superintelligent relative to humanity.

Thus uploading constitutes one of the means by which humanity can choose, volitionally, to stay on the leading edge of change, discovery, invention and novelty, if the creation of an RSMSI is indeed imminent. It is not that we wish to become machines and eschew our humanity – rather, the loss of autonomy and freedom inherent in the creation of a relative superintelligence is antithetical to the defining features of humanity, and preserving the uniquely human thrust toward greater self-determination in the face of such an RSMSI, or at least retaining the choice of doing so, may necessitate the ability to gradually upload so as to stay on equal footing in terms of speed of thought and general level of intelligence (which is roughly correlative with the capacity to effect change in the world and thus to determine its determining circumstances and conditions as well).

In a perfect world we wouldn’t need to take the chance of phenomenal discontinuity inherent in gradual uploading. In gradual uploading there is always a chance, no matter how small, that we will come out the other side of the procedure as a different (i.e. phenomenally distinct) person. We can seek to minimize the chances of that outcome by extending the degree of graduality with which we replace the material constituents of the mind, and by minimizing the scale at which we replace those constituents (i.e. gradual substrate replacement one ion-channel at a time would be likelier to ensure the preservation of phenomenal continuity than gradual substrate replacement neuron by neuron would be). But there is always a chance.

This is why biotechnological means of indefinite lifespans have an immediate advantage over uploading, and why, if non-human RSMSI were not a worry, I would favor biotechnological methods of indefinite lifespans over Mind-Uploading. But this isn’t the case: rogue RSMSI are a potential problem, and so the ability to secure our own autonomy in the face of a rising RSMSI may necessitate advocating Mind-Uploading over biotechnological methods of indefinite lifespans.

Mind-Uploading also has some ancillary benefits over biotechnological means of indefinite lifespans. If functional equivalence is validated (i.e. if it is validated that the basic approach works), mitigating existing sources of damage becomes categorically easier. In physical embodiment, repairing structural, connectional or procedural sub-systems in the body requires (1) a means of determining the source of damage and (2) a host of technologies and corresponding methodologies to enter the body, make physical changes that negate or otherwise obviate the structural, connectional or procedural source of such damage, and then exit the body without damaging or causing dysfunction to other systems in the process. Both of these requirements become much easier in the virtual embodiment of whole-brain emulation.

First, looking toward requirement (2), we do not need to design any technologies and methodologies for entering and leaving the system without damage or dysfunction, or for actually implementing physical changes leading to the remediation of the sources of damage. In virtual embodiment this requires nothing more than rewriting information. Since in the case of WBE we have the capacity to rewrite information as easily as it was written in the first place, while we would still need to know what changes to make (which is really the hard part in this case), actually implementing those changes is as easy as rewriting a Word file. There is no categorical difference, since it is all information and we would already have a means of rewriting information.

Looking toward requirement (1), actually elucidating the structural, connectional or procedural sources of damage and/or dysfunction, we see that virtual embodiment makes this much easier as well. In physical embodiment we would need to make changes to the system in order to determine the source of the damage. In virtual embodiment we could run a section of emulation for a given amount of time, change or eliminate a given informational variable (i.e. structure, component, etc.) and see how this affects the emergent system-state of the emulation instance.

Iteratively doing this to different components and different sequences of components, in trial-and-error fashion, should lead to the elucidation of the structural, connectional or procedural sources of damage and dysfunction. The fact that an emulation can be run faster (thus accelerating this iterative change-and-check procedure) and that we can “rewind” or “play-back” an instance of emulation time exactly as it occurred initially means that noise (i.e. sources of error) from natural systemic state-changes would not affect the results of this procedure, whereas in physicality systems and structures are always changing, which constitutes a source of experimental noise. The conditions of the experiment would be exactly the same in every iteration of this change-and-check procedure. Moreover, the ability to arbitrarily speed up and slow down the emulation will aid in our detecting and locating the emergent changes caused by changing or eliminating a given micro-scale component, structure or process.
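
A schematic sketch of this iterative change-and-check loop (all functions here are hypothetical placeholders): because a deterministic emulation can be replayed from an identical snapshot, every perturbation is measured against exactly the same baseline, with none of the experimental noise described above.

```python
import copy

def change_and_check(snapshot, run_emulation, perturbations, horizon):
    """Replay an emulation snapshot once per candidate perturbation and
    compare each outcome against the unperturbed baseline.

    run_emulation(state, horizon) must be deterministic, so the baseline
    run is exactly reproducible on every iteration: the experimental
    conditions are identical each time."""
    baseline = run_emulation(copy.deepcopy(snapshot), horizon)
    effects = {}
    for name, perturb in perturbations.items():
        state = copy.deepcopy(snapshot)
        perturb(state)  # change or eliminate a single informational variable
        outcome = run_emulation(state, horizon)
        effects[name] = (outcome != baseline)  # did the change matter?
    return effects
```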

Thus the process of finding the sources of damage correlative with disease and aging (especially insofar as the brain is concerned) could be greatly improved through the process of uploading. Moreover, WBE should accelerate the technological and methodological development of the computational emulation of biological systems in general, meaning that such procedures could eventually be used to detect the structural, connectional and procedural sources of age-related damage and systemic dysfunction in the body itself, as opposed to just the brain.

This iterative change-and-check procedure would be just as possible via destructive uploading as via gradual uploading. Moreover, in terms of people actually instantiated as whole-brain emulations, remediating those structural, connectional and/or procedural sources of damage is much easier for WBEs than for physically-embodied humans. Incidentally, if being able to distinguish the homeostatic, regulatory and metabolic structures and processes in the brain from the computational or signal-processing structures and processes in the brain is a requirement for uploading (which I don’t think it necessarily is, although I do think that such a distinction would decrease the ultimate computational intensity and thus the computational requirements of uploading, thereby allowing it to be implemented sooner and have wider availability), then this iterative change-and-check procedure could also be used to accelerate the elucidation of such a distinction, for the same reasons that it could accelerate the elucidation of structural, connectional and procedural sources of age-related systemic damage and dysfunction.

Lastly, while uploading (particularly instances in which a single entity or small group of entities is uploaded prior to the rest of humanity) itself constitutes a source of existential risk, it also constitutes a means of mitigating existential risk. Currently we stand on the surface of the earth, naked to whatever might lurk in the deep night of space. We have not been watching the sky for long enough to know with any certainty that some unforeseen cosmic process could not come along to wipe us out at any time. Uploading would allow at least a small portion of humanity to live virtually on a computational substrate located deep underground, away from the surface of the earth and its inherent dangers, thus preserving the future human heritage should an extinction event befall humanity. Uploading would also prevent the danger of being physically killed by some accident of physicality, like being hit by a bus or struck by lightning.

Uploading is also the most resource-efficient means of life-extension on the table, because virtual embodiment essentially negates the need for most physical resources, instead necessitating chiefly one – energy – and increasing computational price-performance means that just how much a given amount of energy can do is continually increasing.

It also mitigates the most pressing ethical problem of indefinite lifespans – overpopulation. In virtual embodiment, overpopulation ceases to be an issue almost ipso facto. I agree with John Smart’s STEM compression hypothesis: that in the long run the advantages proffered by virtual embodiment will make choosing it over physical embodiment an obvious choice for most civilizations, and I think it will be the volitional choice for most future persons. It is safer, more resource-efficient (and thus more ethical, if one thinks that forestalling future births in order to maintain existing life is unethical) and the more advantageous choice. We will not need to say: migrate into virtuality if you want another physically-embodied child. Most people will make the choice to go VR themselves, simply due to the numerous advantages and the lack of any experiential incomparabilities (i.e. modalities of experience possible in physicality but not possible in VR).

In summary, Mind-Uploading (especially gradual uploading) is much more a means of life-extension than a means to arbitrarily greater speed of thought, intelligence or power (i.e. capacity to effect change in the world). We do not seek to become machines, only to retain the capability of choosing to remain on equal footing with them if the creation of RSMSI is indeed imminent. There is no other reason to increase our collective speed of thought, and to do so would be arbitrary – unless we expected to be unable to prevent the physical end of the universe, in which case it would increase the ultimate amount of time and number of lives that could be instantiated in the time we have left.

 

14 Comments

  1. Wonderful. Very thoughtful. I like the fact that you cite a number of reasons why mind uploading is inevitable (extending life and ensuring humanity’s survival).

    In terms of the quality of that type of existence, I notice that you pointed out the fact that we would not only create a virtual version of ourselves, but duplicate the physical sensations that we as virtual humans could continue to experience:

    “we can build predictively accurate computational models of physical systems in general.”

    Would you mind if I quoted from your article on my blog?

    Thanks. I enjoyed this immensely.

  2. Great article, but I don’t understand why you view uploading as EITHER extending life OR enhancing our capabilities. I’d like both, please.

    Also, the first upload could very well turn out to be the RSMSI that you rightly fear.

    Other than that, bring it on!

    • Thanks, Calum. And I don’t think it’s an either life extension or enhancement situation. Which part of the article gave you that impression? Gradual mind uploading constitutes a distinct approach to substantial life-extension, and in the article I argued that it may even be likely to give us a *greater* ability to self-modify. I draw this conclusion for a number of reasons outlined in the article, one of the bigger ones being that the process of gradual uploading will allow for much more precise neuromodification and neuromodulation (if we can design functional analogs to neural components, e.g. neurons, then we’ll likely be able to modulate their activity much better than we can biological neural components); another reason is because we’ll be able to build means of modulation into such functional analogs rather than relying on interfacing with an external means of neuromodulation (like nanoelectrodes or neurooptogenetics as we do now).

      • Ah – upon reflection, I suspect that by “either life extension or enhancement” you were probably referring to the comments, where I said that I endorse R&D into gradual mind-uploading rather than biotechnological or nanomedical approaches to substantive life extension (which have the comparative advantage of being implementable within shorter time-scales – this being due both to the relative complexity of the respective fields and to the much greater history of development in biotech and biomedicine) because it could constitute a strategy toward minimizing the risk of rogue RSMSI as well (by using it to implement a Maximally-Distributed Intelligence Explosion, as alluded to in the comments).

        I do work on both approaches (I have a 20,000+-word scholarly article accepted for the Fall Issue of the peer-reviewed Journal of Geoethical Nanotechnology, which contains more formal work on ‘gradual uploading’, titled “Concepts and Technologies for the Recurrent Functional Replication, Restoration and Indefinite Perpetuation of the Central Nervous System”, and I’m also collaborating on biomedical-gerontological research as a Research Scientist at the ELPIs Foundation for Indefinite Lifespans (a non-profit research organization founded by Marios Kyriazis MD; see elpisfil.org for more details) with Giovanni Santostasi Ph.D. and Marios Kyriazis MD, on a research project that describes and analyzes a novel therapy for use in regenerative medicine, WISCT (Whole-body Induced Somatic Cell Turnover)). In terms of advocacy, however, I’m actually a much more vocal and active advocate of biomedical approaches than I am of gradual uploading, though I consider both to be feasible and each to have its own advantages and disadvantages. This bias toward biomedical approaches is impacted both by the fact that biomedical approaches to life-extension look to be realizable on shorter time-scales and/or with smaller budgets, and by the fact that it’s an easier prospect for the public to get on board with. I also want to clarify that I don’t see a distinct line defining biomedical approaches as life-extension and gradual mind uploading as enhancement; in the article I’m careful to explicitly characterize gradual mind uploading as a distinct approach within the larger field of life-extension, and *not* just as a means of enhancement, as it is often portrayed. Though in the bigger picture I’ve advocated for the feasibility and desirability of both approaches, and in some places have looked at their comparative pros and cons. I hope that helped clear up the potential confusion, Calum.

  3. Yes, I think we are in general agreement on the biggest points.

    I would, however, like to clarify my position on virtuality vs. physicality, and why virtual embodiment would provide increased existential security while not eliminating existential risk completely. Yes, at the end of the day a computer is physical. But we could put the physical substrate of a computer in a safe place (e.g. a fortified structure, deep underground, etc.) without it affecting the lives virtually embodied by that substrate. The physical location of the computer is arbitrary, i.e. it doesn’t make a difference to those inside. To create a similar degree of security in physical embodiment would entail people living underground. Because their sensory environments are tied to physicality, moving them underground would be moving them underground experientially as well, whereas moving the substrate of virtually-embodied uploads underground wouldn’t move them underground experientially.

    Thus in physical embodiment, in order to be in the sensory environment we like to be in, we have to exist on the surface of the earth. This is arguably more dangerous than being located in a server underground, where you’re at less physical risk simply by not interacting with and moving about the physical world as much, and also by being more protected from meteors, cosmic phenomena (e.g. solar flares) and natural disasters (the only exception being seismic activity, where you would seemingly be at greater risk by being on a server underground).

    Redundancy is usually the best way to go. It typically is in most systems. But I have my doubts about the effectiveness of backup files. A backup file, freshly woken up, would be as phenomenally discontinuous as brain death, or, to use the examples used in the above article, as phenomenally discontinuous as putting yourself on pause (i.e. ceasing all causal interaction and thus causal continuity amongst your constitutive components, sub-systems and processes) in order to travel as a signal from one location to another.

    Yes, someone with the same phenomenal consciousness as an earlier instance of you (while nonetheless failing to possess phenomenal continuity with that instance of you, or indeed with the last and latest instance of you) will be around, but it is likely that “you” will be “dead” (you see how all the hard definitions these words imply become troublesome at this point, and we can’t use seemingly-simple terms like you and death anymore without cautionary quotation marks) and that a “you” that isn’t “you” will continue on, living “your” life. I think that once people realize this – that backup files of the mind won’t allow THEM to wake up personally in a new body (which will come when it’s a real possibility and they are faced with the choice of maintaining a backup file, and its resultant philosophical ramifications) – many people might become too squeamish to consider the possibility of an instance of themselves bearing no phenomenal continuity with themselves (i.e. because there’s no gradual integration procedure between the two physical instances) carrying out the rest of their lives with their loved ones, when it won’t be them (or at least, they won’t be able to experience any of it themselves). Thus many people might choose not to make a backup copy of themselves.

    I think I would, even bearing in mind the fact that it won’t be “me” experiencing waking up in a new body, because for me, someone else bearing my name and doing things is better than no one bearing my name and doing things (I’d rather that person be me, though, of course – I hope that’s somewhat obvious), and because I’m not one to turn squeamish at extremely non-typical philosophical considerations; but I think this might prevent the creation of backup files from being totally widespread and ubiquitous. It will still happen, and many people will either not consider the philosophical implications (i.e. will it really be me?) or else consider them and be fine with it, and thus still make and maintain backup files. It could very well be the norm – in fact I won’t be surprised if it becomes the statistical majority, a.k.a. the norm. But I don’t think it will be unquestioningly adopted by 100% of people who would choose to undergo a gradual uploading procedure; it won’t be as much a fact of life as drinking water is.

    You’re right – if we used conventional computational and communications infrastructures like servers connected via the internet, we would not only be as vulnerable to technical errors and natural disasters, but we would probably be more vulnerable to them than we are now, physically-embodied. But then again, a server losing power today wouldn’t mean the loss of one or potentially many, many lives. If we had minds virtually embodied on such servers, then we might consider the added effort and expense of fortifying power and communications connections, or moving servers underground or to more fortified structures, etc. We don’t now because loss of connection costs money, not lives. But when it starts costing lives, then we’ll start considering such added expense as worth it. There are ways that we could better secure servers that aren’t in place or aren’t utilized now, simply because it costs more money than is actually lost through (1) accidental physical damage to servers or (2) loss of power or internet (i.e. communications) connection.

    So those are some reasons that I hope clarify why I think virtual embodiment has the potential to provide some added existential security that is either impossible or inconvenient in physical embodiment. They aren’t comprehensive, but more focused on the particular examples you used and the specific concerns you raised.

    In any case this is more to clarify what I already said than to beat the point into the ground. You’re right, they’re still physical and thus will probably never be completely safe. But at the same time there are some differences that make a difference, so to speak. The fact that we can fortify physical servers in ways that would be inconvenient for physically-embodied (as opposed to virtually-embodied) persons (i.e. no one wants to live in a dark box in order to be more secure from natural disasters) is just one example.

    @daedalus2u, your fearful Jesuit, that’s an interesting and quite unexpected point. Kudos – the more directed and specific the point, the better. The fact that you gave a reference is also much appreciated. I have one caveat/response, however, which I think you hinted at when you said “then again it could just be a feature”. But even if this is, in a sense, merely restating what you’ve already hinted at, explicating it for everyone else is still useful enough to merit unfolding it here.

    My claim was that virtual embodiment allows us to decrease the noise, or sources of error, in empirical tests meant to either (1) determine how the low-level operations of the brain converge so as to produce the emergent functional modalities of mind or (2) differentiate the homeostatic/regulatory structural, connectional or procedural properties of the brain from its computational/signal-processing structural, connectional or procedural properties. I said that the brain changes in response to itself (e.g. thinking) and to interaction with its environment through the sense organs, and this means that every time you go to test the structural, connectional and procedural properties of the brain, the fact that they are ever-changing and that you won’t have the same experiment-environment every time constitutes empirical noise, or in other words a source of error or randomness. This empirical noise is absent in whole-brain emulations, where we can “play back” or “run” a given emulated instance *exactly* as it was. Thus the source of noise – the fact that the brain (i.e. experiment-environment) is always changing – is absent in such virtual experiments.

    So even if you and the referenced paper are correct – if noise is used by the brain for its effects (i.e. emergent functional modalities), if noise is *used* to signal-process in some sense – this would still be a different type of noise than what I’m talking about. I’m talking about experimental noise, a source of error in empirical validation of principles or empirical testing of hypotheses, not a source of randomness in brain operation.

    So this empirical and experimental noise is the unwanted variation of environmental parameters (i.e. environmental variables, properties or aspects) between each instance, and a non-fluctuating environment makes for good experimental methodology and more reliable experimental results.

    So in summary, noise could very well play a salient computational/signal-processing role in the brain, but this would be a different type of noise than what I’m talking about. But that’s not to say that I don’t appreciate your comment, and your attempt to keep authors accountable to reality. The fact that you had an eye for detail (i.e. the specificity of your concern) and provided corroborative evidence for it as well, in the form of a journal reference, is also rare in my experience with article and essay comments, and is always appreciated.

    Thanks again for your comments, Ryan and daedalus2u. The best way to maximize the ethicality, safety and beneficiality of NBIC and other emerging, converging, disruptive and transformative technologies is through deliberative discussion and debate, and you guys are taking part in that, which is of dire importance. It is through discussion that we will (1) best determine what we consider the best embodiments of emerging technologies [i.e. determining the best embodiments], as well as (2) best determine how to shape these technologies into forms that embody our values, ideals, and what we consider to be their best and most beneficial, safe and ethical embodiments [i.e. making those best embodiments actually be the ones that occur].

  4. There is a slight problem*: noise actually enhances functionality in neural networks.

    Czaplicka A, Holyst JA, Sloot PM. Noise enhances information transfer in hierarchical networks. Sci Rep. 2013;3:1223. doi: 10.1038/srep01223. Epub 2013 Feb 6.

    *then again it might be a “feature”.

  5. It seems like mind uploading (or whatever you want to term it) doesn’t solve existential risk, that is, it doesn’t eliminate death.

    It only takes care of one particular kind of death, biological aging. But parts on any computer wear out, computers can be destroyed, files can corrupt. Even if you make multiple copies of a digital mind, it wouldn’t eliminate global catastrophe.

    You might be able to potentially exist for centuries, yes. But how would that stop you from being constantly anxious about possible existential threats?

    In short, you never get to be disembodied in this universe–ever.

    • Thanks for your comments Ryan.

      You’re right – computers wear out, can be destroyed, etc. – but if we accept that phenomenal continuity or the state of “being me” is maintained throughout the gradual replacement procedure, we come out the other side in a better condition (in respect to our parts wearing out) than we were in before. Having built the new system we would then be embodied by, we could design it to have readily detachable parts so that when something wears out it can simply be detached (without causing structural damage to adjacent systems and structures, because it’s built to be detached) and replaced with an ease that isn’t possible in the highly interconnected and interdependent system of the biological body. We can have more durable parts and sub-systems, ones less susceptible to damage and dysfunction. Since we built it from the ground up, we would already have the ability to access any section or scale we want, access a given component, replace it, etc. In fact, we could even integrate systems for removing a given component, transporting it safely out of the system and import a new replacement component, in anticipation of things breaking down and wearing out. This is a luxury we just don’t have with biological bodies, largely due to the high degree of interdependence, interconnection and interaction between separate sub-systems and processes in biological systems. I go into these and some other comparative advantages in an essay accessible here: https://twitter.com/mfoundation/status/320145238733758465

      In terms of existential risk, you’re right – it doesn’t necessarily eliminate X-Risk. In fact, uploading itself constitutes a potential source of existential risk and Global Catastrophic Risk, in that if a single Whole-Brain-Emulation is implemented before everyone else has the chance, we then have a single intelligence that can think much faster than humanity, and could likely thwart any attempts we made to stop or deter him/her AS we attempt to do so in real-time, due to this subjective speedup factor alone.

      However, gradual uploading, if done safely and in a way that maximizes its net-availability, does constitute a means of decreasing existential risk insofar as it allows us to stay on-par with recursively-self-modifying AI. If humanity creates an entity superintelligent relative to itself, then all bets are off. Intelligence is a bigger source of X-Risk than any single technology that could be wielded by an intelligence. Intelligence is extremely unpredictable, and this is an extreme understatement. So if we consider the creation of recursively self-improving AI a likely outcome, then gradual uploading with wide availability may be able to facilitate a “maximally distributed intelligence explosion” thus preventing too much relative intelligence (and thus power) from accumulating in one single entity, which is extremely dangerous and which for me constitutes the most pressing source of X-Risk and Global Catastrophic Risk currently on the developmental horizon.

      In the much farther future, I think it’s likely that people will choose virtual embodiment over physical embodiment – mainly because all the features of physical embodiment will eventually be able to be recreated in Virtuality, while there will be many aspects of Virtuality that are not re-creatable in physicality. This, combined with the increased security from physical risks, will provide incentive for most people to choose virtuality over physicality in the very long run. No one will be forced; I think the comparative advantages will make it the obvious choice. People will balk at the thought that we once stood naked to the sky, totally vulnerable to whatever it could have in store; that we actually spent time traveling from location to location, sloshing our way through viscous physical space; that we actually went through the laborious process of making things from atoms when we could have made it out of light and never known the difference.

      If this is the case, then a civilization’s worth of Uploads could reside in a substrate that seems lifeless from the outside – static, not moving, etc. We could keep realtime sensors in physicality to warn us of physical sources of threats like meteors or supernovae. Meanwhile, we could bury our civilizational substrate deep underground, away from physical harm, and build systems to repair and replace that substrate from within physicality. This would improve our net security from X-risk.

      You’re right – X-Risk and Global Catastrophic Risk are things likely to always exist in varying degrees of seriousness and pressingness, and will likely never be completely mitigated. But this is life, and the fact that X-Risk will always be a possibility on the horizon shouldn’t prevent us from mitigating it to as great an extent as we can, even if it will never really go away.

      Thus, Ryan, we would, as you note, still be constantly anxious about X-risk — just a little bit less so than we are now, with our fragile bodies standing on the skin of the Earth, naked to whatever mysteries and potential threats the deep and languid night of the universe has in store. The subjective speed-up that gradual uploading makes possible could be so great that we might be able to combat physical threats as they occur in real-time, even when it comes to surprise threats that we had no time to prepare for. So maximally-available gradual uploading doesn’t eliminate all X-Risk, you’re right, and if we go about it in the wrong way it could even constitute a source of X-Risk in and of itself, but overall I think it can help decrease net X-Risk, along a variety of alternate avenues touched upon in this essay and this comment, and X-Risk will always be something that we can minimize yet never fully negate.

      • Yes, I think we agree on just about everything. :) I am certainly on the more optimistic side when it comes to these possibilities.

        We both agree that such a transformation as you’ve described mitigates but does not eliminate existential risk. I guess I’m just a little more skeptical about the allure of being able to avoid “physical harm.” What isn’t physical about a computer?

        You’re saying the pitch is “In a regular old human body, parts break down, have to be replaced by surgery. You can get in a car accident. You can get sick. You’re incredibly fragile. Come into our digital world and *none of those things will happen!”*

        But the internet runs on servers, which are computers, in certain places. It’s still physical. It never becomes un-physical. What about solar flares? What about over-heating? What about natural disasters? What about viruses?

        The Large Hadron Collider broke down because a bird dropped a bit of bread onto a cooling unit. The Stock Exchange flash-crashes due to some fluke in the automated trading software.

        It seems like the real promise is redundancy. The redundancy of a yet uninvented system will take care of it. The super-intelligence will just auto-maintain its systems. There will always be a “backup.”

        I guess I’m just repeating myself at this point, but I’m just not convinced that people would sign up in order to escape physical harm and/or death. They will probably sign up because it’s more beautiful than biological life or, most likely, because the people they love are there.