Chalmers vs. Pigliucci — The Philosophy of Mind Uploading
The brain is the engine of reason and the seat of the soul. It is the substrate in which our minds reside. The problem is that this substrate is prone to decay. Eventually, our brains will cease to function, and our minds will cease along with them. This will result in our deaths. Little wonder, then, that the prospect of transferring (or uploading) our minds to a more robust, technologically advanced substrate has proved so attractive to futurists and transhumanists.
But is it really feasible? This is a question I’ve looked at many times before, but the recent book Intelligence Unbound: The Future of Uploaded and Machine Minds offers perhaps the most detailed, sophisticated and thoughtful treatment of the topic. It is a collection of essays, from a diverse array of authors, probing the key issues from several different perspectives. I highly recommend it.
Within its pages you will find a pair of essays debating the philosophical aspects of mind-uploading (you’ll find others too, but I want to zero in on this pair because one is a direct response to the other). The first of those essays comes from David Chalmers and is broadly optimistic about the prospect of mind-uploading. The second of them comes from Massimo Pigliucci and is much less enthusiastic. In this two-part series of posts, I want to examine the debate between Chalmers and Pigliucci. I start by looking at Chalmers’s contribution.
1. Methods of Mind-Uploading and the Issues for Debate
Chalmers starts his essay by considering the different possible methods of mind-uploading. This is useful because it helps to clarify — to some extent — exactly what we are debating. He identifies three different methods (note: in a previous post I looked at work from Sim Bamford suggesting that there were more methods of uploading, but we can ignore those other possibilities for now):
Destructive Uploading: As the name suggests, this is a method of mind-uploading that involves the destruction of the original (biological) mind. An example would be uploading via serial sectioning. The brain is frozen and its structure is analyzed layer by layer. From this analysis, one builds up a detailed map of the connections between neurons (and glial cells, if necessary). This information is then used to build a functional computational model of the brain.
Gradual Uploading: This is a method of mind-uploading in which the original brain is gradually replaced by functionally equivalent components. One example of this would be nanotransfer. Nanotechnology devices could be inserted into the brain and attached to individual neurons (and other relevant cells, if necessary). They could then learn how those cells work and use this information to simulate the behaviour of the neuron. This would lead to the construction of a functional analogue of the original neuron. Once the construction is complete, the original neuron can be destroyed and the functional analogue can take its place. This process can be repeated for every neuron, until a complete copy of the original brain has been constructed.
Nondestructive Uploading: This is a method of mind-uploading in which the original brain is retained. Some form of nanotechnology brain-scanning would be needed for this. This would build up a dynamical map of current brain function — without disrupting or destroying it — and use that map to construct a functional analogue.
Whether these forms of uploading are actually technologically feasible is anyone’s guess. They are not completely implausible, however. I can certainly imagine a model of the brain being built from a highly detailed scan and analysis. It might take a huge amount of computational power and technical resources, but it seems within the realm of technological possibility. The deeper question is whether our minds would really survive the process. This is where the philosophical debate kicks in.
There are, in fact, two philosophical issues to debate:
The Consciousness Issue: Would the uploaded mind be conscious? Would it experience the world in a roughly similar manner to how we now experience the world?
The Identity/Survival Issue: Assuming it is conscious, would it be our consciousness (our identity) that survives the uploading process? Would our identities be preserved?
The two issues are connected. Consciousness is valuable to us. Indeed, it is arguably the most valuable thing of all: it is what allows us to enjoy our interactions with the world, and it is what confers moral status upon us. If consciousness was not preserved by the mind-uploading process, it is difficult to see why we would care. So consciousness is a necessary condition for a valuable form of mind-uploading. That does not, however, make it a sufficient condition. After all, two beings can be conscious without sharing any important connection (you are conscious, and I am conscious, but your consciousness is not valuable to me in the same way that it is valuable to you). What we really want to preserve through uploading is our individual consciousnesses. That is to say: the stream of conscious experiences that constitutes our identity. But would this be preserved?
These two issues form the heart of the Chalmers-Pigliucci debate.
2. Would Consciousness Survive the Uploading Process?
So let’s start by looking at Chalmers’s take on the consciousness issue. Chalmers is famously one of the New Mysterians, a group of philosophers who doubt our ability to have a fully scientific theory of consciousness. Indeed, he coined the term “The Hard Problem” of consciousness to describe the difficulty we have in accounting for the first-personal quality of conscious experience. Given his scepticism, one might have thought he’d have his doubts about the possibility of creating a conscious upload. But he actually thinks we have reason to be optimistic.
He notes that there are two leading contemporary views about the nature of consciousness (setting non-naturalist theories to the side). The first — which he calls the biological view — holds that consciousness is only instantiated in a particular kind of biological system: no nonbiological system is likely to be conscious. The second — which he (and everyone else) calls the functionalist view — holds that consciousness is instantiated in any system with the right causal structure and causal roles. The important thing is that the functionalist view allows for consciousness to be substrate independent, whereas the biological view does not. Substrate independence is necessary if an upload is going to be conscious.
So which of these views is correct? Chalmers favours the functionalist view and he has a somewhat elaborate argument for this. The argument starts with a thought experiment. The thought experiment comes in two stages. The first stage asks us to imagine a “perfect upload of a brain inside a computer” (p. 105), by which is meant a model of the brain in which every relevant component of a biological brain has a functional analogue within the computer. This computer-brain is also hooked up to the external world through the same kinds of sensory input-output channels. The result is a computer model that is a functional isomorph of a real brain. Would we doubt that such a system was conscious if the real brain was conscious?
Maybe. That brings us to the second stage of the thought experiment. Now, we are asked to imagine the construction of a functional isomorph through gradual uploading:
Here we upload different components of the brain one by one, over time. This might involve gradual replacement of entire brain areas with computational circuits, or it might involve uploading neurons one at a time. The components might be replaced with silicon circuits in their original location…It might take place over months or years or over hours.
If a gradual uploading process is executed correctly, each new component will perfectly emulate the component it replaces, and will interact with both biological and nonbiological components around it in just the same way that the previous component did. So the system will behave in exactly the same way that it would have without the uploading.
Critical to this exercise in imagination is the fact that the process results in a functional isomorph and that you can make the process exceptionally gradual, both in terms of the time taken and the size of the units being replaced.
With the building blocks in place, we now ask ourselves the critical question: if we were undergoing this process of gradual replacement, what would happen to our conscious experience? There are three possibilities. Either it would suddenly stop, or it would gradually fade out, or it would be retained. The first two possibilities are consistent with the biological view of consciousness; the last is not. It is only consistent with the functionalist view. Chalmers’s argument is that the last possibility is the most plausible.
In other words, he defends the following argument:
(1) If the parts of our brain are gradually replaced by functionally isomorphic components, our conscious experience will either: (a) be suddenly lost; (b) gradually fade out; or (c) be retained throughout.
(2) Sudden loss and gradual fadeout are not plausible; retention is.
(3) Therefore, our conscious experience is likely to be retained throughout the process of gradual replacement.
(4) Retention of conscious experience is only compatible with the functionalist view.
(5) Therefore, the functionalist view is likely to be correct; and preservation of consciousness via mind-uploading is plausible.
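Before examining the premises, it is worth separating the argument’s validity from its soundness. The validity is just the elimination of disjuncts, and can be checked mechanically. Here is a minimal sketch in Lean — purely illustrative, and on the charitable assumption that “implausible” in premise (2) can be treated as outright falsity; the propositional letters are my own labels, not Chalmers’s:

```lean
-- Illustrative formalization of the argument's validity.
-- a = conscious experience is suddenly lost
-- b = conscious experience gradually fades out
-- c = conscious experience is retained
-- f = the functionalist view is correct
example (a b c f : Prop)
    (p1 : a ∨ b ∨ c)   -- premise (1): the three possibilities
    (p2 : ¬a ∧ ¬b)     -- premise (2): loss and fadeout are ruled out
    (p4 : c → f)       -- premise (4): retention entails functionalism
    : c ∧ f :=
  match p1 with
  | .inl ha        => absurd ha p2.1   -- sudden loss contradicts (2)
  | .inr (.inl hb) => absurd hb p2.2   -- fadeout contradicts (2)
  | .inr (.inr hc) => ⟨hc, p4 hc⟩      -- retention, with (4), yields (5)
```

The formal step is trivial; all the philosophical work is done by premises (1) and (2).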
Chalmers adds some detail to the conclusion, which we’ll talk about in a minute. The crucial thing for now is to focus on the key premise, number (2). What reason do we have for thinking that retention is the only plausible option?
With regard to sudden loss, Chalmers makes a simple argument. If we were to suppose, say, that the replacement of the 50,000th neuron led to the sudden loss of consciousness, we could break down the transition point into ever more gradual steps. So instead of replacing the 50,000th neuron in one go, we could divide the neuron itself into ten sub-components and replace them gradually and individually. Are we to suppose that consciousness would suddenly be lost in this process? If so, then break down those sub-components into other sub-components and start replacing them gradually. The point is that eventually we will reach some limit (e.g. when we are replacing the neuron molecule by molecule) where it is implausible to suppose that there will be a sudden loss of consciousness (unless you believe that one molecule makes a difference to consciousness: a belief that is refuted by reality since we lose brain cells all the time without thereby losing consciousness). This casts the whole notion of sudden loss into doubt.
With regard to gradual fadeout, the argument is more subtle. Remember, it is critical to Chalmers’s thought experiment that the upload is functionally isomorphic to the original brain: for every brain state that used to be associated with conscious experience there will be a functionally equivalent state in the uploaded version. If we accept gradual fadeout, we would have to suppose that, despite this equivalence, there is a gradual loss of certain conscious experiences (e.g. the ability to experience black and white, or certain high-pitched sounds). Chalmers argues that this is implausible because it asks us to imagine a system that is deeply out of touch with its own conscious experiences. I find this slightly unsatisfactory insofar as it may presuppose the functionalist view that Chalmers is trying to defend.
But, in any event, Chalmers suggests that the process of partial uploading will convince people that retention of consciousness is likely. Once we have friends and family who have had parts of their brains replaced, and who seem to retain conscious experience (or, at least, all outward signs of having conscious experience), we are likely to accept that consciousness is preserved. After all, I don’t doubt that people with cochlear or retinal implants have some sort of aural or visual experiences. Why should I doubt it if other parts of the brain are replaced by functional equivalents?
Chalmers concludes with the suggestion that all of this points to the likelihood of consciousness being an organizational invariant. What he means by this is that systems with the exact same patterns of causal organization are likely to have the same states of consciousness, no matter what those systems are made of.
I’ll hold off on the major criticisms until part two, since this is the part of the argument about which Pigliucci has the most to say. Nevertheless, I will make one comment. I’m inclined towards functionalism myself, but it seems to me that in crafting the thought experiment that supports his argument, Chalmers helps himself to a pretty colossal assumption. He assumes that we know (or can imagine) what it takes to create a “perfect” functional analogue of a conscious system like the brain. But, of course, we don’t really know what it takes. Any functional model is likely to simplify and abstract from the messy biological details. The problem is knowing which of those details are critical for ensuring functional equivalence. We can create functional models of the heart because all the critical elements of the heart are determinable from a third-person perspective (i.e. we know, from that perspective, what is necessary to make blood pump). That doesn’t seem to be the case with consciousness. In fact, that’s what Chalmers’s Hard Problem is supposed to highlight.
3. Will Our Identities Be Preserved? Will We Survive the Process?
Let’s assume Chalmers is right to be optimistic about consciousness. Does that mean he is right to be optimistic about identity/survival? Will the uploaded mind be the same as we are? Will it share our identity? Chalmers has more doubts about this, but again he sees some reason to be optimistic.
He starts by noting that there are three different philosophical approaches to personal identity. The first is biologism (or animalism), which holds that the preservation of one’s identity depends on the preservation of the biological organism that one is. The second is the psychological continuity theory, which holds that the preservation of one’s identity depends on maintaining threads of overlapping psychological states (memories, beliefs, desires, etc.). The third, slightly more unusual, is Robert Nozick’s “closest continuer” theory, which holds that the preservation of identity depends on the existence of a closely related subsequent entity (where “closeness” is defined in various ways).
Chalmers then defends two different arguments. The first gives some reason to be pessimistic about survival, at least in the case of destructive and nondestructive forms of uploading. The second gives some reason to be optimistic, at least in the case of gradual uploading. The end result is a qualified optimism about gradual uploading.
Let’s start with the pessimistic argument. Again, it involves a thought experiment. Imagine a man named Dave. Suppose that one day Dave undergoes a nondestructive uploading process. A copy of his brain is made and uploaded to a computer, but the biological brain continues to exist. There are, thus, two Daves: BioDave and DigiDave. It seems natural to suppose that BioDave is the original, and his identity is preserved in this original biological form; and it is equally natural to suppose that DigiDave is simply a branch-line copy. In other words, it seems natural to suppose that BioDave and DigiDave have separate identities.
But now suppose we imagine the same scenario, only this time the original biological copy is destroyed. Do we have any reason to change our view about identity and survival? Surely not. The only difference this time round is that BioDave is destroyed. DigiDave is the same as he was in the original thought experiment. That suggests the following argument (numbering follows on from the previous argument diagram):
(9) In nondestructive uploading, DigiDave is not identical to Dave.
(10) If in nondestructive uploading, DigiDave is not identical to Dave, then in destructive uploading, DigiDave is not identical to Dave.
(11) In destructive uploading, DigiDave is not identical to Dave.
This looks pretty sound to me. And as we shall see in part two, Pigliucci takes a similar view. Nevertheless, there are two possible ways to escape the conclusion. The first would be to deny premise (10) by adopting the closest continuer theory of personal identity. The idea then would be that in destructive (but not non-destructive) uploading DigiDave is the closest continuer and hence the vessel in which identity is preserved. I think this simply reveals how odd the closest continuer theory really is.
The other option would be to argue that this is a fission case. It is a scenario in which one original identity fissions into two subsequent identities. The concept of fissioning identities was originally discussed by Derek Parfit in the case of severing and transplanting of brain hemispheres. In the brain hemisphere case, some part of the original person lives on in two separate forms. Neither is strictly identical to the original, but they do stand in “relation R” to the original, and that relation might be what is critical to survival. It is more difficult to say that nondestructive uploading involves fissioning. But it might be the best bet for the optimist. The argument then would be that the original Dave survives in two separate forms (BioDave and DigiDave), each of which stands in relation R to him. But I’d have to say this is quite a stretch, given that BioDave isn’t really some new entity. He’s simply the original Dave with a new name. The new name is unlikely to make an ontological difference.
Let’s now turn our attention to the optimistic argument. This one requires us to imagine a gradual uploading process. Fortunately, we’ve done this already so you know the drill: imagine that the subcomponents of the brain are replaced gradually (say 1% at a time), over a period of several years. It seems highly likely that each step in the replacement process preserves identity with the previous step, which in turn suggests that identity is preserved once the process is complete.
To state this in more formal terms:
- (14) For all n < 100, Dave(n+1) is identical to Dave(n).
- (15) If, for all n < 100, Dave(n+1) is identical to Dave(n), then Dave(100) is identical to Dave.
- (16) Therefore, Dave(100) is identical to Dave.
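For what it’s worth, the validity of this step-by-step reasoning can also be checked mechanically. Below is a small, purely illustrative Lean sketch (the names gradual_survival, dave, and Person are my own, not Chalmers’s), which models identity as strict equality between person-stages so that the move from (14) to (16) follows by induction:

```lean
-- Illustrative: if each of the 100 replacement steps preserves
-- identity (modeled as equality between person-stages), then the
-- final stage is identical to the original.
-- "dave n" = the hypothetical person Dave(n) after n steps;
-- "dave 0" = the original Dave.
theorem gradual_survival {Person : Type} (dave : Nat → Person)
    (step : ∀ k, k < 100 → dave (k + 1) = dave k) :
    dave 100 = dave 0 := by
  -- Strengthened claim, proved by induction on the stage number.
  have chain : ∀ m, m ≤ 100 → dave m = dave 0 := by
    intro m
    induction m with
    | zero => intro _; rfl
    | succ k ih =>
      intro hk                          -- hk : k + 1 ≤ 100 (i.e. k < 100)
      rw [step k hk]                    -- dave (k + 1) = dave k
      exact ih (Nat.le_of_succ_le hk)   -- dave k = dave 0
  exact chain 100 (Nat.le_refl 100)
```

As before, the formal part is the easy part: the real questions are whether premise (14) is true, and whether personal identity behaves like strict, transitive equality in the first place.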
If you’re not convinced by this 1%-at-a-time version of the argument, you can adjust it until it becomes more persuasive. In other words, setting aside certain extreme physical and temporal limits, you can make the process of gradual replacement as slow as you like. Surely there is some point at which the degree of change between the steps becomes so minimal that identity is clearly being preserved? If not, then how do you explain the fact that our identities are being preserved as our body cells replace themselves over time? Maybe you explain it by appealing to the biological nature of the replacement. But if we have functionally equivalent technological analogues it’s difficult to see where the problem is.
Chalmers adds other versions of this argument. These involve speeding up the process of replacement. His intuition is that if identity is preserved over the course of a really gradual replacement, then it may well be preserved over a much shorter period of replacement too, for example one that takes a few hours or a few minutes. That said, there may be important differences when the process is sped up. It may be that too much change takes place too quickly and the new components fail to smoothly integrate with the old ones. The result is a break in the strands of continuity that are necessary for identity-preservation. I have to say I would certainly be less enthusiastic about a fast replacement. I would like the time to see whether my identity is being preserved following each replacement.
That brings us to the end of Chalmers’s contribution to the debate. He says more in his essay, particularly about cryopreservation and the possible legal and social implications of uploading. But there is no sense in addressing those topics here. Chalmers doesn’t develop his thoughts at any great length, and Pigliucci wisely ignores them in his reply. We’ll be discussing Pigliucci’s reply next.
As we saw above, there were two issues up for debate:
The Consciousness Issue: Would an uploaded mind be conscious? Would it experience the world in a roughly similar manner to how we now experience the world?
The Identity/Survival Issue: Assuming it is conscious, would it be our consciousness (our identity) that survives the uploading process?
David Chalmers was optimistic on both fronts. Adopting a functionalist theory of consciousness, he saw no reason to think that a functional isomorph of the human brain would not be conscious. Not unless we assume that biological material has some sort of magic consciousness-conferring property. And while he had his doubts about survival via destructive or non-destructive uploading, he thought that a gradual replacement of the human brain, with functionally equivalent artificial components, could allow for our survival.
As we will see today, Pigliucci is much more pessimistic. He thinks it is unlikely that uploads would be conscious, and, even if they are, he thinks it is unlikely that we would survive the uploading process. He offers four reasons to doubt the prospect of conscious uploads, two based on criticisms of the computational theory of mind, and two based on criticisms of functionalism. He offers one main reason to doubt survival. I will suggest that some of his arguments have merit, some don’t, and some fail to engage with the arguments put forward by Chalmers.
4. Pigliucci’s Criticisms of the Computational Theory of Mind
Pigliucci assumes that the pro-uploading position depends on a computational theory of mind (and, more importantly, a computational theory of consciousness). According to this theory, consciousness is a property (perhaps an emergent property) of certain computational processes. Pigliucci believes that if he can undermine the computational theory of mind, then so too can he undermine any optimism we might have about conscious uploads.
To put it more formally, Pigliucci thinks that the following argument will work against Chalmers:
(1) A conscious upload is possible only if the computational theory of mind is correct.
(2) The computational theory of mind is not correct (or, at least, it is highly unlikely to be correct).
(3) Therefore, (probably) conscious uploads are not possible.
Pigliucci provides two reasons for us to endorse premise (2). The first is a — somewhat bizarre — appeal to the work of Jerry Fodor. Fodor was one of the founders of the computational theory of mind. But Fodor has, in subsequent years, pushed back against the overreach he perceives among computationalists. As Pigliucci puts it:
[Fodor distinguishes] between “modular” and “global” mental processes, and [argues] that [only] the former, but not the latter (which include consciousness), are computational in any strong sense of the term…If Fodor is right, then the CTM [computational theory of mind] cannot be a complete theory of mind, because there are a large number of mental processes that are not computational in nature.
In saying this, Pigliucci explicitly references Fodor’s book-length response to the work of Steven Pinker, called The Mind Doesn’t Work That Way: The Scope and Limits of Computational Psychology. I can’t say I’m a huge fan of Fodor, but even if I were I would find Pigliucci’s argument pretty unsatisfying. It is, after all, little more than a bare appeal to authority, neglecting to mention any of the detail of Fodor’s critique. It also neglects to mention that Fodor’s particular understanding of computation is disputed. Indeed, Pinker disputed it in his response to Fodor, which Pigliucci doesn’t cite and which you can easily find online. Now, my point here is not to defend the computational theory, or to suggest that Pinker is correct in his criticisms of Fodor; it is merely to suggest that appealing to the work of Fodor isn’t going to be enough. Fodor may have done much to popularise the computational theory, but he doesn’t have final authority on whether it is correct or not.
Let’s move on then to Pigliucci’s second reason to endorse premise (2). This one claims that the computational theory rests on a mistaken understanding of the Church-Turing thesis about universal computability. Citing the work of Jack Copeland — an expert on Turing, whose biography of Turing I recently read and recommend — Pigliucci notes that the thesis only establishes that logical computing machines (Turing Machines) “can do anything that can be described as a rule of thumb or purely mechanical (“algorithmic”)”. It does not establish that “whatever can be calculated by a machine (working on finite data in accordance with a finite program of instructions) is Turing-machine-computable”. This is said to be a problem because proponents of the computational theory of mind have tended to assume that “Church-Turing has essentially established the CTM”.
I may not be well qualified to evaluate the significance of this point, but it seems pretty thin to me. I think it relies on an impoverished notion of computation. It assumes that computationalists, and by proxy proponents of mind-uploading, think that a mind could be implemented on a classic digital computer architecture. While some may believe that, it doesn’t strike me as being essential to their claims. I think there is a broader notion of computation that could avoid his criticisms. To me, a computational theory is one that assumes mental processes (including, ultimately, conscious mental processes) could be implemented in some sort of mechanical architecture. The basis for the theory is the belief that mental states involve the representation of information (in either symbolic or analogue forms) and that mental processes involve the manipulation and processing of the represented information. I see nothing in Pigliucci’s comments about the Church-Turing thesis that upsets that model. Pigliucci actually did a pretty good podcast on broader definitions of computation with Gerard O’Brien. I recommend it if you want to learn more.
In summary, I think Pigliucci’s criticisms of the computational theory are off the mark. Nevertheless, I concede that the broader sense of computation may in turn collapse into the broader theory of functionalism. This is where the debate is really joined.
5. Pigliucci’s Criticisms of Functionalism
And I think Pigliucci is on firmer ground when he criticises functionalism. Admittedly, he doesn’t distinguish between functionalism and computationalism, but I think it is possible to separate out his criticisms. Again, there are two criticisms with which to contend. To understand them, we need to go back to something I mentioned in part one. There, I noted how Chalmers seemed to help himself to a significant assumption when defending the possibility of a conscious upload. The assumption was that we could create a “functional isomorph” of the brain. In other words, an artificial model that replicated all the relevant functional attributes of the human brain. I questioned whether it was possible to do this. This is something that Pigliucci also questions.
We can put the criticism like this:
(8) A conscious upload is possible only if we know how to create a functional isomorph of the brain.
(9) But we do not know what it takes to create a functional isomorph of the brain.
(10) Therefore, a conscious upload is not possible.
Pigliucci adduces two reasons for us to favour premise (9). The first has to do with the danger of conflating simulation with function. This hearkens back to his criticism of the computational theory, but can be interpreted as a critique of functionalism. The idea is that when we create functional analogues of real-world phenomena we may only be simulating them, not creating models that could take their place. The classic example here would be a computer model of rainfall or of photosynthesis. The computer models may be able to replicate those real-world processes (i.e. you might be able to put the elements of the models in a one-to-one relationship with the elements of the real-world phenomena), but they would still lack certain critical properties: they would not be wet or capable of converting sunlight into food. They would be mere simulations, not functional isomorphs. I agree with Pigliucci that the conflation of simulation with function is a real danger when it comes to creating functional isomorphs of the brain.
Pigliucci’s second reason has to do with knowing the material constraints on consciousness. Here he draws on an analogy with life. We know that we are alive and that our being alive is the product of the complex chemical processes that take place in our body. The question is: could we create living beings from something other than this complex chemistry? Pigliucci notes that life on earth is carbon-based and that the only viable alternative is some kind of silicon-based life (because silicon is the only other element that would be capable of forming similarly complex molecule chains). So the material constraints on creating functional isomorphs of current living beings are striking: there are only two forms of chemistry that could do the trick. This, Pigliucci suggests, should provide some fuel for scepticism about creating isomorphs of the human brain:
[This] scenario requires “only” a convincing (empirical) demonstration that, say, silicon-made neurons can function just as well as carbon-based ones, which is, again, an exclusively empirical question. They might or might not, we do not know. What we do know is that not just any chemical will do, for the simple reason that neurons need to be able to do certain things (grow, produce synapses, release and respond to chemical signals) that cannot be done if we alter the brain’s chemistry too radically.
I don’t quite buy the analogy with life. I think we could create wholly digital living beings (indeed, we may even have done so), though this depends on what counts as “life”, which is a question Pigliucci tries to avoid. Still, I think the point here is well-taken. There is a lot going on in the human brain. There are a lot of moving parts, a lot of complex chemical mechanisms. We don’t know exactly which elements of this complex machinery need to be replicated in our functional isomorph. If we replicate everything, then we are just creating another biological brain. If we don’t, then we risk missing something critical. Thus, there is a significant hurdle when it comes to knowing whether our upload will share the consciousness of its biological equivalent. It has been a while since I read it, but as I recall, John Bickle’s work on the philosophy of neuroscience develops this point about biological constraints quite nicely.
This epistemic hurdle is heightened by the hard problem of consciousness. We are capable of creating functional isomorphs of some biological organs. For example, we can create functional isomorphs of the human heart, i.e. mechanical devices that replicate the functionality of the heart. But that’s because everything we need to know about the functionality of the heart is externally accessible (i.e. accessible from the third-person perspective). Not everything about consciousness is accessible from that perspective.
6. Pigliucci on the Identity Question
After his lengthy discussion of the consciousness issue, Pigliucci has rather less to say about the identity issue. This isn’t surprising. If you don’t think an upload is likely to be conscious, then you are unlikely to think that it will preserve your identity. But Pigliucci is sceptical even if the consciousness issue is set to the side.
His argument focuses on the difference between destructive and non-destructive uploading. The former involves three steps: brain scan, mechanical reconstruction of the brain, and destruction of the original brain. The latter just involves the first two of those steps. Most people would agree that in the latter case your identity is not transferred to the upload. Instead, the upload is just a copy or clone of you. But if that’s what they believe about the latter case, why wouldn’t they believe it about the former too? As Pigliucci puts it:
[I]f the only difference between the two cases is that in one the original is destroyed, then how on earth can we avoid the conclusion that when it comes to destructive uploading we just committed suicide (or murder, as the case may be)? After all, ex hypothesi there is no substantive differences between destructive and non-destructive uploading in terms of end results…I realize, of course, that to some philosophers this may seem far too simple a solution to what they regard as an intricate metaphysical problem. But sometimes even philosophers agree that problems need to be dis-solved, not solved [he then quotes from Wittgenstein].
Pigliucci may be pleased with this simple, common-sensical solution to the identity issue, but I am less impressed. This is for two reasons. First, Chalmers made the exact same argument in relation to non-destructive and destructive uploading — so Pigliucci isn’t adding anything to the discussion here. Second, this criticism ignores the gradual uploading scenario. It was that scenario that Chalmers thought might allow for identity to be preserved. So I’d have to say Pigliucci has failed to engage the issue. If this were a formal debate, the points would go to Chalmers. That’s not to say that Chalmers is right; it’s just to say that we have been given no reason to suppose he is wrong.
To sum up, Pigliucci is much more pessimistic than Chalmers. He thinks it unlikely that an upload would be conscious. This is because the computational theory of mind is flawed, and because we don’t know what the material constraints on consciousness might be. He is also pessimistic about the prospect of identity being preserved through uploading, believing it is more likely to result in death or duplication.
I have suggested that Pigliucci may be right when it comes to consciousness: whatever the merits of the computational theory of mind, it is true that we don’t know what it would take to build a functional isomorph of the human brain. But I have also suggested that he misses the point when it comes to identity.