The Intuitional Problem of Consciousness


Could a computer ever be conscious? I think so, at least in principle.

Scientia Salon has seen a number of very interesting discussions on this theme which have unfortunately failed to shift anybody’s position [1]. That much is to be expected. The trouble is that the two sides seem to be speaking two different languages, each appearing obtuse, evasive or disingenuous to the other (although, it has to be said, the conversation was very civil). I think this is because the two camps have radically different intuitions, so that what seems obvious to one side is anything but to the other. It’s important to keep this in mind, and to understand that the other side’s failure to follow the seemingly unassailable logic of your argument doesn’t mean they’re in denial, ideologically prejudiced or plain dumb.

The more formal critiques of computational consciousness include Searle’s Chinese Room [2] and the Lucas-Penrose argument [3]. While these are certainly very interesting ideas to discuss, it seems to me that such discussion all too often ends in frustration, as the debate is undermined by fundamental differences in intuition.

And so my goal in this article is not to discuss any of the more prominent arguments but to explore our conflicting intuitions. I’m not hoping to persuade anybody that computers can be conscious but rather to explain as well as I can my reasons for intuiting that they can, as well as my interpretation of the intuitions that lead others to skepticism. I am hoping to show that mine is at least a coherent position, and in particular that it is not as obviously wrong as it may appear to some.

I also want to make clear that I make no claims about the feasibility or attainability of general artificial intelligence. I am not one of those who think the Singularity is around the corner — my concern is only to explore one view of what consciousness is.

Let me start with some empirical assumptions which I think are probably true, but admit could be false.

Empirical assumption 1: I assume naturalism. If your objection to computationalism comes from a belief that you have a supernatural soul anchored to your brain, this discussion is simply not for you.

Empirical assumption 2: The laws of physics are computable, that is, any physical process can be simulated to any desired degree of precision by a computer [4].
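To make “any desired degree of precision” concrete, here is a minimal sketch (my own toy example, not part of the formal assumption): a simulated spring whose error against the exact solution shrinks as the time step shrinks. The particular system, integrator and parameters are illustrative assumptions only.

```python
import math

def simulate_oscillator(dt, t_end=10.0, x0=1.0, v0=0.0):
    """Integrate x'' = -x (a unit-mass spring) with velocity Verlet."""
    steps = round(t_end / dt)
    x, v = x0, v0
    for _ in range(steps):
        a = -x                        # Hooke's law with k/m = 1
        x_new = x + v * dt + 0.5 * a * dt * dt
        v += 0.5 * (a + -x_new) * dt  # average of old and new acceleration
        x = x_new
    return x

exact = math.cos(10.0)                # closed form: x(t) = cos(t)
for dt in (0.1, 0.01, 0.001):
    err = abs(simulate_oscillator(dt) - exact)
    print(f"dt={dt:>6}: |error| = {err:.2e}")  # error falls as dt falls
```

Nothing hangs on the choice of integrator; the point is only that a smaller step buys more precision, which is all that assumption 2 requires of physics at large.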

It should soon become clear that these assumptions entail that it ought to be possible in principle to make a computer with the outward appearance of intelligence. If we take a strictly behaviouristic interpretation of intelligence (as I will from now on), it therefore seems to be relatively uncontroversial that computers can be intelligent, though this is certainly distinct from the stronger claim that they can be conscious.

The reason I am a computationalist has to do with a very straightforward thought experiment. The central idea involves conducting a simulation of a person, and so before I get too deep into it I need to address an objection that has often been raised (in this corner of the Internet at least), and this is the observation that a simulation of X is not X.

Let me concede right off the bat that a simulation of X is often not X. This is particularly clear if X is a substance. A simulation of photosynthesis may produce simulated sugar, but simulated sugar cannot sweeten your coffee! However, I suggest that sugar is not a particularly good analogy for consciousness because consciousness is clearly not a substance. We can’t after all have a test tube full of consciousness.

It would seem instead that consciousness must be a property of some kind. It is certainly true that physical properties are not usually exhibited by simulations. A simulation of a waterfall is not wet, a simulation of a fire is not hot, and a virtual black hole is not going to spaghettify [5] you any time soon. However, I think that there are some properties which are not physical in this way, and these may be preserved in virtualisation. Orderliness, complexity, elegance and even intelligent, intentional behaviour can be just as evident in simulations as they are in physical things. I propose that such properties be called abstract properties.

At this point, it would seem that consciousness could be either a physical property (like temperature) or an abstract property (like complexity). Indeed this seems to be one of the major points of departure between computationalists and anti-computationalists. I may not be able to persuade you that consciousness is an abstract property, but it would seem to me that the possibility is worthy of some consideration. If there are exceptions to the maxim that a simulation of X is not X, then consciousness could be one of them.

It seems obvious that we need to distinguish between physical properties and abstract properties, so let’s try to elaborate on that distinction. What jumps out at me straight away is that physical properties seem to involve physical forces and materials, while abstract properties seem to have more to do with pattern or form. To me that suggests that consciousness is the latter, but let’s see if we can delve deeper.

If we allow that we can distinguish between the physical and the virtual [6], it seems that physical properties are those which can directly affect physical objects, and so can register on physical detectors. Such detectors include our own senses as well as devices such as thermometers and Geiger counters. In contrast, abstract properties seem to be those which cannot directly interact with physical objects. If they interact with matter at all, it is only when they are perceived by a mind or detected by a computer program. Since there is no such thing as a consciousness detector, and very little reason to think that any such device can ever be built, consciousness does not feel like a physical property to me. Indeed, if consciousness is detected directly by anything at all, it is only by the mind that is conscious. For me, this very strongly suggests that consciousness is an abstract property.

Whether abstract or physical, consciousness is arguably unlike all the other properties discussed so far. Indeed it may be in a category all its own, because it is uniquely subjective. As far as we know, the only observer that has direct evidence of the consciousness of any entity is that very same entity. Whether or not consciousness is like complexity, orderliness or intelligence, there does seem to be enough reason to at least consider the suggestion that a computational process might in fact be conscious just as we are, if only because it is not obviously analogous to physical properties such as temperature or mass.

I hope that I have at least earned the right to ask you to entertain for a while the idea of a simulated person. If not, I implore you to bear with me anyway!

If naturalism is true, and if the laws of physics are computable, then it should be possible to simulate any physical system to any desired precision. Let’s suppose that the physical system we want to simulate is the room in which you are currently sitting [7]. The simulation is set up with the state of every particle in the room including those in your body [8].

Assuming that you accept for the sake of argument this rather fantastic, unfeasible premise, this virtual room includes within it a structure which corresponds to your body, and this body contains a virtual brain. The simulation may not be an absolutely perfect recreation (indeed quantum mechanics would seem to preclude that possibility), but, virtual/physical distinctions aside, it should be much more like you than your twin would be. Nothing ‘Virtual You’ does will be out of character for you. Anything you can do physically it can do virtually and vice versa. Everything you know it claims to know, everything you like it claims to like and so on.

But I have only established that Virtual You behaves like physical you. So far, I don’t think this is particularly controversial. The question is whether it has an inner mental life. This, again, is where intuitions divide. Computationalists think that it is abundantly clear that it must be conscious, while the opposite claim is just as evident to anti-computationalists. This may be an impasse, but I can at least outline some reasons for preferring computationalism.

Rather than asking whether it can be conscious, let’s first ask what seems to me a somewhat simpler question: can it believe? On this, opinions divide largely along the same party lines. Perhaps I cannot convince you that it believes, but it seems clear to me that it must at the very least have some kind of pseudo-belief: a virtual, functional kind of belief we can attribute to it whether or not it actually believes, such that Virtual You pseudo-believes the things that you actually believe. Let’s adopt the convention of referring to this kind of pseudo-belief as ‘belief*’, to be clear that I am not merely stipulating a definition of belief to suit my ends. I will continue to use this asterisk convention [9] to distinguish functional, objectively identifiable concepts from the intuitive, subjective kind which apply only to conscious minds.

A believer* speaks and behaves as if it believes a certain proposition to be true. Within the brain of the believer* can be found an apparent representation of that proposition which, though virtual, is otherwise just like the one we presume must be in your brain. It is debatable whether this is an actual representation – anti-computationalists might suggest that it is not – but it is at least a ‘representation*’ in that it corresponds to objects in the world (it might even be said to ‘refer*’ to them) and is modified appropriately as Virtual You acquires new information about those objects. From a functional perspective, beliefs* do everything that beliefs can do, and they even have analogous (virtual) biological representations. What, then, is the difference between beliefs* and beliefs?
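Before trying to answer that, it may help to fix what ‘functional’ means here with a deliberately crude sketch. Every name in it (the class, the confidence values, the update rule) is my own illustrative assumption; the point is only that a representation* can be stored, reported and revised without the code taking any stand on inner experience.

```python
class BelieverStar:
    """A minimal believer*: it stores, reports and revises propositions.

    Whether its states are beliefs or merely beliefs* is exactly the
    question at issue; the code itself is silent on that.
    """

    def __init__(self):
        self._beliefs = {}  # proposition -> degree of confidence in [0, 1]

    def learn(self, proposition, confidence=1.0):
        """Acquire or revise a representation* as new information arrives."""
        self._beliefs[proposition] = confidence

    def believes(self, proposition, threshold=0.5):
        """Speak and behave as if the proposition is held to be true."""
        return self._beliefs.get(proposition, 0.0) >= threshold

virtual_you = BelieverStar()
virtual_you.learn("the kettle is on", 0.9)
print(virtual_you.believes("the kettle is on"))  # True
virtual_you.learn("the kettle is on", 0.1)       # new evidence revises it
print(virtual_you.believes("the kettle is on"))  # False
```

A real believer* would of course be a whole simulated brain rather than a dictionary; the sketch only fixes what ‘functional’ means here: storage, report and revision.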

The difference is not clear to me unless we stipulate that a belief can only take place in a physical biological brain. I imagine the anti-computationalist would argue that beliefs* are formal, syntactic things whereas beliefs have semantics, followed by the assertion that semantics cannot be derived from syntax. This latter claim is an oft-repeated refrain from the opponents of computationalism, but it seems to me to be an open question. If beliefs are no more than beliefs*, then perhaps a formal account of semantics is possible after all.

It might help if I can explain a little why I think beliefs and beliefs* are the same thing.

If beliefs* don’t have true semantics, they at least behave quite as if they do, since it would seem that we can identify within the simulation the analogues of references and representations. So, on the one side we have beliefs, references, representations, semantics, understanding and so on, and on the other we have beliefs*, references*, representations*, semantics*, understanding* and so on. Everything that can be said about biological minds can be said about virtual minds* as long as we suffix our terms with ‘*’ as required. We therefore end up with two distinct models of mind which are conceptually indistinguishable apart from the labels we apply and the insistence that one is real while the other is fake. The computationalist intuition is that these two indistinguishable systems are in fact the same, while the anti-computationalist intuition is that they are somehow different.
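One toy way to picture the impasse (an illustration of the labelling point, nothing more): two implementations that differ only in their names are indistinguishable to any test that probes behaviour rather than labels.

```python
class Belief:
    """The 'real' thing, according to the anti-computationalist."""
    def __init__(self, proposition):
        self.proposition = proposition

    def refers_to(self):
        return self.proposition

class BeliefStar:
    """The 'fake' thing: identical except for its label."""
    def __init__(self, proposition):
        self.proposition = proposition

    def refers_to(self):
        return self.proposition

a, b = Belief("snow is white"), BeliefStar("snow is white")
print(a.refers_to() == b.refers_to())        # True: same behaviour
print(type(a).__name__ == type(b).__name__)  # False: only the label differs
```

The analogy is imperfect, but it captures what the computationalist sees: once behaviour, structure and relations all match, the label is the only difference left.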

If computationalism can answer all the other objections against it (and I think it can), then it seems much more parsimonious to conclude that the computationalist intuition is correct. Parsimony is only a rule of thumb, and it can lead us astray, but all else being equal it leads us to truth more often than not. This is why I think computers can believe.

Unfortunately, belief is not enough — far from it. We also need qualia, i.e., the ineffable, indescribable whatness of sensory experience: the redness of red, the taste and feel of a hot morsel in the mouth, the agony of pain. Intuitions about qualia are perhaps the greatest factor leading so many to regard computationalism as absurd. It is very difficult to imagine that any machine could have real experiences the way we do, and so any appearance of consciousness in such a thing is assumed to be an illusion.

However, if Virtual You believes at all, it is clear that Virtual You believes it is experiencing qualia. If you ask it whether it feels pain, it will answer in the affirmative. It will claim to recall experiences from your childhood, and it will not notice any difference between the kinds of sensations it claims to feel today and those it remembers from long ago. If you are an anti-computationalist, it will be very difficult if not impossible to persuade Virtual You that it is a simulation, so convinced will it be of its false qualia (or qualia*). With this being the case, I propose that you have no way of being certain that you are not such a simulation yourself. If you were a simulation, you would still believe you were experiencing the very same qualia you perceive right now, and in a world where such simulations are possible you would have no other evidence and no justification for holding yourself to be real.

Neurology and psychology may yet have a lot of light to shed on qualia. I am certainly not claiming to understand exactly how they work, but conceptually or metaphysically it doesn’t seem to me that there is a real problem here. If you can understand how a simulation could believe itself to be experiencing qualia, then I propose that there is no mystery to explain. If there is no principled way to justify the belief that your brain is more than a sophisticated biological computer, then the possibility remains open that it is just that.

When confronted with such arguments, it seems to me that the anti-computationalist either makes a straightforward appeal to intuition or makes a circular argument of some kind, one which boils down to the assertion that qualia really are mysterious phenomena which cannot be explained away so easily, or that semantics are more than semantics*. When computationalists fail to agree, we are often accused of being disingenuous or obtuse.

But computationalists are not being disingenuous or obtuse. We just don’t see the distinction the other camp sees between semantics and semantics* or qualia and qualia*. It is becoming increasingly clear to me that the source of the dispute is not ignorance or stupidity or muddled thinking in either camp, but radically different fundamental intuitions. At least one set of intuitions is wrong, and it’s hard to say which. Either computationalists are missing some mental faculty of perception, or anti-computationalists are experiencing some kind of illusion.

It would seem the way forward is to put as little weight on intuitions as possible. The computationalist account has the advantage of parsimony, dissolving the problem of connecting semantics to syntax and explaining what properties of brains enable consciousness (i.e., logical structure). None of this means that computationalism is correct, but it suggests that it should be taken very seriously unless a fatal flaw can be identified which does not ultimately rest on anti-computationalist intuitions.

The famous arguments against computationalism alluded to earlier may present such fatal flaws. Their proponents certainly think so, while I obviously disagree, but until we understand each other’s intuitions a little better there is perhaps little point in having such discussions at all.

_____

[1] The debate about consciousness was most vigorous on the following articles, but also creeps into other discussions quite frequently: The Turing test doesn’t matter, by Massimo Pigliucci, 12 June 2014; What to do about consciousness, by Mike Trites, 23 April 2014; My philosophy, so far — part II, by Massimo Pigliucci, 22 May 2014.

[2] This oft-discussed thought experiment is part of a family of arguments that seek to disprove computationalism by positing computational systems which our intuition suggests cannot understand. Other examples include Ned Block’s homunculi-headed robot and China Brain. See the Stanford Encyclopedia of Philosophy on the Chinese Room; here is an animated 60-second short explaining the basic idea.

[3] The Lucas-Penrose argument concludes that human intelligence cannot be reduced to mechanism because, by Gödel’s incompleteness theorems, any consistent formal system is unable to prove all true statements of arithmetic, whereas humans can allegedly see the truth of the unprovable Gödel sentences. The argument fails (in my view) because it assumes without evidence that human beings are not similarly limited.

[4] It must be said that the assumption that the laws of physics are computable is doubted by certain anti-computationalists, especially those who endorse the Lucas-Penrose argument.

[5] See Wikipedia if you are unfamiliar with this wonderfully evocative technical term.

[6] As trivial as it may seem to be to distinguish between virtual and physical, it may not be so straightforward if computationalism is true and we happen to live in a simulation!

[7] Such a detailed simulation is, of course, entirely unfeasible. I use it only to establish a point of principle about the nature of consciousness, so I encourage you not to concern yourself too much with practical barriers to implementation, unless of course some physical law makes such a computation physically impossible.

[8] We will also presumably need a crude simulation of the exterior of the room. We don’t want to run out of oxygen or radiate heat away to a vacuum, and the room will need to be supported so that objects within the room are bound to the floor by gravity.

[9] This convention is adapted from one established in the excellent paper: Field, Hartry (1978). “Mental Representation.” Erkenntnis 13 (July): 9-61.

###

Mark O’Brien is a software developer and amateur philosopher who despite never having achieved anything in the field has an unjustified confidence in his own opinions and sees it as his sacred duty to share them with the world. The world has yet to notice. You might very well think that his pseudonymous alter ego is a regular on Scientia Salon, but he couldn’t possibly comment. He is Irish and lives in Aberdeen, Scotland.

This article originally appeared here: http://scientiasalon.wordpress.com/2014/09/01/the-intuitional-problem-of-consciousness/