Hidden Smiles and the Desire of a Conscious Machine

Conscious machines are everywhere in popular culture – countless books, films and TV shows are set in a future where humans have robot or computer companions. A world with intelligent machines is something nearly everyone can readily picture and, while we may not know how to create these machines at present, it is generally assumed that we will recognize them when they arrive.

If a conscious machine were created, the argument goes, it could simply tell us itself. Or if it were sneaky, and chose not to, then we would quickly realize it was in our midst because of the way it acted. However, recent research does little to suggest these assumptions are valid. What’s more, we must ask whether it will ever be possible to identify conscious machines at all. If we cannot tell the difference, what does that say about our ideas of consciousness itself?

We are all familiar with characters in popular culture that embody the idea of the intelligent machine. They come in all shapes and sizes, from the terrifying and evil to the benign or even comedic: all-knowing computers such as HAL 9000 in 2001: A Space Odyssey, unstoppable killers such as the Terminator, self-doubting androids such as Rick Deckard in Blade Runner, even drunk robot companions like Bender in Futurama.

Going as far back as Mary Shelley’s Frankenstein we have seen the idea of a conscious being animated by human engineering. Samuel Butler’s article “Darwin among the Machines”, published in 1863, already presaged many ideas in the realm of AI.

The concept that our mastery of science will allow us to create intelligence has driven an explosion of new discovery over the last century. The AI industry is a strong and burgeoning sector of Computer Science and has generated countless useful technologies that already make our lives far easier. We are fast approaching a state where computers can outstrip humans in even relatively complex or ‘emotional’ tasks.

Computers that can read your smile

Recent research at MIT has led to the development of a computer system that is far better than human volunteers at recognizing subtle differences in human smiles. The system examines subjects’ facial contractions and the differences between smiles of frustration and smiles of delight.

The computer system analyses video of subjects as they carry out two different tasks. In one, they see a cute picture of a baby, which makes them smile with delight; in the other, they must fill out an online form deliberately designed to be frustrating. A number of different features, such as the length of time each smile lasts and the facial muscles that are used, are analysed.

What the researchers found was that while human volunteers were fairly accurate in predicting which smiles resulted from delight, they were very poor at identifying frustrated smiles. Overall, human volunteers were correct less than 50 per cent of the time; the computer system, in comparison, achieved 92 per cent accuracy.
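The MIT system itself works on facial-action features extracted from video, which is well beyond a short sketch. But the basic idea – label a smile by comparing its measured features against examples of each kind – can be illustrated with a toy nearest-centroid classifier. Everything here is hypothetical: the two features (smile duration and lip-corner pull) and all the numbers are made up for illustration, not taken from the study.

```python
# Toy illustration (not the MIT system): classify smiles as "delight"
# vs "frustration" from two hypothetical features -- smile duration in
# seconds and peak lip-corner pull in arbitrary units. A nearest-centroid
# classifier stands in for the real model.

def centroid(rows):
    """Mean feature vector of a list of (duration, pull) samples."""
    n = len(rows)
    return tuple(sum(r[i] for r in rows) / n for i in range(len(rows[0])))

def classify(sample, centroids):
    """Return the label whose centroid is closest (squared Euclidean)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(sample, centroids[label]))

# Made-up training data: delighted smiles tend to build gradually and
# last longer; frustrated smiles are short and abrupt.
training = {
    "delight":     [(2.5, 0.9), (3.0, 0.8), (2.8, 0.85)],
    "frustration": [(0.8, 0.7), (0.6, 0.75), (1.0, 0.65)],
}
centroids = {label: centroid(rows) for label, rows in training.items()}

print(classify((2.7, 0.9), centroids))   # a long, gradual smile -> delight
print(classify((0.7, 0.7), centroids))   # a short, abrupt smile -> frustration
```

The point of the sketch is only that the machine’s “judgement” reduces to measuring features humans find hard to attend to consciously – which is exactly why it can beat human volunteers without anything resembling understanding.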

The idea that a computer can be better than a human at guessing intention in another human being raises a lot of questions about how we identify consciousness. The above example is evidently just a small step in one very narrow area of artificial intelligence but it shows the way in which AI is already encroaching on areas that we traditionally assumed were tied to human intelligence or conscious thought.

We assume other humans are intelligent because we recognize behaviour, language and subtle reactions, such as smiles, that suggest others have the same conscious emotional core as we ourselves feel. We look for intentions or desires behind their physical actions, such as smiling, and base our model on this. We surmise that humans have consciousness since we identify the feelings that we have with the reactions that people around us have.

If computers can reach a stage where they are better at identifying and categorizing these reactions, then they must have some claim to being able to judge consciousness. If a computer can correctly predict that someone was feeling frustrated while another human gets it wrong, then surely the computer is in some way better at understanding the frustration of the subject?

Understanding the intention of an invisible smile

Now there are numerous projects around the world developing small specialized AI systems, such as the above example, and each separately is quite some distance from providing anything approaching consciousness or even what we would generally call intelligence. However, if these are added together they could feasibly create something that humans might view as a conscious machine.

With enough small steps it is entirely possible that a robot simulacrum could be created that mimics human behaviour to the extent that every biological human on the planet is fooled. If this is the case then the machine’s actions will be indistinguishable from human consciousness.

This however is only one scenario and one that is by no means the most certain. Alongside the possibility of a replica human-like consciousness there are infinite other possible forms that a consciousness may take.

Just as the basic instincts of humans – keep breathing, eat food, find shelter – give no hint that humans as a species would create symphonies, cathedrals or spaceships, so for machines we may know the basic instruction set for individual components and how they operate, but we may not appreciate the higher goals they aim for.

Given that any conscious machine is likely to be constructed from different material to humans, that it will run on a different source of fuel, and that it will have different dependencies and a different form of ‘conception’ and ‘birth’ into the world, it is highly likely that it will have different goals or intentions.

Just as human beings may have basic instincts or urges that they do not always follow so too any intelligent machine may have an explicit purpose that is different from the goals it ultimately seeks in practice. We may write code that makes computers act in a certain way but when they reach a certain stage of complexity the ultimate outcomes of these actions may be impossible to predict.

A conscious machine may have a goal that requires numerous individual steps to achieve. So many, in fact, that when we as humans consider all of its actions we might not be able to discern that they lead to any sort of goal at all. There is no reason to presume that any machine consciousness will necessarily be similar to that of humans. In fact it is quite likely that it will be entirely alien. In this case the goals, intentions or desires of this machine will be in a sense invisible to us.

A conscious machine will take actions in accordance with its physical make-up – just as humans do. Any such machine will be governed by the same physical laws as we are, and it will perform certain behaviours based on its internal structure. It will interact with the world around it and react to the data it receives in a logical way.

To us these actions will seem entirely mechanical and ordinary, but the ultimate goal behind them may be completely hidden, non-obvious or even inconceivable. These goals may take huge periods of time – hundreds or thousands of years – to achieve, or may never be fully realized.

If we do not recognize the hidden goals that a conscious machine has, then we will never be able to appreciate it as any more than a blind collection of processes. Take another example from our own history: we regard it as a sign of our intelligence that we are the only animal to master agriculture – that we can control the growth and cultivation of plants to achieve our aims. The cultivation of wheat, for example, enabled us to store easily accessible food, and this is what allowed the first civilizations to develop.

The discovery that grains of wheat could be sown and harvested was a gigantic breakthrough in agriculture, one that has since seen wheat seeds spread all over the globe. We assume that we are very clever to have invented farming. But think of things from a different point of view for a moment. Is it not clever of the wheat plant to attract an animal that will tend to it, an animal that protects it from competitors and scavengers, an animal that waters it and plants it all around the world?

We do all this without the plant having to do a single thing. To the wheat we are a machine that has worked for it for millennia, a machine that has helped it spread across the entire world. We are a machine that even now works to improve the plant’s structure, making it grow faster and stronger. We engineer better, more resistant strains of wheat, helping the plant to flourish. Is that not a clever use of humans by the wheat plant? It raises the question – who is using whom?

Of course, no one would credit wheat plants with consciousness on this basis but this is precisely because we do not credit wheat plants with the ability to have goals of this nature. From our point of view the survival of the wheat plant is something that we manipulate to help us achieve our higher aims. From another perspective the survival of human beings is something the wheat plant manipulates for its own aims.

Equally, as AI develops we have our own perspective on the role machines play in supporting our higher aims, but there is a much stronger chance that machines, unlike wheat, may actually develop hidden intentions. We increasingly face a world where machines are responsible for pseudo-conscious decisions and actions that affect our environment. If such a machine develops a form of consciousness that is suitably different from ours, or unrecognizable from our perspective, then we risk never realizing what its true goals are or how they relate to the actions taking place in the world around us.

7 Responses

  1. Thanks for an insightful article and great replies. From my perspective you are taking on the task of NOT anthropomorphizing machines. An effort I’ve rarely seen get so specific. Let me offer one thought. It may be that consciousness is not something that you can speed up or slow down. It is just possible that consciousness is on a continuum that is attached to universal laws. Example, all fauna, that we know, are linked in what seems to be a pretty consistent emotional ‘speed.’ Polar bears, cockroaches, and humans share the same ‘speed of emotion.’ Perhaps machines will necessarily have to as well.

    • malcolm says:

      Thanks for the comment. I think the issue of emotional ‘speed’ is interesting. I’m not sure I understand exactly your definition of it but there definitely do seem to be norms in the way different organisms react to stimulus at the emotional level. I would argue though that there is also huge variation.

      Some reactions, like laughter or tears, are relatively ‘fast’ or short lived and might last for a burst of a few minutes. Other reactions such as depression may last for years. Obviously these vary but there are limits – I’ve never heard of somebody laughing continuously for a month for example.

      I imagine a machine would have something similar and that would in some way be tied to whatever stimulus it received. My point in this article was that we may never know what these emotional reactions are because they are entirely alien to us.

      Note: This is not to say that a machine will have some motives that it hides from us because it doesn’t want us to know. That is an entirely different question. If a machine were to have an ulterior motive that it knew humans would disapprove of and therefore kept hidden, say it wanted to kill everyone with brown hair, then we could, in theory, recognize that as a goal once we had enough information.

      It is a different question though if we just cannot see the machine’s motives at all. If we look at all the outcomes of the machine’s actions and truly think of it as a blind collection of processes then we will never see any emotional component.

  2. mw says:

    human survival starts with being a baby and needing to bond with other humans. which is why social cognition is at or near the top of the cognitive hierarchy. studies of stroke victims suggest that there are brain resources that look at everything as objects (including living creatures), and brain resources that look at everything as living (even objects). So it’s not surprising that it’s easy to convince people that there is a man on the moon or that SIRI is alive.

    As part of human consciousness people are constantly modelling other humans’ intentionality. this must be an extremely difficult ability to achieve with software. Otherwise it would be a feature of absolutely every app in existence.

  3. Mark Plus says:

    After seeing videos on YouTube of people having conversations with the Siri apps on their iPhones, I realized that it takes surprisingly little to trick the theory of mind into attributing minds to mindless things. We see this delusion all around us, ranging from god beliefs to SETI to those foolish ghost hunting shows on cable to some geeks’ fantasies about creating “friendly AI.”

    Hasn’t it ever occurred to anyone that the evolutionary process produced the human mind as a contingent adaptation through a pathway which might not ever happen again? That means we have no a priori reason to think that minds apart from ours have to exist elsewhere, and especially not in the machines we could engineer some day.

  4. Hedonic Treader says:

    If a computer can predict that someone was feeling frustrated correctly while another human gets it wrong then surely the computer is in some way better at understanding the frustration of the subject?

    Not necessarily. It may have better pattern recognition for the different smile types, but it doesn’t necessarily have any understanding of the nature or function of frustration or joy.

    In humans, empathy works partially by engaging the same affective-cognitive functions that led the other person to smile, via mirror neurons. We know what these smiles, and their associated emotions, feel like, because we have the same (or very similar) brain architectures.

    A machine might not share these functions, but recognize and maybe even simulate the expressive patterns indicating them. It could detect and classify emotions reliably, even wear them as a deliberate mask to interact with humans, without sharing or caring about the affective emotions at all. Conversely, it could have affective emotions that it doesn’t express in a way we find engaging. An abstract algorithm could suffer silently without our knowledge.

    It seems we need a much more precise analytic handle on affective consciousness to make proper decisions about our technology from now on.

    • Chrontius says:

      I suspect the answer to your conundrum on machine emotion will be mathematically modeling the mirror neuron and putting that into an AI’s kernel.

