Can Bots Feel Joy?

Robot heads being programmed to feel emotion

This is a separate question from whether machines can be intelligent, or whether they can act like they feel. The question is whether machines — if suitably constructed and programmed — can have awareness, passion, subjective experience … consciousness?

I certainly think so, but generally speaking there is no consensus among experts. It’s fair to say that — even without introducing machines into the picture — consciousness is without doubt one of the most confused notions in the lexicon of modern science and philosophy.

Given the thorny and contentious nature of the subject, I’m not quite sure why I took it upon myself to organize a workshop on Machine Consciousness… but earlier this year, that’s exactly what I did. The Machine Consciousness Workshop was held on June 14, in Hong Kong, as part of the larger Toward a Science of Consciousness conference and Asia Consciousness Festival. The TSC conference as a whole attracted hundreds of participants, but only a couple dozen dared to venture into the riskier domain of machine consciousness. Among these brave souls, I reckon there were more than a couple dozen views on the matter at hand!

First we have the materialists. Joscha Bach — a German AI researcher and entrepreneur and the author of Principles of Synthetic Intelligence — summarizes their perspective elegantly: “The notion of the mind as an information processing system, capable of forming an integrated self-and-world-model, modulated by emotional configurations and driven by a finite set of motivational urges, is sufficient to remove the miracles [that some associate with consciousness].” Daniel Dennett is the best known modern advocate of the materialist view. According to his book Consciousness Explained, it’s patently obvious that machines can be conscious in the same sense as humans if they’re constructed and programmed correctly.

Conscious robots. Photo courtesy of conscious-robots.com

Paul Fahn, an AI and robotics researcher at Samsung Electronics, presented this perspective at the MC Workshop in the context of his work on emotional robots. His core idea is that if a robot brain makes emotional decisions using a random or pseudorandom “preference oracle” similar to the one in a human brain, it will likely be emotional in roughly the same sense that humans are — and possessed of its own distinct but equally valid form of consciousness. Fahn emphasizes the need for empirical tests to measure consciousness, and a talk by Raúl Arrabales at the workshop took concrete steps in this direction, describing a series of criteria one can apply to an intelligent system to assess its level of consciousness.
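
To make the idea concrete, here is a minimal sketch, in Python, of what a pseudorandom preference oracle might look like: an internal emotional state biases and perturbs how a robot ranks its options, and outcomes feed back into that state. This is my own illustration rather than Fahn's actual architecture; the class, the mood variables and the weighting scheme are all invented for the example.

```python
import random


class PreferenceOracle:
    """Toy pseudorandom 'preference oracle' (illustrative sketch only,
    not Fahn's actual design). An internal emotional state biases and
    perturbs how candidate actions are ranked."""

    def __init__(self, seed=None):
        self.rng = random.Random(seed)                 # pseudorandom source
        self.mood = {"valence": 0.0, "arousal": 0.5}   # crude emotional state

    def score(self, option):
        # Hypothetical scoring: base utility, a mood-weighted risk term,
        # and arousal-scaled noise standing in for the oracle's randomness.
        noise = self.rng.gauss(0, self.mood["arousal"])
        return option["utility"] + self.mood["valence"] * option.get("risk", 0.0) + noise

    def choose(self, options):
        # Pick the option with the highest perturbed score.
        return max(options, key=self.score)

    def update_mood(self, reward):
        # Outcomes feed back into the emotional state.
        self.mood["valence"] = 0.9 * self.mood["valence"] + 0.1 * reward


oracle = PreferenceOracle(seed=42)
options = [{"name": "explore", "utility": 0.4, "risk": 0.8},
           {"name": "recharge", "utility": 0.6, "risk": 0.1}]
chosen = oracle.choose(options)
oracle.update_mood(1.0 if chosen["name"] == "recharge" else -0.5)
print(chosen["name"])
```

Whether anything like this deserves the word “emotional” is, of course, exactly the point under debate.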

But some, less happy with the materialist view, have referred to Dennett’s book as “Consciousness Explained Away.” Neuropsychologist Allan Combs has a new book in press called Consciousness Explained Better, in which he reviews a host of states of consciousness, including those accessed by mystics and meditators as well as those we feel in various unusual states of mind, such as dreaming, sleeping, dying, etc. As a panpsychist he sees consciousness as the basic material of the cosmos: he sees rocks, bugs, cows, humans and machines as differing manifestations of universal consciousness.

To a panpsychist, the question isn’t whether machines can be conscious, but whether they can manifest universal consciousness in a manner similar to humans. And the question of whether consciousness can be empirically measured is not that critical, because there’s no reason to assume the universe as a whole is understandable in terms of finite sets of finite data-items, of the sort that science works with. Setting aside mystical notions, pure mathematics points to all manner of massively infinite constructs that, if they “existed in reality,” could never be probed via scientific measurements.

The coauthor of Combs’ workshop talk, creativity theorist Liane Gabora, holds the view that machines are conscious, but will never be nearly as conscious as humans. “I put my money on the argument that living things are more conscious than rocks or computers because they amplify consciousness by being self-organizing, self-mending, and autopoietic; that is, the whole emerges through interactions amongst the parts. And the human mind amplifies consciousness even further through a second level of autopoietic structure. Just as a body spontaneously repairs itself when wounded, if someone does something out of character or something unexpected happens, the mind spontaneously tries to repair its model of the world to account for this turn of events. This continuous building and rebuilding of a mental model of the world, and thus reconstituting of autopoietic structure, locally amplifies consciousness. Until computers do this, I don’t think their consciousness will go much beyond that of a rock.”

As a panpsychist myself, I find Liane’s view sympathetic, but I’m much more optimistic than she is that complex, self-organizing autopoietic structure can be achieved in computer programs. Indeed, that is one of the goals of my own AI research project!
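
To caricature the loop Gabora describes, here is a deliberately simple Python sketch of a world model that nudges its expectations when observations are unsurprising and “repairs” a belief when something out of character happens. The scalar beliefs, the surprise threshold and the repair rule are all assumptions made for illustration; this is a pointer to the prediction-surprise-repair loop, not a model of autopoiesis.

```python
class ToyWorldModel:
    """Minimal sketch (my own illustration, not Gabora's formalism) of a
    self-repairing world model: beliefs are scalar expectations, and a
    surprising observation triggers local 'repair' of the model."""

    def __init__(self, surprise_threshold=0.5):
        self.beliefs = {}                        # feature -> expected value
        self.surprise_threshold = surprise_threshold

    def observe(self, feature, value):
        expected = self.beliefs.get(feature)
        if expected is None:
            self.beliefs[feature] = value        # first encounter: just record it
            return
        if abs(value - expected) > self.surprise_threshold:
            self.repair(feature, value)          # out-of-character event: rebuild locally
        else:
            # unsurprising: nudge the expectation slightly
            self.beliefs[feature] = 0.9 * expected + 0.1 * value

    def repair(self, feature, value):
        # Crude stand-in for autopoietic repair: re-anchor the belief on
        # the new evidence instead of averaging it in.
        self.beliefs[feature] = value


model = ToyWorldModel()
model.observe("friend_mood", 0.8)    # builds an expectation
model.observe("friend_mood", -0.6)   # unexpected behavior triggers repair
print(model.beliefs)
```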

Then there are the quantum consciousness folks, such as Stuart Hameroff, who gave the keynote speech at the Cognitive Informatics conference in Hong Kong, the day after the MC workshop. An MD anesthesiologist, Hameroff was seduced into consciousness theory by wondering about the neurobiology by which anesthetics bring about loss of consciousness. Together with famed physicist Roger Penrose, Hameroff developed a theory that consciousness arises via quantum-mechanical effects in microtubules, structures that form part of the internal skeleton (cytoskeleton) of brain cells.

A common joke about the Penrose-Hameroff theory is: “No one understands quantum theory, and no one understands consciousness, so the two must be equal!” But clearly the theory’s intuitive appeal goes beyond this: quantum nonlocality implies a form of interconnectedness of all parts of the cosmos, which resonates well with panpsychism.

Robot. Courtesy of Aldebaran Robotics

Penrose believes that human consciousness enables problem-solving beyond anything a computer can do. To bypass theorems showing that mere quantum computing would not provide this kind of capability, he proposes “quantum gravity computing,” based on an as-yet unknown unified theory of quantum physics and gravitation. Most scientists view this as fascinating, highly technical science fiction.

Regarding panpsychism, Hameroff says, “I disagree only slightly. I would say that what is omnipresent in the universe is proto-consciousness…. Penrose and I say proto-consciousness is embedded as irreducible components of fundamental spacetime geometry, i.e. the Planck scale, which does indeed pervade the universe.” He views consciousness per se as a special manifestation of proto-consciousness: “I don’t think a rock necessarily has the proper makeup for the type of quantum state reduction required for consciousness.”

A fascinating twist is suggested by recent work by Diederik Aerts, Liane Gabora, Harald Atmanspacher and others, arguing that “being quantum” is more about being susceptible to multiple, fundamentally incompatible interpretations than about specific physical dynamics. In this sense, consciousness could be quantum even if the brain doesn’t display nonclassical microphysical phenomena like quantum nonlocality.

Perhaps the savviest view at the MC workshop was expressed by multidisciplinary scientist Hugo de Garis, who leads an AI and robotics effort called the Conscious Robotics Project at Xiamen University in China: “Explaining what consciousness is, how it evolved and what role it plays is probably neuroscience’s greatest challenge. If someone were to ask me what I thought consciousness is, I would say that I don’t even have the conceptual terms to even begin to provide an answer.”

One interesting possibility is that we might create human-level, humanlike AI systems before we puzzle out the mysteries of consciousness. These AIs might puzzle over their own consciousness, much as we do over ours. Perhaps at the 2019 or 2029 Machine Consciousness workshop, AIs will sit alongside humans, collectively debating the nature of awareness. One envisions a robot consciousness researcher standing at the podium, sternly presenting his lecture entitled “Can Meat Feel Joy?”

Ben Goertzel is the CEO of AI companies Novamente and Biomind, a math Ph.D., writer, philosopher, musician, and all-around futurist maniac.
