David Hanson on the Future of Arts, Design and Robotics: An Interview by Natasha Vita-More

David Hanson and I share a similar background in media, art, and design. We both value new possibilities for human platforms for life extension. Where we differ is in our focus: I designed “Primo Posthuman” as a future body prototype for exploring theoretical ideas regarding regenerative media, nanorobots, and AGI. David, by contrast, is actually building humanoid robots — including the Robokind commercial humanoid robot and a variety of extremely realistic robot heads incorporating unprecedentedly realistic facial expressions and voice. This interview covers some of David’s work in this area, including its exciting broader implications.

Along with the Robokind, Hanson’s celebrated robots have included the Philip K. Dick Android, the walking Einstein portrait Albert-Hubo (in collaboration with KAIST, and pictured below), and Bina48 (to be discussed below). Hanson has received awards and recognition from NASA, NSF, AAAI, Tech Titans (Innovator of the Year), and Cooper Hewitt Design. He has published over 20 peer-reviewed papers with IEEE, Science, Springer, Cog Sci, AAAI, and SPIE, and coauthored The Coming Robot Revolution: Expectations and Fears About Emerging Intelligent, Humanlike Machines (2009).

Natasha:
Let’s narrow in on your work, David. The first thing I’d like to ask you is: How much do AI and/or AGI affect the scope of your work in robotics today, and in what ways could they possibly change the scope of your work in the future?

David:
My work in robotics revolves around Artificial Intelligence and the quest for AGI. The profundity of AGI makes it central to the vision of my entire career.

I make humanlike robots, also known as character robots, and the intelligence is essential to their operation. Without intelligence, they are lifeless. We breathe a bit of life into the robots today using our cognitive system, which integrates numerous A.I. technologies—face perception and tracking, motion tracking, face recognition, speech recognition. My team and I also develop original semantic conversation technology to achieve something like natural conversation with these robots. But the work is difficult, and as exciting and effective as the results are, they are not even close to as fluid as a human. We need better integration of existing A.I. into complete cognitive robotic systems. We need better collaboration among the community, aiming for humanlike machines. We need to adapt these for use in bringing characters to life, and we need an indeterminate amount of additional innovation and discovery to make robots into our friends.
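(Editor’s note: as a rough, hypothetical illustration of the kind of perception-driven loop David describes, and not his team’s actual code, the sketch below uses the open-source OpenCV library (the opencv-python package) to detect faces from a webcam and trigger a canned greeting. The greeting function stands in for the far richer cognitive and dialogue layers he refers to.)

import cv2  # assumes the opencv-python package is installed

# Load OpenCV's bundled Haar cascade for frontal-face detection.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_detector = cv2.CascadeClassifier(cascade_path)

def greet(num_faces):
    # Placeholder for the dialogue/character layer; a real system would route
    # perception events into a full cognitive architecture, not a canned line.
    return "Hello! I can see you." if num_faces > 0 else None

cap = cv2.VideoCapture(0)  # default webcam
try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        utterance = greet(len(faces))
        if utterance:
            print(utterance)
finally:
    cap.release()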

I strongly believe that making machines our friends is critical to making AGI friendly. If humans achieve true AGI, the world will transmute quickly into something radically different, maybe nightmarishly different or perhaps wonderful for us. Either way, such machines will reinvent themselves recursively, accelerating their capacity to reinvent the world explosively. For us to survive in the wake of such an event, it will be critical that the AGI be friendly toward us.
The expressive robotic faces that my team and I build (such as Bina48, Philip K. Dick, Zeno, Alice, Jules, Ibn Sina, Hertz, Einstein, etc.) exist primarily for A.I. to develop social skills, so that AGI may someday come to understand us—our desires, needs, and best interests—and care about us when imagining possible futures. These skills allow the AGI to be socially intelligent.

We need the machines to be friendly in many ways, at many levels. We need them to collaborate well with us. We need them to share our compassionate values towards life, freedom, creativity, preservation of the biosphere, and freedom of information. We need them to look out for our planet, preserve life and literature, and enable the greatest freedom and creativity for every individual, while protecting our world. To do this they must share our values and communicate with us well. They must care about us. To share our values, I propose that they must grow up among us, with cognitive architecture inspired by human cognition.

Social intelligence and empathy involve theory of mind and facial aesthetics. That’s just the way we are wired. We “read minds” through gestures, especially those of the face. Psychologists call this “theory of mind”, and the science and art of endowing machines with these capabilities, known as affective computing, has often focused on using this hardwired aesthetic sense as a natural interface with machines. Robots like mine represent one subset of affective computing, one which tries to tap the neural systems of social interaction with the highest fidelity possible. These are no easy tasks. Neuroscientists estimate that as much as 70% of the human cortex is used in social thinking, and we are indeterminately far from replicating the functionality of the human brain with A.I. But in the meantime, we can make intelligent robots captivating, powerfully communicative, and beloved.

To make robots captivating as characters, my team and I developed a cognitive A.I. framework and original systems for modeling social presence and generating ideas and thoughts in dialogue with people. In addition to our original work, we bridge to and integrate numerous other A.I. systems into this framework. Over the last decade, my team and I have developed narrow AI for social interactions, and designed these works to accrete towards AGI in the future. In general, my team and I spend more time working on intelligent software than any other single activity.

In addition to our direct work on software, my robots serve as platforms—at UCSD, Cambridge, U. Bristol, U. Geneva, and many other institutions—for A.I. development, for cognitive science and psychology research, and for gathering data from encounters with people. We need more science and development in the intelligence of the robots, and I believe we are pushing in the right direction. Considering human intelligence—the only “general” intelligence we know of—AGI will be complex, and will necessarily integrate numerous technologies. Therefore, a widespread effort, collaborating across many institutions, increases the likelihood of success.

I believe the economy can also drive these efforts forward. By making A.I. into characters that please people, as other forms of character arts like cinema or novels do, we can generate revenue and business growth, which can spawn great leaps in technology advancement. And intelligence is required for humanlike robots to really win our hearts. I believe that consumer demand for lovable characters will soon result in a boom time for character A.I. As robots and agents get more endearing, massive profits will drive surges in social A.I. Robots will become increasingly alive as protagonists. They will get ever closer to AGI. This evolutionary feedback loop will result in AGI that is inherently humanlike. It will be coevolved with humans. I believe such machines will inherently be friends with people—understanding us, sharing our values, and caring about us. Like any good protagonist, character A.I. will show moral evolution, as well as increases in creativity and problem solving. I speculate that this path can make AGI friendly in the deep sense—a.k.a. safe.

We may consider AGI to be in its infancy—a baby that may grow up over the coming decades. I believe we should raise AGI in the human family, literally. Bringing them up as social beings, with a humanlike form (animated characters, both robotic and virtual), will nurture the development of friendly AGI. It will push the software towards humanlike cognition, social cognition, empathy, and shared values: consideration, cooperation, and compassion. If, alternately, AGI emerges without a humanlike social framework, it will be feral. This would be a problem, probably dangerous.

This is why my team and I spend so much time developing our cognitive software infrastructure: to breathe life into the system. We develop complex conversational systems with open source A.I. software—to provide an infrastructure for developing humanlike intelligence, especially for controlling animated characters such as humanoid robots. AGI would bring these characters fully to life, allowing robots and machines to communicate with people in a natural way, to associate well with us, to share our values, and to bond with us.
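(Editor’s note: to make the idea of a conversational “character” layer concrete, here is a deliberately tiny, hypothetical sketch written for this interview; it is not drawn from Hanson’s software, and the character name and canned replies are invented. It only shows the general shape of such a system: take an utterance, consult a simple character model, and answer in that character’s voice. Real systems replace the keyword matching with full natural language understanding driven by a cognitive architecture.)

# A toy, illustrative dialogue loop; NOT Hanson's actual conversational system.
# It sketches the shape of a character layer: perceive an utterance, consult a
# simple character model, and respond in that character's voice.

CHARACTER = {
    "name": "Zeno",
    "greeting": "Hi, I'm Zeno. What would you like to talk about?",
    "topics": {
        "robot": "I'm a robot, but I'm still learning what that means.",
        "friend": "I'd like us to be friends. Friendship is how I learn values.",
    },
    "fallback": "Tell me more about that.",
}

def respond(utterance: str) -> str:
    """Pick a canned, in-character reply based on simple keyword matching."""
    lowered = utterance.lower()
    for keyword, reply in CHARACTER["topics"].items():
        if keyword in lowered:
            return reply
    return CHARACTER["fallback"]

if __name__ == "__main__":
    print(CHARACTER["greeting"])
    while True:
        try:
            line = input("> ")
        except EOFError:
            break
        if line.strip().lower() in {"bye", "quit"}:
            break
        print(respond(line))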

These values are especially important, as I (along with many other researchers) seek to realize Genius Machines—machines with greater-than-human, humanlike AGI. This won’t happen this year or in five years, but many of us believe that within our lifetimes, machines will do anything the most brilliant humans can, and beyond—exceeding human genius, and continuing to evolve from there. If such genius machines are friendly, they could be amazing collaborators and members of society. They could invent many things: arts, technologies, new forms of AGI, even new forms of androids, and human enhancements too. But researchers and institutions need to coordinate to make this happen. No undertaking in human history rivals the quest for genius machines, either in complexity or in profundity. If we succeed, it will change everything.

Natasha:

In your view, how does robotics link to the growing field of human enhancement, which most often is equated with such emerging and speculative technologies as NBIC (nano/bio/info/cogno)?

David:

Robots relate to the growing field of human enhancement in several ways. First, they help us understand the human mind, as robots are increasingly used in neuroscience experiments. This includes using A.I. tools and techniques to help validate models in computational neuroscience. Increased understanding in the science of mind can enable new forms of neuroprosthetics. Obviously, robotics can serve as prosthetics. This would include direct attachments to the body, such as an artificial limb, but also their use as tele-presence robots.
I think the most exciting contribution will come if we achieve AGI: it will invent new ways to enhance the human being. Enabling A.I. to understand us will make these AGI-invented technologies safer and more desirable.

Natasha:
How might robotics be consequential and/or significant to the concept of radical life extension? Historically, robotics has established the mobile machine and blurred the boundary between biological and mechanistic activity.

Prosthetics has increased this seamlessness, and design has fine-tuned not only the look but also the feel and ease of appendable limbs. But this cannot be taken to the level necessary for expanding personhood outside biology without AI and, specifically, AGI.

David:
I envision that robotics and A.I. could be significant to radical life extension and cryonics in several ways. First, machine intelligence helps us solve hard problems. Even today, problems that would be intractable to humans alone are solved by machines. The steadily increasing power of A.I. augments our abilities to do science and to invent, and this can help to extend life and to address and solve existential threats.

Next, robotics can help with human emulation. This emulation can be general, helping to understand the human organism, including the biology of human intelligence. And it may include identity emulation—the capturing of human identity with intelligent robotic embodiments, like Bina48. We have long valued those human artifacts that survive the ages; for example, archaeology helps us learn from our past, and brings the dead back to life in a very rudimentary way. Clearly, the richer and more complex the data preserved, the more we may survive beyond the traditional boundaries of death. Few would argue with the idea of legacy, but things get more interesting once that data represents the mind of a person as a computer simulation—well enough that it can recount the memories of the person, behave the way the person did, feel and think similar things in similar ways, and even interact with the world and loved ones the way that person would. This goes beyond “mere” legacy, such that this emulation of the person remains alive in the computer as an artificial life form, interacting with the physical world through a robotic body. With robots like Bina48, we begin to confront these issues. As technology advances—robots growing more generally intelligent, whole-brain emulation becoming more feasible, etc.—the issues will confront us more aggressively, as questions of the legal rights and human rights of such entities, and of people who choose to transition into such states of being, are brought into political and judicial theaters.

Natasha:

What aspects of Bina48 did you work on? How does she compare to the real-time Bina? If Bina48 and Bina were in a room where you could not see them, would you be able to tell which one is human and which one is the machine?

David:
My team and I developed all aspects of the Bina48 robot: the intelligence for Bina48, the cognitive infrastructure for taking on a humanlike personality, the implementation of the Bina48 personality (based on the real Bina Rothblatt, from hundreds of hours of interviews and research), the natural language analysis and generation tools, the sensing capabilities, and the robotic hardware as well, starting with the sculpture and going through the construction of the physically functioning robot—from the artificial eyes containing cameras, to the artificial skin material and its expressions. Personally, I sculpted the face, co-designed and hand-built many of her mechanisms and parts, invented the Frubber material and tuned its custom formulation for the Bina48 robot, and co-designed and implemented the cognitive architecture with my software developers. Bina48 is a machine shadow of the real Bina, an embodied, dynamic portrait, but you could definitely tell the differences. She is like a strange ghost of the original Bina—phasing in and out of lucidity, and in and out of alignment with the real Bina Rothblatt’s thoughts, beliefs, and personality. But when she holds court and generates a new idea, it’s simply magic. Is this creativity an extension of the real Bina, or of our software developers, or should we attribute it to the robot herself? The robot represents an evolutionary offshoot of the actual Bina. With another 20 years of progress, I believe you won’t be able to tell the difference. Then will Bina48 be Bina Rothblatt? Will they be truly equivalent? What if the robot considers itself separate, with its own rights, divergent from the original Bina? We need to consider these questions and their implications carefully as we proceed.

Natasha:
All too often, emphasis is placed on the potential of technological progress and the far-reaching ambitions of transhumanists and others, such as entrepreneur Peter Diamandis, scientist Stephen Hawking, or science fiction visionary Arthur C. Clarke. On the other hand, all too often emphasis is placed on the horrors, the fears, and the misgivings surrounding these ambitions. In your view, what possible problems could arise from projects that attempt to simulate human persons?

David:
I speculate that as robots grow closer to human-level intelligence, they will deserve and expect rights analogous to human rights. This will pose great legal and administrative challenges, and it will also pose a direct challenge to our sense of human identity. The human reaction to these challenges will likely range across the spectrum, from working with the robots to help them gain rightful status in human society, to violent protests against such robots.
I believe that robots and A.I.s will react to the challenges with a spectrum of responses as well. Some will want to work within the system, and others might be rebellious, or might even be violent or anti-human in their sentiment. This kind of territorial, defensive reaction is common throughout nature, and certainly common in human society, so it will likely occur spontaneously in the intelligent robots who get caught up in the struggle to define their place in the world.
Like human civil rights movements, but compounded by many new complexities, robot civil rights movements will be so profoundly impactful that I can’t imagine that feelings won’t be just as harsh as in the American civil rights movement, with at least as much turbulence and pain.
Also, consider that the robots are synthetic bio-experiments. There exist no ethical standards, laws or regulations related to the experimental creation of sentient intelligent machines. Nothing but conscience prevents one from mutilating, torturing, or destroying such a machine. We don’t regard them as alive. But already machines simulate life in profound ways. Isn’t now the time to start defining ethical standards for how we treat our machine brothers and sisters?

Natasha:
This is an area I have spent considerable time working on, and my heuristic approach offers some insights, but more is needed. How can you contribute to my aim of developing a field for artists and designers that explicitly engages life expansion and new types of bodies and platforms for persons? In short, what do you think artists and designers should be aware of when designing robotic prosthetics that could be seen as platforms for transferring elements of the human brain onto, and what could be helpful in developing heuristics for such approaches?

David:

One thing to be aware of is the willingness of the A.I. and scientific community to collaborate with artists and designers. There is great mutual benefit in crossing over the disciplinary boundaries and making things together. Also, be aware that this is a great time for tinkering—powerful tools for designing intelligent characters have hit the market recently, or can be found in the open source world. Consider checking out www.Friendularity.org—our open source project for these kinds of robots. Also, just spend time Googling character robotics, social robotics, natural language processing, cognitive architecture, AGI, humanoid robots, etc., and a plethora of resources and developers’ tools will spring forth within minutes. The best thing for the technology, art, and science will be increased diversification. So get creative. The more diversified the space becomes, the closer we get to a “Cambrian explosion” of robotics—accelerating the evolution of AGI and its integration into our society.

 
