I can’t really give an objective review of Barry Ptolemy’s new documentary film Transcendent Man – a treatment of the life and futurist ideas of Ray Kurzweil – because I was in the movie myself, answering 4-5 minutes of questions about Ray’s visions of the future.
But I’ll give you my subjective impression: two nano-enhanced cyberthumbs way, way up!
On Tuesday April 29 I took the train from Washington DC, where I live, to New York City, to see the premiere at the Tribeca Film Festival. The theater was packed and the audience was appreciative, and it was fascinating to see the film through their eyes. The two themes of the movie – Ray’s life as a futurist visionary, inventor and entrepreneur; and Ray’s vision of the near-future technological Singularity – are so familiar to me that, even if I hadn’t been in the movie, it would have been impossible for me to view it with a “beginner’s mind.” But a substantial percentage of the audience appeared unfamiliar with Ray or his thought, and based on the reactions I saw, it seemed the film did an excellent job of waking up its viewers to some radical new ideas.
Pulling no punches, the movie begins with the notion of the Singularity – a moment in time at which scientific and technological progress become so rapid as to defy human comprehension, with revolutionary developments occurring nearly instantaneously, and legacy human minds left in the dust by AI and cyborg intelligences. Kurzweil’s successes at technological and social forecasting are highlighted (he correctly predicted the rise of the Internet, the fall of the Soviet Union, the year that a computer would defeat a human champion at chess, and the list goes on and on), and the Singularity – which he forecasts for 2045 – is presented as his latest and greatest prediction, resulting from a painstaking process of data analysis covering trends in computing, biotechnology, nanotechnology, AI and other areas.
The film makes a powerful effort to ground the sometimes abstract-seeming and overly geekified Singularity concept in palpable human emotion and experience. Ray’s childhood gets significant screen time, as does his relationship with his deceased father – whose life he has documented and archived with incredible thoroughness, hoping to one day create advanced technology that can use this information to reconstitute a new incarnation, or at least an accurate simulation. His humanitarian motives are kept in focus, with moving scenes demonstrating the huge value delivered by the Kurzweil Reader for the blind. And when the theme of Ray’s quest for immortality comes up – as it does, again and again – one sees the heartfelt sentiment on Ray’s face and hears it in his voice. The urge to live forever is painted as it should be: not as some pathological act of psychic hubris, but rather as the most natural thing in the world … a desire to avoid suffering and preserve life and knowledge.
I often think of Ray’s vision of the future as a “kinder, gentler Singularity,” and this comes across clearly in the film. Ray is portrayed as wanting the Singularity to help everyone, in the same way that he wanted the Kurzweil Reader to help blind people read. Vernor Vinge, who coined the term “Singularity” in its futurist sense, portrays the hypothesized event as a huge crazy leap into the incomprehensible unknown. While acknowledging there are massive surprises to come, Ray – with his dry, soothing businessman’s voice and demeanor – projects a calm confidence that all will be well even as legacy humans become obsolete. In his vision, we will all one day merge into a massive global cyborg-mind, yet still retain our capacity for individual experience and joy. (I myself have a bit of skepticism about this latter point – but we’ll come to that later.)
One of the strengths of the movie is the diversity of perspectives it presents. Contradictory views are given fair time and treatment, and basically fall into three categories:
Among the ethical contrarians, Hugo de Garis provides the most striking footage. Interviewed in Hong Kong, against the backdrop of Blade-Runner-esque Kowloon and racks of gory dead chickens, Hugo talks about “building gods.” Then he asks himself, “Would I build these machines, if I knew there was a strong chance they would destroy humanity?” … and after a pregnant pause and a poignant expression, answers plainly: “Yeah.” Whereas Kurzweil foresees a future in which advanced technology by and large improves the human condition, de Garis foresees a massive world war between pro-tech and anti-tech forces, quite likely followed by the extinction of the human race while the universe is colonized by our digital-god offspring.
In my 4-5 minutes of movie-stardom, I present more of a Vinge-esque perspective: striking an uncharacteristic pose of humility, I posit that we don’t know and fundamentally can’t know what’s going to happen once the Singularity is upon us. I have some doubts about the precision of Kurzweil’s redoubtable prognostic powers in this context: I think the Singularity could come in 2045 as he projects, or it could come in 2080 or maybe even 2015. Unless we nuke ourselves into oblivion or do something else equally stupid and destructive, though, I’ll be shocked if it doesn’t occur this century. And I think the ethical quality of the Singularity, from a human perspective, is equally wide-open and unknowable. A utopian paradise beyond Ray Kurzweil’s wildest dreams is remarkably feasible … as are the human strife and obsolescence of Hugo de Garis’s conflicted fantasy nightmares. And the most probable outcome is NONE OF THE ABOVE – i.e., something none of us pre-Singularity humans has the capacity to imagine.
The showing of Transcendent Man at the Tribeca Festival was followed by a particularly lively and fascinating Q&A session with Ray and director Barry Ptolemy. I remember Ray’s response to my own question well. What I asked, in essence, was the following:
“Ray, as you know, I’m involved in a project oriented toward creating a powerful AI system, and if it works as well as I hope, I think it may lead to a Singularity well before your projected date of 2045. And my goal in doing this isn’t just to create an artificial supermind to end scarcity and bring immortality and all that good stuff, it’s also to become one with that supermind. I don’t just want us to build gods, I want us to become gods. But there’s one doubt that often vexes me, and I’d like to know what you think about it. I wonder if there will come a point, when we’ve enhanced our brains enough with advanced technology, when we’ll have to stop and say: OK, that’s all I can do and still remain myself. If I add anything more – if I up my IQ from 500 to 510 – I’ll lose the self-structure and the illusion of will and all the other things that make me Ben Goertzel. I’ll just become some other, godlike mind whose origin in the human ‘Ben Goertzel’ is pretty much irrelevant.”
Ray responded by stating that he felt it would be possible to achieve basically arbitrarily high levels of intelligence while still retaining individuality.
But the moderator of the Q&A session, NPR correspondent Robert Krulwich (who did an absolutely wonderful job), took up my side. He posited a future scenario in which Robert enhanced his brain with the UltimateBrain brain-computer plug-in, and Ray enhanced his with the SuperiorBrain brain-computer plug-in. If Robert is 700 parts UltimateBrain and 1 part Robert, and Ray is 700 parts SuperiorBrain and 1 part Ray – i.e., if the human portions of the post-Singularity cyborg beings are minimal and relatively unused – then in what sense will these creatures really be human? In what sense will they really be Robert and Ray?
Ray responded that they would be human because the UltimateBrain and SuperiorBrain would be built by humans … or built by robots built by humans … so in a sense they would still be human, since they’d be human technology.
Yes, noted Robert, they would still be human in that sense – but that didn’t mean they’d still be Robert and Ray.
I stated my own view: that a point will probably be reached where, to progress further, we’ll have to give up our human selves and accept that their role has been to give rise to smarter, wiser, greater minds – minds more capable of creative activity, positive emotion and connection with the universe.
Ray’s (grinning) answer: “And would that be so bad?”
My (smiling) reply: “No.”
I don’t know what the less Singularity-savvy members of the audience thought about that interchange – but I do know that, as the evening rolled on, none of them looked bored. And that, really, is the strength of Transcendent Man. It doesn’t go into much depth about the science and engineering that is bringing the Singularity to pass – for that the viewer will have to buy Ray’s book The Singularity Is Near, Damien Broderick’s The Spike, or one of the numerous other books on future technology. But the film does get across the essential flavor of the radical future vision that Kurzweil, Vinge and other present-day Singularity sages are promulgating, without ever getting preachy or long-winded. With a bit of luck, the movie may prove an important step in imprinting some critical ideas on the mass mind of our species, at this very special juncture in our history, as man stands on the verge of his own… well… transcension.