The Singularity. If you’re an h+ reader, you’ve been sufficiently introduced to the concept (and if not, see Resources at the end of this interview).
Since 2006, the Singularity Institute for Artificial Intelligence has organized a yearly "Singularity Summit," bringing together leading figures in AI and other disciplines to talk and debate about accelerating technological development — primarily within the evolution of Artificial Intelligence. Speakers have included Ray Kurzweil, Rodney Brooks, Douglas Hofstadter and Eric Drexler.
The earlier summits have taken place in the San Francisco Bay Area. Now, new President Michael Vassar brings Singularity Summit 2009 to New York City. Speakers this year include Kurzweil, Aubrey de Grey, David Chalmers and Peter Thiel.
h+: Tell us a bit about the origins of the Singularity Summit, and do you feel as though the intention behind it has shifted at all?
MICHAEL VASSAR: Back around the year 2000, Douglas Hofstadter became interested in the Singularity. He put together a conference to have prominent academics he knew discuss it, but he was dissatisfied with the quality of discourse that resulted. In 2006 Tyler Emerson, SIAI’s former executive director, put on another such conference with Stanford University and invited Hofstadter, along with many of the most prominent Transhumanists, and prominent environmentalist Bill McKibben. The conference was a very successful media event, but some participants, including Hofstadter, were once again dissatisfied by the quality of discourse. In the following years, with help from Clarium Capital and other corporate and media sponsors, Tyler put on events with a broader selection of speakers with interest in AI, robotics, technological acceleration, and other topics associated with the Singularity to present their ideas to a wider audience.
This year, I have brought the Summit to New York. The Summit itself will follow the format of the 2007 Summit except for a thematic division — with primarily technology presentations on the first day, and mostly futurism-focused presentations on the second. After the Summit, there will be a workshop for interested participants during which the dialog function of the 2006 Summit can be pursued in private.
h+: An outsider would assume that all speakers at a Singularity Summit are believers in "The Singularity," or are at least speaking to the singularity idea. But from my experience, it doesn’t work that way. Say something about how the talks fit under the broad "singularity" umbrella.
MV: We try to have prominent philosophers with mixed attitudes on Singularity-relevant issues and at least one prominent AI or Singularity skeptic or critic speak every year. Since 2007, we have also brought together the top names in AI research regardless of their personal attitudes toward the Singularity. This year, I am expanding that focus to include more singularity-relevant neuroscience and nanotechnology. I have also decided to break with tradition to some extent by recognizing the study of technological acceleration as a specific Singularity-relevant focus, so many of the speakers will be discussing the ways in which a variety of types of models suggest that the internet, science, nutrition, and new methods of collaboration have expanded, can expand, and will continue to expand our individual and especially our collective intelligence.
h+: How does the increasing interest in Ray Kurzweil impact on the Singularity Summit? Are you going to get lots of Ray fans?
MICHAEL ANISSIMOV: Yes, definitely, we are going to get lots of Ray fans, but I think many of his fans like to think of themselves as general intellectuals and/or techies who I would describe most generally as part of the Wired crowd. So they’ll definitely be interested in the wider range of speakers we have at the Summit, including people like David Chalmers and Peter Thiel. It’s really hard to estimate what percentage of people could be described as "Ray fans" versus "interested in the Singularity concept in general". Though Ray has gotten a lot of coverage, especially in the past few years, the wider Singularity concept has gotten its own attention as well, with a mainstream article coming out at least every two weeks. I think there’s some recognition that there’s a larger Singularity movement that goes beyond Ray Kurzweil, though Ray is certainly at the forefront when it comes to articulating many of the movement’s concepts.
MV: Naturally Ray’s increased fame will make the Summit more popular. It seems likely though that the popularity of these topics on the West Coast is a few years ahead of their popularity on the East Coast, so this will probably be more of an issue next year than this year.
h+: All the speakers seem to be male and probably white. I find this line of questioning tiresome if it’s a panel involving three or four people, but when you have something like 27 speakers, you can’t ignore it. Does this say something about intellectuals who are involved in these areas of research… or what does it say?
MV: There are plenty of prominent female scientists, but far fewer female science celebrities. Look in any bookstore’s science section. We invited a few female roboticists. And we invited Susan Blackmore, Cynthia Kenyon and Leda Cosmides, but none of them were available for this year, although some will appear next year. Blackmore apparently doesn’t fly for global warming reasons. The only female general AI researchers I am aware of are Monica Anderson and SIAI’s Anna Salamon.
h+: But does it worry you at all that the preponderance of comfortable white males in this scene could be alienating to most people in the world? What do you want to say to people with those concerns?
MV: No, it doesn’t worry me. People who are hung up on genitals and skin color are already alienated from people who expect to live without germlines or bodies. I don’t want to say anything to them. If they have concerns they can get a PhD in math or a natural science, become more famous (as judged by Google hits, for instance) than our median speaker, and ask to speak at a future summit before its schedule becomes full… I’ll be happy to hear their criticism. Given that SIAI has a record of inviting critical speakers, from McKibben, Horgan, Zorpette and Hameroff (who have attended Summits) to Ehrlich, Fukuyama and Sandel (who haven’t so far), I don’t think that we can legitimately be called insular. But people can non-legitimately be called Satanic Ritual Abusing Aliens, and we have been in crank emails.
MA: The percentage of white males who will be attending Singularity Summit is probably similar to the percentage of white males in the transhumanist community in general, which I believe is between 70% and 80%. As Michael said, we asked multiple female speakers if they would speak, and none were able, though we are continuing to pursue several possibilities. It is true that the vast majority of world experts in computer science and futurism are white males. If you wrote an algorithm to randomly generate 27 experts in those areas, there would probably be a substantial probability that they’d all be white males, especially taking into account that any given person might be busy on the weekend of the conference or not feel like flying to New York. Let me point out that 33% of SIAI’s employees — 2 out of 6 — are female, and a number of our volunteers are female or non-white. If you look at the Wikipedia page "list of futurologists", about 95% appear to be male, and probably white.
Some people may have a chip on their shoulder about the predominance of white males in futurism, and use that as evidence to dismiss futurism in general, but I really see an absence of race and gender bias in our community. If a female (and it is more about females than non-whites) wants to become a prominent futurist, I don’t think that gender bias would slow them down by much. If anything, it would help them, because there is probably bias in their favor. I have talked about futurist issues with a great number of intelligent females, but I’ve only met a few of them that pursued it as a career direction. Very few people choose futurism as a career direction in general.
h+: You mentioned that Douglas Hofstadter has been dissatisfied with the discourse… twice. Why is this one special? Why should Hofstadter come to this one and have his mind blown?
MV: The Summit this year will focus more on established academics than previous Summits have. Hopefully, this will lead to a somewhat stronger norm in favor of serious intellectual engagement. Much more importantly, this Summit will be followed by a two-day private workshop for speakers and a small number of invited guests. SIAI’s major focus of activity for several years has been the development of better theories of human rationality. We want to try to translate some of that effort into more effective debate moderation than is normally attempted. There’s one world that we live in, and if two people disagree about matters of fact, then they aren’t each entitled to their opinion. At least one of them is doing something terribly wrong. Very often, though, apparent disagreements of fact aren’t real disagreements at all. Normal discussion seems to go on under the assumption that it’s OK to stick to a position rigidly without clarifying it or really responding to contrary arguments. We want to see what happens when there’s an explicit understanding among participants that this isn’t respectable behavior. We want to see how far we can encourage people to move their positions under the explicit norm that sticking to a position with an ever-weaker argument, rather than revising it, is less than admirable.
h+: People who identify the singularity with Ray Kurzweil may not know that there are other visions. What would you say are a couple of important ones? And will they be represented at the summit?
MA: The other visions tend to be mostly contained within Ray Kurzweil’s as sub-items of his presentation, though they are more logically independent than sometimes implied in Ray’s books. These would be Vinge’s original event horizon idea, the notion that you cannot predict what a genuinely smarter-than-human intelligence would do because you are merely of human intelligence; and I.J. Good’s intelligence explosion idea, which focuses on the strong feedback loops that would arise from an intelligence directly improving on its underlying design.
The exciting thing about the Summit is that we are bringing together a lot of people who have never talked about the Singularity in a public forum before, so we are going to be just as surprised as many others about the content of many of the talks. For instance, David Chalmers, the world-famous philosopher of consciousness, is going to be talking about uploading, which as far as I know he doesn’t talk about very often, if at all. We have many people with backgrounds in artificial intelligence and human intelligence respectively, and it will be interesting to see how — or if — their specialties influence which technology they think will first cross the line into superintelligence.
MV: I will point out that the word Singularity is increasingly being used to refer to the whole set of ideas that would once have been called Transhumanist. In response to that, some of this year’s speakers are presenting relatively mundane visions of radical life extension, modest intelligence enhancement, ubiquitous roboticized transportation, improved online scientific collaboration and the like. The key is that their analyses have to be detailed and well argued and have to involve change of a magnitude larger than is generally considered respectable to discuss.
h+: Tell us about some lectures or speakers that you’re personally excited about.
MV: Gregory Benford’s work is particularly exciting to me, both because it is about current research and because I am less familiar with the details of it than I am with those of some other speakers. Anders’ second presentation is also of great interest to me, which is honestly why he’s speaking twice. I’m very curious as to what Peter Thiel and Robin Hanson will say, and I really look forward to seeing whether Gary Drescher can make his work popularly accessible.
MA: David Chalmers, for sure, because he walks a fine line between my favored philosophical position, hard-line materialist determinist reductionist fundamentalism, and what I consider a reasonable concern, which is the phenomenological aspect of consciousness, which he calls "the hard problem." He proposes that an accurate theory of reality will need to include ontological primitives for consciousness, which would violate trends of the last few hundred years in which phenomena which were considered ontologically unique (life, chemistry, the universe) were subsequently found out to be physical systems with components that interact based on relatively simple low-level rules.
I am very interested to hear what MIT Media Lab professor Ed Boyden has to say, because he is a rising star and could become even more prominent in the near future. He’ll be talking on synthetic neurobiology and the potential of enhancing the human brain via that route. Because I’ve always been focused on AI as a Singularity technology, I am interested in the alternative — human enhancement — even though, at this point, I still think that AI seems like a faster and safer path.
h+: Do you personally believe the singularity is near?
MA: I believe there is a substantial probability that smarter-than-human intelligence will be achieved in the next 20-40 years, yes. We have already enhanced human intelligence and productivity to a certain degree with the Internet, good memes, and even mundane things such as caffeine. An average undergrad of today with a smart phone that has access to Google would probably seem quite clever to the intelligentsia of the 18th century. Today, there are numerous technological routes, which, if they reach a threshold of sophistication, would enable smarter-than-human intelligence — neurobiology, brain-computer interfacing, and artificial intelligence. Even if one of these routes slowed to a standstill, others would still continue in the direction of greater intelligence.
Even if we don’t come up with a theory of intelligence detailed enough to create human-level AI in the next 20-40 years, it seems that reverse-engineering of the human brain could eventually produce a facsimile of human intelligence detailed enough that it would be capable of autonomous thought and creativity. If the resolution of our brain scanning and computing technologies continues to increase as it has for decades, eventually it becomes inevitable. Those who argue otherwise seem to take a mystical view of intelligence, or just find the idea discomforting and try to refute it based on ad hoc arguments. I would be more interested in technical arguments for why brain scanning resolution is not likely to continue improving over the next few decades.
MV: How near? Near enough to respond to? Certainly. Near enough to put a date on, or even a decade? Certainly not.
QUESTION FOR READERS: Do you believe in The Singularity? What is it and when is it coming?