Where is human evolution leading us? Can the human brain be reverse engineered? Can we ever create non-human intelligence given the complexities of the molecular biology of a cell? How can AI be used to solve the problem of human aging? How can haptics (active touch) and cyborg “glogging” augment human experience in the future? These are but a few of the questions explored during the first day of The Singularity Summit 2010.
Hundreds of AI researchers, robotics experts, philosophers, entrepreneurs, scientists, and interested laypeople crowded the Grand Ballroom of the Hyatt Regency in San Francisco for the annual Singularity Summit. The Singularity Institute – founded in 2000 by Eliezer Yudkowsky, Brian Atkins, and Sabine Atkins – celebrated its fifth year of such conferences.
“This year, the conference shifts to a focus on neuroscience, bioscience, cognitive enhancement, and other explorations of what Vernor Vinge called ‘intelligence amplification’ – the other route to the Singularity,” said Michael Vassar, president of the Singularity Institute. Presentations included a live video telecast of Ray Kurzweil (vacationing in Massachusetts) and a talk by Ben Goertzel on the need for a “full-fledged, generally intelligent Artificial Biologist” to help in the analysis of the key aging-associated genes that affect lifespan, covering many of the points in his recent h+ article “AIs, Superflies, and the Path to Immortality.”
Day One of the Summit culminated in a debate between two former colleagues of Francis Crick (co-discoverer of the structure of DNA): Cambridge emeritus professor Dennis Bray and Terry Sejnowski, UCSD neurobiologist and director of the Computational Neurobiology Laboratory at the Salk Institute, took up the question “Will We Soon Emulate Biological Systems?” Bray’s prior presentation on the adaptability and self-regenerating, generalist abilities of living organisms made the case that “real biology is much more complex than our models,” examining the question “why aren’t we further along in developing sentient robots?” Sejnowski, one of the pioneers of brain-state modeling, actually agreed with many of Bray’s points (making for one of the shortest, least contentious debates at any conference I’ve attended), but argued that molecular-level models “tell us where to look next, what’s missing.”
Michael Vassar led off the Day One presentations by framing the Singularity in the context of classical scholarship and the scientific method, concluding that the Singularity, like Darwin’s theory of evolution, meets the criteria of both empirical evidence and rational discourse. He pointed to the independently developed hypotheses of von Neumann and Vinge, the logical arguments of Vinge and Good, and the inductive data provided by the exponential growth curves of Kurzweil and others.
The fact that Kurzweil’s presentation “The Mind and How to Build One” was telecast rather than delivered live was probably a disappointment to many conference attendees. Kurzweil reiterated many of the points covered in his h+ interview, adding comments on his recent thinking about the intricate interconnections of the cerebral cortex and the exponential growth of information technology (his famous graphs have been updated with data from 2008–2009), and suggesting that the brain’s algorithms may turn out to function like the modular, recursive Lisp programming language developed in the 70s and 80s: a series of “pattern recognizers,” repeated modules organized in a hierarchy. He touched briefly on the “hard problem” of consciousness, proposing that the Turing Test may not be enough to prove that an AI is conscious and that it may require a “leap of faith” to accept an AI as intelligent: “if it quacks like a duck, it is a duck.”
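To make the “hierarchy of pattern recognizers” idea concrete, here is a toy sketch (my own illustration, not Kurzweil’s actual model, and written in Python rather than Lisp): low-level modules match runs of raw symbols and rewrite them into labels, and higher-level modules recognize sequences of those labels using exactly the same mechanism, so one repeated module type composes recursively up the hierarchy. All class and variable names here are invented for illustration.

```python
class PatternRecognizer:
    """One repeated module: matches a fixed pattern of symbols or child labels."""

    def __init__(self, name, pattern, children=None):
        self.name = name              # label emitted on a successful match
        self.pattern = list(pattern)  # sequence this module recognizes
        self.children = children or []  # lower-level recognizers, same type

    def rewrite(self, sequence):
        """Scan a sequence, replacing any run matching self.pattern with self.name."""
        out, i, n = [], 0, len(self.pattern)
        seq = list(sequence)
        while i < len(seq):
            if seq[i:i + n] == self.pattern:
                out.append(self.name)
                i += n
            else:
                out.append(seq[i])
                i += 1
        return out

    def recognize(self, sequence):
        """Let children rewrite the input first, then match at this level."""
        for child in self.children:
            sequence = child.rewrite(sequence)
        return self.name if list(sequence) == self.pattern else None


# Low-level modules recognize letter runs; a higher module recognizes a
# sequence of their outputs -- the same module type at every level.
syllable1 = PatternRecognizer("AP", ["a", "p"])
syllable2 = PatternRecognizer("PLE", ["p", "l", "e"])
word = PatternRecognizer("APPLE", ["AP", "PLE"], children=[syllable1, syllable2])

print(word.recognize(["a", "p", "p", "l", "e"]))  # prints APPLE
```

The point of the sketch is only the structural one Kurzweil makes: nothing new is needed at higher levels, just more copies of the same recognizer module wired to the outputs of the level below.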
Gregory Stock’s intriguing presentation “Evolution and the Posthuman Future” preceded Kurzweil’s telecast. Stock argued that we are currently experiencing a once-in-a-million-years fundamental transition based on advances in silicon technology and genomics, but “we don’t have a clue where exponential growth is going.” We are transitioning to a “planetary superorganism” (not just a metaphor) through the convergence of our wetware brains with the Internet. Why would this new evolving superorganism limit itself to human values? Human values will make little sense for AIs living where backups and copies are easy, personal boundaries are weak, and sexual reproduction is absent: “Evolution will drive such cyber beings towards values more suited to their circumstances and environment.”