“Knowing is half the battle,” said former Microsoft architect Ramez Naam during Day Two of the Singularity Summit. And knowing how and when you’re fooled was the theme that closed the day: famous conjurer, escape artist, and paranormal debunker James Randi – looking very much like the reincarnation of Charles Darwin – encouraged the audience to maintain a healthy skepticism.
The Day Two topic of “Why and How We Should Solve the World’s Problems” was somewhat eclectic in scope, with presentations ranging from Naam’s “The Digital Biome” and Jose Luis Cordeiro’s “The Future of Energy” to Anita Goel’s “main course” of “Information Processing in Nanomachines That Read and Write DNA.” Harvard’s Irene Pepperberg showed riveting videos of non-human intelligence in crows, rooks, and her famous African Grey parrot, Alex, raising issues of animal rights and encouraging the audience to consider a moratorium on animal intelligence amplification.
Singularity Institute co-founder Eliezer Yudkowsky kicked off the day with a talk called “Simplified Humanism and Positive Futurism,” presenting shocking pictures of a smallpox victim. He pointed out that smallpox was once considered as inevitable as death and aging are today, and proposed a “simplified humanism” in which qualified statements like “curing disease is good unless genes are involved” should be simplified to statements like “curing disease is good.” He suggested a “rational futurism” that is positive not in the sense of promising a “golden age,” but in the sense of committing us to reason and rationality, substituting goals for prophecies.
Anita Goel of Nanobiosym, who holds both an M.D. and a Ph.D., discussed her work on learning how “to control the knobs” of nanomachines that read and write DNA. The design of such nanomachines is yielding applications such as Gene RADAR (a diagnostic, point-of-care device that uses saliva and blood samples), energy transduction, reading and writing DNA, and information storage. She encouraged audience participation at the end of her talk when she added a “dessert” of speculation on Maxwell’s demon and whether life, mind, and consciousness are emergent processes.
The centerpiece of the afternoon was a panel discussion among evolutionary psychology pioneer John Tooby, cutting-edge AI modeler Shane Legg, Ben Goertzel, and Eliezer Yudkowsky. The discussion was framed by the prior presentations of Legg and Tooby. Legg, sounding like Russell Crowe with his distinctive Aussie accent, laid the foundation for a generalized equation encapsulating some “80 definitions of intelligence that I’ve collected,” proposing that his equation could support exponential graphs plotting increases in machine intelligence over the coming years. He said this AIQ (Algorithmic Intelligence Quotient) can accommodate different measures of intelligence, from random to highly complex. Like his mentor Marcus Hutter, Legg acknowledged that his research occupies the “external, ideal” quadrant of a space spanning human versus ideal and external versus internal measures of intelligence, rather than attempting to reverse engineer the human brain.
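The talk itself did not display the equation, but Legg’s published work with Hutter gives its flavor; a hedged sketch of their universal intelligence measure, on which AIQ is based, looks like this:

```latex
% Sketch of Legg & Hutter's universal intelligence measure, as it appears
% in their published work (not reproduced from the summit slides).
% An agent \pi is scored by its expected reward across all computable
% environments \mu, weighted toward simpler environments:
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}
% where E is the set of computable environments, K(\mu) is the Kolmogorov
% complexity of \mu (the length of the shortest program computing it),
% and V^{\pi}_{\mu} is the expected cumulative reward of \pi in \mu.
```

Because the sum ranges over every computable environment, from trivially random reward streams to highly complex worlds, the measure can accommodate the full spectrum Legg described; a practical AIQ score has to approximate the uncomputable sum, for instance by sampling environment programs on a simple reference machine.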
Tooby discussed his work toward the “enlightenment project” of a natural science of human nature: to develop high-resolution maps of the circuit logic of each of the thousands of evolved programs that make up the “species-typical architecture of the human mind/brain.” In his view, evolution is the key that unlocks the discovery of circuit logic — ancestral human problems had to be solved for successful survival and reproduction. Tooby characterized this as a “mesh-like lock and key,” where every adaptive problem has an adaptive solution.
The panel discussion itself, moderated by Michael Vassar, was kicked off by Yudkowsky’s observation that humans have broader intelligence than chimps – close to chimps, but possibly with “additional modules” such as a generalized visual system that can recognize non-ancestral things like “cars.”
Goertzel, in response to Legg’s presentation, commented on the “mathematical general notion of intelligence,” stating that completely generalized intelligence requires too many resources to be immediately practical. He described a spectrum of intelligent systems, from narrow AI at one end to the most complex (and potentially resource-intensive) models of Legg and Hutter at the other. He noted that a theory of the general principles of “moderately intelligent systems” doesn’t yet exist.
Legg responded that models, while fictitious, are useful for developing systems: they are “fictitious in the same way as the Turing Test,” useful at present for solving simple games and certain other kinds of problems, and they can also serve as a lens for examining the human brain.
Tooby commented that system integration in AI will incorporate inputs from lower-level modules, drawing an analogy to the way rigid-object mechanics combined with force dynamics to yield the notion of fields in physics. He pointed out that generality can be achieved either by erasing specificity or by making super-efficient specializations, returning several times to the idea that “efficiency is the key.” Yudkowsky and Legg discussed the domain specificity of intelligence, and Goertzel concluded that either an AGI or a natural system (like the brain) will likely have a hierarchy of different general and specific modules.
Whether or not The Singularity Summit 2010 audience was better equipped to solve the world’s problems at the end of Day Two, they certainly walked away with the knowledge to fight at least half the battle — and maybe even perform a few Amazing Randi magic tricks.