The Future of Machine Intelligence

In early March 2009, 100 intellectual adventurers journeyed from various corners of Europe, Asia, America and Australasia to the Crowne Plaza Hotel in Arlington, Virginia, to take part in the Second Conference on Artificial General Intelligence, AGI-09: a conference aimed explicitly at the grand goal of the AI field, the creation of thinking machines with general intelligence at the human level and ultimately beyond.

While the majority of the crowd hailed from academic institutions, major firms like Google, GE, AT&T and Autodesk were also represented, along with a substantial contingent of entrepreneurs involved with AI startups, and independent researchers. The conference benefited from sponsorship by several organizations, including Japanese entrepreneur and investor Joi Ito’s Joi Labs, Itamar Arel’s Machine Intelligence Lab at the University of Tennessee, the University of Memphis, Novamente LLC, Rick Schwall, and the Enhanced Education Foundation.

Since I was the chair of the conference and played a large role in its organization – along with a number of extremely competent and passionate colleagues – my opinion must be considered rather subjective … but, be that as it may, my strong feeling is that the conference was an unqualified success! Admittedly, none of the research papers were written and presented by an AI program, which is evidence that the field still has a long way to go to meet its goals. Still, a great number of fascinating ideas and mathematical and experimental results were reported, building confidence in the research community that real progress toward advanced AGI is occurring.

AGI Versus Narrow AI

This is a new conference series, and some may wonder why the world needs yet another. There are lots of AI conferences each year, some of them quite large affairs. The 2008 conference of the Association for the Advancement of Artificial Intelligence (AAAI), arguably the premier organization for promoting AI research and development, drew over 1000 attendees. Why did 100 AI researchers and developers feel moved to attend a special, separate conference on “Artificial General Intelligence”?

I suppose I’m as well qualified as anyone to answer this question. I would put it as follows:

Ask the proverbial “man on the street” what AI is about and you’re likely to hear mention of thinking machines from science fiction: C-3PO and R2-D2, HAL 9000, Asimov’s various creations from “I, Robot”, the Terminator, and so forth.

Ask the average AI researcher from an academic, industry or government lab, on the other hand, and you’ll hear a different story. Most AI research today has to do with quite narrow and specialized kinds of intelligent software, a far cry from the AIs in popular media – and a far cry from the dreams on which the AI field was founded.

The AGI conference series represents a concerted effort by a group of professional AI researchers to create a cohesive research community focused on “software displaying intelligence with the same sort of generality that human intelligence seems to have” rather than “software displaying any kind of intelligent-looking behavior.”

I asked Marcus Hutter, perhaps the world leader in theoretical AGI, who helped organize the conference’s scientific program to sum up the value of the AGI conference series and he put it this way: “The top patriarchal AI conferences and journals have a bias towards extending established sophisticated techniques and have a high entry barrier for creative and more speculative novel approaches.” That pretty well nails it. Achieving a goal as ambitious as artificial general intelligence requires radical innovation, and an acceptance of diverse approaches.

Since its founding in the 1950s and 1960s, the AI field has achieved great successes – to name just a few: the AI linguistics underlying Google and other search engines; the AI planning and scheduling software used throughout the military and industry; the AI fraud detection software underlying modern credit card operations; and the AI gaming software underlying everything from Deep Blue to the bots in massively multiplayer online games. Yet all these wonderful achievements have a common narrowness of scope, which is why inventor and futurist Ray Kurzweil has characterized them as “narrow AI.” The grand goal of the original AI researchers – the creation of thinking machines with general intelligence at the human level and ultimately beyond — remains largely unaddressed. Many, both within and outside the AI field, have complained about this situation; the mission of the AGI conference series is to help do something about it.

The First Conference on AGI, AGI-08, took place in March 2008 at the University of Memphis, and was organized by Stan Franklin, one of the “grand old men” of the AI field, whose AI work was long sponsored by the US Navy, and who has achieved renown for bridging the gap between AI and cognitive science, as reflected in his influential book Artificial Minds. Franklin was inspired to organize AGI-08 after being invited to give the keynote address at an earlier, smaller gathering that I organized together with Pei Wang from Temple University and my business partner Bruce Klein: the 2006 AGI Workshop held in Bethesda, Maryland. In his keynote presentation at the 2006 AGI workshop, Franklin said AGI was concerned with “machines with human-level, and even superhuman intelligence, that generalize their knowledge across different domains, reflect on themselves, and create fundamental innovations and insights” – a phrasing that sums up nicely what AGI is all about.

Just to drive home the “AGI versus narrow AI” distinction, it’s worth contrasting Franklin’s characterization of AGI with the definition of Artificial Intelligence given on the website of the AAAI: “advancing the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines.” This is indeed a noble goal – but it’s also a very broad goal, covering all sorts of narrowly specialized software programs or hardware devices implementing particular mechanisms of mind but not attempting any sort of human-level generalization, reflection, innovation or insight. The mission of AGI researchers, and of the AGI conference series, is to focus more directly on the original, more ambitious goal of the AI field.

When I gave the introductory speech for the AGI conference this year, I began with the words: “I got into the AI field for one reason: I wanted to create thinking machines.” Nearly everyone in the audience nodded their heads in empathy and agreement. Most AI researchers start out in the field for precisely this reason, but they then experience strong pressure to orient their research efforts toward more specialized applied or theoretical issues. This pressure exists for a variety of reasons, but the biggest one is probably historical: AI researchers in the 1960s and 1970s made a lot of ambitious promises that they couldn’t keep. Human-level AI was always “a few years off,” and eventually the world at large — and even some members of the AI field — stopped believing it would ever arrive. This skepticism propagated to funding sources, who became wary about supplying resources to anyone pursuing grand AI goals. But as I pointed out in my introductory talk, the world today is rather different from 30 years ago. Our computer hardware is far better, our understanding of neuroscience and cognitive psychology is far better, and we have a huge array of newfangled reasoning, learning and data processing algorithms at our disposal. The time is ripe for a renewed attack on the problem of building a real thinking machine, an artificial general intelligence. Or at least, that’s the view of this AGI researcher… as well as the vast majority of the folks who trekked to Arlington to take part in AGI-09! As Hugo de Garis (a veteran of AI robotics labs in America, Europe and Japan, currently leading a major humanoid robotics project at Xiamen University in China) put it, “AGI is a return to the old dream of the 1950s and 1960s to build a human level artificial intelligence. And it’s about time.”


The AGI-09 keynote was given by Juergen Schmidhuber, who runs the IDSIA AI lab in Lugano, Switzerland – the smallest of the world’s top-ten AI labs, renowned in particular for its work on the mathematical theory of general intelligence. (Schmidhuber is also professor of Cognitive Robotics at the Technical University of Munich, and a member of the faculty of the University of Lugano). He began with an impressively diverse array of practical AI achievements that have come out of his lab in the last two decades — programs for handwriting recognition, robot control, data analysis and so forth — and then put this work in context by describing his team’s applied AI software as a very special case of more general principles of intelligence, which he sought to describe with equations relying on the branch of mathematics called “algorithmic information theory.” The reach of his equations is unusually broad, going beyond mere intelligence and touching on qualities at the edge of science such as complexity and beauty (according to Schmidhuber, the foundation of beauty is compressibility: “Among several patterns classified as ‘comparable’ by some subjective observer, the subjectively most beautiful is the one with the simplest (shortest) description, given the observer’s particular method for encoding and memorizing it.”).
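Schmidhuber’s compressibility criterion lends itself to a toy illustration (my own sketch, not his formalism): let an off-the-shelf compressor stand in for the observer’s encoding method, and rank two “comparable” patterns by the length of their compressed descriptions — the shorter one counts as simpler, and hence, on his account, more beautiful.

```python
import random
import zlib

def description_length(pattern: str) -> int:
    """Crude proxy for the observer's shortest encoding: the size in
    bytes of the pattern after general-purpose compression."""
    return len(zlib.compress(pattern.encode("utf-8"), 9))

# Two patterns of identical length over the same alphabet: one highly
# regular, one pseudo-random (fixed seed, so the result is reproducible).
regular = "abab" * 64
rng = random.Random(0)
irregular = "".join(rng.choice("ab") for _ in range(256))

# By the compressibility criterion the regular pattern is the "more
# beautiful" of the two: its description is shorter.
print(description_length(regular), description_length(irregular))
```

Real compressors only approximate the algorithmic-information-theoretic ideal of Kolmogorov complexity, which is uncomputable; the point of the sketch is just that regularity shortens a pattern’s description.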

The creation of a machine with superhuman cognitive abilities has been a lifelong goal for Schmidhuber, who says that since age 15 or so his main scientific ambition has been “to build an optimal scientist, then retire.” However, he is understandably coy about exactly how far he thinks he – or anyone else — is from achieving this ambition. When questioned about the practical applications of his general mathematical approach to intelligence, Schmidhuber admitted it was a work in progress, but opined that “Within two or three or thirty years, someone will articulate maybe five or six basic mathematical principles of intelligence,” and he suggested that, while there will be a lot of complexity involved in making an efficient hardware implementation, these principles will be the foundation of the creation of the first thinking machine.

Schmidhuber wrapped up his keynote with a futurist theme, presenting a lively argument in favor of the hypothesis that technological development is propelling the human race toward a Singularity: a point after which development on Earth is driven by AIs with vastly greater intelligence than humans, so that humans no longer have control or even comprehension of the progress of events. This hypothesis is best known in the versions proposed by Vernor Vinge and Ray Kurzweil – and the latter has extrapolated recent technology trends to come up with a projected Singularity date of 2045. Schmidhuber’s different but related extrapolations lead to a date of 2040. He also traces the Singularity meme back before Vinge to mathematician Stanislaw Ulam, who spoke of a Historical Singularity back in the 1950s. But, ever the sophisticated European humorist, Schmidhuber followed up this prognostication with another extrapolation, analyzing historical data so as to prove that the Singularity should have occurred in 1540 AD and concluding that, whether the Singularity arrives or not, “we are fortunate to live in very interesting times.”


The 40+ technical papers presented during the conference were highly diverse, without any single unifying theme. This is perfectly appropriate: the AGI field is in a stage of exploration, with different research groups pursuing very different directions toward their common end goal. Indeed, there is no reason to believe that one uniquely true and correct approach to AGI exists: the goal “build an AGI” is more like “build a flying machine” than like “build an airplane.” We now have blimps, helicopters, airplanes, rockets, and gliders. Similarly, we may invent many viable approaches to AGI, each with their own strengths and weaknesses. Pursuing this metaphor further, one thing we lack in the AGI field is any analogue of aerodynamic theory. This role would be played by the “five or six basic mathematical principles of intelligence” that Schmidhuber envisions. However, just as the Wright Brothers achieved sustained, powered human flight without sophisticated aerodynamic theory, it is quite possible that we can achieve AGI at the human level or beyond prior to the development of a thorough formal theory of intelligence. Often theory and engineering develop hand in hand, rather than one mainly driving the other.

Amid all the diversity of presentations at the AGI 2009 Conference, some common concepts stood out. For instance, there seemed to be a particularly large number of high-quality papers dealing with the problem of “program learning” — that is, the creation of software that can take as input a certain behavior (presented as a description, or as a set of examples), and produce as output a computer program capable of demonstrating this behavior.

This may sound abstract but it can be quite concrete and practical – for instance I have used this notion in some of my own AI work to help “virtual AI dogs” learn tricks. Each trick one of these virtual dogs does — say, playing fetch or guarding his owner — is controlled by a small computer program running inside the dog’s “mind” (which is itself a larger computer program). And these little “trick programs” are not programmed by any human programmer. Rather, they are learned by the dogs via experience — by imitating what they see human-controlled avatars do in the virtual world, and getting “good dog / bad dog” signals from their human owners. Similar methods can be used to enable AI systems to learn internal programs that recognize objects in their visual field, carry out virtual or physical robotic movements, recognize patterns in data or text, guide logical inferences and so forth.
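The “behavior in, program out” loop can be made concrete with a minimal sketch of my own (not any system presented at the conference): given a behavior presented as input/output examples, enumerate compositions of primitive operations from shortest to longest and return the first program consistent with every example.

```python
import itertools

# A tiny DSL: each "program" is a chain of primitives mapping int -> int.
PRIMITIVES = {
    "x":     lambda x: x,
    "x + 1": lambda x: x + 1,
    "x * 2": lambda x: x * 2,
    "x * x": lambda x: x * x,
}

def compose(outer, inner):
    return lambda x: outer(inner(x))

def learn_program(examples, max_depth=3):
    """Return the name and function of the first (i.e. shortest) chain of
    primitives that reproduces every (input, output) example."""
    for depth in range(1, max_depth + 1):
        for names in itertools.product(PRIMITIVES, repeat=depth):
            fn = PRIMITIVES[names[0]]
            for name in names[1:]:
                fn = compose(PRIMITIVES[name], fn)
            if all(fn(i) == o for i, o in examples):
                return " then ".join(names), fn
    return None, None

# Behavior presented as examples of f(x) = 2x + 2:
source, fn = learn_program([(1, 4), (2, 6), (5, 12)])
print(source)  # the learner recovers "x + 1 then x * 2"
```

Brute-force enumeration obviously does not scale; serious program-learning systems replace it with evolutionary search, analytical inference or probabilistic guidance, but the input/output contract is the same.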

As Bill Hibbard from the University of Wisconsin noted, “The idea of program learning is not new, but it was remarkable at AGI-09 the extent to which this approach has become the consensus of the best people working in AGI. The presentations of Moshe Looks, Eric Baum, Marcus Hutter, Josh Hall, Juergen Schmidhuber and several others all focused on systems or methods for learning new algorithms.”

Many but not all of us AGI researchers consider this notion of “programs that learn to produce programs” central to the problem of creating thinking machines. Hardly anyone believes that program learning in itself is the solution to the AGI problem; almost everyone believes it is a part of the solution, and opinions differ on just how central it is. Advocates of a “cognitive architecture” based approach (whose talks dominated the second day of the conference) argue that specific algorithms for program learning, reasoning, perception and the like are less essential than the overall mind-architecture in which these are interrelated. Examples of this sort of approach are the SOAR architecture championed at the conference by John Laird and his students, and my own OpenCog Prime and Novamente Cognition Engine architectures, both of which aim to embed program learning in more general frameworks. It will be interesting to see how the program learning theme continues to develop as the AGI field advances.


Ray Kurzweil, one of the conference sponsors, funded the “Kurzweil Best AGI Paper Prize,” which provided a $1000 check to the winner and $100 checks to two runners-up. Both the winner and the first runner-up were from Germany and, interestingly, both of these papers focused on automated program learning. The winner of the Kurzweil Best AGI Paper Award was titled “Combining Analytical and Evolutionary Inductive Programming”, by Neil Crossley, Emanuel Kitzelmann, Martin Hofmann and Ute Schmid, from the Cognitive Systems Group at the University of Bamberg, who work in the AI tradition of “inductive programming.”

The novel contribution made by these researchers was to find a way to make two different kinds of program learning algorithm work well together: evolutionary learning (which is based on emulating natural selection, i.e. “evolving” programs according to specified fitness functions), and analytical learning, which is based on formal reasoning. Making these two kinds of learning work together has often stymied AI researchers, because the willy-nilly chaos of evolution tends to scramble the carefully configured conclusions produced by analytic learning systems. But these researchers figured out a way to reformulate the conclusions of analytic learning, in a way that doesn’t get scrambled by the evolutionary process, but rather gives evolutionary learning a head start, allowing it to proceed more effectively than if it were operating without coupling to an analytic process.
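The seeding idea can be caricatured with a toy (1+1) evolutionary search over bitstrings. The “analytic seed” below is a hypothetical stand-in for a program fragment derived by formal reasoning: its first half is already provably correct, so evolution starts from verified ground rather than scrambling it, and only has to fill in the rest.

```python
import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]

def fitness(candidate):
    """Number of positions where the candidate matches the target."""
    return sum(c == t for c, t in zip(candidate, TARGET))

def evolve(seed_candidate, rng, max_gens=2000):
    """(1+1) evolutionary search: flip one random bit per generation and
    keep the child if it is at least as fit. Returns generations used."""
    current = list(seed_candidate)
    for gen in range(max_gens):
        if current == TARGET:
            return gen
        child = list(current)
        i = rng.randrange(len(child))
        child[i] ^= 1          # point mutation
        if fitness(child) >= fitness(current):
            current = child
    return max_gens

# Purely random starting point vs. an analytically seeded one whose
# first half is already guaranteed correct.
random_start = [random.Random(42).randrange(2) for _ in TARGET]
analytic_seed = TARGET[:8] + [0] * 8

print(evolve(random_start, random.Random(0)),
      evolve(analytic_seed, random.Random(0)))
```

Because the acceptance rule never discards an improvement, the correct bits contributed by the seed are never scrambled — a crude analogue of the Bamberg group’s insight that analytic conclusions must be encoded in a form the evolutionary process preserves.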

The first runner-up was a very deep paper by Marcus Hutter, entitled “Feature Markov Decision Processes,” which describes a new approach to the general problem of learning programs based on reinforcement from the environment (using a special way of extracting features from the AI’s history of states, actions, and rewards). One of the reviewers of his paper suggested that it was “likely to give rise to a whole new subfield of AI, pursuing various applications and specializations.”

In a completely different vein, the second runner-up (“Everyone’s a Critic: Memory Models and Uses for an Artificial Turing Judge,” by W. Joseph MacInnes, Blair C. Armstrong, Dwayne Pare, George S. Cree and Steve Joordens) dealt with the Turing Test, the classic test for evaluating human-level AI, in which an AI must carry out a textual conversation with a set of humans, and fool them into thinking it’s a human. These authors turned the problem around, and created a software algorithm with the purpose of serving as the judge in a Turing test, and trying to tell the difference between humans and AIs based on their conversations. Despite relying on relatively simple methods, their algorithm was able to perform roughly as well as humans as a Turing test judge.
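The authors’ memory-model judge is far richer than this, but the flavor of an algorithmic Turing judge can be conveyed with a toy sketch of my own: score a transcript’s lexical diversity and classify repetitive, template-like output as machine-generated. The feature and threshold here are illustrative assumptions, not the paper’s method.

```python
def type_token_ratio(text: str) -> float:
    """Distinct words divided by total words: a crude proxy for the
    variability a human conversant tends to show."""
    words = text.lower().split()
    return len(set(words)) / len(words) if words else 0.0

def judge(transcript: str, threshold: float = 0.5) -> str:
    """Toy Turing judge: template-like, repetitive replies score low
    on lexical diversity and get flagged as machine output."""
    return "human" if type_token_ratio(transcript) >= threshold else "machine"

chatbot = "i see . tell me more . i see . tell me more . i see . interesting ."
person = "well , my cat knocked the router off the shelf again , so the wifi died mid-call ."

print(judge(chatbot), judge(person))
```

A single surface feature like this is easily gamed, which is precisely why the paper’s judge draws on models of human memory rather than word counts alone.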


The final day of the conference – organized by J. Storrs Hall together with yours truly — was a workshop devoted to “The Future of AGI.” Discussions ranged far and wide, with talk titles like “Unethical but Rule-Bound AI Would Kill Us All” (by Selmer Bringsjord from RPI) and “When Robots Do All the Jobs, What Will People Do?” (by James Albus from NIST and George Mason University).

Bringsjord put forth the controversial position that the best hope of achieving beneficial AGI in the future is to make sure that AGIs operate in a mathematically sound way, so that they can be formally proved to obey their specifications. A number of audience members took exception to this, arguing that the formulation of real-world requirements as formal specifications would be sufficiently problematic as to render provable soundness a poor guarantee.

Albus’s talk hit a raw nerve with many attendees, given the current world economic situation. The recent increases in unemployment experienced in the US and many other nations make it particularly piquant to contemplate a future in which nearly all humans are unemployed due to the advent of AGIs that do their jobs better. With short-term economic concerns so urgent, it may seem that the future economic impact of AGIs is not the most pressing matter. But Albus feels differently: “Human-level AGI has the potential to create sufficient economic wealth to make us all rich. However, the political will and investment capital to realize this potential will not emerge until we get a satisfactory answer to the question: How will ordinary people get income if most of the economic wealth is created by robots and intelligent systems?” His own solution is proposed in his book People’s Capitalism: The Economics of the Robot Revolution, published in 1978 and available online; and it turns out his ideas have particular relevance to the current economic crisis.

In essence, Albus proposes that — as AGIs gradually take over the economy — the government should gradually transform the population from workers into investors, by giving each citizen an investment account that they can manage themselves, in a manner inspired by current mutual funds. This has the advantage of keeping people emotionally and intellectually vested in the development of new technology even as their labor is less and less required. And while there is no indication that the Obama administration is aware of Albus’s ideas, it occurred to several conference participants that they might provide some interesting inspiration right now in 2009. A partial alternative to bailing out large banks might be to directly place money into investment accounts for individuals to allocate.

Issues closer to the heart of AGI research itself also got some attention in a session led by Itamar Arel from the University of Tennessee, who is leading an effort called the “AGI Roadmap,” oriented toward aligning as many AGI researchers as possible in roughly the same direction, and laying out a plan for collaborative development over the coming years. Creating an AGI Roadmap is a tall order given the diversity of approaches at play in the field. But as Arel notes, creating a roadmap will almost surely be worth the effort, as it has strong potential to “help focus the community’s research efforts, promote common terminology and improve its positioning — all of which are critical for a relatively young scientific field.” J. Storrs Hall, who recently played a leading role in the creation of the Nanotechnology Roadmap (he is now the director of the Foresight Nanotech Institute), contributed words of wisdom regarding the roadmapping process, emphasizing the value even of simple aspects of the process, like getting various researchers to agree on a common end goal for their work.

The roadmap discussion echoed themes that had been heard two days earlier, in the first technical session of the conference, chaired by John Laird from the University of Michigan, which focused on “Evaluation and Metrics for Human-Level AGI.” As Laird and the other presenters in his session highlighted, one of the key issues in creating an AGI Roadmap is measuring incremental progress. It’s not so hard to tell when you’ve got an AGI with human-level or superhuman capabilities, but how do you tell when you’re 10% or 50% of the way there? Potentially, a system that’s 50% of the way to being a human-level AGI might not do anything useful at all. After all, young children and mentally impaired people have almost all the same machinery as normal adults, yet differ vastly in terms of functionality. One suggestion in this direction, which I made in a talk I gave in Laird’s session entitled “AGI Preschool,” is to create online virtual worlds emulating human preschools, and focus on teaching young AGI systems to proceed roughly through the same milestones we expect of our own children.

I will give the penultimate word to Moshe Looks, an AI researcher at Google, who presented a talk on his work in automated program learning, which has AGI aspirations and is also of interest to Google in the context of statistical language modeling (an area in which Google has no small stake): “There is definite momentum gathering behind the core ideas of AGI in the general AI and machine learning communities. They are taken more seriously and are more widely discussed than even a few years ago.”

My (hardly unbiased) opinion is that this trend will continue, and AGI-10 and each succeeding conference will be progressively more interesting and exciting … and we’ll know we’ve succeeded when, one year, one of the conference participants is an AGI presenting its own original research.

