Itamar Arel on the Path to Artificial General Intelligence

Whether or not you agree with me that Artificial General Intelligence is likely to be the core technology driving progress in the next century — I imagine you’ll concur that IF we could achieve advanced AGI, the implications would be pretty damn profound.

So it’s exciting that an increasing number of AI researchers believe we could have human-level AGI within decades (or even years), not centuries.

But even the optimists in the AGI research community show little agreement on the optimal path for getting to their common end goal.

As an AGI researcher myself, I have my own fairly strong views on the best path to AGI – see the OpenCog website if you’re curious about the details.  But even so, I’m always curious to probe the views of other AGI researchers, and see what I can learn.  So this year I’ve decided to conduct some in-depth interviews with a handful of other AGI researchers.  The first in the series was an interview with Dr. Pei Wang from Temple University [INSERT LINK].  This one is the second, with Dr. Itamar Arel, who runs the Machine Intelligence Lab at the University of Tennessee and also co-founded the Silicon Valley AI startup Binatix Inc.

Though he started his career with a focus on electrical engineering and chip design, Itamar has been pursuing AGI for many years now, and has created a well-refined body of theory as well as some working proto-AGI software code.  In recent years he’s begun to reach out to the futurist community as well as the academic world, speaking at the 2009 Singularity Summit in New York and the 2010 H+ Summit @ Harvard.

I discovered Itamar’s work in 2008 with great enthusiasm, because I had always been interested in Jeff Hawkins’ Hierarchical Temporal Memory approach to machine perception, but I’d been frustrated with some of the details of Hawkins’ work.  It appeared to me that Itamar’s ideas about perception were conceptually somewhat similar to Hawkins’, but that Itamar had worked out many of the details in a more compelling way.  Both Itamar’s and Hawkins’ approaches to perception involve hierarchical networks of processing elements, which self-organize to model the state and dynamics of the world.  But it seemed to me that Itamar’s particular equations were more likely to let the flow of information up and down the hierarchy organize the overall network into an accurate world-model.  Further, Itamar had a very clear vision of how his hierarchical perception network would fit into a larger framework involving separate but interlinked hierarchies dealing with actions and goals.  (Another exciting project that has recently emerged in the same general space is that of Hawkins’ former collaborator Dileep George, who has now broken off to start his own company, Vicarious Systems.   But I can’t comment on the particulars of George’s new work, as little has been made public yet.)

I had a chance to work closely with Itamar when we co-organized a workshop at the University of Tennessee in October 2009, at which 12 AGI researchers gathered to discuss the theme of “A Roadmap Toward Human-Level AGI”  (a paper summarizing the findings of that workshop has been submitted to a journal and should appear “any day now”).  I’m also currently exploring the possibility of using his Machine Intelligence Lab’s DeSTIN software together with my team’s OpenCog software, to supply OpenCog with a robot perception module.  So as you can imagine, we’ve had quite a few informal conversations on the ins and outs of AGI over the last couple of years.  This interview covers some of the same ground, but in a more structured way.  If you’re interested in scratching beneath the surface of the AGI vision and understanding more about how contemporary AGI researchers are thinking about the problem, perhaps you’ll find our ruminations of interest!

Ben: Before plunging into the intricacies of AGI research, there are a couple obvious questions about the AGI field as a whole I’d like to get your take on.   First of all the ever-present funding issue….

One of the big complaints one always hears among AGI researchers is that there’s not enough funding in the field.  And for sure, if you look at it comparatively, the amount of effort currently going into the creation of powerful AGI right now is really rather low, relative to the magnitude of benefit that would be obtained from even modest success, and given the rapid recent progress in supporting areas like computer hardware, computer science algorithms and cognitive science.  To what do you attribute this fact?

Itamar: I feel that the answer to this question has to do with understanding the history of AI. When the term AI was first coined, over 50 years ago, prominent scientists believed that within a couple of decades robots would be ubiquitous in our lives, helping us with everyday chores and exhibiting human-level intelligence. Obviously, that hasn’t happened yet. As a result, funding agencies have, to some degree, lost hope in the holy grail of AGI, given the modest progress made toward this goal.

Ben: So the current generation of researchers is paying the price for the limited progress made by prior generations – even though now we have so much more understanding and so much better hardware….

Itamar: That’s correct. Historical under-delivery on the promise of achieving AGI is probably the most likely reason for the marginal funding being directed at AGI research. Hopefully, that will change in the near future.

Ben: Some people would say that directing funding to AGI isn’t so necessary, since “narrow AI” research (focusing on AI programs that solve very specific tasks and don’t do anything else, lacking the broad power to generalize) is reasonably well funded.  And of course the funding directed to narrow AI is understandable, since some narrow AI applications are delivering current value to many people (e.g. Internet search, financial and military applications, etc.).   So, these folks would argue that AGI will eventually be achieved via incremental improvement of narrow-AI technology — i.e. that narrow AI can gradually become better and better by becoming broader and broader, until eventually it’s AGI.   So then there’s no need for explicit AGI funding.  What are your views on this?

Itamar: I think my view on this is fairly similar to yours.  I believe the challenges involved in achieving true AGI go well beyond those imposed by any narrow AI domain. While separate components of an AGI system can be scaled down and applied to narrow AI problem domains, such as a scalable perception engine (the focus of much of my current work), it is only in the integration and the scaling of the system as a whole that AGI can be demonstrated. From a pragmatic standpoint, developing pieces of AGI separately and applying them to problems that are more easily funded is a valid approach.

Ben: But even if you do use narrow AI funding to develop pieces of your AGI system, there’s still going to be a chunk of work that’s explicitly focused on AGI, and doesn’t fit into any of the narrow AI applications, right?   This seems extremely obvious to me, and you seem to agree — but it’s surprising that not everybody sees this.

Itamar: I think that due to the misuse of the term AI, the public is somewhat confused about what AI really means. The majority of people have been led to believe that rule-based expert systems, such as those driving modern games, are actually “AI”. In general, I suspect there is little trust in the premise of AGI, to the point where people probably associate such technology with movies and fiction.

Ben: One AGI-related area that’s even better-funded than narrow AI is neuroscience.  So, one approach that has been proposed for surmounting the issues facing AGI research, is to piggyback on neuroscience, and draw on our (current or future) knowledge of the brain.   I’m curious to probe your views on this a bit.

Itamar: While reverse-engineering the brain is, almost by definition, a worthy approach, I believe it is one that will take at least a couple of decades to bear fruit. The quantity and diversity of low-level neural circuitry that needs to be deciphered and modeled is huge. On the other hand, there is vast neuroscience knowledge today from which we can draw inspiration and attempt to design systems that mimic the concepts empowering the mammalian brain. Such biologically-inspired work is picking up fast and can be seen in areas such as reinforcement learning and deeply-layered machine learning. Moreover, one can argue that while the mammalian brain is our only existing instantiation of an intelligent system, it is most likely not an optimal (or the only possible) one. Thus, I believe being inspired by the brain, rather than trying to accurately reverse-engineer it, is a more pragmatic path toward achieving AGI in the foreseeable future.

Ben: Another important, related conceptual question has to do with the relationship between mind and body in AGI….

In some senses, human intelligence is obviously closely tied to human embodiment — a lot of the brain deals with perception and action, and it seems that young children spend a fair bit of their time getting better at perceiving and acting.  This brings up the question of how much sense it makes to pursue AI systems that are supposed to display roughly human-like cognition but aren’t connected with roughly humanlike bodies.  And then, if you do believe that giving an AI a human-like body is important, you run up against the related question of just how human-like an AI body needs to be, in order to serve as an appropriate vessel for a roughly human-like mind.

Itamar: What makes AGI truly challenging is its scalability properties.  Machine learning has delivered numerous self-learning algorithms that exhibit impressive properties and results when applied to small-scale problems.  The real world, however, is extremely rich and complex.  The spatiotemporal patterns that we as humans are continuously exposed to, and that help us understand the world with which we interact, are a big part of what makes achieving AGI a colossal challenge. To that end, while it is very possible to design, implement and evaluate AGI systems in simulated or virtual environments, the key issue is that of scalability and complexity. A physical body, facilitating physical interaction with the real world, inherently offers rich stimuli from which the AGI system can learn and evolve. Such richness remains difficult to attain in simulated environments.

Ben: OK, so if we do interface our AGI systems with the physical world in a high-bandwidth way, such as via robotics – as you suggest — then we run up against some deep conceptual and technical questions, to do with the artificial mind’s mode of interpreting the world.

This brings us to a particular technical and conceptual point in AGI design I’ve been wrestling with lately in my own work …

Human minds deal with perceptual data and they also deal with abstract concepts.  These two sorts of entities seem very different, on the surface — perceptual data tends to be effectively modeled in terms of large sets of floating-point vectors with intrinsic geometric structure; whereas abstract concepts tend to be well modeled in symbolic terms, e.g. as semantic networks or uncertain logic formulas or sets of various sorts, etc.   So, my question is, how do you think the bridging between perceptual and conceptual knowledge works in the human mind/brain, and how do you foresee making it work in an AGI system?  Note that the bridging must go both ways – not only must percepts be used to seed concepts, but concepts must also be used to seed percepts, to support capabilities like visual imagination.
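To make the contrast concrete for readers, here is a minimal toy sketch of the two kinds of knowledge.  It is invented purely for illustration: the feature values and semantic links below are made up, and this is not code from DeSTIN, OpenCog, or any real system.

```python
# Illustrative toy code only: the features and links below are invented.
import math

# Perceptual knowledge: a percept as a floating-point feature vector,
# with intrinsic geometric structure (distances between vectors are meaningful).
def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

red_apple_percept   = [0.9, 0.1, 0.1, 0.7]  # hypothetical color/shape features
green_apple_percept = [0.1, 0.8, 0.1, 0.7]

# Conceptual knowledge: a concept as a node in a semantic network,
# i.e. a set of discrete, labeled, weighted links to other concepts.
semantic_net = {
    "apple": {("is_a", "fruit"): 0.99, ("has_color", "red"): 0.6},
    "fruit": {("is_a", "food"): 0.95},
}

# Percepts relate by geometric proximity; concepts relate by symbolic links.
percept_distance = euclidean(red_apple_percept, green_apple_percept)
apple_is_fruit = semantic_net["apple"][("is_a", "fruit")]
```

The bridging problem is precisely that nothing in the vector representation tells you which symbolic node it should feed, and vice versa.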

Itamar: I believe that conceptually the brain performs two primary functions: situation inference and mapping of situations to actions.  The first is governed by a perception engine which infers the state of the world from a sequence of observations, while the second maps this inferred state to desired actions. These actions are taken so as to maximize some reward-driven construct. Both of these subsystems are hierarchical in nature and embed layers of abstraction that pertain both to perceptual concepts and to actuation-driven concepts. There is, indeed, continuous interdependency between those two subsystems and the information they represent; however, the hierarchy of abstraction and its resulting representations are an inherent property of the architecture. My feeling is that the theory behind the components that would realize such architectures is accessible today.
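For readers who want a concrete (if drastically simplified) picture of this two-subsystem decomposition, here is a toy reinforcement-learning-style loop.  The environment, the situations, and the update rule are all invented for the example; they illustrate the general inference-plus-control pattern, not the actual DeSTIN architecture.

```python
# Toy illustration of "situation inference" + "situation-to-action mapping".
# Everything here (environment, states, update rule) is invented for the example.
import random

random.seed(0)

ACTIONS = ["left", "right"]
value = {}  # (situation, action) -> estimated reward

def infer_situation(observation):
    # Stand-in for a perception hierarchy: collapse a noisy
    # real-valued observation into a discrete situation label.
    return "bright" if observation > 0.5 else "dark"

def choose_action(situation, epsilon=0.1):
    # Mostly exploit the current value estimates, occasionally explore.
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: value.get((situation, a), 0.0))

def environment(situation, action):
    # Hidden rule the agent must discover: go right when bright, left when dark.
    return 1.0 if (situation == "bright") == (action == "right") else 0.0

for _ in range(2000):
    obs = random.random()      # raw sensory input
    s = infer_situation(obs)   # subsystem 1: perception / situation inference
    a = choose_action(s)       # subsystem 2: control / situation-to-action mapping
    r = environment(s, a)
    key = (s, a)
    value[key] = value.get(key, 0.0) + 0.2 * (r - value.get(key, 0.0))

# After learning, the greedy mapping reflects the hidden rule.
best_when_bright = choose_action("bright", epsilon=0.0)
best_when_dark = choose_action("dark", epsilon=0.0)
```

In Itamar's framing, both boxes would be deep hierarchies rather than a one-line lookup and a flat table, and the interesting behavior emerges from their interaction.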

Ben: Hmmm….  You say “I believe that conceptually the brain performs two primary functions: situation inference and mapping of situations to actions.”

Now, it’s moderately clear to me how most of what, say, a rabbit or a dog does can be boiled down to these two functions.  It’s less clear how to boil down, say, writing a sonnet or proving a mathematical theorem to these two functions.  Any comments on how this reduction can be performed?

I suppose this is basically a matter of probing your suggested solution to the good old symbolic/connectionist dichotomy.  Your AI approach is pretty much connectionist — the knowledge represented is learned, self-organized and distributed, and there’s nothing like a built-in logic or symbolic representation system.  So apparently you somehow expect these linguistic and symbolic functions to emerge from the connectionist network of components in your system, as a result of situation inference and situation-action mapping…..  And this is not a totally absurd expectation since a similar emergence apparently occurs in the human brain (which evolved from brains of a level of complexity similar to that of dogs and rabbits, and uses mostly the same mechanisms)….

Itamar: I think you hit the nail on the head: situation inference is a vague term that in practice involves complex, hierarchical and multi-scale (in time and space) abstraction and information representation. This includes the symbolic representations that are needed for a true AGI system. The “logic” part is a bit more complicated, as it resides at the intersection between the two primary subsystems (i.e. inference and control). The control part invokes representations that project to the inference engine, which in turn reflects back to the control subsystem as the latter eventually generates actuation commands. In other words, logical inference (as well as many “strategic” thinking capabilities) should emerge from the interoperability between the two primary subsystems. I hope that makes some sense!

Ben: I understand the hypothesis, but as you know I’m unsure if it can be made to work out without an awful lot of special tweaking of the system to get that emergence to happen.   That’s why I’ve taken a slightly different approach in my own AGI work, integrating an explicitly symbolic component with a connectionist “deep learning component” (which may end up being a version of your DeSTIN system, as we’ve been discussing!).   But I certainly think yours is a very worthwhile research direction and I’m curious to see how it comes out.

So anyway, speaking about that — how is your work going these days?  What are the main obstacles you currently face?  Do you have the sense that, if you continue with your present research at the present pace, your work will lead to human-level AGI within, say, a 10 or 20 year timeframe?

Itamar: In addition to my work at the University of Tennessee, I’m affiliated with a Silicon Valley startup company called Binatix, which aims to develop broadly applicable AGI technology. Currently, the work focuses on a unique perception engine based on deep machine learning, with some very promising results to be made public soon. Down the road, the plan is to integrate a decision-making subsystem as part of the work toward achieving AGI. My personal hope is that, by following this R&D direction, human-level AGI can be demonstrated within a decade.

Ben: You seem to feel you have a pretty solid understanding of what needs to be done to get to human-level AGI along a deep-learning path.  So I’m curious to probe the limits of your knowledge!   What would you say are the areas of AGI and cognitive science, relevant to your own AGI work, that you feel you understand least well and would like to  understand better?

Itamar: I think that the cognitive processes which take place during sleep are critical to learning capabilities, and are poorly understood. The key question is whether sleep (or some analogous state) is a true prerequisite for machine intelligence, or whether the results attained via sleep can be achieved by other means.  This is something I’m working on at present.

Ben: I remember back in the 1990s when I was a cognitive science professor in Perth (in Western Australia), my friend George Christos there wrote a book called Memory and Dreams: The Creative Human Mind, about dreaming and its cognitive importance, and its analogues in attractor neural networks and other AI systems.   He was playing around a lot with neural nets that (in a certain sense) dreamed.  Since then there’s been a lot of evidence that various phases of sleep (including but not only dream sleep) are important for memory consolidation in the human brain.  In OpenCog it’s pretty clear that memory consolidation can be done without a sleep phase.  But yeah, I can see that in your deep learning architecture, which is more brain-like than OpenCog, it’s more of a subtle question – more like with the attractor neural nets George was playing with.

Itamar: Also, I think the question of “measuring” AGI remains an unsolved one, and it deserves more effort on the part of the community. If we hope to be on the path to AGI, we need to be able to measure progress achieved along that path. That requires benchmarking tools, which currently don’t exist. It would be interesting to hear what other AGI researchers think about AGI benchmarking.

Ben: Hmmm….  Well — taking off my interviewer hat and putting on my AGI researcher hat for a moment — I can tell you what I think.  Honestly, my current feeling is that benchmarking is fairly irrelevant to making actual research progress toward AGI.  I think if you’re honest with yourself as a researcher, you can tell if you’re making worthwhile progress or not via careful qualitative assessments, without needing formal rigorous benchmarks.  And I’m quite wary of the way that formulation and broad acceptance of benchmark tests tends to lead to a “bakeoff mentality”  where researchers focus too much on beating each other on test criteria rather than on  making real research progress.

On the other hand, I can certainly see that benchmarking is useful for impressing funding sources and investors and so forth, because it gives a way for them to feel like concrete progress is being made, well before the end goal is achieved.

As you know we discussed these issues at the AGI Roadmap Workshop we co-organized in 2009….  I think we made some good progress toward understanding and articulating the issues involved with assessing progress toward AGI — but we also came up against the sheer difficulty of rigorously measuring progress toward AGI.  In particular it seemed to be very difficult to come up with tests that would usefully measure partial progress toward AGI, in a way that couldn’t be “gamed” by narrow-AI systems engineered for the tests…

It’s been a while since that workshop now — I wonder if you have any new thoughts on what kinds of benchmark tests would be useful in the context of your own AGI research program?

Itamar: Unfortunately, I don’t. Like you, I think that one would know if one were truly far along the path to achieving true AGI.  It’s a bit like being pregnant; you can’t be “slightly” pregnant 😉

Ben: Yeah — and I tend to think that, at a certain point, some AGI researcher – maybe one of us; or maybe the two of us working together, who knows! – is going to produce a demonstration of AGI functionality that will be sufficiently compelling to make a significant percentage of the world feel the same way.  I think of that as an “AGI Sputnik” moment – when everyone can look at what you’ve done and qualitatively feel the promise.  That will mean more than progress along any quantitative benchmarks, I feel.  And after the AGI Sputnik moment happens, things are going to progress tremendously fast – both in terms of funding and brainpower focused on AGI, and (correlatedly) in terms of R&D progress….

Responses

  1. Didier says:

    Shane Legg said he will soon publish some of his work on his AIQ (algorithmic intelligence quotient) . You can watch a video of his speech about AIQ at the singularity summit vimeo dot com/17553536
    AIQ measures the performance of an AGI in a lot of randomly generated environments, just like the environments of my Occam’sRazor game on my web site razorcam dot com (Legg agreed that my game could be a human interface for an AIQ test)

    Also “Measuring universal intelligence: Towards an anytime intelligence test” from Hernández-Orallo has just been published in the high impact journal “Artificial Intelligence”
    Like AIQ , this paper is heavily inspired by AIXI.

  2. Dre says:

    Very interesting conversation. How do you consider Shane Legg’s equation for intelligence in your quest for a means to quantitatively benchmark AGI? I believe that, from the public’s perception, IBM Watson’s debut will be considered progress in AI, even though those in AI understand it to be another form of narrow AI.
