Pei Wang on the Path to Artificial General Intelligence

Supposing a technological Singularity or something like it does occur later in this century – what’s likely to be the main technology pushing it forward?

Like Ray Kurzweil and many others, I believe the answer is: Artificial General Intelligence.  Other technologies will surely play large roles, but what will really push us over the threshold and radically transform our world will be the emergence of engineered minds with general intelligence significantly greater than our own.

Nobody fully understands AGI yet – but there’s an active community of researchers hammering away at the problem every day.  And while this community shares a passion for AGI and a belief in the tractability of the problem, it’s also characterized by a wild diversity of perspectives on nearly every relevant technical and conceptual issue.

As an AGI researcher I have my own strong opinions about the best path to creating powerful AGI, but nevertheless, I always find it interesting to probe the different views of other AGI researchers.  This is why, back in 2009, I collaborated with Seth Baum and my father in creating a survey of expert opinion on the timeline to AGI.  And it’s why this year I’ve decided to conduct a series of in-depth interviews with a handful of AGI researchers, poking more deeply into some issues concerning the field.

This first interview in the series is with Dr. Pei Wang, who is one of the other AGI researchers I’ve worked with most closely.

After getting his computer science education in China, Pei came to the US to complete his PhD with the famed writer and cognitive scientist Douglas Hofstadter (who no longer considers himself an AI researcher).   Since that time he’s held a variety of university and industry positions, including a stint working for me in the dot-com era (he was Director of Research at the New York AI firm Webmind Inc., where I was CTO and co-founder).  Currently he’s on the computer science faculty at Temple University, pushing ahead his approach to AGI based on his NARS (Non-Axiomatic Reasoning System).   While NARS is quite different from my own OpenCog approach to AGI, it was part of the inspiration for OpenCog’s PLN reasoning system, and it’s an approach I respect considerably.

Those with a technical bent may enjoy Pei’s 2006 book Rigid Flexibility: The Logic of Intelligence. Or for a briefer summary, see Pei’s Introduction to NARS or his talk from the 2009 AGI Summer School.

Ben: I’ll start out with a more “political” sort of question, and then move on to the technical stuff afterwards….

One of the big complaints one always hears among AGI researchers is that there’s not enough funding in the field.  And indeed, if you look at it comparatively, the amount of effort currently going into the creation of powerful AGI right now is really rather low, relative to the magnitude of benefit that would be obtained from even modest success, and given the rapid recent progress in supporting areas like computer hardware, computer science algorithms and cognitive science.  To what do you attribute this fact?

I note that we spend a lot of money, as a society, on other speculative science and engineering areas, such as particle physics and genomics — so the speculative nature of AGI can’t be the reason, in itself, right?

Pei: We addressed this issue in the Introduction we wrote together for the book Advances in Artificial General Intelligence, back in 2006, right?  We listed a long series of common objections given by skeptics about AGI research, and argued why none of them hold water.

Ben: Right, the list was basically like this:

* AGI is scientifically impossible

* There’s no such thing as “general intelligence”

* General-purpose systems are not as good as special-purpose ones

* AGI is already included in current AI

* It’s too early to work on AGI

* AGI is nothing but hype; there’s no science there

* AGI research is not fruitful; it’s a waste of effort

* AGI is dangerous; even if you succeed scientifically you may just destroy the world

I guess the hardest one of these to argue against is the “too early” objection.  The only way we can convincingly prove to a skeptic that it’s not too early is to actually succeed at building an advanced AGI.   There’s so much skepticism about AGI built up, due to the failures of earlier generations of AI researchers, that no amount of theoretical argumentation is going to convince the skeptics, even though we now have a much better understanding of the problem, much better computers, much better neuroscience data, much better algorithms, and so forth.

Still, I think the situation has improved a fair bit – after all, we now have AGI conferences every year, a journal dedicated to this topic, and there are at least special sessions on human-level AI and so forth within the mainstream AI conferences.  More so than 10 or even 5 years ago, you can admit you work on AGI and not get laughed at.  But still, the funding isn’t really there for AGI research, the way it is for fashionable types of narrow AI.

What do you think – do you think the situation has improved since we wrote that chapter in 2006?

Pei: It has definitely improved, though not too much.

Within the fields of artificial intelligence and cognitive science, I think the major reason for the lack of effort in the direction of AGI right now is the well-known difficulty of the problem.  I suppose that’s much the same as the “too early” objection that you mention.  To most researchers, it seems very hard, if not impossible, to develop a model of intelligence as a whole.  Since there is no basic consensus on the specific goal, theoretical foundation, and development methodology of AGI research, it is even hard to set up milestones to evaluate partial success. This is a meta-level problem not faced by the other areas you mentioned – physics and biology and so forth.

Furthermore, in those other areas there are usually few alternative research paths — whereas in artificial intelligence and cognitive science, researchers can easily find many more manageable tasks to work on, by focusing on partial problems, and still get recognition and rewards.  Actually the community has been encouraging this approach by defining “intelligence” as a loose union of various “cognitive functions”, rather than a unified phenomenon.

Ben: One approach that has been proposed for surmounting the issues facing AGI research is to draw on our (current or future) knowledge of the brain.   I’m curious to probe into your views on this a bit.

Regarding the relationship between neuroscience and AGI, a number of possibilities exist.  For instance, one could propose:

A) to initially approach AGI via making detailed brain simulations, and then study these simulated human brains to learn the principles of general intelligence, and create less humanlike AGIs after that based on these principles; or

B) to thoroughly understand the principles of human intelligence via studying the brain, and then use these principles to craft AGI systems with a general but not necessarily detailed similarity to the brain; or

C) to create AGI systems with only partial resemblance to the human brain/mind, based on integrating our current partial knowledge from neuroscience with knowledge from other areas like psychology, computer science and philosophy; or

D) to create AGI systems based on other disciplines, without paying significant attention to neuroscience data.

I wonder, which of these four approaches do you find the most promising, and why?

Pei: My approach is roughly between the above D and C. Though I have gotten inspiration from neuroscience on many topics, I do not think building a detailed model of the neural system is the best way to study intelligence.

I said a lot about this issue in the paper “What Do You Mean by AI?” in the proceedings of the AGI-08 conference.  So now I’ll just briefly repeat what I said there.

The best known object showing “intelligence” is undoubtedly the human brain, so AI must be “brain-like” in some sense. However, like any object or process, the human brain can be studied and modeled at multiple levels of description, each with its own vocabulary, which specifies its granularity, scope, visible phenomena, and so on. Usually, for a given system, lower-level descriptions say more about its internal structure, while higher-level descriptions say more about its overall regularity and outside interaction. No level is “more scientific” or “closer to reality” than the others, in the sense that all the other levels could be reduced to it or summarized by it.  As scientific research, study at any level can produce valuable theoretical and practical results.

However, what we call “intelligence” in everyday language is more directly related to a higher level of description than the level provided by neuroscience. Therefore, though neuroscientific study of the brain can gradually provide more and more details about the mechanism that supports human intelligence, it is not the most direct approach toward the study of intelligence, because its concepts often focus on human-specific details, which may be neither necessary nor possible to realize in machine intelligence.

Many arguments supporting the neuroscientific approach toward AGI are based on the implicit assumption that the neuro-level description is the “true” or “fundamental” one, and all higher-level descriptions are its approximations. This assumption is wrong for two reasons: (1) Such a strong reductionist position has been challenged philosophically, and it is practically impossible, just as no one really wants to design an operating system as a string of binary code, even though “in principle” it is possible. (2) Even according to such a strong reductionist position, the “neural level” is not the lowest level, since it is not hard to argue for contributions to intelligence from non-neuronal cells, or from non-cellular parts or processes of the human body.

Ben: Another important conceptual question has to do with the relationship between mind and body in AGI….

In some senses, human intelligence is obviously closely tied to human embodiment — a lot of the brain deals with perception and action, and it seems that young children spend a fair bit of their time getting better at perceiving and acting.  This brings up the question of how much sense it makes to pursue AI systems that are supposed to display roughly human-like cognition but aren’t connected with roughly humanlike bodies.  And then, if you do believe that giving an AI a human-like body is important, you run up against the related question of just how human-like an AI body needs to be, in order to serve as an appropriate vessel for a roughly human-like mind.

(By a “roughly human-like mind” I don’t mean a precisely simulated digital human, but rather a system that implements the core principles of human intelligence, using structures and processes with a reasonable conceptual correspondence to those used by the human mind/brain.)

Pei: Once again, it depends on “What do you mean by AI?” Even the above innocent-looking requirement of “a reasonable conceptual correspondence to those used by the human mind/brain” may be interpreted very differently in this context. If it means “to respond like a human to every stimulus”, as suggested by using the Turing Test to evaluate intelligence, then the system not only needs a (simulated) human body, but also a (simulated or not) human experience. However, as I argued in the AGI-08 paper, if “intelligence” is defined on a more abstract level, as a human-like experience-behavior relation, then a human-like body won’t be required. For example, an AGI system does not need to feel “hungry” in the usual sense of the word (which requires a simulated human body), but it may need to manage its own energy repository (which does not). This difference will surely lead to differences in behavior, and whether such a system is still “intelligent” depends on how “human-like” is interpreted.

Ben: Hmmm… I understand; but this doesn’t quite fully answer the question I was trying to ask.

My point was more like this: Some researchers argue that human thinking is fundamentally based on a deep and complex network of analogies and other relationships to human embodied experience.  They argue that our abstract thinking is heavily guided by a sort of visuomotor imagination, for example, and that our reasoning even about abstract things like mathematics or love is based on analogies to what we see and do with our bodies.  If that’s the case, then an AGI without a humanlike body might not be able to engage in a humanlike pattern of thinking.

Pei: The content of human thinking depends on human embodied experience, but the mechanism of human thinking doesn’t (at least not necessarily so).

If a robot has no vision, but has advanced ultrasonic sensation, then, when the system has AGI, it will develop its own concepts based on its own experience. It won’t fully understand human concepts, but we cannot fully understand its concepts, either. Such a system can develop its own “math” and other abstract notions, which may partially overlap with ours, though not completely. According to my definition, such a system can be as “intelligent” as a human, since its experience-behavior relationship is similar to ours (though not the experience, or the behavior, separately). By “abstract”, I mean the meta-level mechanisms and processes, not the abstract part of its object-level content.

Ben: Yes, but … I’m not sure it’s possible to so strictly draw a line between content and mechanism…

Pei: Of course, it is a matter of degree, but to a large extent the distinction can be made. On a technical level, this is why I prefer the “reasoning system” framework — here “object language” and “meta language” are clearly separated.

Ben: The mechanism of human thinking is certainly independent of the specific content of the human world, but it may be dependent in various ways on the “statistics” (for lack of a better single word) of the everyday human world.

For instance, the everyday human world is full of hierarchical structures; and it’s full of solid objects that interact in a way that lets them maintain their independence (very different from the world of, say, a dolphin or an intelligent gas cloud on Jupiter – I wrote an article last year speculating on the properties of intelligences adapted for fluid environments).   And the brain’s cognitive mechanisms may be heavily adapted to properties like these, which characterize the everyday human world.  So one line of thinking would be: if some AGI’s environment lacks the same high-level properties as the everyday human world, then the cognitive mechanisms that drive human-level intelligence may not be appropriate for that AGI.

Pei: Can you imagine an intelligence, either in the AI context or the alien-intelligence context, having sensors and motors very different from ours? If the answer is yes, then the content of their minds will surely be different from ours. However, we would still consider them “intelligent”, because they can also adapt to their environment, solve their problems, etc., and their adaptation and problem-solving mechanisms should be similar to ours (at least I haven’t seen why that is not the case) — all intelligent systems need to summarize their experience, and use the patterns observed to predict the future.

I agree with you that in different environments, the most important “patterns” may be different, which will in turn favor different mechanisms. It is possible. However, the opposite is equally possible (and also interesting to me): that no matter which environment, the major mechanism for adaptation, recognition, problem-solving, etc., is basically the same, and its variations can be captured as different parameter settings. This mechanism is what “intelligence” means to me, not the concrete beliefs, concepts, and skills of the system, which depend on the concrete body and environment.

Ben: On the other hand, a virtual world like Second Life also has a sort of hierarchical structure (though not as rich or as deeply nested as that of the everyday human physical world), and also has solid objects — so for the particular two high-level properties I mentioned above, it would potentially serve OK….

Pei: Sure, though there is a difference: the hierarchical structures in the natural world are largely the result of our mental reconstruction from our experience, so there is no “correct answer”, while the current virtual worlds are not that rich (which may change in the future, of course).

Ben: Also, one could argue that some cognitive mechanisms only work with a very rich body of data to draw on, whereas with a data-poor environment they’ll just give nonsense results.  For instance (and I say this knowing you don’t consider probability central to AGI, but it’s just an example), some probabilistic methods require a large number of cases in order to meaningfully estimate distributions….  In that case, such a cognitive mechanism might not work well for an AI operating in Second Life, because of the relative poverty of the data available…

Pei: In principle, I agree with the above statement, but I don’t think it means that we cannot distinguish mechanism from content.

Ben: So am I correct in understanding that, in your view, the same basic cognitive mechanisms are at the core of AGI no matter what the high-level properties of the environment are (not just no matter what its specific content is)?

Pei: Yes, though I don’t mean that they are the best solution to all practical problems. In certain environments (as you mentioned above), some less intelligent mechanisms may work better.

Ben: Hmmmm… or, alternately, do you think that the cognitive mechanisms of general intelligence are tied to some high-level properties of the environment, but that these properties are so high-level that any environment one gives an AGI system is likely to fulfill them?

Pei: I’d say that intelligence works in “most interesting environments”. If the environment is constant, then traditional computational models are better than intelligent ones; at the other extreme, if the environment is purely arbitrary, and no patterns can be recognized in meaningful time, intelligence is hopeless. However, since I’m not looking for a mechanism that is optimal in all environments, this is not an issue for me.

Ben: OK, well I don’t think we can go much deeper in that direction without totally losing our audience!  So I’ll veer back toward the “politics” side of things again for a bit….  Back to the nasty business of research funding! ….

As we both know, “narrow AI” research (focusing on AI programs that solve very specific tasks and don’t do anything else, lacking the broad power to generalize) gets a lot more attention and funding than AGI these days.  And in a sense this is understandable, since some narrow AI applications are delivering current value to many people (e.g. Internet search, financial and military applications, etc.).   Some people believe that AGI will eventually be achieved via incremental improvement of narrow-AI technology — i.e. that narrow AI can gradually become better and better by becoming broader and broader, until eventually it’s AGI.   What are your views on this?

Pei: “Narrow AI” and “AGI” are different problems, to a large extent, because they have different goals, methods, evaluation criteria, applications, etc., even though they are related here and there. I don’t think AGI can be achieved by integrating the existing AI results, though these tools will surely be useful for AGI.

Ben: Yeah, as you know, I agree with you on that….  Narrow AI has yielded some results and even some software that can be used in building AGI applications, but the core of an AGI system has got to be explicitly AGI-focused … AGI can’t be cobbled together from narrow AI applications.  I suppose this relates to the overall need to ground AGI work in a well-thought-out philosophy of intelligence.

Pei: Yes.  One problem slowing down progress toward AGI, I think, has been a neglect among AI researchers of a related discipline: philosophy. Most major mistakes in the AGI field come from improper philosophical assumptions, which are often implicitly held.  Though there is a huge literature on the philosophical problems in AI, most of the discussions there fail to touch the most significant issues in the area.

Ben: So let’s dig a little into the philosophy of mind and AI, then….  There’s one particular technical and conceptual point in AGI design I’ve been wrestling with lately in my own work, so it will be good to get your feedback.

Human minds deal with perceptual data and they also deal with abstract concepts.  These two sorts of entities seem very different, on the surface — perceptual data tends to be effectively modeled in terms of large sets of floating-point vectors with intrinsic geometric structure; whereas abstract concepts tend to be well modeled in symbolic terms, e.g. as semantic networks or uncertain logic formulas or sets of various sorts, etc.   So, my question is, how do you think the bridging between perceptual and conceptual knowledge works in the human mind/brain, and how do you foresee making it work in an AGI system?  Note that the bridging must go both ways – not only must percepts be used to seed concepts, but concepts must also be used to seed percepts, to support capabilities like visual imagination.

Pei: I think I look at it a little differently than you do.  To me, the difference between perceptual and conceptual knowledge is only “on the surface”, and the “vectors vs. symbols” distinction merely shows the choices made by previous researchers. I believe we can find unified principles and mechanisms at both the perceptual level and the conceptual level, forming a continuous and coherent “conceptual hierarchy”. Here “conceptual” means “can be recognized, recalled, and manipulated as a unit within the system”, so in this broad sense, various types of percepts and actions can all be taken as concepts. Similarly, perceptual and conceptual knowledge can be uniformly represented as specialization-generalization relations among concepts, so as to treat perception and cognition both as processes in which one concept is “used as” another in a certain way.

According to this view the distinction between perceptual and conceptual knowledge still exists, but only because in the conceptual hierarchy certain concepts are closer to the sensors and actuators of the system, while others are closer to the words of the system’s communication languages. Their difference is relative, not absolute. They do not need to be handled by separate mechanisms (even with a bridge in between), but by a uniform mechanism (though with variations in detail when it is applied to different parts of the system).
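(To make the uniform-representation idea a bit more concrete for readers, here is a minimal illustrative sketch in Python. It is not actual NARS code, and the class and function names are invented for this example; it simply shows percepts and abstract terms represented as the same kind of concept node, linked only by specialization-generalization relations, with both “perception” and “reasoning” reduced to asking whether one concept can be “used as” another.)

```python
class Concept:
    """One node in the conceptual hierarchy: a percept, an action, or an
    abstract term are all represented the same way (illustrative only)."""
    def __init__(self, name):
        self.name = name
        self.generalizations = set()  # concepts that this concept specializes

    def specializes(self, other):
        """Record a specialization-generalization link: self is a special case of other."""
        self.generalizations.add(other)


def can_be_used_as(concept, target, visited=None):
    """Both 'perception' and 'cognition' in this toy model: can `concept`
    be used as `target`, following specialization links upward?"""
    if visited is None:
        visited = set()
    if concept is target:
        return True
    visited.add(concept)
    return any(can_be_used_as(g, target, visited)
               for g in concept.generalizations
               if g not in visited)


# Sensor-near and language-near concepts sit in one hierarchy.
red_patch = Concept("red-patch-from-camera")  # close to the sensors
red = Concept("red")                          # closer to the communication language
color = Concept("color")
red_patch.specializes(red)
red.specializes(color)

print(can_be_used_as(red_patch, color))  # True: the percept is "used as" a color
print(can_be_used_as(color, red_patch))  # False: the relation is directional
```

(In NARS itself, of course, such inheritance relations carry evidential truth values and are processed by the rules of a non-axiomatic logic, rather than by a simple boolean lookup like the one above.)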

Ben: Certainly that’s an elegant perspective, and will be great if it works out.  As you know my approach is more heterogeneous – in OpenCog we use different mechanisms for perception and cognition, and then interface them together in a certain way.

To simplify a bit, it feels to me like you begin with cognition and then handle perception using mechanisms mainly developed for cognition.  Whereas, say, Itamar Arel or Jeff Hawkins, in their AGI designs, begin with perception and then handle cognition using mechanisms mainly developed for perception.  On the other hand, those of us with fundamentally integrative designs, like my OpenCog group or Stan Franklin’s LIDA approach, start with different structures and algorithms for perception and cognition and then figure out how to make them work together.  I tend to be open-minded and think any of these approaches could potentially be made to work, even though my preference currently is for the integrative approach.

So anyway, speaking of your own approach — how is your own AGI work on NARS going these days?  What are the main obstacles you currently face?  Do you have the sense that, if you continue with your present research at the present pace, your work will lead to human-level AGI within, say, a 10 or 20 year timeframe?

Pei: My project NARS has been going on according to my plan, though the progress is slower than I hoped, mainly due to limited resources.

What I’m working on right now is: real-time temporal inference, emotion and feeling, self-monitoring and self-control.

If it continues at the current pace, the project, as currently planned, can be finished within 10 years, though whether the result will have “human-level AGI” depends on what that phrase means — to me, it will.

Ben: Heh….  Your tone of certainty surprises me a little.  Do you really feel like you know for sure that it will have human-level general intelligence, rather than needing to determine this via experiment?  Is this certainty because you are certain your theory of general intelligence is correct and sufficient for creating AGI, so that any AGI system created according to this theory will surely have human-level AGI?

Pei: According to my theory, there is no absolute certainty on anything, including my own theory!

What I mean is: according to my definition of “intelligence”, I currently see no major remaining conceptual problem.  Of course we still need experiments to resolve the relatively minor (though still quite complicated) remaining issues.

Ben: Indeed, I understand, and I feel the same way about my own theory and approach!  So now I guess we should stop talking and get back to building our thinking machines.  Thanks for taking the time to dialogue, I think the result has been quite interesting.
