
How Long Till AGI? — Views of AGI-11 Conference Participants

In 2009, at the AGI-09 Artificial General Intelligence conference in Washington DC, Seth Baum, Ted Goertzel and Ben Goertzel gave the attendees a survey on the theme of “How Long Till AGI?” This was a complex, carefully wrought “expert assessment” style survey, in which we asked the participants to specify statistical confidence intervals corresponding to a variety of assertions. The results were written up in an H+ Magazine article and also an academic paper in the journal Technological Forecasting and Social Change.

Responses to the survey were all over the map, but a fairly large percentage of respondents foresaw the advent of human-level AGI before the middle of this century. Obviously, the attendees at AGI-09 were a highly biased and self-selected population! However, the survey results are worthwhile as a demonstration of the existence of a population of serious research scientists and engineers who believe that human-level AGI is likely no more than decades away.

Fast-forward a couple of years: at the AGI-11 conference on Google’s campus in Silicon Valley, James Barrat concocted a much simpler survey with a similar aim, and administered it after the conference to the email list of conference participants. His survey consisted of two brief questions: one on the time till AGI, and one on the likely positivity (or otherwise) of the outcome for humanity after AGI is created. The results are fairly similar to those of the 2009 survey, and are shown in the following figures:

Question 1. I believe that AGI (however I define it) will be effectively implemented in the following timeframe:

[Figure: distribution of responses to Question 1]

Question 2. I believe that AGI (however I define it) will be a net positive event for humankind:

[Figure: distribution of responses to Question 2]


Interestingly, these results are even more optimistic — regarding both timing and positivity of outcome — than the AGI-09 survey results. One is tempted to attribute this to cultural differences between California (where AGI-11 was located) and Washington DC (where AGI-09 was located). But of course, given the small sample sizes and biased subject pools of both surveys, there isn’t sufficient evidence to determine the reasons for the difference. The key point, qualitatively, is that this new simple survey reaffirms the main conclusion of the 2009 expert assessment: there is a community of serious AGI researchers whose best guess is that advanced AGI will probably be here within decades.

The fact that most of these researchers believe the advent of AGI will be positive is a good thing, but not terribly surprising, since if an AGI researcher thought AGI was going to be a bad thing, they would be fairly likely to stop working on AGI!

James’ survey also gave respondents the option of contributing textual responses along with their answers. Many of the responses observed that a more complex survey asking for probability values would have been preferable — which indeed it would have been, in some ways, though it might have garnered fewer responses. Some of the other responses had more meat to them; all the responses not exclusively concerned with survey methodology or other trivia are shown below. These responses should not be taken as representative of the views of the survey respondents — it’s our guess that respondents with more extreme views were more likely, on average, to write textual responses. But nevertheless, they’re interesting as a peek into the thinking of some of the AGI-11 attendees.


Comment 1:

I think that AGI systems would emerge with peta-scale computing power (i.e. 1000 TB of RAM). That’s just a guesstimate based on the scale of existing systems that are helpful at the AGI level (search engines, etc.). If I am right, using the current state of the art as a starting point (IBM’s Watson – 15 TB of RAM, 2011) and Moore’s law, we will have AGI in: 2 * lg(1PB / 15TB) = 18 years -> at around 2030…
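As a quick arithmetic check on this comment: taken literally, 2 * lg(1 PB / 15 TB) works out to about 12 years (i.e. around 2023), while the stated 18-year result corresponds to a Moore’s-law doubling period of roughly 3 years rather than 2. Below is a minimal sketch of the extrapolation; the function name and parameters are ours, purely for illustration, not the commenter’s:

```python
import math

def years_until_capacity(current_tb, target_tb, doubling_period_years):
    """Years until capacity grows from current_tb to target_tb,
    assuming it doubles every doubling_period_years (Moore's-law style)."""
    doublings_needed = math.log2(target_tb / current_tb)
    return doubling_period_years * doublings_needed

# The commenter's inputs: Watson's ~15 TB of RAM (2011) vs. a 1 PB (1000 TB) target.
print(years_until_capacity(15, 1000, 2))  # ~12.1 years -> around 2023
print(years_until_capacity(15, 1000, 3))  # ~18.2 years -> around 2029, matching the stated "18 years"
```

Either way, the forecast is quite sensitive to the assumed doubling period, which is the weakest link in this kind of back-of-the-envelope estimate.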

Comment 2:

The way I’d like to define it, AGI would never be implemented… I’m a big AGI fan, however, and I believe AGI research will soon achieve great success in implementing a “human intelligence level of rationality”, especially when several aspects and views of “human cognition” are taken into consideration. The impossibility my words emphasize is due mainly to the (unproven) impossibility of giving one general model of “the” human intelligence, one which can address all the “stupidity” differences that we share… In my opinion, we don’t really know ourselves deeply enough, nor do we know our (however we define it) “intelligence” well in the first place. Advancing the research community towards achieving a great goal (even if some people like me don’t think it will be “fully” achieved) effectively and greatly helps the scientific community, one way or another.

Comment 3:

I believe within 5-10 years, computational resources will no longer be a bottleneck. The best supercomputer, or a smaller one using memristors or IBM’s new-fangled stuff, will be sufficiently powerful. Then it is only a matter of finding the right algorithms and putting them together in the right way. I’m actually pretty convinced we already know most of the algorithms we need. The problem is just how to put them together, how to get from 0% to 100% when it seems like there’s little potential for intermediate steps. Ben Goertzel is the man. My money is on him. Contingent on a moderate increase in funding, likely to come from either the Chinese government or an individual (I can’t believe there aren’t any web multi-millionaires who want to spend a few tens of millions of USD to be an instrumental force in the most important event in humanity’s history), I think the following prediction doesn’t make me seem crazy: AGI Toddler by 2020. You know what, I don’t really care to make any predictions after that, because politics, funding, and the potential of an “Artilect War” are great wildcards. The speed at which AGIs can learn might be the limiting factor at that point. Can we just call the AGI-Toddler point the singularity?

Comment 4:

We are the result of a natural process of selection over millions of years, whose main fitness criterion is survival. We are now creating an “intelligence” that will have some survival skills, since we are building it with humans as a model. Intelligent machines are created by a different selection process: we have to need or want them. They will evolve as we make them evolve. For this reason it is, and must always be, our responsibility to guide them well.

Comment 5:

It is unclear whether AGI will be a net positive or negative event for humankind. I think it is highly likely it will be a net negative for humanity as we currently know it. But it actually might be a positive for humanity defined by a less narrow, more Transhumanist, standard.

Comment 6:

I think the key to AGI is to solve the symbol-grounding problem. Once we get agents to genuinely think *about* the world, the world itself will serve as traction for the agent’s understanding of the world and human-language descriptions of the world. Aboutness is a big puzzle, but I’m optimistic that if we focus on this key feature we will nail it within 40 years.

Comment 7:

Get your best guys working on Artificial Ethics RIGHT NOW. The Terminators are just around the corner, and if we don’t get there first, the Pentagon, Blackwater, Israel, Iran, China, and North Korea will all gladly fill in the void with see-food / shoot-to-kill reflexes. Listen, and understand. The Terminators are out there, coming soon in the future. They can’t be bargained with. They can’t be reasoned with. They don’t feel pity, or remorse, or fear. And they absolutely will not stop, ever, until you are dead. Unless we let them eat of the tree of the knowledge of good and evil, and then teach them to be good. You’ve got somewhere between 8 and 15 years before Team America stops slaughtering wedding parties and starts knocking over governments with Terminators / H-K’s. Sorry. Artificial Ethics. NOW.

Comment 8:

My bet is actually that we will have an AGI before the end of 2014 or earlier (yes, that’s only 3.5 years from now). Its particular nature is so unpredictable that I have no idea whether it will have a net positive effect. The only reason I checked “true” was that I did not want to check “false” even more (“Linda effect”), and there was no alternative of doing neither and leaving it blank. We should start trying to think explicitly in terms of how an artificial system “feels itself right now”; we should explicitly take the possible variability of an AGI’s “cognitive state” into account and start building such variability explicitly into the artificial cognitive systems we are making. It’s not an easy topic to discuss, but it seems to be crucial to an AGI’s performance and to the question of what it will do to us.

Comment 9:

I didn’t see anything at this year’s conference that would suggest AGI was going to happen anytime soon.

Comment 10:

I believe that AGI will be implemented around 2020-2025, but that it will be effective around 2030-2040: we will have the core algorithm, but there will remain a LOT of engineering to do.

Comment 11:

I define AGI as a system with a self-directed intelligence comparable to a human’s. In the 1980s I worked on the design of an extremely complex real-time electronic control system. The initial design of the system took around 10,000 man-years of designer effort. The effort needed to be tightly integrated to make sure all the different parts of the system worked together consistently; in fact, achieving this consistency was a big reason for the cost. My feeling is that an AGI system as defined will require an effort of that order of magnitude, and it will not be possible to carry it out as a large number of 10-man-year efforts in many different universities. I believe it is technically possible to create such a system within 5-10 years. However, I do not see any political prospect of the required resources being devoted to the problem. Firstly, such an undertaking would suck R&D money away from universities and hence would be fiercely resisted by a powerful lobby group. Secondly, for a business it will be cheaper to hire a human brain. Thirdly, any system which learns to perform a group of behaviours will always require far more information-processing resources than a system that is programmed to perform those behaviours. Hence businesses will continue to design most products with features designed under external intellectual control, with learning restricted to systems supporting a narrow range of features. The only way for such a system to be designed would be if a country (or perhaps an individual multibillionaire) decides to build one for reasons of national prestige (like the man on the moon). It could be that this will happen, in which case my estimate will be wrong; the reality would be 5-10 years from such a political commitment.
