How Long Till AGI? — Views of AGI-11 Conference Participants

In 2009, at the AGI-09 Artificial General Intelligence conference in Washington DC, Seth Baum, Ted Goertzel and Ben Goertzel gave the attendees a survey on the theme of “How Long Till AGI?” This was a complex, carefully wrought “expert assessment” style survey, in which we asked the participants to specify statistical confidence intervals corresponding to a variety of assertions. The results were written up in an H+ Magazine article and also in an academic paper in the journal Technological Forecasting and Social Change.

Responses to the survey were all over the map, but a fairly large percentage of respondents foresaw the advent of human-level AGI before the middle of this century. Obviously, the attendees at AGI-09 were a highly biased and self-selected population! However, the survey results are worthwhile as a demonstration of the existence of a population of serious research scientists and engineers who believe that human-level AGI is likely no more than decades away.

Fast-forward a couple years, and at the AGI-11 conference on Google’s campus in Silicon Valley, James Barrat concocted a much simpler survey with a similar aim, and administered it after the conference to the email list of conference participants.  His survey consisted of two brief questions: one on the time till AGI, and one on the likely positivity or otherwise of the outcome for humanity after AGI is created.  The results are fairly similar to the 2009 survey, and are shown in the following figures:

Question 1. I believe that AGI (however I define it) will be effectively implemented in the following timeframe:

[Figure: distribution of responses to Question 1]

Question 2. I believe that AGI (however I define it) will be a net positive event for humankind:

[Figure: distribution of responses to Question 2]

Interestingly, these results are even more optimistic — regarding both timing and positivity of outcome — than the AGI-09 survey results. One is tempted to attribute this to cultural differences between California (where AGI-11 was located) and Washington DC (where AGI-09 was located). But of course, given the small sample sizes and biased subject pools of both surveys, there’s not sufficient evidence to determine the reason for the difference. The key point, qualitatively, is that this new simple survey reaffirms the main conclusion of the 2009 expert assessment: there is a community of serious AGI researchers whose best guess is that advanced AGI will probably be here within decades.

The fact that most of these researchers believe the advent of AGI will be positive is a good thing, but not terribly surprising, since if an AGI researcher thought AGI was going to be a bad thing, they would be fairly likely to stop working on AGI!

James’ survey also gave respondents the option of contributing textual responses along with their answers. Many of the responses observed that a more complex survey asking for probability values would have been preferable — which indeed it would have been, in some ways, though it might have garnered fewer responses. Some of the other responses had more meat to them; all the responses not exclusively concerned with survey methodology or other trivia are shown below. These responses should not be taken as representative of the views of the survey respondents — it’s our guess that respondents with more extreme views may have been more likely, on average, to write textual responses. But nevertheless they’re interesting as a peek into the thinking of some of the AGI-11 attendees.

Comment 1:

I think that AGI systems will emerge at the peta-scale level of computing power (i.e. 1000 TB of RAM). That’s just a guesstimate based on the scale of existing systems that are helpful at the AGI level (search engines, etc.). If I am right, using the current state of the art as a starting point (IBM’s Watson – 15 TB of RAM, 2011) and Moore’s law, we will have AGI in: 2 * lg(1PB / 15TB) = 18 years -> at around 2030…
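
The back-of-the-envelope arithmetic here is easy to check. Below is a minimal sketch in Python that tries both a 2-year and a 3-year Moore’s-law doubling period:

    import math

    # Comment 1's projection: years until RAM scales from IBM Watson's
    # 15 TB (2011) up to peta-scale (1 PB = 1000 TB) under Moore's law.
    start_tb = 15.0      # Watson's RAM in 2011, per the comment
    target_tb = 1000.0   # 1 PB expressed in TB
    doublings = math.log2(target_tb / start_tb)  # about 6.1 doublings

    for period in (2, 3):  # doubling period in years (an assumption)
        years = period * doublings
        print(f"{period}-year doubling: ~{years:.0f} years -> around {2011 + round(years)}")

Taken literally, the formula as written (the factor of 2 suggests a 2-year doubling period) gives roughly 12 years, i.e. around 2023; the quoted figure of 18 years (around 2029-2030) corresponds instead to a 3-year doubling period.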

Comment 2:

The way I’d like to define it, AGI will never be implemented… I’m a big AGI fan, however, and I believe AGI research will soon achieve great success in implementing a “human intelligence level of rationality”, especially when several aspects and views of “human cognition” are taken into consideration. The impossibility my words emphasize is due mainly to the (unproven) impossibility of giving one general model of “the” human intelligence, one which can address all the “stupidity” differences that we share… In my opinion, we don’t really know ourselves deeply enough, nor do we know our (however we define it) “intelligence” well in the first place. Still, advancing the research community toward a great goal (even if some people like me don’t think it will be “fully” achieved) effectively and greatly helps the scientific community, one way or another.

Comment 3:

I believe that within 5-10 years, computational resources will no longer be a bottleneck. The best supercomputer, or a smaller one using memristors or IBM’s new-fangled stuff, will be sufficiently powerful. Then it is only a matter of finding the right algorithms and putting them together in the right way. I’m actually pretty convinced we already know most of the algorithms we need; the problem is just how to put them together: how to get from 0% to 100% when it seems like there’s little potential for intermediate steps. Ben Goertzel is the man; my money is on him. Contingent on a moderate increase in funding, likely to come from either the Chinese government or an individual (I can’t believe there aren’t any web multi-millionaires who want to spend a few tens of millions of USD to be an instrumental force in the most important event in humanity’s history), I think the following prediction doesn’t make me seem crazy: AGI Toddler by 2020. You know what, I don’t really care to make any predictions after that, because politics, funding, and the potential of an “Artilect War” are great wildcards. The speed at which AGIs can learn might be the limiting factor at that point. Can we just call the AGI-Toddler point the singularity?

Comment 4:

We are the result of a natural selection process spanning millions of years, whose main fitness criterion is survival. We are now creating an “intelligence” that will have some survival skills, since we are building it with humans as a model. Intelligent machines are created by a different selection process: we have to need or want them. They will evolve as we make them evolve. For this reason it is now, and must always be, our responsibility to guide them well.

Comment 5:

It is unclear whether AGI will be a net positive or negative event for humankind. I think it is highly likely it will be a net negative for humanity as we currently know it. But it might actually be a positive for humanity defined by a less narrow, more Transhumanist standard.

Comment 6:

I think the key to AGI is to solve the symbol-grounding problem. Once we get agents to genuinely think *about* the world, the world itself will serve as traction for the agent’s understanding of the world and human-language descriptions of the world. Aboutness is a big puzzle, but I’m optimistic that if we focus on this key feature we will nail it within 40 years.

Comment 7:

Get your best guys working on Artificial Ethics RIGHT NOW. The Terminators are just around the corner, and if we don’t get there first, the Pentagon, Blackwater, Israel, Iran, China, and North Korea will all gladly fill the void with see-food/shoot-to-kill reflexes. Listen, and understand. The terminators are out there, coming soon. They can’t be bargained with. They can’t be reasoned with. They don’t feel pity, or remorse, or fear. And they absolutely will not stop, ever, until you are dead… unless we let them eat of the tree of the knowledge of good and evil, and then teach them to be good. You’ve got somewhere between 8 and 15 years before Team America stops slaughtering wedding parties and starts knocking over governments with Terminators/H-Ks. Sorry. Artificial Ethics. NOW.

Comment 8:

My bet is actually that we will have an AGI by the end of 2014, or earlier (yes, that’s only 3.5 years from now). Its particular nature is so unpredictable that I have no idea whether it will have a net positive effect. The only reason I checked “true” was that I did not want to check “false” even more (“Linda effect”), and there was no alternative of doing neither and leaving it blank. We should start trying to think explicitly in terms of how an artificial system “feels itself right now”; we should explicitly take the possible variability of an AGI’s “cognitive state” into account and start building such variability explicitly into the artificial cognitive systems we are making. It’s not an easy topic to discuss, but it seems crucial both to an AGI’s performance and to the question of what it will do to us.

Comment 9:

I didn’t see anything at this year’s conference that would suggest AGI was going to happen anytime soon.

Comment 10:

I believe that AGI will be implemented around 2020-2025 but that it will be effective around 2030-2040: We will have the core algorithm but there will remain a LOT of engineering to do.

Comment 11:

I define AGI as a system with a self-directed intelligence comparable with a human’s. In the 1980s I worked on the design of an extremely complex real-time electronic control system. The initial design of the system took around 10,000 man-years of designer effort. The effort needed to be tightly integrated to make sure all the different parts of the system worked together consistently; in fact, achieving this consistency was a big reason for the cost.

My feeling is that an AGI system as defined will require an effort of that order of magnitude, and it will not be possible to carry it out through large numbers of 10-man-year efforts at many different universities. I believe it is technically possible to create such a system within 5-10 years. However, I do not see any political prospect of the required resources being devoted to the problem. Firstly, such an undertaking would suck R&D money away from universities and hence would be fiercely resisted by a powerful lobby group. Secondly, for a business it will be cheaper to hire a human brain. Thirdly, any system which learns to perform a group of behaviours will always require far more information-processing resources than a system that is programmed to perform those behaviours; hence businesses will continue to design most products with features designed under external intellectual control, with learning restricted to systems supporting a narrow range of features.

The only way for such a system to be designed would be if a country (or perhaps an individual multibillionaire) decides to build one for reasons of national prestige (like the man on the moon). It could be that this will happen, in which case my estimate will be wrong; the reality would be 5-10 years from such a political commitment.

9 Responses

  1. Great survey, thank you Ben.

    I’m having to stick to a date later than 2023, per my most recent paper. However, I think, realistically, it would take until around 2030 for any serious application of AI technology to emerge. Maybe by then it would feel like the Internet in 1995, when we felt it was truly taking off, with ordinary university or home users ably taking advantage of the nascent technology.

    About the subjectivity of the survey: it is unavoidable that the researchers are optimistic about their own research. To be honest, however, we’re working on toy systems for the time being. Nothing to worry about for a few decades, really.

    About the comment on artificial ethics: I am not entirely sure that military robots will be equipped with any “ethical” knowledge, even if they are constructed so as to be autonomous. Ethics and military sound a bit like chastity and porn. What you would expect of military robots is that they try to distinguish military targets from non-military ones; yet I think the terminator robots you have in mind would not be geniuses that can work out general relativity, but rather ruthlessly efficient killing machines with just enough code to move about and shoot things. That is perfectly doable with today’s narrow AI technology, so why would anyone even consider AGI for the job? I think you’re making an unwarranted assumption there.

  2. Doc Freezy says:

    Optimistically, 2035.

    Realistically, 2050.

    Pessimistically, 2090.

  3. Sarah says:

    Hi Ben,
    It’s interesting to see these statistics. Given sufficient funding (how much would this be?), when do you think you could create an AGI by? And when do you think this would become superintelligence?

  4. Waldo Hitcher says:

    As usual this is the wrong question.

    Do not ask when it will be, as if it were a fixed point. Ask when it must be. The future has nothing to do with the past. We just think it does.

    On a geological timescale, AGI any time in the next hundred million years is quick. On my timescale (the only important one), ten years is too long.

    There is no time component in design, only ideas. Given the right ideas, all developments are possible today. This has not happened previously, but it is logically feasible, particularly in a quantum-mechanical computational environment.

    The delay is caused by insufficient mental effort (less than 1 in a million people are employed at the forefront of AGI design) and by poor communication of developments (it needs to be real-time and co-ordinated, i.e. by machine).

    The crucial thing, pre machine intelligence, is to share the stepping stones, split up the tasks, and target only the earliest point where machines can take over. That point is not AGI; it is the path to AGI: the point where machines have the ability to start the divergent process of idea generation and the convergent process of eliminating slower options.

    This is such an extensive task that only machines can design AGI in a timely manner. It will be equivalent to the genome project, with ideas cut up into billions of pieces and rearranged into coherent options by millions of candidate algorithms, the most successful of which are genetically synthesised together and retried.

    The design task for humans is just to create the initial machine process, in order for machines to eventually design an AGI. It will be a combination of a multitude of concepts and will have both hardware and software outcomes. AGI has proven to be beyond the scaling capability of human minds, at least on timescales anyone reading this should care about.

    Make it so.

  5. Stein says:

    “AGI will be implemented around 2020-2025, but it will be effective around 2030-2040.”

    That sounds about right to me. I hope the earlier predictions are correct, but I say:

    “2025 for technical existence of AGI, but 2040 for cultural impact.”

    • Homer500 says:

      I agree that human-level AGI will be produced around 2025. But from that point it will only require a short time to have a worldwide cultural impact. Remember, once a single artificial scientist is possible, a million artificial scientists are just around the bend. By 2030, the planet will be transformed.

  6. Jake Bartolini says:

    This is far too biased to be taken seriously.

  7. Personally, I wouldn’t assign more than 20% probability mass to AGI before 2030.

  1. November 10, 2014

    […] Regarding expert predictions of when artificial general intelligence will be realized, several web pages [36,37] are useful references. A roadmap toward realizing artificial general intelligence is discussed in AI Magazine Vol. 33, No. 1 [38]; its Japanese translation appeared in the Journal of the Japanese Society for Artificial Intelligence, Vol. 29, No. 3. The OpenCog project mentioned above has also produced a roadmap toward artificial general intelligence [39]. […]
