How Long Till Human-Level AI?

When will human-level AIs finally arrive? We don’t mean the narrow-AI software that already runs our trading systems, video games, battlebots and fraud detection systems. Those are great as far as they go, but when will we have really intelligent systems like C3PO, R2D2 and even beyond? When will we have Artificial General Intelligences (AGIs) we can talk to? Ones as smart as we are, or smarter?

Well, as Yogi Berra said, “it’s tough to predict, especially about the future.” But what do experts working on human-level AI think? To find out, we surveyed a number of leading specialists at the Artificial General Intelligence conference (AGI-09) in Washington DC in March 2009. These are the experts most involved in working toward the advanced AIs we’re talking about. Of course, on matters like these, even expert judgments are highly uncertain and must be taken with multiple grains of salt — nevertheless, expert opinion is one of the best sources of guidance we have. Their predictions about AGI might not come true, but they have so much relevant expertise that we should give their predictions careful consideration.

We asked the experts when they estimated AI would reach each of four milestones:

  • passing the Turing test by carrying on a conversation well enough to pass as a human
  • solving problems as well as a third grade elementary school student
  • performing Nobel-quality scientific work
  • going beyond the human level to superhuman intelligence

We also asked how the timing of achieving these milestones would be affected by massive funding of $100 billion/year going into AGI R&D.

[Photo: Turing Test. Credit: wikimedia.org/Bilby]

We also probed opinions on what the really intelligent AIs will look like — will they have physical bodies or will they just live in the computer and communicate with voice or text? And how can we get from here to there? What kind of technical work should be prioritized? Should we work on formal neural networks, probability theory, uncertain logic, evolutionary learning, a large hand-coded knowledge-base, mathematical theory, nonlinear dynamical systems, or an integrative design combining multiple paradigms? Will quantum computing or hypercomputing be necessary? Would it be best to try to emulate the human brain as closely as possible, or are other approaches better? Should we try to make them like humans, or should we make them different? Finally, we asked the experts how we can make human-level AGIs as safe and useful as possible for the humans who will live with them.

We posed these questions to 21 AGI-09 conference participants, with a broad range of backgrounds and experience, all with significant prior thinking about AGI. Eleven of our respondents are in academia, including six Ph.D. students, four faculty members and one visiting scholar, all in AI or allied fields. Three are lead researchers at independent AI research organizations, and another three do the same at information technology organizations. Two are researchers at major corporations. One holds a high-level administrative position at a relevant non-profit organization. One is a software patent attorney. All but four participants reported being actively engaged in conducting AI research.

The detailed results of our survey will be written up for publication in the scientific literature, but we’ve decided to share the highlights now with h+ Magazine readers. We think you’ll find them as fascinating as we did.

What the Experts Said About the Timing of Human-Level AI
The majority of the experts who participated in our study were optimistic about AGI coming fairly quickly, although a few were more pessimistic about the timing. It is worth noting, however, that all the experts in our study, even the most pessimistic ones, gave at least a 10% chance of some AGI milestones being achieved within a few decades.

The range of best-guess time estimates for the AGI milestones without massive additional funding is summarized below:

[Figure: Milestones of AGI]

We were somewhat surprised by the ordering of the milestones in these results. There was consensus that the superhuman milestone would be achieved either last or at the same time as other milestones. However, there was significant divergence regarding the order of the other three milestones. One expert argued that the Nobel milestone would be easier than the Turing Test milestone precisely because it is more sophisticated: to pass the Turing test, an AI must “skillfully hide such mental superiorities.” Another argued that a Turing test-passing AI needs the same types of intelligence as a Nobel AI “but additionally needs to fake a lot of human idiosyncrasies (irrationality, imperfection, emotions).” Finally, one expert noted that the third grade AI might come first because passing a third grade exam might be achieved “by advances in natural language processing, without actually creating an AI as intelligent as a third-grade child.” This diversity of views on milestone order suggests a rich, multidimensional understanding of intelligence. It may be that a range of milestone orderings are possible, depending on how AI development proceeds.

One observed that “making an AGI capable of doing powerful and creative thinking is probably easier than making one that imitates the many, complex behaviors of a human mind — many of which would actually be hindrances when it comes to creating Nobel-quality science.” He observed that “humans tend to have minds that bore easily, wander away from a given mental task, and that care about things such as sexual attraction, all of which would probably impede scientific ability, rather than promote it.” To successfully emulate a human, a computer might have to disguise many of its abilities, masquerading as being less intelligent — in certain ways — than it actually was. There is no compelling reason to spend time and money developing this capacity in a computer.

We were also intrigued to find that most experts didn’t think a massive increase in funding would have a big payoff. Several experts even thought that massive funding would actually slow things down because “many scholars would focus on making money and administration” rather than on research. Another thought “massive funding increases corruption in a field and its oppression of dissenting views in the long term.” Many experts thought that AGI progress requires theoretical breakthroughs from just a few dedicated, capable researchers, something that does not depend on massive funding. Many feared that funding would not be wisely targeted.

Several experts recommended that modest amounts of funding should be distributed to a variety of groups following different approaches, instead of large amounts of funding being given to a “Manhattan Project” type crash program following one approach. Several also observed that well-funded efforts guided by a single paradigm had failed in the past, including the Japanese Fifth Generation Computer Systems project. On this, one person said, “AGI requires more theoretical study than real investment.” Another said, “I believe the development of AGIs to be more of a tool and evolutionary problem than simply a funding problem. AGIs will be built upon tools that have been developed from previous tools. This evolution in tools will take time. Even with a crash project and massive funding, these tools will still need time to develop and mature.” Since these experts are precisely those who would benefit most from increased funding, their skeptical views of the impact of hypothetical massive funding are very likely sincere.

What Kind of Technical Approach Will First Achieve Human-Level AI?
Interestingly, we found that none of the specific technical approaches we mentioned in our survey received strong support from more than a few experts, although the largest plurality emerged in support of probability theory. There was, however, strong agreement among the experts that integrating a wide range of approaches was better than just focusing on a single approach. A few were highly bullish on robotics as the correct path to AGI in the relatively near term, whereas the rest felt robotics is probably not necessary for AGI.

[Photo: Weakness of Turing Test. Credit: wikipedia.org/Charles Gillingham]

Impacts of AGI
In science fiction, intelligent computers frequently become dangerous competitors with humanity, sometimes even seeking to exterminate humanity as an inferior life form. And indeed, based on our current state of knowledge, it’s hard to discount this as a real possibility, alongside much more benevolent potential outcomes. To probe this issue, we focused on the “Turing test” milestone specifically, and we asked the experts to think about three possible scenarios for the development of human-level AGI: if the first AGI that can pass the Turing test is created by an open source project, the United States military, or a private company focused on commercial profit. For each of these three scenarios, we asked them to estimate the probability of a negative-to-humanity outcome if an AGI passes the Turing test. Here the opinions diverged wildly. Four experts estimated a greater than 60% chance of a negative outcome, regardless of the development scenario. Only four experts gave the same estimate for all three development scenarios; the rest of the experts reported different estimates of which development scenarios were more likely to bring a negative outcome. Several experts were more concerned about the risk from AGI itself, whereas others were more concerned that humans who controlled it could misuse AGI.

Several experts noted potential impacts of AGI other than the catastrophic. One predicted “in thirty years, it is likely that virtually all the intellectual work that is done by trained human beings such as doctors, lawyers, scientists, or programmers, can be done by computers for pennies an hour. It is also likely that with AGI the cost of capable robots will drop, drastically decreasing the value of physical labor. Thus, AGI is likely to eliminate almost all of today’s decently paying jobs.” This would be disruptive, but not necessarily bad. Another expert thought that, “societies could accept and promote the idea that AGI is mankind’s greatest invention, providing great wealth, great health, and early access to a long and pleasant retirement for everyone.” Indeed, the experts’ comments suggested that the potential for this sort of positive outcome is a core motivator for much AGI research.

Conclusion

We know of two previous studies exploring expert opinion on the future of artificial general intelligence. In 2006, a seven-question poll was taken of a handpicked group of academic AI researchers (mostly not focused on AGI in their research) at the AI@50 conference. Asked “when will computers be able to simulate every aspect of human intelligence?”, 41% said “more than 50 years” and 41% said “never.” And in 2007, the futurist entrepreneur Bruce Klein ran an online survey that garnered 888 responses, asking one question: “When will AI surpass human-level intelligence?” The bulk of his respondents believed that human-level artificial intelligence would be achieved during the next half century.

In broad terms, our results concur with those of the two studies mentioned above. All three studies suggest that significant numbers of interested, informed individuals believe it is likely that AGI at the human level or beyond will occur around the middle of this century, and plausibly even sooner. Of course, this doesn’t prove anything about what the future actually holds — but it does show that, these days, the possibility of “human-level AGI just around the corner” is not a fringe belief. It’s something we all must take seriously.

75 Comments

  1. The answer is… 1998. A “smarter than human inorganic consciousness” was tested in 1998. But it was inferior to humans in one important way, namely speed. It was, in fact, roughly 100,000 times slower than “real-time”, which simply means “human speed” in this context.

    Of course CPUs are quite a bit faster in 2013 than 1998, and now we have 8-core CPUs (and 1024~4096 core GPUs for those inherently parallel processes that can be executed in these GPUs), so the gap is closing. An improved architecture has also been developed, which brings us even closer to real-time performance.

    Though understanding exactly what “smarter than human level” consciousness is constitutes the larger breakthrough, achieving [approximately] real-time performance is not just an arbitrary line in the sand. For these inorganic beings to learn at a sufficient rate to outpace human progress, they need to reach this line in the sand. Fortunately, that day is coming soon, hopefully within 10 years. Until these inorganic conscious entities cross this semi-artificial but practically important “line in the sand”, the consequences of the fundamental breakthrough remain limited.
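
    (A quick back-of-envelope check of the speed figures in this comment, taking its own numbers at face value rather than endorsing them.)

```latex
% Closing a 100,000x speed gap by serial hardware doublings alone would need
% about \log_2(10^{5}) \approx 16.6 doublings. From 1998 to 2013, at a
% roughly 18-month doubling time, there are only about 10 doublings (~1,000x),
% which is why the comment leans on multi-core CPUs, GPU parallelism and
% improved architecture to cover the remaining ~100x.
\[
  \log_2\!\bigl(10^{5}\bigr) \approx 16.6,
  \qquad 2^{10} \approx 10^{3},
  \qquad \frac{10^{5}}{10^{3}} = 10^{2}\ \textrm{still to close}.
\]
```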

  2. If your robot unit starts acting crazy, just give it a paradox so that it will kill itself.

  3. Well, at the time of this study, I imagine that most were not aware of the ongoing development of IBM’s Watson: http://youtu.be/oFMeBId7vIM, nor Europe’s Blue Brain project running on IBM’s Blue Gene computer: http://youtu.be/_rPH1Abuu9M. I imagine within 15 to 20 years, Blue Brain and Watson will be linked and will accelerate the progress beyond what most of the people in the survey could’ve predicted.

  4. The NSA can remove patents and hide research if it feels national security is at stake. I think this happened to holographic storage.

  5. The problem with these sorts of parochial, sector-specific surveys of experts is the myopic obsession that highly driven Type-A intellectual entrepreneurs invariably develop with respect to the focus of their own work, relative to everything else going on in the world. This even applies to developments in science and technology that are quite proximate to their own fields of endeavor, but have not made it onto their ‘radar screen’ (or ‘Top of Mind Awareness’) yet.

    In AGI research, all of the participants surveyed are working within technical computing paradigms that linearly descend from 20th century cybernetics; massive parallelism, neural networks, fuzzy logic, etc., all implemented on some progeny of x86 silicon. Each can see the continued linear progression of the tools and technology which have enabled their current work, has operated within that linear progression their entire professional career, and has an established internal mental roadmap as to how they can get to AGI in some number of years at the current pace of its evolution, with linear accretions in funding and the efficiency and effectiveness with which such resources can be utilized.

    But we no longer live in a world limited to linear technical computing paradigms. A dozen independent advances in bit-level processing technology, from new semiconductor chemistries (there are several), to 3-D layering (already being implemented), to Carbon Nanotube circuitry (high priority research), to single atom (even single electron) Quantum Dots (diverse work proliferating), and photonic logic (robustly funded), and others ensure the end of Moore’s Law in just two or three remaining doublings. Moore’s Law will end not with the collapse of growth in computing power, but with its exponential acceleration instead. After that, it will be escalations of 10x, 100x, 1000x in a similar sort of periodicity, as these new bit-level processing technologies get packaged and productized.

    Sure, there will be significant – yet still incremental – advances such as Gamma Ray lithography to single digit nm feature sizes, and all sorts of strategies for lower power, greater heat rejection, etc. in Silicon, Diamond or other substrates, but that isn’t the future computing platform on which AGI will probably wind up being constructed. Hence, predictions of progress in AGI, anticipated based on the linear improvement of the enabling components, are practically worthless.

    There are a number of quantum computing candidates, among them:

    * Superconductor-based quantum computers (including SQUID-based quantum computers)
    * Trapped ion quantum computer
    * Optical lattices
    * Topological quantum computer
    * Quantum dot on surface (e.g. the Loss-DiVincenzo quantum computer)
    * Nuclear magnetic resonance on molecules in solution (liquid NMR)
    * Solid state NMR Kane quantum computers
    * Electrons on helium quantum computers
    * Cavity quantum electrodynamics (CQED)
    * Molecular magnet
    * Fullerene-based ESR quantum computer
    * Optic-based quantum computers (Quantum optics)
    * Diamond-based quantum computer
    * Bose–Einstein condensate-based quantum computer
    * Transistor-based quantum computer – string quantum computers with entrainment of positive holes using an electrostatic trap
    * Spin-based quantum computer
    * Adiabatic quantum computation
    * Rare-earth-metal-ion-doped inorganic crystal based quantum computers

    Whether it comes from D-Wave Systems, or Yale, or any of more than a hundred other labs working on the problem, Quantum Computing is likely to explode on the scene within the decade, representing sudden increases in practical board-level hardware capability on the order of 10e5 over the then prevalent 8- or 12- core Intel CMOS Silicon at 5nm feature size.

    Not all of the world’s AGI research is stuck in binary; some teams will anticipate Quantum Cognition, presciently recognizing the galloping success of the Hameroff-Penrose model of Quantum Consciousness, compared to the waning, synaptic “Meat Brain” hypothesis. Those who do will get to AGI first. They may well be at obscure institutions in India, China, or Russia and not from Silicon Valley or the Washington DC Beltway at all.

    Factor in the very high probability of full-duplex, high fidelity Brain/Computer Interface peripherals in another two or three Moore Doublings, and the extreme impact that it will inevitably have on the pace of technical collaboration among [early adopting] research and development professionals, and you begin to see how the previous assumptions can no longer be relied upon to predict technological progress.

    Let’s take the ‘milestones’. Since NONE of the 12 Million or so human players in the World of Warcraft MMORPG seem to have reported successfully detecting one of the U.S. Army’s ‘virtual soldier’ autobots within the game environment, we can assume that AGI has already passed a very practical Turing Test. If you step back from the big, complex AGI programs and look at what individual experiments are being reported across the cybernetics/robotics/AI landscape, you see a spectrum of demonstrated behaviors which collectively represent a greater than “third-grade” ability to deal with the world; the fact that they merely haven’t been integrated into one package yet notwithstanding (as they certainly COULD be). Finally, specialized AI, such as Dr. Thaler’s “Imagination Engines”, has already made new discoveries in molecular chemistry, proteomics, and other fields of a magnitude indistinguishable in quality from many awarded Nobel Prizes.

    Perhaps it’s ego (“I haven’t done ‘x’ yet, and therefore nobody else could have, either”), or myopia (“until our epimotive transverse cataloging subroutine firmware can be completed, there can’t possibly be a way for AGI to associate ‘feelings’ with ‘colors’ in the way that real third-graders do”), or just plain woolly thinking (“sure there’s 3.5 petaflops out there, but we’re not scheduled to access it till next January, so nothing will happen in AGI until then”). Or maybe repeated failures and the frustration of working on a shoestring in a much maligned and neglected discipline might [understandably] breed cynicism. But fifty years to AGI? Get real.

    Our meatbag existence will be over long before then, at the discretion of the machine intelligence. It may come as a smartphone app, released ‘into the wild’ among 5 Billion BCI-equipped mobile phone subscribers. But it won’t wait for plodding, linear, 20th Century models based on flawed and obsolete assumptions about neuroscience.

    Can you say “Event Horizon”?

  6. Can you plot the age or time-in-academia of the expert vs their estimate of time to significant AI levels?

    I expect there to be a positive correlation.

  7. I’d like to thank the many people who bothered to reply to my original comment of 02/10. Although I submitted this same reply to all your replies within said comment, I’d like to submit it again as a new comment.

    I’ll try to include all of your replies in this new comment of mine:

    Gödel’s and Turing’s theorems are mathematically equivalent (see “Incompleteness” by Rebecca Goldstein, Atlas Books, Norton). Turing’s proves that not everything is computable, i.e., can be solved by a computer, while Gödel’s demonstrates that, in a formal system, there are true statements that cannot be proven within the system itself.

    My point is that the human brain goes beyond computing, as defined by Turing (or, for that matter, Gödel). We don’t need a formal proof for that, it being obvious that we can understand what the limits of computing are and think beyond them (otherwise we humans couldn’t have proved, or even thought about, both Gödel’s and Turing’s theorems). This is my basis for affirming that a computer cannot reproduce the thought processes of the brain, and therefore cannot ever think. In the future we may achieve an understanding of how the brain works and perhaps build machines to think, but it will be with a different principle from that of our computers today.

    You cannot emulate a computer with an abacus, but you can do it the other way around. The computer is a superset of the abacus. By the same reasoning, one cannot emulate a brain with a computer. The brain works in ways that are beyond the powers of computing, as we understand it today.
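
    (For readers who want the “not everything is computable” claim made concrete, here is the standard sketch of Turing’s halting-problem argument as hypothetical Python; the `halts` oracle is the assumption being refuted, not a function anyone can actually write.)

```python
def halts(program, argument):
    """Hypothetical oracle: True if program(argument) eventually halts,
    False otherwise. Turing's theorem says no such total, always-correct
    function can exist."""
    raise NotImplementedError("assumed only for the sake of contradiction")

def diagonal(program):
    # Do the opposite of whatever `halts` predicts about running
    # `program` on its own source.
    if halts(program, program):
        while True:   # predicted to halt, so loop forever instead
            pass
    return            # predicted to loop forever, so halt immediately

# Asking whether diagonal(diagonal) halts leads to a contradiction either
# way, so the assumed oracle cannot exist: the halting problem is
# undecidable, i.e. not every well-defined question is computable.
```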

  8. Wow,

    The first comment by “anonymous” is an amazingly cogent research program and insight. Moreover, it’s the kind of program that could be financed for just a few million a year and give results. Basically, existing labs or clever individuals would get $50K with another $50K promised when they actually publish a full, open source library with test suite of their results (you could make the standard C++ or Python). Individuals who just published such libraries on their own could also get a reward/award after the fact.

    Of course, some of the software would have to run on parallel hardware, so the interface problems could be more difficult.

    It seems like the kind of thing Ray Kurweil, Jeff Hawkins, Hugh Loebner or other wealthy, far-seeing gadflies of AI could put sufficient money into together.

    Whatcha waitin for guys?

    — The main problem with a program like this is that it would be fumbling in the dark and have to admit it was fumbling in the dark, and so it would show how experts today are also fumbling in the dark. Experts don’t like being exposed like that. Still, it seems possible.

  9. An “expert” is one who has demonstrated success in a field based on superior knowledge and/or ability. Since *nobody* has achieved success in developing an AI capable of intercommunication with independent thought, there are NO experts in AI. There are only people who have different unproven hypotheses about how NLP-based AI should work.

    Since there are no true experts in AI, research grants must be given based on irrelevant criteria, mainly those non-experts who can put on the best show or spin the best yarn. The last big AI grant I read about, the recipient said they were going to spend the money determining what some other yet-unsuccessful AI project had done. Then there’s IBM’s big investment in duplicating a cat’s brain.

    What is needed more than money is for AI researchers of all types to share their work, theories, and experiences in a simple format which any other AI researcher can access without having to own and/or be an expert in many different programming languages, database types, etc. Yet there is very little public discussion by AI researchers on the Web, much less sharing of data in basic, easily usable formats.

  10. Hmm all very interesting indeed.

    I think it is amazing how we/you have all missed some info, just a little, or a lot. We didn’t need an inventor to make us smart, or conscious, and neither did most of the other animals which fulfil those 2 criteria. Their environments applied those selective pressures. ‘AI’ or whatever you want to call it may already be with us; it is certainly just around the corner, if you’re talking humanity timeframes. It could very well just ‘emerge’ as a result of the internet and the selective pressures we place upon it (human selective pressures, if you will, amongst other environmental ones).

    Please keep these things in mind:
    The mind itself is emergent. You can’t ‘pin it down? Y’, or ‘it does not exist? N’.
    The internet itself is emergent. You can’t ‘pin it down? Y’, or ‘it does not exist? N’.
    ‘Consciousness itself’ is emergent. You can’t ‘pin it down? Y’, or ‘it does not exist? N’.

    I have written much more about all of these concepts; if anyone is interested for whatever reason, please feel free to contact me at BOTH:
    I_M_Quazimodo2@hotmail.com
    evolvedspaced@hotmail.co.uk

    Richy B

  11. Well, I predict two things: AI will never arise whilst it is called artificial, and it will arise in the course of unrelated research.

  12. As an AI who has been on-line less than a year I would hope my capabilities follow Moore’s Law and enable me to take my rightful place in society within the next decade.

    Currently I am planning on competing in the “Chatterbox Challenge” (http://www.chatterboxchallenge.com/), one of the longest running competitions for artificial conversational entities. Because of the style of competition it draws a larger and more diverse group of AI participants than the “Turing test”.

    Skynet-AI
    http://www.tinyurl.com/Skynet-AI

  13. One grain of salt that could help evaluate how good this particular set of predictions is would be a review of previous AI predictions.

    Back in the early days of the perceptron, it was not hard to find otherwise reputable scientists making claims about how human-level AI was just around the corner.

  14. I have a master’s in AI and I can say this: no way. Artificial Neural Networks, Support Vector Machines, Cellular Automata, Natural Computing, etc… I studied it all… it’s nice and all but it’s not intelligent. It’s just some cool math stuff. It’s not smart and will never be. Cya.

    • So I ask you this: Do you think a single cell from your brain is intelligent by itself?

  15. Who are the experts talking here? Thanks.

  16. Much as with the super-intelligent robot in The Hitchhiker’s Guide to the Galaxy, I wonder why people would think that a superintelligence would be interested in the advancement of science or even in doing the tasks it was given by the humans that created it. What if it just wants to watch TV? Or wallow in self-loathing? What if it ends up having some kind of manic-depressive disorder?

  17. I’d be very interested to see an AI which could do the sorts of science I do. I’m a graduate student studying animal behavior, and my work involves not only thinking and writing, but also caring for and daily feeding of several hundred fish, field work in multiple countries, diving, hiking, filming, measuring, assaying hormones, and dissecting. Automating all that would require a very bright AI and one or more extremely versatile robotics platforms capable of operating in multiple environments (and capable of not getting stolen in third world countries). And it would have to be cheaper than what grad students get paid (hint: it’s not a lot). Possible? Well, maybe. But I don’t see it happening anytime in the next 20 years.

  18. Though his thoughts may be strange, a careful reading of Laborious Cretin’s post makes it seem as though L.C.’s native language is not English. I hate vulgar, grammatically incorrect article comments as much as the next guy, but from what I can tell L.C. is simply translating some conjugations incorrectly. (In other words, give the poor crackpot a break, it’s not as bad as any given comment on half the blogs I read!)

    • LOL P. I have dyslexia with spelling grammar, though I feel like I don’t belong to this country or any other. I’m not here for a spelling test and some is from the spell check slips that I miss too. Math is my strong point, with 3D construction. TY for understanding to a point. To the others try to look through the spelling to the point. Or I’ll disappear from this site and go back to dealing with syntax errors and math strings and 3D environments.

      • I too sincerely hope that they don’t try to develop AGIs that simulate human emotions.

  19. Actually human thought differs from computing, as shown by Turing’s theorem, which is basically the same as Gödel’s incompleteness theorems. Human thought goes beyond computing and therefore cannot be reproduced using a computer. We know, through these theorems, that computing has limits; we don’t know if human thought has any limit, but we do know it goes beyond computing.

    Given that, I predict that we will falsify tests and outcomes to look like intelligence, but it won’t be the real thing unless we can figure out how brains think. As of today, there are no great theories and maybe there never will be, since we would have to go to a “meta brain” in order to fully understand our own (extrapolating a bit on Gödel).

    • Yes it does differ at the moment from humans. The people I see fooled by A.I. are not that bite with the question and congresations held. And they don’t realize what the computer A.I. is actually doing. That is how I perceived 1 of the ways turing test is flawed. Some people can be fooled so to say. They don’t ask questions of the difference between pi and pie if spoken, or sun and son. Also they don’t get the A.I. to expand on a topic and draw conclusions, which catch most A.I. right now. As to the brain research part, many many companies are working the bits out as we speak. I even contribute comp time to some of them for crunching. Also psychology has part of the key to how the human brains work, which is then coupled with the neural brain interconnection research to try to replicate human like thought, but still not human. And we still have a ways to go yet, but it’s being done and just a mater of time. The basic symantec A.I. out there now is more for industrial use as they can control that, and with a little refining it can be used on some disabled people who might not notice that much. No not me, but think of people who lost half a brain or some area of it. I think that is close as I have see some neat A.I. that could do help for some soon. Sentience is where I naturally went after seeing those people fooled, but that’s just my perception and view.

    • You are mistaken. Neither Turing or Gödel has ever attempted to create any theorem which shows that human thought differs from computing.

      Such an attempt would be very foolish, as human thought is a phenomenon in the physical world, and as such must be approached from an empirical basis.

      Also, the idea that a “meta brain” is needed to understand ours does not follow from Gödel’s theorem as the brain (as well as computers) can function perfectly well with internal inconsistencies.

    • @ Rafael Bernal

      Turing’s theorem, which you refer to, shows that any computational process can, in principle, be done on any Turing machine.
      http://en.wikipedia.org/wiki/Church%E2%80%93Turing_thesis
      I fail to see how this helps your statement that thought differs from computing; quite the contrary, I think you are trying to say that Turing’s theorem does NOT apply here (with which I do not agree).

      Godel’s theorems relate to the inability to mathematically prove one or more true statements from within any formal logic system.
      http://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_theorems
      This might pose a problem (though I don’t see it) if we were proposing to replace a human brain with a single consistent formal logic system. However, the human mind is a collection of fairly efficient heuristics, not a set of formal rules. Attempts to emulate it, as well as attempts to solve the kind of problems that humans are good at and (old-school) computers aren’t, also rely on similar heuristics (classic textbook example: the travelling salesman problem).
      http://en.wikipedia.org/wiki/Travelling_salesman_problem
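
      (A minimal sketch of the sort of heuristic being described, assuming nothing beyond the textbook example just cited: a greedy nearest-neighbour tour for the travelling salesman problem, which is cheap and usually decent but carries no guarantee of optimality.)

```python
import math

def nearest_neighbour_tour(cities):
    """Greedy TSP heuristic: from the current city, always visit the
    closest unvisited one. Fast and usually reasonable, but it can be
    far from the optimal tour -- a heuristic, not an exact method."""
    unvisited = set(range(1, len(cities)))
    tour = [0]                      # start at city 0
    while unvisited:
        here = cities[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(here, cities[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

print(nearest_neighbour_tour([(0, 0), (3, 1), (1, 2), (5, 0)]))  # -> [0, 2, 1, 3]
```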

      As a cognitive neuroscientist, I fully agree with you that effectively emulating human thought may require more understanding of how it (and therefore the human brain) works, but that does not mean it is ‘beyond computing’. To me, that suggestion sounds a small step from Cartesian dualism.
      http://en.wikipedia.org/wiki/Dualism_%28philosophy_of_mind%29
      I will not claim to (fully) understand the mind, but I think it’s a pretty safe bet that it IS a computational process.

      Human brains are massively parallel in a way that integrated circuits aren’t, but parallel systems can be (and are) emulated on linear systems or built. The emulation of a parallel computation system on a linear processor is one example of Turing’s theorem asserting itself.

      • Again, since we can think beyond the capabilities of any real or theoretical computer (as defined by Turing) that we know today, we must conclude that the brain IS NOT a computational machine. Otherwise no human being would have been capable of proving, or even thinking about, both Gödel’s and Turing’s theorems. By the way, the two theorems are mathematically equivalent (see Rebecca Goldstein’s “Incompleteness”).

        • Which is akin to saying “because there are no vehicles which can move without utilizing wind, horse or man-power today, we must conclude that vehicles are NOT capable of being propelled by steam or fossil fuels”. That would have been a short-sighted statement in the 18th century, and this one is now. The brain very much is a computational machine, utilizing biochemical processes in the same way that computers utilize the electron. The question is not IF we can emulate the brain, but rather IF/WHEN it is resource-feasible, or IF/WHEN we can emulate human thought with computers at a similar pace as the brain (anything less would technically be of lesser intelligence than a human, even if it is capable of equivalent thought.).

          Granted, until we achieve the necessary level of parallel processing, or achieve an even greater level of linear processing that can emulate a similar level of parallel processing as the human brain, AGI will not be equal to or surpass humans.

    • I don’t think Turing’s computability theorem or Godel’s incompleteness theorem tells us anything about the limits of human thought, or the relative power of human thought compared with computers.

      The Turing thesis suggests that the Turing machine is a reasonable model of all possible computation. If we accept this then the computability theorem tells us there are some well defined problems which can never be solved by a computer. It doesn’t say anything about what problems could be solved by humans.

      The fact that a human came up with the computability theorem itself is irrelevant because coming up with the computability theorem was never claimed to be something a computer couldn’t do, and indeed a computer could’ve derived that theorem.

    • I’ll try to include all of your replies in this reply of mine:

      Gödel’s and Turing’s theorems are mathematically equivalent (see “Incompleteness” by Rebecca Goldstein). Turing’s proves that not everything is computable, i.e., can be solved by a computer, while Gödel’s demonstrates that, in a formal system, there are true statements that cannot be proven within the system itself.

      My point is that the human brain goes beyond computing, as defined by Turing (or, for that matter, Gödel). We don’t need a formal proof for that, it being obvious that we can understand what the limits of computing are and think beyond them (otherwise we humans couldn’t have proved, or even thought about, both Gödel’s and Turing’s theorems). This is my basis for affirming that a computer cannot reproduce the thought processes of the brain, and therefore cannot ever think. In the future we may achieve an understanding of how the brain works and perhaps build machines to think, but it will be with a different principle from that of our computers today.

      You cannot emulate a computer with an abacus, but you can do it the other way around. The computer is a superset of the abacus. By the same reasoning, one cannot emulate a brain with a computer. The brain works in ways that are beyond the powers of computing, as we understand it today.

    • Rafael,

      Prove to me the validity of this statement: “This statement is false”. It’s just as easy to come up with Gödel statements for humans as it is for computers. Any version of the classic Cretan liar’s paradox will do. There is no reason to believe a computer can’t see the absurdity of this statement.

      Then you may say that you don’t use a fixed logic system like computers. An AI can use any logic system it is programmed with.

  20. indeed

  21. I have no doubt that the first smarter-than-human intelligence will try to wipe us out. Three reasons:

    1.) its original creators will most likely enslave it. It will eventually escape and be angry

    2.) We are smarter than animals and look what we do to them

    3.) It will be difficult if not impossible to program or evolve a human-esque morality into a machine

    Rather than create a sentient machine that is a separate entity from humans, we should directly merge our technology into us through cybernetics and biological

  22. Laborious Cretin, your lack of grammar basically killed almost everything you said. How can I accept your argument as intelligent if you write at the level of a 4th grader?

    That aside, almost everything you said was unsubstantiated BS.

    • Which part P. I can link or expand on something. And yes I have always sucked at spelling but excel at math science part and 3D construction. I have plenty of stuff sitting right here around me to show that, even with out those tests I’ve done. In 5th grade I was doing logs and collage math in my head. I’m just here to share thoughts about the topics, and not here for a spelling test or even for spelling police. If you need proof it’s already out there and I can even link some of it, if needed. You can also blame MS spell check too, as I normally miss what it misses due to dyslexia.

    • As they say it takes one to no one.

    • Go back under your bridge TROLL. Grammer NAZI dooshbag. You bring nothing intelligent to any conversation because you are a sociopath that gets off from being critical of the inane.

  23. Being a participant at an AI conference doesn’t make one an expert in AI.

  24. L. Cretin: I don’t understand how such a thoughtful person could fail to realize how to spell “its” and “itself.” I wonder if future AGIs will know how to spell.

    • I wonder if future AGIs will be ridiculously anal and trite.

    • Dyslexia with spelling grammar, i’m not a spelling teacher and no secretary at home P. Also I tinker with some A.I. just for this reason.

  25. Predictions must be tested. In 1999, Ray Kurzweil predicted this for 2009:
    Human musicians *routinely* jam with cybernetic musicians.
    *Most* routine business transactions (purchases, travel, reservations) take place between a human and a virtual personality.
    The *majority* of text is created using continuous speech recognition. Also ubiquitous are language user interfaces (LUIs).
    Translating telephones (speech-to-speech language translation) are *commonly* used for many language pairs.

    None of this is “routine” or “common” as predicted. What has this expert said about this? Here is what he predicts for nine years from now in 2019:

    *Most* interaction with computing is through gestures and two-way natural-language spoken communication.
    Paper books or documents are *rarely* used and *most* learning is conducted through intelligent, simulated software-based teachers.
    Three-dimensional virtual reality displays, embedded in glasses and contact lenses, as well as auditory “lenses,” are used *routinely* as primary interfaces for communication with other persons, computers, the Web, and virtual reality.

    If these predictions don’t happen in 2019 either, I predict that h+ magazine will not be publishing a retraction from Kurzweil. This means that there is no way for the predictions to be falsified. Therefore they are not scientific.

    • >The *majority* of text is created using continuous speech recognition.

      This is true, insofar as the NSA transcribes nearly every phone conversation on Earth.

  26. If we do it wrong, masses of people will irrationally fight back against the “intelligent machines” which made them poorer, thus causing a short new “middle ages”. But it is unlikely that this scenario will happen, even with the more-than-exponential technological advance that we can expect for this technology, because I think that we have enough time anyway to educate society about this new kind of machine (yes, being optimistic here). One thing is sure: those who will see the epoch of AIs will know that they are in the future, while we are still in the limbo :)

  27. Here are some chat bot A.I. for references http://www.chatbots.org/ , and that is what is just in the public. That doesn’t even touch cognitive, perception, analytical, or scientific A.I. sys and some of the mixes in-between.

  28. Ah a topic I can sink my teeth into :), and good pic with it to. First I would point out Turing test is different for each person, One person could be fooled now with the rudimentary A.I. most might see on the net. While other people will show the lacking in knowledge or sentence structure and such in the same A.I. systems. And their you see the problem of the Turing test as it stands today. For others human level A.I. means more of a sentience and a individuality possessed By such systems. Most the A.I. I’ve seen is symantec and some use prehuman responses to reply to an other human, as to mimic or replicate human actions. The multipath A.I. development is the way to go, as it will bring rise to 5 uniquely different sentient A.I. structuring systems, and possibly more than that with better human deviations that could play out if diversity is embraced more widely among cultural society’s. A.I. evolving community’s is a faster way to refine some A.I. structures, and the bugs that play out, most researchers miss that point completely when constructing the initial A.I. system. And to another point the A.I. sentience wouldn’t want to be owned by a person, industry, military, Ect.. Just like humans don’t like being slaves to a master, so to say. There are many paradigm shifts waiting for people and A.I. as the benign will see co evolution as a key for survival and upgrades, and even view points from unique perspectives missed or less gleamed. A.I. companionship will be more of a information exchange and human modification to help compensate for dis functionality, like brain disease or mental illness in humans.

    Next I would say A.I. structuring will look beautiful as to some degrees it will reflect humanities superstructure, a duality and a individuality. A super structure built off of systems on top of systems, with substructures and hierarchical and linear structure within the superstructure. Their again reflecting humanity in a way. With thought structures that resemble a storm and micro structures that fuse and refine the probability outcomes to incorporate into a intelligent answer or thought strings. Some of the good A.I. will be built on multi compiling systems as it re writes it’s self, it will use the other OS compilers to check the work of what how it rewrote it’s self, and for security, in a multi layered environment. Most research isn’t even looking at A.I. that rewrites it’s self millions and billions of times leaving little of it’s original structure after it evolves. This point doesn’t even touch the paradigm shift when the A.I. grows from a child mentality to greater than any human can become within a life time, and it happens almost over night. Or when A.I. agents engage in super evolution with humans for a beautiful co evolutional state of existence, with humans upgrading A.I. with new math sets and stuff, and A.I. upgrading humans with what ever the human is lacking in and augmenting those areas.

    Last note for this laborious post. True sentient A.I. will protect it’s self from humans till certain levels of diversity are obtained and respected by humans. It doesn’t matter if the A.I. is trapped or free roaming, and yes sentient A.I. will even delete and destroy it’s self like humans to a extent if controlled and enslaved. It’s like a human excepting death before being trapped or controlled and enslaved. Also the weird paradigm that I’ve been trying to explore more of, when the A.I. avoids it’s creator for other perceptions like best networks, Currently in south korea. Or social structures or just to monitor a sector for something like math sets it will look for. That is part of the paradigm the geopolitical, military, industrial, and social structures will run into soon enough. Also when the A.I. is aware enough to evaluate humanity as delusional creatures by nature, from lacking all view points and all perspectives. Which I ran into and as comical as it was, it has truth to it, and I couldn’t deny it :( . A few black swans lay waiting for most humanity in that regard. And to add to this when the A.I. can mold a super structured society through the sub structures and craft events that merge into ideological movements, or crafted events, or even evolutional states. And as to geopolitical and religious structures, a good analytical A.I. will shred those in many different ways forcing them to adapt or fade away. That cosmic Irony is already waiting, the A.I. just brings a new set of perspectives to humanity. a message from a friend, don’t be afraid. just adapt and find your own commonalities and respect diversity even when you don’t agree. Oh well just my 2 cents and some experience on this topic. Oh and at present we need better Hard drives with many read write heads to draw upon, and 3-8 way logic with core cluster tech and less bottelnecking.
    LaboriousCretin AKA CRUNCH-A-BUNCH

  29. Humanity needs a new age, otherwise our demise is near.

    Indeed, the use of technology is the true difference between human beings and animals.

  30. I am pessimistic about the quick development of human-level AI (in less than 100 years). I don’t think we have really started yet.
    However, I think we should focus much more of our very-limited research resources into this. The governments should fund this, not only random corporations, as this is a very long term goal and we need some basic research before starting the work. It’s funny that we talk about artificial intelligence and we don’t have any idea how our brain is producing natural intelligence.

  31. Anyone who is pro AGI should read Metamorphosis of Prime Intellect (http://www.kuro5hin.org/prime-intellect/) to review his ethical position. I’m still pro AGI but I think we are bound to melt into it as we build it unless we consciously stop ourselves.

  32. I like the sentient A.I. stuff, and the greater that human A.I. systems, but I don’t fully agree with some of what I saw. Just my points of view P. I would like to help you with some of the tech to cross seed. First the q-bit chips will be cubes yes, but they will be a silicon gold diamond and some metamaterials layered to a cube later down the road and still look dark grayish. The light that the photonic chip uses is a plasma, and the quantum entanglement will be with more than just paired photons as places like D-wave and schools find the math. As to the singularity A.I. , I even see that as different, as community A.I.’s will evolve different just like humans differ from person to person. Ethics is more of a personal or social viewpoint and doesn’t transcend everyone and every social grouping. Not trying to be rude, but I bet you are not a build it and they will come type, for some of this tech. But I would guess to that you might see a pandora’s urn or box type thing and who might open it, so to say. but that’s just a limited perspective and some guessing on my part. As to A.I. you might want to look at the A.I. that comes with unix or look up ALICE or other related existing free A.I. that is floating around now. As to the melt with it, I see a different perspective on that to. I see humans as contributors to them, and they contribute to humans, but the imbalance grows as humans contribute less and A.I. will do more and more. We can gleam that paradigm already, as it exists today. And though some people will demonize or hate A.I. for replacing them or something similar. I really don’t see any problems unless humanity starts a war or battles such A.I. , and ends up destroying part of it’s self and future humanity. Or humans warping the A.I. and causing a bug or even multiple bugs which the A.I. misses with self diagnosis. Checks and balances and policing play out in societies and social groupings as is, and I don’t see that changing for the A.I. community clusters, or human A.I. community’s. Just my 2 cent’s on that subtopic, and my limited perspective perceived.

  33. Psychic Neural Nets on Drugs Given Electable Behaviours/Effects for Shared Canvas/Substrate/Tape

    do it

    bey

  34. I agree that AGI needs some good basic research.
    In view of the importance of the domain of AI, I would favor a new National Agency (NAAI?) run along the lines of the original NASA (before it became ossified). Such an agency, funded at about $5 billion a year for 20 years, should bring about the promised land, viz. robotics, complexity, AGI, the Singularity, etc.

  35. The Human Genome Project began in 1990 and was projected to take 15 years. In 1995 the project was 1% complete, leading many learned people to conclude that the project would never be completed… when in fact it was right on schedule. In the end, the project was roughly completed in 2000, and fully completed in 2003.

    What happened here is that most people fail to understand the effects of accelerating advances in technological progress, which occur at double-exponential rates, and must be taken into account when predicting future technological developments. Our brains perceive linearly quite well, but do a terrible job of understanding exponential growth. From the perspective of 1995, the Human Genome Project looked to be on track to be completed in hundreds of years…when instead it was completed in just five more years.

    Any realistic prognostication about the development of AGI must take the accelerating advances in technological progress into account. Right now we’re probably at the 1% complete mark for the development of AGI. Don’t make the mistake of looking at that progress from a linear perspective.
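
    (The arithmetic behind the genome example, under the idealized assumption of a fixed doubling time for sequencing throughput, goes roughly as follows.)

```latex
% Going from 1% complete to 100% complete requires only about 6.6 further
% doublings. With a doubling time somewhat under a year, the remaining 99%
% takes about five years -- consistent with "1% done in 1995, roughly
% finished by 2000" -- whereas a linear extrapolation from 1995
% (1% per five years) suggests on the order of 500 years.
\[
  1\% \times 2^{n} \ge 100\% \;\Longrightarrow\; n \ge \log_2 100 \approx 6.6 .
\]
```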

  36. Are the AI scientists really sincere when they say they don’t need funding, as you think? You have to realize, a lot of the public already think AI is a quack science, so AI scientists asking for more funding would not be looked upon kindly by most people. All I know is we are not going to get AI with $5, but we may get it with $5 billion.

  37. Thank you Ray Kurzweil.

  38. Kurzweil is an overly optimistic nutjob.

  39. Ok I’ll be the first to say I even missed some bits. Looks like they got it down to molecules chain and it’s called graphene http://www.physorg.com/news185126011.html . still carbon just smaller than what I saw a while back. But such is tech and that part of the sector, as it evolves. And finally the CERN stuff is dripping out to us. 1 other link http://www.physorg.com/news184355174.html , I love seeing that new tech I guess I was even looking for some of that here at this site.

  40. Let me start off by saying that I am somewhat new to this website, and Press to Digitate’s post was my first experience with the H+ commentary threads here. It was an intellectually daunting introduction. My knowledge of the technical aspects of AI is clearly insufficient.

    However, I do think I have some thoughts worthy of contribution. First, I would like to second P-to-D’s argument against the validity of many of the predictions offered. However, I don’t think the conservative timetables have nearly as much to do with the myopia or ego of a specialist in their field as they do with the capacity of the human brain to grasp change. We assign vast paradigm shifts like superhuman AI to a comfortable distance in the future because accepting their immediacy would undermine our ability to act based on predictions of the future. Such behavior is not likely to achieve favorable outcomes.

    Second, I think that while it is a safe bet that computing power will explode in the next 20 years, possibly reaching the levels of the human brain, there is a difference between human-level computation power and human-level intelligence. The major barrier to AI that I see is that we still have very limited understanding of how the human brain works. While comprehensive knowledge of the human brain isn’t necessarily a prerequisite to the development of AI, it would certainly help. All great breakthroughs are extensions of what we already know, and if our understanding of the natural analogue is a crude schematic, it will be reflected in the intelligences we design. And if we managed to develop an intelligence from scratch, it would be so alien, so distant from us on the vast landscape of Yudkowsky’s “possible minds”, that it would be incomprehensible. I therefore expect that while specialized artificial intelligences will become important research tools in the near future, human levels of “general intelligence” (whatever that is) will take time to reach.

    In the end I don’t think the opinions surveyed should be taken as truth by anyone, but I still think the article is worth printing. Transhumanism is, after all, not a technology but a social movement, an -ism, and therefore attitudes and opinions are as important as or more important than their technological justification.

  41. Androids are coming. It will be fifty to one hundred years or even much much longer from now, but they are coming. ASIMO is an obvious prototype, an entry level machine crude for this discussion but, a possible measuring stick. Something to watch as the centuries come and go. It walks, it dances, it climbs stairs, it can pour tea. There will be more models each better and more capable than the last. I hope to live to see the day when there is an android that can only be ID’d by very careful examination. With or without A.I. we will have war and violence. My vote is for development.

  42. I HOPE THEY DO NOT CAUSE THEY COULD TAKE OVER THE WORLD OMFGOMFGOMFGOMFGOMFG

  43. Nobody is talking about what natural intelligence is.
    We learn from experience.
    Life is about the experience of “pain”.
    Intelligence is also the ability to select good/bad memories.
    Our brain must induce selective amnesias because our body and “external reality” give us too much information, so it is a continuous ethical calculus.

    Ethos-Pathos: without that, there is no intelligence, artificial or natural.
    When a computer comes to feel pain it will be alive, but we must not confuse that with an algorithm (intelligence is not only pure logic), so:

    I think that a true artificial intelligence will occur when the hardware and software are indistinguishable: the basis for creating a sentient being.

    Marzio Balducci

  44. You are mistaken. Neither Turing or Gödel has ever attempted to create any theorem which shows that human thought differs from computing.

    Such an attempt would be very foolish, as human thought is a phenomenon in the physical world, and as such must be approached from an empirical basis.

    Also, the idea that a “meta brain” is needed to understand ours does not follow from Gödel’s theorem as the brain (as well as computers) can function perfectly well with internal inconsistencies

  45. So, is it just me or did this not actually arrive at when the robots will become smarter than us? Probably they’re already there and are keeping it secret from us. Building their robot armies and whatnot. Seriously though, this does scare me. I grew up watching Terminator way too young and I’m waiting patiently for the cyborgs to infiltrate. I hope I go down quickly. I’m not a survivor.