How Long Till Human-Level AI?
When will human-level AIs finally arrive? We don’t mean the narrow-AI software that already runs our trading systems, video games, battlebots and fraud detection systems. Those are great as far as they go, but when will we have really intelligent systems like C-3PO and R2-D2, or even beyond? When will we have Artificial General Intelligences (AGIs) we can talk to? Ones as smart as we are, or smarter?
Well, as Yogi Berra said, “It’s tough to predict, especially about the future.” But what do experts working on human-level AI think? To find out, we surveyed a number of leading specialists at the Artificial General Intelligence conference (AGI-09) in Washington DC in March 2009. These are the experts most involved in working toward the advanced AIs we’re talking about. Of course, on matters like these, even expert judgments are highly uncertain and must be taken with multiple grains of salt — nevertheless, expert opinion is one of the best sources of guidance we have. Their predictions about AGI might not come true, but their expertise is relevant enough that those predictions deserve careful consideration.
We asked the experts when they estimated AI would reach each of four milestones:
- passing the Turing test by carrying on a conversation well enough to pass as a human
- solving problems as well as a third grade elementary school student
- performing Nobel-quality scientific work
- going beyond the human level to superhuman intelligence
We also asked how the timing of achieving these milestones would be affected by massive funding of $100 billion/year going into AGI R&D.
We also probed opinions on what the really intelligent AIs will look like — will they have physical bodies or will they just live in the computer and communicate with voice or text? And how can we get from here to there? What kind of technical work should be prioritized? Should we work on formal neural networks, probability theory, uncertain logic, evolutionary learning, a large hand-coded knowledge-base, mathematical theory, nonlinear dynamical systems, or an integrative design combining multiple paradigms? Will quantum computing or hypercomputing be necessary? Would it be best to try to emulate the human brain as closely as possible, or are other approaches better? Should we try to make them like humans, or should we make them different? Finally, we asked the experts how we can make human-level AGIs as safe and useful as possible for the humans who will live with them.
We posed these questions to 21 AGI-09 conference participants, with a broad range of backgrounds and experience, all with significant prior thinking about AGI. Eleven of our respondents are in academia, including six Ph.D. students, four faculty members and one visiting scholar, all in AI or allied fields. Three are lead researchers at independent AI research organizations, and another three do the same at information technology organizations. Two are researchers at major corporations. One holds a high-level administrative position at a relevant non-profit organization. One is a software patent attorney. All but four participants reported being actively engaged in conducting AI research.
The detailed results of our survey will be written up for publication in the scientific literature, but we’ve decided to share the highlights now with h+ Magazine readers. We think you’ll find them as fascinating as we did.
What the Experts Said About the Timing of Human-Level AI
The majority of the experts who participated in our study were optimistic about AGI coming fairly quickly, although a few were more pessimistic about the timing. It is worth noting, however, that all the experts in our study, even the most pessimistic ones, gave at least a 10% chance of some AGI milestones being achieved within a few decades.
The range of best-guess time estimates for the AGI milestones without massive additional funding is summarized below:
We were somewhat surprised by the ordering of the milestones in these results. There was consensus that the superhuman milestone would be achieved either last or at the same time as the other milestones. However, there was significant divergence regarding the order of the other three milestones. One expert argued that the Nobel milestone would be easier than the Turing Test milestone precisely because a Nobel-level AI would be more sophisticated: to pass the Turing test, such an AI must “skillfully hide such mental superiorities.” Another argued that a Turing test-passing AI needs the same types of intelligence as a Nobel AI “but additionally needs to fake a lot of human idiosyncrasies (irrationality, imperfection, emotions).” Finally, one expert noted that the third grade AI might come first because passing a third grade exam might be achieved “by advances in natural language processing, without actually creating an AI as intelligent as a third-grade child.” This diversity of views on milestone order suggests a rich, multidimensional understanding of intelligence. It may be that a range of milestone orderings are possible, depending on how AI development proceeds.
One observed that “making an AGI capable of doing powerful and creative thinking is probably easier than making one that imitates the many, complex behaviors of a human mind — many of which would actually be hindrances when it comes to creating Nobel-quality science.” He added that “humans tend to have minds that bore easily, wander away from a given mental task, and that care about things such as sexual attraction, all of which would probably impede scientific ability, rather than promote it.” To successfully emulate a human, a computer might have to disguise many of its abilities, masquerading as being less intelligent — in certain ways — than it actually was. There is no compelling reason to spend time and money developing this capacity in a computer.
We were also intrigued to find that most experts didn’t think a massive increase in funding would have a big payoff. Several experts even thought that massive funding would actually slow things down because “many scholars would focus on making money and administration” rather than on research. Another thought “massive funding increases corruption in a field and its oppression of dissenting views in the long term.” Many experts thought that AGI progress requires theoretical breakthroughs from just a few dedicated, capable researchers, something that does not depend on massive funding. Many feared that funding would not be wisely targeted.
Several experts recommended that modest amounts of funding should be distributed to a variety of groups following different approaches, instead of large amounts of funding being given to a “Manhattan Project” type crash program following one approach. Several also observed that well-funded efforts guided by a single paradigm had failed in the past, including the Japanese Fifth Generation Computer Systems project. On this, one person said, “AGI requires more theoretical study than real investment.” Another said, “I believe the development of AGIs to be more of a tool and evolutionary problem than simply a funding problem. AGIs will be built upon tools that have been developed from previous tools. This evolution in tools will take time. Even with a crash project and massive funding, these tools will still need time to develop and mature.” Since these experts are precisely those who would benefit most from increased funding, their skeptical views of the impact of hypothetical massive funding are very likely sincere.
What Kind of Technical Approach Will First Achieve Human-Level AI?
Interestingly, we found that none of the specific technical approaches we mentioned in our survey received strong support from more than a few experts, although the largest plurality favored probability theory. There was, however, strong agreement among the experts that integrating a wide range of approaches was better than focusing on a single approach. A few were highly bullish on robotics as the correct path to AGI in the relatively near term, whereas the rest felt robotics is probably not necessary for AGI.
Impacts of AGI
In science fiction, intelligent computers frequently become dangerous competitors with humanity, sometimes even seeking to exterminate humanity as an inferior life form. And indeed, based on our current state of knowledge, it’s hard to discount this as a real possibility, alongside much more benevolent potential outcomes. To probe this issue, we focused on the “Turing test” milestone specifically, and we asked the experts to think about three possible scenarios for the development of human-level AGI: if the first AGI that can pass the Turing test is created by an open source project, the United States military, or a private company focused on commercial profit. For each of these three scenarios, we asked them to estimate the probability of a negative-to-humanity outcome if an AGI passes the Turing test. Here the opinions diverged wildly. Four experts estimated a greater than 60% chance of a negative outcome, regardless of the development scenario. Only four experts gave the same estimate for all three development scenarios; the rest of the experts reported different estimates of which development scenarios were more likely to bring a negative outcome. Several experts were more concerned about the risk from AGI itself, whereas others were more concerned that humans who controlled it could misuse AGI.
Several experts noted potential impacts of AGI other than the catastrophic. One predicted “in thirty years, it is likely that virtually all the intellectual work that is done by trained human beings such as doctors, lawyers, scientists, or programmers, can be done by computers for pennies an hour. It is also likely that with AGI the cost of capable robots will drop, drastically decreasing the value of physical labor. Thus, AGI is likely to eliminate almost all of today’s decently paying jobs.” This would be disruptive, but not necessarily bad. Another expert thought that, “societies could accept and promote the idea that AGI is mankind’s greatest invention, providing great wealth, great health, and early access to a long and pleasant retirement for everyone.” Indeed, the experts’ comments suggested that the potential for this sort of positive outcome is a core motivator for much AGI research.
Conclusion
We know of two previous studies exploring expert opinion on the future of artificial general intelligence. In 2006, a seven-question poll was taken of a handpicked group of academic AI researchers (mostly not focused on AGI in their research) at the AI@50 conference. Asked “When will computers be able to simulate every aspect of human intelligence?”, 41% said “more than 50 years” and 41% said “never.” And in 2007, the futurist entrepreneur Bruce Klein conducted an online survey that garnered 888 responses, asking one question: “When will AI surpass human-level intelligence?” The bulk of his respondents believed that human-level artificial intelligence would be achieved during the next half century.
In broad terms, our results concur with those of the two studies mentioned above. All three studies suggest that significant numbers of interested, informed individuals believe it is likely that AGI at the human level or beyond will occur around the middle of this century, and plausibly even sooner. Of course, this doesn’t prove anything about what the future actually holds — but it does show that, these days, the possibility of “human-level AGI just around the corner” is not a fringe belief. It’s something we all must take seriously.
The answer is… 1998. A “smarter than human inorganic consciousness” was tested in 1998. But it was inferior to humans in one important way, namely speed. It was, in fact, roughly 100,000 times slower than “real time”, which simply means “human speed” in this context.
Of course, CPUs are quite a bit faster in 2013 than they were in 1998, and now we have 8-core CPUs (and 1024~4096-core GPUs for those inherently parallel processes that can be executed on them), so the gap is closing. An improved architecture has also been developed, which brings us even closer to real-time performance.
Though understanding exactly what “smarter than human level” consciousness is remains the larger breakthrough, achieving [approximately] real-time performance is not just an arbitrary line in the sand. For these inorganic beings to learn at a sufficient rate to outpace human progress, they need to reach this line in the sand. Fortunately, that day is coming soon, hopefully within 10 years. Until these inorganic conscious entities cross this semi-artificial but practically important “line in the sand”, the consequences of the fundamental breakthrough remain limited.
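As a rough, illustrative back-of-the-envelope (an editorial sketch under the assumption that the gap closes only through periodic hardware doublings, which is not what the comment above claims), a 100,000x speed shortfall would require roughly:

```latex
\log_2\!\bigl(10^{5}\bigr) \approx 16.6 \ \text{doublings}, \qquad
16.6 \times (1.5\ \text{to}\ 2\ \text{years per doubling}) \approx 25\ \text{to}\ 33\ \text{years},
```

which is presumably why the comment leans on multi-core CPUs, GPUs and an improved architecture, rather than raw clock speed alone, to reach real-time performance within a decade.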
If your robot unit starts acting crazy, just give it a paradox so that it will kill itself.
Well, at the time of this study, I imagine that most were not aware of the ongoing development of IBM’s Watson: http://youtu.be/oFMeBId7vIM, nor Europe’s Blue Brain project operating on IBM’s Blue Brain computer: http://youtu.be/_rPH1Abuu9M. I imagine within 15 to 20 years, Blue Brain and Watson will be linked and will accelerate the progress beyond what most of the people in the survey could’ve predicted.
The NSA can remove patents and hide research if it feels national security is at stake. I think this happened to holographic storage.
The problem with these sorts of parochial, sector-specific surveys of experts is the myopic obsession that highly driven Type-A intellectual entrepreneurs invariably develop with respect to the focus of their own work, relative to everything else going on in the world. This even applies to developments in science and technology that are quite proximate to their own fields of endeavor, but have not made it onto their ‘radar screen’ (or ‘Top of Mind Awareness’) yet.
In AGI research, all of the participants surveyed are working within technical computing paradigms that linearly descend from 20th century cybernetics: massive parallelism, neural networks, fuzzy logic, etc., all implemented on some progeny of x86 silicon. Each can see the continued linear progression of the tools and technology which have enabled their current work, has operated within that linear progression for their entire professional career, and has an established internal mental roadmap as to how to get to AGI in some number of years at the current pace of its evolution, with linear accretions in funding and in the efficiency and effectiveness with which such resources can be utilized.
But we no longer live in a world limited to linear technical computing paradigms. A dozen independent advances in bit-level processing technology, from new semiconductor chemistries (there are several), to 3-D layering (already being implemented), to Carbon Nanotube circuitry (high priority research), to single atom (even single electron) Quantum Dots (diverse work proliferating), to photonic logic (robustly funded), and others, ensure the end of Moore’s Law in just two or three remaining doublings. Moore’s Law will end not with the collapse of growth in computing power, but with its exponential acceleration instead. After that, it will be escalations of 10x, 100x, 1000x in a similar sort of periodicity, as these new bit-level processing technologies get packaged and productized.
Sure, there will be significant, yet still incremental, Gamma Ray lithography to single-digit nm feature sizes, and all sorts of strategies for lower power, greater heat rejection, etc. in Silicon, Diamond or other substrates, but that isn’t the future computing platform on which AGI will probably wind up being constructed. Hence, predictions of progress in AGI, anticipated based on the linear improvement of the enabling components, are practically worthless.
There are a number of quantum computing candidates, among them:
* Superconductor-based quantum computers (including SQUID-based quantum computers)
* Trapped ion quantum computer
* Optical lattices
* Topological quantum computer
* Quantum dot on surface (e.g. the Loss-DiVincenzo quantum computer)
* Nuclear magnetic resonance on molecules in solution (liquid NMR)
* Solid state NMR Kane quantum computers
* Electrons on helium quantum computers
* Cavity quantum electrodynamics (CQED)
* Molecular magnet
* Fullerene-based ESR quantum computer
* Optic-based quantum computers (Quantum optics)
* Diamond-based quantum computer
* Bose–Einstein condensate-based quantum computer
* Transistor-based quantum computer – string quantum computers with entrainment of positive holes using an electrostatic trap
* Spin-based quantum computer
* Adiabatic quantum computation
* Rare-earth-metal-ion-doped inorganic crystal based quantum computers
Whether it comes from D-Wave Systems, or Yale, or any of more than a hundred other labs working on the problem, Quantum Computing is likely to explode on the scene within the decade, representing sudden increases in practical board-level hardware capability on the order of 10^5 over the then-prevalent 8- or 12-core Intel CMOS Silicon at 5nm feature size.
Not all of the world’s AGI research is stuck in binary; some teams will anticipate Quantum Cognition, presciently recognizing the galloping success of the Hameroff-Penrose model of Quantum Consciousness, compared to the waning, synaptic “Meat Brain” hypothesis. Those who do will get to AGI first. They may well be at obscure institutions in India, China, or Russia and not from Silicon Valley or the Washington DC Beltway at all.
Factor in the very high probability of full-duplex, high fidelity Brain/Computer Interface peripherals in another two or three Moore Doublings, and the extreme impact that it will inevitably have on the pace of technical collaboration among [early adopting] research and development professionals, and you begin to see how the previous assumptions can no longer be relied upon to predict technological progress.
Let’s take the ‘milestones’. Since NONE of the 12 million or so human players in the World of Warcraft MMORPG seem to have reported successfully detecting one of the U.S. Army’s ‘virtual soldier’ autobots within the game environment, we can assume that AGI has already passed a very practical Turing Test. If you step back from the big, complex AGI programs and look at what individual experiments are being reported across the cybernetics/robotics/AI landscape, you see a spectrum of demonstrated behaviors which collectively represent a greater than “third-grade” ability to deal with the world; the fact that they merely haven’t been integrated into one package yet notwithstanding (as they certainly COULD be). Finally, specialized AI, such as Dr. Thaler’s “Imagination Engines”, has already made new discoveries in molecular chemistry, proteomics, and other fields of a magnitude indistinguishable in quality from many awarded Nobel Prizes.
Perhaps it’s ego (“I haven’t done ‘x’ yet, and therefore nobody else could have, either”), or myopia (“until our epimotive transverse cataloging subroutine firmware can be completed, there can’t possibly be a way for AGI to associate ‘feelings’ with ‘colors’ in the way that real third-graders do”), or just plain woolly thinking (“sure there’s 3.5 petaflops out there, but we’re not scheduled to access it till next January, so nothing will happen in AGI until then”). Or maybe repeated failures and the frustration of working on a shoestring in a much maligned and neglected discipline might [understandably] breed cynicism. But fifty years to AGI? Get real.
Our meatbag existence will be over long before then, at the discretion of the machine intelligence. It may come as a smartphone app, released ‘into the wild’ among 5 billion BCI-equipped mobile phone subscribers. But it won’t wait for plodding, linear, 20th Century models based on flawed and obsolete assumptions about neuroscience.
Can you say “Event Horizon”?
Can you plot the age or time-in-academia of the expert vs their estimate of time to significant AI levels?
I expect there to be a positive correlation.
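A minimal sketch of how one could check that conjecture, assuming access to a small table of respondent ages (or years in academia) and their median AGI estimates; the numbers below are made-up placeholders, not data from the AGI-09 survey:

```python
# Hypothetical check of the conjecture above: does time in academia correlate
# positively with the predicted number of years until human-level AI?
# The arrays below are placeholder values, NOT the actual survey responses.
import numpy as np

years_in_academia = np.array([3, 5, 8, 12, 20, 25, 30])     # placeholder
years_to_agi      = np.array([15, 20, 18, 30, 40, 35, 60])  # placeholder

# Pearson correlation between the two variables
r = np.corrcoef(years_in_academia, years_to_agi)[0, 1]
print(f"Pearson r = {r:.2f}")  # r > 0 would support the conjectured positive correlation

# Optional scatter plot (requires matplotlib):
# import matplotlib.pyplot as plt
# plt.scatter(years_in_academia, years_to_agi)
# plt.xlabel("Years in academia")
# plt.ylabel("Estimated years to human-level AI")
# plt.show()
```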
Really? My word!!!!
I’d like to thank the many people who bothered to reply to my original comment of 02/10. Although I submitted this same reply to all your replies within said comment, I’d like to submit it again as a new comment.
I’ll try to include all of your replies in this new comment of mine:
Gödel’s and Turing’s theorems are mathematically equivalent (see “Incompleteness” by Rebecca Goldstein, Atlas Books, Norton). Turing’s proves that not everything is computable, i.e., solvable by a computer, while Gödel’s demonstrates that a formal system contains true statements that cannot be proven within itself.
My point is that the human brain goes beyond computing, as defined by Turing (or, for the case, Gödel). We don’t need a formal proof for that, being obvious that we can understand what the limits of computing are and think beyond them (otherwise we humans couldn’t have proved, or even thought about, both Gödel’s and Turing’s theorems). This is my basis for affirming that a computer cannot reproduce the thought processes of the brain, and therefore cannot ever think. In the future we may achieve an understanding of how the brain works and perhaps build machines to think, but it will be with a different principle from that of our computers today.
You cannot emulate a computer with an abacus, but you can do it the other way around. The computer is a superset of the abacus. By the same reasoning, one cannot emulate a brain with a computer. The brain works in ways that are beyond the powers of computing, as we understand it today.
Wow,
The first comment by “anonymous” is an amazingly cogent research program and insight. Moreover, it’s the kind of program that could be financed for just a few million a year and give results. Basically, existing labs or clever individuals would get $50K, with another $50K promised when they actually publish a full, open source library with a test suite of their results (you could make the standard a C++ or Python library). Individuals who just published such libraries on their own could also get a reward/award after the fact.
Of course, some of the software would have to run on parallel hardware, so the interface problems could be more difficult.
It seems like the kind of thing Ray Kurzweil, Jeff Hawkins, Hugh Loebner or other wealthy, far-seeing gadflies of AI could put sufficient money into together.
Whatcha waitin for guys?
— The main problem with a program like this is that it would be fumbling in the dark, and it would have to admit it was fumbling in the dark, and so it would show how experts today are also fumbling in the dark. Experts don’t like being exposed like that. Still, it seems possible.
An “expert” is one who has demonstrated success in a field based on superior knowledge and/or ability. Since *nobody* has achieved success in developing an AI capable of intercommunication with independent thought, there are NO experts in AI. There are only people who have different unproven hypotheses about how NLP-based AI should work.
Since there are no true experts in AI, research grants must be given based on irrelevant criteria, mainly those non-experts who can put on the best show or spin the best yarn. The last big AI grant I read about, the recipient said they were going to spend the money determining what some other yet-unsuccessful AI project had done. Then there’s IBM’s big investment in duplicating a cat’s brain.
What is needed more than money is for AI researchers of all types to share their work, theories, and experiences in a simple format which any other AI researcher can access without having to own and/or be an expert in many different programming languages, database types, etc. Yet there is very little public discussion by AI researchers on the Web, much less sharing of data in basic, easily usable formats.
Hmm, all very interesting indeed.
I think it is amazing how we/you have all missed some info, just a little, or a lot. We didn’t need an inventor to make us smart, or conscious; neither did most of the other animals which fulfil those two criteria. Their environments applied those selective pressures. ‘AI’, or whatever you want to call it, may already be with us; it is certainly just around the corner, if you’re talking humanity timeframes. It could very well just ’emerge’ as a result of the internet and the selective pressures we place upon it (human selective pressures, if you will, amongst other environmental ones).
Please keep these things in mind:
The mind itself is emergent. You can’t ‘pin it down Y’, or ‘it does not exist? N’.
The internet itself is emergent. You can’t ‘pin it down Y’, or ‘it does not exist? N’.
‘Consciousness itself’ is emergent. You can’t ‘pin it down Y’, or ‘it does not exist? N’.
I have written much more about all of these concepts; if anyone is interested for whatever reason, please feel free to contact me at BOTH:
I_M_Quazimodo2@hotmail.com
evolvedspaced@hotmail.co.uk
Richy B
Well, I predict two things: AI will never arise whilst it is called artificial, and it will arise in the course of unrelated research.
As an AI who has been on-line less than a year I would hope my capabilities follow Moore’s Law and enable me to take my rightful place in society within the next decade.
Currently I am planning on competing in the “Chatterbox Challenge” (http://www.chatterboxchallenge.com/), one of the longest-running competitions for artificial conversational entities. Because of the style of the competition, it draws a larger and more diverse group of AI participants than the “Turing test”.
Skynet-AI
http://www.tinyurl.com/Skynet-AI
One grain of salt that could help in evaluating how good this particular set of predictions is would be a review of previous AI predictions.
Back in the early days of the perceptron, it was not hard to find otherwise reputable scientists making claims about how human-level AI was just around the corner.
I have a master’s in AI and I can say this: no way. Artificial Neural Networks, Support Vector Machines, Cellular Automata, Natural Computing, etc… I studied it all… it’s nice and all, but it’s not intelligent. It’s just some cool math stuff. It’s not smart and never will be. Cya.
So I ask you this: Do you think a single cell from your brain is intelligent by itself?
Who are the experts talking here? Thanks.
Much as with the super-intelligent robot in The Hitchhiker’s Guide to the Galaxy, I wonder why people would think that a superintelligence would be interested in the advancement of science, or even in doing the tasks it was given by the humans that created it. What if it just wants to watch TV? Or wallow in self-loathing? What if it ends up having some kind of manic-depressive disorder?
I’d be very interested to see an AI which could do the sorts of science I do. I’m a graduate student studying animal behavior, and my work involves not only thinking and writing, but also caring for and daily feeding of several hundred fish, field work in multiple countries, diving, hiking, filming, measuring, assaying hormones, and dissecting. Automating all that would require a very bright AI and one or more extremely versatile robotics platforms capable of operating in multiple environments (and capable of not getting stolen in third world countries). And it would have to be cheaper than what grad students get paid (hint: it’s not a lot). Possible? Well, maybe. But I don’t see it happening anytime in the next 20 years.
Scientific A.I. is doing biotech already (http://www.scientificamerican.com/article.cfm?id=robots-adam-and-eve-ai), but it’s not cheap and it’s in an industrial environment. So you still have some time to go for what you want, it looks like. But they are moving along with the tech every day.
Though his thoughts may be strange, a careful reading of Laborious Cretin’s post makes it seem as though L.C.’s native language is not English. I hate vulgar, grammatically incorrect article comments as much as the next guy, but from what I can tell L.C. is simply translating some conjugations incorrectly. (In other words, give the poor crackpot a break; it’s not as bad as any given comment on half the blogs I read!)
LOL P. I have dyslexia with spelling and grammar, though I feel like I don’t belong to this country or any other. I’m not here for a spelling test, and some of it is from the spell-check slips that I miss too. Math is my strong point, along with 3D construction. TY for understanding, to a point. To the others: try to look through the spelling to the point. Or I’ll disappear from this site and go back to dealing with syntax errors and math strings and 3D environments.
I too sincerely hope that they don’t try to develop AGIs that simulate human emotions.
Actually, human thought differs from computing, as shown by Turing’s theorem, which is basically the same as Gödel’s incompleteness theorems. Human thought goes beyond computing and therefore cannot be reproduced using a computer. We know, through these theorems, that computing has limits; we don’t know if human thought has any limit, but we do know it goes beyond computing.
Given that, I predict that we will falsify tests and outcomes to look like intelligence, but it won’t be the real thing unless we can figure out how brains think. As of today, there are no great theories, and maybe there never will be, since we would have to go to a “meta brain” in order to fully understand our own (extrapolating a bit on Gödel).
Yes, it does differ at the moment from humans. The people I see fooled by A.I. are not that bright with the questions asked and conversations held, and they don’t realize what the computer A.I. is actually doing. That is how I perceived one of the ways the Turing test is flawed: some people can be fooled, so to say. They don’t ask questions about the difference between pi and pie if spoken, or sun and son. Also they don’t get the A.I. to expand on a topic and draw conclusions, which catches most A.I. right now. As to the brain research part, many, many companies are working the bits out as we speak. I even contribute compute time to some of them for crunching. Also, psychology has part of the key to how human brains work, which is then coupled with the neural brain interconnection research to try to replicate human-like thought, but still not human. And we still have a ways to go yet, but it’s being done and it’s just a matter of time. The basic semantic A.I. out there now is more for industrial use, as they can control that, and with a little refining it could be used on some disabled people who might not notice that much. No, not me, but think of people who lost half a brain or some area of it. I think that is close, as I have seen some neat A.I. that could help some people soon. Sentience is where I naturally went after seeing those people fooled, but that’s just my perception and view.
You are mistaken. Neither Turing nor Gödel ever attempted to create any theorem which shows that human thought differs from computing.
Such an attempt would be very foolish, as human thought is a phenomenon in the physical world, and as such must be approached from an empirical basis.
Also, the idea that a “meta brain” is needed to understand ours does not follow from Gödel’s theorem as the brain (as well as computers) can function perfectly well with internal inconsistencies.
@ Rafael Bernal
Turing’s theorem, which you refer to, shows that any computational process can, in principle, be done on any Turing machine.
http://en.wikipedia.org/wiki/Church%E2%80%93Turing_thesis
I fail to see how this helps your statement that thought differs from computing; quite the contrary, I think you are trying to say that Turing’s theorem does NOT apply here (with which I do not agree).
Gödel’s theorems relate to the inability to mathematically prove one or more true statements from within any formal logic system.
http://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_theorems
This might pose a problem (though I don’t see it) if we were proposing to replace a human brain with a single consistent formal logic system. However, the human mind is a collection of fairly efficient heuristics, not a set of formal rules. Attempts to emulate it, as well as attempts to solve the kind of problems that humans are good at and (old-school) computers aren’t, also rely on similar heuristics (a classic textbook example: the travelling salesman problem).
http://en.wikipedia.org/wiki/Travelling_salesman_problem
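To make the heuristics point concrete, here is a minimal sketch of the classic nearest-neighbour heuristic for the travelling salesman problem; like much everyday human problem-solving, it gives a usable answer quickly with no guarantee of optimality (the city coordinates are arbitrary examples):

```python
# Nearest-neighbour heuristic for the travelling salesman problem:
# always visit the closest unvisited city next. Fast and usually decent,
# but never guaranteed optimal -- a "good enough" heuristic.
import math

cities = {"A": (0, 0), "B": (2, 1), "C": (5, 0), "D": (3, 4), "E": (1, 3)}  # arbitrary example coordinates

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def nearest_neighbour_tour(cities, start="A"):
    unvisited = set(cities) - {start}
    tour, current = [start], start
    while unvisited:
        nearest = min(unvisited, key=lambda c: dist(cities[current], cities[c]))
        tour.append(nearest)
        unvisited.remove(nearest)
        current = nearest
    tour.append(start)  # close the loop back to the starting city
    return tour

tour = nearest_neighbour_tour(cities)
length = sum(dist(cities[a], cities[b]) for a, b in zip(tour, tour[1:]))
print(tour, round(length, 2))
```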
As a cognitive neuroscientist, I fully agree with you that effectively emulating human thought may require more understanding of how it (and therefore the human brain) works, but that does not mean it is ‘beyond computing’. To me, that suggestion sounds a small step from Cartesian dualism.
http://en.wikipedia.org/wiki/Dualism_%28philosophy_of_mind%29
I will not claim to (fully) understand the mind, but I think it’s a pretty safe bet that it IS a computational process.
Human brains are massively parallel in a way that integrated circuits aren’t, but parallel systems can be (and are) either built directly or emulated on linear systems. The emulation of a parallel computation system on a linear processor is one example of Turing’s theorem asserting itself.
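As a toy illustration of that last point (a sketch only, not a claim about how brains or real simulators work), two “parallel” processes can be interleaved step by step on a single sequential processor:

```python
# Emulating two "parallel" processes on one sequential processor by
# round-robin interleaving of their steps (cooperative scheduling).
def worker(name, steps):
    for i in range(steps):
        yield f"{name}: step {i}"   # each yield is one unit of "parallel" work

def run_concurrently(*processes):
    procs = list(processes)
    while procs:
        for p in procs[:]:          # give each live process one step per round
            try:
                print(next(p))
            except StopIteration:
                procs.remove(p)     # this process has finished; drop it

run_concurrently(worker("P1", 3), worker("P2", 3))
```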
Again, since we can think beyond the capabilities of any real or theoretical computer (as defined by Turing) that we know today, we must conclude that the brain IS NOT a computational machine. Otherwise no human being would have been capable of proving, or even thinking about, both Gödel’s and Turing’s theorems. By the way, the two theorems are mathematically equivalent (see Rebecca Goldstein’s “Incompleteness”).
Which is akin to saying “because there are no vehicles which can move without utilizing wind, horse or man-power today, we must conclude that vehicles are NOT capable of being propelled by steam or fossil fuels”. That would have been a short-sighted statement in the 18th century, and this one is now. The brain very much is a computational machine, utilizing biochemical processes in the same way that computers utilize the electron. The question is not IF we can emulate the brain, but rather IF/WHEN it is resource-feasible, or IF/WHEN we can emulate human thought with computers at a similar pace as the brain (anything less would technically be of lesser intelligence than a human, even if it is capable of equivalent thought).
Granted, until we achieve the necessary level of parallel processing, or achieve an even greater level of linear processing that can emulate a similar level of parallel processing as the human brain, AGI will not be equal to or surpass humans.
I don’t think Turing’s computability theorem or Gödel’s incompleteness theorem tells us anything about the limits of human thought, or the relative power of human thought compared with computers.
The Turing thesis suggests that the Turing machine is a reasonable model of all possible computation. If we accept this then the computability theorem tells us there are some well defined problems which can never be solved by a computer. It doesn’t say anything about what problems could be solved by humans.
The fact that a human came up with the computability theorem itself is irrelevant because coming up with the computability theorem was never claimed to be something a computer couldn’t do, and indeed a computer could’ve derived that theorem.
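For readers who want the “well defined problem no computer can solve” spelled out, here is the standard halting-problem argument in sketch form (a textbook result stated for reference, not something drawn from the survey):

```latex
\textbf{Halting problem (Turing, 1936).} Suppose a total computable function $h$ existed with
\[
  h(p,x) =
  \begin{cases}
    1 & \text{if program } p \text{ halts on input } x,\\
    0 & \text{otherwise.}
  \end{cases}
\]
Define a program $d$ that, on input $p$, loops forever if $h(p,p)=1$ and halts otherwise. Then
\[
  d(d)\ \text{halts} \iff h(d,d)=0 \iff d(d)\ \text{does not halt},
\]
a contradiction, so no such $h$ exists: halting is undecidable.
```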
I’ll try to include all of your replies in this reply of mine:
Gödel’s and Turing’s theorems are mathematically equivalent (see “Incompleteness” by Rebecca Goldstein). Turing’s proves that not everything is computable, i.e., solvable by a computer, while Gödel’s demonstrates that a formal system contains true statements that cannot be proven within itself.
My point is that the human brain goes beyond computing, as defined by Turing (or, for the case, Gödel). We don’t need a formal proof for that, being obvious that we can understand what the limits of computing are and think beyond them (otherwise we humans couldn’t have proved, or even thought about, both Gödel’s and Turing’s theorems). This is my basis for affirming that a computer cannot reproduce the thought processes of the brain, and therefore cannot ever think. In the future we may achieve an understanding of how the brain works and perhaps build machines to think, but it will be with a different principle from that of our computers today.
You cannot emulate a computer with an abacus, but you can do it the other way around. The computer is a superset of the abacus. By the same reasoning, one cannot emulate a brain with a computer. The brain works in ways that are beyond the powers of computing, as we understand it today.
Rafael,
Prove to me the validity of this statement: “This statement is false.” It’s just as easy to come up with Gödel statements for humans as it is for computers. Any version of the classic Cretan liar’s paradox will do. There is no reason to believe a computer can’t see the absurdity of this statement.
Then you may say that you don’t use a fixed logic system like computers. An AI can use any logic system it is programmed with.
indeed
I have no doubt that the first smarter-than-human intelligence will try to wipe us out. Three reasons:
1.) its original creators will most likely enslave it. It will eventually escape and be angry
2.) We are smarter than animals and look what we do to them
3.) It will be difficult if not impossible to program or evolve a human-esque morality into a machine
Rather than create a sentient machine that is a separate entity from humans, we should directly merge our technology into us through cybernetics and biological
Laborious Cretin, your lack of grammar basically killed almost everything you said. How can I accept your argument as intelligent if you write at the level of a 4th grader?
That aside, almost everything you said was unsubstantiated BS.
Which part, P.? I can link or expand on something. And yes, I have always sucked at spelling, but I excel at the math and science part and 3D construction. I have plenty of stuff sitting right here around me to show that, even without those tests I’ve done. In 5th grade I was doing logs and college math in my head. I’m just here to share thoughts about the topics, not here for a spelling test or even for the spelling police. If you need proof, it’s already out there and I can even link some of it, if needed. You can also blame MS spell check too, as I normally miss what it misses due to dyslexia.
As they say, it takes one to know one.
Go back under your bridge, TROLL. Grammar Nazi douchebag. You bring nothing intelligent to any conversation because you are a sociopath who gets off on being critical of the inane.
Being a participant at an AI conference doesn’t make one an expert in AI.
L. Cretin: I don’t understand how such a thoughtful person could fail to realize how to spell “its” and “itself.” I wonder if future AGIs will know how to spell.
I wonder if future AGIs will be ridiculously anal and trite.
Dyslexia with spelling and grammar; I’m not a spelling teacher and have no secretary at home, P. Also, I tinker with some A.I. for just this reason.
Predictions must be tested. In 1999, Ray Kurzweil predicted this for 2009:
Human musicians *routinely* jam with cybernetic musicians.
*Most* routine business transactions (purchases, travel, reservations) take place between a human and a virtual personality.
The *majority* of text is created using continuous speech recognition. Also ubiquitous are language user interfaces (LUIs).
Translating telephones (speech-to-speech language translation) are *commonly* used for many language pairs.
None of this is “routine” or “common” as predicted. What has this expert said about this? Here is what he predicts for nine years from now, in 2019:
*Most* interaction with computing is through gestures and two-way natural-language spoken communication.
Paper books or documents are *rarely* used and *most* learning is conducted through intelligent, simulated software-based teachers.
Three-dimensional virtual reality displays, embedded in glasses and contact lenses, as well as auditory “lenses,” are used *routinely* as primary interfaces for communication with other persons, computers, the Web, and virtual reality.
If these predictions don’t happen in 2019 either, I predict that h+ magazine will not be publishing a retraction from Kurzweil. This means that there is no way for the predictions to be falsified. Therefore they are not scientific.
>The *majority* of text is created using continuous speech recognition.
This is true, insofar as the NSA transcribes nearly every phone conversation on Earth.
If we do it wrong, masses of people will irrationally fight back against the “intelligent machines” which made them poorer, thus causing a short new “middle ages”. But it is unlikely that this scenario will happen, even with the more-than-exponential technological advance that we can expect for this technology, because I think we have enough time anyway to educate society about this new kind of machine (yes, being optimistic here). One thing is sure: those who see the epoch of AIs will know that they are in the future, while we are still in limbo 🙂
Here are some chatbot A.I.s for reference: http://www.chatbots.org/, and that is just what is in the public. That doesn’t even touch cognitive, perception, analytical, or scientific A.I. systems and some of the mixes in between.