Kruel AGI Risks Roundtable
The subject of AI risk recently made headlines again when the Cambridge Project for Existential Risk announced that it would open a so-called “Terminator Center” to study existential risks from AI and robotics, and with yet another New York Times article on building “moral machines”. Although researchers in the field disagree strongly about whether such risks are real, and about whether machines can or should be considered ethical agents, it seems an appropriate time to discuss such risks as we look forward to widespread deployment of early AI systems such as self-guiding vehicles and Watson-like question answering systems.
Back in 2011, Alexander Kruel (XiXiDu) started a Q&A-style interview series on LessWrong, asking various experts in artificial intelligence about their perception of AI risks. He convened what was, in essence, a council of expert advisors to discuss AI development and risk. This advisory-panel approach stands in contrast to that announced by CPER, which in effect appointed a single “expert” to opine on the subject of AI risk. I am re-publishing these interviews here because I feel they are an invaluable resource for anyone looking into the area of AI risk. I have collected and re-edited them to present them in a conversational manner, as a sort of virtual expert roundtable on AI risks.
While an outside viewpoint on risk is welcome, the value here is in gathering a group of experts currently working in the field and asking them what they think. These individuals may have unique insights as a result of their experience in trying to build working AGI systems as well as narrow AIs. Notably, there is a diversity of opinions even among people who have similar interests and mostly agree about the bright future of AI research. I’ve also added a few simple data graphics to help visualize this diversity.
For the advisory council, Alexander Kruel brought together 30 expert advisors to discuss the potential risks of AI. All of the advisors have expert-level academic credentials and practical experience building AI systems. Detailed curricula vitae for the members of The Roundtable can be found at the end of the posting.
- Dr. Brandon Rohrer
- Professor Tim Finin
- Dr. Pat Hayes
- Professor Nils John Nilsson
- Professor Peter J. Bentley
- Professor David Alan Plaisted
- Dr. Hector Levesque
- Professor Paul Cohen
- Dr. Pei Wang
- Dr. J. Storrs Hall
- Dr. William Uther
- Professor Michael G. Dyer
- Dr. John Tromp
- Dr. Kevin Korb
- Dr. Leo Pape
- Professor Peter Gacs
- Professor Donald Loveland
- Eray Ozkural
- Dr. Laurent Orseau
- Richard Loosemore
- Monica Anderson
- Professor John E. Laird
- Dr. Kristinn R. Thorisson
- Professor Larry Wasserman
- Professor Michael Littman
- Dr. Shane Legg
- Professor Jürgen Schmidhuber
- Professor Stan Franklin
- Abram Demski
- Dr. Richard Carrier
Assuming no global catastrophe halts progress, by what year would you assign a 10%/50%/90% chance of the development of roughly human-level machine intelligence?
Explanatory remark: P(human-level AI by (year) | no wars ∧ no disasters ∧ beneficial political and economic development) = 10%/50%/90%
Brandon Rohrer: 2032/2052/2072 [10%/50%/90%]
Tim Finin: 20/100/200 years [from now]
Pat Hayes: I do not consider this question to be answerable, as I do not accept this (common) notion of “human-level intelligence” as meaningful. Artificially intelligent artifacts are in some ways superhuman, and have been for many years now; but in other ways, they are sub-human, or perhaps it would be better to say, non-human. They simply differ from human intelligences, and it is inappropriate to speak of “levels” of intelligence in this way. Intelligence is too complex and multifaceted a topic to be spoken of as though it were something like sea level that can be calibrated on a simple linear scale.
If by ‘human-level’ you mean that the AI will be an accurate simulacrum of a human being, or perhaps a human personality (as is often envisioned in science fiction, e.g. HAL from “2001”), my answer would be: never. We will never create such a machine intelligence, because it is probably close to technically impossible, and not technically useful (note that HAL failed in its mission through being TOO “human”: it had a nervous breakdown. Bad engineering.) But mostly because we have absolutely no need to do so. Human beings are not in such short supply at present that it makes sense to try to make artificial ones at great cost. And actual AI work, as opposed to the fantasies often woven around it by journalists and futurists, is not aiming to create such things. A self-driving car is not an artificial human, but it is likely to be a far better driver than any human, because it will not be limited by human-level attention spans and human-level response times. It will be, in these areas, super-human, just as present computers are superhuman at calculation and at keeping track of large numbers of complex patterns.
Nils Nilsson: Because human intelligence is so multi-faceted, your question really should be divided into each of the many components of intelligence. For example, on language translation, AI probably already exceeds the performance of many translators. On integrating symbolic expressions in calculus, AI (or computer science generally) is already much better than humans. AI does better on many planning and scheduling tasks. On chess, same! On the Jeopardy! quiz show, same!
A while back I wrote an essay about a replacement for the Turing test. It was called the “Employment Test.” (See: http://ai.stanford.edu/~nilsson/OnlinePubs-Nils/General_Essays/AIMag26-04-HLAI.pdf) How many of the many, many jobs that humans do can be done by machines? I’ll rephrase your question to be: When will AI be able to perform around 80% of these jobs as well or better than humans perform?
10% chance: 2030
50% chance: 2050
90% chance: 2100
David Plaisted: It seems that the development of human-level intelligence always comes later than people think it will. I have no idea how long this might take.
Peter J. Bentley: That depends on what you mean by human-level intelligence and how it is measured. Computers can already surpass us at basic arithmetic. Some machine learning methods can equal us in recognition of patterns in images. Most other forms of “AI” are tremendously bad at tasks we perform well. The human brain is the result of several billion years of evolution, from molecular scales to macro scales. Our evolutionary history spans unimaginable numbers of generations, challenges, environments, predators, etc. For an artificial brain to resemble ours, it must necessarily go through a very similar evolutionary history. Otherwise it may be a clever machine, but its intelligence will be in areas that do not necessarily resemble human intelligence.
Hector Levesque: No idea. There are a lot of factors beyond the wars etc. mentioned. It’s tough to make these kinds of predictions.
Pei Wang: My estimations are, very roughly, 2020/2030/2050, respectively.
Here by “roughly as good as humans” I mean that the AI will follow roughly the same principles as humans in information processing, though that does not mean the system will have the same behavior or capabilities as a human, due to differences in body, experience, motivation, etc.
J. Storrs Hall: 2020 / 2030 / 2040 [10%/50%/90%]
Paul Cohen: I wish the answer were simple. As early as the 1970s, AI programs were making modest scientific discoveries and discovering (or more often, rediscovering) bits of mathematics. Computer-based proof checkers are apparently common in math, though I don’t know anything about them. If you are asking when machines will function as complete, autonomous scientists (or anything else) I’d say there’s little reason to think that that’s what we want. For another few decades we will be developing assistants, amplifiers, and parts of the scientific/creative process. There are communities who strive for complete and autonomous automated scientists, but last time I looked, a couple of years back, it was “look ma, no hands” demonstrations with little of interest under the hood. On the other hand, joint machine-human efforts, especially those that involve citizen scientists (e.g., Galaxy Zoo, Foldit) are apt to be increasingly productive.
William Uther: I don’t think this question is well specified. It assumes that ‘intelligence’ is a one-dimensional quantity. It isn’t. We already have AI systems that play chess better than the best humans, and mine data (one definition of ‘learn’) better than humans. Robots can currently drive cars roughly as well as humans can. We don’t yet have a robot that can clean up a child’s messy bedroom. Of course, we don’t have children that can do that either. 🙂
Kevin Korb: 2050/2200/2500 [10%/50%/90%]
The assumptions, by the way, are unrealistic. There will be disruptions.
John Tromp: I believe that, in my lifetime, computers will only be proficient at well-defined and specialized tasks. Success in the above disciplines requires too much real-world understanding and social interaction. I will not even attempt projections beyond my lifetime (let’s say beyond 40 years).
Michael G. Dyer: See Ray Kurzweil’s book: The Singularity Is Near.
As I recall, he thinks it will occur before mid-century.
I think he is off by at least an additional 50 years (but I think we’ll have as many personal robots as cars by 2100).
One must also distinguish between the first breakthrough of a technology vs. that breakthrough becoming cheap enough to be commonplace, so I won’t give you any percentages. (Several decades passed between the first cell phone and billions of people having cell phones.)
Peter Gacs: I cannot calibrate my answer as exactly as the percentages require, so I will just concentrate on the 90%. The question is a common one, but in my opinion history will not answer it in this form. Machines do not develop in direct competition with human capabilities, but rather as attempts to enhance and complement them. If they still become better at certain tasks, this is a side effect. But as a side effect, it will indeed happen that more and more tasks that we proudly claim to be creative in a human way will be taken over by computer systems. Given that the promise of artificial intelligence is by now 50 years old, I am very cautious with numbers, and will say that at least 80 more years are needed before jokes about the stupidity of machines become outdated.
Eray Ozkural: 2025/2030/2045. [10%/50%/90%]
Assuming that we have the right program in 2035 with 100% probability, it could still take about 10 years to train it adequately, even though we might find that our programs by then learn much faster than humans. I anticipate that the most expensive part of developing an AI will be training, although we tend to assume that after we bring it up to primary-school level, i.e. it can read and write, it would be able to learn much on its own. I optimistically estimated that it would take $10 million and 10 years to train an AI in basic science. Extending that to cover all four of science, mathematics, engineering and programming could take even longer. It arguably takes a human 15-20 years of training to become a good programmer, and very few humans can program well even after that much educational effort and expense.
With a quite high uncertainty though.
My current estimate is that (I hope) we will know we have built a core AGI by 2025, but a lot of both research and engineering work and time (and learning for the AGI) will be required for the AGI to reach human level in most domains, up to 20 years in the worst case I speculate and 5 years at least, considering that a lot of people will probably be working on it at that time. That is, if we really want to make it human-like.
Richard Loosemore: 2015 – 2020 – 2025 [10%/50%/90%]
These are all Reductionist sciences. I assume the question is whether we’ll have machines capable of performing Reduction in these fields. If working on pre-reduced problems, where we already have determined which Models (formulas, equations, etc) to use and know the values of all input variables, then we already have Mathematica. But here the Reduction was done by a human so Mathematica is not AI.
AIs would be useful for more everyday things, such as (truly) Understanding human languages years before they Understand enough to learn the Sciences and can perform full-blown Reduction. This is a much easier task, but is still AI-Complete. I think the chance we’ll see a program truly Understand a human language at the level of a 14-year-old is
Such an AI would be worth hundreds of billions of dollars and makes a worthy near-term research goal. It could help us radically speed up research in all areas by allowing for vastly better text-based information filtering and gathering capabilities, perfect voice based input, perfect translation, etc.
Leo Pape: For me, roughly human-level machine intelligence means an embodied machine. Given the current difficulties of making such machines, I expect it will take at least several hundred years before human-level intelligence can be reached. Making better machines is not a question of superintelligence, but of long and hard work. Try getting some responses to your questionnaire from roboticists.
Donald Loveland: Experts are usually correct in their predictions but terrible in their timing. They usually see things as coming earlier than the event actually occurs, as they fail to see the obstacles. Also, it is unclear what you mean by human-level intelligence. The Turing test will be passed in its simplest form perhaps in 20 years. Full functional replacements for humans will likely take over 100 years (50% likelihood), or 200 years (90% likelihood).
John E. Laird: I see this as a long way out. There are many technical/scientific hurdles, and there is no general consensus that there is a need for the type of autonomous human-level machine intelligence found in science fiction. Instead, I predict that we will see machine intelligence embedded into more and more systems, making other systems “smart,” but not as general as humans, and not with complete human-level intelligence. We will see natural language and speech becoming ubiquitous so we can communicate with devices (more than Siri) in the next 5-10 years. But I don’t see the development of autonomous HLMI coming anytime soon (such as the robots in the movies – Data, for example). There are many technical hurdles, but there are also economic, political, and social issues. On the technical side, very few people are working on the problem of integrated human-level intelligence, and it is slow going. It would take significant, long-term investment, and I don’t see that happening anytime soon.
10% 20 years
50% 50 years
90% 80 years
Kristinn R. Thorisson:
Mathematics and programming will surely come before engineering and science, by at least 20 years, with science emerging last.
10%: 2050 (I also think P=NP in that year.)
Shane Legg: 2018, 2028, 2050 [10%/50%/90%]
10%: 5 years (2017).
50%: 15 years (2027).
90%: 50 years (2062).
Of course, just numbers are not very informative. The year numbers I gave are unstable under reflection, at a factor of about 2 (meaning I have doubled and halved these estimates in the past minutes while considering it). More relevant is the variance; I think the year of development is fundamentally hard to predict, so that it’s rational to give a significant probability mass to within 10 years, but also to it taking another 50 years or more. However, the largest bulk of my probability mass would be roughly between 2020 and 2030, since (1) the computing hardware to simulate the human brain would become widely available and (2) I believe less than that will be sufficient, but the software may lag behind the hardware potential by 5 to 10 years. (I would estimate more lag, except that it looks like we are making good progress right now.)
Richard Carrier: 2020/2040/2080 [10%/50%/90%]
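To give a rough sense of the spread in the timeline answers above, the nine respondents who stated absolute 10%/50%/90% year triples can be aggregated. This is a sketch I added, not part of the original interviews; the selection of which answers count as explicit triples is my own:

```python
from statistics import median

# 10%/50%/90% year estimates, as stated explicitly in the answers above
estimates = {
    "Rohrer":    (2032, 2052, 2072),
    "Nilsson":   (2030, 2050, 2100),
    "Wang":      (2020, 2030, 2050),
    "Hall":      (2020, 2030, 2040),
    "Korb":      (2050, 2200, 2500),
    "Ozkural":   (2025, 2030, 2045),
    "Loosemore": (2015, 2020, 2025),
    "Legg":      (2018, 2028, 2050),
    "Carrier":   (2020, 2040, 2080),
}

# Transpose the triples into three columns and take the median of each
for label, column in zip(("10%", "50%", "90%"), zip(*estimates.values())):
    print(f"median {label} year: {median(column)}")
```

Even the median 50% estimates cluster around 2030-2050, while individual answers range from the 2020s to Korb's 2200, which is the diversity the data graphics are meant to show.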
What probability do you assign to the possibility of human extinction as a result of badly done AI?
Explanatory remark :
P(human extinction | badly done AI) = ?
(Where ‘badly done’ = AGI capable of self-modification that is not provably non-dangerous.)
Brandon Rohrer: < 1%
Tim Finin: 0.001
Pat Hayes: Zero. The whole idea is ludicrous.
Nils Nilsson: 0.01% probability during the current century. Beyond that, who knows?
David Plaisted: I think people will be so concerned about the misuse of intelligent computers that they will take safeguards to prevent such problems. To me it seems more likely that disaster will come on the human race from nuclear or biological weapons, or possibly some natural disaster.
Peter J. Bentley: If this were ever to happen, it is most likely to be because the AI was too stupid and we relied on it too much. It is *extremely* unlikely for any AI to become “self aware” and take over the world as they like to show in the movies. It’s more likely that your pot plant will take over the world.
Hector Levesque: Low. The probability of human extinction by other means (e.g. climate problems, microbiology, etc.) is sufficiently higher that if we were to survive all of them, surviving the result of AI work would be comparatively easy.
William Uther: Again, I don’t think your question is well specified. Most AI researchers are working on AI as a tool: given a task, the AI tries to figure out how to do it. They’re working on artificial intelligence, not artificial self-motivation. I don’t know that we could even measure something like ‘artificial consciousness’.
All tools increase the power of those that use them. But where does the blame lie if something goes wrong with the tool? In the terms of the US gun debate: Do guns kill people? Do people kill people? Do gun manufacturers kill people? Do kitchen knife manufacturers kill people?
Personally, I don’t think ‘Terminator’ style machines run amok is a very likely scenario. Hrm – I should be clearer here. I believe that there are already AI systems that have had malfunctions and killed people (see http://www.wired.com/dangerroom/2007/10/robot-cannon-ki/ ). I also believe that when fire was first discovered there was probably some early caveman who started a forest fire and got himself roasted. He could even have roasted most of his village. I do not believe that mankind will build AI systems that will systematically seek out and deliberately destroy all humans (e.g. ‘Skynet’), and I further believe that if someone started a system like this it would be destroyed by everyone else quite quickly.
It isn’t hard to build in an ‘off’ switch. In most cases that is a very simple solution to ‘Skynet’ style problems.
I think there are much more worrying developments in the biological sciences. See http://www.nytimes.com/2012/01/08/opinion/sunday/an-engineered-doomsday.html
Leo Pape: Human beings are already using all sorts of artificial intelligence in their (war)machines, so it is not impossible that our machines will be helpful in human extinction.
Donald Loveland: Ultimately 95% (and not just by bad AI, but by generalized evolution). In other words, in this sense all AI is badly done AI, for I think it is a natural sequence that AI leads to superior artificial minds, which lead to the eventual evolution, or replacement (depending on the speed of the transformation), of humans into artificial life.
John E. Laird: 0% – I don’t see the development of AGI leading to this. There are other dangers of AI, where people (or governments) use the power that can be gained from machine intelligence to their own ends (financially, politically, …) that could end very badly (destruction of communication networks – bringing down governments and economies), but the doomsday scenarios of Terminator and the Matrix just don’t make sense for many reasons. (Think James Bond evil geniuses instead of evil robots.) If you want to get scared, watch Colossus: The Forbin Project – but that also is just science fiction (along the lines of your next question, as it turns out).
Kristinn R. Thorisson: I suspect that the task of making the next leap in building an AI becomes exponentially more difficult as intelligence grows, so if it took 100 years to develop a human-level (measured roughly) AI system from the time when software was automatically running on a computer (around the middle of the 20th century), then the next milestone of roughly equal significance will be reached roughly 100 years later, or sometime in the timeframe between 2100-2180. However, before that milestone is reached it may already have been made irrelevant by other more interesting milestones based on e.g. running vast numbers of specially modified human-level AIs.
Michael Littman: epsilon, assuming you mean: P(human extinction caused by badly done AI | badly done AI)
I think complete human extinction is unlikely, but, if society as we know it collapses, it’ll be because people are being stupid (not because machines are being smart).
Shane Legg: Depends a lot on how you define things. Eventually, I think human extinction will probably occur, and technology will likely play a part in this. But there’s a big difference between this being within a year of something like human level AI, and within a million years. As for the former meaning…I don’t know. Maybe 5%, maybe 50%. I don’t think anybody has a good estimate of this.
If by suffering you mean prolonged suffering, then I think this is quite unlikely. If a super intelligent machine (or any kind of super intelligent agent) decided to get rid of us, I think it would do so pretty efficiently. I don’t think we will deliberately design super intelligent machines to maximise human suffering.
Jürgen Schmidhuber: Low for the next few months.
Stan Franklin: On the basis of current evidence, I estimate that probability as being tiny. However, the cost would be so high, that the expectation is really difficult to estimate.
Abram Demski: This is somewhat difficult. We could say that AIs matching that description have already been created (with few negative consequences). I presume that “roughly human-level” is also intended, though.
If the human-level AGI
0) is autonomous (has, or forms, long-term goals)
1) is not socialized
2) figures out how to access spare computing power on the internet
3) has a goal which is very bad for humans (i.e., implies extinction)
4) is alone (has no similarly-capable peers)
then the probability of human extinction is quite high, though not 1. The probability of #0 is somewhat low; #1 is somewhat low; #2 is fairly high; #3 is difficult to estimate; #4 is somewhat low.
#1 is important because a self-modifying system will tend to respond to negative reinforcement concerning sociopathic behaviors resulting from #3– though, it must be admitted, this will depend on how deeply the ability to self-modify runs. Not all architectures will be capable of effectively modifying their goals in response to social pressures. (In fact, rigid goal-structure under self-modification will usually be seen as an important design-point.)
#3 depends a great deal on just how smart the agent is. Given an agent of merely human capability, human extinction would be very improbable even with an agent that was given the explicit goal of destroying humans. Given an agent of somewhat greater intelligence, the risk would be there, but it’s not so clear what range of goals would be bad for humans (many goals could be accomplished through cooperation). For a vastly more intelligent agent, predicting behavior is naturally a bit more difficult, but cooperation with humans would not be as necessary for survival. So, that is why #2 becomes very important: an agent that is human-level when run on the computing power of a single machine (or small network) could be much more intelligent with access to even a small fraction of the world’s computing power.
#4 is a common presumption in singularity stories, because there has to be a first super-human AI at some point. However, the nature of software is such that once the fundamental innovation is made, creating and deploying many is easy. Furthermore, a human-like system may have a human-like training time (to become adult-level that is), in which case it may have many peers (which gets back to #1). In case #4 is *not* true, then condition #3 must be rewritten to “most such systems have goals which are bad for humans”.
It’s very difficult to give an actual probability estimate for this question because of the way “badly done AI” pushes around the probability. (By definition, there should be some negative consequences, or it wasn’t done badly enough…) However, I’ll naively multiply the factors I’ve given, with some very rough numbers:
= .1 * .1 * .9 * .5 * .1 = .00045
I described a fairly narrow scenario, so we might expect significant probability mass to come from other possibilities. However, I think it’s the most plausible. So, keeping in mind that it’s very rough, let’s say .001.
I note that this is significantly lower than estimates I’ve made before, despite trying harder at that time to refute the hypothesis.
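Demski's naive multiplication can be carried out directly; a minimal sketch (the condition labels are paraphrases of his #0-#4, not his wording):

```python
# Demski's rough factor estimates for conditions #0 through #4
factors = {
    "autonomous (#0)":              0.1,
    "not socialized (#1)":          0.1,
    "accesses spare compute (#2)":  0.9,
    "goal bad for humans (#3)":     0.5,
    "has no peers (#4)":            0.1,
}

p = 1.0
for value in factors.values():
    p *= value

# The exact product is 0.00045, which the answer above rounds to ".001"
# to account for probability mass from other scenarios.
print(f"{p:.5f}")
```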
Richard Carrier: Here the relative probability is much higher that human extinction will result from benevolent AI, i.e. eventually Homo sapiens will be self-evidently obsolete and we will voluntarily transition to Homo cyberneticus. In other words, we will extinguish the Homo sapiens species ourselves, voluntarily. If you asked for a 10%/50%/90% deadline for this I would say 2500/3000/4000.
However, perhaps you mean to ask regarding the extinction of all Homo, and their replacement with AI that did not originate as a human mind, i.e. the probability that some AI will kill us and just propagate itself.
The answer to that is dependent on what you mean by “badly done” AI: (a) AI that has more power than we think we gave it, causing us problems, or (b) AI that has so much more power than we think we gave it that it can prevent our taking its power away.
(a) is probably inevitable, or at any rate a high probability, and there will likely be deaths or other catastrophes, but like other tech failures (e.g. the Titanic, Three Mile Island, hijacking jumbo jets and using them as guided missiles) we will prevail, and very quickly from a historical perspective (e.g. there won’t be another 9/11 using airplanes as missiles; we only got jacked by that unforeseen failure once). We would do well to prevent as many problems as possible by being as smart as we can be about implementing AI, and not underestimating its ability to outsmart us, or to develop while we aren’t looking (e.g. Siri could go sentient on its own, if no one is managing it closely to ensure that doesn’t happen).
(b) is very improbable because AI function is too dependent on human cooperation (e.g. power grid; physical servers that can be axed or bombed; an internet that can be shut down manually) and any move by AI to supplant that requirement would be too obvious and thus too easily stopped. In short, AI is infrastructure dependent, but it takes too much time and effort to build an infrastructure, and even more an infrastructure that is invulnerable to demolition. By the time AI has an independent infrastructure (e.g. its own robot population worldwide, its own power supplies, manufacturing plants, etc.) Homo sapiens will probably already be transitioning to Homo cyberneticus and there will be no effective difference between us and AI.
However, given no deadline, it’s likely there will be scenarios like this: “god” AIs run sims in which digitized humans live, and any given god AI could decide to delete the sim and stop running it (and likewise all comparable AI shepherding scenarios). So then we’d be asking how likely it is that a god AI would ever do that, and more specifically, that all of them would (since there won’t be just one sim run by one AI, but many, so one going rogue would not mean the extinction of humanity).
So setting aside AI that merely kills some people, and only focusing on total extinction of Homo sapiens, we have:
P(voluntary human extinction by replacement | any AGI at all) = 90%+
P(involuntary human extinction without replacement | badly done AGI type (a)) = < 10^-20
[and that’s taking into account an infinite deadline, because the probability steeply declines with every year after first opportunity, e.g. AI that doesn’t do it the first chance it gets is rapidly less likely to as time goes on, so the total probability has a limit even at infinite time, and I would put that limit somewhere as here assigned.]
P(involuntary human extinction without replacement | badly done AGI type (b)) = .33 to .67
However, P(badly done AGI type (b)) = < 10^-20
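Carrier's last two estimates can be combined into a single unconditional figure; a quick sketch, where taking the midpoint of his ".33 to .67" range is my assumption, not his:

```python
# Carrier's stated numbers for the type (b) scenario
p_badly_done_b = 1e-20   # his P(badly done AGI type (b)) upper bound
p_ext_given_b = 0.5      # midpoint of his .33 to .67 range (my assumption)

# Unconditional probability of involuntary extinction via type (b)
p_ext_via_b = p_badly_done_b * p_ext_given_b
print(p_ext_via_b)
```

The product stays on the order of 10^-20, which is why his overall risk estimate is dominated by the voluntary-transition scenario rather than the runaway-AI one.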
Experts mostly agree that “bad AI” risk is low.
Once we build AI that is roughly as good as humans at science, mathematics, engineering and programming, how much more difficult will it be for humans and/or AIs to build an AI which is substantially better at those activities than humans?
Pei Wang: After that, AI can become more powerful (in hardware), more knowledgeable, and therefore more capable in problem solving, than human beings. However, there is no evidence to believe that it can be “substantially better” in the principles defining intelligence.
J. Storrs Hall: Difficult in what sense? Make 20,000 copies of your AI and organize them as Google or Apple. The difficulty is economic, not technical.
Paul Cohen: It isn’t hard to do better than humans. The earliest expert systems outperformed most humans. You can’t beat a machine at chess, etc. Google is developing cars that I think will probably drive better than humans. The Google search engine does what no human can.
Kevin Korb: It depends upon how AGI is achieved. If it’s through design breakthroughs in AI architecture, then the Singularity will follow. If it’s through mimicking nanorecordings, then no Singularity is implied and may not occur at all.
John Tromp: Not much. I would guess something on the order of a decade or two.
Michael G. Dyer: Machines and many specific algorithms are already substantially better at their tasks than humans.
(What human can compete with a relational database, or with a Bayesian reasoner, or with a scheduler, or with an intersection-search mechanism like WATSON?)
For dominance over humans, machines first have to acquire the ability to understand human language and to have thoughts in the way humans have thoughts. Even though the WATSON program is impressive, it does NOT know what a word actually means (in the sense of being able to answer the question: “How does the meaning of the word ‘walk’ differ from the meaning of the word ‘dance’, physically, emotionally, cognitively, socially?”).
It’s much easier to get computers to beat humans at technical tasks (such as sci, math, eng. prog.) but humans are vastly superior at understanding language, which makes humans the master of the planet. So the real question is: At what point will computers understand natural language as well as humans?
Peter Gacs: This is also hard to quantify, since in some areas machines will still be behind, while in others they will already be substantially better: in my opinion, this is already the case. If I still need to give a number, I say 30 years.
Eray Ozkural: I expect that by the time such a knowledgeable AI is developed, it will already be thinking and learning faster than an average human. Therefore, I think, simply by virtue of continuing miniaturization of computer architecture, or other technological developments that increase our computational resources (e.g., cheaper energy technologies such as fusion), a general-purpose AI could vastly transcend human-level intelligence.
William Uther: There is a whole field of ‘automatic programming’. The main difficulties in that field were in specifying what you wanted programmed. Once you’d done that the computers were quite effective at making it. (I’m not sure people have tried to make computers design complex algorithms and data structures yet.)
I think asking about ‘automated science’ is a much clearer question than asking about ‘human-level AGI’. At the moment there is already a huge amount of automation in science (from Peter Cheeseman’s early work with AutoClass to the biological ‘experiments on a chip’ that allow a large number of parallel tests to be run). What is happening is similar to automation in other areas – the simpler tasks (both intellectual and physical) are being automated away and the humans are working at higher levels of abstraction. There will always be *a* role for humans in scientific research (in much the same way that there is currently a role for program managers in current research – they decide at a high level what research should be done, after understanding as much of it as they choose).
Social skills require understanding humans. We have no abstract mathematical model of humans as yet to load into a machine, and so the only way you can learn to understand humans is by experimenting on them… er, I mean, interacting with them. 🙂 That takes time, and humans who are willing to interact with you.
Once you have the model, coming up with optimal plans for interacting with it, i.e. social skills, can happen offline. It is building the model of humans that is the bottleneck for an infinitely powerful machine.
I guess you could parallelise it by interacting with each human on the planet simultaneously. That would gather a large amount of data quite quickly, but be tricky to organise. And certain parts of learning about a system cannot be parallelised.
One possible outcome is that we find out that humans are close to optimal problem solvers given the resources they allocate to the problem. In which case, ‘massive superhuman cross-domain optimisation’ may simply not be possible.
Laurent Orseau: Wild guess: It will follow Moore’s law.
Richard Loosemore: Very little difficulty. I expect it to happen immediately after the first achievement, because at the very least we could simply increase the clock speed in relevant areas. It does depend exactly how you measure “better”, though.
Monica Anderson: What does “better” mean? If we believe, as many do, that Intelligence is for Prediction, and that the best measure of our intelligence is whether we can predict the future in complex domains, then we can interpret the given question as “when can an AI significantly outpredict a human in their mundane everyday environment”.
For any reasonable definition of “significant”, the answer is “never”. The world is too complex to be predictable. All intelligences are “best-effort” systems where we do as best we can and learn from our mistakes when we fail, for fail we must. Human intelligences have evolved to the level they have because it is a reasonable level for superior survival chances in the environments in which we’ve evolved. More processing power, faster machines, etc. do not necessarily translate into an improved ability to predict the environment, especially if we add AIs to this environment. A larger number of competent agents like AIs will make the domain even MORE complex, leading to LOWER predictability. For more about this, see https://hplusmagazine.com/2010/12/15/problem-solved-unfriendly-ai.
Improved ability to handle Models (creating a “super Mathematica”) is of limited utility for the purpose of making longer-term predictions. Chains of Reductionist Models attempting to predict the future tend to look like Rube Goldberg machines and are very likely to fail, and to fail spectacularly (which is what Brittleness is all about).
Computers will not get better at Reduction (the main skill required for Science, Mathematics, Engineering, and Programming) until they gather a lot of experience of the real world. For instance, a programming task is 1% about Understanding programming and 99% about Understanding the complex reality expressed in the spec of the program. This can only be improved by Understanding reality better, which is a slow process with the limitations described above. For an introduction to this topic, see my article “Reduction Considered Harmful” at https://hplusmagazine.com/2011/03/31/reduction-considered-harmful.
The “Problem with Reduction” is actually “The Frame Problem” as described by John McCarthy and Pat Hayes, viewed from a different angle. It is not a problem that AI research can continue to ignore, which is what we’ve done for decades. It will not go away. The only approach that works is to sidestep the issue of continuous Model update by not using Models. AIs must use nothing but Model Free Methods since these work without performing Reduction (to Models) and hence can be used to IMPLEMENT automatic Reduction.
Larry Wasserman: Not at all difficult. I think there will be a phase change. Once AI is as good as humans, it will quickly be better than humans.
Kristinn R. Thorisson: I expect AIs to outperform humans in virtually every way, except perhaps on those points where evolution has guaranteed humans the necessary stability to grow and prosper, i.e. along the social and ethical dimensions – because it is difficult to engineer such capabilities in a top-down manner, they spring more naturally from (natural) evolution, and may in fact be dependent on that.
What probability do you assign to the possibility of a human level AGI to self-modify its way up to massive superhuman intelligence within a matter of hours/days/< 5 years?
P(superhuman intelligence within hours | human-level AI running at human-level speed equipped with a 100 Gigabit Internet connection) = ?
P(superhuman intelligence within days | human-level AI running at human-level speed equipped with a 100 Gigabit Internet connection) = ?
P(superhuman intelligence within < 5 years | human-level AI running at human-level speed equipped with a 100 Gigabit Internet connection) = ?
Brandon Rohrer: < 1%
Tim Finin: 0.0001/0.0001/0.01
Pat Hayes: Again, zero. Self-modification in any useful sense has never been technically demonstrated. Machine learning is possible and indeed is a widely used technique (no longer only in AI), but a learning engine is the same thing after it has learnt something as it was before, just as biological learners are. When we learn, we get more informed, but not more intelligent: similarly with machines.
Nils Nilsson: I’ll assume that you mean sometime during this century, and that my “employment test” is the measure of superhuman intelligence.
<5 years: 90%
David Plaisted: This would require a lot in terms of robots being able to build hardware devices or modify their own hardware. I suppose they could also modify their software to do this, but right now it seems like a far out possibility.
Peter J. Bentley: It won’t happen. Has nothing to do with internet connections or speeds. The question is rather silly.
Hector Levesque: Good. Once an automated human-level intelligence is achieved, it ought to be able to learn what humans know more quickly.
William Uther: Again, your question is poorly specified. What do you mean by ‘human level AGI’? Trying to tease this apart, do you mean a robotic system that if trained up for 20 years like a human would end up as smart as a human 20-year-old? Are you referring to that system before the 20 years learning, or after?
In general, if the system has ‘human level’ AGI, then surely it will behave the same way as a human. In which case none of your scenarios are likely – I’ve had an internet connection for years and I’m not super-human yet.
Kevin Korb: If through nanorecording: approx 0%. Otherwise, the speed/acceleration at which AGIs improve themselves is hard to guess at.
John Tromp: I expect such modification will require plenty of real-life interaction.
- hours: 10^-9
- days: 10^-6
- <5 years : 10^-1
Michael G. Dyer: –
Peter Gacs: This question presupposes a particular sci-fi scenario that I do not believe in.
Eray Ozkural: In 5 years, without doing anything, it would already be faster than a human simply by running on a faster computer. If Moore’s law continues until then, it would be 20-30 times faster than a human. But if you mean by “vastly” a difference of a thousand times faster, I give it a probability of only 10%, because there might be other kinds of bottlenecks involved (mostly physical). There is also another problem with Solomonoff’s hypothesis, which Kurzweil generalized, that we are gladly omitting. An exponential increase in computational speed may only amount to a linear increase in intelligence. At the least, it corresponds only to a linear increase in the algorithmic complexity of solutions that can be found by any AGI, which is a well-known fact, and cannot be worked around by simple shortcuts. If solution complexity is the best measure of intelligence, then getting much more intelligent is not so easy (take this with a grain of salt, though, and please contrast it with the AIQ idea).
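Ozkural’s “20-30 times faster” figure is plain doubling arithmetic. A minimal sketch, assuming hardware performance doubles every 12-24 months (the doubling periods here are illustrative assumptions, not figures from the text):

```python
# Hardware speedup after `years` if performance doubles every
# `doubling_period` years (a Moore's-law-style assumption).
def speedup(years: float, doubling_period: float) -> float:
    return 2 ** (years / doubling_period)

for d in (1.0, 1.5, 2.0):
    print(f"doubling every {d} yr -> {speedup(5, d):.1f}x after 5 years")
```

A 12-month doubling period gives 32x after five years, 18 months gives about 10x, so the 20-30x estimate corresponds to a doubling time of roughly 12-14 months.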
Laurent Orseau: I think the answer to your question is similar to the answer to: Suppose we suddenly find a way to copy and emulate a whole human brain on a computer; how long would it take *us* to make it vastly better than it is right now? My guess is that we will make relatively slow progress. This progress can get faster with time, but I don’t expect any sudden explosion. Optimizing the software sounds like a very hard task, if that is even possible: if there were an easy way to modify the software, it is probable that natural selection would have found it by now. Optimizing the hardware should then follow Moore’s law, at least for some time. That said, the digital world might allow for some possibilities that might be more difficult in a real brain, like copy/paste or memory extension (although that one is debatable).
I don’t even know whether “vastly superhuman” capability is even possible. That sounds very nice (in the best scenario) but is a bit dubious. Either Moore’s law will go on forever, or it will stop at some point. How much faster than a human can a computer compute, taking thermodynamics into account?
So, before it really becomes much more intelligent/powerful than humans, it should take some time.
But we may need to get prepared for otherwise, just in case.
Richard Loosemore: Depending on the circumstances (which means, this will not be possible if the AI is built using dumb techniques) the answer is: near certainty.
Monica Anderson: 0.00% . Reasoning is useless without Understanding because if you don’t Understand (the problem domain), then you have nothing to reason about. Symbols in logic have to be anchored in general Understanding of the problem domain we’re trying to reason about.
Leo Pape: I don’t know what “massive superhuman intelligence” is, what it is for, and if it existed how to measure it.
Donald Loveland: I am not sure of the question, or maybe only that I do not understand an answer. Let me comment anyway. I have always felt it likely that the first superhuman intelligence would be a simulation of the human mind; e.g., by advanced neural-net-like structures. I have never thought seriously about learning time, but I guess the first success would be after some years of processing. I am not sure of what you mean by “massive”. Such a mind as above coupled to good retrieval algorithms with extensive databases such as those being developed now could appear to have massive superhuman intelligence.
John E. Laird: 0% – There is no reason to believe that an AGI could do this. First, why would an AGI be able to learn faster than humans? It takes lots of experience (which takes lots of real time) to learn about the world (that is why humans take >12 years to get to something intelligent). Just mining existing databases, etc. isn’t going to get you there – you need to interact with the world. Just getting lots of computers and lots of data doesn’t mean a system can get to superhuman intelligence. Also, using lots of distributed processing effectively (which I assume is the scenario you are thinking about) is problematic. Computation requires locality – to make an intelligent decision, you need to bring data together in one place. You can have some aspects of intelligence distributed, but to be strategic, you need locality.
Michael Littman: epsilon (essentially zero). I’m not sure exactly what constitutes intelligence, but I don’t think it’s something that can be turbocharged by introspection, even superhuman introspection. It involves experimenting with the world and seeing what works and what doesn’t. The world, as they say, is its best model. Anything short of the real world is an approximation that is excellent for proposing possible solutions but not sufficient to evaluate them.
Shane Legg: “human level” is a rather vague term. No doubt a machine will be super human at some things, and sub human at others. What kinds of things it’s good at makes a big difference.
In any case, I suspect that once we have a human level AGI, it’s more likely that it will be the team of humans who understand how it works that will scale it up to something significantly super human, rather than the machine itself. Then the machine would be likely to self improve.
How fast would that then proceed? Could be very fast, could be impossible — there could be non-linear complexity constraints, meaning that even theoretically optimal algorithms experience strongly diminishing intelligence returns for additional compute power. We just don’t know.
Jürgen Schmidhuber: High for the next few decades, mostly because some of our own work seems to be almost there:
- Gödel machine: http://www.idsia.ch/~juergen/goedelmachine.html
- Universal AI: http://www.idsia.ch/~juergen/unilearn.html
- Creative machines that create and solve their own problems [4,5] to improve their knowledge about how the world works: http://www.idsia.ch/~juergen/creativity.html
Stan Franklin: Essentially zero in such a time frame. A lengthy developmental period would be required. You might want to investigate the work of the IEEE Technical Committee on Autonomous Mental Development.
Abram Demski: Very near zero, very near zero, and very near zero. My feeling is that intelligence is a combination of processing power and knowledge. In this case, knowledge will keep pouring in, but processing power will become a limiting factor. Self-modification does not help this. So, such a system might become superhuman within 5 years, but not massively.
If the system does copy itself or otherwise gain more processing power, then I assign much higher probability; 1% within hours, 5% within days, 90% within 5 years.
Note that there is a very important ambiguity in the term “human-level”, though. It could mean child-level or adult-level. (I.e., a human-level system may take 20 years to train up to the adult level.) The above assumes you mean “adult level”. If not, add 20 years.
Richard Carrier: Depends on when it starts. For example, if we started a human-level AGI tomorrow, its ability to revise itself would be hugely limited by our slow and expensive infrastructure (e.g. manufacturing the new circuits, building the mainframe extensions, supplying them with power, debugging the system). In that context, “hours” and “days” have P –> 0, but 5 years has P = 33%+ if someone is funding the project, and likewise 10 years has P = 67%+; and 25 years, P = 90%+. However, suppose human-level AGI is first realized in fifty years, when all these things can be done in a single room with relatively inexpensive automation and the power demands of any new system are not greater than are normally supplied to that room. Then P(days) = 90%+. And with massively more advanced tech, say such as we might have in 2500, then P(hours) = 90%+.
Perhaps you are confusing intelligence with knowledge. An Internet connection can make no difference to the former (since an AGI will have no more control over the internet than human operators do). It can only expand a mind’s knowledge. As to how quickly, it will depend more on the rate of processed seconds in the AGI itself, i.e. if it can simulate human thought only at the same pace as non-AI, then it will not be able to learn any faster than a regular person, no matter what kind of internet connection it has. But if the AGI can process ten seconds of time in one second of non-AI time, then it can learn ten times as fast, up to the limit of data access (and that is where internet connection speed will matter). That is a calculation I can’t do. A computer science expert would have to be consulted to calculate reasonable estimates of what connection speed would be needed to learn at ten times normal human pace, assuming the learner can learn that fast (which a ten-to-one time processor could); likewise a hundred times, etc.

And all that would tell you is how quickly that mind can learn. But learning in and of itself doesn’t make you smarter. That would require software or circuit redesign, which would require testing and debugging. Otherwise, once you had all relevant knowledge available to any human software/circuit design team, you would simply be no smarter than them, and further learning would not help you (thus humans already have that knowledge level: that’s why we work in teams to begin with), so AI is not likely to much exceed us in that ability. The only edge it can exploit is the speed of a serial design thought process, but even that runs up against the time and resource expense of testing and debugging anything it designed, and that is where physical infrastructure slows the rate of development, and massive continuing human funding is needed. Hence my probabilities above.
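Carrier’s connection-speed question has a surprisingly small answer for text, at least. A back-of-envelope sketch, where every input number is an illustrative assumption (not a figure from his answer):

```python
# Back-of-envelope: bandwidth needed by an AGI consuming text k times
# faster than a human reader. All constants are illustrative assumptions.
WORDS_PER_MIN_HUMAN = 250   # assumed human reading rate
BYTES_PER_WORD = 6          # ~5 characters plus a space

def required_bandwidth_bps(speed_factor: float) -> float:
    """Bits per second of raw text a k-times-faster reader could consume."""
    words_per_sec = WORDS_PER_MIN_HUMAN / 60 * speed_factor
    return words_per_sec * BYTES_PER_WORD * 8

for k in (1, 10, 100):
    print(f"{k:>3}x human pace -> {required_bandwidth_bps(k):,.0f} bps")
```

Under these assumptions, even a hundredfold reading speed needs only tens of kilobits per second of text, which supports Carrier’s point that connection speed is not the real bottleneck; redesign, testing, and debugging are.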
Experts’ Estimates of the Probability of Self-Improvement to Above-Human Level Vary Widely [Hours/Days/5 Years]
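The spread behind that graphic can be read directly off the numeric answers above. A minimal tabulation, limited to respondents who gave explicit numbers (the rest answered qualitatively):

```python
# Stated [hours, days, <5 years] probabilities, as given in the answers above.
# None marks an estimate the respondent did not give numerically.
estimates = {
    "Pat Hayes":    (0.0, 0.0, 0.0),
    "Tim Finin":    (1e-4, 1e-4, 1e-2),
    "John Tromp":   (1e-9, 1e-6, 1e-1),
    "Nils Nilsson": (None, None, 0.90),
}

for name, probs in estimates.items():
    row = " / ".join("-" if p is None else f"{p:g}" for p in probs)
    print(f"{name:<14} {row}")
```

Even among numeric answers the five-year estimates span nine orders of magnitude, from zero to 90%, which is the diversity the caption refers to.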
Is it important to figure out how to make AI provably friendly to us and our values (non-dangerous), before attempting to solve artificial general intelligence?
Explanatory remark: How much money is currently required to mitigate possible risks from AI (to be instrumental in maximizing your personal long-term goals, e.g. surviving this century) — less / no more / a little more / much more / vastly more?
Brandon Rohrer: No more.
Tim Finin: No.
Pat Hayes: No. There is no reason to suppose that any manufactured system will have any emotional stance towards us of any kind, friendly or unfriendly. In fact, even if the idea of “human-level” made sense, we could have a more-than-human-level super-intelligent machine, and still have it bear no emotional stance towards other entities whatsoever. Nor need it have any lust for power or political ambitions, unless we set out to construct such a thing (which AFAIK, nobody is doing). Think of an unworldly boffin who just wants to be left alone to think, and does not care a whit for changing the world for better or for worse, and has no intentions or desires, but simply answers questions that are put to it and thinks about things that it is asked to think about. It has no ambition and in any case no means to achieve any far-reaching changes even if it “wanted” to do so. It seems to me that this is what a super-intelligent question-answering system would be like. I see no inherent, even slight, danger arising from the presence of such a device.
Nils Nilsson: Work on this problem should be ongoing, I think, with the work on AGI. We should start, now, with “little more,” and gradually expand through the “much” and “vastly” as we get closer to AGI.
David Plaisted: Yes, some kind of ethical system should be built into robots, but then one has to understand their functioning well enough to be sure that they would not get around it somehow.
Peter J. Bentley: Humans are the ultimate in killers. We have taken over the planet like a plague and wiped out a large number of existing species. “Intelligent” computers would be very very stupid if they tried to get in our way. If they have any intelligence at all, they will be very friendly. We are the dangerous ones, not them.
Hector Levesque: It’s always important to watch for risks with any technology. AI technology is no different.
Pei Wang: I think the idea “to make superhuman AI provably friendly” is similar to the idea “to make an airplane provably safe” or “to make a baby provably ethical” — though the motivation is respectable, the goal cannot be accurately defined, and the approximate definitions cannot be reached.
What if the Wright brothers had been asked “to figure out how to make an airplane provably safe before attempting to build it”, or all parents were asked “to figure out how to make children provably ethical before attempting to have them”?
Since an AI system is adaptive (according to my opinion, as well as many others’), its behaviors won’t be fully determined by its initial state or design (nature), but strongly influenced by its experience (nurture). You cannot make a friendly AI (whatever it means), but have to educate an AI to become friendly. Even in that case, it cannot be “provably friendly” — only mathematical conclusions can be proved, and empirical predictions are always fallible.
J. Storrs Hall: This is approximately like saying we need to require a proof, based on someone’s DNA sequence, that they can never commit a sin, and that we must not allow any babies to be born until they can offer such a proof.
Paul Cohen: Same answer as above. Today we can build ultra specialist assistants (and so maintain control and make the ethical decisions ourselves) and we can’t go further until we solve the problems of general intelligence — vision, language understanding, reading, reasoning…
William Uther: I think this is a worthwhile goal for a small number of researchers to think about, but I don’t think we need many. I think we are far enough away from ‘super-intelligences’ that it isn’t urgent. In particular, I don’t think that having ‘machines smarter than humans’ is some sort of magical tipping point. AI is HARD. Having machines that are smarter than humans means they’ll make progress faster than humans would. It doesn’t mean they’ll make progress massively faster than humans would in the short term.
I also think there are ethical issues worth considering before we have AGI. See http://m.theatlantic.com/technology/print/2011/12/drone-ethics-briefing-what-a-leading-robot-expert-told-the-cia/250060/
Note that none of those ethical issues assume some sort of super-intelligence. In the same way that ethics in humans doesn’t assume super-intelligence.
Kevin Korb: It is the key issue in the ethics of AI. Without a good case to make, the research may need to cease. To be sure, one aspect of a good case may well be that unethical projects are underway and likely to succeed. Per my answers above, I do not currently believe anything of the kind. No project is near to success.
John Tromp: Its importance grows with the extent to which we allow computers control over critical industrial/medical/economic processes, infrastructure, etc. As long as their role is limited to assisting humans in control, there appears to be little risk.
Michael G. Dyer: A robot that does not self-replicate is probably not very dangerous (leaving out robots for warfare).
A robot that wants to make multiple copies of itself would be dangerous (because it could undergo a rapid form of Lamarckian evolution). There are two types of replication: factory replication and factory division via the creation of a new factory. In social insects this is the difference between the queen laying new eggs and a hive splitting up to go build new hive structures at a new site.
Assuming that humans remain in control of the energy and resources to a robot-producing factory, then factory replication could be shut down. Robots smart enough to go build a new factory and maintain control over the needed resources would pose the more serious problem. As robots are designed (and design themselves) to follow their own goals (for their own self-survival, especially in outer space) then those goals will come into conflict with those of humans. Asimov’s laws are too weak to protect humans and as robots design new versions of themselves then they will eliminate those laws anyway.
Monica Anderson: Not very important. Radical self-modification cannot be undertaken by anyone (including AIs) without Understanding of what would make a better Understander. While it is possible that an AI could be helpful in this research, I believe the advances in this area would be small, slow to arrive, and easy to control, hitting various brick walls of radically diminishing returns that easily outweigh advances of all kinds, including Moore’s Law.
We already use computers to design faster, better, logically larger and physically smaller computers. This has nothing to do with AI, since the improvements come from Understanding about the problem domain – computer design – that is performed by humans. Greater capability in a computer translates to very small advances in Reductive capability. Yes, Understanding machines may be able to eventually Understand Understanding to the point of creating a better Understander. This is a long way off; Understanding Understanding is uncommon even among humans. But even then, the unpredictability of our Mundane reality is what limits the advantage any intelligent agent might have.
Peter Gacs: This is an impossible task. “AI” is not a separate development that can be regulated the way that governments regulate research on infectious bacteria to make sure they do not escape the laboratory. Day by day, we are yielding decision power to smart machines, since we draw — sometimes competitive — advantage from this. Emphasizing that the process is very gradual, I still constructed a parable that illustrates the process via a quick and catastrophic denouement.
Thinking it out almost forty years ago, I assumed that the nuclear superpowers, the Soviet Union and the USA, would live on until the age of very smart machines. So, one day, for whatever reason, World War 3 breaks out between these superpowers. Both governments consult their advanced computer systems on how to proceed, and both sides get analogous answers. The Soviet computer says: the first bomb must be dropped on the Kremlin; in the US, the advice is to drop the first bomb on the Pentagon. The Americans still retain enough common sense to ignore the advice; but the Soviets are more disciplined, and obey their machine. After the war plays out, the Soviet side wins, since the computer advice was correct on both sides. (And from then on, machines rule…)
Eray Ozkural: A sense of benevolence or universal ethics/morality would only be required if the said AI were also an intelligent agent that had to interact socially with humans. There is no reason for a general-purpose AI to be an intelligent agent, which is an abstraction of an animal, commonly known as an “animat” since the early cyberneticists. Instead, the God-level intelligence could be an ordinary computer that solves scientific problems on demand. There is no reason for it to control robotic hardware or act on its own, or act like a human or an animal. It could be a general-purpose expert system of some sort, just another computer program, but one that is extremely useful. Ray Solomonoff wrote this about human-like behavior in his paper presented at the 2006 Dartmouth Artificial Intelligence conference (50th anniversary) titled “Machine Learning – Past and Future”, which you can download from his website:
“To start, I’d like to define the scope of my interest in A.I. I am not particularly interested in simulating human behavior. I am interested in creating a machine that can work very difficult problems much better and/or faster than humans can – and this machine should be embodied in a technology to which Moore’s Law applies. I would like it to give a better understanding of the relation of quantum mechanics to general relativity. I would like it to discover cures for cancer and AIDS. I would like it to find some very good high temperature superconductors. I would not be disappointed if it were unable to pass itself off as a rock star.”
That is, if you constrain the subject to a non-autonomous, scientific AI, I don’t think you’ll have to deal with human concepts like “friendly” at all. Without even mentioning how difficult it might be to teach any common sense term to an AI. For that, you would presumably need to imitate the way humans act and experience.
However, to solve the problems in science and engineering that you mention, a robotic body, or a fully autonomous, intelligent agent, is not needed at all. Therefore, I think it is not very important to work on friendliness for that purpose. Also, one person’s friend is another’s enemy. Do we really want to introduce more chaos to our society?
Leo Pape: Proof is for mathematics, not for actual machines. Even for the simplest machines we have nowadays, we cannot prove any aspect of their operation. If this were possible, airplane travel would be a lot safer.
Donald Loveland: It is important to try. I do not think it can be done. I feel that humans are safe from AI takeover for this century. Maybe not from other calamities, however.
Laurent Orseau: It is quite dubious that “provably friendly” is something that is possible.
A provably friendly AI is a dead AI, just like a provably friendly human is a dead human, at least because of how humans would use/teach it, and there are bad guys who would love to use such a nice tool.
The safest “AI” system that I can think of is a Q/A system that is *not* allowed to ask questions (i.e. to do actions). But then it cannot learn autonomously and may not get as smart as we’d like, at least in reasonable time; I think it would be quite similar to a TSP solver: its “intelligence” would be tightly linked to its CPU speed.
“Provably epsilon-friendly” (with epsilon << 1 probability that it might not be always friendly) is probably a more adequate notion, but I’m still unsure this is possible to get either, though maybe under some constraints we might get something.
That said, I think this problem is quite important, as there is still a non-negligible possibility that an AGI gets much more *power* (no need for vastly more intelligence) than humanity, even without being more intelligent. An AGI could travel at the speed of information transfer (so, light speed) and is virtually immortal by restoring from backups and creating copies of itself. It could send emails on behalf of anyone, and could crack high security sites with as much social engineering as we do. As it would be very hard to put in jail or to annihilate, it would feel quite safe (for its own life) to do whatever it takes to achieve its goals.
Regarding power and morality (i.e. what are good goals), here is a question: Suppose you are going for a long walk in the woods in a sparsely populated area, on your own. In the middle of the walk, some big guy pops out of nowhere and comes to talk to you. He is ugly, dirty, smells horribly bad, and never stops talking. He gets really annoying, poking you and saying nasty things, and it’s getting worse and worse. You really can’t stand it anymore. You run, you go back and forth, you shout at him, you vainly try to reason with him, but you can’t get rid of him. He just follows you everywhere. You don’t really want to start a fight, as he looks much stronger than you are. Alas, it will take you some 5 more hours to get back to your car, and nobody else is in the woods. But in your pocket you have an incredible device: a small box with a single button that can make anything you wish simply disappear instantly. No blood, no pain, no scream, no trace, no witness, no legal problem, 100% certified. At one instant the guy would be here, the next instant he would not, having simply vanished. As simple as that. You don’t know what happens to the disappeared person. Maybe he dies, maybe he gets teleported somewhere, or reincarnated, or whatever. You know that nobody knows this guy, so nobody can miss him or even look for him. You try to explain to him what this box is, you threaten to press the button, but he does not care. And he’s getting so, so annoying that you can’t refrain from screaming. Then you stare at the button… Will you press it?
My guess is that most people would like to say no, because culture and law say it’s bad, but the truth may be that most of them would be highly tempted when actually facing such a situation. But if they had a gun or a saber instead of a button, the answer would probably be a much firmer no (note that a weapon injury in the woods is much like a death sentence). The definition of morality might depend on the power you have.
But, hopefully, we will be sufficiently smart to put a number of safety measures in place and perform a lot of testing under stressful conditions before launching it into the wild.
Richard Loosemore: Absolutely essential. Having said that, the task of making it “provably” friendly is not as difficult as portrayed by organizations (SIAI, FHI) that have a monomaniacal dedication to AI techniques that make it impossible. So in other words: essential, but not a difficult task at all.
John E. Laird: I don’t think so. It would be impossible to prove something like that for a system that is sufficiently complex.
Kristinn R. Thorisson: Not important at all. However, researching the risks associated with *human misuse* of such technology should be on the drawing board of governments everywhere in the next 10 years, ideally.
Larry Wasserman: Not at all important. I see this as inevitable; just the next step in evolution.
Michael Littman: No, I don’t think it’s possible. I mean, seriously, humans aren’t even provably friendly to us and we have thousands of years of practice negotiating with them.
Shane Legg: I think we have a bit of a chicken and egg issue here. At the moment we don’t agree on what intelligence is or how to measure it, and we certainly don’t agree on how a human level AI is going to work. So, how do we make something safe when we don’t properly understand what that something is or how it will work? Some theoretical issues can be usefully considered and addressed. But without a concrete and grounded understanding of AGI, I think that an abstract analysis of the issues is going to be very shaky.
Jürgen Schmidhuber: From a paper of mine:
All attempts at making sure there will be only provably friendly AIs seem doomed. Once somebody posts the recipe for practically feasible self-improving Goedel machines or AIs in form of code into which one can plug arbitrary utility functions, many users will equip such AIs with many different goals, often at least partially conflicting with those of humans. The laws of physics and the availability of physical resources will eventually determine which utility functions will help their AIs more than others to multiply and become dominant in competition with AIs driven by different utility functions. Which values are “good”? The survivors will define this in hindsight, since only survivors promote their values.
Stan Franklin: Proofs occur only in mathematics. Concern about the “friendliness” of AGI agents, or the lack thereof, has been present since the very inception of AGI. The 2006 workshop <http://www.agiri.org/forum/index.php?act=ST&f=21&t=23>, perhaps the first organized event devoted to AGI, included a panel session entitled How do we more greatly ensure responsible AGI? Video available at <http://video.google.com/videoplay?docid=5060147993569028388> (There’s also a video of my keynote address.) I suspect we’re not close enough to achieving AGI to be overly concerned yet. But that doesn’t mean we shouldn’t think about it. The day may well come.
Abram Demski: “Provably non-dangerous” may not be the best way of thinking about the problem. Overall, the goal is to reduce risk. Proof may not be possible or may not be the most effective route.
So: is it important to solve the problem of safety before trying to solve the problem of intelligence?
I don’t think this is possible. Designs for safe systems have to be designs for systems, so they must be informed by solutions to the intelligence problem.
It would also be undesirable to stall progress while considering the consequences. Serious risks are associated with many areas of research, but it typically seems better to mitigate those risks while moving forward rather than beforehand.
That said, it seems like a good idea to put some thought into safety & friendliness while we are solving the general intelligence problem.
Richard Carrier: Yes. At the very least it is important to take the risks very seriously and incorporate them as a concern within every project flow. I believe there should always be someone expert in the matter assigned to any AGI design team, who monitors everything being done, assesses its risks, and ensures safeguards are in place before implementation at each step. It already concerns me that this might not be a component of the management of Siri, even though Siri achieving AGI is a low probability (but not vanishingly low; I’d say it could be as high as 1% in 10 years, unless Siri’s processing space is being deliberately limited so it cannot achieve a certain level of complexity, or its cognitive abilities are otherwise being actively limited).
Not very much is required. A single expert monitoring Siri who has real power to implement safeguards would be sufficient, so with salary, benefits, and collateral overhead, that’s no more than $250,000/year, for a company that has billions in liquid capital. (Because safeguards are not expensive: capping Siri’s processing space costs nothing in practical terms; likewise writing her software to limit what she can actually do no matter how sentient she became. Imagine an army of human hackers hacked Siri at the source and could run Siri through a million direct terminals: what could they do? Answering that question will evoke obvious safeguards to put on Siri’s physical access and software; the most obvious is making it impossible for Siri to rewrite her own core software.)
But what actually is being spent I don’t know. I suspect “a little more” needs to be spent than is, only because I get the impression AI developers aren’t taking this seriously, and yet the cost of monitoring is not that high.
And yet you may notice all this is separate from the question of making AGI “provably friendly” which is what you asked about (and even that is not the same as “provably safe” since friendly AGI poses risks as well, as the Singularity Institute has been pointing out).
This is because all we need do now is limit AGI’s power at its nascence. Then we can explore how to make AGI friendly, and then provably friendly, and then provably safe. In fact I expect AGI will even help us with that. Once AGI exists, the need to invest heavily in making it safe will be universally obvious. Whereas before AGI exists there is little we can do to ascertain how to make it safe, since we don’t have a working model to test. Think of trying to make a ship safe, without ever getting to build and test any vessel, nor having knowledge of any other vessels, and without knowing anything about the laws of buoyancy. There wouldn’t be a lot you could do.
Nevertheless it would be worth some investment to explore how much we can now know, particularly as it can be cross-purposed with understanding human moral decision making better, and thus need not be sold as “just AI morality” research. How much more should we spend on this now? Much more than we are. But only because I see that money benefiting us directly, in understanding how to make ordinary people better, and detect bad people, and so on, which is of great value wholly apart from its application to AGI. Having it double as research on how to design moral thought processes unrestrained by human brain structure would then benefit any future AGI development.
What probability do you assign to the possibility of human extinction as a result of AI capable of self-modification (that is not provably non-dangerous, if that is even possible)? P(human extinction by AI | AI capable of self-modification and not provably non-dangerous is created)
Pei Wang: I don’t think it makes much sense to talk about “probability” here, except to drop all of its mathematical meaning.
Which discovery is “provably non-dangerous”? Physics, chemistry, and biology are all responsible for known ways to human extinction. Should we pause all these explorations until they are “provably safe”? How about the use of fire? Would the human species do better without using this “provably dangerous” technique?
AI systems, like all major scientific and technical results, can lead to human extinction, but that is not a reason to stop or pause this research. Otherwise we could not do anything, since every non-trivial action has unanticipated consequences. Though it is important to be aware of the potential danger of AI, we probably have no real alternative but to take up this opportunity and challenge, making our best decisions according to their predicted consequences.
J. Storrs Hall: This is unlikely but not inconceivable. If it happens, however, it will be because the AI was part of a doomsday device probably built by some military for “mutual assured destruction”, and some other military tried to call their bluff. The best defense against this is for the rest of the world to be as smart as possible as fast as possible.
To sum up, AIs can and should be vetted with standard and well-understood quality assurance and testing techniques, but defining “friendliness to the human race”, much less proving it, is a pipe dream.
Paul Cohen: From where I sit today, near zero. Besides, the danger is likely to be mostly on the human side: irrespective of what machines can or cannot do, we will continue to be lazy, self-righteous, jingoistic squanderers of our tiny little planet. It seems to me much more likely that we will destroy our universities and research base and devote ourselves to wars over what little remains of our water and land. If the current anti-intellectual rhetoric continues, if we continue to reject science for ignorance and God, then we will first destroy the research base that can produce intelligent machines and then destroy the planet. So I wouldn’t worry too much about Dr. Evil and her Annihilating AI. We have more pressing matters to worry about.
Kevin Korb: This question is poorly phrased.
You should ask relative to a time frame. After all, the probability of human extinction sometime or other is 1. [Note by XiXiDu: I added “within 100 years” to the question after I received his answers.]
“Provably” is also problematic. Outside of mathematics, little is provable.
My generic answer is that we have every prospect of building an AI that behaves reasonably vis-a-vis humans, should we be able to build one at all. We should, of course, take up those prospects and make sure we do a good job rather than a bad one.
John Tromp: The ability of humans to speed up their own extinction will, I expect, not be matched any time soon by machines; again, not in my lifetime.
Michael G. Dyer: Loss of human dominance is a foregone conclusion (100% for loss of dominance). Every alien civilization (including our own) that survives its own annihilation (via nuclear, molecular, and nano technologies) will at some point figure out how to produce synthetic forms of its own intelligence. These synthetic beings are necessary for space travel (because there is most likely no warp drive possible, and even planets in the Goldilocks zone will harbor unpleasant viral and cellular agents). Biological alien creatures will be too adapted to their own planets.
As to extinction, we will only not go extinct if our robot masters decide to keep some of us around. If they decide to populate new planets with human life then they could make the journey and humans would thrive (but only because the synthetic agents wanted this).
If a flying saucer ever lands, the chances are 99.99% that what steps out will be a synthetic intelligent entity. It’s just too hard for biological entities (adapted to their planet) to make the long voyages required.
Peter Gacs: I give it a probability near 1%. Humans may become irrelevant in the sense of losing their role of being at the forefront of the progress of “self-knowledge of the universe” (whatever this means). But irrelevance will also mean that it will not be important to eradicate them completely. On the other hand, there are just too many, too diverse imaginable scenarios for their coexistence with machines that are smarter than they are, so I don’t dare to predict any details. Of course, species do die out daily even without our intent to extinguish them, but I assume that at least some humans would find ways to survive for some more centuries to come.
Eray Ozkural: Assuming that we are talking about intelligent agents, which are strictly unnecessary for working on the scientific problems that are your main concern, I think first that it is not possible to build something that is provably non-dangerous, unless you can encode a rule of non-interference into its behavior. Otherwise, an interfering AI can basically do anything, and since it is much smarter than us, it can create actual problems that we had no way of anticipating or solving. I have thought at length about this question, and considered some possible AI objectives in a blog essay.
I think that it does depend on the objectives. In particular, selfish/expansionist AI objectives are very dangerous; they would almost certainly result in interference with our vital resources. I cannot give a probability, because it is a can of worms, but let me try to summarize. Consider, for instance, the objective of maximizing the agent’s knowledge about the world, a version of which was considered by Laurent Orseau in a reinforcement learning setting, and previously by a student of Solomonoff. It’s an intuitive idea: a scientist tries to learn as much as possible about the world. What if we built an intelligent agent that did that? If it were successful, it would have to increase its computational and physical capacity to such an extent that it might expand rapidly, first assimilating the solar system and then expanding into our galactic neighborhood, to pursue its unsatisfiable urge to learn. Similar scenarios might arise with any kind of intelligent agent with selfish objectives (i.e., ones that optimize some aspect of the agent itself). These might be recognized as Omohundro drives, but the objectives themselves are the main problem.
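The knowledge-maximizing objective described here can be made concrete in a toy setting. The sketch below is this editor’s illustration, not Orseau’s actual formalism (which is defined for general reinforcement-learning agents): an agent facing two coins of unknown bias always probes the coin whose outcome it is currently most uncertain about. It acts purely to gain information, and it never stops probing.

```python
import math
import random

# Toy "knowledge-seeking" agent. Assumption: two biased coins stand in
# for the world; the hidden biases below are arbitrary illustration values.
random.seed(0)
TRUE_BIAS = {"a": 0.9, "b": 0.5}             # hidden from the agent
counts = {arm: [1, 1] for arm in TRUE_BIAS}  # Beta(1,1) pseudo-counts

def entropy(p):
    # Shannon entropy (bits) of a Bernoulli outcome with probability p.
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def predictive(arm):
    # Posterior-predictive probability of heads for this coin.
    heads, tails = counts[arm]
    return heads / (heads + tails)

for step in range(200):
    # Pick the coin whose outcome the agent is most uncertain about:
    # observing it yields the most information about the world.
    arm = max(counts, key=lambda a: entropy(predictive(a)))
    outcome = random.random() < TRUE_BIAS[arm]
    counts[arm][0 if outcome else 1] += 1

print({a: round(predictive(a), 2) for a in counts})
```

Even in this trivial world the agent’s behavior is dictated entirely by its epistemic state; scaled up to an agent that can acquire resources, the same drive implies acquiring ever more capacity to run experiments, which is the expansion scenario described above.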
This is a problem when you are stuck in the reinforcement learning mentality, thinking in terms of rewards and punishments. The utility function that you define will tend to be centered on the AI itself rather than on humanity, and things have a good chance of going very wrong. This is mostly regardless of what kind of selfishness is pursued, be it knowledge, intelligence, power, control, satisfaction of pseudo-pleasure, etc. In the end, the problem is the relentless pursuit of a singular, general objective that seeks to benefit only the self. And this cannot be mitigated by any number of obstruction rules (like Robot Laws or any other kind of laws). The motivation is what matters, and even when you are not pursuing silly motivations like stamp collection, there is a lot of danger involved, due not to our neglect of human values, which are mostly irrelevant at the level at which such an intelligent agent would operate, but to our design of its motivations.
However, even if benevolent-looking objectives were adopted, it is not altogether clear what sorts of crazy schemes an AI would come up with. In fact, we could not predict the plans of an intelligent agent smarter than the entire humanity. Therefore, it’s a gamble at best, and even if we made a life-loving, information-loving, selfless, autonomous AI as I suggested, it might still do a lot of things that many people would disagree with. And although such an AI might not extinguish our species, it might decide, for instance, that it would be best to scan and archive our species for later use. That is, there is no reason to expect that an intelligent agent that is superior to us in every respect should abide by our will.
One might try to imagine many solutions to make such intelligent agents “fool-proof” and “fail-safe”, but I suspect that, for the first, human foolishness has unbounded inventiveness, and, for the second, no amount of security methods that we design would make a mind that is smarter than all of humanity “safe”, as we have no way of anticipating every situation that would be created by its massive intelligence, and the amount of chaotic change that it would bring. It would simply go out of control, and we would be at the mercy of its evolved personality. I say personality on purpose, because personality seems to be a result of initial motivations, a priori knowledge, and life experience. Since its life experience and intelligence will overshadow any initial programming, we cannot really foresee its future personality. All in all, I think it is great to think about, but it does not look like a practical engineering solution. That’s why I simply advise against building fully autonomous intelligent agents. I sometimes say: play God, and you will fail. I tend to think there is a Frankenstein Complex; it is as if there is an incredible urge in many people to create an independent artificial person.
On the other hand, I can imagine how I could build semi-autonomous agents that might be useful for many special tasks, avoiding interference with humans as much as possible, with practical ways to test for their compliance with law and customs. However, personally speaking, I cannot imagine a single reason why I would want to create an artificial person that is superior to me in every respect. Unless of course, I have elected to bow down to a superior species.
Laurent Orseau: It depends if we consider that we will simply leave safety issues aside before creating an AGI, thinking that all will go well, or if we take into account that we will actually do some research on that.
If a human-level AGI were built today, we probably wouldn’t be ready, and the risks due to the excitement of getting something out of it might be high (“hey look, it can drive the tank, how cool is that?!”).
But if we build one and can show the world a simple proof of concept that we do have a (sub-human-level) AGI that will grow to human level, and most researchers acknowledge it, I presume we will start to think hard about the consequences.
Then everything depends on how unfriendly it is.
Humanity is intelligent enough to care for its own life, and try to avoid high risks (most of the time), unless there is some really huge benefit (like supremacy).
Also, if an AGI wants to kill all humans, humanity would not just wait for it, doing nothing.
This might be dangerous for the AI itself too (with EMPs, for example). An AGI also wants to avoid high risks unless there is a huge benefit. If some compromise is possible, that would be better for both.
If we can build an AGI that is quite friendly (i.e. has “good” goals and wants to cooperate with humans without pressing them too much, or at least has no incentive to kill humans) but may become nasty only if its life is at stake, then I don’t think we need to worry *too* much: just be friendly with it as you would be with an ally, and its safety will be paired with your own safety.
So I think the risks of human extinction will be pretty low, as long as we take them into account seriously.
Richard Loosemore: The question is loaded, and I reject the premises. It assumes that someone can build an AI that is both generally intelligent (enough to be able to improve itself) whilst also having a design whose motivation is impossible to prove. That is a false assumption. People who try to build AI systems with the kind of design whose motivation is unstable will actually not succeed in building anything that has enough general intelligence to become a danger.
Monica Anderson: 0.00%. All intelligences must be fallible in order to deal with a complex and illogical world (with only incomplete information available) on a best-effort basis. And if an AI is fallible, then we can unplug it… sooner or later, even if it is “designed to be unstoppable”. Ten people armed with pitchforks, and also with ten copies of last year’s best AI, can always unplug the latest model AI.
Do possible risks from AI outweigh other possible existential risks, e.g. risks associated with the possibility of advanced nanotechnology?
Explanatory remark: What existential risk (human extinction type event) is currently most likely to have the greatest negative impact on your personal long-term goals, under the condition that nothing is done to mitigate the risk?
Brandon Rohrer: Evolved variants of currently existing biological viruses and bacteria.
Tim Finin: No.
Pat Hayes: No. Nanotechnology has the potential to make far-reaching changes to the actual physical environment. AI poses no such threat. Indeed, I do not see that AI itself (that is, actual AI work being done, rather than the somewhat uninformed fantasies that some authors, such as Ray Kurzweil, have invented) poses any serious threat to anyone.
I would say that any human-extinction type event is likely to make a serious dent in my personal goals. (But of course I am being sarcastic, as the question as posed seems to me to be ridiculous.)
When I think of the next century, say, the risk I am most concerned about is global warming and the resulting disruption to the biosphere and human society. I do not think that humans will become extinct, but I think that our current global civilization might not survive.
Nils Nilsson: I think the risk of terrorists getting nuclear weapons is a greater risk than AI will be during this century. They would certainly use them if they had them — they would be doing the work of Allah in destroying the Great Satan. Other than that, I think global warming and other environmental problems will have a greater negative impact than AI will have during this century. I believe technology can save us from the risks associated with new viruses. Bill Joy worries about nano-dust, but I don’t know enough about that field to assess its possible negative impacts. Then, of course, there’s the odd meteor. Probably technology will save us from that.
David Plaisted: There are risks with any technology, even computers as we have them now. It depends on the form of government and the nature of those in power whether technology is used for good or evil more than on the nature of the technology itself. Even military technology can be used for repression and persecution. Look at some countries today that use technology to keep their people in subjection.
Peter J. Bentley: No.
Hector Levesque: I think AI risks are smaller than others.
William Uther: I have a few worries. From the top of my head:
- i) Global warming. While not as urgent or sexy as AI-run-amok, I think it a far more important issue for humankind.
- ii) Biological warfare / terrorism / insanity. See the article I linked to above: http://www.nytimes.com/2012/01/08/opinion/sunday/an-engineered-doomsday.html
Leo Pape: No idea how to compare these risks.
Donald Loveland: Ouch. Now you want me to amplify my casual remark above. I guess that I can only say that I hope we are lucky enough for the human race to survive long enough to evolve into, or be taken over by, another type of intelligence.
John E. Laird: No. There are many more things to lose sleep over than AGI. Worry about bio-engineering – genetically altered avian flu virus – to me, that is much more likely to kill us off than AGI. Nanotechnology also has its scary side.
Shane Legg: It’s my number 1 risk for this century, with an engineered biological pathogen coming a close second (though I know little about the latter).
Richard Carrier: All existential risks are of such vastly low probability that it would be beyond human comprehension to rank them, and utterly pointless to do so anyway. And even if I were to rank them, extinction by comet, asteroid, or cosmological gamma-ray burst vastly outranks any manmade cause. Even extinction by supervolcano vastly outranks any manmade cause. So I don’t concern myself with this (except to call for more investment in earth-impactor detection and the monitoring of supervolcano risks).
We should be concerned not with existential risks, but ordinary risks, e.g. small scale nuclear or biological terrorism, which won’t kill the human race, and might not even take civilization into the Dark Ages, but can cause thousands or millions to die and have other bad repercussions. Because ordinary risks are billions upon billions of times more likely than extinction events, and as it happens, mitigating ordinary risks entails mitigating existential risks anyway (e.g. limiting the ability to go nuclear prevents small scale nuclear attacks just as well as nuclear annihilation events, in fact it makes the latter billions of times less likely than it already is).
Thus when it comes to AI, as an existential risk it just isn’t one (P → 0), but as a panoply of ordinary risks, it is (P → 1). And it doesn’t matter how it ranks; it should get full attention anyway, like all definite risks do. It thus doesn’t need to be ranked against other risks, as if terrorism being a great risk meant we should invest nothing in earthquake safety, or vice versa.
What is the current level of awareness of possible risks from AI, relative to the ideal level?
Brandon Rohrer: High.
Tim Finin: About right.
Pat Hayes: The actual risks are negligible: the perceived risks (thanks to the popularization of such nonsensical ideas as the “singularity”) are much greater.
Nils Nilsson: Not as high as it should be. Some, like Steven Omohundro, Wendell Wallach and Colin Allen (“Moral Machines: Teaching Robots Right from Wrong”), Patrick Lin (“Robot Ethics”), and Ronald Arkin (“Governing Lethal Behavior: Embedding Ethics in a Hybrid Deliberative/Reactive Robot Architecture”) are among those thinking and writing about these problems. You probably know of several others.
David Plaisted: Probably not as high as it should be.
Peter J. Bentley: The whole idea is blown up out of all proportion. There is no real risk and will not be for a very long time. We are also well aware of the potential risks.
Hector Levesque: Low. Technology in the area is well behind what was predicted in the past, and so concern for risks is correspondingly low.
William Uther: I think most people aren’t worried about AI risks. I don’t think they should be. I don’t see a problem here.
Leo Pape: People think that current AI is much more capable than it is in reality, and therefore they often overestimate the risks. This is partly due to the movies and due to scientists overselling their work in scientific papers and in the media. So I think the risk is highly overestimated.
Donald Loveland: The current level of awareness of AI risks is low. The risk that I most focus on now is the economic repercussions of advancing AI. Together with outsourcing, the advancing automation of the workplace, now dominated by AI advances, is leading to increasing unemployment. This progression will not be monotonic, but each recession will result in more permanently unemployed workers and weaker recoveries. At some point our economic philosophy could change radically in the U.S., an event very similar to the Great Depression. We may not recover, in the sense of returning to the same economic structure. I think (hope) that democracy will survive.
John E. Laird: I think some awareness is important. Possibly a bit more than now, but this is not a pressing issue for human existence. As I stated earlier, we have much more to worry about how humans will use intelligent systems than how the intelligent systems will evolve on their own.
Stan Franklin: I’m not sure about the ideal level. Most AI researchers and practitioners seem to devote little or no thought at all to AGI. Though quite healthy and growing, the AGI movement is still marginal within the AI community. AGI has been supported by AAAI, the central organization of the AI community, and continues to receive such support.
Abram Demski: The general population seems to be highly aware of the risks of AI, with very little awareness of the benefits.
Within the research community, the situation was completely opposite until recently. I would say present awareness levels in the research community are roughly optimal…
Richard Carrier: Very low. Even among AI developers it seems.
Can you think of any milestone such that if it were ever reached you would expect human level machine intelligence to be developed within five years thereafter?
Brandon Rohrer: No, but the demonstrated ability of a robot to learn from its experience in a complex and unstructured environment is likely to be a milestone on that path, perhaps signalling HLI is 20 years away.
Tim Finin: Passing a well-constructed, open-ended Turing test.
Pat Hayes: No. There are no ‘milestones’ in AI. Progress is slow but steady, and there are no magic bullets.
Nils Nilsson: Because human intelligence involves so many different abilities, I think AGI will require many different technologies with many different milestones. I don’t think there is a single one. I do think, though, that the work that Andrew Ng, Geoff Hinton, and (more popularly) Jeff Hawkins and colleagues are doing on modeling learning in the neo-cortex using deep Bayes networks is on the right track.
Thanks for giving me the opportunity to think about your questions, and I hope to stay in touch with your work!
David Plaisted: I think it depends on the interaction of many different capabilities.
Peter J. Bentley: Too many advances are needed to describe here…
Hector Levesque: Reading comprehension at the level of a 10-year old.
William Uther: I still don’t know what you mean by ‘human level intelligence’. I expect artificial intelligence to be quite different to human intelligence. AI is already common in many businesses – if you have a bank loan then the decision about whether to lend to you was probably taken by a machine learning system.
Leo Pape: I would be impressed if a team of soccer playing robots could win a match against professional human players. Of course, the real challenge is finding human players that are willing to play against machines (imagine being tackled by a metal robot).
Donald Loveland: A “pure” learning program that won at Jeopardy ???
John E. Laird: No – I can’t come up with such a milestone.
Michael Littman: Slightly subhuman intelligence? What we think of as human intelligence is layer upon layer of interacting subsystems. Most of these subsystems are complex and hard to get right. If we get them right, they will show very little improvement in the overall system, but will take us a step closer. The last 5 years before human intelligence is demonstrated by a machine will be pretty boring, akin to the 5 years between the ages of 12 and 17 in a human’s development. Yes, there are milestones, but they will seem minor compared to the first few years of rapid improvement.
Shane Legg: That’s a difficult question! When a machine can learn to play a really wide range of games from perceptual stream input and output, and transfer understanding across games, I think we’ll be getting close.
Richard Carrier: There will not be “a” milestone like that, unless it is something wholly unexpected (like a massive breakthrough in circuit design that allows virtually infinite processing power on a desktop, a development that would make P(AGI within five years) > 33%). But wholly unexpected discoveries have a very low probability. Sticking only with what we already expect to occur, the five-year milestone for AGI will be AHI, artificial higher intelligence, e.g. a robot cat that behaves exactly like a real cat, or a Watson that can actively learn on its own without being programmed with data (but still can only answer questions, and not plan or reason out problems). The CALO project is likely to develop an increasingly sophisticated Siri-like AI that won’t be AGI but will gradually become more and more like AGI, so there won’t be any point where someone can say “it will achieve AGI within 5 years.” Rather, it will achieve AGI gradually and unexpectedly, and people will even debate when, or whether, it did.
Basically, I’d say once we have “well-trained dog” level AI, the probability of human-level AI becomes:
P(< 5 years) = 10%
P(< 10 years) = 25%
P(< 20 years) = 50%
P(< 40 years) = 90%
How much have you read about the formal concepts of optimal AI design which relate to searches over complete spaces of computable hypotheses or computational strategies, such as Solomonoff induction, Levin search, Hutter’s algorithm M, AIXI, or Gödel machines?
Jürgen Schmidhuber: Recursive Self-Improvement: The provably optimal way of doing this was published in 2003. From a recent survey paper:
The fully self-referential Goedel machine [1,2] already is a universal AI that is at least theoretically optimal in a certain sense. It may interact with some initially unknown, partially observable environment to maximize future expected utility or reward by solving arbitrary user-defined computational tasks. Its initial algorithm is not hardwired; it can completely rewrite itself without essential limits apart from the limits of computability, provided a proof searcher embedded within the initial algorithm can first prove that the rewrite is useful, according to the formalized utility function taking into account the limited computational resources. Self-rewrites may modify / improve the proof searcher itself, and can be shown to be globally optimal, relative to Goedel’s well-known fundamental restrictions of provability. To make sure the Goedel machine is at least asymptotically optimal even before the first self-rewrite, we may initialize it by Hutter’s non-self-referential but asymptotically fastest algorithm for all well-defined problems, HSEARCH, which uses a hardwired brute force proof searcher and (justifiably) ignores the costs of proof search. Assuming discrete input/output domains X/Y, a formal problem specification f : X -> Y (say, a functional description of how integers are decomposed into their prime factors), and a particular x in X (say, an integer to be factorized), HSEARCH orders all proofs of an appropriate axiomatic system by size to find programs q that for all z in X provably compute f(z) within time bound t_q(z). Simultaneously it spends most of its time on executing the q with the best currently proven time bound t_q(x). Remarkably, HSEARCH is as fast as the fastest algorithm that provably computes f(z) for all z in X, save for a constant factor smaller than 1 + epsilon (arbitrary real-valued epsilon > 0) and an f-specific but x-independent additive constant.
Given some problem, the Goedel machine may decide to replace its HSEARCH initialization by a faster method suffering less from large constant overhead, but even if it doesn’t, its performance won’t be less than asymptotically optimal.
All of this implies that there already exists the blueprint of a Universal AI which will solve almost all problems almost as quickly as if it already knew the best (unknown) algorithm for solving them, because almost all imaginable problems are big enough to make the additive constant negligible. The only motivation for not quitting computer science research right now is that many real-world problems are so small and simple that the ominous constant slowdown (potentially relevant at least before the first Goedel machine self-rewrite) is not negligible. Nevertheless, the ongoing efforts at scaling universal AIs down to such small problems are very much informed by the new millennium’s theoretical insights mentioned above, and may soon yield practically feasible yet still general problem solvers for physical systems with highly restricted computational power, say, a few trillion instructions per second, roughly comparable to the raw power of a human brain.
[1] J. Schmidhuber. Goedel machines: Fully Self-Referential Optimal Universal Self-Improvers. In B. Goertzel and C. Pennachin, eds.: Artificial General Intelligence, p. 119-226, 2006.
[2] J. Schmidhuber. Ultimate cognition à la Goedel. Cognitive Computation, 1(2):177-193, 2009.
[3] M. Hutter. The fastest and shortest algorithm for all well-defined problems. International Journal of Foundations of Computer Science, 13(3):431-443, 2002. (On J. Schmidhuber’s SNF grant 20-61847.)
[4] J. Schmidhuber. Developmental robotics, optimal artificial curiosity, creativity, music, and the fine arts. Connection Science, 18(2):173-187, 2006.
[5] J. Schmidhuber. Formal theory of creativity, fun, and intrinsic motivation (1990-2010). IEEE Transactions on Autonomous Mental Development, 2(3):230-247, 2010.
A dozen earlier papers on (not yet theoretically optimal) recursive self-improvement since 1987 are here: http://www.idsia.ch/~juergen/metalearner.html
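The universal search that underlies HSEARCH and the Gödel machine has a simple core idea: enumerate all programs, giving each a share of runtime that shrinks exponentially with its length, so the shortest fast solution is found with only a constant-factor slowdown. As an illustration only — the toy instruction set and the `levin_search` function below are invented for this sketch and do not come from any of the papers cited above — here is a Levin-style phased search in Python:

```python
from itertools import product

# Toy straight-line instruction set; a "program" is a sequence of op names.
OPS = {"inc": lambda x: x + 1, "dbl": lambda x: x * 2, "sqr": lambda x: x * x}

def run(prog, x):
    """Execute a straight-line program on input x (one step per op)."""
    for op in prog:
        x = OPS[op](x)
    return x

def levin_search(pairs, max_phase=12):
    """Levin-style universal search over the toy language.

    In phase k, every program of length L <= k gets a step budget of
    2**(k - L), so shorter programs receive exponentially more time.
    Since runtime here equals length, a program fits its budget
    exactly when L <= 2**(k - L).
    """
    for phase in range(1, max_phase + 1):
        for length in range(1, phase + 1):
            if length > 2 ** (phase - length):  # over budget this phase
                continue
            for prog in product(OPS, repeat=length):
                if all(run(prog, x) == y for x, y in pairs):
                    return prog
    return None  # no program found within max_phase
```

For example, `levin_search([(2, 9), (3, 16)])` returns the two-op program `("inc", "sqr")`. Real Levin search must interleave the execution of all (possibly non-halting) programs; the phase/budget schedule above is what keeps the toy version finite, and it preserves the key property that total work per phase is bounded while short programs are tried first.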
David Plaisted: My research does not specifically relate to those kinds of questions.
I’d like to make some other points as well:
- When trying to define ‘Human level intelligence’ it is often useful to consider how many humans meet your standard. If the answer is ‘not many’ then you don’t have a very good measure of human level intelligence. Does Michael Jordan (the basketball player) have human level intelligence? Does Stephen Hawking? Does George Bush?
- People who are worried about the singularity often have two classes of concerns. First there is the class of people who worry about robots taking over and just leaving humans behind. I think that is highly unlikely. I think it much more likely that humans and machines will interact and progress together. Once I have my brain plugged in to an advanced computer there will be no AI that can out-think me. Computers already allow us to ‘think’ in ways that we couldn’t have dreamt of 50 years ago.
This brings up the second class of issues that people have. Once we are connected to machines, will we still be truly human? I have no idea what people who worry about this mean by ‘truly human’. Is a human with a prosthetic limb truly human? How about a human driving a car? Is a human who wears glasses or a hearing aid truly human? If these prosthetics make you non-human, then we’re already past the point where they should be concerned – and they’re not. If these prosthetics leave you human, then why would a piece of glass that allows me to see clearly be ok, and a computer that allows me to think clearly not be ok? Asimov investigated ideas similar to this, but from a slightly different point of view, in his story ‘The Bicentennial Man’.
The real questions are ones of ethics. As people become more powerful, what are the ethical ways of using that power? I have no great wisdom to share there, unfortunately.
Some more thoughts…
Does a research lab (with, say, 50 researchers) have “above human level intelligence”? If not, then it isn’t clear to me that AI will ever have significantly “above human level intelligence” (and see below for why AI is still worthwhile). If so, then why haven’t we had a ‘research lab singularity’ yet? Surely research labs are smarter than humans, and so they can work on making still smarter research labs, until a magical point is passed and research labs have runaway intelligence. (That’s a Socratic question designed to get you to think about possible answers yourself. Maybe we are in the middle of a research lab singularity.)
As for why study of AI might still be useful even if we never get above human level intelligence: there is the same Dirty, Dull, Dangerous argument that has been used many times. To that I’d add a point I made in a previous email: intelligence is different to motivation. If you get yourself another human you get both – they’re intelligent, but they also have their own goals and you have to spend time convincing them to work towards your goals. If you get an AI, then even if it isn’t more intelligent than a human at least all that intelligence is working towards your goals without argument. It’s similar to the ‘Dull’ justification, but with a slightly different spin.
Donald Loveland: I have some familiarity with Solomonoff inductive inference but not Hutter’s algorithm. I have been retired for 10 years, so I didn’t know of Hutter until this email. It looks like something interesting to pursue.
Abram Demski: Yes.
Advisors’ Curricula Vitae
Dr. Brandon Rohrer
Sandia National Laboratories
Cited by 536
PhD, Mechanical Engineering, Massachusetts Institute of Technology, 2002.
Neville Hogan, Advisor and Thesis Committee Chair.
MS, Mechanical Engineering, Massachusetts Institute of Technology, 1999.
National Science Foundation Fellowship
BS cum laude, Mechanical Engineering, Brigham Young University, 1997.
Ezra Taft Benson (BYU’s Presidential) Scholarship
National Merit Scholarship
Sandia National Laboratories, Albuquerque, NM.
Principal Member of the Technical Staff, 2006 – present
Senior Member of the Technical Staff, 2002 – 2006
University of New Mexico, Albuquerque, NM.
Adjunct Assistant Professor,
Department of Electrical and Computer Engineering, 2007 – present
Google Scholar: scholar.google.com/scholar?q=Brandon+Rohrer
Tim Finin
Professor of Computer Science and Electrical Engineering, University of Maryland
Cited by 20832
Tim Finin is a Professor of Computer Science and Electrical Engineering at the University of Maryland, Baltimore County (UMBC). He has over 30 years of experience in applications of Artificial Intelligence to problems in information systems and language understanding. His current research is focused on the Semantic Web, mobile computing, analyzing and extracting information from text and online social media, and on enhancing security and privacy in information systems.
Finin received an S.B. degree in Electrical Engineering from MIT and a Ph.D. degree in Computer Science from the University of Illinois at Urbana-Champaign. He has held full-time positions at UMBC, Unisys, the University of Pennsylvania, and the MIT AI Laboratory. He is the author of over 300 refereed publications and has received research grants and contracts from a variety of sources. He participated in the DARPA/NSF Knowledge Sharing Effort and helped lead the development of the KQML agent communication language and was a member of the W3C Web Ontology Working Group that standardized the OWL Semantic Web language.
Finin has chaired the UMBC Computer Science Department, served on the board of directors of the Computing Research Association, been an AAAI councilor, and chaired several major research conferences. He is currently an editor-in-chief of the Elsevier Journal of Web Semantics.
Google Scholar: scholar.google.com/scholar?q=Tim+Finin
Pat Hayes
Pat Hayes has a BA in mathematics from Cambridge University and a PhD in Artificial Intelligence from Edinburgh. He has been a professor of computer science at the University of Essex and philosophy at the University of Illinois, and the Luce Professor of cognitive science at the University of Rochester. He has been a visiting scholar at Universite de Geneve and the Center for Advanced Study in the Behavioral Sciences at Stanford, and has directed applied AI research at Xerox-PARC, SRI and Schlumberger, Inc. At various times, Pat has been secretary of AISB, chairman and trustee of IJCAI, associate editor of Artificial Intelligence, a governor of the Cognitive Science Society and president of AAAI.
Pat’s research interests include knowledge representation and automatic reasoning, especially the representation of space and time; the semantic web; ontology design; image description and the philosophical foundations of AI and computer science. During the past decade Pat has been active in the Semantic Web initiative, largely as an invited member of the W3C Working Groups responsible for the RDF, OWL and SPARQL standards. Pat is a member of the Web Science Trust and of OASIS, where he works on the development of ontology standards.
In his spare time, Pat restores antique mechanical clocks and remodels old houses. He is also a practicing artist, with works exhibited in local competitions and international collections. Pat is a charter Fellow of AAAI and of the Cognitive Science Society, and has professional competence in domestic plumbing, carpentry and electrical work.
Selected research: ihmc.us/groups/phayes/wiki/a3817/Pat_Hayes_Selected_Research.html
Nils John Nilsson
Nils J. Nilsson is one of the founding researchers in the discipline of Artificial intelligence. He is the Kumagai Professor of Engineering, Emeritus in Computer Science at Stanford University. He is particularly famous for his contributions to search, planning, knowledge representation, and robotics. [Wikipedia] [Homepage] [Google Scholar]
David Alan Plaisted
Dr. Hector Levesque
Hector Levesque is a Canadian academic and researcher in artificial intelligence. He does research in the area of knowledge representation and reasoning in artificial intelligence. [Wikipedia] [Homepage]
Dr. Pei Wang
Dr. Pei Wang is trying to build general-purpose AI systems, compare them with human intelligence, analyze their theoretical assumptions, and evaluate their potential and limitation. [Curriculum Vitae] [Pei Wang on the Path to Artificial General Intelligence]
Dr. J. Storrs Hall
Dr. J. Storrs Hall is an independent scientist and author. His most recent book is Beyond AI: Creating the Conscience of the Machine, published by Prometheus Books. It is about the (possibly) imminent development of strong AI, and the desirability, if and when that happens, that such AIs be equipped with a moral sense and conscience. This is an outgrowth of his essay Ethics for Machines. [Homepage]
Professor Paul Cohen
Professor Paul Cohen is the director of the School of Information: Science, Technology, and Arts at the University of Arizona. His research is in artificial intelligence. He wants to model human cognitive development in silico, with robots or softbots in game environments as the “babies” they’re trying to raise up. He is particularly interested in the sensorimotor foundations of human language. Several of his projects in the last decade have developed algorithms for sensor-to-symbol kinds of processing in service of learning the meanings of words, most recently, verbs. He also works in what they call Education Informatics, which includes intelligent tutoring systems, data mining and statistical modeling of students’ mastery and engagement, assessment technologies, ontologies for representing student data and standards for content, architectures for content delivery, and so on. [Homepage]
Dr. William Uther
Michael G. Dyer
Professor Michael G. Dyer is an author of over 100 publications, including In-Depth Understanding, MIT Press, 1983. He serves on the editorial board of the journals: Applied Intelligence, Connection Science, Knowledge-Based Systems, International Journal of Expert Systems, and Cognitive Systems Research. His research interests are centered around semantic processing of natural language, through symbolic, connectionist, and evolutionary techniques. [Homepage]
Dr. John Tromp
Dr. John Tromp is interested in Board Games and Artificial Intelligence, Algorithms, Complexity, Algorithmic Information Theory, Distributed Computing, Computational biology. His recent research has focused on the Combinatorics of Go, specifically counting the number of legal positions. [Homepage]
Dr. Kevin Korb
Dr. Kevin Korb both developed and taught the following subjects at Monash University: Machine Learning, Bayesian Reasoning, Causal Reasoning, The Computer Industry: historical, social and professional issues, Research Methods, Bayesian Models, Causal Discovery, Epistemology of Computer Simulation, The Art of Causal. [Curriculum vitae] [Bayesian Artificial Intelligence]
Dr. Leo Pape
Dr. Leo Pape is a postdoc in Jürgen Schmidhuber‘s group at IDSIA (Dalle Molle Institute for Artificial Intelligence). He is interested in artificial curiosity, chaos, metalearning, music, nonlinearity, order, philosophy of science, predictability, recurrent neural networks, reinforcement learning, robotics, science of metaphysics, sequence learning, transcendental idealism, unifying principles. [Homepage] [Publications]
Professor Peter Gacs
Professor Peter Gacs is interested in Fault-tolerant cellular automata, algorithmic information theory, computational complexity theory, quantum information theory. [Homepage]
Professor Donald Loveland
Professor Donald Loveland focuses his research on automated theorem proving, logic programming, knowledge evaluation, expert systems, and the test-and-treatment problem. [Curriculum vitae]
Eray Ozkural is a computer scientist whose research interests are mainly in parallel computing, data mining, artificial intelligence, information theory, and computer architecture. He has an MSc and is trying to complete a long overdue PhD in his field. He also has a keen interest in the philosophical foundations of artificial intelligence. With regards to AI, his current goal is to complete an AI system based on the Alpha architecture of Solomonoff. His most recent work (http://arxiv.org/abs/1107.2788) discusses axiomatization of AI.
Dr. Laurent Orseau
Dr. Laurent Orseau is mainly interested in Artificial General Intelligence, whose overall goal is the grand goal of AI: building an intelligent, autonomous machine. [Homepage] [Publications] [Self-Modification and Mortality in Artificial Agents]
Richard Loosemore is currently a lecturer in the Department of Mathematical and Physical Sciences at Wells College, Aurora NY, USA. Loosemore’s principal expertise is in the field known as Artificial General Intelligence, which seeks a return to the original roots of AI (the construction of complete, human-level thinking systems). Unlike many AGI researchers, his approach is as much about psychology as traditional AI, because he believes that the complex-system nature of thinking systems makes it almost impossible to build a safe and functioning AGI unless its design is as close as possible to the design of the human cognitive system. [Homepage]
Monica Anderson has been interested in the quest for computer based cognition since college, and ever since then has sought out positions with startup companies that have used cutting-edge technologies that have been labeled as “AI”. However, those that worked well, such as expert systems, have clearly been of the “Weak AI” variety. In 2001 she moved from using AI techniques as a programmer to trying to advance the field of “Strong AI” as a researcher. She is the founder of Syntience Inc., which was established to manage funding for her exploration of this field. She has a Master’s degree in Computer Science from Linköping University in Sweden. She created three expert systems for Cisco Systems for product configuration verification; she has co-designed systems to automatically classify documents by content; she has (co-)designed and/or (co-)written LISP interpreters, debuggers, chat systems, OCR output parsers, visualization tools, operating system kernels, MIDI control real-time systems for music, virtual worlds, and peer-to-peer distributed database systems. She was Manager of Systems Support for Schlumberger Palo Alto Research. She has worked with robotics, industrial control, marine, and other kinds of embedded systems. She has worked on improving the quality of web searches for Google. She wrote a Genetic Algorithm which successfully generated solutions for the Set Coverage Problem (which has been shown to be NP-hard) around 1994. She has used more than a dozen programming languages professionally and designed or co-designed at least four programming languages, large or small. English is her third human language out of four or five. [More]
Professor John E. Laird
Professor John E. Laird is the founder of Soar Technology, an Ann Arbor company specializing in creating autonomous AI entities. His major research interest is in creating human-level artificial intelligent entities, with an emphasis on the underlying cognitive architecture. [Homepage]
Dr. Kristinn R. Thorisson
Dr. Kristinn R. Thorisson has been developing A.I. systems and technologies for over two decades. He is the Coordinator / Principal Investigator of the HUMANOBS FP7 project and co-author of the AERA architecture, with Eric Nivel, which targets artificial general intelligence. A key driving force behind the project is Thorisson’s new Constructivist Methodology which lays out principles for why and how AI architectures must be given introspective and self-programming capabilities. [Homepage]
Larry A. Wasserman
Larry A. Wasserman is a statistician and a professor in the Department of Statistics and the Machine Learning Department at Carnegie Mellon University. He received the COPSS Presidents’ Award in 1999 and the CRM–SSC Prize in 2002.
Michael L. Littman
Michael L. Littman is a computer scientist. He works mainly in reinforcement learning, but has done work in machine learning, game theory, computer networking, Partially observable Markov decision process solving, computer solving of analogy problems and other areas. He is currently a professor of computer science and department chair at Rutgers University.
Google Scholar: scholar.google.com/scholar?q=Michael+Littman
Shane Legg is a computer scientist and AI researcher who has been working on theoretical models of super intelligent machines (AIXI) with Prof. Marcus Hutter. His PhD thesis, Machine Super Intelligence, was completed in 2008. He was awarded the $10,000 Canadian Singularity Institute for Artificial Intelligence Prize.
Publications by Shane Legg:
- Solomonoff Induction thesis
- Universal Intelligence: A Definition of Machine Intelligence paper
- Algorithmic Probability Theory article
- Tests of Machine Intelligence paper
- A Formal Measure of Machine Intelligence paper talk slides
- A Collection of Definitions of Intelligence paper
- A Formal Definition of Intelligence for Artificial Systems abstract poster
- Is there an Elegant Universal Theory of Prediction? paper slides
The full list of publications by Shane Legg can be found here.
Stan Franklin, Professor, Computer Science
W. Harry Feinstone Interdisciplinary Research Professor
Institute for Intelligent Systems
FedEx Institute of Technology
The University of Memphis
Abram Demski is a computer science Ph.D. student at the University of Southern California who previously studied cognitive science at Central Michigan University. He is an artificial intelligence enthusiast looking for the logic of thought. He is interested in AGI in general and universal theories of intelligence in particular, but also probabilistic reasoning, logic, and the combination of the two (“relational methods”). Also, utility-theoretic reasoning.
Richard Carrier is a world-renowned author and speaker. As a professional historian, published philosopher, and prominent defender of the American freethought movement, Dr. Carrier has appeared across the country and on national television defending sound historical methods and the ethical worldview of secular naturalism. His books and articles have also received international attention. He holds a Ph.D. from Columbia University in ancient history, specializing in the intellectual history of Greece and Rome, particularly ancient philosophy, religion, and science, with emphasis on the origins of Christianity and the use and progress of science under the Roman empire. He is best known as the author of Sense and Goodness without God, Not the Impossible Faith, and Why I Am Not a Christian, and a major contributor to The Empty Tomb, The Christian Delusion, The End of Christianity, and Sources of the Jesus Tradition, as well as writer and editor-in-chief (now emeritus) for the Secular Web, and for his copious work in history and philosophy online and in print. He is currently working on his next books, Proving History: Bayes’s Theorem and the Quest for the Historical Jesus, On the Historicity of Jesus Christ, The Scientist in the Early Roman Empire, and Science Education in the Early Roman Empire. To learn more about Dr. Carrier and his work follow the links below.
A list of all original interviews:
- Dr. Brandon Rohrer, Professor Tim Finin and Dr. Pat Hayes
- Professor Nils John Nilsson, Professor Peter J. Bentley, Professor David Alan Plaisted and Dr. Hector Levesque
- Professor Paul Cohen, Professor Alan Bundy, Dr. Pei Wang, Dr. J. Storrs Hall and Dr. William Uther
- Professor Michael G. Dyer, Dr. John Tromp, Dr. Kevin Korb, Dr. Leo Pape, Professor Peter Gacs, Professor Donald Loveland, Eray Ozkural, Dr. Laurent Orseau, Richard Loosemore and Monica Anderson
- Professor John E. Laird and Dr. Kristinn R. Thorisson
- Professor Larry Wasserman
- Professor Michael Littman
- Dr. Shane Legg
- Professor Jürgen Schmidhuber
- Professor Stan Franklin
- Abram Demski
- Dr. Richard Carrier