Problem Solved: Unfriendly AI

This is the first in a series of articles by Monica Anderson.

Imagine a stage, perhaps a few years from now, in a large ballroom in a hotel in Bangkok. The reigning world chess champion is defending his title against Deep Six, the latest generation of chess-playing computers. It looks bleak for the human: the computer has established a position that, it has predicted, will lead to checkmate in eleven moves. With its cold blue lights calmly blinking, it is in complete control of the game, steering it toward an inevitable, logical conclusion like a train on a track.

Meanwhile, in an adjacent dining room, a waiter spills some Grand Marnier while preparing Crêpes Suzette, and the tablecloth catches fire. Flame detectors trigger a segment of the fire sprinkler system. The dining room is soaked, but so is the ballroom where the chess championship is being held. Water rains down on Deep Six, shorting it out. Deep Six releases its magic smoke and all its lights go out.

Nobody predicted that.

The purpose of intelligence is prediction. Humans have used their minds to predict the behavior of sabertooth tigers, the right day in spring to plant the crops, and the behavior of opponents in games like chess and tennis. Predictive capabilities actually predate chess by hundreds of millions of years. An animal running on a rocky beach needs to predict where its legs will land and when to start activating the leg muscles to handle each impact. Here, the prediction reaches only milliseconds into the future. Often this is enough. The ability to better predict what an opponent will do in a fight to the death conveys the ability to parry a strike in time rather than milliseconds too late. Better still, the first rule of street fighting is “Be Somewhere Else”: predicting that a fight is imminent is more useful than predicting the first blow. Predation requires intelligence, but superior intelligences develop safer techniques like scavenging, ambushing, pack hunting, weapons, traps, and Trojan Horses.

An evolutionary arms race has ratcheted up intelligence levels ever since the first brain-like nerve clusters appeared. Longer-term predictions, predictions that are more often correct, and predictions that are more specific about the outcome all provide evolutionary advantages for superior predictors. As our ancestors started living in larger tribes, the ability to predict the behaviors of potential mates and rivals made a big difference in who got to breed. Intelligence-based mate selection was one key factor in the explosive increase of intelligence levels in our species.

The brain has to process the input from the senses.  This process takes some time.  Benjamin Libet showed that this processing takes about half a second — the so-called Libet Delay: we become conscious of what we see approximately half a second after it happens.  The brain censors this discrepancy from our consciousness so that we can pretend we are experiencing everything as it happens, in order to preserve our sanity.

Given the Libet Delay, we could not play tennis if we could not predict where the ball would be half a second later. But that’s not all. The speed of the signal in the nerves leading from the brain to the muscles controlling our fingers is roughly the speed of sound, which is slow enough to matter in this context. If we are throwing a rock at a running rabbit we have to predict not only where the rabbit will be and the speed and trajectory of the rock; we also need to account for the delay in the nerves that tell our hand when to release the rock. Doing this right means we get to dine on rabbit. Again, high-precision prediction provides an evolutionary advantage. This is not restricted to higher life forms: predictive capability selects the winners among frogs darting their tongues at flies and among flies avoiding tongues.
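The arithmetic of this lead-time compensation is simple enough to sketch. The numbers below (rabbit and rock speeds, a 50 ms nerve delay) are illustrative guesses of mine, and the motion is assumed to be straight-line:

```python
# Toy sketch of delay-compensated aiming: to hit a moving target, the brain
# must issue the release command early enough to cover both the rock's
# flight time and the nerve-conduction delay to the hand.
# All numbers are illustrative, not physiological measurements.

def release_lead_time(target_speed, rock_speed, distance, nerve_delay):
    """How far in advance to commit, and how far ahead of the
    target's current position to aim."""
    flight_time = distance / rock_speed          # rock's travel time
    total_delay = flight_time + nerve_delay      # command-to-impact delay
    lead_distance = target_speed * total_delay   # where the target will be
    return total_delay, lead_distance

# A rabbit at 5 m/s, a rock thrown at 20 m/s over 10 m,
# with a 50 ms nerve delay from brain to hand:
delay, lead = release_lead_time(5.0, 20.0, 10.0, 0.050)
print(f"aim {lead:.2f} m ahead, commit {delay*1000:.0f} ms in advance")
```

At these speeds every extra millisecond of nerve delay moves the correct aim point another 5 mm downrange, which is why timing errors of mere milliseconds mean no rabbit for dinner.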

Having established the purpose of intelligence, let’s examine the limits on intelligence.

The point of the chess match example is that logic-based systems don’t have an edge on predictions in a world that is too complex to be analyzed logically.  This is true in at least a dozen different ways and I’ll discuss this in detail in the next article.  For now it should suffice to state that in our mundane everyday life, we rarely have all the information we need for a 100% certain prediction.  But predict we must, whenever we must act.  Planning is simply a prediction that includes our own actions as something to consider.  We quickly and subconsciously hypothesize a few alternative actions, predict some of their possible outcomes, and then select the action that we predict will bring about the most desirable outcome.  This is an error-prone process, but since it’s the best we can do, this is what we do.  And we do this many times every second since every muscle activation is preceded by a low-level, subconscious prediction of its effects.

We can identify three major kinds of reasoning: deductive (which only proceeds “downhill or sideways” from given premises), inductive (which may cautiously proceed “uphill” – from observations to general principles – under certain conditions), and abductive (which merrily jumps to conclusions even on insufficient evidence). Our current computers almost exclusively use deductive reasoning. Deduction is also common in the sciences, though scientists reluctantly admit that we must often resort to induction in order to make progress. But all humans deal with their complex everyday reality mainly by using abduction. In everyday life, deduction is useless. Even induction is so rare and so spectacular that it made Sherlock Holmes a fascinating freak worth reading about.
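The contrast between the three modes can be caricatured in a few lines of code. This is a toy sketch of my own; the premises, observations, and lookup table are all invented for illustration:

```python
# Toy caricature of the three kinds of reasoning. Everything here is
# invented for illustration; real reasoning is of course far messier.

# Deduction: from premises "downhill" to a guaranteed conclusion.
mortals = {"man"}
def deduce(kind):               # All men are mortal; Socrates is a man;
    return kind in mortals      # therefore Socrates is mortal. Never
                                # wrong -- if the premises hold.

# Induction: from repeated observations cautiously "uphill" to a rule.
observed_swans = ["white", "white", "white"]
def induce(observations):       # Every swan seen so far was white, so
    return all(c == "white" for c in observations)  # all swans are white?
                                # Plausible, but one black swan refutes it.

# Abduction: jump to the best available explanation on thin evidence.
explanations = {"wet grass": "it rained"}   # ...or the sprinkler ran.
def abduce(observation):
    return explanations.get(observation, "no idea")

print(deduce("man"), induce(observed_swans), abduce("wet grass"))
```

The point of the caricature: only `abduce` still produces an answer when the world hands you a situation no premise or rule covers, and it is also the only one that can be confidently wrong.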

So don’t take your clues about what real or artificial intelligence could or should be capable of from Sherlock Holmes, Star Trek’s Data, HAL 9000, or other fictional examples. Examine what real intelligences do every day. Artificial intelligences must be able to exist alongside us in our mundane everyday life. AI is not the ability to play chess or solve integrals; computers can already do these kinds of tasks, and most people agree that that’s not AI. A true AI-based robot should be able to go downtown, select an interesting magazine from the rack in a 7-Eleven, chat with Apu the proprietor in spite of his accent, and understand and enjoy the articles once it gets back home. A stationary AI should be able to selectively browse the web and understand what it reads.

One of the most cited definitions of AI is “The ability of a computer to do things that, when done by a human, an observer would say required intelligence”. I believe this definition is not only wrong, but also harmful to AI research, and one reason we’ve made so little progress in the field. A better (informal) definition would be “The ability of a computer to easily do most everyday mental tasks that are easy for humans”. We can effortlessly navigate a changing world, interact with other agents whose goals are often at odds with our own, and understand and generate spoken and written language. We can instantly recognize a chair as a chair no matter what its shape, color, or orientation. None of these tasks can be done well by today’s computers, and when done at all, they are done one task or one problem domain at a time, by mechanisms specific to the domain. Stanley and the other entrants in the DARPA Grand Challenge for autonomous cars have had some limited success driving down the road on their own, but their “intelligence” is not suited for understanding and enjoying a movie like Herbie the Love Bug.

Logic works very well when it is applicable.  Deductive reasoning is 100% reliable – and induction has a good track record – in the simplified world where science can operate.  Brains are forced to use abduction simply because it’s the only thing that works at all in our complex mundane reality.  AI systems have to do the same.  This is not an implementation choice we can make when designing our AI; it is part of the problem statement.

Some fraction of the AI research community (or should I say the “AI enthusiast community”, since this attitude is now rare among professional AI researchers) refuses to accept these ideas. They insist on trying to design logic-based, infallible, godlike AIs in spite of this being impossible. Some speculate about what might be possible “in principle”, given a universe-sized chunk of computronium and the lifetime of the universe for its computations. They don’t like the abduction-based alternative simply because “it would be just as fallible as a human intelligence”. To these “Shock Level Four” fanboys I say: get a clue. We need working AI as soon as possible. An AI with the intelligence of the average 14-year-old human would be worth a trillion dollars, since it would revolutionize everything we use computers for today and would accelerate our advance as a species more than any previous technology. It is our responsibility as transhumanists to take this opportunity and turn this misdirected, reductionist, logic-based AI research toward something that will be useful in our lifetimes.

The world is constantly changing, forcing us to act within seconds, but our brains have some hard limits on computing capability.  Plausible is often good enough but correct often isn’t fast enough.  As the joke goes, “all early hominids whose brains were based on Bayesian logic were killed by sabertooth tigers while computing prior probabilities”.  We get by with our brains.  AIs must get by with whatever limited resources they have.  Both brains and computers have limits on cycle times, number of processors, and the amount of memory.  Yes, these limits are receding quite rapidly for computers; Moore, Kurzweil, and others are likely right as far as that goes.  But…

The limits on the quality of the predictions we can make are not technological. The complexity and unpredictability of the world yields rapidly diminishing returns in prediction quality for any additional investment in computing power. I believe the rate of these diminishing returns is too steep to be overcome even by recursive self-improvement of computers. We’ll return to this issue in the next article in this series.
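One concrete way to see such diminishing returns is through chaotic dynamics, where prediction error grows exponentially with time. The sketch below is my own illustration – the article specifies no model – using the logistic map, a standard chaotic toy system:

```python
# Chaotic systems illustrate diminishing returns on prediction: forecast
# error grows exponentially, so each extra step of usable forecast costs
# exponentially more initial precision (i.e., computing/measuring effort).
# Sketch using the chaotic logistic map x -> 4x(1-x).

def horizon(eps, threshold=0.1, x0=0.3, max_steps=200):
    """Steps until a forecast starting eps away from the truth
    diverges past the threshold."""
    truth, forecast = x0, x0 + eps
    for step in range(max_steps):
        if abs(truth - forecast) > threshold:
            return step
        truth = 4 * truth * (1 - truth)
        forecast = 4 * forecast * (1 - forecast)
    return max_steps

# A millionfold improvement in initial precision (1e-3 -> 1e-9)
# buys only a modest stretch of extra forecast horizon:
for eps in (1e-3, 1e-6, 1e-9):
    print(f"initial error {eps:g}: useful for ~{horizon(eps)} steps")
```

Each thousandfold improvement in precision buys only about ten more usable steps (log₂ 1000 ≈ 10): exponentially growing cost for linearly growing predictive reach.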

The insight that the complexity and unpredictability of the world enforce a limit on prediction quality – and hence on intelligence – pretty much invalidates the AI singularitarians’ “Scary Idea” (as Ben Goertzel so aptly calls it) of a logic-based, infallible, godlike, malevolent intelligence taking over the world. The diminishing returns cancel out Moore’s law and limit the rate of progress, so that next year’s self-improved AI wouldn’t have a sufficient advantage over a dozen humans armed with pitchforks if the humans were also supported by a dozen of last year’s AIs. The Scary Idea of a Runaway Unfriendly AI is a red herring that we should ignore, along with ideas about logic-based AIs in general. We can now examine the alternatives in earnest and start making some progress. I (and others) have mapped out the main landmarks along this path and I’ll be discussing these in future installments. Ironically, the AI singularity is impossible, and the sooner we stop trying to make it happen, the sooner we’ll have workable and useful AI systems worthy of the name.

This is a saner, more moderate perspective.  There are enormous gains and risks involved, but many vocal and eloquent people have (by ignoring the world-imposed limits on predictability) overestimated both.  We have to design fallible abduction-based AIs because that’s the only kind of true intelligence that is possible at all.  And since these AIs will be fallible, we’ll be able to unplug them if they develop tendencies to become “unfriendly”.  Problem solved.

On the flip side, don’t expect a far-future logic-based infallible godlike AI to rescue the human race by solving all our problems.  It’s up to us, and machines a lot like us.

illustration by author

References:

Computer Chess: Fritz Leiber: “The 64-square Madhouse.”  In A Pail Of Air, Ballantine 1964

Prediction and rabbits: William Calvin: The Throwing Madonna

Variants of the cited definition of AI have been attributed to both John McCarthy and Marvin Minsky.

Benjamin Libet: Neurophysiology of Consciousness: Selected Papers and New Essays

More on Libet

Masking of Libet delay in brain: Daniel Dennett: “Consciousness Explained”

1. another great evolution of artificial intelligence, hope one day when I wake up robots will greet me good morning.. kudos to the human race

2. Oh, and even if we are not modeling the Intelligence to be created on a Human, it is unlikely that it will be of the Malevolent Superintelligent Variety that is so common in Sci-Fi Tropes.

Such an Intelligence isn’t going to spring fully formed from Zeus’ head, like Athena.

Nor will it be born from the Unholy Union of the Whore of Babylon raped by Satan while he is dressed as a goat or wolf.

3. Every Researcher trying to construct a working Intelligence will be working from a set of assumptions (induction), so they will reach different conclusions (deductions) based upon those assumptions.

Meaning:

If the premises are wrong, then the conclusion will have a good chance of being (but not guaranteed to be) wrong.

One of the things that I have noticed in school is that very few instructors claim to know how to construct a human equivalent intelligence of any age.

All they seem to be doing is teaching methods to produce behavior “X”, or prediction solely across domain “Y”.

Some have begun studying the problem of the assumptions behind the methods, and have made some marvelous strides in reproducing human-like behavior (CMU’s Benjamin Stephens’ Sarcos Humanoid Robot and Boston Dynamics’ Atlas), or in reproducing emergent Human Prediction (still across a narrow domain, but not using symbolic-logic-based systems, such as Cynthia Breazeal or Jin Joo Li at MIT Media Lab’s Personal Robotics Group).

I have just begun to dig into UCSD’s work at the various Computational Neuroscience Labs (Such as Terrence Sejnowski’s) and at UCLA (where there is currently very little in the way of large scale Constructed Intelligence going on, but a TON of Neuroscience).

It seems to be only the “Enthusiasts” who have the “Scary Idea”, which seems to come directly from historical mythology and Science Fiction (Modern Mythology).

I have yet to read ANYTHING from anyone actually working to Construct a Human Equivalent Intelligence who has seriously Considered that “Colossus: The Forbin Project” or Terminator’s Skynet is anything we must worry about.

4. This is a fairly good summary of the ‘scruffy’ perspective (which became dominant in academic AI in the 70s, and aside from a short period when expert systems were getting a lot of press, has been dominant ever since), though most scruffies go further and don’t bother at all with symbolic logic even of the abductive variety (statistics are faster than classification into a formal structure, let alone treewalking such a structure).

5. regarding:
“..that the complexity and unpredictability of the world enforces a limit on
prediction quality – and hence intelligence – pretty much invalidates
the AI singularitarians’ ….logic-based infallible godlike malevolent
intelligence taking over the world…”

uh.. ‘not’ — sorry monica, but you’ve booted on addressing the crux here (assumingly by accident)

to use the evolutionary metaphor you opened with (that got my interest to read carefully), just as Homo sapiens looks ‘de facto’ logically infallible from the pov of a fish (or fox) being hunted by one, so too, some ‘scary AI’ need only exhibit the same kind of ‘ratio-of-intelligence-gain-to-human’ to wipe ’em humans out… (if it so feels threatened ’nuff to bother to do so)

while there may be some (unflawed) deep theory argument that strictly un-bounded-ly flawless prediction of our universe is also strictly inaccessible due to restrictions on either informational inputs and/or computational achieve-abilities… however, so too, such extreme ‘deep theory’ kind-of-arguments are also quite irrelevant to exclude-ably bounding arguments on in-achieve-ability of a ‘scary IA’

6. A psychopathic (autistic-like) AI with the intelligence of the average 14-year-old that happens to deal and *interact with human beings* might well, as with any biological psychopath with computational disparities localized in their brain, kill Monica Anderson or any living being. Well, these human predators (biological psychopaths and their synthetic twins) won’t do so much harm even though they lack empathy, which is required for interacting with human beings if you don’t wanna treat human beings like they are just meat and bones. They will probably be stopped, but I don’t even need to believe in the singularity to believe that these average-14-year-old synthetic autists could kill someone before they are eventually stopped.

7. While I am much less of a strict proponent of SingInst’s Scary Idea than people like Eliezer, his argument is definitely more coherent than this.

The complexity and unpredictability of the world indeed places a limit on prediction qualities, but the natural limit of making efficient use of sensory information is so much higher than your intuition allows for.

Also there might be some confusion here between human level AI, transhuman AI, and superintelligent AI.

http://lesswrong.com/lw/qk/that_alien_message/ is Eliezer’s post on the efficient use of sensory information.

8. Amazing article, Monica. I’m still wrapping my head around parts of it. Using any one form of logic to come to a consensus about a thing does in fact seem naive. But I’m curious, as the article seems to brush against the elephant in the room without actually fully discoursing on it. The elephant in this case is experience.

I don’t look at my watch every time I look at the sky to determine what time it is only to then judge that the sky is blue. But the argument portrayed in the article seems to suggest that I should be doing exactly that. Or more accurately that this is precisely what an AI does every time it wants to tell what color the sky is.

I also don’t have to judge the speed of my prey in order to hit it with the rock. Admittedly I may have to the first few dozen times I try, but eventually I build up experience and “get the knack of it” and can hit the rabbit with the arrow a good fraction of the time, outside of when experience fails me, the rabbit’s experience at not getting hit with an arrow supersedes mine, or a random act like a gust of wind crops up.

So what I guess I’m trying to postulate is that certain judgments once made can be cached in the field of reflex and/or experience (since both of our examples are in regards to physical properties of things I would say that reflex is the most appropriate category, where as abstract, non-physical, ideas would go under experience) and could then be readily used in a much quicker fashion until I wanted to find an abstract bit of data.

e.g.:

1. reflex: The sky is blue, because it’s always that way this time of day.
2. induction: The blue sky means daytime because the sun is up.
3. deduction: The sky is blue because light waves passing through molecules in the air are scattered into the blue light spectrum. It’s currently 10 o’clock AM because the sun is at this angle in relation to where I am on the earth, there is no solar eclipse, etc.
4. abduction: The sky is blue because a source of light is present.

If it rains I get wet when I go into it. If I stay out of it I don’t get wet. I don’t need to deduce this every time I see it raining outside.

As you can see, abductive reasoning isn’t precise: the sky is black at night, or arguably a really, really dark blue, though most humans simply consider it black even though there is an alternative source of light (the moon and stars). Your argument for abductive reasoning, while valid, simply relies too heavily on predicting unknown events, when simply caching known-good results in a readily accessible system would make more sense; all the while using the three forms of reasoning to prepare future responses that could be stored in reflex/experience. This is how humans learn organically. AIs have an advantage (or disadvantage, based on one’s perspective) since they can learn at a much higher rate than humans can. The disadvantage being that they could possibly store a bad result even though it came out positive through a stroke of good luck.

I’m looking forward to the rest of your articles on this subject; I’m finding it interesting.

• Jeremiah,

There will be time for elephants. I have a very large meme package to go through and I can’t say everything in the first article; that would give everyone a headache. 🙂 The rest of my articles will go considerably further down the path you are pointing to. If you want a preview and/or background then you can check out some of the resources I mention in another reply below.

My intent is to examine the fundamentals of AI from an Epistemological angle in order to determine which approaches to AI that might work and which should be abandoned. I am not a luddite naysayer claiming that computer based intelligence is impossible; I’m mainly saying we’ve been going about it wrong. In the fourth article I will enumerate some methods we can use to implement various kinds of abductive reasoning.

Experience is indeed vital. The human mind gathers experience throughout a lifetime; we call this learning. I call the subconscious algorithm that operates upon this database of past experiences “Intuition”, and the computer-based implementation is called “Artificial Intuition”. Intuition determines very quickly and effectively (in almost constant time) which experiences are relevant to the current situation and predicts the future based on what happened in similar past events. It trivially resolves conflicts and contradictions, but it’s not perfect and so it occasionally makes mistakes.

I feel justified in calling this process “Intuition”. There is a small public perception problem in that many believe intuition to be an infallible, mysterious power that women have 🙂 but if we define it to be fallible, to be “guessing wisely given a lifetime of experience”, then it is likely the best term available. Everyone uses their intuition all the time in their everyday life, many times per second, since this mechanism is used all the way down to muscle control required to walk and talk – both of these are learned skills, and it takes a newborn child about a year to bootstrap to the level of experience necessary to attempt these feats. “Reflex” is typically used in cases where the brain is *not* involved.

Your examples of abductive reasoning look more like Sherlock Holmes-style inductive reasoning, based on pretty long chains of causality in a Rube Goldberg-like manner. Of course these will fail, and fail often. Good abduction is much less “reasoning” than “table lookup”: identify all past experiences that may be relevant, and take a guess at what happens next, without attempting these kinds of deep logical chains. You don’t have time for that when facing a sabertooth tiger or constructing a sentence. The main trick in abductive reasoning with short “causality” chains is to resolve the contradictions and conflicts in the experience database, not to chain up probabilities. It’s much more important to know which past experience is the most salient than it is to perform causality-based reasoning.
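This table-lookup view of abduction can be sketched as a nearest-experience guess. A deliberately crude sketch of my own: the features, memories, and similarity measure are invented, and stand in for whatever conflict resolution Artificial Intuition actually performs:

```python
# Crude sketch of abduction-as-lookup: find the most similar past
# experience and guess that what happened then will happen now.
# Features, memories, and the similarity measure are invented stand-ins.

experiences = [
    ({"rustling": 1, "stripes": 1, "large": 1}, "tiger -> be somewhere else"),
    ({"rustling": 1, "stripes": 0, "large": 0}, "rabbit -> throw rock"),
    ({"rustling": 0, "stripes": 0, "large": 0}, "nothing -> keep walking"),
]

def similarity(a, b):
    """Count of matching features: a stand-in for salience."""
    return sum(1 for k in a if a[k] == b.get(k))

def intuit(situation):
    """No causal chains: just the single most salient past experience."""
    return max(experiences, key=lambda e: similarity(situation, e[0]))[1]

print(intuit({"rustling": 1, "stripes": 1, "large": 1}))
```

Note that there is no chain of inference at all: one similarity pass over memory, one guess. That is what makes it fast enough for sabertooth tigers, and also what makes it fallible.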

“Causality” is in scare quotes above. It’s actually not causality, that would be too Reductionist. There are Holistic alternatives.

PS the sky is blue because air is blue. It’s as simple as that.

• Monica,

Thanks for the direct response, I highly appreciated this. I’m glad to see that what I was concerned was being overlooked in fact is not. I’ll surely follow up on your resources here shortly.

My Rube Goldberg mechanics aside, this was actually the crux of what I was aiming at, and I am glad to see that I am keeping up with the class. “Causality” in this case would have to be kept on a very short chain. Obviously if I have a sabretooth tiger running at me, I’m going to be less concerned with the why (the cause, as it were) and more with the prevention of becoming lunch (the effect). In this case I really rather doubt I’d wonder about the why at all and would just get out of the way as quickly as possible. Which would be reflexive more than intuitive, if I lacked experience in getting out of the way of sabretooth tigers.

This brings me to another point, one that I assume is safely addressed in a future article: the differences between intuition, reflex, and experience. Intuition is hard to place; its nature seems to lie (at least in my limited understanding, not being of the fairer part of the species) between sense and output. Reflex I would agree is essentially a non-thought-induced action, whereas experience allows us to look around for danger before reflex kicks in. Intuition, on the other hand, seems to rely on picking up nearly impossible-to-detect nuances and patterns that would escape experience’s scope. Would this lead us to a third system or a sixth sense? I’m uncertain, but am interested to see where you’re going with this.

PS. No response necessary, I’ll wait for the rest of the presents as well as a five year old on Christmas eve waits for theirs. 😉

9. Once upon a time learned people “proved” that it was impossible for a bumblebee to fly. Their argument seemed much more solid than this argument against AIs much, much more intelligent than ourselves. That the value of prediction is time-bound, and that intelligence is also, in no way means that vastly higher-quality predictions cannot be made than those now running in the brains of somewhat evolved chimps on this little rock in one corner of one average galaxy.

• Compared to how birds (or, for that matter, aeroplanes or helicopters) fly, bees don’t fly.

The viscosity of air to a creature the size of a bee is such that the air it is moving through is so thick that “swimming” would actually be a better descriptor for what they are doing.

The writer of this article has proposed that AGIs might not be that much better than us at predicting the future, because of the complexity of our world. Now, as AGIs are completely hypothetical, she doesn’t have any evidence that this is the case. However, you also do not have any evidence that this is *not* the case.

I see no reason why an AGI cannot be created *in principle*, but it’s going to be really hard, for reasons such as those the author of this article has expressed (and for many other reasons as well); yet it seems that singularity advocates simply ignore this. I am always amazed by the faith that some people have in superhuman AGI, which does not actually exist yet. Faith in imaginary things is not science; it is religion.

• Faith in imaginary things that have no link to current reality, like a unicorn, or God – that is religion. However, I have faith in Moore’s law. I have faith that an imaginary computer twice as fast as my current one will exist next year. Maybe it won’t, for some strange reason, but I am still betting on it based on past experience, even though the computer itself is imaginary and I have never seen it in my life, and I have no reason other than Moore’s law to think it will appear in the future. I think the belief in an AGI is much more science than religion. Still, Monica and others have shown some serious reasons why an AGI would not be a result of the continuation of Moore’s law.

10. Great piece Monica.

I particularly like your exposition on the nature and limits of prediction, and the limits of deductive logic. Very valuable insights for some of our Singularitarian friends to take to heart.

I do think the problem of Friendly AI has more pieces to it than what is covered here however. With friendliness I think we are worried not only about how fast the AIs will blow up in intelligence, which is nicely considered here, but how they will build cognitive and behavioral inhibitions as they ramp up, and how they will learn from their mistakes. So perhaps this piece might be better titled as about the limits and nature of prediction in intelligent systems, and potential limits to the speed of growth of machine intelligence.

Understanding friendliness may need to incorporate some theory of the emergence of morality in complex living systems, and relate that to what we may expect (and eventually test for) in complex machines. At least that’s the way that I would suggest future science and current hypothesis might best approach it.

Warm Regards,

JS

11. Interesting article. I have some problems with/questions on some of the statements in it though:

1) Deductive reasoning is 100% reliable? Many deductions have been wrong in many different situations and it is not hard to make incorrect deductions, given incomplete information (which is almost always the case). I’m not sure why this was stated ….

2) How exactly would an AI with the intelligence of an average 14-year-old change the world dramatically? I think I may have missed something.

3) You state the purpose of intelligence, but do not define it. I realize that this is not an easy thing to do, but when you claim that an AI with an intelligence of X will change the world, it is hard to believe that anything truly revolutionary will occur – we have plenty of 14-year-olds and very few of them seem to drastically alter the world.

A quick suggestion from a non-AI, non-programming person: would it be possible to just have an intelligent AI that needs validation of its conclusions from a human before taking any action? Would that be hard to program in? I know it would slow things down, but it seems like that would be a nice safety option.

• My apologies to other readers… this reply is a bit more technical than the article itself.

John,

Deductive reasoning is 100% reliable in the abstract domain of logic itself. All problems with deductive reasoning are due to “Reduction Errors”, i.e. errors in the grounding (in reality) of the symbols used, in identifying the correct “Model” (equations), or in interpreting the result back into real world concepts. (This is similar to “sampling errors” in various engineering disciplines). When we say “All men are mortal” and “Socrates is a man”, and then use it to prove that “Socrates is mortal” the logic itself is infallible. The problems are all caused by the effort to define “All”, “men” “are”, “mortal”, “Socrates”, etc. – what do words mean? And if we are building an AI, then this effort – the “Reduction” – has to be done *automatically*, otherwise we don’t have an AI. My current technically accurate but *overly strict* definition of Intelligence is “The ability to perform Epistemic Reduction”. I’m currently trying to find a definition with similar strength but that covers more ground, in order to cover all uses of Intelligence/Intuition that happen at a subconscious level, since that’s where all the important stuff happens. More about this in article #3.

The trick with a 14-year-old-human-equivalent AI is that it can be taught some interesting (or perhaps dull 🙂) task such as evaluating the quality of webpages. We can then copy it a millionfold. This theme can be played out in any domain you want. A million teenagers reading (and comprehending) all of Lexis-Nexis. A million teenagers reading the news, scanning for things of value. And so on. These AIs could perform a large fraction of simpler white-collar jobs. If you could teach your job to a clever 14-year-old in a couple of months, then your entire profession will change within six months of the introduction of AI to your domain of expertise. Almost all professions will change – in my opinion, for the better. We expect Reductionist methods to be outcompeted in short order.

I observe that objection #3 is the same as #2, except for the request to define Intelligence, which I did above.

And if our AIs are evaluating and sorting webpages, or finding legal precedents, or scanning the news, and that’s the limit of their skill and ambition, why worry about having to confirm their decisions? You trust your email spam filters today to toss out thousands of emails per year. Why wouldn’t you trust some AI that happens to actually *understand* English and can make much better decisions about what is spam and what isn’t? An AI would sort thousands of webpages per hour. Humans could not keep up. Spot check their results, or use multiple differently raised AIs to do the same task and check where they disagree (I call this trick “Enforced Diversity”, and it is a decent second-level bulwark against an AI takeover). This is straightforward engineering, once you have licensed a 20GB core image that *understands language* from Syntience Inc.
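The “Enforced Diversity” trick described above is, mechanically, an ensemble disagreement check: run several independently trained models on the same item, accept unanimous verdicts, and flag splits for a human spot check. A minimal sketch (the toy spam classifiers are my own hypothetical stand-ins, not anything from Syntience):

```python
from collections import Counter

def enforced_diversity(classifiers, item):
    """Run several independently trained classifiers on the same item.

    Returns (label, None) when all agree, or (majority_label, vote_counts)
    when they disagree, so the item can be flagged for human review.
    """
    votes = [clf(item) for clf in classifiers]
    tally = Counter(votes)
    label, count = tally.most_common(1)[0]
    if count == len(votes):
        return label, None           # unanimous: accept automatically
    return label, dict(tally)        # disagreement: flag for spot check

# Hypothetical stand-ins for "differently raised" AIs:
spam_a = lambda text: "spam" if "viagra" in text else "ham"
spam_b = lambda text: "spam" if "free money" in text else "ham"
spam_c = lambda text: "spam" if "!!!" in text else "ham"

label, disagreement = enforced_diversity(
    [spam_a, spam_b, spam_c], "free money !!!")
# Two of three voted "spam"; the non-None disagreement dict is the
# signal to route this item to a human reviewer.
```

The point is not the toy classifiers but the routing rule: only the small fraction of items where the diverse models disagree ever needs human attention.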

And this scenario will play out in hundreds of vertical domains. For instance, 14-year-olds have perfect comprehension of spoken English. We could have 100% correct speech recognition. Forget about computer keyboards. Perfect OCR. Perfect spelling correction, even perfect copy editing, etc. If you want more examples, we can provide you with the corporate “Use Cases” document. It has already leaked to the ‘net.

12. Monica, you are correct… mostly. What is missing in your assessment of complexity is its relative cost. Cost, within parameters, is plastic. There can be many configurations that result in the same effective morphology or behavior. Evolution chooses (eventually) the configurations that both result in the target shape or function, AND do so with the least energy throughput.

The energy required to build and maintain a given complexity rises in proportion to the extent to which it is unstable. Unstable systems are systems that fight causality more than other (possible) systems do.

Statistically, this instability is a function of how unlikely a configuration is… thermodynamically, it is a function of how much energy it takes to maintain that structure. The result? Systems become more and more predictable as they become more and more complex. Systems that don’t follow this rule are easy to detect and avoid. They are the systems in which non-stable configurations require tremendous energy throughput for simple maintenance. Luckily, such forms of costly instability are doomed to be replaced by similarly complex systems that don’t require such exaggerated maintenance. Biological evolution has reached this point in complexity without an understanding of thermodynamics. Going much further will require the incorporation of such understanding as prediction. Systems building toward complexities greater than the relatively simple infrastructure we call culture will be lost without it.

As systems become more complex, their complexity is increasingly dependent upon a coherence with physical causality. The costs associated with breaking with causality become prohibitive as a system’s complexity rises.

One human being can act in ways that are highly incoherent. But a culture of a billion humans will fall apart unless this coherence (with physical causality) is honored.

Because of this, one can build a prediction algorithm right now that will work no matter how complex a system or its environment becomes… a predictor that will work in any domain. Such a predictor would only have to do two things: 1. Measure the complexity of the results a system is built to produce. 2. Divide that complexity by the energy required to build and maintain that system such that it will operate reliably forever. This complexity-per-energy-throughput ratio can then be compared to a curve that maps the same ratio for known systems. Systems that plot on the hot side of such a curve are systems that can be predicted not to survive. Systems plotting close to the curve are systems that will, by surviving, predict the direction that complexity will take.
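The two steps above can be put in sketch form. This is only an illustration of the proposed ratio test, not a working predictor: the `complexity` measure, the `energy_throughput` figure, and the shape of the reference curve are all assumptions I have invented for the example.

```python
def survival_margin(complexity, energy_throughput, reference_curve):
    """Sketch of the two-step predictor described in the comment.

    complexity        -- some measure of the results the system produces
    energy_throughput -- energy needed to build and maintain the system
    reference_curve   -- function mapping a complexity level to the
                         complexity-per-energy ratio of known surviving
                         systems at that level

    Returns the margin between the curve and the system's own ratio.
    Negative margins put the system on the "hot side" of the curve
    (predicted not to survive); small positive margins track the curve.
    """
    ratio = complexity / energy_throughput      # step 2 of the comment
    return reference_curve(complexity) - ratio

# Purely illustrative reference curve (an assumption): the sustainable
# ratio decays as complexity grows.
curve = lambda c: 1.0 / (1.0 + c)

margin = survival_margin(complexity=10.0,
                         energy_throughput=200.0,
                         reference_curve=curve)
# A positive margin here would place this system on the sustainable
# side of the illustrative curve.
```

Everything interesting is hidden in how one would actually measure complexity and energy throughput; the sketch only shows where those measurements would plug in.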

13. I think the major flawed assumption you make is that the world will continue to be completely unpredictable.

Are you at all familiar with the Venus Project?
They want to redesign human society using logic-based decision-making computers to create a highly efficient, sustainable society. Systems theory would be applied to global resource management, and the contention is that everything would become self-evident.
In terms of the sprinkler example… there probably wouldn’t be sprinkler systems like that, because it is simply inevitable that they ruin things like computers. First, buildings of the future will be made entirely of fireproof materials. Second, if there were still sprinklers (meaning they haven’t been replaced with a superior technology), they would be decentralized and would only trigger in the area where the fire is occurring.

The world is only so unmanageable because the system is based on decentralized competition.
http://WWW.THEVENUSPROJECT.COM

14. Monica has a good nose; the feeling that AI as it stands cannot serve as the basis for developing an AGI is right.

The basis of AI, as it is now, is built on the common misleading assumption about the processes that determine observable behavior.

Monica states, “The purpose of intelligence is prediction.” That is true, but what is predicted? Later she states, “The brain has to process the input from the senses.” That is the common mistake.

The brain, at best, has to process the output from the senses.

In most cases even that is impossible, because separate pathways for the signals from the senses are often absent in the bodies of living creatures. More objections to the current state of the “scientific” approach in that field are at http://www.legendaliveinc.com

“Predictive capability selects the winners among frogs darting their tongue against flies and among flies avoiding tongues.”

Yes, all living creatures can make a prediction about the future state of… the future state of the world? But they never have information about the causes in the world delivered to a control unit.

How is our survival possible under these circumstances? Living creatures can survive if they are capable of developing a chain of behavioral actions leading to the desired state of their inner subjective representation of how things should be in the future.

There is the key word: subjectivity. The task of designing AGI will be solved if one is able to build an artificial subjective system.

Best, Michael

15. The author doesn’t seem to be giving any insights of her own into AI, just criticizing current trends. That kind of critique is really becoming popular nowadays.
Would be nice to hear some fresh ideas.

16. I think that the point the author misses is that AI researchers are not generally trying to make an artificial human intelligence. Humans make a tradeoff: in being so adaptive, we are no good at the type of intelligence that computers excel at. You suggest that we could have our cake and eat it too by making computers think like a person, but actually we would be forcing those same tradeoffs onto the AI. Sure, there are people working on replicating the brain, but they are mostly trying to gain a greater understanding of how our minds work. The reason people are so focused in the direction you mention is that the aim of AI research is to augment human intelligence rather than to replace it.