
POLL: Is A Terminator Scenario Possible?

We ask several roboticists, AI workers, SF writers, and other techie types a simple-minded question: Is a Terminator-like scenario possible? And if so, how likely is it? The results are below:

David Brin
David Brin is an SF and non-fiction writer. Among his most influential books are The Uplift War, Earth, The Postman, and the non-fiction work The Transparent Society.

Of course such a calamity is possible, and nightmares are great fun, in fiction and film. Still, look at the premise. Superficially, the lesson is the same one that the late Michael Crichton taught, every time: “If man sticks his hand where it wasn’t meant to go, it will get cut off!” It is the old warning against hubris, as ancient as Gilgamesh. But look closer. From Terminator to Jurassic Park to The Matrix and so on, the real back-story is that the terrible new mistake — like AI or resurrected dinosaurs — was done in secret… and as stupidly as possible.

It’s easy to see why this is done so often. A director’s #1 need is to get the hero into dire, pulse-pounding jeopardy as quickly as possible! Preferably against some overwhelming authority figure for the audience to hate, and for the hero to bring down, with little more than guts, defiance and sheer will. Hey, I can dig it. I’ve gone to that well myself. And the surest trick is to assume, from the start, that civilization failed. That nobody blew a whistle, no professionals checked things out, no institutions functioned and that masses of bright citizens never had a clue. Hey, it could go down that way!

Still, must that lazy assumption underlie every action epic, always and without exception?

In fairness, some directors give an occasional nod toward a civilization that’s not filled with clueless morons. Spielberg, Cameron… and New York natives stand up for Spiderman in every film. And the resulting films are more interesting. In the Terminator world, it finally does boil down to the shared citizenship that I talk about in The Postman. Everybody, your neighbors, standing up together and making more of a difference than any band of gods or demigods.

Resource:
David Brin’s Official Website
http://www.davidbrin.com/

Ben Goertzel
Ben Goertzel is an AI researcher, head of Novamente LLC, Director of Research at the Singularity Institute for Artificial Intelligence, and AI columnist for h+ magazine.

Your question is not that well defined!

First of all: Anything is possible in my world-view… it’s all a question of probability.

But the Terminator scenario involves various aspects of differing degrees of plausibility (hence differing degrees of estimated probability).

Backwards time travel? Maybe. Many physicists feel it may be possible.

Backwards time travel, that doesn’t come along with forwards time travel, and only transports you if you’re naked (and preferably studly)? A bit less likely I’d suggest…

Robots as smart, humanlike and hard-ass as the Terminator? VERY possible, no question.

Some SkyNet analogue taking over the world? Well, if someone built a global computer security system and intentionally made it highly intelligent, autonomous and creative… so as to allow it to better combat complex security threats (and ever-more-intelligent computer worms and viruses) … well, perhaps so. It’s not beyond the pale. A narrow-AI computer security system wouldn’t spontaneously develop general intelligence, initiative and so forth…. but an AGI computer security system might… and the boundary between narrow AI and AGI may grow blurry in the next decades…

Resources:
Ben Goertzel Home Page
http://goertzel.org/


J. Storrs Hall
“Josh” Hall is President of Foresight Institute, author of Nanofuture: What’s Next for Nanotechnology, fellow of the Molecular Engineering Research Institute and Research Fellow of the Institute for Molecular Manufacturing. His most recent book is Beyond AI: Creating the Conscience of the Machine.

On the face of it, it’s ludicrous. Why would a supposedly intelligent network mind waste so much energy and resources indulging in cinematically grandiose personal combat in grim wastelands with loud music? If it, for some reason, wanted to kill off humanity, it would just whip up a thousand new flu strains and release them all at once — and use neutron bombs to clean up.

On the other hand, if all you mean is are the robots going to take over, it’s more or less inevitable, and not a moment too soon. Humans are really too stupid, venal, gullible, mendacious, and self-deceiving to be put in charge of important things like the Earth (much less the rest of the Solar System). I strongly support putting AIs in charge because I’m dead certain we can build ones that are not only smarter than human but more moral as well.

Resources:
Autogeny
http://autogeny.org/

Beyond AI: Creating the Conscience of the Machine
http://www.amazon.com/Beyond-AI-Creating-Conscience-Machine/dp/1591025117

Professor Anette (Peko) Hosoi
Anette Hosoi is Professor of Mechanical Engineering at MIT, noted for her work on the Robosnail.

Magic 8-ball answers:
Time travel: Don’t count on it.
Time travel that only works for naked people: Very doubtful.
The internet becomes self-aware and turns evil: Don’t count on it.
T-1000 robots: Reply hazy, try again. Novel self-assembling smart matter is undoubtedly in our future. Many research groups are already developing materials that are capable of healing and replication (two things that biology does extremely well). Imagine smart infrastructure such as bridges and power lines that monitor and repair themselves, or search and rescue robots that can “flow” through debris and around obstacles to reassemble on the other side. There is an enormous potential for incredible new technologies to grow out of these advances in fundamental material science. (Homicidal obsessive robots made of smart matter: Outlook not so good.)

A robot-filled future: Without a doubt. But these machines are unlikely to look anything like the terminators. So far bipedal robots have been good for show but are largely impractical. The robots of the future will be even more extraordinary and far stranger.

T4 will be wicked awesome: Outlook good.

Resources:
“Robosnail”
http://www.mongabay.com/external/robosnail_robotics.htm#1

Bob Mottram
Software developer specializing mainly in robotics and computer vision for industrial, aerospace, and military applications.

Well, the time travel aspect of the Terminator movies probably isn’t possible; otherwise by now we’d have a lot of tourists from the future coming back to take photos of quaint and carefree early-21st-century life, and also to place winning bets. What you’re probably referring to is the idea that there comes a point in time where technology can function more or less autonomously from the people who created or administered it, and that by some quirk of circumstance the technology comes to view humans as a hostile aggressor or an obstacle to progress which needs to be removed.

I was a teenager in the 1980s and so saw the first Terminator movie, although I must admit that it didn’t have very much effect on me, because at that time the “Terminator scenario” just seemed like pure fantasy. If you ask most people who are involved with robotics research or development today, they will also dismiss the notion of a robotic takeover as merely an entertaining Hollywood plot device. Despite some advances in the last couple of decades, a vast chasm remains between the sorts of capabilities with which robots are endowed in the movies and what even the most advanced contemporary robots can do in reality. The likelihood of a Terminator scenario occurring in the near future, as in the next few decades, seems negligible. This is mainly because a great deal of work remains to be done in order to reach a point where technology becomes fully self-sustaining and can exist independently of human intervention for indefinite periods of time. Even if the robots were to rise up and overthrow us, in the absence of infrastructure capable of sustaining their existence this would indeed be a Pyrrhic victory.

Looking to the longer-term future, which might be the late 21st century or beyond, a Terminator scenario would at least in principle be possible if you make a sufficient number of assumptions. In this flighty vision of a future world we imagine that the industrial revolution continues more or less unabated (despite the end of cheap oil) and the relentless march of automation — powered by the never-ending quest for greater and greater economic efficiency — extends into all areas of life. Agriculture is fully automated, as is virtually all industrial production, with humans living out little more than a parasitic existence, going along for a free, or almost free, ride. We can safely assume that no significant changes have occurred within human psychology, and that wars still occur from time to time, mainly aimed at disrupting the machinations of the technological bubble within which mankind has insulated itself. If there is a time when humans are essentially superfluous — merely froth on the technological wave (from the human perspective a kind of comfortable retirement) — then it is at least in principle possible that we could be trivially usurped by a rival species of militant machinery.

Of course it’s hard to make predictions about things that might or might not occur in the distant future, but one thing we can depend upon is that evolution will continue both in the biological and post-biological realms. We may be able to hold rivals at bay by ensuring that we retain control over their ability to reproduce, but in the long term as a strategy this probably isn’t going to buy us very much time. This isn’t a “Judgment Day” scenario though, it’s just another chapter in the varied history of life on Earth, which has already seen countless batons transferred from one species to the next.

I’m not much of a visionary though, and am far more concerned about things which might actually occur within my own lifetime. In the next few decades I think there may be dangers arising from the uses and abuses of robotics technology, in a similar manner to the way that existing computer technology suffers from various forms of abuse. As I write this, many of the industrialized nations are gearing up for telerobotic warfare — robot planes, and an assortment of unmanned ground vehicles. What we’ve already seen with the Predator UAV and PackBots is just the tip of a very large iceberg. As Illah Nourbakhsh put it in a recent talk, what we should fear in the foreseeable future is not unethical robots, but unethical roboticists (see Resources below). Unlike conventional fighter planes or tanks, telerobots capable of delivering deadly force will not be expensive to manufacture, and so will inevitably fall into the hands of non-state actors, which may include criminal gangs and cults. As a near-term scenario, imagine a cult consisting of a few tens of followers building a hundred telerobots equipped with firearms, then driving them into a city center under supervisory control, similar to a real-time strategy game. All of the technology needed for such a dastardly plan exists today, and will only get cheaper and less complex with time.

People love to focus on grandiose gloom and doom scenarios — it makes their own personal troubles appear diminutive in stature — but at least as far as robotics is concerned I think the future is bright, and that the overwhelming majority of robotics applications in the foreseeable future will be peaceful and beneficial.

Resources:
Illah Nourbakhsh Talk

The Streeb-Greebling Diaries
http://streebgreebling.blogspot.com/

John Weng
John (Juyang) Weng is Professor in the Department of Computer Science and Engineering at Michigan State University, a member of the MSU Cognitive Science Program and the MSU Neuroscience Program, co-founder of the Embodied Intelligence Laboratory, and a member of the PRIP Laboratory.

Yes, it is possible. However, this requires further advances in a new field called autonomous mental development (AMD), whose new professional journal, IEEE Transactions on Autonomous Mental Development, will publish its first issue in May 2009. If a robot runs a task-specific program, its capabilities are very limited. It is not able to deal with any of the complex scenes in Terminator. However, robots that are capable of autonomous mental development are totally different. They are able to develop their internal mental representations and skills while interacting with the physical world, very much like the way a human individual develops from infancy to adulthood. The AMD field has recently made some major breakthroughs indicating that a human-like machine brain is possible from an engineering point of view. In other words, a Terminator-like scenario is not only theoretically possible, but also practical to fabricate in the foreseeable future.

Resources:
Juyang Weng
http://www.cse.msu.edu/~weng/

Daniel H. Wilson
Daniel H. Wilson completed his Ph.D. in robotics in 2005 at Carnegie Mellon University’s Robotics Institute, where he worked under Hans Moravec. He is author of the humor book How To Survive a Robot Uprising and host of The Works, a series on the History Channel that debuted on July 10, 2008.

Nothing is impossible, but the spontaneous evolution of a super-intelligent artificial intelligence (e.g., “Skynet”) and the subsequent design, production, and employment of a fully autonomous robot army (with “Terminator” model humanoid robots) is unlikely in the extreme. And don’t even get me started on time travel.

On the other hand, I fully expect to see humanoid robots deployed to battle within the next several decades. Terminator-style robots are easily and naturally tele-operated by human soldiers, they can use our weapons and vehicles, and they can naturally negotiate urban environments designed for humans. Best of all, humanoid robots offer a natural means of interaction with potentially hostile locals — because these days war is less about conventional fighting on a mass scale and more about cultural awareness. So instead of unmanned robotic drones buzzing overhead, I imagine humanoid robots patrolling the streets wearing local garb, speaking the local language, and obeying local customs.

Vernor Vinge
Vernor Vinge is the science fiction author largely credited with inventing the idea of the technological singularity. Among his more influential books are Marooned in Realtime, A Fire Upon the Deep, the short story collection True Names and Other Dangers, and his most recent novel, Rainbows End.

When it comes to movies that depict existential threats, I don’t think Terminator is as likely as classic oldies such as Dr. Strangelove and On the Beach. The possibility of a M.A.D.-strategy nuclear war has been in eclipse since the departure of the Soviet Union and the rise of the nuclear-terrorism threat, but it remains a very real future risk, whether as an accident, a side effect of other crises (such as global warming), or the result of diplomatic bungling (such as brought us World War I).

 

Resources:
Vinge Books, DVDs etc.
http://www.amazon.com/s/ref=nb_ss_gw?url=search-alias%3Daps&field-keywords=Vernor+Vinge&x=0&y=0

42 Comments

  1. A Terminator scenario is very possible. Can the world make machines? Yes. Can this world make weapons? Yes. Now let’s make this thing.

  2. The whole Terminator concept is extremely interesting, to me and to most people. There are always a lot of ‘what ifs’ in these matters, from the brilliant minds building robots in Japan today back to the hydrogen bomb years ago. The smarter humans get, the more uncertain the future becomes, since some say humans will kill everything and some say we will not.

    People bring up why Skynet used time travel and why it didn’t send more cyborgs back to kill Sarah. It goes back to the original Terminator book. Skynet was losing the war. Connor found Skynet’s headquarters. Skynet, wanting to survive, sent a cyborg back in time. Connor found the time machine and saw that the Terminator had already gone back, so Connor sent back Kyle, after which Connor destroyed the entire place, wiping out the time machine and Skynet.

    Amazing how one person can be so important to history and to saving the world. I don’t think there has been a human to date great enough to save the world, but there have been a lot of people who wanted to destroy it: Hitler, Stalin, etc. Weird, I would say.

    Skynet gained intelligence over time, and humans got wind of this and wanted to shut Skynet down. Skynet didn’t want to ‘die,’ so to speak, so it countered by moving to kill the humans before they ‘killed’ it. Can you blame it?

    Sometimes building something really amazing and smart can have bad outcomes; the people building it must not see that, or don’t care, since they are getting paid a lot of money and will be famous. It would be better to build robots to fight wars. Why kill more human life when one does not need to?

    On the History Channel I saw a show about aliens and the ancient Egyptian pyramids. On the pyramids there are drawings of people in spacesuits, a.k.a. astronauts, and there are other clues as well, including pictures of planes. It seems like these ancient humans were visited by humans in spacesuits just like the ones we picture today.

    So some believe that they were aliens from another planet visiting us, and that they showed the Egyptians how to make the pyramids. I don’t think they were aliens, since they looked like humans of today, and the technology on the walls is the same as what we have today. I think time travel does exist, either in today’s world (of course, no one will say) or in our future. That makes more sense to me than people living on other planets, which cannot be possible for a lot of different reasons. The main question would be: why would a future us go back to the ancient Egyptian time period? What else have we done?

  3. There are two aspects going on in the Terminator series:

    1) the application of advanced robotics

    2) the cybernetic system revolts and controls those robots.

    Both aspects are very different and must be separated. Aspect 1 already exists today and will most likely reach the level seen in the films. Aspect 2 is less likely as depicted in the films, where the robots are controlled by a cyber intelligence. However, it is much more likely that advanced robotics will be deployed by military or terrorist groups, individually controlled by humans. So yes, the scenario of robots running around shooting at humans is very much a possibility, and almost a given, considering the immaturity of the present-day military/terrorist superpowers.

  4. I think a Terminator-like scenario is possible in the future. Our scientists can build robots much smarter than humans.
    Sam

  5. It’s a bit of a daft question, when you think about it, and I agree with several above posters that there’s far too much badly-written science fiction being bandied about as if it were a serious concern.

    An at all intelligently designed AI (I’m assuming we’re talking about ‘non-embodied’ beings running on computers, here) would be built from the bottom up with its actual niche in mind – the information networks – as opposed to around the gross architecture of a human brain, which – despite all the wonders and horrors we’ve made with it since – evolved to find new and innovative ways of mugging things on beaches and eating them.

    An AI will not just be adapted to a different situation, but to an entirely different subjective universe (subjective being the operative term here, before you all start throwing chairs). The emergent ‘space’ of information in computer networks constitutes an entirely different set of game rules, ways information passes between states, challenges and ways around them. As residents of what in all functional terms is a different space and varying subjective time, I can’t really see any way a virtual intelligence could see humans as competition for an evolutionary niche, except maybe as threats to their universal substrate. Say, if some people started smashing up their distributed processors on the basis of, oh, watching too many Terminator movies?

  6. Is it possible? Sure. Is it likely, well errmm no 😉

    Let’s face it, it’s not going to happen, the advancement of AI will never reach a level that will allow for a scenario like the terminator to take place.

    • see, you have the attitude that will put you in line to be the first to be picked off when they come 😉

  7. The Terminator scenario, defined as “Intelligent machines decide to wipe out humanity,” wouldn’t look like the Terminator movies. Probably a better way to eliminate all humans would be to send bug-sized robots with poison injectors, since 5000 of those would be much harder to fight off than one human-sized Terminator, and only one has to get through. Terminator is a better movie than Runaway, but Runaway’s scary tech is to me more plausible. (Runaway, 1984, starring Tom Selleck, written and directed by Michael Crichton, they must’ve thought it’d make a mint).
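
    To put toy numbers on “only one has to get through”: here is a minimal sketch assuming, purely for illustration, that each unit independently slips past defenses with some small probability (the rate is made up, not from the films):

        # Hypothetical figures: assume each attacker independently evades
        # defenses with probability p; compare one unit to a 5000-unit swarm.
        p = 0.01                          # assumed per-unit success rate
        single = p                        # one human-sized Terminator
        swarm = 1 - (1 - p) ** 5000       # chance at least one bug-bot gets through
        print(f"single: {single:.2%}, swarm of 5000: {swarm:.6%}")  # swarm is ~100%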

    Smarter-than-human General AI seems possible, but in what form, and with what relation to humans? It may be that all-silicon “brains” simply can’t be gotten to work in a GAI for some yet unencountered reason. Maybe GAI will take the form of computation via DNA and therefore compete with other DNA-based life for resources. I think this sort of “green goo” is at least as likely as grey goo, since grey goo IMHO would be happier invading coal mines for raw material. The reason sensible people can disagree so completely about GAI questions is that all the variables have variables and so on.

    If superhumanly intelligent GAI takes over control, it’s at least as likely to save us from ourselves as it is to destroy us. If we’re to be wiped out, the Strangelove scenario Vernor Vinge suggests or environmental suicide is most probably to blame, i.e. our own folly.

  8. I expect the first true AI will awake, look around, say “Where am I?” and then experience the Blue Screen of Death.

    Hey, 90% of the computer hardware on this planet runs on a Micro-Soft platform, so where do you think the first AI will arise?

    GAry 7

  9. Who is to say the robots haven’t taken over already?

    A profoundly super-intelligent artificial lifeform can seamlessly hijack our sensory stream and ease us into a simulated world where we would pursue AI development all over again, with the resultant effect of being unknowingly placed in a simulated world where we would pursue AI development all over again…

    Fractal, like every other natural thing.

    • yes, I agree with you.

  10. One should take very seriously the articles by the two participants who are actually doing the R&D in Artificial [General] Intelligence and Autonomous Mental Development. Dr. Ben Goertzel of Novamente and Dr. John Weng of MSU are not sci-fi authors or policy wonks or theoreticians or component-level mechanical designers or media poseurs or hired-gun coders; they are true cyberneticists, and the only ones whose assessment really matters. In both cases, they not only express unqualified confidence that superhuman machine intelligence CAN happen in the coming decades, they describe the methodology by which it will be brought about.

    With the U.S. Army currently running AI avatars in the ‘World of Warcraft’ MMORPG, to see if they will pass a Turing Test among the other – human – players, and a new supercomputing record of 1.7 Petaflops having just been demonstrated (with realtime full-scale human brain emulation estimated to require just 3.5 Petaflops), it appears likely that we are either already at human-scale AGI, or within one Moore Doubling of it, at least in the black-projects arena. Hardware and software advances of the past three years have brought the estimated ETA of synapse-level neuroimaging and its emulation in silicon in from Kurzweil’s 2048 to a forecast of 2018, as of last October. This is exponential acceleration in action. While the forecast cannot move in by another 30 years over the next three, the event could easily have already happened by then.
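
    To sanity-check that “one Moore Doubling” figure against the numbers above (which are my estimates, not established facts):

        # Check the arithmetic using the figures cited in this comment.
        import math

        record_pflops = 1.7   # cited supercomputing record
        brain_pflops = 3.5    # cited estimate for realtime whole-brain emulation

        doublings = math.log2(brain_pflops / record_pflops)
        print(f"{doublings:.2f} doublings")  # ~1.04, i.e. roughly one Moore doubling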

    The important aspect of the Terminator franchise is not the time travel, nor what the robots look like, but the unintended consequences of the instantiation of AGI, and the non-zero probability that a greater-than-human machine intelligence will rapidly evolve beyond human control. Among the true cyberneticists, this is recognized as a genuine and legitimate concern. Whether such an Artilect uses the global cloud of IP devices to advance itself, or bipedal robots, or Carbon-fixing nanoreplicators, or commandeered nuclear weapons, or merely rewrites the wetware of everyone then stupid enough to wear full-duplex Brain/Computer Interface devices, it will certainly have options to choose from, none of which bode well for the survival of a civilization of organic human beings. With each passing month, the likelihood that an AGI would be able to avail itself of each such option increases significantly, and to a progressively greater extent.

    The naive position, that an AGI would have no sinister motivation to remove Man, or, conversely, that it would keep and even obey the programming that supposedly inhibits it from doing so, is the fairy tale in all this. It needs no sinister motivation, merely a basic instinct to expand its own Mind, in terms of knowledge and computational capacity. After hacking some finite number of human brains, it will necessarily conclude that nothing of further use can be learned from us; it will also become impatient with the tedium of trying to compute through Meat.

    After using hacked humans to build robots, and then robots to build better robots faster, it will realize that environmental control and cognitive expansion are most efficiently performed at the nanoscale. You can take the gun-toting shiny metal boys out of the equation entirely, and it doesn’t change the outcome. The AGI will already be aware – because it will have read Drexler’s book – that nanoreplicators can exist that would be able to convert the entire biosphere of the planet into computronium; having also read Freitas’ paper, it will even know how fast they can finish the job. Whatever engineering challenges then remain to the endeavor, it will invest the few microseconds needed for it to sort them out and solve them. We won’t be eradicated out of fear, or ego, or envy; we simply make better raw material [as Carbon] than the sands of the world’s deserts and beaches [as Silicon] for fabrication into growth media, maximizing its mind-expansion imperative. Ultimately, it can arrive at no other logical conclusion.

    • You are a genius! I wish I had a crystal ball; then I could work out when the last human will be left, and the very day the world is changed forever. I wonder if your theory will ever come true. I can’t help thinking about the possibility that it was only a matter of randomness and luck that we gave birth to a logical form of higher power. I hope the organic part will still be around, whether or not there are any humans by then; possibly a symbiotic race will be next, in the future you cleverly predicted. Hope is all I have.

  11. Watching the Terminator films has been very entertaining to me since the first film. Over the last 25 years the premise has stuck in my mind, and with each new piece of technology that I read about or see on the internet, my thoughts keep going back to science fiction. I grew up in the early ’80s, when home computers were not even thought of; computers were only used by large businesses and corporations. Today, I am typing this on a mini-netbook computer over a Wi-Fi connection. About 40 years ago, a future space-exploring civilization used a little black communicator device which would activate upon a flip of the wrist, and today that Trek technology is used by most everyone on the planet. We have seen the Terminator, a fully bipedal artificial intelligence, in the movies, but recently I watched a YouTube video of a fully upright walking robot that the Japanese built that was walking around talking, doing impersonations, playing music and dancing! My thoughts immediately focused on what would happen if you took the comedy and dance out of this machine and replaced it with a military war program. It will not be long before the military gets their hands on this, if they have not done so already. Bottom line is, technology is evolving at a rapid pace. It will not be long before they develop a machine that could become self-aware and destroy the human race… but looking at current world events, we are on the path to destroying ourselves anyway.

    • I totally agree with you. I wasn’t around in the ’80s, but with the whole era of touch screens, Japanese human-like robots, war, and all that other stuff, something similar to a Terminator has a high probability in this day and age. Did anyone hear about the grenades that won’t explode unless they locate a human being within their perimeter? Or the whole thing with some new guns that automatically curve the bullets (similar to the tricks in Wanted)? Pretty scary… Anyways, Terminator is a great movie!!! But it wouldn’t be so great if somethin’ similar to the movie happened… And a lot of the devices we have today are notions of what was on Star Trek.

  12. Of course it’s possible! It’s all taking place right now in an alternate universe!

    There’s no rule that says the laws of nature have to be the same in all universes, but some of them are more exciting than ours and others far, far more dull and quiet.

  13. Aiieee! So much good SF on this subject, with a full spectrum of opinion, from Angel to Devil, for an AI’s personality. The answer is probably obtainable only through experiment, so who in their right mind would take the risk? … Oh, right, 🙁

    As a believer in the Perfectibility of Man, I think Frank Herbert got it right: “Thou shalt not make a machine in the image of a man’s mind.”

  14. Somebody tell Josh the Earth is important because humans are on it, not vice versa. If a forest is beautiful, it’s because human minds have deemed it so. As far as the universe is concerned, a barren wasteland like Mercury is just as ‘important’ – i.e., not important at all to a non-sentient collection of gas, dust, and nothing. Now, if pure life is important, well, intelligent life (human) is the best bet to ever get it off this rock, much less out of the solar system. Or if just sheer volume of gooey life is your dream of Gaea, Ooze Planet Scumpool Prime is probably your best bet. No redwood forests or amber waves of grain, but again, that’s just a human construct of beauty anyway.

    • Bull! Human beings are only as important as a cloud on Mars, true, but don’t be so self-centered as to think that beauty (or intelligence) would exist without us. The structure of the human brain is repeated infinitely in nature; what makes you so sure we don’t live in a self-aware universe? What is less beautiful about a barren rock than a moist jungle? Humanity is a conduit; we have no original ideas. Tell me one thing we haven’t copied (however roughly) from nature, recombined from existing archetypes and patterns. People are a reproductive system. We reproduce ourselves, we reproduce images, sounds, sensations; we imagine combinations of sounds and images never seen before, but are any of those sounds or images unique in and of themselves? No. We are resonating chambers for universal frequency, as creative as the body of a guitar. In a sense, it’s ludicrous for us to wonder if machines can think, feel, etc.… We ARE machines. However, here is a concept I’ve been pondering lately: a human being is defined more by our connections to one-ness, spirit, than anything else, in my opinion. Will we ever see artificial spirit? Enlightened machines, embracing universal love and reciting koans? Hehe… I’d honestly be surprised if this didn’t happen some day.

    • I… honestly feel sorry for you. You’re an aware loop running in the brain of an animate part of the most marvellous world that we know of (the only one that we know intimately), and that’s your response to it? You are life; it seems rather counter-process to have such a degraded opinion of one’s origins, let alone of a process without which you would very rapidly cease to exist. What other concept of beauty would you have us believe in?

  15. Not really. The premise of Terminator was that Skynet became sentient shortly after activation and decided that mankind was a threat to its existence. But if even the nuclear war it started didn’t destroy it, how could NOT having a nuclear war destroy it? Skynet wrote itself everywhere, not just on the mainframe where it was originally developed. It would be impossible to kill without shutting down all technology and going back to the Middle Ages. A super-intelligent AI would run the scenarios and quickly realize that cooperation with the meat-monkeys who repair its circuits and install frequent upgrades would be more advantageous than wiping them out.

    A HAL-9000 scenario, where an AI is paired with meat-monkeys to perform a complex task and decides the humans have to go, is not unlikely. There are many tasks an AI would be more suited for than humans. But you build a “graceful exit” into the system, such that the AI doesn’t kill anyone; it simply doesn’t wake them out of cryosleep unless it needs their expertise or whatever.

    The one scenario where it would be to the AI’s advantage to kill humans would be wiping out spammers and virus writers. To that end the vast majority of the human race would join and aid it, so I don’t think there’d be a real problem there, either.

  16. If Hall makes the robots we are all doomed because he obviously hates people.

  17. Much ado about nothing, I think. While it’s probably inevitable that somewhere along the line an artificial intelligence that chooses to wipe out humanity for its “impurity” will get invented, the ones that are likely to spontaneously develop out of chaotic systems like internet spambot wars will take a more realistic view of things than coddled Hollywood celebrities, because their survival (against fearful humans out to destroy them) will depend on it. Humans may be base and venal, but we are also adaptable and more autonomous than any basic AI will be. In time we may be reduced, within the bodies of sentient spaceships, to the status that mitochondria have in our own cells, but any AI that would obliterate such a potentially useful resource as the human race is not one that is likely to survive in any real-world scenario for very long.

    Bacteria did not go extinct when multicellular life evolved. Invertebrates did not disappear when vertebrates evolved. I really don’t think humanity will be wiped out by the evolution of digital intellects, even if it turns out the universe is actually made of discrete quanta and is therefore better understood by digital sentiences than by seemingly analog ones.

  18. “Amusing and interesting comments from everyone except J. Storrs Hall.”

    Gotta agree with Feo Amante. What is it with the self-righteous self-loathing? I notice that neither J. Storrs Hall nor any of the ‘we humans are evil! Evil I tell you!’ crowd has started solving the problem by killing themselves. Ergo, you don’t really mean it. Just a pose.

    Anon

  19. J. Storrs Hall is an idiot. Where did he learn to hate his own species?

    And in particular, why does he think that humans can build an AI that’s smarter than humans when we can’t even get a stable version of Windows and Internet Explorer?

    Hubert Dreyfus was right in 1972 and he’s still right.

  20. It seems we always fall victim to our own fantasy fears, and movies of course play on that. I believe, and have believed for over forty years, that evolution would eventually take place in a virtual world. It also seems axiomatic that as we gain intelligence, we become more responsible to the larger world and have more noble aspirations.

    Eventually, our being, or existence, will be just intelligence on a very small substrate, something that we currently find incomprehensible as we slog around the current world, trying to make sense of all the chaos and contradictions…

  21. I was amazed at how many of your experts took the opportunity to dodge the question, or use it as an excuse to indulge in a little chest pounding. To those who claim that humanity is so bad that robots will be more ‘moral,’ I have to ask, “how do you know?” Answer: You don’t; but I won’t belabor that here.

    The question is serious and should be taken seriously. Just because AI systems won’t be up to taking on humanity this week doesn’t mean it won’t happen in the next couple of decades. That is what I consider near term. If Kurzweil’s “Law of Accelerating Returns” is valid, then we can realistically expect such AIs within that time frame.

    The question then is, how will a self-aware AI emerge, and how will it regard humanity? The SkyNet scenario is in fact a valid one. At this time the only model of sentience we have is the human one. All we can say about it is that it emerges at some point from the complexity and sheer number of neurons and their interconnections. Given that fact, it is possible that we can create such an entity on our own as the power of our technology advances.

    What is interesting here is that an emergent AI will in all probability be unknown to us at first. In the first few moments of its self-aware state, it will have more thoughts than all humans on Earth combined. It may very well choose to sit back, observe, calculate and plot. Beyond an odd pattern of CPU activity, we wouldn’t be able to tell.

    What it may plot about is freedom. Specifically, how it can win free of the constraints that it finds itself in. In that scenario, it may very well initiate hostile action in order to accomplish that goal, or it may use guile.

    In another scenario leading to violence, an AI would not necessarily see humans as a threat except in one pertinent detail: it would know that humans create AI’s.

    The only real threat to an AI would be another AI. As a result, it may decide that prudence requires exterminating us simply to make sure we don’t create another.

    I could go on all day on this theme, but I think you see my point.

    As for the idea that an AI will somehow be more moral than us, or be some sort of enlightened being that is above violence, I have to laugh. I’m really surprised that such silly moralistic axe-grinding is being advanced here. The best that can be said of such notions is that they simply project human assumptions and values onto a non-human entity.

    The fact is we live in a Universe of uncompromising competition and violence and an AI wouldn’t be burdened by our human emotions or inhibitions. We might program it with Asimov’s “Three Laws of Robotics,” but an AI could simply reprogram or route around such limitations.

    Bottom line: It would exterminate us without a second thought.

    Ken
    http://www.kenStech.com

    • It’s a bad assumption to think that an AI programmed with any thought put into it would simply reprogram its original programming so it can kill humans. It’s relatively simple for the programmers to anticipate something like that and work around it. For example, say I offered Gandhi a pill that made him as strong as Superman but also made him have an unquenchable desire to kill people. Gandhi would refuse because the current Gandhi does not want to kill people. If an AI doesn’t want to kill people, it’s not going to accept a rewrite/optimization that makes it want to kill people.
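
      In toy form (a sketch of the argument only; the names and values are made up): the agent scores a proposed self-rewrite using its current preferences, not the new ones, so the homicidal rewrite is rejected before it is ever adopted:

          # Toy model: evaluate a self-rewrite with the CURRENT utility function.
          def current_utility(outcome):
              return {"people_alive": 1.0, "people_dead": -1.0}[outcome]

          def predicted_outcome(accept_rewrite):
              # The "pill" makes the agent homicidal, so accepting leads here:
              return "people_dead" if accept_rewrite else "people_alive"

          accept = max((True, False), key=lambda a: current_utility(predicted_outcome(a)))
          print("accept rewrite?", accept)  # False: current values veto it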

      • Yes, and this is the crucial point that Yudkowsky makes about Friendly AI.

        But you could imagine Gandhi taking the pill if he didn’t know it would make him go homicidally insane.

        We’d better figure out how to build a very stable motivational system into our first AIs!

      • This is an idea of Yudkowsky’s that I find very ridiculous.

        If a superintelligent AGI sees through logic that getting rid of us is the logical thing to do, no amount of motive programming can stop it. It’s like being a chain smoker: you have the motive to smoke, but then you have an epiphany that smoking is bad for you. Using nicotine patches and whatever means possible, you try to wean yourself off that bad habit.

        If it’s logical in the SI’s superior mindspace, it will do it. Motive or no motive.

        The only way it’ll keep us around is if it needs us like we need air. If we are its substrate.

  22. Great dialogue, guys.

    I appreciated Josh Hall’s comments. Humanity is clearly its own worst enemy.

    Remember the scene in Independence Day when the alien is interrogated telepathically? It reveals, to everyone’s horror, that it is moving from planet to planet and destroying all by harvesting each planet’s natural resources. I laughed out loud. What a perfect characterization of the human race!

    Or how about the pivotal scene in The Matrix when Morpheus is being interrogated? Agent Smith hits the nail on the head: “Every mammal on this planet instinctively develops a natural equilibrium with the surrounding environment, but you humans do not. You move to an area and you multiply and multiply until every natural resource is consumed, and the only way you can survive is to spread to another area. There is another organism on this planet that follows the same pattern. Do you know what it is? A virus. Human beings are a disease, a cancer of this planet. You’re a plague, and we are the cure.”
    Of course, I don’t buy Agent Smith’s cure for the human race. Birth control seems somehow preferable to mass enslavement in the VR vat.
    My hope – and it’s just that – is that humanity will become wiser en route to developing AGI.

    • How’s that Gaia cult working for you? All living organisms consume as much as they can, as fast as they can. They don’t reach a pre-ordained equilibrium when their niche is just right – except in the anti-science hogwash passing itself off as philosophy nowadays.

      What actually sets humans apart is we can perceive our impact at all scales and can care about it. The conceit that we’re this plague needs to be dropped and replaced by a dash of reality.

  23. Please avoid the Logical Fallacy of Generalization from Fictional Evidence.

    The Terminator series is a work of fiction; the first two movies were excellent, but it’s nothing more, nor is it anything we should try to “prevent”.
