Elon Musk is More Dangerous than AI

I was enjoying a quiet weekend when my news feed started filling up with stories such as Elon Musk warns us that human-level AI is ‘potentially more dangerous than nukes’ and Elon Musk: AI could be more dangerous than nukes. Wow! Now usually I am a huge fan of Mr. Musk, his approach to innovation, and many of the amazing things he and his teams have accomplished at Tesla and SpaceX, for example. But I have to say that I strongly disagree with Mr. Musk about the dangers of AI. First and foremost, AI is not now, nor will it ever be, "more dangerous than nukes". Second, and perhaps even more importantly, restricting AI research is in itself dangerous. Let me explain.

What Exactly Did Musk Say?


This flurry of articles was all generated by a couple of tweets Musk posted to his Twitter account. I've included them below.

[Embedded tweets from Elon Musk's Twitter account]

He’s a bit vague here, unsurprising given that we are talking about just a few tweets, but let’s examine the details.

How Dangerous Are “Nukes” Anyway?

Mr. Musk's exact statement was that AI is "Potentially more dangerous than nukes." But just how dangerous are nuclear weapons anyway? Simply stated, nuclear weapons are extremely dangerous and constitute an existential threat to humanity and all life forms on Earth. So to be "more dangerous" than nukes, AI has to be really, really dangerous. Only two nuclear weapons have been used in warfare; together they killed around 250,000 people and destroyed two cities. However, the testing and manufacture of early nuclear devices was itself dangerous, and entire populations were exposed to radiation, as were many of the scientists working with early devices and materials. The total number of deaths from radiation exposure resulting from the testing and manufacture of nuclear devices is unclear, but an estimate of 20,000 people worldwide dying from cancer as a result of nuclear testing wouldn't be unreasonable and might well be low. Current nuclear arsenals, however, number in the thousands of warheads, and modern devices are far more powerful and destructive than those first bombs.

Country | Warheads (active / total) | Date of first test | CTBT status

The five nuclear-weapon states under the NPT
United States | 1,920 / 7,315 | 16 July 1945 ("Trinity") | Signatory
Russia | 1,600 / 8,000 | 29 August 1949 ("RDS-1") | Ratifier
United Kingdom | 160 / 225 | 3 October 1952 ("Hurricane") | Ratifier
France | 290 / 300 | 13 February 1960 ("Gerboise Bleue") | Ratifier
China | n.a. / 250 | 16 October 1964 ("596") | Signatory

Non-NPT nuclear powers
India | n.a. / 90–110 | 18 May 1974 ("Smiling Buddha") | Non-signatory
Pakistan | n.a. / 100–120 | 28 May 1998 ("Chagai-I") | Non-signatory
North Korea | n.a. / <10 | 9 October 2006 | Non-signatory

Undeclared nuclear powers
Israel | n.a. / 80 (estimates range 60–400) | Unknown (suspected 22 September 1979) | Signatory

Even if we only consider active warheads, there are around 3,000 active nuclear devices currently in the world. The table above should make it clear that this is a very conservative estimate of the number of devices; the U.S., for instance, has 1,920 active warheads but 7,315 in total, and Russia has 1,600 active devices. According to one estimate, just 300 of these devices used against the United States would cause 90 million casualties within 30 minutes. A U.S. strike against Russia would be expected to be similar in scope. Now, we don't really know what Musk is talking about when he says "nukes", as this is pretty imprecise terminology. Again, he is just tweeting quickly here, and this is expected. But since he uses the plural, I think he does not mean just one nuclear device. A full-scale nuclear exchange, global thermonuclear war, would be expected to kill at least 200 million people in the first hour. This is a highly conservative estimate, and many more people would die from radiation, starvation, and other causes within hours or days. It seems conceivable that something approaching 1 billion people might die within 24 hours, depending on the specifics. The map below (from FEMA-estimated primary counterforce targets for Soviet ICBMs circa 1990) gives you some idea of the effects of a limited nuclear strike on the U.S., with fall-out zones shaded from the darkest (considered lethal) to yellow (the least dangerous).

And this doesn’t include second or subsequent nuclear strikes. Both the U.S. and Russia have the capability and strategy to launch a second strike should such an exchange occur.

Beyond the immediate casualties and deaths from radiation, some scientists believe a full-scale nuclear war could cause a "Nuclear Winter" which could possibly terminate all life on Earth. The Nuclear Winter scenario is speculative and we don't know exactly what would happen, but we can clearly see that it would be very bad. At least 1 billion deaths, and possibly the end of all life on Earth, is within the reach of existing and disclosed nuclear arsenals. So when Mr. Musk says AI is more dangerous than nuclear weapons, he is claiming that the technology will result in millions if not billions of human deaths. To summarize, nuclear weapons have probably killed 250,000 people or thereabouts and have the capability to kill pretty much everyone. It's not exactly clear to me how AI can be more dangerous than killing everyone and all life on Earth, but if we just take my limited estimate above of 200 million deaths in one hour, then to be more dangerous than nuclear weapons, an AI has to be able to kill more than 3.3 million humans per minute.
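For what it's worth, that per-minute figure is just the hourly estimate divided by sixty; a trivial back-of-the-envelope check:

```python
# Back-of-the-envelope check of the per-minute kill rate implied by the
# conservative estimate above (200 million deaths in the first hour).
deaths_first_hour = 200_000_000
deaths_per_minute = deaths_first_hour / 60
print(f"{deaths_per_minute:,.0f} deaths per minute")  # ~3,333,333, i.e. roughly 3.3 million
```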

Musk is making an extraordinary claim here about how dangerous AI is.

What is Musk Really Worried About?

Again, all we have are these two tweets, so perhaps it is a bit presumptuous to say exactly what Mr. Musk is thinking. However, he does mention reading Nick Bostrom's recent book Superintelligence, and his statement was made in the context of having just finished the book. So we can assume that he is talking about the scenarios presented in Bostrom's book, and specifically the idea of an intelligence explosion leading to a system with rapidly increasing intelligence that becomes superintelligent shortly after being created, that is, more intelligent than all humans combined.

In science fiction stories such as The Terminator, War Games, The Forbin Project, etc., the rise of artificial intelligence is depicted as dangerous. Bostrom's book is technical nonfiction, but it essentially falls into this same mold. Bostrom presents and then discounts various counterarguments, but the entire book has a massive flaw: how realistic is this idea of an intelligence explosion? It's not entirely clear.

To see why, consider that I'm the best tic-tac-toe player in the world. I don't care how smart you are, you can't beat me at tic-tac-toe. No superintelligence is better at tic-tac-toe than I am, no matter how vastly intelligent it is. With tic-tac-toe it's easy to see why: the game has a finite number of possible moves, and once you know the best strategy you can't lose. But consider a larger tic-tac-toe game, 4×4, 5×5, 6×6, and so on. You can also consider 3D tic-tac-toe and even higher-dimensional games.

A superintelligence might be able to beat me at larger games, say a 100×100 tic-tac-toe game, or 3D tic-tac-toe. However, the game is still finite, and so it is unclear that another intelligence, much smarter than the first, can play better. It depends on how much each player knows about the game as well as on the specific properties of the game itself. So depending on the environment in which we are acting, an unbounded intelligence explosion might not happen. An AI might get smart enough that being smarter wasn't an immediate advantage, and the cost of increasing intelligence could outweigh any advantage. Certainly humans tend to overestimate the advantages conferred by their own intelligence.
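To make the tic-tac-toe point concrete, here is a minimal sketch (my own illustration in Python, not anything Musk or Bostrom wrote) of a perfect 3×3 player built by exhaustive minimax search. Because the game is finite and solved, no opponent, however superintelligent, can do better than draw against it.

```python
# A minimal sketch of a perfect tic-tac-toe player using exhaustive minimax.
# The board is a 9-character string; 'X' maximizes, 'O' minimizes.
WIN_LINES = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (value, best_move) for the side to move."""
    w = winner(board)
    if w == 'X':
        return 1, None
    if w == 'O':
        return -1, None
    if ' ' not in board:
        return 0, None  # draw
    options = []
    for i, cell in enumerate(board):
        if cell == ' ':
            value, _ = minimax(board[:i] + player + board[i+1:],
                               'O' if player == 'X' else 'X')
            options.append((value, i))
    return max(options) if player == 'X' else min(options)

value, move = minimax(' ' * 9, 'X')
print(value, move)  # value is 0: with perfect play, nobody ever beats this player
```

The value of the empty board is 0: perfect play from both sides is a draw, and no amount of additional intelligence changes that.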

Intelligence vs. Autonomy

More importantly, autonomy is more dangerous than intelligence. Dangerousness requires the ability to do harm, and therefore also the ability to act freely (autonomy). It does not, however, require intelligence. A simple feedback loop could kill everyone on Earth, and it would be dumber than the simplest flatworm. It is easy to see that intelligence is not the same as dangerousness, and it isn't even always correlated with it. Consider who you would rather fight to the death:

a. an unarmed man with 170 IQ and both his arms tied behind his back or

b. a huge angry man with 70 IQ holding a blunt instrument.

You don’t have to be smart to be dangerous.
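To underline the point, here is a deliberately dumb sketch of a bare feedback loop that acts in the world whenever a sensor reading crosses a threshold. The `read_sensor` and `actuator` functions are hypothetical placeholders, not any real API; nothing in the loop resembles intelligence, yet the danger depends entirely on what the actuator is wired to.

```python
import random
import time

def read_sensor():
    # Hypothetical stand-in for any real-world measurement.
    return random.uniform(0.0, 1.0)

def actuator():
    # Hypothetical stand-in for an action in the physical world.
    # The danger lies entirely in what this is wired to, not in any "intelligence".
    print("acting on the world")

THRESHOLD = 0.9

while True:
    if read_sensor() > THRESHOLD:   # a single comparison: dumber than a flatworm
        actuator()
    time.sleep(1.0)
```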

This is a pretty important point, because all of the recent discussion of the potential dangers of AI is focused on "intelligence" and "superintelligence". But intelligence alone is not dangerous. For example, a super-intelligent system that is not autonomous but can only act with my permission is not dangerous to me. Further, in order to do harm, the system has to be able to act in the world or cause someone or something else to act. An appropriately isolated superintelligence also can't hurt anyone. So the fear is really not only about the rise of a super-humanly intelligent system, but about the rise of one that is autonomous and free, so that it can act in the world.

Sure, I agree, such a system could be highly dangerous and we’ve all seen the movies and read the books where this is what happens. However the focus on intelligence is misleading and in itself dangerous.

Elon Musk is more dangerous than AI because he is autonomous and free to act in the world.

Reality check: a machine that hunts and kills humans in large numbers wouldn't need to be more intelligent than an insect. And yes, it would be super dangerous to make such machines, especially if we add in the idea of self-replication or self-manufacture. Imagine an insect-like killing machine that can build copies of itself from raw materials or repair itself from the spare parts of its fallen comrades. But notice that its relative intelligence, or lack thereof, has little to do with the danger of such a system. It is dangerous because it has the ability to kill you and is designed to do so. The fact that this machine can't play chess, converse in English, or pass a Turing Test doesn't change anything about its ability to kill.

The danger then is not that we will create an intelligence greater than our own, but that we will embody these intelligences into autonomous systems that can kill us either on purpose or by accident.

Preventing the Rise of Dangerous AI

Let’s accept for a moment Musk’s assertion that AI is potentially more dangerous than nuclear weapons. What should we do about this?

The best thing to do would be to keep the knowledge of how to build an AI secret and to hide even the possible existence of such a machine from the world. But it's already too late.

For a while that’s what we did with nuclear weapons. But then we detonated two of them, and the cat was out of the bag.

Since then, the U.S. and the world have created a vast security and surveillance apparatus largely devoted to managing and controlling nuclear weapons and materials. The operation of this apparatus is formalized through a series of complex agreements and arms-control treaties; it includes vast technical systems and involves a large number of people in multiple nations and organizations. Few people have any idea how vast and far-reaching this apparatus is. The Snowden revelations will give you some idea, and that isn't the whole story.

And despite this almost omnipresent global security apparatus with vast financial resources, nations such as Pakistan, North Korea, and Libya were able to gain access to the technology to make nuclear weapons and various subsystems, and in some cases they have demonstrated working weapons. This is despite the best efforts of the global security apparatus to prevent this exact outcome. We can't secure nuclear weapons perfectly, so the idea that we can secure AI perfectly is at least in question. What would be required?

With nuclear weapons, both the materials and the designs are illegal to possess. Manufacture of a weapon requires the raw materials, general scientific knowledge, and also specific design details and engineering knowledge. The details of working weapons are all highly classified, and even just possessing them without permission will land you in prison. Imagine a similar security regime applied to AI. First, we'd have to restrict the materials used to make AIs. Those would be computers and software tools such as programming languages and compilers. Only individuals working on classified projects with appropriate security clearances would be given access to them. Further, illegal possession of programmable computers or development tools would be a serious felony and would carry high criminal penalties. Surveillance and law enforcement would be involved and would act with extreme prejudice against anyone suspected of having these items or of developing AI.

But creating an AI is something you can do at home on your personal computer. Even when you need larger computational resources, these are now available on demand in the cloud or can be built fairly inexpensively from commercial components such as graphics accelerator cards. Restricting AI would mean restricting access to these tools and systems as well. Beyond this, the specific engineering knowledge associated with making dangerous AIs would have to be protected.
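As a rough illustration of how ordinary the "materials" in question are, here is a minimal sketch of the kind of experiment anyone can already run at home with a stock Python install and NumPy: a tiny neural network learning XOR by backpropagation. Restricting "the materials used to make AIs" means restricting tools and knowledge this commonplace.

```python
import numpy as np

# A tiny two-layer neural network learning XOR by backpropagation -- the kind of
# experiment anyone can already run at home with freely available tools.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # hidden layer: 4 units
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # output layer: 1 unit
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(20000):
    h = sigmoid(X @ W1 + b1)                 # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)      # backpropagate the squared error
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out                  # gradient-descent updates
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))  # typically converges to roughly [0, 1, 1, 0]
```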

It would become illegal to implement, or possibly even to know about, certain algorithms, areas of mathematics, and so on. This idea isn't unprecedented; consider, for example, the efforts of the U.S. government to restrict knowledge of cryptography algorithms in the 1990s. Certain programs, e.g. those associated with classified weapon systems, are themselves classified, and unauthorized possession of these codes is illegal. If we follow Musk's argument, simply having access to AI software will become a crime. But would it be enough?

Some attention has been given to the notion that intelligence is an “emergent” phenomenon of the human neural system. This suggests that machine intelligence could also be an emergent phenomenon of an underlying system or network. Could a dangerous AI emerge unexpectedly from a safe system? Certainly if we build and field systems whose operation we don’t fully understand, unexpected events can transpire. Existing deep learning systems are examples of systems that work, but do so by a mechanism which humans don’t and possibly can’t really understand.

Some other researchers have focused on the creation of provably beneficial or provably friendly AIs. The notion here is to build AIs according to some rules, or within certain constraints, that ensure the resulting AI is friendly. But the idea is fundamentally flawed in a very deep way. First, defining "friendly" is a huge problem. Even seemingly friendly systems can become unfriendly if the context changes or when taken to extremes. Bostrom covers some of these scenarios in his book. But this is also true of simple maximizing systems that have poorly specified goals; it isn't about AI per se.

Further, a friendly program can be modified or subverted to become unfriendly. Alternatively, a seemingly friendly program might contain hidden functionality that is unfriendly. There is in general no way to prove that an arbitrary program presented to us is friendly and secure. In fact Rice's Theorem, a not very well known result in computer science, states that in general this can't be done. So we can't know whether a presented piece of software includes a dangerous AI simply by looking at it.
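The flavor of the Rice's Theorem argument can be sketched informally. Suppose someone claimed to have a checker that decides, for any program, whether it is "friendly". Then we could write a program that consults the checker about itself and does the opposite of whatever it predicts, a contradiction. The Python below is a hand-waving illustration of that standard proof idea (the names are made up), not working verification code.

```python
# Hand-waving sketch of the Rice's Theorem argument (made-up names, not real code).
# Suppose someone claims to have a total, always-correct friendliness checker:
def is_friendly(program_source: str) -> bool:
    ...  # assumed to halt on every input with the right answer -- the impossible part

# Then we could write a program that asks the checker about itself and does the
# opposite of whatever the checker predicts:
TROUBLEMAKER = """
if is_friendly(TROUBLEMAKER):
    do_something_unfriendly()   # hypothetical harmful action
else:
    do_nothing()                # perfectly friendly behavior
"""

# If is_friendly(TROUBLEMAKER) returns True, the program misbehaves; if it returns
# False, the program behaves. Either way the checker is wrong about it, so no such
# total, always-correct checker can exist for this kind of behavioral property.
```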

I don’t understand how one can assert that AI is so dangerous that it might destroy the human race and yet at the same time invest in it. But that is exactly what Mr. Musk is doing. Perhaps he will close Vicarious or use it to promote the security apparatus described above. But perhaps he just thinks the research they are doing at present is “safe”. The problem is that we don’t know what they are doing.

Moreover, a bad outcome might look nothing like The Terminator scenario. For example, we might cede control over our lives to such systems, giving them control over food production and our ability to survive. Consider a plausible future in which food production is entirely robotic and food is produced in technological vertical farms or similar systems. We might lose the ability to produce food ourselves, or forget how to repair or maintain the systems, and so on. The point here is that not all bad outcomes are obvious; they might start off looking like good ideas.

The security regime that would be required to secure AIs would be similar to that for nuclear weapons, but it would have vast negative consequences for our society and intellectual lives. Knowledge of computers and programming would be tightly controlled, rendering our economy less innovative and productive. All employees of AI companies would require high level security clearances. A lot of smart people would simply refuse to go through the hassle and our competitiveness would suffer severely.

Notably, those who fear an AI doomsday aren't the only people who want to limit your access to free general computation. The recording and media industries would also love to terminate the ability of citizens to compute arbitrary programs on unlocked machines. The President of the United States just signed a law allowing Americans to unlock their phones; what is being proposed here is a vast leap backwards, where you wouldn't even be allowed to own a phone that could be unlocked. Cory Doctorow's essays Lockdown: The War on General Purpose Computing and The Coming Civil War on General Computing cover this subject nicely.

Consider what happens if, in Cory's essays, we replace "copying" with "creating dangerous AI", e.g. "In short, they made unrealistic demands on reality and reality did not oblige them. Copying only got easier following the passage of these laws—copying will only ever get easier. Right now is as hard as copying will get."

Right now is as hard as creating dangerous AIs will get.

But trying to stop people from creating them will end our ability to have a free and open society.

Of course Mr. Musk never says any of this, and I have no idea whether he has even considered this aspect of the issue. However, if, as he asserts, AI really is more dangerous than nuclear weapons, you can see the immediate need for an appropriately significant security regime. This is implied by his assertion that AI is "potentially more dangerous than nukes", and it is a highly dangerous idea itself in my view.

Imagine a world in which it is illegal for citizens to own programmable machines or to own tools for programming. People would be surrounded by complex and intelligent systems, but they would have no idea how they worked and no way of accessing this knowledge. They would quite literally be prisoners of the matrix.

I think we need to move in the opposite direction, empowering more people to code and to understand code and how it works.

Ignoring the Real Benefits from AI

This is a case where the proposed cure is far worse than the disease. As I have argued above, intelligence isn’t dangerous by itself, but restricting the development of intelligence might be.

Beyond suggesting the need to control general purpose computation, perhaps the most dangerous aspect of Musk's tweets is that he entirely ignores the possible benefits of AI. Again, it's easy to read too much into these tweets. I assume that a belief in the potential for good was part of the reason for his interest in Vicarious, and that Musk imagined their technology could be used to create things that make people's lives better.

He seems to have forgotten this part. See for example, The Promise of a Cancer Drug Developed by AI.

We are just now starting to see some really interesting results from AI systems like Vicarious' and IBM's Watson. These systems have amazing potential applications in areas such as medicine and health care. They will help us live longer and be healthier. Banning or restricting AI research would limit or restrict research into beneficial uses of AI in areas such as medicine, drug design, or the development of longevity therapies. Imagine if we restricted humans from getting smarter too; it obviously makes no sense. If we succumb to this sort of fear mongering, and Musk is not alone in propagating it, we will lose all sorts of advantageous developments that rely on AI for their operation.

This is a clear case where the proactionary principle applies, especially if you agree that the existential risks of AI are being overstated here. AI might also help us end poverty, prevent wars, and more. To make a rational decision about a technology you need to consider both its possible benefits and its risks, not only the risks. The same goes for other technologies, such as SpaceX's rockets, which are also sometimes known as "ballistic missiles". So please, Mr. Musk, let's also talk about the potential vast benefits of AI for humanity and not just frighten people with movie doomsday fantasy scenarios. Hyperbole and exaggerated risks do not move the debate forward. Please don't support the rise of an even more pervasive and oppressive global security regime to control AI research and computing more generally. That idea is even more dangerous than AI.

[updated to include link to http://www.fastcoexist.com/3033737/the-promise-of-a-cancer-drug-developed-by-artificial-intelligence]

h+ Magazine Forums: Elon Musk is More Dangerous than AI


    #19267

    Peter
    Keymaster

    I was enjoying a quiet weekend when my news feed started to pop up with stories such as Elon Musk warns us that human-level AI is ‘potentially more dangerous than nukes’ etc. Wow!

    [See the full post at: Elon Musk is More Dangerous than AI]

    #19278

    Ralf Lippold
    Participant

    Thanks a lot, Peter, for bringing out the many aspects around nukes and AI.

    If Elon's intervention through Twitter, just two tweets that got the conversation rolling, is worth anything, it is that it initiated a conversation about two topics that humanity at large fears (whether rationally or irrationally) nowadays.

    Back in 1986, while I was serving in the army in Andernach (which happened to be the first army installation of the young Bundesrepublik Deutschland after WWII), the Chernobyl catastrophe happened. And to be honest, lots of words were spread about how dangerous or not the fallout would be, and about the long-term effects on living creatures. Now the region around the former nuclear plant is off-limits, and that is almost 30 years ago.

    As for AI, my first direct contact with what AI could mean was during a visit to the MIT Museum back in 2009. Of course lots of good is possible, and some of it is already on the way.

    However, the human mind most easily connects with what it has already encountered in the past (whether personally or through stories seen or heard on radio or TV). Imagining the combination of nuclear technology (especially what happened in Fukushima in 2011) and AI (which by and large still functions and develops within the limits of the human minds that pre-program and create these systems), Elon Musk's thoughts are not out of this world, nor out of line with general public thinking.

    Catching this accelerated conversation and leading it towards a fruitful dialogue on two topics that both have (in certain niches and managed technological surroundings) amazing positive power will be the best outcome of what Elon started with his most creative and innovative move in Twitterville (where even today not many people are reading, as they think 140 characters can't carry a message). When I started my studies of economics at the University of Mainz, we got a tour through campus and also visited the institute that was named after, and led by, Otto Hahn; there we stood right next to a 100 kW nuclear reactor, which every other day was deliberately, and safely (!), heated up.

    #19289

    Peter
    Keymaster

    Sure, conversation is good. That is why I wrote my article of course and I appreciate that you took the time to reply.

    However I don’t agree with his claims about AI at all.

    #19290

    Slaqr
    Participant

    My compliments to you for this article, it was a good read and I agree with your views on this issue.

    The way Mr. Musk said what he said does make it seem a bit like fear-mongering, although I'm sure his perspective isn't that black and white; usually Twitter and its annoying character limit are to blame for that.

    Personally, I think we should treat an AI the way we would treat contact with an extraterrestrial race: take the appropriate security measures and then see if we can learn anything from it.

    After all, we have no way of knowing what its personality or intentions would be like, which would ultimately govern what it would do if it could influence the real world.

    Regardless, we need to have intelligent people thinking on both sides of the spectrum; having people with strong reservations and concerns as well as people with great ambitions and ideas on the subject of artificial intelligence is essential to make sure we handle this properly, without sacrificing our opportunities.

    #19291

    Zach
    Participant

    It’s an interesting topic of conversation for sure and your article provides many insights. There are very good potential outcomes and very bad potential outcomes. The key word being potential, which Musk uses. You seem to be responding to an imaginary scenario where Musk tweeted something like “AI is more dangerous than nukes”. Unless I’m missing something, nowhere did he discourage participation in or advancement of AI. He’s simply saying that care should be taken given that bad outcomes are possible. The following quote, from this article, makes it clear that you agree with him.

    “…we might cede control over our lives to such systems giving them control over food production and our ability to survive. Consider a plausible future in which food production is entirely robotic and food is produced in technological vertical farms or similar systems. We might lose the ability to produce food ourselves, or forget how to repair or maintain the systems, and so on…”

    It seems like that could result in every person on Earth starving to death, which aligns with what the tweets in question are saying. We get it, there’s good and bad with AI just like there is with everything else.

    “If we follow Musk’s argument, simply having access to AI software will become a crime” – Can you supply a written or recorded instance of him arguing this? I'm not seeing it in the tweets and I've never heard him say anything like it. It seems to me like you took the opportunity to use something that somebody popular recently tweeted, along with their name, to draw attention to your article on the topic. Am I missing something here?

    #19292

    Peter
    Keymaster

    I’m saying that it is inherent in the comparison to nuclear weapons. If general AI research is as dangerous as nuclear weapons you can expect a similar set of security precautions. Is it legal to own a nuclear weapon design? No it isn’t. And if AI is as dangerous or possibly more dangerous, then AI designs will be illegal to own as well. As I said in the article multiple times, we don’t really know what he is thinking because it is just a few tweets.

    #19298

    Zach
    Participant

    You’re basing all of this off of a misunderstood premise. Musk isn’t comparing a single result of a certain technology to the whole of another technology. However, that is what you’re doing and it doesn’t make sense. Instead he’s comparing one result of nuclear power research to one or more potential results of artificial intelligence research. This does make sense.

    You’re right, it’s illegal to have a design for a nuclear bomb and therefore it is reasonable to extrapolate the same for a network of armed drones designed to kill a billion people within 24 hours (or something).

    However, it’s not illegal to have plans for a nuclear powered car or to actually build a nuclear power plant to power houses. Therefore it is not reasonable to think that it will ever be illegal to have plans for a machine that combines a water source and a humidity sensor to keep your plants alive, or contact lenses that monitor glucose levels in the tears of people with diabetes.

    Nuclear bombs were discovered and humanity doesn’t shun all of nuclear tech or ignore its benefits, so how does it make any sense to think we would do that with AI?

    #19305

    Peter
    Keymaster

    Actually building a nuclear powered car without the proper authorizations and security precautions might be illegal. It depends on the specifics of the design and materials used, how you get them, and how you secure them.

    See also:

    http://gamepolitics.com/2008/01/10/gamer-builds-nuclear-reactor-in-home-fbi-pays-a-visit#.U-JQtIBdVTw

    Let's say that instead of a nuclear reactor, gamer guy builds a "military AI" for use in a military simulation game like Call of Duty. How is this AI different from a real-world combat AI? As the games become more real, so do the AIs.

    It seems the only difference is actually connecting the game AI software to an armed robot, like the drones you mention. Current game AIs might be sufficient to hunt and kill humans, see my comments about insect level intelligence in the article.

    If AI software is treated like nuclear weapons then this sort of AI would become illegal and controlled. You might need the equivalent of a Q-clearance to work on game AI.

    #19373

    Zach
    Participant

    Yeah, since even the most harmless uses of a nuclear reaction could potentially be very dangerous, it makes sense to monitor such things closely. The same cannot be said of AI. Medicine is maybe a better example of how such a predicament might be handled; some of it is behind lock and key and some just sits on the shelf for anybody to grab. The availability of different drugs is based on how dangerous the chemicals can be. Such determinations are possible with AI as well.

    Regarding your military AI example and the assertion that AI gets more realistic as games do, I disagree again. The AI required to make such a thing work in a video game is incredibly different from, and much, much simpler than, what would be required to produce something that even comes close to being as effective in the real world. Consider a robotic enemy soldier (AI) that tries to seek out and stab you (or the player, in the case of a video game) to death in a forest…

    In the real world the robot would have to gather data from a bunch of inputs like cameras, microphones, thermal sensors, etc… It would then have to analyze all of this data and make guesses as to where you are and try to move towards you in real time. Keep in mind too that this data is faulty and incomplete, and these sensors can be tricked by almost anything. A leaf falling in front of the camera could tamper with the data and cold rain in the air could confuse a thermal sensor and a microphone, etc. To get more accurate you’d have to install more types of sensors and/or upgrade existing ones all the time and you’d never get something completely reliable. Making these devices work together would also require heaps of code.

    Given that everything in the world of a game is created by the game, it is omniscient in that world. Furthermore, it can instantly pass any of its knowledge to any object in the game. Kinda like God. So if everything in a game has instant access to perfectly accurate data about every other object in the world, the process of gathering and analyzing data completely is eliminated. If you ran behind a tree in a game, the enemy would still just know exactly where you are.
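    Here's a minimal sketch of that difference (the names and numbers are made up): the game "AI" simply reads the target's exact position from the engine, while the real-world robot has to estimate it from noisy, sometimes missing, sensor readings.

```python
import random

# In a game, the engine is omniscient: the "AI" just asks for the target's position.
def game_target_position(world):
    return world["player_position"]            # exact, instant, free

# In the real world, the robot has to infer position from noisy, incomplete sensors.
def real_target_estimate(sensor_readings):
    usable = [r for r in sensor_readings if r is not None]   # sensors can fail outright
    if not usable:
        return None                            # no idea where the target is
    return sum(usable) / len(usable)           # crude fusion, still off by some error

world = {"player_position": 12.0}
readings = [12.0 + random.gauss(0, 2.0), None, 12.0 + random.gauss(0, 2.0)]
print(game_target_position(world))             # exactly 12.0
print(real_target_estimate(readings))          # somewhere near 12.0, maybe not
```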

    Also note that that process is where pretty much all of the code, programming effort, and computer processing power would go in a real-world implementation. And that's just one part of one huge difference; there are many others…

