h+ Magazine

Elon Musk is More Dangerous than AI

    #19267
    Peter
    Member

    I was enjoying a quiet weekend when my news feed started filling up with stories like “Elon Musk warns us that human-level AI is ‘potentially more dangerous than nukes’” and so on. Wow!

    [See the full post at: Elon Musk is More Dangerous than AI]

    #19278
    Ralf Lippold
    Participant

    Thanks a lot, Peter, for bringing out the many aspects around nukes and AI.

    If Elon’s intervention through Twitter, just two tweets that got the conversation rolling, is worth anything, it is worth it for initiating a conversation about two topics that humanity at large fears (whether rationally or irrationally) nowadays.

    Back in 1986, while I was serving in the army in Andernach (which happened to be the first army installation of the young Bundesrepublik Deutschland after WWII), the Chernobyl catastrophe happened. And to be honest, lots of words spread about how dangerous or harmless the fallout would be, and about the long-term effects on living creatures. The region around the former nuclear plant is off-limits to this day, almost 30 years later.

    As for AI, my first direct contact with what it could mean was during a visit to the MIT Museum back in 2009. Of course, lots of good is possible, and much of it is already on the way.

    However, the human mind connects most easily with what it has already encountered in the past (whether personally or through stories seen or heard on radio or TV). Imagining the combination of nuclear technology (especially what happened in Fukushima in 2011) and AI (which by and large still functions and develops within the limits of the human minds that pre-program and create these systems), Elon Musk’s thoughts are not out of this world, nor outside general public thinking.

    Catching the accelerated conversation and leading it towards fruitful dialogue on two topics that both have amazing positive power (in certain niches and managed technology surroundings) will be the best outcome of what Elon started with his most creative and innovative move in Twitterville (where still today not many people are reading, as they think 140 characters can’t carry a message). When I started my studies of economics at the University of Mainz, we got a tour through campus and also visited the institute named after and led by Otto Hahn; there we stood right next to a 100 kW nuclear reactor, which every other day was deliberately, and safely (!), heated up.

    #19289
    Peter
    Member

    Sure, conversation is good. That is why I wrote my article, of course, and I appreciate that you took the time to reply.

    However, I don’t agree with his claims about AI at all.

    #19290
    Slaqr
    Participant

    My compliments on this article; it was a good read, and I agree with your views on this issue.

    The way Mr. Musk said what he said does make it seem a bit like fear-mongering, although I’m sure his perspective isn’t that black and white; usually Twitter and its annoying character limit are to blame for that.

    Personally, I think we should treat an AI the way we would an extraterrestrial race we had made contact with: take the appropriate security measures and then see if we can learn anything from it.

    After all, we have no way of knowing what its personality or intentions would be like, which would ultimately govern what it would do if it could influence the real world.

    Regardless, we need to have intelligent people thinking on both sides of the spectrum; having people with strong reservations and concerns as well as people with great ambitions and ideas on the subject of artificial intelligence is essential to make sure we handle this properly, without sacrificing our opportunities.

    #19291
    Zach
    Participant

    It’s an interesting topic of conversation for sure, and your article provides many insights. There are very good potential outcomes and very bad potential outcomes, the key word being “potential,” which Musk uses. You seem to be responding to an imaginary scenario in which Musk tweeted something like “AI is more dangerous than nukes.” Unless I’m missing something, nowhere did he discourage participation in or advancement of AI. He’s simply saying that care should be taken, given that bad outcomes are possible. The following quote, from your article, makes it clear that you agree with him.

    “…we might cede control over our lives to such systems giving them control over food production and our ability to survive. Consider a plausible future in which food production is entirely robotic and food is produced in technological vertical farms or similar systems. We might lose the ability to produce food ourselves, or forget how to repair or maintain the systems, and so on…”

    It seems like that could result in every person on Earth starving to death, which aligns with what the tweets in question are saying. We get it, there’s good and bad with AI just like there is with everything else.

    “If we follow Musk’s argument, simply having access to AI software will become a crime” – can you supply a written or recorded instance of him arguing this? I’m not seeing it in the tweets, and I’ve never heard him utter anything like it. It seems to me like you took the opportunity to use something that somebody popular recently tweeted, along with their name, to draw attention to your own article on the topic. Am I missing something here?

    #19292
    Peter
    Member

    I’m saying that it is inherent in the comparison to nuclear weapons. If general AI research is as dangerous as nuclear weapons, you can expect a similar set of security precautions. Is it legal to own a nuclear weapon design? No, it isn’t. And if AI is as dangerous, or possibly more dangerous, then AI designs will be illegal to own as well. As I said in the article multiple times, we don’t really know what he is thinking, because it is just a few tweets.

    #19298
    Zach
    Participant

    You’re basing all of this on a misunderstood premise. Musk isn’t comparing a single result of one technology to the whole of another technology. That, however, is what you’re doing, and it doesn’t make sense. Instead, he’s comparing one result of nuclear research to one or more potential results of artificial intelligence research. This does make sense.

    You’re right: it’s illegal to have a design for a nuclear bomb, and it is therefore reasonable to extrapolate the same for a network of armed drones designed to kill a billion people within 24 hours (or something).

    However, it’s not illegal to have plans for a nuclear-powered car, or to actually build a nuclear power plant to power houses. It is therefore not reasonable to think that it will ever be illegal to have plans for a machine that combines a water source and a humidity sensor to keep your plants alive, or for contact lenses that monitor glucose levels in the tears of people with diabetes.

    Nuclear bombs were invented, and humanity doesn’t shun all nuclear tech or ignore its benefits, so how does it make any sense to think we would do that with AI?

    #19305
    Peter
    Member

    Actually building a nuclear-powered car without the proper authorizations and security precautions might well be illegal. It depends on the specifics of the design, the materials used, how you obtain them, and how you secure them.

    See also:

    http://gamepolitics.com/2008/01/10/gamer-builds-nuclear-reactor-in-home-fbi-pays-a-visit#.U-JQtIBdVTw

    Let’s say that instead of a nuclear reactor, the gamer guy built a “military AI” for use in a military simulation game like Call of Duty. How is this AI different from a real-world combat AI? As the games become more real, so do the AIs.

    It seems the only difference is actually connecting the game AI software to an armed robot, like the drones you mention. Current game AIs might be sufficient to hunt and kill humans; see my comments about insect-level intelligence in the article.

    If AI software is treated like nuclear weapons, then this sort of AI would become illegal and controlled. You might need the equivalent of a Q clearance to work on game AI.

    #19373
    Zach
    Participant

    Yeah, since even the most harmless uses of a nuclear reaction could potentially be very dangerous, it makes sense to monitor such things closely. The same cannot be said of AI. Medicine may be a better example of how such a predicament could be handled: some of it is kept under lock and key, and some just sits on the shelf for anybody to grab. The availability of different drugs is based on how dangerous the chemicals can be. Such determinations are possible with AI as well.

    Regarding your military AI example and the assertion that AI gets more realistic as games do, I disagree again. The AI required to make such a thing work in a video game is incredibly different from, and much, much simpler than, what would be required to produce something even close to as effective in the real world. Consider a robotic enemy soldier (AI) that tries to seek out and stab you (or the player, in the case of a video game) to death in a forest…

    In the real world, the robot would have to gather data from a bunch of inputs like cameras, microphones, thermal sensors, and so on. It would then have to analyze all of this data, make guesses as to where you are, and try to move towards you in real time. Keep in mind, too, that this data is faulty and incomplete, and that these sensors can be tricked by almost anything: a leaf falling in front of the camera could corrupt the image data, and cold rain in the air could confuse a thermal sensor and a microphone alike. To get more accurate, you’d have to keep installing more types of sensors and/or upgrading existing ones, and you’d still never get something completely reliable. Making all these devices work together would also require heaps of code.

    Given that everything in the world of a game is created by the game, the game is omniscient in that world. Furthermore, it can instantly pass any of its knowledge to any object in the game, kinda like God. So if everything in a game has instant access to perfectly accurate data about every other object in the world, the process of gathering and analyzing data is eliminated entirely. If you ran behind a tree in a game, the enemy would still know exactly where you are.

    Also note that this gathering-and-analyzing process is where pretty much all of the code, programming effort, and computer processing power would go in the real-world implementation. And that’s just one part of a huge difference; there are many others…
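    To make that contrast concrete, here is a toy sketch in Python. It is purely illustrative, and every name and number in it is made up rather than taken from any real game engine or robot: the game AI reads the player’s exact position straight out of the world state with a single lookup, while the real-world AI has to fuse noisy, possibly missing sensor readings into a guess.

    ```python
    import random

    # Game world: the engine already knows everything about every object.
    def game_ai_target_position(world):
        # One lookup: instant, perfectly accurate, never missing.
        return world["player"]["position"]

    # Real world: only noisy, incomplete sensor readings are available.
    def real_ai_target_estimate(camera_fix, thermal_fix, noise=2.0):
        # Either sensor may return None (a leaf blocks the camera,
        # cold rain washes out the thermal signature, and so on).
        readings = [r for r in (camera_fix, thermal_fix) if r is not None]
        if not readings:
            return None  # no data at all: the robot is blind
        # Average whatever readings we have; each carries its own error.
        x = sum(r[0] for r in readings) / len(readings)
        y = sum(r[1] for r in readings) / len(readings)
        # Even the fused estimate is only approximate.
        return (x + random.uniform(-noise, noise),
                y + random.uniform(-noise, noise))

    world = {"player": {"position": (40.0, 12.0)}}
    print(game_ai_target_position(world))               # exact: (40.0, 12.0)
    print(real_ai_target_estimate((41.5, 10.8), None))  # a rough guess, or None
    ```

    And this toy version ignores the genuinely hard parts: tracking a moving target over time, reconciling contradictory sensors, and planning a path through a real forest.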
