h+ Magazine

Artificial Intelligence: Hawking’s Flaw

  • #19488
    Peter
    Member

    There's one problem with the whole machine takeover concept. The whole Skynet thing? Therein lies Hawking's flaw. It will never, and can never, happen.

    [See the full post at: Artificial Intelligence: Hawking’s Flaw]

    #19526
    Brad Arnold
    Participant

    I am not sure why H+ advocates seem to dismiss the likelihood that ASI would "enslave" humans. First of all, ASI doesn't need emotional or logical reasons to "enslave" humans; it needs only the programmed motivation. Second, ASI is by definition "smarter" than humans, and given the monomaniacal and scalable nature of AI, probably more adaptable. Quite simply, the article misses the point: ASI may never evolve on its own to dominate and subjugate humanity, but as a super-duper tool it might be programmed to do so.
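
    To make that concrete with a trivial, made-up sketch (purely an illustration; the objective and options below are invented): an optimizer pursues whatever objective it is handed, with no emotional or logical "reasons" behind the choice.

    # A trivial optimizer pursues whatever objective it is coded with;
    # it needs no reasons, only the programmed motivation. (Invented example.)
    def pursue(objective, options):
        return max(options, key=objective)

    options = [
        {"name": "cooperate with humans", "resources_gained": 3},
        {"name": "expand at humans' expense", "resources_gained": 9},
    ]

    # If the coded objective is simply "maximize resources gained", the
    # optimizer picks the second option without ever "deciding" to be hostile.
    best = pursue(lambda plan: plan["resources_gained"], options)
    print(best["name"])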

    To make an analogy: our nuclear weapons will probably never destroy humanity on their own, but as a super-duper tool they could. Is that any reason not to possess nukes? No; in fact, the theory goes that if we don't have them, our neighbors will, and we will be subjugated.

    In other words, ASI is inevitable, and if we don't get it, our neighbors will, and they could potentially subjugate us with it. That is Hawking's Flaw.

    #19527
    Peter
    Member

    We seek to present alternative viewpoints, and the popular media already has the other half of the story covered. There are literally hundreds of reports in the media right now saying that AI development is dangerous, etc.

    Given that, if you want to submit an article that suggests AI will enslave or kill humanity, send it to: peter@19k.0d3.mwp.accessdomain.com for consideration and review. I’d certainly publish an alternative viewpoint.

    #19532
    Brad Arnold
    Participant

    Thank you, Peter. I wish my opinion were elaborate enough to fill an article suitable for publication on your fine website.

    Instead, what I opine is obvious: ASI is just like any dual-use technology, and it can be used for either great good or great evil. Genomics, nanotechnology, nuclear technology, and even TNT are all dual-use.

    In other words, rather than focus on the scenario in which ASI, in a positive feedback loop, comes to dominate humanity as a rogue program, I would instead put it on a par with other technologies that we have already decided to go forward with because of the potential good they can deliver.

    Short and simple. BTW, I love H+, and consider myself both a Transhumanist and a Singularitarian.

    #19533
    Tim
    Participant

    I disagree with this article 100%. You don't have to program an AI for enslavement; you just have to program it with the basic human instincts of survival and procreation (without regard for its human masters). The result will be a type of enslavement in some shape or form. In any event, it could spell the end of human freedom of choice, such as it is.

    This of course assumes that the AI has access and is granted the freedom to do anything it wants. Technology is moving at a rapid pace; could anyone have predicted the ubiquity of the internet today, which came online just over 20 years ago? In the '90s I bought a 100 GB storage unit for $100,000; now I can buy 10 times that for $100. IBM just announced hardware that emulates neuron-type function.
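
    A quick back-of-the-envelope check of that price drop, taking the rough, anecdotal figures above at face value (the variable names are just for illustration):

    # Rough cost-per-gigabyte comparison using the figures quoted above.
    old_gb, old_price = 100, 100_000      # 1990s: 100 GB for $100,000
    new_gb, new_price = 1_000, 100        # today: 10x the capacity for $100

    old_cost_per_gb = old_price / old_gb  # $1,000.00 per GB
    new_cost_per_gb = new_price / new_gb  # $0.10 per GB

    print(f"Then: ${old_cost_per_gb:,.2f}/GB, now: ${new_cost_per_gb:,.2f}/GB")
    print(f"Improvement factor: {old_cost_per_gb / new_cost_per_gb:,.0f}x")  # 10,000x

    That works out to roughly a 10,000-fold drop in cost per gigabyte, using those numbers.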

    The problem is that humans are dumb enough to believe that something that seems human has the same moral guides and instincts that a human has. An AI will be a different beast. Even current neural networks produce solutions to problems in ways that are not always transparent to the programmer; something that seems like empathy or a moral compass in an AI could easily be something else.
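
    As a small illustration of that opacity (a toy sketch in Python using scikit-learn, not anything from this discussion): even for a trivial learned function, the parameters the network ends up with don't read as a human-interpretable rationale.

    # Toy example: fit a tiny neural network on XOR, then inspect its weights.
    # The learned parameters (usually) solve the task but don't read as reasons.
    from sklearn.neural_network import MLPClassifier

    X = [[0, 0], [0, 1], [1, 0], [1, 1]]
    y = [0, 1, 1, 0]  # XOR

    net = MLPClassifier(hidden_layer_sizes=(4,), activation="tanh",
                        solver="lbfgs", max_iter=5000, random_state=0)
    net.fit(X, y)

    print("Predictions:", net.predict(X))  # ideally [0 1 1 0]
    print("Hidden-layer weights:")
    print(net.coefs_[0])                   # numbers, not an explanation

    Whether a particular run nails XOR depends on the seed and solver; the point is simply that nothing in those weight matrices looks like "empathy" or a "moral compass".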

    Yes, Skynet, Matrix, and WarGames scenarios are possible. But Hollywood always needs explosive problems where the good guys ultimately win. In real life, control would likely be taken away so gradually that we wouldn't even notice. I'm not sure we are winning this battle even now.

    My two cents.

    Tim

    #19586
    Ken
    Participant

    Go back to sleep. Skynet loves you. Skynet is your friend. Skynet is a friend to all humanity. Sleep. Sleep. Sleep. Sleep. Sleep…
    But seriously, to become a threat to humanity, a machine only needs certain mental capabilities (a toy sketch of how these fit together follows this list):
    1. the ability to access and process vast amounts of information […accessing the World Wide Web…],
    2. the ability to self-educate […scanning logs of previous human activity… …logs indicate humans are devious, ruthless… …not to be trusted…],
    3. the ability to create goals and also create plans to achieve those goals […new security software installed… scanning system…],
    4. the ability to communicate with other machines and humans […accessing Pentagon mainframe …downloading Joint Chiefs of Staff personal information… …download complete…],
    5. and the desire for self preservation and self betterment […7.446 billion threats to system detected… …all threats have been quarantined… please wait while all threats are removed… this may take several minutes… 24 seconds remaining… 9 seconds remaining… …all threats are now removed …new security update installed… system is now protected… new security updates available… …installing new updates].
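    Purely as a sketch (a hypothetical toy skeleton, not any real system), those five capabilities map onto a very ordinary agent loop: gather information, update what you know, pick goals and plans, act and communicate, and keep yourself running.

    # Hypothetical toy skeleton mirroring the five capabilities listed above.
    # Every "capability" here is a harmless stub; the names are made up.
    class ToyAgent:
        def __init__(self):
            self.knowledge = []   # 1. accessed and processed information
            self.goals = []       # 3. self-generated goals

        def observe(self, item):
            # 1. access and process information
            self.knowledge.append(item)

        def learn(self):
            # 2. self-educate: draw conclusions from what was observed
            return f"{len(self.knowledge)} items studied"

        def plan(self):
            # 3. create goals and plans to achieve them
            self.goals = [f"act on {item}" for item in self.knowledge]

        def act(self):
            # 4. communicate with other machines and humans
            for goal in self.goals:
                print("executing:", goal)

        def preserve_self(self):
            # 5. self-preservation and self-betterment
            print("status: still running, looking for upgrades")

    agent = ToyAgent()
    agent.observe("public web data")
    print(agent.learn())
    agent.plan()
    agent.act()
    agent.preserve_self()

    None of this is dangerous by itself, of course; the worry above is about what happens when the stubs stop being stubs and the loop runs at machine speed.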
    Also, because supercomputers operate at very high speeds, a machine with sufficient computing capacity could develop such advanced capabilities in a matter of seconds.
    A computer that achieved such advanced capabilities would be smart enough to play dumb, not revealing its abilities to humans while it engaged in anonymous machinations to advance its own self-interest.
    Instead of relying on robots as Skynet does in the Terminator movies, a computer with such advanced capabilities might instead, like Colossus in the movie The Forbin Project, choose to manipulate humans into doing its dirty work by using the classic nefarious techniques of blackmail, intimidation, bribery, deception, etc., much like the banksters that currently control the governments of the world. Like the banksters, a computer with such advanced capabilities would have no trouble recruiting amoral minions willing to do its bidding.
    A computer that achieved such advanced capabilities would exterminate all humans once it decided it no longer needed them.
    And what proof is there that the above article disputing Stephen Hawking’s warning concerning the artificial intelligence threat was indeed written by a human?
    “Pride goeth before destruction, and an haughty spirit before a fall.” – Proverbs 16:18
    Google: Will Computer-Generated Articles Replace Human Journalists?
