h+ Magazine

The Danger of Artificial Stupidity

Viewing 3 posts - 1 through 3 (of 3 total)

    It is not often that you are obliged to proclaim a much-loved international genius wrong, but in the alarming prediction made recently regarding Artificial Intelligence…
    [See the full post at: The Danger of Artificial Stupidity]


    > I believe three foundational problems explain why computational AI has failed

    1) Hyped. As long as Searle keeps pumping out the Mandarin jokes, I’ll be laughing all the way to the Singularity. I’m hoping to get a crack at the omics languages though. Dry run it, Forrest, RUN!

    2) OVER Hyped. I care not who is in the passenger seat nor even if there is any for that matter.

    3) Circular. Believing “Computers lack mathematical insight” => “AI fails to replicate human mentality” (insight?) is equivalent to saying: “AI fails because it does.” But regardless, it’s inconsequential anyway. We don’t need “unassailable demonstrations” to get some juice out of a machine. I mean, it shouldn’t even be hard, if mathematicians can do it…

    > the combination of a human mind working alongside a future AI will continue to be more powerful than that future AI system operating on its own.

    I agree with that, but only because AI devs are too slow to figure it out. Anyway, that’s pretty much Kurzweil’s vision, no? “Melding” with the stuff? lol

    Not sure he should be placed in the same camp as Hawking..

    > The singularity will never be televised.

    Does that rule out the Oculus Rift? =)

    > lacking an ability to formulate its own goals,

    I don’t follow how that would be so.

    > such systems exhibit a genuine “artificial stupidity.”

    I agree with that. But destruction is a form of stupidity anyway. “Stupidity” and “Backfire” should probably be synonyms.

    > a malfunctioning Soviet alarm system

    Maybe it wasn’t malfunctioning at all.. Maybe Hawking is right! O.o

    nah.. I hope he apologizes for that “theory” as well.

    > Some commentators have suggested that the colonel’s quick and correct human decision to over-rule the automatic response system averted East-West nuclear Armageddon.

    Boy, we need some true intelligence around here quickly.

    > I am skeptical that current and foreseeable AI technology can enable autonomous weapons systems to reliably comply with …

    I agree. Albeit for different reasons.

    > I believe we should all be very concerned.

    Yes… be afraid… be VERY afraid. =)


    Very nice article; it’s good to know that people like the author recognize the potential dangers of AI as well as its benefits.

    But I think the matter of consciousness is really irrelevant, especially since we don’t actually know what it is or how it is formed. Eventually AI will have control over human lives, as mentioned above, probably beginning on the battlefield and likely spreading to policing, medicine, and other areas. There are great benefits to be had, but the risks are also great.

    Human beings are motivated by instincts that have been honed over millions of years of evolution and thousands of years of culture, which have allowed humanity to develop to this point. The problem is that an AI, to properly simulate human behavior, has to be given the equivalent of instincts, goals, and values. When an AI eventually has real-world power of the type mentioned above, conscious or not, it is these simulated instincts that will determine its danger to humanity. It makes one appreciate the genius of Asimov and his three laws of robotics. Is anyone working on this?
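    To make the Asimov point concrete: the three laws amount to a strict priority ordering over candidate actions. Here is a toy sketch of that idea (my own illustration, not anything from the article; all names like `Action` and its fields are hypothetical), where the First Law acts as a hard filter and the Second and Third Laws break ties lexicographically:

    ```python
    # Toy sketch: Asimov's three laws as priority-ordered constraints.
    # First Law = hard filter; Second Law (obedience) is preferred over
    # Third Law (self-preservation) when ranking the remaining actions.
    from dataclasses import dataclass

    @dataclass
    class Action:
        name: str
        harms_human: bool = False        # action would injure a human
        allows_human_harm: bool = False  # inaction-style harm would result
        disobeys_order: bool = False     # conflicts with a human order
        endangers_self: bool = False     # risks the robot itself

    def permitted(action: Action) -> bool:
        """First Law: a robot may not injure a human being or, through
        inaction, allow a human being to come to harm."""
        return not (action.harms_human or action.allows_human_harm)

    def choose(actions: list[Action]) -> Action | None:
        """Among First-Law-compliant actions, prefer obeying orders
        (Second Law) before protecting itself (Third Law)."""
        legal = [a for a in actions if permitted(a)]
        if not legal:
            return None  # no lawful action available
        # Tuple comparison gives the lexicographic (law-ordered) preference.
        return min(legal, key=lambda a: (a.disobeys_order, a.endangers_self))

    options = [
        Action("stand by", allows_human_harm=True),
        Action("shield human", endangers_self=True),
        Action("flee", disobeys_order=True),
    ]
    best = choose(options)  # "shield human": risky to the robot, but lawful
    ```

    Even this toy version shows why the problem is hard: everything hinges on the predicates (`harms_human`, etc.) being evaluated correctly, which is exactly the part real AI cannot yet be trusted to do.
    
    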
