More AI Scare Talk from Stephen Hawking

The barrage of AI scare propaganda increases!

A couple months ago it was Nick Bostrom with his book Superintelligence (a nicely-written, rather academic-ish book arguing at length and with great complication that we can’t prove advanced AIs won’t kill us all).

A couple weeks ago, it was Elon Musk comparing advanced AIs to evil supernatural beings.

Now we have another dire warning from Stephen Hawking.  TL;DR — “My new AI  communication tech is great; BTW beware AI may soon end humanity.”

I have to say that, while I respect Hawking greatly and find his physics work very stimulating (the second half of Hawking & Ellis’s “The Large Scale Structure of Space-Time”, which I slogged through back in grad school, is one of the toughest things I’ve ever read — but oh, what cool stuff!), I find his views on AI a bit ironic.

I mean — given the testimony to human frailty that Hawking’s body constitutes — and given how aware he must be (via his work) of the human mind’s limited abilities to comprehend the advanced mathematics underlying our physical universe — it’s hard to see why he’d be attached to legacy humanity.   I would think Hawking would be psyched about the potential for humanity to expand into new forms of being, including superhumanly capable robot bodies as well as superhumanly expansive and incisive minds….

Not to put too fine a point on it, but Stephen Hawking manages to exist productively and satisfactorily in human society only by means of advanced technologies.   So I’d think he, of all people, would be especially open to the possibility of techno-transcending the limitations of legacy humanity.

I also have to note that a lower-class African with the same disease as Stephen Hawking, born in the same year as Hawking, would not be alive at Hawking’s age.   A lower-class African, born a decade or three ago, would not currently have the benefit of the amazing tech that Hawking has.  Obviously Hawking is an exceptional mind and (modulo all the peculiarities of counterfactuals) it’s fortunate he was born into circumstances such that he’s had access to medical care and advanced assistive technology, thus enabling him to contribute so wonderfully to science.   But we should not forget that, at our current stage of technological and social advancement, the human race is not all that wonderfully perfect and kind to all of its members.   Humanity could use a lot of upgrading, physically and cognitively and empathically/ethically.

Yes, there is a nonzero possibility that advanced AGI could lead to the end of humanity.   But harping on this possibility in a vacuum — ignoring the risks posed by numerous other rapidly-advancing technologies (nanotech, synthetic biology, etc.) and also ignoring the potential of AI to help palliate these other risks — seems to me worse than counterproductive.

I don’t mean to imply the risks of advanced technologies should be whitewashed.   It’s perfectly reasonable to say something like “A lot of powerful technologies are advancing currently, many of which have both tremendous positive potential and huge potential downsides.   These technologies have the potential to accentuate each other’s dangers; and also, in many cases, the potential to help reduce each other’s risks.   AI, nanotech, synthetic biology and brain-computer interfacing are among these technologies of which I speak — and other new technologies with no names yet may well emerge in the next decades.   We live in interesting, exciting and also perilous times.   There is the possibility for the human race to annihilate itself and leave nothing behind; and also the possibility for human life to be enhanced and improved and expanded beyond all our imaginations.   What we do now will likely have some impact on the ultimate outcome, though exactly how much is hard to say.”

But ignoring all the other factors and yelling over and over “WATCH OUT !!   AI IS COMING AND IT MAY WELL KILL US ALL !!!”, which seems to be the latest media trend in the technosphere, really doesn’t strike me as sensible or productive.   I mean, I admit I have a somewhat biased view here, being an AI researcher whose intuition is that advanced AI is likely to help more than it hurts, as we collectively navigate an uncertain future.   But still… sheesh …

What’s most striking to me is the prevalence of this Terminator-ish rhetoric NOW, when AI technology really isn’t all that advanced yet.   AI is serving very important roles in a variety of industries now, it’s true — but we don’t yet have autonomous AI agents romping around in the world or on the Net, interacting with people based on their own goals and motivations and knowledge.   Once AI does advance to that point — how heated will the rhetoric get then??!!  Egads!   Those of us focused on AI as a force for positive transformation have definitely got to be ready for some high-intensity interactions….   And at that point the discussion will likely become much more broad-based, well beyond the Stephen Hawkings and Nick Bostroms of the world.

Advanced, world-transforming AGIs are coming … and in the process, as a sort of comical human sideshow, will come a lot more heated debates and worried rhetoric! ….
