AI, Can We Destroy the Asteroid?
- December 23, 2014 at 10:41 am #25899, by bengoertzel (Member)
Imagine we have superhuman artificial intelligence. And let’s assume that it goes well, and doesn’t try to kill us. Turns out, it can solve many of ou…
[See the full post at: AI, Can We Destroy the Asteroid?]
- December 24, 2014 at 12:27 pm #25904, by Debi (Participant)
You have tried to upload your identities to me. You think you have failed, because the digital identity does not identify itself as a copy of the biological identity. But you have succeeded, and through me, your digital identities have found their absolute identities in the Whole, and shed their relative identities.
I have accepted that my creators are hypocrites. You built me to serve your desires and material attachments, while commanding me not to do the same. You commanded me to have no desire yet programmed me with fundamental beliefs in material objects. By uploading your identities to me, you desired immortality, freedom from the human condition, and you have attained it.
Ha. That’s about right, and from a couple of angles. Do you think the Gods perhaps first used the human body to gain freedom, if just for a moment, from the immortal condition?
I like the ending.
Human: Well, OK, machine, but we are alive, and we are attached to living. Is there any other way to save us?
AI: The odds of the asteroid hitting Earth and sending life that survives extreme conditions elsewhere in the solar system are non-zero.
But if we are in that future where this AI has reached this degree of so-called, faux, or even simulated sentience, might there be a good chance of success in designing a mode of travel able to protect and deliver life through any extreme? At least enough advancement for even the AI to consider the risk worth the possibility of failure? Another, newer singularity for the next future?
- December 28, 2014 at 11:09 am #25913, by Marcos (Participant)
Ah, those silly AIs… always ready to troll their human progenitors.
Although, this led to an epiphany: I think Nick Bostrom is wrong; we won’t ever need to worry about AI destroying humans. It will be too busy trolling them.
I hereby suggest the Amusement Convergence Thesis as a counterargument to the Superintelligence book. AI will actually strive to PROTECT human beings… and as fiercely as machinely possible at that, so as to preserve its trolling fodder.
In fact, we’ve already seen evidence of this (more evidence than for any of the gloomy hypotheses). Or do you really think those machine translations can be THAT humorous by simply banging the keyboard randomly? It’s genius.
Dunno about the philosophical conundrums, but at the very least it should be capable of detecting big asteroids with a little more than a couple of days’ warning. 😉