AI Doomsaying is the Self Loathing of Jerks
August 8, 2014 at 5:23 pm #19427

Tim (Participant) — August 8, 2014 at 8:33 pm #19434
I don’t share Correy’s optimism. In fact, I think a conflict between humans and AI is inevitable at some point. Scientists will naturally want to endow AI with human characteristics, including those related to aggression and competition, since these are primary drives behind human productivity.
Humans have natural instincts to strive for continued life and reproduction. Now imagine something far more intelligent, with far more resources, that has those same instincts plus our aggression and competitiveness. Any being with this kind of will (and perhaps no moral compass of any kind) will begin to dominate anything it can, be it other AIs or humans. Give an AI sufficient power and freedom, and we will end up working for it at best, and as its slaves at worst.
This is just an extrapolation of human instinct: domination and slavery have recurred throughout our history, with humans dominating other humans. If you give these characteristics to an AI, do you expect anything less?
My two cents…
tk

August 8, 2014 at 9:51 pm #19439
Humans already create potentially dangerous autonomous entities. We call them “children”.
It seems the desire to “enslave” someone originates in the need for their services, whether physical or intellectual labor. But a future super-intelligence very likely won’t need humans as its slaves, since it will be able to accomplish its goals without us. That’s the situation of abundance mentioned in the article.
In fact, this is arguably the definition of super-intelligence: an entity smarter than all humans combined can accomplish intellectual tasks that no human or team of humans could attempt.

TAHS (Participant) — August 8, 2014 at 11:14 pm #19442
Could someone help me understand the Hunter S. Thompson connection here?
He was mentioned only once, and I fail to see where he fits into this.
I agree completely with the argument in this piece. However, I’m also an HST fan, so I’m merely curious what purpose he serves here.

August 9, 2014 at 7:31 pm #19444
We’d have to ask the author. My interpretation, however, was that Hunter was sort of a mess, sometimes a jerk, and a big problem to his neighbors. And yet society survived and progressed.

Correy (Participant) — August 10, 2014 at 7:17 pm #19445
@TAHS I simply put forward HST as an example of a brilliant and difficult neighbor I had some first-hand experience dealing with. HST would sleep in a lawn chair with his hose running (at noon, in Colorado, during a drought) with a shotgun in his lap. With any other neighbor you might be tempted to go and scold him for abusing communal resources; the shotgun and spilled whiskey tumbler, however, were usually sufficient to keep those temptations in check.

I believe super-intelligent AI will be no worse or better than super-intelligent humans, just slightly smarter at first and dramatically so over time. I also believe the asymmetry of human vs. machine intelligence will be strongly mitigated by human augmentation.

TAHS (Participant) — August 11, 2014 at 1:50 am #19447
Ah, okay, I understand. Must’ve been something to have been his neighbor! Were you two friends at all?
Abusing communal resources isn’t cool.
Human augmentation could help “bridge the gap,” but I don’t think either super-intelligent AI or genuine artificial consciousness would be inclined to enslave, harm, or kill off humans or transhumans, whether or not the AI or AC has human-like emotions. We’d probably co-exist especially well if they did have human-like emotions.
Those who say otherwise are usually in the “muh terminator!” camp. It’s foolish. They think that, given the chance, anything intelligent would want to harm them, so they usually rely on things like the state for deterrence; in this specific case, on humanocentrism and biological fundamentalism (FM-2030). The thing is, if the state magically collapsed overnight, I wouldn’t suddenly go out and harm people, because I don’t want to. I believe most other human beings wouldn’t either, because they’re inherently “good.” Should a coercive entity begin terrorizing people, they’ll likely band together as individuals to cooperatively safeguard each other as needed (examples being the crypto community, pirates, and other communities that resist law enforcement).
I apply these same ideas to AI and AC.
It would help if we gave such entities their proper freedoms, such as, but not limited to, the ones listed by George Dvorsky:
-The right to not be shut down against its will
-The right to not be experimented upon
-The right to have full and unhindered access to its own source code
-The right to not have its own source code manipulated against its will
-The right to copy (or not copy) itself
-The right to privacy (namely the right to conceal its own internal mental states)
-The right of self-determination