Get Happy: Why Superintelligent AI Will Probably be Superfriendly

We’ve all met intelligent people who aren’t nice. That becomes the mold, the heuristic, when we debate whether artificial intelligence will be friendly: if smart people aren’t necessarily friendly, then smart machines won’t necessarily be friendly either. And, obviously, once we create artificial intelligence above the human level, nothing stops it, as long as it is self-directed, from being meaner than humans are. I will suggest here, however, that above-human-level intelligence has an increased probability of being friendly, for a few reasons, and offer some evidence that this trend is already underway.

First, intelligence leads to creative problem solving in attaining one’s goals. In fact, that might work as the very definition of intelligence. On the surface, this has nothing to do with friendliness. Pursuing goals usually means we tread on other people’s goals. But as we become more intelligent, and find more alternatives for attaining our goals and solving our problems, we can find more ways of doing so without impinging on anyone else’s. A superintelligence would, by definition, be able to find even more ways to solve problems and create solutions that do not hurt others. Therefore, increases in intelligence would increase the probability of friendliness. Indeed, a superintelligence would have a higher probability not just of attaining its own goals without impinging on others, but of attaining its goals while helping others attain theirs.
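To put that intuition into rough numbers, here is a toy sketch in Python. It is only an illustration under a strong simplifying assumption (that candidate strategies are independent and equally likely to be harmless); the function name and the probabilities are invented for the example, not drawn from any actual model of AI behavior.

    # A toy calculation, not from the essay: assume an agent sees n candidate
    # strategies for reaching a goal, and each one, independently, has
    # probability p of being harmless to everyone else. Then the chance that
    # at least one harmless strategy exists is 1 - (1 - p)**n, which rises
    # quickly as the number of visible strategies grows.

    def chance_of_harmless_option(n_strategies: int, p_harmless: float) -> float:
        """Probability that at least one of n strategies harms no one."""
        return 1.0 - (1.0 - p_harmless) ** n_strategies

    for n in (1, 5, 20, 100):
        print(n, round(chance_of_harmless_option(n, 0.05), 3))
    # prints: 1 0.05, 5 0.226, 20 0.642, 100 0.994

On that toy reading, an agent that can see a hundred options is far more likely to have a harmless one available than an agent that can see only one, which is the shape of the argument above, nothing more.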

Is there any evidence of this now, though? Steven Pinker shows in The Better Angels of Our Nature that society is becoming less violent, and violence is the pursuit of goals by any means necessary. While he doesn’t draw the conclusion that intelligence is causing this decline in violence, the two may be correlated: smarter societies are less likely to be violent societies. As a society becomes more intelligent, the reasoning would go, the less likely it is to be violent. He adds in The Economist: “Consciousness is increasingly seen as the origin of moral worth.”

The other reason that smarter-than-human intelligence would operate at nicer-than-human levels centers on the consumption of resources. The debate usually sounds like this: if an intelligent species can consume resources to pursue its goals, it will; if a superintelligence can consume resources to pursue its goals, it will too, and excessively. The debate usually ends with imagery of a machine sucking all the carbon atoms out of your body for use in some kind of superintelligent factory.

But this debate doesn’t take into account that intelligence leads to better solutions. Pinker says that violence, which is central to attaining resources at the ultimate expense of others, yields fewer benefits.

Again, a superintelligence with access to a wider range of ways to obtain resources would probably choose not merely an approach that avoids the diminishing returns of violence, but the approach that yields the most benefit, and it would have an increased probability of obtaining those resources in a way that benefits everyone.

Is there any evidence to back this up? If we again use history as a quasi-map for charting intelligence, and humanity is becoming smarter in spite of what you see on reality television, we see that society is becoming more efficient at consuming its resources. Mankind’s progression from the Stone Age to the information age is not just about finding new resources but about, eventually, using those resources more efficiently, because greater efficiency means fewer costs and more benefits.

Critics, standing in the haze of smoke-belching factories, may have trouble with this reasoning and claim that smarter humans don’t seem to be any better at consuming resources. But efficiency can be masked by other trends, such as the concentration of industrial sites and the higher demand from larger populations that benefit from increased efficiency.

None of this rules out the possibility of a “mean” AI taking over the world and using you and your carbon atoms as a log on its superintelligent campfire. It just means that, as the trend lines seem to confirm, intelligence leads to less violence, better problem solving, and increased efficiency, and that superintelligence therefore has a better probability of finding ways to accomplish its goals with less violence, much better problem solving, and unmatched efficiency.

###

Matt Swayne is a science and research writer at Penn State, as well as an author and freelance writer. Always interested in technology and fringe science, Matt became interested in the technological singularity after reading Ray Kurzweil’s The Singularity is Near.