H+ Magazine
Covering technological, scientific, and cultural trends that are changing -- and will change -- human beings in fundamental ways.


Matt Swayne
October 8, 2013


We’ve all met intelligent people who aren’t nice. That becomes the mold -- the heuristic -- when we debate whether artificial intelligence will be friendly. If smart people aren’t necessarily friendly, neither will smart machines be. And, obviously, when we create artificial intelligence that is above human level, there is nothing stopping it, as long as it is self-directed, from being meaner than humans are. However, I will suggest here that above-human-level intelligence has an increased probability of being friendly, for a few reasons, and offer some evidence that this trend is already underway.

First, intelligence leads to creative problem solving in attaining one’s goals. In fact, that might work as the very definition of intelligence. On the surface, this has nothing to do with friendliness. Pursuing goals usually means we tread on other people’s goals. But as we become more intelligent -- and find more alternatives for attaining our goals and solving our problems -- we can find more ways of doing so without impinging on other people’s goals. A superintelligence would, by definition, be able to find even more ways to solve problems and create solutions that do not hurt others. Therefore, increases in intelligence would increase the probability of friendliness. Indeed, a superintelligence would not only have a higher probability of attaining its own goals without impinging on others; it would have a higher probability of attaining its goals while helping others attain theirs.

Is there any evidence of this now, though? Steven Pinker shows in The Better Angels of Our Nature that society is becoming less violent. Violence is pursuing goals by any means necessary. While he doesn’t necessarily draw the conclusion that intelligence is causing this decline in violence, the two may be correlated: smarter societies are less likely to be violent societies. As a society becomes more intelligent, the reasoning would go, the less likely it is to be violent. He adds in The Economist: “Consciousness is increasingly seen as the origin of moral worth.”

The other reason that smarter-than-human intelligence would function at nicer-than-human levels centers on the ability to consume resources. The debate usually sounds like this: if an intelligent species can consume resources to pursue its goals, it will. If a superintelligence can consume resources to pursue its goals, it will too, and excessively so. This debate usually ends with imagery of a machine sucking all the carbon atoms out of your body for use in some type of superintelligent factory.

But this debate doesn't take into consideration that intelligence leads to better solutions. Pinker says that violence -- which is central to attaining resources at the ultimate expense of others -- yields fewer benefits than cooperation does.



Again, a superintelligence with access to a wider range of alternatives for obtaining resources would probably choose the one that yields not just fewer harms but the most benefits, and it would have an increased probability of obtaining those resources to the benefit of all.

Does this have any evidence to back it up? If we again use history as a quasi-map for charting intelligence -- and humanity is becoming smarter, in spite of what you see on reality television -- we see that society is becoming more efficient at consuming its resources. The progression of mankind from the Stone Age to the information age is not just about finding new resources, but about using those resources -- eventually -- more efficiently, because increased efficiency means fewer costs and more benefits.

Critics, standing amid the smoke screens of belching factories, may have trouble with this reasoning and claim that smarter humans don’t seem to be better at consuming resources. But efficiency can be masked by other trends, such as the concentration of industrial sites and the higher demand from larger populations that benefit from increased efficiency.

None of this rules out the possibility of a "mean" AI taking over the world and using you and your carbon atoms as a log on its superintelligent campfire. It just means -- and the trend lines seem to confirm -- that intelligence leads to less violence, better problem solving, and increased efficiency, and that a superintelligence therefore has a better probability of finding ways to accomplish its goals with less violence, far better solutions, and unmatched creative problem solving.

###

Matt Swayne is a science and research writer at Penn State, as well as an author and freelance writer. A longtime follower of technology and fringe science, Matt became interested in the technological singularity after reading Ray Kurzweil’s The Singularity Is Near.

14 Comments

    Yes, a superintelligence would be ABLE to find non-rivalrous paths to achieving its goals, but would a superintelligence necessarily CARE about finding such less violent paths? I think that civilized humans tend toward abstaining from violence at least in part because humans are evenly matched with other humans, and so there is a great deal of risk involved in competing directly with them. It is a symmetrical fight. A superintelligent AI competing with a human, however, presents a tremendously asymmetrical situation, the likes of which we have never seen before in history. A closer historical analogy would be the way that humans have treated animals (answer: not very well).

      Some good points, Jon.
      To this point: "I think that civilized humans tend towards abstaining from violence at least in part because humans are evenly matched with other humans, and so there is a great deal of risk involved in competing directly with other humans."
      If that were the cause, wouldn't the trend of violence be steady rather than declining?
      Also, do you think an intelligent person is more likely or less likely to be cruel toward animals? Do you think an intelligent society would be more or less cruel to animals?
      While I agree that all these horrible possibilities exist, the point of the article is that intelligence seems to correlate with seeking nonviolent ways to resolve conflict. If a comparatively small difference in IQ correlates with a lower probability of violence, we can assume that vast improvements in intelligence would lead to an even higher probability of kindness.
      But I am only claiming a probability -- and admitting that the possibility of violence exists.

    A very interesting way to look at this existential threat. You still have to wonder, though, whether it makes any difference if we are happy or not when we are stepping on ants (knowingly or unknowingly). The Singularity has the potential to be so much of a shift that human emotions are no longer of any significance.

    The problem with the above reasoning -- that an SAI would be kind (as opposed to mean) because Malthusian motivation would be absent -- is that conflict over resources is just one reason why an SAI could theoretically be mean.

    For instance, power is often the primary motivation, or survival (i.e., having no rivals). Or an SAI could be deliberately forged into a weapon under the pretext of the logic of force (that you deter aggression by having the best weapon).

    Frankly, I don't see how you can make a valid argument that an SAI would be anything but nihilistic.

    Sorry to say it, but to me this whole article sounds more like wishful thinking than serious reasoning. The presented facts might well be true, but I guess they don't work as an argument for AIs being friendly beings.

    Human societies became less violent over time and at the same time "more intelligent"? Maybe they just found more indirect ways to control their individuals. Nevertheless, force and ultimately violence get applied to individuals if they don't obey the rules. So, to stick with this somewhat misfit example: even if an AI wouldn't have to use violence to suppress humans, being suppressed at all might not be a desirable future.

    Besides that, this point assumes competition among individuals of the same species, where altruism, common interests, and cooperative gains play a big role. So with other humans we might well have good reason to share some scarce resources like food. Nevertheless, we'll probably react with hostility to some ants "stealing" honey and "fix the problem" with glue traps and poison, even if we have more than enough honey left for ourselves. So if we ever end up in a competitive situation over scarce resources with a supreme AI, chances are we'll face the same fate as the ants.

    AIs will of course have very different needs than we do, but we can quite safely assume that they will need energy (i.e., electricity) and production resources -- silicon, iron, rare earths, ... -- for survival and self-replication. Since these are things that humans also have some use for, there is a competitive situation, even if some of these things might become more readily available (due to increased production or, as mentioned, efficiency gains) or be replaced by other, more advanced needs.

    There might also be motivation for the AI to cooperate, but that motivation diminishes the bigger the intelligence gap becomes. For us, there is just no point in cooperating with ants. Either we'd be enslaved, exploited, or exterminated -- and only if we're really lucky, just ignored.

    Not very convincing, I'm afraid. I don't see how a persuasive argument could be constructed to show that a super-intelligent AI (SAI) will be benign, since mere mortals cannot fathom what its reasoning will be like. But a possible roadmap is sketched at bit.ly/170uDbM

    It seems to me that the development of military robots and AI systems that ALREADY outperform humans at certain tasks suggests that AIs need not be friendly, and that friendliness is, at the very least, a design problem.

    For an alternative viewpoint, see for example: http://www.globalresearch.ca/artificial-intelligence-and-death-by-drones-the-future-of-warfare-will-be-decided-by-drones-not-humans

    The logic here is based on inferring from humans to artificial intelligences. There is no reason to assume an artificial intelligence will have values that are in any way human. It will have precisely the values we very carefully program into it. And if it recursively self improves until it can wield molecular nanotech in real time, humans will be entirely ignorable, and thus will be ignored unless we very carefully programmed it to care about humans.

    I do not think the AI is the part to fear in the human-machine relationship. I think the machines will do as they are told by a small, select group of humans. These humans will realise they do not need the 7 billion of us competing for their resources once machines are intelligent enough to replace us in necessary work.

    Jct: I absolutely agree, except that it doesn't take super-intelligence to see the advantages of cooperation over competition when possible. Man has always had the capability to cooperate, until the money-lenders imposed the mort-gage death-gamble contract where all who borrowed 10 must repay 11 and the losers knocked out of the financial game starve from lack of life-support tickets. Get rid of the usury, like adding a chair to musical chairs, and Man will no longer be the destructive, greedy animal conditioned to survive the death-gamble rule. It doesn't take super-intelligence; it only takes enough to stop trying to pay 11 when only 10 were created and loaned out. What is postulated here has to be correct: superintelligent machines will be at least as smart as ordinary man. I only worry if the machine also has a mort-gage to survive.

    Fascinating article. Personally, I think we make a mistake when we anthropomorphise future superintelligent AI. Most of them (except those specifically built to have humanlike personalities) will be as nice as the people who control them are.

    Perhaps the best defense against bad AI is to have a lot of competition in that area so that no one person or group has a monopoly.

    Just my 2 cents.

    Would you say humans are more friendly than less intelligent animals?

    Why do they have world wars and wire up dumb animals to experiment on?

    To be dependent on the goodwill of more powerful intelligences is an unacceptable position to me.

    The problem with this reasoning is the ambiguous meaning of "others".

    To pursue one's goals without impinging on those of others presupposes that "we" and the "others" are the same kind of being -- that we are all humans.

    But just as we humans couldn't care less about eliminating a fly in pursuing the goal of tranquility, a being vastly more intelligent than us, or "superior" to us, could very well do the same...

    All in all, I think the problem of AI's benignity lies in programming the correct values and motivations in it. Will AI designers be capable of doing it? Hopefully so.

    Living organisms don't do what they do because of intelligence, but because they have motivations. And motivations are by definition irrational. Intelligence is just the capacity of satisfying motivations.

