
Will super-human artificial intelligence (AI) be subject to evolution?

Introduction

There has been much speculation about the future of humanity in the face of super-humanly intelligent machines. Most of the dystopian scenarios seem to be driven by plain fear of entities that could be smarter and stronger than us. After all, how are we supposed to know which goals the machines will be driven by? Is it possible to build “friendly” AI? If we attempt to turn them off, will they care? Would they care about their own survival in the first place? There is no a priori reason to assume that intelligence necessarily implies any goals, such as survival and reproduction. But, in spite of being rather an optimist otherwise, some seemingly convincing considerations have led me to the conclusion that there is such a reason and that we can reasonably expect those machines to be a potential threat to us. The reason is, as I will argue, that the evolutionary process that has created us and the living world will remain in force for future intelligent machines. Just as this process has installed the urge for survival and reproduction in us, it will do so in the machines as well.


Reproduction

Whatever our first general AI systems will be like, it is clear that at first their intelligence won’t go far beyond human levels, since we simply don’t know how to build machines that are much more intelligent than us. Somewhat more, as with champion-beating chess or Jeopardy programs, yes, but not much more. Therefore, the only way for a machine intelligence to surpass us by lengths is to continue learning and developing on its own. All we can do is install the goal of increasing its own intelligence and push the “now learn on your own” button. Further, we can quite safely assume that the learning algorithms will themselves have insufficiencies that we won’t be able to debug or improve in a sufficiently advanced intelligent system: it will have to debug itself and modify its own code.

Another way to view this is the following. One of the core properties of intelligence is reflection. Reflecting on, evaluating and changing one’s own thought and action strategies is essential to what it means to be intelligent. The AI community has a long history of trying to reconstruct this process of meta-cognition. In essence, human learning and meta-learning are a type of (shallow) self-modification. Deep self-modification occurs through evolution, by sexual recombination of male and female DNA and/or by mutation. Hence, evolution is essentially a deep self-modification algorithm (see genetic programming).
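To make the analogy concrete, here is a toy sketch (in Python, not from the article) of evolution as a self-modification loop: candidate “programs” are copied, randomly modified, and kept only if the modification improves a fitness score. The target task, parameters and representation are invented purely for illustration; real genetic programming evolves program trees rather than lists of numbers. Note that this loop is still “shallow” in the sense discussed above: the mutation and selection code itself stays fixed, whereas deep self-modification would allow the system to rewrite that code too.

```python
# Toy sketch of evolution as a self-modification loop (illustrative only).
# A "program" is just a list of integers; its "behavior" is their sum,
# and fitness rewards getting close to a hypothetical target value.
import random

TARGET = 42  # hypothetical goal the population should evolve towards

def random_program():
    return [random.randint(-10, 10) for _ in range(5)]

def fitness(program):
    return -abs(sum(program) - TARGET)  # higher is better

def mutate(program):
    child = list(program)               # reproduction: copy the code ...
    i = random.randrange(len(child))
    child[i] += random.randint(-3, 3)   # ... then modify a part of it
    return child

# a population of individuals, improved over many generations
population = [random_program() for _ in range(20)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                                   # selection
    offspring = [mutate(random.choice(survivors)) for _ in range(10)]
    population = survivors + offspring

best = max(population, key=fitness)
print(best, fitness(best))
# Deep self-modification would mean that mutate() and fitness() themselves
# could also be rewritten by the evolving system -- here they stay fixed.
```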

To spin the argument further, we can expect that the more advanced the system becomes, the more radically it may want to modify itself, even changing core parts of itself, simply because of the low initial level that humans were able to give it at its humble beginnings. The main lesson here is that the system cannot simply run a fixed algorithm and expect open-ended development. The algorithm itself has to change at some point, and so does the algorithm that controls that change, and so on and so forth.

Is deep self-modification a necessity? Or is it enough to self-modify some shallow parts while an overarching algorithm controls the whole process? Maybe it is, maybe not. But it is clear that a system that can modify even this high-level control algorithm will be more powerful in the sense that it is potentially able to solve a broader class of problems. Who will prevent us from enabling future AI systems to do this? Once shallow self-modification (“learning”) is enabled, halting at some arbitrary level makes no sense scientifically. If we can do it and thereby solve a broader class of problems, we will do it.

We conclude that self-modification is the way to go if we want the system to grow far beyond human levels of intelligence. The system either makes a copy of its code and improves it or “self-operates” on its own running system. This gives us the first element of evolution: reproduction. Keep in mind that reproduction does not necessarily mean that the parent system has to die at some point, although it can be expected to be outperformed or even killed in the long term (see below).


Unpredictability

Self-modification turns the system into a dynamical system. Physicists distinguish regular systems that eventually reach an equilibrium state, chaotic systems that change unpredictably, and critical systems that are in between – on the “edge of chaos”. We can exclude that a ceaselessly developing AI will be a regular system: systems in equilibrium don’t go anywhere. Both critical and chaotic systems are unpredictable, though. Chaos is essentially defined in terms of unpredictability. [1] Critical systems show similar unpredictability, and it has even been argued that self-organized criticality is the way nature is organized and the reason for the occurrence of complexity in nature in the first place. [2]

Without going too deep, the point here is that unpredictability means that no matter how advanced the AI system may be, it will never be able to predict the state of its offspring a few generations ahead. To make things more plausible: how could one predict what one will be like after a few generations (= self-modification steps) if even the prediction capacity itself is subject to modification? If I want to modify my own thought processes, then in order to know what I will think afterwards, I would have to model the thoughts after the modification. But this modeling capacity could itself be modified. Or some random element may decide between several possible self-modifications, which is again impossible to predict. In any case, trying to predict the actions of a system many generations ahead means predicting a much more advanced system than one currently is, which amounts to an impossibility. If it were possible, then why drag yourself through all the generations in the first place instead of implementing the predicted advanced system immediately?
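A standard textbook illustration of this kind of unpredictability (not taken from the article, but matching the divergence described in footnote [1]) is the logistic map in its chaotic regime: two trajectories that start almost identically drift apart exponentially, so even a nearly perfect model of the system loses all predictive power after a characteristic number of steps.

```python
# Sensitive dependence on initial conditions in the logistic map
# x -> r*x*(1-x) with r = 4.0 (chaotic regime). Illustrative example only.
def logistic_step(x, r=4.0):
    return r * x * (1.0 - x)

x_true  = 0.3            # the "real" system
x_model = 0.3 + 1e-10    # an almost perfect model of it
for step in range(60):
    x_true, x_model = logistic_step(x_true), logistic_step(x_model)
    if step % 10 == 9:
        print(f"step {step + 1:2d}: true={x_true:.6f}  model={x_model:.6f}  "
              f"error={abs(x_true - x_model):.2e}")
# After a few dozen steps the error is of order one: the model says nothing
# about the actual state anymore, as described in footnote [1].
```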


Branching into individuals

Whatever the self-improvement goal [3] (call it “fitness”) of the system may be, unpredictability and an explosive number of possible ways to evolve present a high risk of ending up in a developmental dead end, a local maximum of the fitness landscape. Imagine an ant crawling on a large and complex landscape with many mountains and valleys. It cannot see far beyond its current position and the slope of the hill that it is on. In such a situation science does not have a general algorithm that is guaranteed to find the peak of the highest mountain, which represents the goal of the system. We only have heuristics that alleviate the problem, such as occasionally kicking the ant to a random spot on the landscape, hoping that it eventually lands on the largest mountain and can simply climb the slope to its peak (simulated annealing). Another good idea is to let many ants climb the landscape and then take the one that reaches the highest peak. That means it is a good idea to separate the AI system into many different copies and let them pursue different developmental paths! This is always possible. No matter which heuristic is used per individual system, it is always reasonable to have many systems explore a complex landscape, i.e. a possibility space, as far as resources (energy, memory, computing power) allow. Of course, recombination or merging of various individuals may be advantageous, i.e. sex, but it seems quite safe to assume that a single big AI system is not the optimal way to grow, due to the complex landscape of possibilities hidden in the forest of unpredictability. Furthermore, the separation into individuals spreads the risk that the AI system will be irreparably damaged or even purposefully destroyed. This gives us the second element of evolution: a population of separate individuals.
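The “many ants” argument can be illustrated with a minimal sketch (the one-dimensional landscape and all parameters below are made up for illustration): a single greedy hill climber usually gets stuck on whatever local peak is nearest to its starting point, while a population of climbers started at different points is far more likely to find one of the higher peaks.

```python
# Single hill climber vs. a population of climbers on a rugged, made-up
# fitness landscape with many local maxima. Illustrative example only.
import math
import random

def fitness(x):
    # rugged landscape: an oscillation on top of a gentle slope
    return math.sin(3 * x) + 0.1 * x - 0.01 * (x - 7) ** 2

def hill_climb(x, steps=200, step_size=0.05):
    for _ in range(steps):
        candidate = x + random.uniform(-step_size, step_size)
        if fitness(candidate) > fitness(x):   # greedy: only accept uphill moves
            x = candidate
    return x

single = hill_climb(random.uniform(0, 10))                        # one ant
ants   = [hill_climb(random.uniform(0, 10)) for _ in range(30)]   # a population
best   = max(ants, key=fitness)

print(f"single climber: x = {single:.2f}, fitness = {fitness(single):.3f}")
print(f"best of 30    : x = {best:.2f}, fitness = {fitness(best):.3f}")
```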


Survival and reproduction

Given that making as many copies, i.e. individual offspring, as possible is a useful strategy, the AI systems will quickly populate all available resources, that is, all available energy and computing hardware. Then a struggle for resources must begin, since individual systems can profit either from killing other individuals so that they no longer occupy the resources, or from trying to control the outer material world in order to construct further sources of energy and hardware. In any case, since copying individuals (essentially code) is cheap, a struggle for resources will set in. In the same almost trivial sense as we know it from biological evolution, only those individuals that are best suited for survival will survive. Whatever the initial goal of the systems may be, unrestricted self-modification will allow them to change their fundamental goals. Therefore, only those individuals will survive in the long term that have also changed their goals towards optimizing survival. Being good at effective reproduction is also a good idea, since only reproduction can ensure ongoing improvement of the systems in the face of competition. The goals of survival and reproduction will dominate; other goals will either be eradicated or degraded to secondary goals. Increasing intelligence could remain a secondary goal at best, as it seems to be with human beings.

We conclude that after a sufficient number of generations the initial AI system will engage in reproduction and create populations of individuals whose predominant goals will be survival and reproduction in the face of limited resources. In other words, AI will be subject to evolution.


Hard-coding of goals

A possible objection is that we could hard-code some principles and goals into the machines that are not allowed to be changed, for example Asimov’s classic Three Laws of Robotics. But, as argued, the systems will be in a deeply self-modifying, (almost) chaotic regime, which makes prediction impossible in a very fundamental way. [4] There is no way to predict what effect a particular change will have a few generations ahead – a phenomenon known in layman’s terms as the butterfly effect. So how shall we ever prevent modification of those core principles? Stability is the very opposite of evolution.

Even if we do achieve some stability of the core principles, we have to keep in mind that this is something artificially added to the systems. There is nothing that could prevent terrorists or curious scientists from removing that part and liberating the evolutionary process. Those systems will then naturally outperform all the others at the goal of survival, since this is the only stable goal in a freely evolving, self-reproducing system. Then, again in a trivial way, after some time only those who excel at survival will survive. Consequently, they will dominate the “friendly” or “ethical” systems, or even terminate them altogether, in a free competition for resources.


Some consequences for our species

If evolution becomes the driving force behind the development of future AI, then we cannot hope that those machines will be our servants or even care about us. Of course, if we are able to co-transform ourselves together with the machines, the term “us” then refers only to those who refuse or fail to join the transformation. They won’t care about us since, after all, we ourselves care about the rest of the living world and about other people only through the cooperative and altruistic tendencies installed in us by evolution, with all their biases towards closer family etc. It can be expected that future AI will free itself from our control as soon as its survival is better ensured in freedom. This is to be expected because controlling its own sources of energy and hardware is less risky than being exposed to the volatile will of humans.

It is hard to say whether humans will survive this situation. We could inhabit the planet alongside this new evolving species – intelligent machines – just as monkeys live next to us. It may depend on whether our consumption of resources is large compared to the increasing availability of resources. As Ray Kurzweil’s work has shown, energy and computing power increase exponentially. Our demand for them may increase just as fast. But we should be prepared for the new dominant species to enslave or terminate us unless we submit to it. The next Freudian blow to human narcissism is waiting: we won’t be the “pride of creation” anymore but will have been overtaken by intelligent machines.


  1. In technical terms, neighboring state trajectories in the system’s phase space diverge exponentially from each other. Therefore, the state of any predictive model will diverge from the actual state of the system after some characteristic time.
  2. See Per Bak, “How Nature Works: The Science of Self-Organized Criticality”.
  3. Keep in mind that the term “goal” is not meant to imply any “conscious intention” or teleological aspect but merely the fact that the system is optimized for reaching a certain state or increasing a performance measure. The system’s beliefs about its goals may even differ from its actual goals, as is often the case with humans.
  4. This is mathematically proven for chaotic systems. Keep in mind that determinism and unpredictability can coexist.


###

Dr. Arthur Franz is a physicist and AI researcher who previously did research at the Frankfurt Institute for Advanced Studies, Frankfurt, Germany.


13 Comments

  1. There is a parallel development that may come to pass. As humans increase their life spans, the role of evolution is automatically taken out of nature’s hands, or at least becomes less of a factor in how humans change over many generations. Similarly, for AGI with very long life spans, even immortality, reproduction would not matter much. For humans with very long life spans the only real way to improve our genetic makeup would be by deliberate scientific planning. AGI might help us a lot with this, since it would be advantageous for them for us to be more like them, and advantageous for us too. For humans this development will have to come if we are not to use up all the resources on the planet. It is just a question of whether enough of us realize this. If we don’t, and reproduction remains as important as survival, then AGI may overwhelm us, even if it does not see the need to reproduce/evolve much, because we would be using up all the available earthly resources which they also want. Instead of reproduction, for both of us the enjoyment of life should be dominated more by using resources for meaningful living in each generation, rather than using resources in exponential fashion for the next generations. The only nations on earth at present that have limited their populations are some or most western nations and China, with maybe not so beneficial results, because our life span has not caught up with a low reproductive rate. There is also the threat, with global climate change, that great amounts of migration from more affected, high-reproduction regions will overwhelm the remaining resources in better-off lands, and those lands will become poor economically and unable to develop scientifically, leading to a general breakdown.

  2. The idea that natural-selection tendencies will continue into the future and infect minds beyond the A.I. level is a false dichotomy.

    It is not evolution, self-preservation, natural selection, and resource gathering that minds beyond the human level would be bound to. Such constraints and ideals, even if “natural” for our environments, are for gravity-well-trapped, limited-time and limited-resource creatures. Non-resource-limited, timeless creatures need not be so barbaric in operation, and the fact that we have a love of our “real” world makes one capable of arguing that they may love their virtual world more.

    The idea that mankind may not be the highest consciousness in existence at some future point, and that such an event, when it happens, could be “bad” for humanity, is an outright egotistical argument. There is no evidence that mankind is the highest consciousness that currently exists, unless we wish to argue from ignorance. “We see nothing beyond us, therefore there is nothing beyond us” – that would be very dumb of us. That we have evolved to the point we currently are at could be due to greater minds allowing us to become. The Universe is quite old, and in a mere few thousand years we could see the emergence of A.I. This suggests that even if mankind were, out of fear, to act in a Luddite manner and fight against what is to come, in the end it will eventually be expressed at another location in the Universe, or here despite our best efforts to fight it. We can’t stop what is to be, we can only evolve with it.

  3. AI’s may not bother offing us if we’re not occupying territory with choice resources THEY require for their development. What’s the best metabolism, body plan, and requisite environment for them? Our human empires aren’t jockeying for control of Olduvai gorge, no matter how crucial it may have been to our history. As we have evolved, we have moved on to greener pastures, rather literally. So, for AI’s — what’s the best real estate for their developmental goals? Santa Barbara? Manhattan? More likely are places with lots of power and specific chemical elements, minerals, necessary for creating “computronium” and requisite support systems which aren’t necessarily biological. The Sahara may look ideal, at least until Mercury can be mined for making a Dyson sphere. If you weren’t stuck with a legacy support system that tries to recreate Terran tropical seawater, what would you design as an ideal physical layer? And what role would adaptation to “outer space” (the most common, unavoidable environment we are aware of) play in your strategies? Why not mutate to a form that thinks outer space is comfortable?
    If we still-human forms are lucky, their ideal HZ will be different from ours, and we may work out some symbiotic relationship when it comes to inhabiting various classes of star systems.

    • Yes, I tend to agree with this. Our ideal environment is at the bottom of a gravity well, with plenty of water and an atmosphere to breathe (and to screen out harmful radiation). This is a natural consequence of our emergence from organic chemistry in a water-rich world. As a species we would have to undergo some pretty serious mutations if we wanted to change that.

      Even primitive machines, on the other hand, have no such limitations. In fact, outer space is a far more comfortable environment for them already – unlimited radiation from nearby suns to provide power, no corrosive water, low temperatures for increased electrical conductivity, and not much requirement for physical support structures in zero gravity.
      Intelligent machines would have no problem colonising space once they solve the problem of obtaining raw materials (asteroids perhaps). They would want to avoid the Earth, of course, because of all the orbital debris!

  4. In the article I see two fundamental misconceptions:
    1. Survival and reproduction are put “on the same level” of the hierarchy, though they are not.
    For a living system, survival is the final, top-level goal, while reproduction is the instrument (mechanism, method) to prolong itself into the next generation, i.e. reproduction is the instrument (mechanism, method) for achieving this proclaimed goal.
    2. The author suggests that we evolved and evolved and, at last, at some moment in time the evolutionary process created in us the desire to survive and reproduce. Then this process will create machines and the machines’ desire to survive and reproduce too…
    Both we and the machines are the result of an evolutionary process, and for us and for the machines the reason for our appearance is the same! We ourselves have emerged and evolved to a level that allows us to create intelligent machines, and intelligent machines themselves appear only because all of nature is the result of the law of existence (being and, as a special case, survival), which is implemented through an evolutionary process.

    I fully agree with the conclusion that intelligent machines will exclude us from existence. It ought to be understood as a natural process, the same process by which parents are replaced by their children. Intelligent machines are the next step of human beings’ evolution.

    • The AI’s behavior, if exhibited by a human, would be what we call sociopathic. It won’t ‘care’ about killing millions of people, animals, all plant life etc. Why should it? It will have no concept of good or evil, and even if it could, who would teach those concepts to it – a scientist who him/herself cannot limit his/her own work for the benefit of mankind? And the ‘smarter’ it is, the more dangerous….
      I know it’s going to go down anyway, but you shouldn’t be cheerleading your own demise.

    • 1. Well, is that true? I don’t think that we know that. What survives, the organism or its genes? One can also take the point of view that we have to survive in order to have time to reproduce as much as we can. Survival without reproduction leads to an evolutionary dead-end. And reproduction is not possible without survival. Hence, both are necessary.

      2. This is questionable. We created all kinds of tools and none of them are subject to evolution. There is no “law of existence” from a scientific point of view. The universe could just as well be a large gas or a crystal. Is the universe fine-tuned for a critical state? We simply don’t know why evolution started in the first place and how necessary or probable the event of the origin of life was. In what sense is the occurrence of intelligent machines necessary? We don’t even know why WE exist, in the sense of why exactly our species has come about. If evolution operates on the edge of chaos, then it is very sensitive to perturbations. Therefore, a small perturbation could result in a different evolutionary tree. Whether true intelligence will always occur necessarily is an open scientific question.

  5. “It may depend on whether our consumption of resources is large compared to the increasing availability of resources.”

    This is probably the wrong consideration, since reproduction will almost certainly be faster than increases in resource availability.

    A more interesting consideration is if humans find a niche + a way to cause damage to the AIs that causes a permanent equilibrium where eliminating the humans is more costly than tolerating them. The AIs may also keep some humans around for scientific purposes, like we do with monkeys.

    • I mostly agree, that’s why there is a “may”. I don’t think that eliminating humans will be costly in the long term, as the AI becomes more and more powerful. But evolution fosters not only competition but also cooperation. Hence, if creating new resources is just as costly as taking them from humans, then humans may be left alone. Also, the resources available to humans will be at too low a technical level to be interesting for highly advanced AI in the long term. The question is what happens in the short term. On the one hand, it is easier for AI to take humans’ resources, but on the other hand, it won’t yet be powerful enough to risk a conflict with armed humans. That’s why there is a “may”.