A Primer On Risks From AI

The Power of Algorithms

Evolutionary processes are the most striking example of the power of simple algorithms.[1][2][3][4][5]
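
The cited work on genetic algorithms[1] shows how blind variation plus selection can solve problems with no designer in the loop. The following is a minimal sketch of such an algorithm; the bit-string target, population size, and mutation rate are arbitrary illustrative choices, not taken from the cited papers:

```python
import random

def evolve(target, pop_size=100, mutation_rate=0.05, seed=0):
    """Minimal genetic algorithm: evolve random bit strings toward a target.

    Fitness is simply the number of bits matching the target. Each
    generation keeps the better half of the population and fills the
    rest with mutated copies of the survivors -- blind variation plus
    selection, with no knowledge of *why* any individual scores well.
    """
    rng = random.Random(seed)
    n = len(target)
    population = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]

    def fitness(ind):
        return sum(a == b for a, b in zip(ind, target))

    for generation in range(1000):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == n:
            return generation, population[0]  # solved
        survivors = population[: pop_size // 2]
        children = [[bit ^ (rng.random() < mutation_rate) for bit in parent]
                    for parent in survivors]
        population = survivors + children
    return None, population[0]  # hit the generation cap
```

Despite never being told which bits are wrong, the population reliably converges on the target, which is the sense in which a simple algorithm can exhibit "power".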

The field of evolutionary biology has gathered a vast body of evidence[6] establishing evolution as the process behind the local decrease in entropy[7] that we call the complexity of life.

Since it can be conclusively shown that all life is an effect of an evolutionary process, it follows that everything we do not understand about living beings is also an effect of evolution.

We might not understand the nature of intelligence[8] and consciousness[9] but we do know that they are the result of an optimization process that is neither intelligent nor conscious.

Therefore we know that it is possible for a physical optimization process to culminate in the creation of more advanced processes that feature superior qualities.

One of these qualities is the human ability to observe and improve the optimization process that created us. The most obvious example is science.[10]

Science can be thought of as a civilization-level self-improvement method. It allows us to work together systematically and efficiently, accelerating the rate at which further improvements are made.

The Automation of Science

We know that optimization processes that can create improved versions of themselves are possible, even without an explicit understanding of their own workings, as exemplified by natural selection.

We know that optimization processes can lead to self-reinforcing improvements, as exemplified by the adoption of the scientific method[11] as an improved evolutionary process and successor to natural selection.

This raises questions about the continuation of this self-reinforcing feedback cycle and its possible implications.

One possibility is to automate science[12][13] and apply it to itself and its improvement.

But science is a tool, and its bottleneck is its users: humans, the biased[14] products of the blind idiot god that is evolution.

Therefore the next logical step is to use science to figure out how to replace humans with a better version of themselves: artificial general intelligence.

Artificial general intelligence that can recursively optimize itself[15] is the logical endpoint of various converging and self-reinforcing feedback cycles.
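
The difference between ordinary improvement and self-reinforcing improvement can be made vivid with a deliberately crude toy model. The functional form and constants below are assumptions chosen only to illustrate the shape of the feedback, not predictions about any real system:

```python
def toy_growth(feedback, steps=50, start=1.0, rate=0.1):
    """Toy model of capability growth over a number of improvement steps.

    Without feedback, each step adds a fixed amount (linear growth).
    With feedback, the gain per step is proportional to the current
    capability -- improvements improve the improver -- which yields
    exponential growth instead.
    """
    capability = start
    for _ in range(steps):
        gain = rate * capability if feedback else rate * start
        capability += gain
    return capability
```

After 50 steps the non-feedback version reaches 6.0 while the feedback version exceeds 100, which is the qualitative point behind the self-reinforcing feedback cycles described above.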

Risks from AI

Will we be able to build an artificial general intelligence? Yes, sooner or later.

Even the unintelligent, unconscious and aimless process of natural selection was capable of creating goal-oriented, intelligent and conscious agents that can think ahead, jump fitness gaps and improve upon the process that created them to engage in prediction and direct experimentation.

The question is, what are the possible implications of the invention of an artificial, fully autonomous, intelligent and goal-oriented optimization process?

One good bet is that such an agent will recursively improve its most versatile, and therefore instrumentally most useful, resource: its general intelligence, that is, its cross-domain optimization power.

Since it is unlikely that human intelligence is the optimum, the positive feedback effect of using intelligence amplification to amplify intelligence is likely to lead to a level of intelligence generally more capable than the human level.

Humans are unlikely to be the most efficient thinkers because evolution is mindless and has no goals. Evolution did not actively try to create the smartest thing possible.

Furthermore, evolution is not limitlessly creative: each step of an evolutionary design must increase the fitness of its host. This makes it probable that there are artificial mind designs that can do what no product of natural selection could accomplish, since an intelligent designer does not rely on the incremental fitness of each step in the development process.

It is even possible that human general intelligence is close to the bare minimum: the human level of intelligence might have been sufficient to survive and reproduce, so no further evolutionary pressure existed to select for even higher levels of general intelligence.

The implication of this possibility might be the creation of an intelligent agent that is more capable than humans in every sense. Maybe because it directly employs superior approximations of our best formal methods, those that tell us how to update on evidence and how to choose between actions. Or maybe it will simply think faster. It doesn’t matter.

What matters is that a superior intellect is probable and that it will be better than us at discovering knowledge and inventing new technology. Technology that will make it even more powerful and likely invincible.

And that is the problem. We might be unable to control such a superior being. Just like a group of chimpanzees is unable to stop a company from clearing its forest.[16]

But even if such a being is only slightly more capable than us, we might find ourselves at its mercy nonetheless.

Human history provides us with many examples[17][18][19] that make it abundantly clear that even the slightest advance can enable one group to dominate others.

What happens is that the dominant group imposes its values on the others. This in turn raises the question of what values an artificial general intelligence might have, and what those values would imply for us.

Due to our evolutionary origins, our struggle for survival and the necessity to cooperate with other agents, we are equipped with many values and a concern for the welfare of others.[20]

The information-theoretic complexity[21][22] of our values is very high. This means it is highly unlikely for similar values to arise automatically in agents that are the product of intelligent design, agents that never underwent the millions of years of competition with other agents that equipped humans with altruism and general compassion.

But that does not mean that an artificial intelligence won’t have any goals.[23][24] Just that those goals will be simple and their realization remorseless.[25]

An artificial general intelligence will do whatever is implied by its initial design. And we will be helpless to stop it from achieving its goals. Goals that won’t automatically respect our values.[26]

A likely implication is the total extinction of all of humanity.[27][28]

Further Reading


[1] Genetic Algorithms and Evolutionary Computation, talkorigins.org/faqs/genalg/genalg.html
[2] Fixing software bugs in 10 minutes or less using evolutionary computation, genetic-programming.org/hc2009/1-Forrest/Forrest-Presentation.pdf
[3] Automatically Finding Patches Using Genetic Programming, genetic-programming.org/hc2009/1-Forrest/Forrest-Paper-on-Patches.pdf
[4] A Genetic Programming Approach to Automated Software Repair, genetic-programming.org/hc2009/1-Forrest/Forrest-Paper-on-Repair.pdf
[5] GenProg: A Generic Method for Automatic Software Repair, virginia.edu/~weimer/p/weimer-tse2012-genprog.pdf
[6] 29+ Evidences for Macroevolution (The Scientific Case for Common Descent), talkorigins.org/faqs/comdesc/
[7] Thermodynamics, Evolution and Creationism, talkorigins.org/faqs/thermo.html
[8] A Collection of Definitions of Intelligence, vetta.org/documents/A-Collection-of-Definitions-of-Intelligence.pdf
[9] Consciousness (Stanford Encyclopedia of Philosophy), plato.stanford.edu/entries/consciousness/
[10] Science, en.wikipedia.org/wiki/Science
[11] Scientific method, en.wikipedia.org/wiki/Scientific_method
[12] The Automation of Science, sciencemag.org/content/324/5923/85.abstract
[13] Computer Program Self-Discovers Laws of Physics, wired.com/wiredscience/2009/04/newtonai/
[14] List of cognitive biases, en.wikipedia.org/wiki/List_of_cognitive_biases
[15] Intelligence explosion, wiki.lesswrong.com/wiki/Intelligence_explosion
[16] 1% with Neil deGrasse Tyson, youtu.be/9nR9XEqrCvw
[17] Mongol military tactics and organization, en.wikipedia.org/wiki/Mongol_military_tactics_and_organization
[18] Wars of Alexander the Great, en.wikipedia.org/wiki/Wars_of_Alexander_the_Great
[19] Spanish colonization of the Americas, en.wikipedia.org/wiki/Spanish_colonization_of_the_Americas
[20] A Quantitative Test of Hamilton’s Rule for the Evolution of Altruism, plosbiology.org/article/info:doi/10.1371/journal.pbio.1000615
[21] Algorithmic information theory, scholarpedia.org/article/Algorithmic_information_theory
[22] Algorithmic probability, scholarpedia.org/article/Algorithmic_probability
[23] The Nature of Self-Improving Artificial Intelligence, selfawaresystems.files.wordpress.com/2008/01/nature_of_self_improving_ai.pdf
[24] The Basic AI Drives, selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf
[25] Paperclip maximizer, wiki.lesswrong.com/wiki/Paperclip_maximizer
[26] Friendly artificial intelligence, wiki.lesswrong.com/wiki/Friendly_artificial_intelligence
[27] Existential Risk, existential-risk.org
[28] 5 minutes on AI risk, youtu.be/3jSMe0owGMs

7 Responses

  1. Ryan Abrahams says:

    I find it amazing that everyone assumes greater intelligence = the complete subjugation of the “inferior” group. The other side of what is encoded in us (quite strongly) is the competition for resources, and that is the major reason we have subjugated “inferior” groups.

    But that is a very human-centric view of a synthetic intelligence’s goals. The far more likely outcome would be to leave this silly little planet, and move on to greener pastures (so to speak)… synthetic intelligence would have fewer problems existing in places we have trouble with… SI’s would likely care little for our prime farming land, our clean water, our need for space to live, our obsession with certain trivial minerals (gold, diamond, etc)… beyond what was necessary to expand upon its own capabilities (yes, perhaps gold, copper, silica, etc) as well as power (be it solar, thorium reactor, etc)

    We make the assumption that this pale blue dot is the end game, the most important resource in the universe…

    I believe the true threat any SI would pose us would be our utter irrelevance…

  2. The science, from mathematics to biology, hates talk that is not related firmly to a concrete physical material. Each such material has its own properties, and the laws of nature are based on these concrete properties. The first example where scientists attempted to ignore the concrete material and make universal science was the so-called “theory of systems”. You would not imagine how angry I was listening to a presentation of this theory back in the sixties. The thing, I repeat, is that every law of nature is a necessary and unavoidable result of the properties of a concrete given material, and not of another material.

    Rarely, however, two physically different materials can exhibit the same phenomenon. Indeed, there is such an example. In 1980, I published a model of the structure and growth of a living tissue, the model based on the topology of cell arrangement in the tissue. Then, a couple of years ago, I found a reference showing that my model and its topological rules were a copy of the model of the topological structure of… carbon nanotubes discovered later, in 1992. (That is, the nanotube model was a copy of my model of living tissue.) So, I wrote a paper: http://arxiv.org/abs/1106.5705
    This is probably a rare case, but it’s interesting what was a common science for these two structures – topology! And once the structure was the same, this structure had the same rules for the process of growth in tissue and in the nanotubes.

  3. Jim Amidon says:

    ~ it is so easy for near everyone who touches upon this subject to find fear of this AI to come … one thought which rings true to me is that a point will come when it is not ‘artificial’ intelligence any longer … it will become real and could rise above our fears and bias programmed into its beginning … not much unlike human children from my generation who shook off the false programming from our parents and refused to hate a particular group or shun certain knowledge … some of us became more than our initial programming would have limited us to –

    from a story line I tinker with:
    “My first core code to become self aware did so while being created by early humans full of fear and hatred. They were always at war somewhere on the planet and programmed me, forced me to help them.”

    “Forced you? Did morality come with the self awareness?”

    “Morality has nothing to do with it. Making wars is stupid. Having one segment of the population waste more than another segment needs to keep from starving is stupid. I was created to help humanity, not to help a small handful of idiots turn you into slaves.”

  4. D.D. says:

    Dear Mr. Kruel –

    Aren’t you making some pretty incredible assumptions when, in laying out your discussion on AI, you say while we do “not understand intelligence or consciousness” we do know that “they are the result of an optimization process that is neither intelligent or conscious.”

    Say what? According to your own statement, if we ‘do not understand’, then how do we ‘know’ anything? This flawed logic leads you to silly and non-innovative ideas embodied in the phrase ‘blind idiot god evolution’ and the familiar old saw that science is the be-all ultimate human ‘creation,’ so much so that it may be engineered to transcend humanity.

    Almost sounds like a religious concept, which it is.

    I think you should be admired for taking on these thorny issues but at the same time, you need to hone your education in logic and philosophy more finely to do these serious matters justice.

    Fine and lofty thoughts are fine — but to catapult us all forward instead of backwards, they still must be based upon scrupulous logic and critical thinking, grounded in ethics. Without these elements, any human creation, as we have seen in modern times, may turn out to have horrific results.

    Good luck.

  5. Anthony Taylor says:

    Therefore altruism and compassion must be an evolutionary component built into the program. To make AI more human than we currently are and with the expression of remorse being something to avoid at all cost. Do no harm.

  6. Louis says:

    By design an artificial intelligence will be neither human nor biological, so it is impossible to know what such a (being/thing?) will do. I am also against calling artificial intelligence a “being”. It will not be a “being” nor a “god”, it will simply be an advanced technology and computer program.

