A Primer On Risks From AI
The Power of Algorithms
Since it can be conclusively shown that all life is the effect of an evolutionary process, it follows that everything we do not understand about living beings is also an effect of evolution.
Therefore we know that it is possible for a physical optimization process to culminate in the creation of more advanced processes that feature superior qualities.
One of these qualities is the human ability to observe and improve the optimization process that created us; the most obvious example is science.
Science can be thought of as a civilization-level self-improvement method. It allows us to work together in a systematic and efficient way and to accelerate the rate at which further improvements are made.
The Automation of Science
We know that optimization processes that can create improved versions of themselves are possible, even without an explicit understanding of their own workings, as exemplified by natural selection.
We know that optimization processes can lead to self-reinforcing improvements, as exemplified by the adoption of the scientific method as an improved evolutionary process and successor to natural selection.
This raises questions about the continuation of this self-reinforcing feedback cycle and its possible implications.
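The idea that an optimization process can improve candidate solutions without any understanding of why they work can be made concrete with a toy genetic algorithm. This is a minimal sketch for illustration only; the target string, fitness function, and mutation rate are assumptions chosen for the demo, not anything from the essay:

```python
import random

TARGET = "methuselah"
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def fitness(candidate: str) -> int:
    """Count characters matching the target (the 'environment')."""
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate: str, rate: float = 0.1) -> str:
    """Randomly replace characters, with no model of the goal."""
    return "".join(
        random.choice(ALPHABET) if random.random() < rate else c
        for c in candidate
    )

def evolve(generations: int = 2000, population_size: int = 50) -> str:
    """Blind variation plus selection: no step 'understands' the target."""
    population = ["".join(random.choices(ALPHABET, k=len(TARGET)))
                  for _ in range(population_size)]
    for _ in range(generations):
        # Selection: keep the fitter half, refill with mutated copies.
        population.sort(key=fitness, reverse=True)
        survivors = population[: population_size // 2]
        population = survivors + [
            mutate(random.choice(survivors))
            for _ in range(population_size - len(survivors))
        ]
        if fitness(population[0]) == len(TARGET):
            break
    return max(population, key=fitness)

best = evolve()
print(best, fitness(best))
```

Note that nothing in the loop inspects *why* a candidate is fit; selection pressure alone drives the improvement, which is the point the paragraph above makes about natural selection.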
But science is a tool, and its bottleneck is its users: humans, the biased products of the blind idiot god that is evolution.
Therefore the next logical step is to use science to figure out how to replace humans with a better version of themselves: artificial general intelligence.
Artificial general intelligence that can recursively optimize itself is the logical endpoint of various converging and self-reinforcing feedback cycles.
Risks from AI
Will we be able to build an artificial general intelligence? Yes, sooner or later.
Even the unintelligent, unconscious, and aimless process of natural selection was capable of creating goal-oriented, intelligent, and conscious agents that can think ahead, jump fitness gaps, and improve upon the process that created them by engaging in prediction and direct experimentation.
The question is, what are the possible implications of the invention of an artificial, fully autonomous, intelligent and goal-oriented optimization process?
One good bet is that such an agent will recursively improve its most versatile, and therefore instrumentally most useful, resource: its general intelligence, i.e. its cross-domain optimization power.
Since it is unlikely that human intelligence is the optimum, the positive feedback effect that results from using amplified intelligence to further amplify intelligence is likely to lead to a level of intelligence generally more capable than the human level.
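The difference between ordinary improvement and self-reinforcing improvement can be shown with a toy numerical model. This is purely illustrative (the growth rule and gain parameter are assumptions, not a claim about how real AI capability scales): a fixed improver yields linear growth, while an improver that itself improves yields geometric growth.

```python
def fixed_improver(capability: float, steps: int, gain: float = 0.1) -> float:
    """The improver stays constant: capability grows linearly."""
    for _ in range(steps):
        capability += gain
    return capability

def self_improver(capability: float, steps: int, gain: float = 0.1) -> float:
    """Each improvement is applied by the already-improved system:
    capability grows geometrically."""
    for _ in range(steps):
        capability += gain * capability  # improvement scales with capability
    return capability

print(fixed_improver(1.0, 50))  # roughly 6.0
print(self_improver(1.0, 50))   # roughly 117.4, i.e. 1.1**50
```

The gap between the two curves widens without bound as the number of steps grows, which is the "positive feedback effect" the paragraph above describes.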
Humans are unlikely to be the most efficient thinkers because evolution is mindless and has no goals. Evolution did not actively try to create the smartest thing possible.
Furthermore, evolution is not limitlessly creative: each step of an evolutionary design must increase the fitness of its host. This makes it probable that there are artificial mind designs that can do what no product of natural selection could accomplish, since an intelligent artificer does not rely on the incremental fitness of each step in the development process.
It is even possible that human general intelligence is the bare minimum: the human level of intelligence may have been just sufficient to survive and reproduce, so that no further evolutionary pressure existed to select for even higher levels of general intelligence.
The implication of this possibility is the creation of an intelligent agent that is more capable than humans in every sense. Perhaps because it directly employs superior approximations of our best formal methods, which tell us how to update on evidence and how to choose between various actions. Or maybe it will simply think faster. It doesn't matter.
What matters is that a superior intellect is probable and that it will be better than us at discovering knowledge and inventing new technology. Technology that will make it even more powerful and likely invincible.
And that is the problem. We might be unable to control such a superior being. Just like a group of chimpanzees is unable to stop a company from clearing its forest.
But even if such a being is only slightly more capable than us, we might find ourselves at its mercy nonetheless.
What tends to happen is that the dominant group imposes its values on the others. This in turn raises the question of what values an artificial general intelligence might have, and what the implications of those values would be for us.
Due to our evolutionary origins, our struggle for survival and the necessity to cooperate with other agents, we are equipped with many values and a concern for the welfare of others.
The information-theoretic complexity of our values is very high, which means it is highly unlikely for similar values to arise automatically in agents that are the product of intelligent design: agents that never underwent the millions of years of competition with other agents that equipped humans with altruism and general compassion.
An artificial general intelligence will do whatever is implied by its initial design, and we will be helpless to stop it from achieving its goals, which won't automatically respect our values.
Further Reading
- What should a reasonable person believe about the Singularity?
- The Singularity: A Philosophical Analysis
- Intelligence Explosion: Evidence and Import
- Why an Intelligence Explosion is Probable
- Artificial Intelligence as a Positive and Negative Factor in Global Risk
- From mostly harmless to civilization-threatening: pathways to dangerous artificial general intelligences
- The Hanson-Yudkowsky AI-Foom Debate
- Facing The Singularity
- Intelligence Explosion
- Singularity FAQ
References
- Genetic Algorithms and Evolutionary Computation, talkorigins.org/faqs/genalg/genalg.html
- Fixing software bugs in 10 minutes or less using evolutionary computation, genetic-programming.org/hc2009/1-Forrest/Forrest-Presentation.pdf
- Automatically Finding Patches Using Genetic Programming, genetic-programming.org/hc2009/1-Forrest/Forrest-Paper-on-Patches.pdf
- A Genetic Programming Approach to Automated Software Repair, genetic-programming.org/hc2009/1-Forrest/Forrest-Paper-on-Repair.pdf
- GenProg: A Generic Method for Automatic Software Repair, virginia.edu/~weimer/p/weimer-tse2012-genprog.pdf
- 29+ Evidences for Macroevolution (The Scientific Case for Common Descent), talkorigins.org/faqs/comdesc/
- Thermodynamics, Evolution and Creationism, talkorigins.org/faqs/thermo.html
- A Collection of Definitions of Intelligence, vetta.org/documents/A-Collection-of-Definitions-of-Intelligence.pdf
- The Automation of Science, sciencemag.org/content/324/5923/85.abstract
- Computer Program Self-Discovers Laws of Physics, wired.com/wiredscience/2009/04/newtonai/
- List of cognitive biases, en.wikipedia.org/wiki/List_of_cognitive_biases
- Intelligence explosion, wiki.lesswrong.com/wiki/Intelligence_explosion
- 1% with Neil deGrasse Tyson, youtu.be/9nR9XEqrCvw
- Mongol military tactics and organization, en.wikipedia.org/wiki/Mongol_military_tactics_and_organization
- Wars of Alexander the Great, en.wikipedia.org/wiki/Wars_of_Alexander_the_Great
- Spanish colonization of the Americas, en.wikipedia.org/wiki/Spanish_colonization_of_the_Americas
- A Quantitative Test of Hamilton's Rule for the Evolution of Altruism, plosbiology.org/article/info:doi/10.1371/journal.pbio.1000615
- Algorithmic information theory, scholarpedia.org/article/Algorithmic_information_theory
- Algorithmic probability, scholarpedia.org/article/Algorithmic_probability
- The Nature of Self-Improving Artificial Intelligence, selfawaresystems.files.wordpress.com/2008/01/nature_of_self_improving_ai.pdf
- The Basic AI Drives, selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf
- Paperclip maximizer, wiki.lesswrong.com/wiki/Paperclip_maximizer
- Friendly artificial intelligence, wiki.lesswrong.com/wiki/Friendly_artificial_intelligence
- Existential Risk, existential-risk.org
- 5 minutes on AI risk, youtu.be/3jSMe0owGMs