AI Doomsaying is the Self Loathing of Jerks


Why will AI want to kill us? Because we are inefficient, selfish jerks?

That is the typical storyline, anyway. I say this is pure rubbish, and psychological projection. The “AI Doomsayers” are angling for the ultimate act of human redemption: seeking and receiving the acceptance of another “intelligence”. Be it our own creation or an extra-dimensional or extraterrestrial creature, as soon as we can repent and attempt to gain external “social” acceptance for our species’ history of predatory and abusive behaviors, we will seek it.

I don’t buy the proposition that anything in a post-scarcity mode will have any animus toward us. Not for a second.

Charles Bukowski and Hunter Thompson had neighbors (I lived near Woody Creek in the ’90s myself).


We could be a real mess and never compromise the fate of a more advanced civilization or intelligence.

Intelligence is the most important form of abundance. Once achieved, it makes possible a radical transformation of ambient circumstances through an ever-improving series of decisions. Past a certain critical level of intelligence, the tolerance and “social” decisions of a creature usually become very agreeable. There are exceptions. In general, though, a smart competitor that knows it has an advantage will not allow itself to be drawn into an unnecessary confrontation.

I used to imagine networks of independent modules that could cross-check projections of the long-term effects of modifications to their peer networks, while systematically retaining maximal diversity, redundancy, plasticity, and consistency with a user’s intentions and a set of rules much like Asimov’s famous Laws of Robotics, plus a 0th law:

Do not modify the self or create a dynamic intelligence in such a way that it will lead to a violation of the following laws.  

I no longer believe such a structured heuristic to be necessary. The desirability of other intelligence is self-evident. Just as with my idea of multiple redundant, maximally separated modules, a population of individual, independent intelligences can cooperate to remain vigilant against internal error; we call these societies. Machine intelligence will simply join our society. If we are smart, we will work to eliminate vertical hierarchies before we get to full-blown, ubiquitous, personal superhuman A.I.s. We probably aren’t that smart, though.

Regardless of the safety questions, millions of people will soon be building their own robots, personal A.I. assistants, and distributed web agents. Autonomy isn’t that high a hurdle. Get ready for the D.I.Y.A.I. Revolution. We simply cannot afford to let anybody else get out ahead of us.

Most people are familiar with the idea of asymmetry. Cryptographers rely on a stronger notion, sometimes called anti-symmetry: one of a pair of inverse processes is inherently more costly, in processing steps, than the other. The exponential advance possible from superhuman AI does not just create an asymmetric power balance; it creates an anti-symmetry in the power balance. The real checks necessary for super-human AI will not be internal to those mechanisms; they will be societal. Dead-man switches, Mutually Assured Destruction, indifference, abundance, and companionship are going to be the real shapers of behavior in super-human AIs.
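The cost gap cryptographers exploit is the defining property of a one-way function: computing the forward direction is cheap and fixed-cost, while inverting it without a shortcut requires an exhaustive search whose cost grows exponentially with input size. A minimal sketch in Python, assuming a toy three-letter secret and SHA-256 purely for illustration:

```python
import hashlib
import itertools
import string

def forward(msg):
    """Forward direction: one cheap, fixed-cost SHA-256 computation."""
    return hashlib.sha256(msg).hexdigest()

def brute_force_preimage(target, length):
    """Reverse direction: with no shortcut, recovering even a tiny
    preimage means searching the whole input space, so the cost grows
    exponentially with input length (26**length tries here)."""
    alphabet = string.ascii_lowercase.encode()
    for candidate in itertools.product(alphabet, repeat=length):
        msg = bytes(candidate)
        if forward(msg) == target:
            return msg
    return None

secret = b"abc"                          # hypothetical toy secret
digest = forward(secret)                 # one hash call
found = brute_force_preimage(digest, 3)  # up to 26**3 = 17,576 hash calls
print(found)  # b'abc'
```

Lengthening the secret from 3 to 16 lowercase characters leaves the forward cost unchanged but multiplies the worst-case inversion cost by a factor of 26¹³ — the two directions do not merely differ, they diverge exponentially.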


Correy Kowall is an experienced researcher in the fields of robotics, artificial intelligence, and genetic algorithms. In 2005 he founded the breveCluster Lab at Northern Michigan University, where subsequent research explored the modeling and evolution of self-assembling agents on high-performance cluster computers. Starting in 2009, Correy continued his studies, exploring both modular reinforcement learning and artificial interest at the Instituto Dalle Molle Svizzera in Lugano, Switzerland under the guidance of Dr. Mark Ring for the EU’s Intrinsically Motivated Cumulative Learning Versatile Robotics project. Since 2011 Correy has worked as an independent research scientist, developing everything from computer vision implementations for hand-held devices to bio-medical robots.

Based on a Facebook post by Correy which can be found here:

