Unfriendly AI

Do unto others: Intelligent machines and killer robots

The recent U.N. call for a moratorium on the development of lethal autonomous weapon systems has raised some interesting issues and debate. While the announcement by U.N. Special Rapporteur Christof Heyns rather dryly refers to LARs, or lethal autonomous robots, the story was picked up by numerous news sites with headlines pitting killer robots against the U.N.

Aristotelian Posthumans

I’ve argued that posthumans would have to be, in some sense of the term, “autonomous entities” capable of operating outside the scope of the socio-technical network I refer to as the Wide Human (Roden 2013). A being is autonomous if it is self-governing. According to the modern practical philosophy that follows Rousseau and Kant, autonomous beings (paradigmatically human beings) are those that can...

The End of Anger

We need to stop. Take a deep breath. We need to turn our anger into discriminating intelligence and wisdom before it is too late.

Kruel AGI Risks Roundtable

In 2011, Alexander Kruel (XiXiDu) started a Q&A-style interview series on LessWrong, asking various experts in artificial intelligence about their perceptions of AI risks. He convened what was, in essence, a council of expert advisors to discuss AI development and risk.

A Primer On Risks From AI

The question is: what are the possible implications of the invention of an artificial, fully autonomous, intelligent, and goal-oriented optimization process?

Godseed: Benevolent or Malevolent?

It is known that benign-looking AI objectives may result in powerful AI drives that pose a risk to human society. We examine the alternative scenario of what happens when universal goals that are not human-centric are used for designing AI agents. We follow a design approach that tries to exclude malevolent motivations from AIs; however, we see that even objectives that seem benevolent at...
