The Fermi Paradox, Self-Replicating Probes, and the Interstellar Transportation Bandwidth

Keith B. Wiley (kwiley@keithwiley.com). Affiliation at the time of writing: Univ. of Washington, Dept. of Astronomy, Box 351580, Seattle WA 98195, USA. Originally written November 25, 2011. Citation: Wiley K. The Fermi Paradox, Self-Replicating Probes, and the Interstellar Transportation Bandwidth....


Interview – Kim Stanley Robinson – Utopia, Transhumanism, Social Systems, Climate Change & Strategic Foresight

Renowned science fiction author Kim Stanley Robinson is interviewed by H+ director Adam Ford. Robinson's novels have won eleven major science fiction awards and have been nominated on twenty-nine occasions. Robinson won the Hugo Award for Best Novel with Green Mars (1994) and Blue Mars (1997); the Nebula Award for Best Novel with Red Mars (1993) and 2312 (2012); the Nebula Award for Best Novella with The Blind Geometer (1986); the World Fantasy Award with Black Air (1983); the John W. Campbell Memorial Award for Best Science Fiction Novel with Pacific Edge (1991); and Locus Awards for The Wild Shore (1985), A Short, Sharp Shock (1991), Green Mars (1994), Blue Mars (1997), The Martians (2000), and The Years of Rice and Salt (2003).


Is Artificial Superintelligence Research Ethical?

Video Interview: Recently I interviewed Roman Yampolskiy, a Latvian-born computer scientist at the University of Louisville, known for his work on behavioral biometrics, security of cyberworlds, and artificial intelligence safety. He holds a PhD from the University at Buffalo (2008). He is currently the director of the Cyber Security Laboratory in the Department of Computer Engineering and Computer Science at the Speed School of Engineering.


Reward Function Integrity in Artificially Intelligent Systems

Video and abstract of Roman's presentation at Oxford University. The presentation analyzes historical examples of wireheading in man and machine and evaluates a number of approaches proposed for dealing with reward-function corruption. While simplistic optimizers driven to maximize a proxy measure for a particular goal will always be subject to corruption, sufficiently rational self-improving machines are believed by many to be safe from wireheading problems.