Category: Complex Systems


Will super-human artificial intelligence (AI) be subject to evolution?

There has been much speculation about the future of humanity in the face of super-humanly intelligent machines. Most dystopian scenarios seem to be driven by plain fear of entities that could be smarter and stronger than we are. After all, how are we supposed to know what goals such machines would pursue? Is it possible to build “friendly” AI? If we attempt to turn them off, will they care? Would they care about their own survival in the first place? There is no a priori reason to assume that intelligence necessarily implies any particular goals, such as survival and reproduction.