“Intelligence” just means an ability to do mental/calculation tasks, averaged over many tasks. I’ve always found it plausible that machines will continue to do more kinds of mental tasks better, and eventually be better at pretty much all of them. But what I’ve found hard to accept is a “local explosion.”
A rebuttal to Ramez Naam: the singularity is still closer than it appears.
Ben Goertzel responds to some common objections covered in an article on io9 by George Dvorsky.
Perhaps, as Prof. Stephen Hawking thinks, it will be difficult to “control” Artificial Intelligence (AI) in the long term. But perhaps we shouldn’t “control” the long-term development of AI, because that would be like preventing a child from becoming an adult, and that child is you.
In a March 2nd, 2014 New York Times article, Susan Schneider used the recent film ‘Her’ to analyze some of the philosophical challenges to the concept of mind uploading, whereby a person’s mind is transferred from a brain to some sort of computer. She presented some common (although admittedly fascinating) thought experiments, drawing conclusions to match. I offer alternative explanations for these scenarios.
Bioethics cage match between Sheldon Krimsky (Professor, Tufts University) and Lord Robert Winston (Emeritus Professor, Fertility Studies, Imperial College) vs. Nita Farahany (Professor of Law and Philosophy and Professor of Genome Sciences & Policy, Duke University) and Lee Silver (Professor, Princeton University).