Can Singularity Critics Pass the Turing Test?
George Dvorsky, Ramez Naam, Erik Sofge, and other Singularity critics will appear increasingly silly over the coming years. Their detachment from reality is beginning to make them resemble badly programmed AIs.
Publishing Singularity-critical articles shortly before a narrow-AI passed the Turing Test was exceptionally bad timing on their part. The test, held in London on 7 June 2014, featured a narrow-AI chatbot called Eugene Goostman.
Admittedly there is controversy regarding this Turing Test. Ben Goertzel thinks the result means “pretty much nothing.” George Dvorsky likened the test to bull excrement. Other critics have dismissed it too.
The dispute is easy to address. This alleged Turing Test pass is only one minor item in a growing body of evidence. The test appears to have validity from the viewpoint of at least one judge; it is important to note the judges were genuinely fooled by Eugene. I have also been informed that the online version of Eugene Goostman, mentioned by some journalists, is an old version.
Even if we completely disregard the results, we can at least note the large cultural interest in AI, an interest grounded in a rapidly evolving landscape of narrow-AI. We are at a pivotal point regarding AI. Even if the test isn’t valid, it will nevertheless be remembered as a marker of the moment shortly before all AI scepticism evaporated.
It’s an “extraordinary moment” the FT explained: “Scepticism aside, AI is enjoying a resurgence. The goal is to build a machine that thinks like a human, and Google leads a pack of companies keen to see this promise fulfilled.”
Robert Llewellyn judged the test alongside Professor Martin Smith of Middlesex University, president of the Cybernetics Society. According to Llewellyn, Smith is possibly the more rigorous judge, yet even he was fooled on four out of ten occasions. Llewellyn’s own AI detection was less accomplished: he states he was fooled on six out of ten occasions.
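The judges’ numbers can be put alongside the commonly cited pass criterion. This is a minimal sketch, assuming the figures reported above (Smith fooled 4 of 10 times, Llewellyn 6 of 10) and the roughly 30% deception threshold often derived from Turing’s 1950 prediction; the function name is illustrative, not part of any official test protocol.

```python
def deception_rate(fooled: int, sessions: int) -> float:
    """Fraction of sessions in which a judge mistook the AI for a human."""
    return fooled / sessions

# Figures reported by Robert Llewellyn for the 7 June 2014 test.
judges = {
    "Martin Smith": (4, 10),
    "Robert Llewellyn": (6, 10),
}

for name, (fooled, total) in judges.items():
    print(f"{name}: fooled in {deception_rate(fooled, total):.0%} of sessions")

# Threshold commonly attributed to Turing's prediction (~30%);
# Eugene Goostman's reported overall figure was 33%.
PASS_THRESHOLD = 0.30
print("Eugene above threshold:", 0.33 >= PASS_THRESHOLD)
```

Even the stricter of the two judges was fooled above the 30% threshold, which is why a 33% overall figure was reported as a pass.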
Published via The Guardian after the test, Robert Llewellyn wrote: “At the moment, the fact that a computer programme developed to simulate a 13-year-old boy managed to convince 33% of the judges that it was human is a very big step. With ever increasing computing power, huge developments in software and improvements in voice recognition and artificial voices, we have to accept that before long we’ll be chatting to machines without a second thought.”
On 8 June 2014 Gizmodo wrote: “This is big. A computer program has successfully managed to fool a bunch of researchers into thinking that it was a 13-year-old boy named Eugene Goostman. In doing so, it has become the first in the world to have successfully passed the Turing Test.”
Responding to this Turing Test on Twitter, Ramez stated: “We’ll look back on this like we will on Deep Blue beating Kasparov. Nothing to do with strong AI.” Erik Sofge was similarly dismissive claiming the “chatbot” fooling one third of the judges does not constitute a pass.
Professor Kevin Warwick stated via a media release: “In the field of Artificial Intelligence there is no more iconic and controversial milestone than the Turing Test, when a computer convinces a sufficient number of interrogators into believing that it is not a machine but rather is a human. It is fitting that such an important landmark has been reached at the Royal Society in London, the home of British Science and the scene of many great advances in human understanding over the centuries. This milestone will go down in history as one of the most exciting.”
Certainly there is room for improvement in various areas of the test. The pass rate could be higher, the supposed human could be older, the proofs of humanness could be more rigorous than basic conversation, and the testing period could be longer.
Yes, a narrow-AI or chatbot passing the Turing Test isn’t AGI. Passing the Turing Test is nevertheless undeniable evidence of narrow-AI progress, and Ramez is wrong to think this progress has no relevance to AGI.
Debating with Ramez Naam on 5 June 2014, George Dvorsky claimed some Singularity commentators engage in “pseudo-futurism.” Published via io9 George described how he thinks the Singularity term is invalid. George wrote: “Personally, I think the time is right to retire the term. It’s not very useful; it describes what we don’t know rather than what we do know — and a picture of the future is slowly starting to come into focus.” George and company are problematic because they don’t take the time to think seriously about these issues.
In the same io9 article Ramez appears incoherent, even self-contradictory. First he claims the Singularity is “quasi-religious” because people expect technology to end poverty, stating: “Poverty? Don’t worry, there’s a Singularity coming.” Later in the same article he stated: “I also think that, on balance, we’re going to see huge benefits for humanity. We can see already. Poverty is plunging around the world.”
The future is brighter than you think according to Peter Diamandis, who stated in 2012: “I’ll start with poverty, which has declined more in the past 50 years than the previous 500. Over the last 50 years, in fact, even while the Earth’s population has doubled, the average per capita income globally (adjusted for inflation) has more than tripled.”
Technology is the reason poverty is reducing. It is very rational to expect a total reduction of poverty to the point where everything is free. This is not “quasi-religious.” The evidence is clear regarding technology. We are merely making a logical forecast based on actual accelerating progress.
Ironically Singularity critics are the ones engaging in faith type thinking. They engage in pseudo-futurism, hand-waving. They ignore facts when it suits their irrational bias.
Erik Sofge should have waited a couple of weeks, or years, before claiming the Singularity resembles a religion devoid of evidence.
The hand-waving of Erik, Ramez, and others hasn’t yet been proven totally ludicrous. Admittedly the evidence of their silliness isn’t totally clear in 2014, which is why I think they will appear “increasingly” silly over the coming years. Currently they are only mildly preposterous. By 2016 or 2018 the foolishness of Singularity opposition will be clearer.
If they want to become more entrenched in their opposition they should go for it, but they should be aware it’s their reputations at risk. I for one would welcome their buffoonery for posterity.
Over the next 31 years rudimentary AIs will be refined. Considering how progress is accelerating in 2014, the next 31 years will bring very substantial change. We only need to observe how rapidly technology progressed from the first mobile phone in 1983 to the smartphones of 2014. Yes, it took a while to invent Watson, but now that the breakthrough has been made we will begin to progress rapidly.
The writing was on the wall before Erik Sofge published his Singularity critique. If not now then in the near future computers passing the Turing Test is inescapable. Better Turing Test pass rates are also inevitable during subsequent years. AIs will progress from teenagers to adults. We will also develop better tests than the Turing Test.
Many indicators of radical progress existed before Erik’s article, but Erik wrote in PopSci: “Lacking evidence of the coming explosion in machine intelligence, and willfully ignoring the AGI deadlines that have come and gone, the Singularity relies instead on hand-waving.”
Evidence! The problem with Singularity critics is they don’t do adequate research.
There is evidence of AI discovering a potential cancer drug. Narrow-AI is being employed to detect cancer. IBM’s Watson is developing its debating skills. Doctors are regularly using AI to treat patients. Wired wrote regarding doctors utilizing AI: “Artificial intelligence is still in the very early stages of development–in so many ways, it can’t match our own intelligence–and computers certainly can’t replace doctors at the bedside. But today’s machines are capable of crunching vast amounts of data and identifying patterns that humans can’t.”
Yahoo’s acquisition of Incredible Labs, to develop an intelligent mobile assistant, is noteworthy. Wired reported on Yahoo’s acquisition: “In fact, we will certainly see more artificial intelligence (AI) in products, given that we now have all the fundamentals in place to best leverage the technology.”
Yes, historical AI forecasts have been wrong, but it’s insane to assume that being wrong on several occasions means you’ll be perpetually wrong. The illogic of “once wrong, forever wrong” resembles religious thought, and it is especially ludicrous considering how blatant narrow-AI progress already is.
Siri, Google Now, Cortana, and Intel’s Jarvis are not truly intelligent but over the next 31 years this research will progress significantly. We are leaving our stumbling toddler years behind. It is illogical to assume current actualities regarding AI will not progress. The denial of progress ignores the evidence of technology.
We live in a world where a computer has already solved an 80 year old mathematics problem, but the answer is too long for humans to check the proof. It is a world where a robot-scientist in 2009 devised its own experiment. The robot-scientist in question made a breakthrough where humans since the 1960s had failed. In November 2013 Google engineers claimed their deep learning system was thinking in ways they could not understand.
The problem resembles the Boy Who Cried Wolf. The first few cries were wrong therefore people think a wolf will never bite them.
I promise you, people such as George Dvorsky, Ramez Naam, and Erik Sofge will look very silly over the coming years. Their quasi-religious dismissals of the Singularity will come back to bite them. Go on critics, wave your hands regarding your pseudo-futurism!
The evidence, if you care to look, shows clear AI progress. The Singularity is not science fiction or a quasi-religious view.