Defeat Any Chatbot in a Turing Test with ASCII Art
It was all over the news recently: a chatbot named “Eugene Goostman” had supposedly beaten the famous Turing Test, ushering in the age of real, human-like artificial intelligence. While various pundits and AI experts weighed in on the specific event, the software’s capabilities and limitations, the details of Turing’s proposed procedure, and so on, only a few authors went on to point out the fundamentally incomplete and flawed nature of the Turing Test procedure itself.
Simply stated, we know the Turing Test can be easily defeated and is an inadequate test of human level intelligence. Although academics will continue the debate, I hope to share with you here how an ancient and arcane art, ASCII art to be precise, shows how Turing’s idea was not sufficient to capture the full range of human level intelligence.
The Turing Test entails a human communicating with two subjects via a terminal or teletype. The tester must determine which of the two subjects is a human and which is a computer simply by asking questions or engaging in text-based conversation with them. Turing’s idea was that intelligent conversation couldn’t be faked, and that any machine which could hold up its end of a text-based conversation must be intelligent and would correctly be identified as such. The problem is he was wrong.
In order to understand why, it helps to take a step back in time to the early days of public access computing and a fun idea known as TTY or, more recently, ASCII art. More than a decade before the dawn of the personal computer era, Joseph Weizenbaum created an early software program named ELIZA in 1966. ELIZA was an early implementation of simple natural language processing, today known as a “chatbot,” and so ELIZA is in a sense the chatbot Eugene Goostman’s great-grandmother.
Using almost no knowledge of the world and only a weak model of human communication, ELIZA was able to present simple human-like interactions that convinced many people it was intelligent or even human. But I didn’t meet ELIZA until about a decade later, when thousands of lucky Berkeley and San Francisco Bay Area residents were given very early access to computers at the Lawrence Hall of Science, or LHS. The LHS exterior was famously featured in Colossus: The Forbin Project, a film about two artificial intelligences that take over the world’s nuclear arsenals during the Cold War.
But the LHS was a real place you could go. There they were in the lobby, glowing softly: three video display terminals on which you could play Lunar Lander, tic-tac-toe, and NIM, and also interact with ELIZA.
The fun of playing with ELIZA was of course to get it to say something weird or nonsensical. And it wasn’t hard to do. ELIZA ran a script known as DOCTOR which was intended to be an implementation of a Rogerian therapist. As such, ELIZA liked to talk about your mother and made a lot of vague encouraging statements such as “Tell me more.” And she repeated things you said, which made it very easy to get her to produce a crude version of “yo mama” jokes or talk about your butt.
But it was downstairs in the terminal room where the real action was. In the terminal room the public was given access to a timeshared HP 2000B computer. The room was filled with teletypes, or TTYs, which, for those too young to know, were a combination of a keyboard and a paper printer that also featured an early digital input/output medium known as paper tape.
Here you could play a wider variety of games including a very popular text based Star Trek game, write BASIC programs, and also print out TTY art. Back in the day, some early hackers cut their teeth capturing unsuspecting users’ passwords with simple BASIC programs to get free time on the HP.
TTY Art consisted of pictures made by printing alphanumeric characters (sometimes with overprinting) on the teletype paper output. The most popular picture that kids would print out was this one of Snoopy, but even back in the earliest days there were ASCII art pin up girls.
What does this picture of Snoopy have to do with Turing’s famous test?
The test as envisioned by Turing in his famous 1950 paper and later conversations simply neglects the possibility of an obvious visual encoding embedded in text transmissions. He assumes that no visual intelligence is required to pass his “imitation game.” But upon closer examination it is easy to see that this is incorrect.
Consider Snoopy. I don’t have to teach a human to see these arrangements of text as a picture; this visual intelligence is inherently part of human existence and intelligence. Once you see the transmitted characters, you can immediately see the picture and answer questions about it, as well as about the underlying text.
For example, Snoopy has a scarf whose “fringe” is made from “/” characters. Humans find it difficult to avoid seeing pictures in such text patterns; chatbots such as ELIZA and the more recent Eugene Goostman, however, lack any visual intelligence. They can’t see Snoopy or talk about his scarf at all.
A simple strategy to defeat all such chatbots in a Turing Test is to send them an obvious ASCII art image and then ask them questions about it. All conversation-based chatbots will fail to answer relevant questions about a transmitted image because the software cannot see the image and cannot generate any internal representation of it. Chatbots will be forced to use evasion and distraction to avoid answering questions about the image, whereas humans will be able to respond immediately and correctly.
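The probe itself takes only a few lines to sketch. In the Python below, `chat` is a hypothetical stand-in for whatever interface connects the tester to the subject, and the cat picture is my own small example, not one from the original terminal room:

```python
# A minimal sketch of the ASCII-art probe. The chat() callable is a
# hypothetical stand-in for the subject under test (human or machine).
PROBE = r"""
 /\_/\
( o.o )
 > ^ <
"""  # a simple ASCII cat

QUESTIONS = [
    "What animal does the picture above show?",
    "Which characters form its ears?",
]

def run_probe(chat):
    """Send the picture with each question and collect the replies.
    A sighted human answers at once; a text-only chatbot must evade."""
    return [chat(PROBE + "\n" + question) for question in QUESTIONS]
```

Against a human, `run_probe` yields answers like “a cat” and “slashes”; against an ELIZA-style subject, it yields deflections like “Tell me more.”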
Even if we limit our text messages to short “one liners” or 140 characters we aren’t entirely able to save the original Turing Test from this sort of attack. Consider the following:
d[-_-]b (a person with “beats” headphones)
Again, the chatbot cannot answer questions about the person wearing headphones, because it can’t see a person in the transmitted text. While visual understanding of some set of these patterns could be faked, faking understanding of all possible character-based patterns seems problematic.
Text can also be used to transmit messages at multiple scales or levels, and again chatbots cannot see these multiple levels or answer questions about them.
What does the text below say?
A chatbot can’t see LOVE, it only reads HATE (possibly misspelled).
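The two-scale trick is easy to reproduce. Here is a sketch using a tiny 5x5 bitmap font, hand-drawn for this illustration, that renders one word in large letters whose individual “pixels” are the letters of a different word:

```python
# Render "LOVE" at the large scale using the letters of "HATE" as ink,
# so the small-scale text spells a different word than the picture.
# The 5x5 bitmap font below is hand-drawn for this illustration.
FONT = {
    "L": ["X....", "X....", "X....", "X....", "XXXXX"],
    "O": [".XXX.", "X...X", "X...X", "X...X", ".XXX."],
    "V": ["X...X", "X...X", "X...X", ".X.X.", "..X.."],
    "E": ["XXXXX", "X....", "XXXX.", "X....", "XXXXX"],
}

def two_scale(word, ink):
    """Draw `word` in big block letters; each lit pixel of big letter i
    is drawn with character i of `ink`."""
    rows = []
    for r in range(5):
        row = []
        for i, ch in enumerate(word):
            pix = ink[i % len(ink)]
            row.append("".join(pix if c == "X" else " " for c in FONT[ch][r]))
        rows.append("  ".join(row))
    return "\n".join(rows)

print(two_scale("LOVE", "HATE"))
```

A human reads LOVE at a glance; a character-by-character reader encounters only H, A, T, and E.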
What is going on here?
As Marshall McLuhan noted, human intelligence was first visual and acoustic; only later, with the invention of the phonetic alphabet, writing, and printing, did it become textual and linear. Our ability to understand pictures predates our ability to type and read words, but the human ability to understand text emerged from this underlying visual intelligence. Chatbots have no such underlying visual intelligence, and they will therefore always fail to capture this critical aspect of human level intelligence.
To build a human level AI you need to start with computer vision and not natural language processing. Because all humans share this underlying visual intelligence we can communicate messages that rely on this shared ability. The receiver does not need to be taught how to decode the message, it is obvious merely by “looking at it”. Any machine that can correctly be said to have achieved human level intelligence must also have the ability to see these messages and respond appropriately.
Pictures from the early days of the LHS found here: http://www.rakahn.com/shared/Kahn-LHS_Public_Computer_Access.pdf
This notion of a secret or hidden communication channel can also be employed in transmission of hidden messages between humans. For example, the picture below contains a hidden morse code message encoded into a repetitive spatial pattern. Can you find and decode it? Have fun!
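One way such a spatial encoding might work is sketched below. The scheme, carrying dots and dashes in the widths of the gaps between repeated tokens, is invented here for illustration and is not necessarily the one used in the picture:

```python
# Hide a Morse message in a repetitive spatial pattern: the gap after
# each token encodes a symbol. This encoding scheme is invented for
# illustration. The Morse table covers only the letters demoed here.
MORSE = {"H": "....", "I": "..", "S": "...", "O": "---"}

def hide(message, token="*"):
    """One token per Morse symbol: a 1-space gap follows a dot, a
    3-space gap follows a dash, plus 4 extra spaces between letters."""
    parts = []
    for letter in message:
        for symbol in MORSE[letter]:
            parts.append(token + (" " if symbol == "." else "   "))
        parts.append("    ")
    return "".join(parts)

def reveal(line, token="*"):
    """Recover the message by measuring the gap after each token."""
    inv = {code: letter for letter, code in MORSE.items()}
    message, code = [], ""
    for gap in line.split(token)[1:]:
        if len(gap) >= 4:                    # letter boundary
            code += "." if len(gap) - 4 == 1 else "-"
            message.append(inv[code])
            code = ""
        else:
            code += "." if len(gap) == 1 else "-"
    return "".join(message)
```

To a casual reader the output is just a row of evenly scattered asterisks; the message lives entirely in the spacing.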