Defeat Any Chatbot in a Turing Test with ASCII Art

It was all over the news recently: a chatbot named “Eugene Goostman” had supposedly beaten the famous Turing Test, ushering in the age of real, human-like artificial intelligence. While various pundits and AI experts weighed in on the specific event, the software’s capabilities and limitations, the details of Turing’s proposed procedure, and so on, only a few authors went on to point out the fundamentally incomplete and flawed nature of the Turing Test procedure itself.

Simply stated, we know the Turing Test can be easily defeated and is an inadequate test of human-level intelligence. Although academics will continue the debate, I hope to show you here how an ancient and arcane art, ASCII art to be precise, demonstrates that Turing’s idea was not sufficient to capture the full range of human-level intelligence.

The Turing Test entails a human communicating with two subjects via a terminal or teletype. The tester must determine which of the two subjects is a human and which is a computer simply by asking questions or engaging in text-based conversation with them. Turing’s idea was that intelligent conversation couldn’t be faked, and that any machine which could hold up its end of a text-based conversation must be intelligent and would correctly be identified as such. The problem is that he was wrong.

In order to understand why, it helps to take a step back in time to the early days of public access computing and a fun idea known as TTY art or, more recently, ASCII art. More than a decade before the dawn of the personal computer era, Joseph Weizenbaum created an early software program named ELIZA in 1966. ELIZA was an early implementation of simple natural language processing, what today we would call a “chatbot,” and so ELIZA is in a sense the chatbot Eugene Goostman’s great-grandmother.

Using almost no knowledge of the world and only a weak model of  human communication, ELIZA was able to present simple human-like interactions that convinced many people it was intelligent or even human. But I didn’t meet ELIZA until about a decade later when thousands of lucky Berkeley and San Francisco bay area residents were given very early access to computers at the Lawrence Hall of Science or LHS. The LHS exterior was famously featured in The Forbin Project, a film about two artificial intelligences that take over the world’s nuclear arsenals during the Cold War.

But the LHS was a real place you could go. There they were in the lobby, glowing softly: three video display terminals on which you could play Lunar Lander, tic-tac-toe, and NIM, and also interact with ELIZA.


The fun of playing with ELIZA was of course to get it to say something weird or nonsensical. And it wasn’t hard to do. ELIZA ran a script known as DOCTOR which was intended to be an implementation of a Rogerian therapist. As such, ELIZA liked to talk about your mother and made a lot of vague, encouraging statements such as “Tell me more.” And she repeated things you said, which made it very easy to get her to produce a crude version of “yo mama” jokes or to talk about your butt.

But it was downstairs in the terminal room where the real action was. In the terminal room the public was given access to a timeshared HP 2000B computer. The terminal room was filled with teletypes, or TTYs, which, for those too young to know, were a combination of a keyboard and a paper printer that also featured an early all-digital input/output medium known as paper tape.

Here you could play a wider variety of games including a very popular text based Star Trek game, write BASIC programs, and also print out TTY art. Back in the day, some early hackers cut their teeth capturing unsuspecting users’ passwords with simple BASIC programs to get free time on the HP.


TTY art consisted of pictures made by printing alphanumeric characters (sometimes with overprinting) on the teletype’s paper output. The most popular picture that kids would print out was this one of Snoopy, but even in the earliest days there were ASCII art pin-up girls.

[Image: TTY art printout of Snoopy]

What does this picture of Snoopy have to do with Turing’s famous test?

The test as envisioned by Turing in his famous 1950 paper and later conversations simply neglects the possibility of an obvious visual encoding within text transmissions. He assumed that no visual intelligence is required to pass his “imitation game.” But upon closer examination it is easy to see that this is incorrect.

Consider Snoopy. I don’t have to teach a human to see these arrangements of text as a picture; this visual intelligence is inherently part of human existence and intelligence. Once you see the transmitted characters, you can immediately see the picture and answer questions about it, as well as about the underlying text.

For example, Snoopy has a scarf where the “fringe” is made from “/” characters. Humans find it difficult to avoid seeing pictures in such text patterns; chatbots such as ELIZA and the more recent Eugene Goostman, however, lack any visual intelligence. They can’t see Snoopy or talk about his scarf at all.

An easy strategy to defeat all such chatbots in a Turing Test is simply to send them an obvious ASCII art image and then ask them questions about it. All conversation-based chatbots will fail to answer relevant questions about a transmitted image because the software cannot see the image and cannot generate any internal representation of it. Chatbots will be forced to use evasion and distraction to avoid answering questions about an image, whereas humans will be able to respond immediately and correctly.
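This probe can be sketched as a toy judge procedure. Everything below is hypothetical scaffolding, not any real chatbot’s API: the `reply` callables stand in for the two participants, and a simple keyword check stands in for the judge’s assessment of the answers.

```python
# A toy sketch of the ASCII-art probe. All names here are hypothetical,
# not part of any real chatbot API. The judge sends both participants an
# ASCII picture plus a question, then checks who can answer about it.

SNOOPY_PROBE = r"""
  ,-~~-.___.
 / |  '     \
(  )        0
 \_/-, ,----'
    ====           //
   /  \-'~;    /~~~(O)
  /  __/~|   /       |
=(  _____| (_________|
"""  # a small Snoopy-like ASCII dog

QUESTION = "What animal do you see in the picture above?"
EXPECTED_KEYWORDS = {"dog", "snoopy", "beagle"}

def judge(reply_a, reply_b):
    """Return which participant can actually talk about the image."""
    prompt = SNOOPY_PROBE + "\n" + QUESTION
    a_sees = any(k in reply_a(prompt).lower() for k in EXPECTED_KEYWORDS)
    b_sees = any(k in reply_b(prompt).lower() for k in EXPECTED_KEYWORDS)
    if a_sees and not b_sees:
        return "A is the human"
    if b_sees and not a_sees:
        return "B is the human"
    return "undecided"

# Stand-ins for the two participants:
def human(prompt):
    return "That's a dog -- it looks like Snoopy wearing a scarf."

def chatbot(prompt):
    return "Interesting! Tell me more about your family."  # classic evasion

print(judge(human, chatbot))  # -> A is the human
```

The keyword check is of course far cruder than a real human judge, but it captures the asymmetry: only a participant with some visual representation of the transmitted characters can produce a relevant answer.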

Even if we limit our text messages to short “one-liners” of 140 characters, we can’t entirely save the original Turing Test from this sort of attack. Consider the following:

d[-_-]b

A person with “beats” headphones.

Again, the chatbot cannot answer questions about the person wearing headphones, because it can’t see a person in the transmitted text. While visual understanding of some set of these patterns could be faked, faking understanding of all possible character-based patterns seems problematic.

Text can also be used to transmit messages at multiple scales or levels, and again chatbots can not see these multiple levels or answer questions about them.

What does the text below say?

[Image: the word LOVE spelled at large scale out of the small letters H, A, T, E]

A chatbot can’t see LOVE, it only reads HATE (possibly misspelled).
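The two-level trick itself is mechanical to produce. Here is a minimal sketch that draws a word at large scale using the letters of a second word as “ink”; the 5x5 glyph bitmaps are my own hand-drawn assumptions, not taken from any standard font.

```python
from itertools import cycle

# Sketch: render a word at large scale using the letters of another word
# as "ink", in the style of the LOVE/HATE figure above. The 5x5 glyph
# bitmaps are hand-drawn and defined only for the letters L, O, V, E.
GLYPHS = {
    "L": ["#....", "#....", "#....", "#....", "#####"],
    "O": [".###.", "#...#", "#...#", "#...#", ".###."],
    "V": ["#...#", "#...#", "#...#", ".#.#.", "..#.."],
    "E": ["#####", "#....", "####.", "#....", "#####"],
}

def render(big_word, ink_word):
    """Draw big_word where every filled pixel is the next letter of ink_word."""
    ink = cycle(ink_word)
    rows = []
    for r in range(5):                      # each glyph is 5 rows tall
        row = []
        for ch in big_word:
            for cell in GLYPHS[ch][r]:
                row.append(next(ink) if cell == "#" else " ")
            row.append("  ")                # gap between big letters
        rows.append("".join(row).rstrip())
    return "\n".join(rows)

print(render("LOVE", "HATE"))
```

A human reading the output sees LOVE at a glance and can also read off the small HATE letters; a text-only chatbot sees only a stream of H, A, T, and E characters.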

What is going on here?

As Marshall McLuhan noted, human intelligence was first visual and acoustic, and only later, with the invention of the phonetic alphabet, writing, and printing, became textual and linear. Our ability to understand pictures predates our ability to type and read words, but the human ability to understand text emerged from this underlying visual intelligence. Chatbots have no such underlying visual intelligence, and they will therefore always fail to capture this critical aspect of human-level intelligence.

To build a human level AI you need to start with computer vision and not natural language processing. Because all humans share this underlying visual intelligence we can communicate messages that rely on this shared ability. The receiver does not need to be taught how to decode the message, it is obvious merely by “looking at it”. Any machine that can correctly be said to have achieved human level intelligence must also have the ability to see these messages and respond appropriately.


Pictures from the early days of the LHS found here:

This notion of a secret or hidden communication channel can also be employed in transmission of hidden messages between humans. For example, the picture below contains a hidden morse code message encoded into a repetitive spatial pattern. Can you find and decode it? Have fun!

[Image: a repetitive spatial pattern concealing a Morse code message]
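As a toy illustration of the idea (using my own made-up encoding, not necessarily the one used in the picture above), a Morse message can be hidden in runs of a repeated character:

```python
# Sketch: hide a Morse message in a repetitive texture. This is a toy
# encoding of my own: a dot is a short run of 'x', a dash a long run,
# and '/' marks a letter boundary.

MORSE = {"h": "....", "i": "..", "s": "...", "o": "---"}  # tiny subset

def encode(word):
    out = []
    for letter in word:
        for symbol in MORSE[letter]:
            out.append("x" if symbol == "." else "xxx")
        out.append("/")              # letter boundary marker
    return " ".join(out)

def decode(pattern):
    word, letter = [], []
    for run in pattern.split():
        if run == "/":
            code = "".join(letter)   # look up the collected dots/dashes
            word.append(next(k for k, v in MORSE.items() if v == code))
            letter = []
        else:
            letter.append("." if run == "x" else "-")
    return "".join(word)

hidden = encode("sos")
print(hidden)          # x x x / xxx xxx xxx / x x x /
print(decode(hidden))  # sos
```

To a reader who doesn’t know to look, the output is just a repetitive row of x’s; to one who does, it round-trips back to the message.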

16 Responses

  1. Snake Plissken says:

Again, this is about the fifth or sixth post you have made about this, and yet there is really no need. I could tell that Eugene was a chatbot from the very first message; there is literally zero progress here. Image processing won't help anything if the thing can't even speak remotely close to the manner in which a human does.

Besides, in humans the visual processing just happens to take place before linguistic processing; in any case it's two different parts of the puzzle. Computers can already decipher language from images; it's just a different program. Once they can understand language, all you need to do is plug in the OCR module and you are good to go. So no, the Turing Test doesn't need to pass ASCII art tests.

    • Peter says:

      I agree the Eugene Goostman chatbot is trivially detected without the need to probe its lack of a visual-spatial representation.

My point here is to propose a general method of defeating the Turing Test in the case where the AI participant does not have a visual representation. If the AI can’t see and the human can, we can reliably tell them apart by sending both parties pictures and asking them to talk about them. Sure, if you add a visual system back into the AI, it could hypothetically respond correctly to ASCII art transmitted this way. But a Goostman-style chatbot will try to evade the issue with canned replies, distractions, or simply by changing the subject. This will fool some people but not everyone. Importantly, Turing expressly ruled out the use of computer scientists and related specialists as judges in his description of the test procedure. His notion was that intelligence can’t be faked, and that any system capable of fooling most people should be considered intelligent.

  2. Peter says:

    Although these usually include pictures, rebuses can also be transmitted via a text only channel.

    For example, a text based rebus puzzle can use the directionality of text to indicate a spatial reference:


    N N N N N N N

    A A A A A A A

    C C C C C C C


    “clean up”

    “seven up cans”
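A sketch of how this rebus is generated: write the word bottom-to-top (hence “up”) across seven columns.

```python
# Sketch: build the directional rebus above. Writing "CAN" from the
# bottom row upward, seven columns wide, yields "seven up cans".

def rebus(word="CAN", columns=7):
    rows = []
    for letter in reversed(word):    # reversed so each column reads upward
        rows.append(" ".join(letter * columns))
    return "\n".join(rows)

print(rebus())
```

Reading any column from the bottom up spells CAN, and there are seven of them; the spatial arrangement, not the character stream, carries the answer.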

    Other approaches use capitalization to indicate size, or location within the text itself to create a spatial reference.

    BAD wolf



    “big bad wolf”

    “one in a million”

Current chatbots will of course fail to notice or understand these messages, and even humans are challenged by them. However, once I tell you the answer, it is easy to answer questions about the hidden message. A human could explain that “BAD wolf” = “big bad wolf” because the word BAD is larger than the word wolf as written. This sort of puzzle doesn’t seem to be quite what Turing had in mind in his original paper, but he doesn’t really rule it out either, and in modern administrations of the test it seems ASCII art, rebuses, and related puzzle ideas are all fair game.

  3. Peter says:

I’ll be updating the article with some of this material and some newly discovered knowledge about the origins of ASCII art. Even predating ASCII art was “typewriter art,” on which an interesting book recently appeared. See

  4. Peter says:

    What I am trying to say here is that the notion that a “text only” channel of communication can even be established is in question. The original description of the imitation game does not include transmission of images or media, nor recognition of images, very possibly because Turing thought these were too hard to attempt. Remember he didn’t even have a video camera and the first digital image had not yet been created.

    We know that human visual intelligence predates the development of written language and is a critical element of human abilities to act and survive in the world. Turing’s idea of excluding visual intelligence from the imitation game was very arguably incorrect. For example, consider the human ability to create and solve so called “rebus” puzzles.

    “This test brings up important philosophical questions. If Turing Tests could be performed at various difficulties for various modes of communication, then could a current computer pass an easier Turing Test, and does that mean that it can think? What makes text-based communication the limit? Shouldn’t a computer be required to fool us in speech and even brain-to-brain communication before we consider it thinking? If a human cannot correctly identify and synthesize the pictures, is it any better or worse than a machine not being able to synthesize text? Does this say anything about the way that we define artificial intelligence, or intelligence in general? We could use these questions to put this idea into more human terms; an individual could use our program to test their knowledge of culture and picture synthesis.”


    See also:

5. My questions are: would it be possible for a person blind from birth to see visual patterns at different levels if they were suddenly given braille, or functioning eyes, as a teenager or adult? After a period of time this person would probably be able to identify patterns and associate them with real objects. Given that there are now computer programs that can learn visual patterns and identify objects inside other visual information after training on multiple pictures of objects from different angles, which I have seen certainly exist, would it be valid to say such a computer has not passed the Turing Test, or that the test is not valid? This computer might only pass it in a textual manner, or even not very well in a textual manner, but still show some degree of relevance to context in its replies. If it were connected to cameras, and visual processing software were added to its textual processing software so it could associate pictures with words or phrases, would not its textual ability become much stronger and be more likely to convince a human that it passed as human? And how would a hidden-from-view human appear to others if they were presented with braille patterns but had been blind from birth and never shown braille before?

  6. trinity says:

Except that here the blind person would have to have the insight to interpret it as a whole, so the claim of inherent knowledge of how to decode the information falls flat. You confuse human-level intelligence with human-like intelligence.

    • Peter says:

I’m not so sure that a blind person wouldn’t interpret the rabbit image correctly as a picture, but it might depend on whether they had vision and lost it or never had vision at all. Consider also a braille image of the LOVE vs HATE type. This encoding, where an arrangement of braille forms is used to construct a larger form, would be decodable by anyone who can read braille, wouldn’t it? You don’t need any insight of the sort entailed by having a visual model of the world to understand that there could be multiple levels of messages in such a presentation. A blind human observer would notice this hidden meaning right away, I think, but I haven’t actually run the experiment to find out.
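For the curious, the braille layer of such an encoding is easy to produce in software. The sketch below maps letters to Unicode braille cells (block U+2800) using standard English braille dot numbering, defined only for the letters needed here:

```python
# Sketch: render words as Unicode braille cells so a LOVE/HATE style
# message could be laid out in braille. Dot numbers follow standard
# English braille; only the letters needed for this example are mapped.

DOTS = {  # letter -> braille dot numbers (1..6)
    "l": (1, 2, 3), "o": (1, 3, 5), "v": (1, 2, 3, 6), "e": (1, 5),
    "h": (1, 2, 5), "a": (1,), "t": (2, 3, 4, 5),
}

def braille(word):
    """Render a word in the Unicode braille patterns block (U+2800)."""
    cells = []
    for letter in word:
        # dot n sets bit (n - 1) of the cell's code point offset
        bits = sum(1 << (d - 1) for d in DOTS[letter])
        cells.append(chr(0x2800 + bits))
    return "".join(cells)

print(braille("love"))  # ⠇⠕⠧⠑
print(braille("hate"))  # ⠓⠁⠞⠑
```

Arranging many such cells into the shape of a larger letter would give a tactile two-level message of exactly the kind discussed above.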

  7. Houshalter says:

    TIL blind people are not intelligent.

    • Peter says:

      A similar attack can be made on a tactile version of the test using braille for the visually impaired.

      Braille Bunny

      • Houshalter says:

        Someone who’s never had sight would not know what that was supposed to be. It’s also really difficult to distinguish images from touch alone.

        But the bigger point is that an AI doesn’t require vision. If an AI can pass a normal Turing test but not be able to recognize ASCII art, we would still consider it a success.

        • Peter says:

          Anyone that knows braille will recognize a braille based multiple scale image like the LOVE/HATE font I showed in the article.

          However this doesn’t even matter to the argument.

The Turing Test is based on the notion of the imitation game, and the criterion is whether the judge can identify the human or the computer in a blind, terminal-based session. An AI without a visual representation of the world won’t see the ASCII art, although it might be able to converse intelligently otherwise. In contrast, the human participant does see the ASCII art and can respond with answers about it.

          The judge would therefore be able to correctly identify the human participant 100% of the time in this situation and so the use of ASCII art renders the test useless.

This does not mean that the AI you describe wouldn’t be an impressive achievement, useful, or a success. It would be all of the above.

  1. July 7, 2014

    […] You can get any chatbot to fail the Turing Test because the test has a fundamental flaw […]

