With your shield gun pointing at the building ahead of you and your biorifle in your holster, you see heavily armored, well-muscled computer game characters running at you. They’re coming at you in squads with team names like Thunder Crash, Iron Guard, and Fire Storm. Your mission? Obliterate your opponents and claim the Unreal Tournament Trophy. But who exactly — or what — is that large pixelated dude coming after you in the camouflaged flak jacket?
Epic Games’ Unreal Tournament 2004 is a multiplayer FPS (First Person Shooter) PC game that “combines the kill-or-be-killed experience of gladiatorial combat with cutting-edge technology.” Users compete in “death match” teams over the Internet for a prized Tournament Trophy. Although there has been very little research into the psychological and social aspects of FPS games, existing studies show the players are almost exclusively young men (mean age about 18 years) who spend a lot of their leisure time on gaming (about 2.6 hours per day).
But young men are not the only players. Gamebots (as opposed to Internet bots or web robots) are a type of weak AI expert system software used to simulate human behavior in computer games such as Unreal Tournament, World of Warcraft, Guild Wars, Lineage, and EverQuest, to name a few. Each bot is a separate instance of an AI computer program. Bots control pixelated characters that are often indistinguishable from human characters.
Unreal Tournament 2004 is designed to be hacked so that an AI program on a user’s PC sends sensory information for a character over a network connection. Based on this information, the AI program decides what actions the character should take and issues commands causing the character to move, shoot, and talk. Project “Gamebots” at the University of Southern California’s Information Sciences Institute “seeks to turn the game Unreal Tournament into a domain for research in artificial intelligence.”
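The loop described above — receive sensory messages for a character, decide, send back movement and combat commands — can be sketched in a few lines of Python. This is a minimal illustration, not the actual GameBots wire protocol: the message names (`SEE`, `SHOOT`, `RUNTO`) and the key=value field format are assumptions made for the example.

```python
# Sketch of a Gamebots-style control loop: parse a sensory message,
# decide on an action, emit a command string back to the game server.
# Message names and field formats here are illustrative assumptions.

def parse_percept(line: str) -> dict:
    """Parse a hypothetical sensory line like 'SEE id=Enemy1 dist=120'."""
    kind, _, rest = line.partition(" ")
    fields = dict(pair.split("=", 1) for pair in rest.split() if "=" in pair)
    return {"kind": kind, **fields}

def decide(percept: dict) -> str:
    """Toy policy: shoot a visible enemy at close range, otherwise keep moving."""
    if percept["kind"] == "SEE" and float(percept.get("dist", "inf")) < 200:
        return f"SHOOT target={percept['id']}"
    return "RUNTO nearest_waypoint"

# In a real client these functions would sit inside a loop reading lines
# from a TCP connection to the game server and writing commands back.
```

The point is the architecture, not the policy: the game engine renders the character, while all of the "thinking" happens in an external program that sees only a stream of text messages — which is exactly what makes the setup attractive as an AI research testbed.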
It may seem odd that a shoot-’em-up death match game might be a breeding ground for machine intelligence. But the IEEE Symposium on Computational Intelligence and Games (CIG) took the notion seriously enough to host the first ever “BotPrize” contest in December 2008, to see whether a computer game-playing bot could convince a panel of expert judges that it was actually a human player.
The bots competing in the death match tournament were created by teams from Australia, the Czech Republic, the United States, Japan, and Singapore. The judges included AI experts, a game development executive, game developers, and an expert human player. A $7000 cash prize was offered to any team that could create a bot indistinguishable from a human player.
How did the judging work? Well, remember the Turing Test? In 1950, Alan Turing wrote a famous paper in which he proposed a test to demonstrate machine intelligence. Often characterized as a way of dealing with the question of whether machines can think (a question that Turing considered meaningless), the “standard interpretation” of the Turing Test includes an interrogator or judge (Player C) tasked with determining which of two players (Players A and B) is a computer program and which is a human. The judge is typically limited to using responses to written questions in order to make the determination. In the case of the BotPrize, the judges actually played against the other players and then rated them.
The results? You can judge the players yourself from short clips of the game’s action posted on the Internet. It’s not always easy. On a scale of 0 to 4 (4 being the most human-like), the humans in the contest all scored higher than the bots (humans: 4, 3.8, 3.8, 3, 2.6; bots: 0.4, 0.8, 2, 2.2, 2.4). The winning bot, from team AMIS of Charles University in Prague, fooled 2 of the 5 expert judges and achieved a mean rating of 2.4. Strikingly, one human competitor scored only 2.6, just two tenths higher than the winning bot. The AMIS team did not win the $7000 prize, which required fooling 4 of the 5 judges, but they did take home $2000 for the winning entry in the tournament.

CIG’s BotPrize contest is a variant of the Loebner Prize, an annual competition started by philanthropist Hugh Loebner in 1991 that challenges programmers to create a program that can pass the Turing Test.
Both the CIG and the Loebner prizes have yet to be claimed. Will 2009 be the year? And will the first bot to pass the Turing Test end up obliterating its opponents in Unreal Tournament 2004? Stay tuned.
Surfdaddy Orca is another monkey with a laptop and a cell phone waiting for Godot or the Singularity or whatever comes next.