In 2009, we reviewed the results of the 2008 2K BotPrize Competition and asked, “Was That a Bot or a Human?” Now the results are in for the 2010 competition: the ability of judges to distinguish between gamebots – those wily non-human AI online game characters – and humans has narrowed almost to the point of standard error, with a mere 3.6657% separating the “most human” bot from the “least human” human. Here are the results.
Gamebots (as opposed to Internet bots or web robots) are a type of weak AI expert system software used to simulate human behavior in computer games such as Unreal Tournament and its ilk: World of Warcraft, Guild Wars, Lineage, and EverQuest. The BotPrize Competition uses a specially hacked version of the first-person shooter Unreal Tournament 2004, so that an AI program on a user’s PC can receive sensory information for a character, and send commands to control it, over a network connection.
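The sense-decide-act loop such a networked bot runs can be sketched roughly as follows. This is a minimal illustration, not the actual BotPrize/GameBots protocol: the message format, commands, and the trivial `choose_action` policy are all invented for the example.

```python
# Minimal sketch of a networked gamebot's sense-decide-act loop.
# The message format and commands below are invented for illustration;
# the real BotPrize protocol for Unreal Tournament 2004 differs.

def parse_percept(message: str) -> dict:
    """Turn a line like 'SEE enemy 12.5' into a percept dict."""
    kind, *fields = message.split()
    if kind == "SEE":
        return {"type": "see", "object": fields[0], "distance": float(fields[1])}
    if kind == "HIT":
        return {"type": "hit", "damage": int(fields[0])}
    return {"type": "unknown", "raw": message}

def choose_action(percept: dict) -> str:
    """A trivially reactive policy: shoot what you see, flee when hit."""
    if percept["type"] == "see" and percept["object"] == "enemy":
        return "SHOOT" if percept["distance"] < 20 else "MOVETO enemy"
    if percept["type"] == "hit":
        return "RUNAWAY"
    return "WANDER"

# In the real setup these messages would arrive over a TCP socket from
# the modified game server; here we just simulate a few of them.
incoming = ["SEE enemy 12.5", "HIT 35", "SEE healthpack 4.0"]
actions = [choose_action(parse_percept(m)) for m in incoming]
print(actions)  # ['SHOOT', 'RUNAWAY', 'WANDER']
```

A policy this simple is, of course, exactly what judges learn to spot; the competitive bots layer memory and human-like noise on top of such a loop.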
This year’s most human bot – designed by the Conscious-Robots team of Jorge Muñoz and Raúl Arrabales – came very close to passing the videogame equivalent of the Turing Test. In the classic Turing Test, an interrogator or judge (Player C) must determine which of two players (Players A and B) is a computer program and which is a human, typically on the basis of responses to written questions alone. In the BotPrize, the judges actually played against the other players and then rated them.
The winning bot developed by the Conscious-Robots team (CERA-CRANIUM Bot 2, or CCBot2 for short) runs the CERA-CRANIUM cognitive architecture, a computational model of machine consciousness derived from global workspace theory. The global workspace model of consciousness, developed by Bernard Baars, an Affiliated Research Fellow of The Neurosciences Institute in San Diego, California, holds that perceptions below the threshold of consciousness are processed in relatively small, local areas of the brain. Broadcasting this pre-conscious information to the global workspace – a network of neural regions – results in conscious experience.
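The broadcast idea at the heart of global workspace theory can be illustrated with a toy model. This is a hedged sketch of the general theory, not the CERA-CRANIUM implementation; the processor names and salience scores are invented.

```python
# Toy global-workspace cycle: many local processors each propose a
# percept with a salience score; only the winner is broadcast to all,
# becoming the "conscious" content every processor can then react to.
# Processors and scores are invented; not the CERA-CRANIUM code.

def global_workspace_cycle(proposals):
    """proposals: list of (processor_name, salience, percept) tuples."""
    # Competition: the most salient pre-conscious percept wins...
    _, _, conscious_content = max(proposals, key=lambda p: p[1])
    # ...and is broadcast back to every local processor.
    broadcast = {name: conscious_content for name, _, _ in proposals}
    return conscious_content, broadcast

proposals = [
    ("vision",  0.9, "enemy ahead"),
    ("hearing", 0.4, "footsteps to the left"),
    ("damage",  0.7, "took fire from behind"),
]
content, broadcast = global_workspace_cycle(proposals)
print(content)  # enemy ahead
```

The key design point is that the losing percepts are processed but never globally shared, which is the theory's account of why they remain unconscious.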
As Arrabales points out in a paper on conscious-like behavior in computer game characters, physical embodiment and real world situatedness are likely key factors in the production of “machine consciousness.” He argues that software agents such as video game bots can be “embodied and situated” with a digital body equipped with software sensors and actuators that allow them to perform actions.
What humans have and bots lack, he argues, is “a combination of cognitive capabilities like attention, learning, depiction, set shifting, Theory of Mind (ToM, or the ability to attribute mental states to others), planning, feelings (in the sense of higher order representations of emotions), etc.”
So, if a bot fools the judges in next year’s 2K BotPrize Competition, does it pass the Turing Test? Perhaps. Is it conscious? You be the judge.