Like Christmas tree ornaments rolling around a pool table, the brightly colored little bots were allowed to explore their surroundings. Each bot had an initial built-in attraction to a “food” object, an aversion to a “poison” object, and a randomly generated set of parameters (their “genomes”) defining the way they move, process sensory information, and flash their blue lights.
Here’s a short video showing how the bots signal each other to cluster around food and poison:
The genomes of the bots that found food and avoided poison were recombined by researchers, mimicking biological natural selection. To create a next-generation bot, traits were combined and randomized to mimic biological mating and mutation. Working at the Ecole Polytechnique Fédérale de Lausanne in Switzerland in 2007, roboticists and biologists evolved five hundred generations this way under different selective pressures, simulating the process in software before running it on actual bots. "Under some conditions, sophisticated communication evolved," says biologist Laurent Keller of the University of Lausanne. "We saw colonies that used their lights to signal when they found food and others that used signals to communicate they had found poison."
This experiment in swarm robotics demonstrates both how multi-robot systems made up of large numbers of simple physical robots can coordinate and how collective communication behaviors can evolve. The study of artificial swarm intelligence, along with biological studies of ants, bees, and other natural swarms, provides insight into the nature of intelligence in general, and offers an interesting perspective on Darwinian selection, competition, and cooperation.
A more recent 2009 study, again at Lausanne, suggests that swarms of bots don’t just evolve cooperative strategies to find food (or avoid poison), they can also evolve the ability to deceive. Bots equipped with artificial neural networks and programmed to find food eventually learn to conceal their visual signals from other robots to keep the food for themselves. “Forget zombies,” a post on Current TV’s blog comments about the little bots, “this is the real threat.” (Fortunately, these experimental bots don’t eat brains – at least, not yet.)
Swarm intelligence studies in evolutionary robotics often use genetic algorithms. A population of candidate software controllers is repeatedly varied by crossover, mutation, and other genetic operators, then culled according to a fitness function. (Crossover recombines the “chromosomes” of bots from one generation to the next.) The candidate controllers used in robotic applications are frequently artificial neural networks. A paper by Stefano Nolfi, “The Evolution of Artificial Neural Networks,” describes how such networks can be evolved. An initial population of artificial genotypes, each encoding the free parameters of a corresponding neural network (its connection strengths, its architecture, and/or its learning rules), is created randomly. The population of networks is evaluated to determine the performance (fitness) of each individual network. The fittest networks are allowed to “reproduce” by generating copies of their genotypes, with changes introduced by genetic operators such as mutation, crossover, and duplication. This process is repeated for many generations, until a network is obtained that satisfies the performance criterion (fitness function) set by the experimenter.
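The loop Nolfi describes can be sketched in a few dozen lines. The following is a minimal illustration, not code from the Lausanne study or Nolfi's paper: it evolves the weights of a tiny fixed-architecture network (a 2-2-1 feedforward net learning XOR, a stand-in task chosen here for brevity) using random initialization, fitness evaluation, elitist selection, crossover, and mutation. The network size, operator rates, and population parameters are all invented for the example.

```python
import math
import random

random.seed(0)

# Toy task standing in for a robot controller: a 2-2-1 network learning XOR.
CASES = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
GENOME_LEN = 9  # 4 input-hidden weights + 2 hidden biases + 2 output weights + 1 bias

def forward(genome, x):
    """Run the network encoded by `genome` on input `x`."""
    w = genome
    h = [math.tanh(w[0] * x[0] + w[1] * x[1] + w[2]),
         math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])]
    return 1 / (1 + math.exp(-(w[6] * h[0] + w[7] * h[1] + w[8])))

def fitness(genome):
    """Performance criterion: negative squared error (higher is better)."""
    return -sum((forward(genome, x) - y) ** 2 for x, y in CASES)

def crossover(a, b):
    """One-point crossover between two parent genotypes."""
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.2, scale=0.5):
    """Perturb each gene with small Gaussian noise at the given rate."""
    return [g + random.gauss(0, scale) if random.random() < rate else g
            for g in genome]

def evolve(pop_size=60, generations=200, elite=10):
    # Random initial population of genotypes.
    pop = [[random.uniform(-1, 1) for _ in range(GENOME_LEN)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:elite]          # the fittest "reproduce"
        pop = parents + [mutate(crossover(random.choice(parents),
                                          random.choice(parents)))
                         for _ in range(pop_size - elite)]
    return max(pop, key=fitness)

best = evolve()
```

In the actual experiments the "fitness function" would score a controller by how much food its robot reached and how much poison it avoided, rather than by error on a fixed table, but the generational structure is the same.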
While the technology of neural networks would be quite foreign to a visitor from the 1800s, the behavior of the bots would seem very familiar to the Victorian naturalist Charles Darwin. Whether among finches, wrens, iguanas, humans, or evolutionary bots, Darwin’s great idea is that small, random, heritable differences among individuals give some a better chance of survival and reproductive success, while others die without offspring. In the natural world, this culling leads to “significant changes in shape, size, strength, armament, color, biochemistry, and behavior among the descendants,” says naturalist and Darwin scholar David Quammen. “Excess population growth drives the competitive struggle. Because less successful competitors produce fewer surviving offspring, the useless or negative variations tend to disappear, whereas the useful variations tend to be perpetuated and gradually magnified throughout a population.”
In the Christmas tree ornament robot world of the Lausanne study, rather than emitting the hoots, howls, and grunts found in the natural world of the forest or the savannah, the bots produced information by emitting blue lights that other bots could perceive with their cameras. Because space was limited around the food in the experiment, the bots bumped and jostled each other after spotting a blue light. By the 50th generation, some bots had evolved to flash their blue light less frequently when they were near the food, so as not to draw the attention of other robots. After a few hundred generations, the majority of the robots never flashed a light when they were near the food.
Here’s what the Lausanne researchers concluded: over the first few generations, the bots quickly evolved to locate the food successfully while emitting light at random. This produced a high intensity of light near the food, social information that allowed other bots to find the food more rapidly. Because the bots were competing for food, they were quickly selected to conceal this information. Surprisingly, though, the bots never completely stopped producing information. The researchers attributed this unexpected result to the way selection weakens as it succeeds: the strength of selection against signaling declined as the information content of the signals fell, producing a stable equilibrium with low information and considerable variation in communicative behaviors.
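That feedback loop (signals are costly only while they remain informative) can be illustrated with a toy simulation. This is emphatically not the researchers' model: the fitness formula, the cost constant, the population size, and the mutation scale below are all invented for illustration. Each individual carries a single "gene," its probability of flashing near food; the cost of flashing is proportional to how informative the population's lights currently are (the population mean flashing rate).

```python
import random

random.seed(1)

def step(pop, k=1.0, mut_sigma=0.05):
    """One generation of a toy signaling population.

    Flashing near food is costly in proportion to how informative the
    population's lights currently are: if few bots flash, a flash
    predicts little, so selection against it is weak.
    """
    mean_s = sum(pop) / len(pop)
    # Invented fitness form: cost of flashing scales with signal informativeness.
    fits = [max(1e-6, 1.0 - k * mean_s * s) for s in pop]
    # Fitness-proportional reproduction.
    children = random.choices(pop, weights=fits, k=len(pop))
    # Mutation keeps variation flowing into the population.
    return [min(1.0, max(0.0, s + random.gauss(0, mut_sigma)))
            for s in children]

pop = [0.9] * 200          # everyone starts out flashing near food
history = [sum(pop) / len(pop)]
for _ in range(300):
    pop = step(pop)
    history.append(sum(pop) / len(pop))
```

Running this, the mean flashing rate drops sharply at first, then levels off at a low but nonzero value with ongoing variation between individuals, mirroring the qualitative pattern the researchers describe: as signaling declines, selection against it fades before it can drive signaling to zero.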
Because a similar co-evolutionary process is common in natural systems, the Lausanne researchers suggest that the relationship between evolutionary selection and information content “may explain why communicative strategies are so variable in many animal species.”
Using robotics to study communication may help overcome some of the difficulties of performing experimental evolution with social animals, especially since there is a distinct lack of fossil evidence for changes in communication skills over time. Communication is very important for social organisms to ensure their ecological success. For example, University of Wisconsin-Madison psychology professor Charles Snowdon offers a perspective on what the early environmental conditions may have been that led to the hominid communicative explosion. His research into the world of nonhuman primates suggests that while apes and monkeys in the Old World tend to be relatively silent creatures, the New World is home to much noisier monkeys such as tamarins and marmosets that vocalize more frequently, “show more richness of development and learning in their vocal patterns, and that appear to transmit more information with the sounds they produce than do any of the Old World primates.”
A key reason, he suggests, is cooperative breeding, which is found in the New World animals to a much greater extent than in the Old World monkeys and apes. New World primates live in circumstances where engaging in rich communicative exchange is advantageous, because parents (and alloparents — aunts, uncles, and others) engage in cooperative rearing and need to communicate about it. This, Snowdon suggests, may be a critical factor that differentiated our early hominid ancestors from their ape cousins.
Such studies help shed light on the biology of selfishness and altruism. When, for example, is cooperative behavior advantageous? Richard Dawkins’ classic book The Selfish Gene reinterpreted the basis of evolution and altruism. Evolutionary biologist Robert Trivers, who wrote the foreword to Dawkins’ book, suggests that reciprocal altruism is strongly favored by natural selection and can lead to complex systems of altruistic behavior.
On the surface, the evolution of “I want all the food” deceptive behavior in bots might seem to support dog-eat-dog selfish competition rather than altruistic cooperation. Not necessarily, suggests one thoughtful blogger commenting on the Lausanne study. “It seems to me that creating cooperation wouldn’t need a changing of the rules or the evolutionary process, but a changing of the game,” he argues. “There was no preset strategy of competition, it just so happens that that strategy yielded the most points, and is therefore the one that evolved naturally. Create a game where the payoff might be greater through cooperation, and I’m betting the bots find the strategy on their own.”
Are deceptive bots the real threat? Probably no more so than humans who lie and cheat. Perhaps bots will simply end up evolving on their own into something else again – as suggested by this stylish German advertisement (watch for the cool Steampunk T-Rex):