Here Come the Neurobots

Can we build a brain from the ground up, one neuron (or so) at a time? That’s the goal of neurobotics, a science that sits at the convergence of robotics, artificial intelligence, computer science, neuroscience, cognitive psychology, physiology, mathematics and several different engineering disciplines. Computationally demanding and requiring a long view and a macroscopic perspective (qualities not often found in our world of impatient specialization), the field is so fundamentally challenging that there are only around five labs pursuing it worldwide.

Neurobotics is an outgrowth of the realization that, when it comes to understanding the brain, neither computer simulations nor top-down robotic models are getting anywhere close. As Dartmouth neuroscientist and Director of the Brain Engineering Lab Richard Granger puts it, “The history of top-down-only approaches is spectacular failure. We learned a ton, but mainly we learned these approaches don’t work.”

Gerald Edelman, a Nobel Prize-winning neuroscientist and Chairman of Neurobiology at Scripps Research Institute, first described the neurobotics approach back in 1978. In his “Theory of Neuronal Group Selection,” Edelman essentially argued that any individual’s nervous system employs a selection process similar to natural selection, though operating through a different mechanism. “It’s obvious that the brain is a huge population of individual neurons,” says UC Irvine neuroscientist Jeff Krichmar. “Neuronal Group Selection meant we could apply population models to neuroscience, we could examine things at a systems level.” This systems approach became the architectural blueprint for moving neurobotics forward.

The Edge of Real Brain Complexity
The robots in Jeff Krichmar’s lab don’t look like much. CARL-1, his latest model, is a squat, white trash-can contraption with a couple of shopping-cart wheels bolted to its side, a video camera wired to the lid, and a couple of bunny ears taped on for good measure. But open up that lid and you’ll find something remarkable — the beginnings of a truly biological nervous system. CARL-1 has thousands of neurons and millions of synapses that, Krichmar says, “are just about the edge of the amount of size and complexity found in real brains.” Not surprisingly, robots built this way — using the same operating principles as our nervous system — are called neurobots.

Krichmar emphasizes that these artificial nervous systems are based upon neurobiological principles rather than computer models of how intelligence works. The first of those principles, as he describes it, is: “The brain is embodied in the body and the body is embedded in the environment — so we build brains and then we put these brains in bodies and then we let these bodies loose in an environment to see what happens.” This has become something of a foundational principle — and the great and complex challenge — of neurobotics.

When you embed a brain in a body, you get behavior not often found in other robots. Brain bots don’t work like Aibo. You can buy a thousand different Aibos and they all behave the same. But brain bots, like real brains, learn through trial and error, and that changes things. “Put a couple of my robots inside a maze,” says Krichmar, “let them run it a few times, and what each of those robots learns will be different. Those differences are magnified into behavior pretty quickly.” When psychologists define personality, it’s along the lines of “idiosyncratic behavior that’s predictive of future behavior.” What Krichmar is saying is that his brain bots are developing personalities — and they’re doing it pretty quickly.
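To make the idea concrete, here is a minimal sketch of how two identically built agents can diverge (illustrative Python, not Krichmar’s actual code; the toy maze, parameters, and learning rule are all assumptions):

```python
import random

# Two identically initialized agents learn a tiny 1-D "maze" (states
# 0..4, goal at state 4) by trial and error. Only their random
# exploration differs, yet they end up with different learned values:
# the seed of "personality."
N_STATES, GOAL, ACTIONS = 5, 4, (-1, +1)  # actions: step left / step right

def run_agent(seed, episodes=200, alpha=0.1, epsilon=0.2):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        while s != GOAL:
            if rng.random() < epsilon:    # explore: pick a random action
                a = rng.choice(ACTIONS)
            else:                         # exploit: pick the best-known action
                a = max(rng.sample(ACTIONS, 2), key=lambda b: q[(s, b)])
            s2 = min(max(s + a, 0), N_STATES - 1)
            r = 1.0 if s2 == GOAL else 0.0
            # Temporal-difference update toward the observed outcome.
            q[(s, a)] += alpha * (r + max(q[(s2, b)] for b in ACTIONS) - q[(s, a)])
            s = s2
    return q

q1, q2 = run_agent(seed=1), run_agent(seed=2)
print("agent 1:", {k: round(v, 2) for k, v in q1.items()})
print("agent 2:", {k: round(v, 2) for k, v in q2.items()})
```

Run it and the two printed value tables differ: same hardware, same wiring, different histories, different habits.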

Krichmar’s bots develop personalities because, instead of being preprogrammed with behaviors, these robots have neuromodulatory systems, or value-judgment systems — move toward something good, move away from something bad — modeled on the human dopaminergic system (for wanting or reward-based behaviors) and the noradrenergic system (for vigilance and surprise). When something salient occurs — in CARL-1’s case, that’s usually triggering a bump sensor in a maze — a signal is sent to its brain telling the bot to react to the event and remember the context for later. This is conditioned learning, and it mimics what occurs in real brains. It also allows Krichmar to examine one of the great puzzles in systems neuroscience — how do the brain’s neurons work together?
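In rough computational terms, that kind of value system can be sketched as a neuromodulated learning rule, with a dopamine-like signal carrying the value of an outcome and a norepinephrine-like signal boosting learning when the event is surprising. The sketch below is purely illustrative; the names, rule, and numbers are assumptions, not Krichmar’s implementation:

```python
# Illustrative sketch (not Krichmar's code): a dopamine-like signal
# carries the value of an outcome, a norepinephrine-like signal rises
# with surprise, and together they gate how much a weight from "bump
# sensor" to "avoid this context" changes on each salient event.

w = 0.0          # synaptic weight: sensor activity -> avoidance response
expected = 0.0   # running prediction of the event, used to compute surprise

def on_salient_event(sensor, reward, lr=0.2):
    global w, expected
    surprise = abs(sensor - expected)      # norepinephrine-like: vigilance
    dopamine = reward                      # dopamine-like: good (+) or bad (-)
    # Value-gated update: presynaptic activity scaled by both modulators.
    w += lr * dopamine * (1.0 + surprise) * sensor
    expected += 0.3 * (sensor - expected)  # repeated events grow less surprising

for bump in range(5):
    on_salient_event(sensor=1.0, reward=-1.0)  # repeated bumps, bad outcome
    print(f"after bump {bump + 1}: w = {w:+.3f}")
```

The weight swings hard on the first, novel bump and more gently as the event becomes familiar, which is the flavor of conditioned learning described above. Keeping such updates stable once a network has millions of synapses, though, is another matter.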

“We’re pretty sure you need a certain brain size for the level of complexity we see in biological organisms,” he says, “but we don’t have the tools to make a network that big behave in any stable way. The biological brain is remarkably stable. We can alter it with drugs, we can put it into all sorts of varied environments, pretty much it still knows how to function. Our robots are still brittle by comparison.”

Besides personality, these robots also develop types of episodic and categorical memory not found in other computers. After running early brain bots Darwin X and Darwin XI through a few mazes, Edelman, working alongside Krichmar and a researcher named Jason Fleischer, found they’d naturally developed place cells — meaning no one had programmed them in. These are cells in the hippocampus that fire whenever an animal passes through a specific location, essentially linking place with time. More than that, when Edelman examined his bots’ brains, he found these place cells would fire based not only on where the robot had been, but also on where it was planning to go, “which,” says Krichmar, “is exactly what you would see in the brain of a rat and nothing anyone’s seen in a robot before.”
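For readers new to the term, a place cell is often idealized as a unit whose firing rate peaks at one preferred location and falls off with distance. The sketch below is that textbook idealization (a hypothetical tuning curve, not the Darwin bots’ emergent cells, which, again, nobody programmed in):

```python
import math

# Textbook idealization of a place cell: firing rate peaks when the
# agent is at the cell's preferred spot and decays with distance.
# (The Darwin robots' place cells emerged on their own; this Gaussian
# tuning curve is just a way to picture what such a cell does.)

def place_cell_rate(pos, center, sigma=0.5, max_rate=20.0):
    """Gaussian tuning: peak firing at `center`, falling off with distance."""
    d2 = (pos[0] - center[0]) ** 2 + (pos[1] - center[1]) ** 2
    return max_rate * math.exp(-d2 / (2 * sigma ** 2))

field_center = (2.0, 3.0)  # this cell's preferred location in the maze
for pos in [(2.0, 3.0), (2.5, 3.0), (4.0, 1.0)]:
    print(pos, f"-> {place_cell_rate(pos, field_center):.1f} Hz")
```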

The Biggest Dragon: Higher Cortical Functions
Meanwhile, Richard Granger is using brain bots to hunt down yet another grail: where language originates in the brain. “It’s been pretty widely demonstrated that the brain is modular and highly uniform,” he says. “There are certain broad-stroke differences between humans and other animals, but we can count the number of those on two hands. Yet humans can speak and animals can’t. That’s a pretty big difference. And even the variations that have been found in brain language areas like Broca’s area don’t hint at how language could emerge from those changes. So where is language? We’ve spent billions trying to track down its origins and still can’t find it.”

Granger believes that the only real differences between animal and human brains are size and connectivity, an argument he lays out in his book Big Brain. “Humans have a lot bigger brains, so we have much more space for neurons to make connections, to link with other neurons.” It’s in that space, in those extra connections, where Granger thinks language emerges. If he’s right, as his bot brains draw closer and closer in size and complexity to human brains, language should start to emerge — and Granger will get to watch it happen.

Of course, since neurobotics is a dragon-slayer’s approach, there are also a few scientists going after the biggest dragon. Just as Granger is upping complexity to examine language, researchers at Imperial College London are doing the same thing for consciousness. “All of this work is comparable,” says Granger, “because we’re all modeling cortical structures to build whole brain models with the intention of seeing if higher functions like language and consciousness develop.” And if what they’ve discovered so far is any indication, then when it comes to developing higher cortical function in neurobots, it’s really not a question of “if,” only “when.”
