Brain on a Chip

Are we humans – with our carbon-based neural net “wetware” brains – at a point in history when we might be able to imprint the circuitry of the human brain using transistors on a silicon chip?

A widely covered recent article in MIT’s Technology Review reports that a team of European scientists may have taken the first steps toward creating a silicon chip designed to function like a human brain.

What’s involved in this seemingly Herculean task? The brain is a parallel processor. The colorful blue jay I see flitting from tree to tree in my garden appears as a single image. But the brain divides what it sees into four components: color, motion, shape, and depth. These are individually processed – at the same time – and compared to my stored memories (blue things, things with feathers, things that fly, other blue jays that I’ve seen).

My brain then combines all of these processes into one image that I see and comprehend. And that’s just the vision aspect of a multiplexed moment of perception. At the same time, I smell the fragrant flowers in my garden, hear the neighbors talking about a party, feel my muscles relax as I sit in my lounge chair, and daydream about the beaches of Fiji while I answer my cell phone.

The Intel Core Duo MacBook Pro that I’m using to type this article is also doing several things at once. At the highest level, its world consists of programs with multiple computational threads running at the same time. Parallel processing makes programs run faster because multiple CPUs or cores work on them simultaneously.
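To make the analogy concrete, here is a minimal Python sketch of this kind of parallelism: four “visual components” processed at the same time and then merged into a single result. The function names and the four-way split are illustrative only, not how the brain or any of these chips actually works.

    from concurrent.futures import ThreadPoolExecutor

    # Toy stand-ins for the four visual pathways described above.
    def analyze(component: str, scene: str) -> str:
        return f"{component} features of {scene}"

    scene = "blue jay"
    components = ["color", "motion", "shape", "depth"]

    # Process all four components concurrently, then merge them into a
    # single percept -- loosely analogous to parallel visual streams
    # converging on one image.
    with ThreadPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(lambda c: analyze(c, scene), components))

    print(" + ".join(results))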

Today’s most powerful supercomputers are all massively parallel processing systems with names like Earth Simulator, Blue Gene, ASCI White, ASCI Red, ASCI Purple, and ASCI Thor’s Hammer. Through Moore’s Law – which states that the number of transistors on a chip doubles every eighteen months – single chips that function as parallel processor arrays are becoming cost effective. Examples include chips from Ambric, picoChip, and Tilera.
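As a back-of-envelope illustration of that compounding (using the eighteen-month doubling period quoted above; the real industry cadence has varied):

    # Transistor count after a given number of months, assuming a
    # doubling every 18 months per the statement of Moore's Law above.
    def transistors(n0: float, months: float, doubling_months: float = 18.0) -> float:
        return n0 * 2 ** (months / doubling_months)

    # For example, a 100-million-transistor chip, ten years out:
    print(f"{transistors(100e6, 120):.3e}")  # roughly 1.0e10 -- about 10 billion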

The brain is also massively parallel, but currently on a different scale than the most powerful supercomputers. The human cortex has about 22 billion neurons and 220 trillion synapses. A supercomputer capable of running a software simulation of the human brain doesn’t yet exist. Researchers estimate that it would require a machine with a computational capacity of at least 36.8 petaflops (a petaflop is a thousand trillion floating point operations per second) and a memory capacity of 3.2 petabytes – a scale that supercomputer technology isn’t expected to hit for at least three years.
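Dividing those estimates by the synapse count suggests where such figures might come from (my own rough decomposition of the numbers above, not the researchers’ actual derivation):

    synapses = 220e12          # synapses in the human cortex (figure above)
    flops_needed = 36.8e15     # 36.8 petaflops
    memory_needed = 3.2e15     # 3.2 petabytes

    print(flops_needed / synapses)   # ~167 floating point ops per synapse per second
    print(memory_needed / synapses)  # ~14.5 bytes per synapse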

Enter the team of scientists in Europe that has created a silicon chip designed to function like a human brain. With 200,000 neurons linked up by 50 million synaptic connections, the chip is still orders of magnitude away from a human brain. Yet the chip can “mimic the brain’s ability to learn more closely than any other machine” – thus far.
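Putting numbers on “orders of magnitude,” using the figures quoted in this article:

    human_neurons, human_synapses = 22e9, 220e12
    chip_neurons, chip_synapses = 200e3, 50e6

    print(human_neurons / chip_neurons)    # 110,000x fewer neurons on the chip
    print(human_synapses / chip_synapses)  # 4,400,000x fewer synapses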

“The chip has a fraction of the number of neurons or connections found in a brain, but its design allows it to be scaled up.” So says Karlheinz Meier, a physicist at Heidelberg University in Germany, and the coordinator of the Fast Analog Computing with Emergent Transient States project, or FACETS.

Henry Markram, head of the Blue Brain project at the Ecole Polytechnique Fédérale de Lausanne, uses the same databases of neurological data as FACETS. Among the challenges he faces is “recreating the three-dimensional structure of the brain in a 2-D piece of silicon.”

Markram admits that the simulations of biological brain functions using a silicon chip are still crude. "It’s not a brain. It’s more of a computer processor that has some of the accelerated parallel computing that the brain has," he says.

Markram doubts that the FACETS hardware approach will ultimately offer much insight into how the brain works. For example, unlike the Blue Brain project, researchers aren’t able to perform drug testing – simulating the effects of drugs on the brain using silicon. "It’s more a platform for artificial intelligence than understanding biology," he says.

Markram’s Blue Brain project is the first comprehensive attempt to reverse-engineer the mammalian brain. The brain processes information by sending electrical signals from neuron to neuron using the “wiring” of dendrites and axons. In the cortex, neurons are organized into basic functional units – cylindrical volumes – each containing about 10,000 neurons that are connected in an intricate but consistent way. These units operate much like microcircuits in a computer. This microcircuit, known as the neocortical column, is repeated millions of times across the cortex.

The first step of the project is to re-create this fundamental microcircuit, down to the level of biologically accurate individual neurons. The microcircuit can then be used in simulations – for example, modeling a genetic variation in particular neurotransmitters, or mimicking what happens when the molecular environment is altered by drugs.
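As a toy illustration of this repeating structure, here is a minimal Python sketch. The column size and the drug-as-parameter-change idea come from the paragraphs above; the class design is my own simplification and bears no resemblance to Blue Brain’s biologically detailed models.

    from dataclasses import dataclass, field

    @dataclass
    class Neuron:
        # One tunable parameter standing in for the neuron's molecular
        # environment (e.g. the level of a particular neurotransmitter).
        neurotransmitter_level: float = 1.0

    @dataclass
    class NeocorticalColumn:
        # The basic functional unit: ~10,000 interconnected neurons.
        neurons: list = field(
            default_factory=lambda: [Neuron() for _ in range(10_000)])

        def apply_drug(self, factor: float) -> None:
            # Crudely mimic altering the molecular environment column-wide.
            for n in self.neurons:
                n.neurotransmitter_level *= factor

    # The cortex repeats this microcircuit millions of times; here, three.
    cortex = [NeocorticalColumn() for _ in range(3)]
    cortex[0].apply_drug(0.8)  # damp one column's transmitter levels by 20%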

Brains In Silicon, an interdisciplinary program at Stanford, also combines neurobiological research with electrical engineering. The program has two complementary objectives: to use existing knowledge of brain function to design an affordable supercomputer, and then to use that machine as a tool to investigate brain function, “feeding back and contributing to a fundamental, biological understanding of how the brain works.”

Kwabena Boahen, Brains In Silicon principal investigator and an associate professor of bioengineering at Stanford, has been working on implementing neural architectures in silicon. One of the main challenges to building this system in hardware, explains Boahen, is that each neuron connects to others through 8,000 synapses. It takes about 20 transistors to implement a synapse. Clearly, building the silicon equivalent of 220 trillion synapses is not an easy problem to solve.
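Working Boahen’s numbers through makes the scale of the problem explicit:

    synapses = 220e12              # synapses in the human cortex (figure above)
    transistors_per_synapse = 20   # Boahen's estimate

    # ~4.4e15: 4.4 quadrillion transistors just to implement the synapses.
    print(synapses * transistors_per_synapse)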

The quest to reverse-engineer the human brain is described in detail in Jeff Hawkins’ well-known book On Intelligence. Hawkins believes computer scientists have focused too much on the end product of artificial intelligence. Just as B.F. Skinner held that psychologists should study stimuli and responses while essentially ignoring the cognitive processes that go on in the brain, Hawkins argues, scientists working in AI and neural networks have focused too much on inputs and outputs rather than on the neurological system that connects them.

Hawkins’ company, Numenta, is creating a new type of computing technology modeled on the structure and operation of the neocortex. The technology is called Hierarchical Temporal Memory, or HTM, and is applicable to a broad class of problems from machine vision, to fraud detection, to semantic analysis of text. HTM is based on the theory of the neocortex first described in Hawkins’ book.
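Numenta’s actual algorithms are far richer, but the core hierarchical-temporal idea can be caricatured in a few lines of Python: each node memorizes the input patterns it sees over time and passes a compact label up the hierarchy, so higher levels see slower-changing, more abstract sequences. This toy is my own construction, not NuPIC code.

    class Node:
        """Memorizes input patterns and emits a compact label for each."""
        def __init__(self):
            self.known = {}  # pattern -> integer label

        def process(self, pattern: tuple) -> int:
            if pattern not in self.known:
                self.known[pattern] = len(self.known)  # learn a new pattern
            return self.known[pattern]

    # A two-level hierarchy: lower nodes watch raw inputs over time,
    # while the top node watches the lower nodes' labels.
    low_a, low_b, top = Node(), Node(), Node()

    for t in range(4):
        a = low_a.process(("edge", t % 2))  # e.g. a flickering edge
        b = low_b.process(("tone", t % 2))  # e.g. an alternating tone
        print(top.process((a, b)))          # higher-level, more abstract label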

In The Singularity Is Near, Ray Kurzweil comments that, “…hardware computational capacity is necessary but not sufficient. Understanding the organization and content of these resources – the software of intelligence – is even more critical and is the objective of the brain reverse-engineering undertaking.” He famously goes on to say that once a computer achieves a human level of intelligence, it will necessarily soar past it.

h+ contributor Ben Goertzel (like Kurzweil) has stated that – given the problems facing humanity – we may not be able to wait for advances in hardware and the reverse-engineering of the brain to achieve the AI vision of human-like (or greater) intelligence. His Novamente Artificial General Intelligence (AGI) software is not dependent on a specific hardware architecture, although it will obviously benefit from massively parallel supercomputer architectures. Key cognitive mechanisms of the system include a probabilistic reasoning engine based on a variant of probabilistic logic, and an evolutionary learning engine based on a synthesis of probabilistic modeling and evolutionary programming. It’s a different approach than reverse-engineering the brain, but one that may yield results more quickly.
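The article gives no Novamente internals, but the sort of evolutionary learning loop the description gestures at looks roughly like this generic sketch (the bit-string objective and every name here are purely illustrative):

    import random

    def fitness(candidate: list) -> int:
        # Toy objective: evolve a bit-string of all ones.
        return sum(candidate)

    def evolve(pop_size: int = 50, length: int = 20, generations: int = 100) -> list:
        population = [[random.randint(0, 1) for _ in range(length)]
                      for _ in range(pop_size)]
        for _ in range(generations):
            # Keep the fitter half, refill by mutating the survivors.
            population.sort(key=fitness, reverse=True)
            survivors = population[: pop_size // 2]
            children = []
            for parent in survivors:
                child = parent[:]
                child[random.randrange(length)] ^= 1  # flip one bit
                children.append(child)
            population = survivors + children
        return max(population, key=fitness)

    print(fitness(evolve()))  # typically reaches the optimum of 20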

With research and development converging on all fronts – hardware and software – it would seem to be only a matter of time until a brain with human-level complexity is available using a massively parallel architecture on a silicon chip. Karlheinz Meier’s FACETS group now plans to scale its chips up further, connecting a number of wafers to create a superchip with a total of a billion neurons and 10^13 (10 trillion) synapses, well on the way to the 22 billion neurons and 220 trillion synapses of the human brain.

If Ray Kurzweil is right, superchip development won’t stop at 22 billion neurons, even if Moore’s Law is no longer applicable and it becomes impossible to fit additional transistors on a piece of silicon. Physicist Freeman Dyson of the Institute for Advanced Study in Princeton has envisioned spheres extracting usable stellar energy. Currently the stuff of science fiction, a “Class B stellar engine” would consist of a series of nested Dyson spheres – a Matryoshka brain, like Russian nesting dolls enclosed inside each other – composed of nanoscale computers powered by a star.

 

13 Comments

  1. FPGAs are the best hardware we have now for the future of AI. In fact, they might even be more accurate, as “AI” means just that: artificial intelligence, not human-mind simulation. FPGAs learn, remember, and apply their memory. They are the only computer hardware that doesn’t necessarily go out of date.

  2. Make photonic fiber of erbium-doped silica sponge spicules, packed so that voids and dopant align to create cellular automata for emergent computation. 3-D navigation through a crystal structure would enable adequate synapses. Spicules could be grown or replicated to include dopant, which would provide 3-D circuitry.

  3. Brain simulation using transistors may be possible once we understand how the brain is organized and how the brain actually works. For the moment, most of the workings, structure, and organization of the brain are a mystery. We would do well to first understand the brain of simpler animals or even microorganisms and then move on from there. It is ill-advised to try to simulate, replicate, or understand what may be the most complex structure in the universe right away without understanding simpler structures. The current efforts to build an artificial brain with transistors may be compared with the efforts of the ancient Egyptians trying to reach the stars and the moon with stones and pyramids. Besides, the brain “programs” itself and most of its functions remain blissfully imperceptible to us. Perhaps the software for such a system (brain) based on transistors would be elusive, or be an even larger and more daunting project. Where is the “software” and what does it do to the neurons of the brain? Many more studies are needed. Another thousand years, perhaps. I am afraid the answers will be found through biology, DNA, etc. Computers will serve as tools to store, track, and create blueprints of these studies, just as they help us in designing planes or medicines.

  4. I submit what Jeff Hawkins and team are doing at Numenta. They are trying to model the way the human brain actually works and implement the model in software, which they call the NuPIC platform. You can download it for free for personal use and experimentation. http://numenta.com/

    Once this is complete, whatever can be done in software can be synthesised into an FPGA or even taken to an ASIC.

  5. “For the moment most of the workings, structure and organization of the brain is a mystery. ”

    That is not even remotely true. You would do well to research the past 50 years of brain, mind, and AI research, otherwise you will be in for a very rude surprise within your own lifetime.

  6. Hate to break it to everyone, but the brain photo with circuitry in the background is the silkscreen + traces for a PCB (i.e. printed circuit board), not for a chip. While you might – and most people do – attach chips to boards, it’s a cool photo composition but not really representative of the article.

  7. It’s funny how the article only talks about people who are at the fringe of the field, not at the leading edge. Kurzweil, Goertzel, Hawkins, and Markram (to a lesser extent) have ZERO credibility in the machine learning/AI/computational neuroscience communities. Why do journalists never interview the reputable scientists who actually push the envelope, instead of those just reinventing the wheel (Hawkins), making unsupported futuristic claims (Kurzweil), or pontificating about general AI (Goertzel)? All of these people have a commercial incentive.

    Why not interview the serious AI/machine learning/computational neuroscience people who actually produce new methods with measurable results? What about interviewing Geoffrey Hinton, Terry Sejnowski, Rich Sutton, Sebastian Seung, Yann LeCun, Tommy Poggio, Andrew Ng, or Yoshua Bengio? They make the field progress quietly. They don’t just re-invent the wheel or present other people’s ideas as their own. They don’t make wild claims about the singularity or the fact that AI will happen “real soon now”. They are in the trenches making it happen.

    People have been building neural net hardware for at least 20 years (there was work at Bell Labs and Intel in the late ’80s), but not much came out of it. Why? Because mainstream hardware progresses so fast that the performance advantage of specialized hardware always gets overtaken within a few years by Moore’s Law. Also, to build an artificial brain that actually does something useful, you need a good learning algorithm. The current learning algorithms work pretty well, but not as well as whatever the brain uses. The hardware mentioned in the article does not even support learning. What’s the point of simulating 10 zillion neurons if you can’t get them to do something useful?

    The problem is that serious scientists don’t tend to make the wild claims that fringe researchers, startup CEOs, publicity whores, and crackpots are willing to make. Hence they don’t get the attention of the press. All the hype is damaging to the field: it creates false expectations from government, industry, and the public. It is such hype that led to the multiple crashes of “AI” over the last 50 years.

  8. You certainly have a point, but such is the media. Even if the scientists themselves weren’t making outrageous claims, the media would write outlandish headlines and rephrase their comments to support outlandish claims. It’s just how people work, I guess.

    I will say, however, that what they are doing in Europe with the architecture built to model the brain is not simply a neural network. It is a far more complex model than the neural networks of the ’80s. It is definitely interesting work, if you ask me.

    I am not quite sure what you mean by it not supporting learning. In a neural network, the learning comes from the way the neurons are connected along with the way you adjust the weights over time. A common configuration is a perceptron network, but there are many formulas for how you initially connect the neurons and how you adjust the weights (a minimal sketch of this kind of weight update appears after the comments below).

    From what I understand, this new architecture actually tries to model more closely the way the brain chooses connections and weights between neurons. If it is working the way they claim, then the learning algorithm is implicit in the hardware. This is, I think, one of the ways it’s different than a traditional neural network.

    In any event it will be interesting to see what comes out of it. I wouldn’t really say “nothing came out of” traditional neural networks. They just didn’t hold up to the outlandish claims made in the media. I’m sure this new tech won’t either, but it still might have something to teach us.

  9. It is the apparent crackpots — Einstein in physics, Goertzel in AI — who make the necessary quantum leaps in any field. See http://code.google.com/p/mindforth/wiki/AiHasBeenSolved for details.

  10. I wonder what these researchers could do with the memristor!
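Regarding the weight-update description in comment 8 above: a minimal sketch of the classic perceptron learning rule, in illustrative Python – not the FACETS hardware’s actual learning mechanism.

    # Train a single perceptron to compute logical OR.
    # Update rule: w_i += lr * (target - prediction) * x_i
    data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
    w, bias, lr = [0.0, 0.0], 0.0, 0.1

    for _ in range(20):  # a few passes over the training data
        for x, target in data:
            prediction = 1 if w[0] * x[0] + w[1] * x[1] + bias > 0 else 0
            error = target - prediction
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
            bias += lr * error

    print(w, bias)  # a separating line for OR after training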