We are all traveling into the future, as are our children and grandchildren. What that future is like is therefore personally relevant to everyone. We call a very good future, one in which our species thrives, a utopia. We call a very bad future a dystopia.
We have some agency in which sort of future we end up experiencing. We have particular agency at a fortunate time such as this, when we have a global infrastructure, a global economy and global science to direct at the problems we choose.
I am sure that almost everyone reading this has been exposed to some examples of what is considered a dystopia. Science fiction books and movies are full of examples of dystopian futures. And I think it is fair to say that many people in parts of the world today are living in circumstances that closely resemble those dystopian scenarios. The end of the world is a popular scenario. The possibility that humanity will simply cease to exist at all appears in ancient revelations and pop culture. This very year 2012 has been singled out by some.
There are many ways in which humanity could be wiped out, anything from man-made environmental disasters, war, plague and robot uprisings to large meteor impacts. Which scenario worries you most depends largely on what you think are the most significant developments or natural threats.
There is an even bigger category to be concerned with. Even if humanity continues to exist, a dystopia can mean the end of civilization. It could be a post-apocalyptic result or simply the outcome of ever-increasing methods of control. In a dystopia without civilization we imagine people behaving without compassion. We generally do not want to be treated like objects or resources, to be nothing but a cog in a machine.
So what is this civilization that separates us from dystopia? There are some crucial developments. Let us begin with evolution, the consequence of selection.
This is a very important concept, because selection will shape the future just as it has shaped the past. It is everywhere. Daniel Dennett has described this as Universal Darwinism.
Evolution takes place in an environment and in an epoch. We are the result of natural selection that made us suitable to the environment and challenges of a place and time. Things change.
Environments change and challenges change. One day, the Earth will no longer be a shelter to humanity. That could happen soon by our own hand, or by the unpredictable hammer from above, or perhaps at the latest when our own sun expands its searing atmosphere to engulf the Earth.
If we have time then we might ourselves move on to other places. But if we are still only adapted to Earth then it will be difficult, because we have to take a replica of its biosphere everywhere we go. In essence, Neil Armstrong never could fully touch and experience the moon, because he was still encased in a suit of Earth atmosphere, inhaling the scent of his own body.
It is also non-optimal to have senses and thinking selected for survival problems that were most relevant millions of years ago. Let us revisit the concept of Universal Darwinism with that problem in mind. Consider the big picture: What sort of intelligence and culture would inhabit and influence the greatest share of universal space-time?
By adapting our bodies, we were able to inhabit and influence all of Earth. The clothing you wear, the cellphones you use, the cars you drive and the houses you live in are the tools that make it possible. So, the most adaptable will inhabit and influence most of everything that will ever be.
To be human is to be augmented. What we learned and what we teach all of our children is to use our minds in order to augment our bodies. Consider when you learned to drive a car. It is much faster and stronger than your limbs, but you make it your own and soon it feels entirely normal to delicately control a car in complex traffic situations.
We augmented our bodies in every way possible. So far, our minds made do by sharing burdens. We specialized in different skills, depending on each other ever more.
Culture continues to grow, and its complexity is the beauty of our creative abilities. We now use computing tools and networks such as the Internet to speed up that sharing and to accumulate ever more information.
I remember when we used to learn how to carry out calculations and derivations in school. I remember when we used to learn facts and data about our history and society. Now we have to learn how to sift through masses of data, how to ask the right questions by entering the right Google phrases. We off-load most of the computation and the data collection that fills the databases, and above all we rely on external memory.
Meanwhile, we live longer and experience more. We can participate in a greater part of the future and play a role in it. But what does that really mean, that you can live longer and experience more?
What are you? What is a personal identity, a self? And correspondingly, what is everything else? You experience yourself sitting here, reading this. You feel the seat. You see and you hear. But really, those things are all results of something. They are generated, processed results. Everything that you experience, everything that you are thinking, remembering, your concept of what is around you, where and when you are… all of that is generated by mental processing. Without it there is nothing, and that processing is all there is to Being.
Some say the self is an illusion, but everything else is just as much an illusion. It is all a construct, a way of structuring things, labeling them, constraining the patterns of your mental activity. Just as much as we can say that your mind is generating an experience, the same is true at different scales. We can equally say that a society of minds is generating an experience. And similarly, parts of your mind, pieces of activity in your brain are parsing their input and generating output, creating their own experiences.
We need to understand this to become more enlightened and to strive for better things. We generally do not want to fight or harm our friends, because we know them, feel kinship and understanding. We need to understand that everything we are, everything we experience, our very identities and our experiential universes are simply that which we are processing and generating in our minds. As we learn to understand these foundations of Being, our civilization matures.
There is no reason why processing information, which is the basis of experience at every level, should be unique to one implementation of the processing functions, such as a human brain. In principle, the same processes could be carried out in many different substrates. Ultimately, that is where the solution to adaptability lies: in the ability to move functions of the mind to many different types of substrates, to be substrate-independent minds (SIM).
That removes the constraints of a single environment and opens up the door to new senses and new ways of thinking. Imagine remembering with the precision of a computer database or finding optimal solutions with the comparative ease of a quantum computer.
To understand SIM, consider platform independent computer code. It requires a means of processing, but can run on many different platforms. Almost every religion attempts to address the problem of Being, and most espouse some form of adaptable existence whereby experience can be carried on in another substrate.
This is urgent, because we have a window of opportunity. We can tackle fundamental problems we all face, because civilization is largely intact. We have what it takes to get to the next stage.
Ideally, a Substrate-Independent Mind would achieve all of the processing that we expect in the manner most optimal to the substrate it is in. In a computing analogy, that is when you write and compile software to suit a hardware platform. We cannot do that yet.
If you ask any honest neuroscientist: “Do we understand how the mind allows me to recognize my mother?” or “Do we understand the human mind?”
The answer in each case has to be no. We simply do not understand enough about the strategies used by the mind at various levels, from the top all the way down to cells, which is really what is being asked when someone asks “do we understand”.
Neuroscience has spent most of the last 100 years learning how to identify elements of brain physiology and how to measure signals and compounds at the level of neurons and synapses.
That is why nearly every serious effort to identify functions of a specific person’s mind and ultimately to transfer such to substrate-independent minds is presently seeking to do so through the most conservative means, which we call Whole Brain Emulation.
An emulator re-implements function. You probably know some emulators, such as programs that allow you to run PC software on a Macintosh. Every emulation is achieved by carrying out what is known in engineering circles as System Identification.
System Identification is when you have a black box that receives input, carries out processing and produces output. You try to determine what functions constitute that processing by investigating the correlated input and output.
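As a toy illustration of this process, suppose the black box is secretly a three-tap linear filter; observing enough correlated input and output lets us recover its internal function by least squares. The filter and its coefficients below are invented for the example, and real neural processing is of course far from linear:

```python
import numpy as np

# Hypothetical black box: secretly a 3-tap linear filter whose hidden
# "processing" we try to recover from observed input/output alone.
rng = np.random.default_rng(0)
true_h = np.array([0.5, -0.2, 0.1])    # the hidden function

x = rng.normal(size=1000)              # observed input stream
y = np.convolve(x, true_h)[:len(x)]    # observed output stream

# Build a matrix of lagged inputs and solve the least-squares problem.
X = np.column_stack([np.roll(x, k) for k in range(3)])
X[:3] = 0                              # discard wrap-around samples from np.roll
h_est, *_ = np.linalg.lstsq(X, y, rcond=None)

print(np.round(h_est, 3))              # close to [0.5, -0.2, 0.1]
```

The point of the sketch is the workflow, not the model class: observe input and output, posit a family of functions, and fit its parameters until the box's behavior is predicted.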
The very first step is of course to know which input and output signals are of interest. Consider once more the computer analogy. Assume that we are trying to emulate the microprocessor of a PC. In that case, we know that the signals of interest are the streams of 1s and 0s that go into and come out of the chip. The 1s and 0s are really pulses of voltage above and below certain thresholds. There are many other signals, such as air pressure, cosmic radiation, noise on top of the pulses of voltage, heat being generated by the microprocessor. Those are not of interest.
Similarly, in the brain we should concentrate on the signals that are of interest at the relevant precision. Note the feedback loop that the brain creates with the rest of the body and its environment through neural action potentials or spikes. Sensory input produces spikes. Spikes drive muscles such as for speech. And the order and delay between spikes is essential for storing memory at synapses. If we can predict spike timing sufficiently well, we may have a working emulator.
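As a minimal sketch of what "predicting spike timing" means, consider a leaky integrate-and-fire neuron. It is a drastic simplification of real neural dynamics, but its spike times are exactly the kind of output an emulator must reproduce. All parameter values here are illustrative:

```python
# A minimal leaky integrate-and-fire neuron: a toy stand-in for the kind
# of model whose spike times an emulator would need to reproduce.
def lif_spike_times(input_current, dt=0.1, tau=10.0, v_thresh=1.0, v_reset=0.0):
    """Return the times (in ms) at which the model neuron spikes."""
    v, spikes = 0.0, []
    for i, current in enumerate(input_current):
        v += dt * (-v / tau + current)   # leaky integration of input current
        if v >= v_thresh:                # threshold crossing -> emit a spike
            spikes.append(i * dt)
            v = v_reset
    return spikes

# Constant drive: the model fires at a regular, predictable rate.
times = lif_spike_times([0.15] * 1000)
print(times[:3])
```

Given the same input, two implementations of this model agree spike for spike, which is the criterion an emulation of a black box would be held to.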
We are now talking about a concrete roadmap to SIM based on the requirements for system identification.
FIRST REQUIREMENT: How big can the black box be for which we can reliably identify functions that predict its behavior? The bigger the box, the longer you need to observe it. If we chose the entire brain as the black box, then we would probably have to observe its input and output over its entire life-span. What we deduced would still be flawed and would likely miss latent functions.
With literally billions or trillions of operational elements within, tuning any emulation created at that level would be computationally intractable.
The more we know about the relevant I/O and the architecture of the brain, the smaller we can make the black boxes for which system identification needs to be carried out. This first requirement is all about determining the right scope and resolution for emulation.
SECOND REQUIREMENT: We will need a platform on which an emulation of a specific mind can be implemented. How much processing are we talking about?
Let us assume a traditional general-purpose supercomputer. And let us assume that we simplified system identification by building what we call compartmental models of neurons based on structural scans. Each compartment is like an electrical circuit, governed by a set of equations known as the Hodgkin-Huxley equations, with several parameters to measure and tune.
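A single Hodgkin-Huxley compartment can be simulated in a few lines; the parameters below are the standard squid-axon values from the literature, and a full compartmental model would couple thousands of such units per neuron:

```python
import math

# One Hodgkin-Huxley compartment with the standard squid-axon parameters.
C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3     # capacitance and peak conductances
ENa, EK, EL = 50.0, -77.0, -54.4           # reversal potentials (mV)

def rates(V):
    """Voltage-dependent opening/closing rates of the m, h, n gates."""
    am = 0.1 * (V + 40) / (1 - math.exp(-(V + 40) / 10))
    bm = 4.0 * math.exp(-(V + 65) / 18)
    ah = 0.07 * math.exp(-(V + 65) / 20)
    bh = 1.0 / (1 + math.exp(-(V + 35) / 10))
    an = 0.01 * (V + 55) / (1 - math.exp(-(V + 55) / 10))
    bn = 0.125 * math.exp(-(V + 65) / 80)
    return (am, bm), (ah, bh), (an, bn)

def simulate(I_ext=10.0, dt=0.01, t_max=50.0):
    """Euler-integrate the membrane potential under constant input current."""
    V = -65.0
    (am, bm), (ah, bh), (an, bn) = rates(V)
    m, h, n = am / (am + bm), ah / (ah + bh), an / (an + bn)  # resting gates
    trace = []
    for _ in range(int(t_max / dt)):
        (am, bm), (ah, bh), (an, bn) = rates(V)
        m += dt * (am * (1 - m) - bm * m)
        h += dt * (ah * (1 - h) - bh * h)
        n += dt * (an * (1 - n) - bn * n)
        I_ion = (gNa * m**3 * h * (V - ENa)   # sodium current
                 + gK * n**4 * (V - EK)       # potassium current
                 + gL * (V - EL))             # leak current
        V += dt * (I_ext - I_ion) / C
        trace.append(V)
    return trace

trace = simulate()
print(f"peak membrane potential: {max(trace):.1f} mV")
```

Even this single compartment carries roughly a dozen parameters; multiplied across compartments and neurons, that is the tuning burden the requirements below address.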
Consider how many ATP molecules providing energy at the cellular level are needed for one action potential to propagate to neighboring neurons. And consider that 20-40W are consumed by the brain. From this I calculated how many events can take place in a unit of time. When each neuron is represented by 10,000 compartments, processing those events on a generic supercomputer would require one exaflop of computing power. Present supercomputers max out at 10 petaflops, which is 100 times too slow. But, US, European and even Indian initiatives aim to have exaflop computing centers up and running between 2017 and 2020.
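The bookkeeping behind such an estimate can be sketched as a product of factors. Note that the neuron count and per-compartment event rate below are illustrative assumptions, not figures from this essay, chosen to show how the orders of magnitude multiply to an exaflop:

```python
# Order-of-magnitude bookkeeping for the exaflop estimate.
# The first and third numbers are illustrative assumptions, not from the essay.
neurons = 1e11                    # assumed: rough human neuron count
compartments_per_neuron = 1e4     # from the text: ~10,000 compartments/neuron
ops_per_compartment_per_s = 1e3   # assumed effective event/update rate

required_flops = neurons * compartments_per_neuron * ops_per_compartment_per_s
current_flops = 10e15             # "present supercomputers max out at 10 petaflops"

print(f"required: {required_flops:.0e} flops (1 exaflop = 1e18)")
print(f"shortfall: {required_flops / current_flops:.0f}x")
```

Whatever the exact per-compartment cost turns out to be, the multiplication makes clear why the estimate lands orders of magnitude beyond present machines.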
That would be a brute-force approach. It is much better to co-design your hardware using neuromorphic computing. A famous example is the DARPA SyNAPSE project at IBM. Computation is not the main hurdle for SIM. The main hurdle is building better tools for large scale high resolution acquisition of data from the brain.
THIRD REQUIREMENT: Obtain the detailed specific structure, the connectome of an individual brain. The better that data is, the smaller the black boxes become for our system identification problem. We would like to be able to predict as much as possible about the parameters for a compartmental model from structural measurements.
Tools in this area are advancing rapidly, spurred on by research interest in the human connectome. In 2011, two teams published remarkable results demonstrating a proof of principle for system identification in retina and visual cortex using Serial Block Face Scanning Electron Microscopy techniques developed in the lab of Winfried Denk in combination with two-photon functional recordings.
Ken Hayworth, a strong proponent of whole brain emulation, is improving the volume capacity of his earlier tool, the Automatic Tape-Collecting Ultramicrotome, by developing a parallel processing technique for Focused Ion Beam Scanning Electron Microscopy at Janelia Farm laboratories.
FOURTH REQUIREMENT: We need reference measurements of characteristic responses at a high resolution to correct and tune parameters. Tuning all of the parameters of combined compartmental models that make up a whole brain is otherwise an intractable optimization problem. The functional recording techniques used in neuroscience today can obtain a very few measurements at high resolution using electrodes or a larger number of measurements at low spatial and temporal resolution using techniques such as MRI. We need something much better.
If we try to do this by improving external recording techniques then the distance at which measurements need to be resolved poses a physics problem.
If you want to resolve a signal at a given spatial resolution and within a given temporal resolution, but you increase the distance from being immediately adjacent to a synapse to being outside the skull then there is a quadratic increase in power requirements. At the resolutions required, this rapidly leads to doses that are far from non-invasive. They would be very damaging, and would severely affect measurements through their own effects on the neural tissue.
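To get a feel for that quadratic penalty, assume a sensor can sit about a micrometer from a synapse while an external sensor sits about a centimeter away; both distances are rough assumptions for illustration:

```python
# The quadratic distance penalty made concrete with assumed distances:
# ~1 micrometer at a synapse vs ~1 centimeter from outside the skull.
d_synapse = 1e-6   # m, assumed sensor-to-source distance at a synapse
d_skull = 1e-2     # m, assumed distance for an external sensor

# Under quadratic scaling, required power grows with distance squared.
power_factor = (d_skull / d_synapse) ** 2
print(f"{power_factor:.0e}x more power")
```

A factor of ten thousand in distance becomes a factor of a hundred million in power, which is why external recording at synaptic resolution runs into damaging energy doses.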
The brain carries out its own measurements at large scale and high resolution by remaining very proximate to the sources of activity, detecting activity through microscopic synaptic receptor channels. At the same scale, we may build means to measure without interfering. Also, the brain handles the enormous quantity of information by using a vast hierarchy of mostly local connections. In previous publications I have referred to this as the Demux-Tree approach to neural recording.
Practical implementations in development that are based on these insights are threefold. First there is a move to arrays with very many electrodes. Then there is work to create a means of recording neural activity at the molecular scale, using DNA or similar substrates for the recording. That is called a molecular ticker tape (a collaboration between Northwestern University, Harvard, and MIT).
The third implementation combines technologies to produce a microscopic hierarchical system for in-vivo measurements.
The basic component is an agent built in familiar IC technology. Prior work has already shown that you can successfully combine microscopic chips with living cells (e.g. work by Gomez-Martinez et al., 2009).
A chip the size of a red blood cell can contain more transistors than the original Intel 4004 microprocessor. Power can be delivered in a number of ways, from magnetic induction to glucose fuel cells, but most easily through light. There is a window of infrared light between 800 and 1000 nm at which tissue is essentially transparent.
Recording of activity can be done either by detecting voltages over a capacitor or by optical means when operated in combination with voltage sensitive proteins that are used to show activity in neurons.
To conduct brain-wide measurements and to deliver data to the outside, large numbers of microscopic agents need to collaborate, each carrying out specialized roles. They would form a team or a secondary network of computation within and side-by-side with the brain.
Measurements made by agents can be collected, multiplexed and converted into signals that are more readily identified by external imaging methods. Locations of measurements can be obtained by combining direct detection of larger hubs with a protocol for relative triangulated distances between agents.
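A toy two-dimensional version of such localization: an agent's position can be recovered from its distances to three hubs at known positions. All coordinates here are hypothetical, and a real system would work in three dimensions with noisy measurements:

```python
import math

# Toy 2D localization: recover an agent's position from its distances
# to three "hub" agents at known (hypothetical) positions.
hubs = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
agent = (3.0, 4.0)                          # ground truth, to be recovered
dists = [math.dist(agent, h) for h in hubs]

# Subtracting the first circle equation from the other two linearizes
# the problem into two equations in the unknowns (x, y).
(x1, y1), (x2, y2), (x3, y3) = hubs
d1, d2, d3 = dists
b1 = (d1**2 - d2**2 + x2**2 + y2**2 - x1**2 - y1**2) / 2
b2 = (d1**2 - d3**2 + x3**2 + y3**2 - x1**2 - y1**2) / 2
a11, a12 = x2 - x1, y2 - y1
a21, a22 = x3 - x1, y3 - y1
det = a11 * a22 - a12 * a21                 # solve the 2x2 linear system
x = (b1 * a22 - b2 * a12) / det
y = (a11 * b2 - a21 * b1) / det
print(round(x, 6), round(y, 6))             # recovers (3.0, 4.0)
```

Chaining such relative measurements up through detectable hubs is what would let externally visible signals be mapped back to microscopic measurement sites.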
These machines within the mind are purposely conceived as a combination of presently feasible technology. They are an ambitious next step in neuroscience that once again involves a collaboration with MIT and Harvard laboratories.
The long term future will either not involve us or will demand that we become vastly more adaptable. Right now, we have an incredible global infrastructure, an economy that – though shaken – is still powerful, and research and development are thriving. We don’t know if we will have those things in 20, 40 or 60 years.
Yes, it is clearly very ambitious to try to solve these fundamental problems that Universal Darwinism will throw our way. But that is exactly why now is the time to take it on. Right now, we have the means and the opportunity to bring about advances that open up our understanding of who we are, what it means to exist, to be, and to make us a species that will thrive. It stands to reason that we should grab that opportunity while it is here!
Randal A. Koene is a Dutch neuroscientist and neuroengineer, and co-founder of carboncopies.org, the outreach and roadmapping organization for Advancing Substrate-Independent Minds (ASIM). He is currently directing the Analysis team at the nanotechnology company Halcyon Molecular.