Randal Koene on Substrate-Independent Minds

Dr. Randal Koene is a cross-disciplinary scientist whose background spans computational neuroscience, psychology, information theory, electrical engineering and physics.  Currently serving as Director of Analysis at Halcyon Molecular, working on breakthrough methods in DNA sequencing and other projects, he was previously Director of the Department of Neuroengineering at Tecnalia, the third largest private research organization in Europe, and a professor at the Center for Memory and Brain of Boston University.  He has also been perhaps the world’s most vocal, steadfast and successful advocate of the idea of “mind uploading” or, as he now prefers to call it, “substrate-independent minds”: putting a human brain in a digital computer, for example – or, more generally, making practical technology for treating minds as patterns of organization and dynamics rather than as things tied to particular physical implementations.  His websites contain valuable information for anyone interested in these subjects – and in this interview he gives a wonderful overview of the current state and future prospects of R&D regarding substrate-independent minds.

First, just to get the question out of the way — why the shift in terminology from “mind uploading” to “advancing substrate-independent minds” (ASIM)?  Is it just the same idea in new clothing or is there a conceptual shift also?


There are multiple reasons for the shift in terminology. The first reason was a confusion, at least on the part of the uninitiated, about what mind uploading means. There’s uploading, downloading, offloading. With respect to memory, I’ve heard all three used before. But what’s worse, uploading is normally associated only with taking some data and putting it somewhere else for storage. It does not imply that you do anything else with it, while the most interesting part about the objective is not just to copy data, but to re-implement the processes. The goal is to emulate the operations that have to be carried out to have a functioning mind.

The second important reason was that mind uploading does not clearly say anything about our objective. Mind uploading is a process, and that is exactly what it should stand for and how we use that term: Mind uploading is the process of transfer, a process by which that which constitutes a specific mind is transferred from one substrate (e.g. the biological brain) to another (e.g. a silicon brain).

There was also a slight conceptual shift involved in the adoption of the new Substrate-Independent Minds terminology.  Or, if there was not a conceptual shift then the perceived shift can be blamed even more squarely on the original terminology. 😉


Elaborate a bit?


Consider this: When you have accomplished a transfer, i.e. a mind upload, then that mind is no longer in its original substrate. The mind could be operating on any sufficiently powerful computational substrate. In that sense, it has become substrate-independent. You managed to gain complete access and to transfer all of its relevant data. That is why we call such minds Substrate-Independent Minds. Of course, they still depend on a substrate to run, but you can switch substrates. So what do you do with a substrate-independent mind? Our interests lie not just in self-preservation or life-extension. We are especially interested in enhancement, competitiveness, and adaptability. I tried to describe that in my article Pattern survival versus Gene survival, which was published on Kurzweil AI.

Furthermore, we are not only interested in minds that are literal copies of a human mind, but have a strong interest in man-machine merger. That means, other-than-human substrate-independent minds are also topics of SIM.


Ah, for instance AGI minds that are created from the get-go to be substrate independent, rather than being uploaded from some originally substrate-dependent form.   Yes, I’ve got plans to create some of those!


Right.  Admittedly though, the current emphasis is heavily tilted towards the initial challenge of achieving sufficient access to the biological human mind.

We have been using the term ASIM on the web-site, because we wanted to make it very clear that the organization is action-oriented and not a discussion parlor.


The “A” is for “Advancing”


And SIM is the usual abbreviation of Substrate-Independent Minds.


So that’s a very, very broad concept – encompassing some current neuroscience research and also some pretty far-out thinking.


Indeed, there are many suggestions about how SIM might be accomplished. Those include approaches that depend on brain-machine interfaces, Terasem-like personality capture, etc. I imagine that most of those approaches can be quite useful, and many will bump into a number of shared challenges that any route to SIM has to overcome. But many approaches are also mired in philosophical uncertainty. For example, if you did live with a BMI for decades and relied more and more heavily on machine parts, would it be fair to say that you lose hardly anything when the biological brain at its origin ceased to function? That is the hope of some, but it seems a little bit like saying that society depends so much on machines that if all the people disappeared it would not matter. Similarly, if we capture my personality through careful description and recording of video and audio, at which point could I truly say that I had been transferred to another substrate? How does such an approach differ substantially from an autobiography?

Given those philosophical uncertainties, nearly everyone who is interested in SIM and has been working seriously in the field for a while is presently investigating a much more conservative approach. Those people include Ken Hayworth, Peter Passaro, but also well-known project leads who have an interest in emulating the functions of brain circuitry, such as Henry Markram, Jeff Lichtman and Ed Boyden. The most conservative approach is a faithful re-implementation of a large amount of detail that resides in the neuroanatomy and neurophysiology of the brain. The questions encountered there are largely about how much of the brain you need to include, how great the resolution of the re-implementation needs to be (neural ensembles, spiking neurons, morphologically correct neurons, molecular processes), and how you can acquire all that data at such scale and resolution from a biological brain? That conservative implementation of SIM is what I called Whole Brain Emulation many years ago. That term stuck, though it is sometimes abbreviated to “brain emulation” when the scope of a project is not quite as ambitious.  [Editor’s note: see the Whole Brain Emulation Roadmap produced at Oxford University in 2007, based on a meeting involving Randal Koene and others.]

So — in the vein of the relatively near-term and practical, what developments in the last 10 years do you think have moved us substantially closer to achieving substrate-independent human minds?


There are five main developments in the last decade that I would consider very significant in terms of making substrate-independent minds a feasible project.

The first is a development with its own independent drivers, namely advances in computing hardware, first in terms of memory and processor speeds, and now increasingly in terms of parallel computing capabilities. Parallel computation is a natural fit to neural computation. As such, it is essential both for the acquisition and analysis of data from the brain, as well as for the re-implementation of functions of mind. The natural platform for the implementation of a whole brain emulation is a parallel one, perhaps even a neuromorphic computing platform.
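The fit between parallel hardware and neural computation can be illustrated with a toy sketch. The following is my own minimal example, not anything from the interview: a population of leaky integrate-and-fire model neurons advanced with a single vectorized update per time step, the kind of operation that maps naturally onto parallel or neuromorphic hardware. All parameter values are arbitrary placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 100_000                      # number of model neurons
dt, tau = 1.0, 20.0              # time step and membrane time constant (ms)
v_rest, v_thresh, v_reset = -70.0, -50.0, -65.0

v = np.full(n, v_rest)           # membrane potentials, one per neuron

for _ in range(100):             # simulate 100 ms
    i_in = rng.normal(1.5, 0.5, n)   # random input drive (arbitrary units)
    # one vectorized update advances every neuron at once --
    # each neuron's state is independent within a step, so the work
    # parallelizes trivially across cores or accelerator lanes
    v += dt / tau * (v_rest - v) + i_in
    fired = v >= v_thresh
    v[fired] = v_reset           # spiking neurons reset

print(f"{fired.sum()} of {n} neurons fired on the final step")
```

The same independence argument is what makes GPUs and neuromorphic chips attractive for the data-analysis and re-implementation stages Koene describes.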

The second major development is advances in large-scale neuroinformatics. That is, computational neuroscience with a focus on increasing levels of modeling detail and increasing scale of modeled structures and networks. There have been natural developments there, driven by individual projects (e.g. the Blue Brain Project), but also organized advances such as those spearheaded by the INCF. Any project towards substrate-independent minds will obviously depend on a large and rigorous means of representation and implementation with a strong resemblance to modeling in computational neuroscience.

Third, actual recording from the brain is finally beginning to address the dual problems of scale and high resolution. We see this both in the move towards ever larger numbers of recording electrodes (see for example Ed Boyden’s work on arrays with 10s of thousands of recording channels), and in the development of radically novel means of access during the last decade. A much celebrated radical development is optogenetics, the ability to introduce light-sensitive ion channels into the membranes of specific cell types, so that those neurons can be excited or inhibited by different wavelengths of light stimulation. It is technology such as this, combining electro-optical technology and biological innovation, that looks likely to make similar inroads on the large-scale high-resolution neural recording side. Artificial or synthetic biology may be the avenues that first take us towards feasible nanotechnology, which has long been hailed as the ultimate route to mind uploading and substrate-independent minds (e.g. recall the so-called Moravec procedure).



Prototype Automatic Tape-Collecting Lathe Ultramicrotome (ATLUM) at the Lichtman lab at Harvard University


A fourth important type of development has been in the form of projects that are aimed specifically at accomplishing the conservative route to SIM that is known as whole brain emulation. There we see tool development projects, such as at least three prototype versions of the Automatic Tape-Collecting Lathe Ultramicrotome (ATLUM) developed at the Lichtman lab at Harvard University. The ATLUM exists solely because Ken Hayworth wants to use it to acquire the full neuroanatomy, the complete connectome, of individual brains at 5nm resolution for reconstruction into a whole brain emulation. There are a handful of projects of this kind, driven by individual researchers who are part of our network that actively pursues whole brain emulation.
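To get a feel for the scale such tools confront, here is a back-of-envelope calculation — my own arithmetic with assumed round numbers, not figures from the interview, and simplified to isotropic voxels — of the raw image volume a whole human brain scanned at 5nm would produce:

```python
# Assumptions (rough, for illustration only):
brain_volume_m3 = 1.4e-3          # ~1,400 cm^3, a typical human brain
voxel_edge_m = 5e-9               # 5 nm isotropic voxels (simplification;
                                  # real sectioning is coarser in depth)
voxel_volume_m3 = voxel_edge_m ** 3

n_voxels = brain_volume_m3 / voxel_volume_m3
bytes_total = n_voxels * 1        # assume 1 byte of image data per voxel

print(f"voxels: {n_voxels:.2e}")                       # on the order of 1e22
print(f"raw data: ~{bytes_total / 1e21:.0f} zettabytes")
```

Even with generous compression, numbers of this magnitude explain why data acquisition, storage and analysis dominate the whole-brain-emulation tool-building agenda.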

The fifth development during the last decade is somewhat different, but very important. It is a conceptual shift in thinking about substrate-independent minds, whole brain emulation and mind uploading. Ten years ago, I could not have visited leading mainstream researchers in neuroscience, neural engineering, computer science, nanotechnology and related fields to discuss projects in brain emulation. It was beyond the scope of reasonable scientific endeavor, the domain of science fiction. This is no longer true.


Yes, that’s an acute observation.  I’ve noticed the same with AGI, as you know.  The AI field started out 50 years ago focused on human-level thinking machines with learning and self-organization and all that great stuff, but when I got my PhD in the late 80s, I didn’t even consider AI as a discipline, because it was so totally clear the AI field was interested only in narrow problem-solving or rigid logic-based or rule-based approaches.  I was interested in AGI more than pure mathematics, but I chose to study math anyway, figuring this would give me background knowledge I would be able to use in my own work on AGI later on.  Even 10 years ago it was hard to discuss human-level AI – let alone superhuman thinking machines – at a professional AI conference without someone laughing at you.  Then 5 to 8 years ago, occasional brief workshops or special tracks relevant to human-level AI or AGI started to turn up at AAAI and IEEE conferences.  And then I started the AGI conference series in 2006, and it’s flourished pretty well, drawing in not only young researchers but also a lot of old-timers who were interested in AGI all along but afraid to admit it for fear of career suicide.


Yes, the situations are fairly similar – and related, because AGI and SIM share a lot of conceptual ground.


And I guess the situations are correlated on a broader level, also.  As science and technology accelerate, more and more people – in and out of science – are getting used to the notion that ideas traditionally considered science fiction are gradually becoming realities.  But academia shifts slowly.  The “academic freedom” provided by the tenure system is counterbalanced, to an extent, by the overwhelming conservatism of so many academic fields, and the tendency of academics to converge on particular perspectives and ostracize those who diverge too far.  Of course, a lot of incredibly wonderful work comes out of academia nonetheless, and I’m grateful it exists – but academia has a certain tendency toward conservatism that has to be pushed back against.


In fact I’m still mildly wary to list the names of ASIM-supportive scientists in publications – though I can if pressed.  It can still be a slightly risky career move to associate oneself with such ideas, especially for a young researcher.  But even so, it is now evident that a significant number of leading scientists, lab PIs, and researchers celebrated in their fields do see brain emulation and even substrate-independent minds as real and feasible goals for research and technology development. I can honestly say that this year, I am in such a conversation with specific and potentially collaborative aims and within the context of SIM on a bi-weekly basis. That development is quite novel and very promising.

So, let’s name some names.  Who do you think are some current researchers doing really interesting work pushing directly toward substrate-independent human minds?



Knife-edge scanning microscope, from Texas A&M



I will try to keep this relatively short, so only one or two sentences for each:

  • Ken Hayworth and Jeff Lichtman (Harvard) are the guiding forces behind the development of the ATLUM, and of course Jeff also has developed the useful Brainbow technique.
  • Winfried Denk (Max-Planck) and Sebastian Seung (MIT) popularized the search for the human connectome and continue to push its acquisition, representation and simulations based on reconstructions forward, including recent publications in Science.
  • Ed Boyden (MIT) is one of the pioneers of optogenetics, a driver of tool development in neural engineering, including novel recording arrays and a strong proponent of brain emulation.
  • George Church (Harvard), previously best known for his work in genomics, has entered the field of brain science with a keen interest in developing high-resolution large-scale neural recording and interfacing technology. Based on recent conversation, it is my belief that he and his lab will soon become important innovators in the field.
  • Peter Passaro (Sussex) is a driven researcher with the personal goal to achieve whole brain emulation. He is doing so by developing means for functional recording and representation that are influenced by the work of Chris Eliasmith (Waterloo).
  • Yoonsuck Choe (Texas A&M) and Todd Huffman (3Scan) continue to improve the Knife-Edge Scanning Microscope (KESM), which was developed by the late Bruce McCormick with the specific aim of acquiring structural data from whole brains. The technology operates at a lower resolution than the ATLUM, but is presently able to handle acquisition at the scale of a whole embedded mouse brain.
  • Henry Markram (EPFL) has publicly stated his aim of constructing a functional simulation of a whole cortex, using his Blue Brain approach, which relies on statistical reconstruction from data obtained in studies conducted in many different (mostly rat) brains. Without a tool such as the ATLUM, the Blue Brain Project will not develop a whole brain emulation in the truest sense, but the representational capabilities, functional verification and functional simulations that the project produces can be valuable contributions towards substrate-independent minds.
  • Ted Berger (USC) is the first to develop a cognitive neural prosthetic. His prosthetic hippocampal CA3 replacement is small and has many limitations, but the work forces researchers to confront the actual challenges of functional interfacing within core circuitry of the brain.
  • David Dalrymple (MIT/Harvard) is commencing a project to reconstruct the functional and subject-specific neural networks of the nematode C. elegans. He is doing so to test a very specific hypothesis relevant to SIM, namely whether data acquisition and reimplementation can be successful without needing to go to the molecular level.

Well, great, thanks.  Following those links should keep our readers busy for a while!

Following up on that, I wonder if you could say something about what you think are the most important research directions for SIM and WBE right now?  What do the fields really need, to move forward quickly and compellingly?


Overall, I think that what SIM needs the most at this time are:

  1. To show convincingly that SIM is fundamentally possible.
  2. To create the tools that make SIM possible and feasible.
  3. To have granular steps towards SIM that are themselves interesting and profitable for entrepreneurial and competitive effort.

The first point is being addressed, as noted above. Points 2 and 3 are closely related, because the tools that we need to achieve SIM can be built and improved upon in a manner where their successive capabilities enable valuable innovations. We will be organizing a workshop aimed specifically at that question in the fall.

I think that technologies able to represent and carry out in parallel the functions needed by a substrate-independent mind are highly valuable and may require improvements in neuromorphic hardware or beyond. And yet, my main concerns at this stage are at the level of acquisition, the analysis of the brain. Any technology that leads to greater acquisition at large scale and high resolution will have enormous potential to lead us further toward substrate-independent minds. That is why I pointed out large scale recording arrays and mixed opto-electronic/bio approaches such as optogenetics above. If we were to attempt whole brain emulation at a scale beyond the nematode at this time, then a structure-only approach, with attempted mapping from structure to function, using the ATLUM would be the best bet.

Stepping back a bit from the nitty-gritty, I wonder what’s your take on consciousness?  It’s a topic that often comes up in discussions of mind-uploading — er, ASIM.   You know the classic question.  If you copied your brain at the molecular level into a digital substrate, and the dynamics of the copy were very similar to the original, and the functional behavior of the copy were very similar to the original — would you consider that the copy genuinely was YOU?  Would it have your consciousness, in the same sense that the YOU who wakes up in the morning has the same consciousness as the YOU who went to sleep the previous night?  Or would it be more like your identical twin?


You are really asking three questions here. You are asking what I consider sufficient for consciousness to be an emergent property of the reimplemented mind. You are also asking if I would consider a reimplementation to be equal to myself, to be myself. And you are asking if this would be me in specific circumstances, for example if there were multiple copies (e.g. an original brain and a copy) that existed and experienced things at the same time. I will try to answer each of these questions.

I personally believe that consciousness is not a binary thing, but a measure. It is like asking if something is hot. Hot relative to what? How hot? There should be measurable degrees of consciousness, at least in relative terms. I believe that consciousness is a property of a certain arrangement of a thinking entity, which is able to be aware to some degree of its own place in its model of the world and the experiences therein. I do think that such consciousness is entirely emergent from the mechanisms that generate thought. If the necessary functions of mind are present then so is consciousness. Does this require molecular-level acquisition and reimplementation? At this stage, I don’t know.

If a copy of my mind were instantiated as you described and it told me that it was self-aware, then I would tend to believe it as much as I would believe any other person. I would also be inclined to believe that said copy was to any outside observer and to the world at large as much an implementation of myself as my original biological implementation (assuming that it was not severely limited mentally or physically).

Is such a copy indeed me? Is it a continuation of myself to the extent where a loss of my original biological self would imply no significant loss at all? That is where things become philosophically tricky. Those who argue that personal identity and the sense of self-continuity are illusions would probably argue that the means by which a copy was created and whether that copy was the only thing to continue to exist are irrelevant. They would be satisfied that self-continuity was achieved as much in such a case as it is when we wake up in the morning, or even from moment to moment. On the other end of the spectrum, you have those who would argue, even in a quantized universe, that the arrangement of instantiations from one quantum to its adjacent quantum states has some implication for real or perceived continuity. In that case, it can matter whether your brain was sliced and scanned by an ATLUM prior to reimplementation and reactivation, or if a transition was arranged through large-scale neural interfaces and a gradual replacement of the function of one neuron at a time. I have often described myself as a fence-sitter on this issue, and I still am. If you confront me with one of the two perspectives then I will argue the opposite one, as I see and personally sense relevance in both positions. If my stance in this matter does not change then it will have implications about the sort of uploading procedure that I would find cautiously satisfactory. In such a case, a procedure that assumes a problem of self-continuity would be the safe choice, as it would satisfy both philosophical stances.

Of course, even with such a procedure or in the event that there was no self-continuity problem then you can still end up with something akin to the identical twin situation. You could have two initially linked and synchronous implementations of a mind and identity. You could then sever the synchronizing connection and allow each to experience different events. The two would legitimately have been identical to begin with, but would gradually become more different. That is not really a problem, but rather an interesting possibility.

Yes, I tend to agree with your views.

But just to return to the gradual uploading scenario for a moment – say, where you move yourself from your brain to a digital substrate neuron by neuron.  What do you think the process would feel like?  Would it feel like gradually dying, or would you not feel anything at all… or some more subtle shift?  Of course none of us really knows, but what’s your gut feeling?


What would it feel like? Do you currently feel it when a bunch of your neurons die or when your neural pathways change? The gradual process is something we are very much accustomed to. You are not the same person that you were when you were 5 years old, but there is a sense of continuity that you are satisfied with. Our minds have a way of making sense out of any situation that they are confronted with, as for example in dreams. Assuming that a gradual process were sufficiently gradual and did not in effect seem like a series of traumatic brain injuries, I don’t think that you would feel at all strange.


Well, I’m certainly curious to find out if you’re right!  Maybe in a couple decades…

Moving on – I wonder, what do you see as the intersection between AGI (my own main research area) and substrate-independent human minds, moving forward?  Of course, once a human  mind has been ported to a digital substrate, we’ll be able to study that human mind and learn a lot about intelligent systems that way, including probably a lot about how to build AI systems varying on the human mind and perhaps more generally intelligent than the human mind.  (At least, that’s my view, please say so if you disagree!).  But I’m wondering about your intuition regarding how general will be the lessons learned from digital versions of human minds.  One point of view (let’s call it A) could be that human minds are just one little specialized corner of mind-space, so that studying digital versions of human minds won’t tell us that much about how to make vastly superhuman intelligence, or even about how to make nonhumanlike (but roughly human level) general intelligences.  Another point of view (let’s call it B) could be that the human  mind embodies the main secrets of general intelligence under limited computational resources, so that studying digital versions of human minds would basically tell us how general intelligence works.  Do you lean toward A or B or some other view?  Any thoughts in this general direction?


I think I lean a little bit towards A and a little bit towards B. I think that the human mind probably is very specialized, given that it evolved to deal with a very specialized set of circumstances. I look forward to being able to explore beyond those constraints. At the same time, I think that much of the most interesting work in AI so far has been inspired directly or indirectly by things we have discovered about thinking carried out in biological brains. I don’t think we have reached the limits of that exploration or that exploring further within that domain would impose serious constraints on AGI. It is also true that much of what we think of when we think of AGI are in fact capabilities that are demonstrably within the bounds of what human intelligence is capable of. It seems that we would be quite happy to devise machines that are more easily able to interact with us in domains that we care about. So, from that point of view also, I see no problem with learning from the human mind when seeking to create artificial general intelligence. The existence of the human mind and its capabilities provides a reassuring ground-truth to AGI research.

Actually, I believe that there are many more areas of overlap between AGI and SIM research. In fact, they are close kin. It is not just that an AGI is a SIM and a SIM an AGI, but also that the steps needed to advance toward either include a lot of common ground. That effort, those steps required, will lead to insights, procedures, tools and spin-offs that impact both fields. The routes to AGI and to SIM are ones with many milestones. It is no coincidence that many of the same researchers who are active in one of the two fields are also active or strongly interested in the other.


Your latter point is an issue I’m unclear about, as an AGI researcher.  I would suppose there are some approaches to AGI that are closely related to human cognition and neuroscience, and these have obvious close relatedness with WBE and human-oriented SIM.  On the other hand, there may be other approaches to AGI owing relatively little to the human mind/brain – then these would relate to SIM broadly speaking, but not particularly to WBE or human-oriented SIM, right?  So, partly, the relationship between AGI and SIM will depend on the specific trajectory AGI follows.  My own approach to AGI is heavily based on human cognitive science and only loosely inspired by neuroscience, so I feel like its relatedness with WBE and human-oriented SIM is nontrivial but also not that strong.  On the other hand, some folks like Demis Hassabis are  pursuing AGI in a manner far more closely inspired by neuroscience, so there the intersection with WBE and human-oriented SIM is more marked.
Next, what about the organization itself?  It seems an important and interesting effort, and I’m wondering how far you intend to push it.  You’ve done workshops – do you envision a whole dedicated conference on ASIM at some point?  Do you think ASIM is taken seriously enough that a lot of good neuroscientists and scientists from other allied fields would come to such a conference?

The organization has four main reasons for existence:

  1. To explain the fundamentals of SIM, demonstrating its scientific basis and feasibility.
  2. To create and support a human network, especially those who can together bring about SIM. Among those, to facilitate multi-disciplinary for-profit and non-profit projects.
  3. To create and maintain a roadmap, relying in part on access enabled through affiliation with Halcyon and such, but also relying in part on its independence as a non-profit.
  4. To provide and maintain a public face of the field of SIM. That includes outreach activities such as publications and meetings.

SIM is obviously a multi-disciplinary field, so one of the challenges is to carry out points 1–4 across the spectrum – what our co-founder has called a polymath and initially technology-agnostic approach.

We spent some time and effort on point 4 during the past year, but it looks like some of that activity may be well taken care of by others at present (e.g. Giulio Prisco and his teleXLR8 project). For this reason, our near future efforts concentrate more strongly on points 1 to 3. For example, we will ask a select group of experts to help us address the question of ventures and entrepreneurial opportunities in the area of SIM.

But yes, I do believe that at this point SIM has gained enough traction and feasibility that good scientists such as those mentioned above are interested in our efforts. We have confirmed the intention of several to participate in the expert sessions.


OK, next I have to ask the timing question.  (I hate the question as a researcher, but as an interviewer it’s inevitable.)  When do you think we will see the first human mind embodied in a digital substrate?  Ray Kurzweil places it around 2029, do you think that’s realistic?  If not what’s your best guess?

And how much do you think progress toward this goal would be accelerated if the world were to put massive funding into the effort, say tens of billions of dollars reasonably intelligently deployed?


I work for Halcyon. 🙂 ….  Therefore, I cannot give an unbiased estimate of the time to SIM or of the acceleration that large-scale funding would provide. Achieving such funding is an explicit aim of mine, and the acceleration should be such that SIM is achieved within our lifetimes. One reason the organization exists is to ensure a roadmap that lays the groundwork for several possible routes to SIM. The preferred routes are those that are achievable in a shorter period of time, and they will require the attraction and allocation of greater resources and effort. Perhaps 2029 is optimistic, but I would hope that SIM can be achieved not long after that.

Note that it is also a bit difficult to say exactly when SIM is achieved, unless SIM were achievable only by means of a Manhattan-style project with no significant intermediary results, until a human brain is finally uploaded into a SIM. I think that is an unlikely approach. It is much more likely that there will be stages and degrees of SIM, from a gradual increase in our dependence on brain enhancements to eventual total re-implementation.

Ah, that’s a good point.  So what do you think are some of the most important intermediate milestones on the path between here and a digitally embodied human mind?  Are there any milestones so that, when they’re reached, you’ll personally feel confident that we’re on the “golden path” technologically to achieving digitally-embodied human minds?


We are already on the golden path. 🙂

One of the most important milestones is to establish a well-funded organization that is serious about supporting projects that address challenges that must be overcome to make SIM possible. The inception of such organizations is now a fact, though their maturation to the point where the necessary support is made available is still a few years away.

Technologically, the most important milestones are:

  1. Tools that allow us to acquire brain data at high resolution (at least individual neurons) and large scale (entire regions and multiple modal areas of a brain), and to do so functionally and structurally. There are smaller milestones as these tools are developed, and we will notice those as they enable valuable spin-off creations.
  2. Platforms that enable brain-size processing at scales and rates at least equivalent to those required for the human experience. Again, the development of those platforms brings with it obvious signs of progress as their early versions can be used to implement previously impossible or impractical applications.

Applications of brain-machine interfaces straddle both development areas and may be indicators or milestones that demonstrate advances.
Hmmm.  Another question I feel I have to ask is: what about the quantum issue?  A handful of scientists (Stu Hameroff among the most vocal) believe digital computers won’t be adequate to embody human minds, and that some sort of quantum computer (or maybe even something more exotic like a “quantum gravity computer”) will be necessary.  And of course there are real quantum computing technologies coming into play these days, like D-Wave’s technology, with which I know you’re quite familiar.

And there is some evidence that quantum coherence is involved in some biological processes relevant to some animals’ neural processes, e.g. the magnetic field based navigation of birds.  What’s your response to this?  Will classical digital computers be enough?  And if not, how much will that affect the creation of digitally embodied human minds, both the viability and the timeline?


I don’t personally care much about debates that focus on digital computing platforms. I am not trying to prove a point about what digital computers can do. I am trying to make functions of the mind substrate-independent, i.e. to make it possible to move them to other processing substrates, whatever those may be.

Are there platforms that can compute functions of mind? Yes. We have one such platform between our ears.

Can other platforms carry out the computations? Can they carry out ALL of the processing in the same space and in the same time that it takes to carry them out in the original substrate? No.

I think it is probably impossible to do better than elementary particles already do at doing what they naturally do, and all that emerges from that. But do we care?

I would say that we do not.

By analogy, let us consider the emulation of one computing platform (a Macintosh) on another platform (a PC). Do we care if we can reproduce and emulate all aspects of the Mac platform, such as its precise pattern of electrical consumption, or the manner and rate at which a specific MacBook heats up portions of its environment? We really don’t. All we care about is whether the programs that we run on the Mac also produce the same results when run on a Mac emulator on the PC. In the same sense, there are many levels at which the precise emulation of a brain is quite irrelevant to our intentions.

The interesting question is exactly where to draw the lines as we choose to re-implement structure and function above a certain level and to regard everything below that level as a black box for which specific transformations of input to output must be accomplished. What if some aspects do require quantum coherence? In that case one must decide if the particular feature that depends on quantum coherence is one that we care to re-implement, one that is essential to our intentions. If so, then a platform capable of the re-implementation will be needed. Would that change the time-line to SIM? Possibly, or possibly not if D-Wave makes rapid progress! 😉
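The black-box criterion described above can be sketched in code. The following is a purely illustrative Python toy (all names are hypothetical, not from any real project): a component of the original substrate is characterized only by its input-to-output mapping, and a re-implementation on a different substrate is accepted if it reproduces that mapping within tolerance, regardless of how different its internal mechanism is.

```python
import math

def original_unit(inputs):
    # Stand-in for a unit in the original substrate: a fixed input->output
    # transformation. Internally this could involve chemistry or even quantum
    # effects; at this level of description only the mapping itself matters.
    return 1.0 / (1.0 + math.exp(-sum(inputs)))

def reimplemented_unit(inputs):
    # Re-implementation via a different internal mechanism: tanh rescaled to
    # (0, 1) is mathematically identical to the logistic sigmoid above.
    return 0.5 * (1.0 + math.tanh(0.5 * sum(inputs)))

def functionally_equivalent(f, g, test_cases, tol=1e-9):
    # The black-box criterion: equivalence is judged purely over observable
    # input->output behaviour, never over internal structure.
    return all(abs(f(x) - g(x)) <= tol for x in test_cases)

cases = [[0.0], [1.0, -2.0], [0.5, 0.5, 0.5], [-3.0, 4.0]]
print(functionally_equivalent(original_unit, reimplemented_unit, cases))  # True
```

Where to draw the line is exactly the open question: everything below the chosen level is collapsed into a single transformation like `original_unit`, and if some feature below that line (quantum coherence, say) turns out to matter for the mapping, the test in `functionally_equivalent` would fail and a more capable platform would be needed.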

Personally, I am quite interested in the possibilities that quantum computing may provide, even if it is unnecessary for a satisfactory SIM. That is because I am interested in enhancements that take us beyond our current mental capabilities. It is nice to be able to use a quantum computer, but it may be even more interesting to experience quantum computation as an integral part of being, just as we experience neuromorphic parallel computation.


Thanks a lot Randal, for the wonderfully detailed answers.  I look forward to my upload interviewing your upload sometime in the future!





  1. I no longer believe that mind is computation. I now think it must be some physical effect or field. Consider this comic
    We can perform computation by moving rocks, or by handwriting. The thing is, it would still be just rocks and paper. There is no known reason why those rocks would become conscious of themselves as we are.

    • A pebble starts an avalanche. So does an impulse grow into a thought and from a thought into an action. How fantastic is that! I love AI.
      Back in 2005, when I was working on building neural nets with a PLD (programmable logic devices) for a speech recognition product, I came across the classic problem of noise. You can make a network, put it in the PLD, and train it, but you can’t remove it and put it into another PLD! The minute variations in timing (sub-picoseconds) between gates made the individual network inseparable from the device it was running on.
      The Ship of Theseus is such a wonderful paradox.

      • >So does an impulse grow into a thought and from a thought into an action.

        Physically there is just an impulse. The problem can be stated like this: is the whole something different from its parts? If we are talking about macroscopic things, it definitely isn’t, though some quantum effects may be interpreted in a way that suggests it is.

        If the whole is physically no different from its parts, and yet a machine can be conscious, then obviously its parts are also conscious. Just as with mass or electric charge: the whole thing has a mass that is the sum of the masses of its elementary particles.

        It seems Dr. Koene thinks about mind in different terms, and I can’t fully grasp it. If mind is a physical phenomenon, then its software emulation is something completely different from it. And if mind is not a physical phenomenon, how can it have a physical substrate?

        • The way I see it, mind is an emergent property of the physical matter that makes up the brain. It is a Whole that can only exist if its parts – neurons, synapses, possibly molecules or even atoms – are in a certain configuration. It is by no means necessary that the parts exhibit the same property, quite the contrary.

          To illustrate, consider a water molecule on its own. You can’t claim that this molecule is liquid (or in any other state, for that matter), since it takes many molecules together to form something with the usual properties of water.

          So when we learn which parts, which functions of a brain it takes to give rise to consciousness, we can attempt to rewrite these parts and functions in a different code – in silicon and bits, for example, instead of neurons and spikes. If our subset of functions was chosen well, then the new version should exhibit the same properties as the old one, namely, be conscious.

          • The thing is that you arbitrarily choose to look at a number of molecules from a distance. It is not that a new property appears; it is the way we see arbitrarily grouped elementary particles interacting. You end up with a recursive explanation: consciousness is what somebody conscious will see if he or she invents and chooses the proper level of description. The brain is made of the same matter and should not be able to feel or think anything, because it involves the same physical reactions (though we don’t actually know this for sure yet) that happen everywhere else. I like this quotation from Erwin Schrödinger:

            “The sensation of colour cannot be accounted for by the physicist’s objective picture of light-waves. Could the physiologist account for it, if he had fuller knowledge than he has of the processes in the retina and the nervous processes set up by them in the optical nerve bundles and in the brain? I do not think so.”

            I read some neuroscientists, and it looks like Schrödinger was totally right.

            >give rise to consciousness, we can attempt to rewrite these parts and functions in a different code

            Definitely there is a lot of processing in the brain, and the sooner we study it the better. I just don’t see how that can make a piece of matter feel or think anything. Again, you can write “I’m conscious” on paper directly, or you could devise a complicated algorithm and get the same words after some time. I don’t see what has changed except the number of steps it takes to get the words written.

            The way we interpret behaviour of machines will affect how we will treat them. So it is practical question.

      • Yeah, it seems that robots are going to learn from each other, at least in the near future, not by sharing state directly but rather by using something similar to language. That’s where RoboEarth is heading.

  2. @Victoria re “I too am curious… I’d love another salsa partner!”

    Great comment, especially “I simply do not see the advance of machine intelligence as any obstacle to human survival any more than having a child is. I love children. They can be naughty, but then again everyone is at one time or another. They are still human. They exceed us as we exceeded our parents.”

  3. @Victoria re “Why do we look to technology and computers to make our lives better instead of each other?”

    We look to technology and computers AND each other to make our lives better. Also to music, literature, beer, the sea, and [insert whatever makes you happy]. Things don’t need to be black and white, I consider ASIM as _one of_ the things that can make our lives better.

    Re “Why do we think that we can unmake a cake, pull out the flour, and make a new one that has a longer shelf-life? How will that one be better?”

    Because we can. There is no guarantee that the new cake will be better, but it CAN be better, depending on the new flour that we put in, and many other factors including shelf life. Other factors being equal, I prefer a cake with a longer shelf life.

    • O, how I love cake. 🙂
      My intent is not to frame my arguments in black and white terms. 18 years of engineering has taught me the hazards of that!
      Like you, I would gladly take on a longer life in a healthier body. I know of very few who wouldn’t. Perhaps it is a caretaker instinct, but personally I would only take on that better form for the sake of my community and those who depend on me. I am contented in music, dance, creation and study. 100 years or 1000, I have tasted and enjoyed love and pain.
      If I could serve the giant herd of humanity by living 1000 years, then I would do it. Fear of death won’t move me an inch even though I am not religious.

  4. Have we become so contemptuous of our own flesh that we have forgotten that we grow out of this world as forces of nature with as much power and beauty as the winds and stars themselves? We are not poor spirits awaiting an afterlife. We are not bags of flesh awaiting pointless death. Why do we wish to abandon the body?

    Sickness, injury, and death strike fear into a creature without purpose or understanding. Yes. Life is hard. Why do we look to technology and computers to make our lives better instead of each other? Why do we think that we can unmake a cake, pull out the flour, and make a new one that has a longer shelf-life? How will that one be better?

    Age does not bring wisdom by itself. We do not have all of the answers. A computer-copy of us will be no more “superhuman” than a well-programmed computer.

    Trans-humanism? We have not even embraced Humanism!

    • Short answers: Because we can, and because we’re inherently curious.

      Longer answer with my personal opinion and bias:
      ASIM may sound to you like copying in the derisive sense, but consider that it must not necessarily be meant as such. ASIM can be many more things: The struggle to survive against the inevitably approaching non-human general intelligences that AGI research will bring to us, whether we all approve or not. The desire to learn more about ourselves. The desire to extend the beautiful and vast design of our own minds, to bring about and experience cognition and intelligence unlike any this world has seen before.
      Or it could simply be the wish for evolution to keep doing what it has been doing for hundreds of millions of years. I, for one, cannot accept any viewpoint that says we’re the pinnacle, we’re at the end. The only way we could be is if all life were eradicated! So there has to be gradual change, and since mankind has been on the technology trip for a couple thousand years, why not follow through with it? Wonderful things will ensue, and if you just look at them from the right angle, you’ll notice they’re not even nearly unnatural. In fact, it’s just evolution taken to a new level – like so many times before.

      • I too am curious. I look forward each day to the wonders that my mind and hands create. I am not a Luddite by any sense. I also do not see the technology as a cold box of machine parts that is not me. For me, each program and device I create is a “natural” as any flower or rock.

        I have worked in the AI field for 16 years. I have seen and built wonders. Every one of those wonders was an extension of the people who created them. We are not even close to becoming obsolete.

        Oranges do not grow from apple trees. Why would something non-human grow from our technology that has grown from us? Why do we fear being out-smarted? I simply do not see the advance of machine intelligence as any obstacle to human survival any more than having a child is. I love children. They can be naughty, but then again everyone is at one time or another. They are still human. They exceed us as we exceeded our parents.

        Like all life, we will evolve. We are not the end, nor the beginning, but we are far nobler than we give ourselves credit for. So are our creations.

        When the machines awaken to understand their creators, I’ll be there with my dancing shoes on. I’d love another salsa partner!

        • Victoria,

          Your comments are very inspiring, and I’m a little ashamed of the way I responded at first, preaching instead of arguing (I should’ve known better, given where we are). That said, however, I’d like to just pick out the point you are making about AI vs. our own children.

          I do see a difference between the two, and it’s one that makes me glad there are researchers like Randal Koene who do what they do. If (or rather, when) human level AGI is achieved, it will be hosted on a vastly more scalable platform than what we are working with. It will likely be extendable, and if so, I have no doubts that human (or artificial) creativity will find ways to make the AGI surpass our intelligence fairly quickly. Whether that be in a Kurzweilian singularity or in a more gradual form has no bearing on the result, which is that biological humans are left in second place (or further behind, depending on how you count). Surely, they’ll be human in origin just like any of our technology, yet somehow, to me it feels at least vaguely menacing to human survival.

          In contrast, our children are still completely human, they have the same potential and the same limitations we do. I’m not sure if the argument counts in light of SIM seen as a likely future, but I feel more connected to children than to any AGI I can imagine because they’re of our flesh and blood. (I expect that will change as AGIs actually reach our level of intelligence and emotion, in which case my argument will be rendered as moot as yours, I guess. Or is that the point you are making, that any human-made AGI will be apples from the apple tree? Now I feel out-smarted, though I’m unsure who’s to blame.)

          Realistically, though, I don’t see any reason to be afraid of “the machines taking over” or suchlike. I think the creation of a broad AGI, if at all possible, is inevitable, but so is the development of SIM. Both fields profit from each other, possibly even need each other to succeed; in that sense, I’m confident that the human apple tree will eventually bear fruit that is not exactly apple-y – extraordinarily tasty apples, maybe.

    • Victoria, I clearly read from your comment that you are not a Luddite at all. But your well-intended complaint does contain some unwarranted assumptions. You mention looking to each other instead of technology to improve our lives. It is not just that those two are not in conflict, rather, they are inseparable in human society – it is really what defines us, the tool makers. Our society and what we enjoy in it revolves around technology all the time. Just think, if we removed technology from the equation, we would literally have to return to a time before the stone-age. We would have to live like animals that do not make tools.

      It is interesting that you say a computer-copy of us will be no more superhuman than a well-programmed computer, since we already know that computers can accomplish superhuman feats… such as instantaneously scouring the internet for bits of knowledge, precisely computing mathematical results, or making untold perfect copies of valuable information.

      I’m not sure why you are talking about abandoning the body. SIM is obviously not about having “no body”, rather it is about being able to exist in many substrates. It is about having a choice of many bodies – the present one included (when that makes sense; human bodies don’t make much sense in the vacuum of the moon, for example).

      • I should clarify myself on a few points. You are correct. My argument was incomplete. I’ll be thinking about how to answer for a few days. Thank you!
        The “superhuman” I was referring to was the one postulated by Nietzsche. The one who “crosses over”. I can’t be a mountain, but the mountain is very good at being a mountain. I can’t be a computer, but they are very good at computing! There is nothing degrading about being what you are even though there is honor (indeed an imperative) in aspiring to extending yourself. I see too many people (especially cyber-punk enthusiasts) with contempt and disregard for the meat they are stuck in. I do hope that the move from one substrate to another would teach some appreciation for the delicate nature of all life. Carbon or silicon alike.
        On the matter of abandoning the body, my question was one of motivation. I have seen three views on the nature of the mind-body connection. The first is that we are purely spiritual and we will shed this shell for a glorified one in heaven. The second is that the mind is just chemicals and this is all just meat and we’re already as good as dead. The third is that the mind-body is merely a distinction without a difference just as the body-environment is connected. In this case, nothing really “dies”. It merely changes name and form. I digress…
        Of course we have not always had the form that we have now and we will not have this one for long. I think that we will evolve, but not in any purely directed manner.
        I simply hope that everyone looks at the move to another substrate as an enhancement of good-to-better and not bad-to-acceptable.

    • It’s not the flour we want to transfer – but the love the cake inspires, it did not exist in the ingredients – but it came about during the bake. I disagree, some of us are bags of flesh that hope to sidestep death towards an existence without the betrayal of flesh. These bodies are fragile, and often fraught with inborn flaws that can neither be corrected nor overlooked. We are not abandoning our bodies, we are working toward surviving the time when our bodies abandon us.

      • Excellent points.
        We can never correct all of the flaws. We will exchange one set for another. I am fine with that. So long as we do not lose our identity or believe ourselves completely perfect or completely imperfect, we will have a chance.
        I do hope that the hands of the baker are given as much attention as the mind of the baker when it comes time to baking the baker up.

    • Victoria,

      Our personalities run algorithms based on received data input. Just because the current state of computing architecture has not yet rendered a set of algorithms that behave as we do does not mean we won’t be able to soon.

      As far as a SIM not being superhuman: I challenge you to pull chunks of your brain out (the ones that hold your memories), copy them, place them somewhere safe, and then stick the white and grey matter back inside your skull.

      Can’t do it you say?

      Well, a SIM can do it. And a SIM can image itself to create a snapshot in time, to be used later for self-development analysis.

      I’d pretty much call a SIM superhuman.
