First, just to get the question out of the way — why the shift in terminology from “mind uploading” to “advancing substrate independent minds” (ASIM)? Is it just the same idea in new clothing or is there a conceptual shift also?
There are multiple reasons for the shift in terminology. The first reason was a confusion, at least on the part of the uninitiated, about what mind uploading means. There’s uploading, downloading, offloading. With respect to memory, I’ve heard all three used before. But what’s worse, uploading is normally associated only with taking some data and putting it somewhere else for storage. It does not imply that you do anything else with it, while the most interesting part about the objective is not just to copy data, but to re-implement the processes. The goal is to emulate the operations that have to be carried out to have a functioning mind.
The second important reason was that mind uploading does not clearly say anything about our objective. Mind uploading is a process, and that is exactly what it should stand for and how we use that term: Mind uploading is the process of transfer, a process by which that which constitutes a specific mind is transferred from one substrate (e.g. the biological brain) to another (e.g. a silicon brain).
There was also a slight conceptual shift involved in the adoption of the new Substrate-Independent Minds terminology. Or, if there was not a conceptual shift then the perceived shift can be blamed even more squarely on the original terminology. 😉
Elaborate a bit?
Consider this: When you have accomplished a transfer, i.e. a mind upload, then that mind is no longer in its original substrate. The mind could be operating on any sufficiently powerful computational substrate. In that sense, it has become substrate-independent. You managed to gain complete access and to transfer all of its relevant data. That is why we call such minds Substrate-Independent Minds. Of course, they still depend on a substrate to run, but you can switch substrates. So what do you do with a substrate-independent mind? Our interests lie not just in self-preservation or life-extension. We are especially interested in enhancement, competitiveness, and adaptability. I tried to describe that in my article Pattern survival versus Gene survival, which was published on Kurzweil AI.
Furthermore, we are not only interested in minds that are literal copies of a human mind, but have a strong interest in man-machine merger. That means other-than-human substrate-independent minds also fall within the scope of SIM.
Ah, for instance AGI minds that are created from the get-go to be substrate independent, rather than being uploaded from some originally substrate-dependent form. Yes, I’ve got plans to create some of those!
Right. Admittedly though, the current emphasis is heavily tilted towards the initial challenge of achieving sufficient access to the biological human mind.
We have been using the term ASIM on the carboncopies.org website, because we wanted to make it very clear that the organization is action-oriented and not a discussion parlor.
The “A” is for “Advancing”
And SIM is the usual abbreviation of Substrate-Independent Minds.
So that’s a very, very broad concept – encompassing some current neuroscience research and also some pretty far-out thinking.
Indeed, there are many suggestions about how SIM might be accomplished. Those include approaches that depend on brain-machine interfaces, Terasem-like personality capture, etc. I imagine that most of those approaches can be quite useful, and many will bump into a number of shared challenges that any route to SIM has to overcome. But many approaches are also mired in philosophical uncertainty. For example, if you did live with a BMI for decades and relied more and more heavily on machine parts, would it be fair to say that you would lose hardly anything if the biological brain at its origin ceased to function? That is the hope of some, but it seems a little bit like saying that society depends so much on machines that if all the people disappeared it would not matter. Similarly, if we capture my personality through careful description and recording of video and audio, at which point could I truly say that I had been transferred to another substrate? How does such an approach differ substantially from an autobiography?
Given those philosophical uncertainties, nearly everyone who is interested in SIM and has been working seriously in the field for a while is presently investigating a much more conservative approach. Those people include not only Ken Hayworth and Peter Passaro, but also well-known project leads who have an interest in emulating the functions of brain circuitry, such as Henry Markram, Jeff Lichtman and Ed Boyden. The most conservative approach is a faithful re-implementation of a large amount of detail that resides in the neuroanatomy and neurophysiology of the brain. The questions encountered there are largely about how much of the brain you need to include, how great the resolution of the re-implementation needs to be (neural ensembles, spiking neurons, morphologically correct neurons, molecular processes), and how you can acquire all that data at such scale and resolution from a biological brain. That conservative implementation of SIM is what I called Whole Brain Emulation many years ago. That term stuck, though it is sometimes abbreviated to “brain emulation” when the scope of a project is not quite as ambitious. [Editor's note: see the Whole Brain Emulation Roadmap produced at Oxford University in 2007, based on a meeting involving Randal Koene and others.]
So — in the vein of the relatively near-term and practical, what developments in the last 10 years do you think have moved us substantially closer to achieving substrate-independent human minds?
There are five main developments in the last decade that I would consider very significant in terms of making substrate-independent minds a feasible project.
The first is a development with its own independent drivers, namely advances in computing hardware, first in terms of memory and processor speeds, and now increasingly in terms of parallel computing capabilities. Parallel computation is a natural fit to neural computation. As such, it is essential both for the acquisition and analysis of data from the brain, as well as for the re-implementation of functions of mind. The natural platform for the implementation of a whole brain emulation is a parallel one, perhaps even a neuromorphic computing platform.
The second major development is advances in large-scale neuroinformatics. That is, computational neuroscience with a focus on increasing levels of modeling detail and increasing scale of modeled structures and networks. There have been natural developments there, driven by individual projects (e.g. the Blue Brain Project), but also organized advances such as those spearheaded by the INCF. Any project towards substrate-independent minds will obviously depend on a large and rigorous means of representation and implementation with a strong resemblance to modeling in computational neuroscience.
Third, actual recording from the brain is finally beginning to address the dual problems of scale and high resolution. We see this both in the move towards ever larger numbers of recording electrodes (see for example Ed Boyden’s work on arrays with tens of thousands of recording channels), and in the development of radically novel means of access during the last decade. A much celebrated radical development is optogenetics, the ability to introduce light-gated ion channels into the membranes of specific cell types, so that those neurons can be excited or inhibited by different wavelengths of light stimulation. It is technology such as this, combining electro-optical and biological innovation, that looks likely to make similar inroads on the large-scale, high-resolution neural recording side. Artificial or synthetic biology may be the avenues that first take us towards feasible nanotechnology, which has long been hailed as the ultimate route to mind uploading and substrate-independent minds (e.g. recall the so-called Moravec procedure).
Prototype Automatic Tape-Collecting Lathe Ultramicrotome (ATLUM) at the Lichtman lab at Harvard University
A fourth important type of development has been in the form of projects that are aimed specifically at accomplishing the conservative route to SIM that is known as whole brain emulation. There we see tool development projects, such as at least three prototype versions of the Automatic Tape-Collecting Lathe Ultramicrotome (ATLUM) developed at the Lichtman lab at Harvard University. The ATLUM exists solely because Ken Hayworth wants to use it to acquire the full neuroanatomy, the complete connectome of individual brains at 5nm resolution for reconstruction into a whole brain emulation. There are a handful of projects of this kind, driven by individual researchers who are part of our network that actively pursues whole brain emulation.
The fifth development during the last decade is somewhat different, but very important. It is a conceptual shift in thinking about substrate-independent minds, whole brain emulation and mind uploading. Ten years ago, I could not have visited leading mainstream researchers in neuroscience, neural engineering, computer science, nanotechnology and related fields to discuss projects in brain emulation. It was beyond the scope of reasonable scientific endeavor, the domain of science fiction. This is no longer true.
Yes, that’s an acute observation. I’ve noticed the same with AGI, as you know. The AI field started out 50 years ago focused on human-level thinking machines with learning and self-organization and all that great stuff, but when I got my PhD in the late 80s, I didn’t even consider AI as a discipline, because it was so totally clear the AI field was interested only in narrow problem-solving or rigid logic-based or rule-based approaches. I was interested in AGI more than pure mathematics, but I chose to study math anyway, figuring this would give me background knowledge I would be able to use in my own work on AGI later on. Even 10 years ago it was hard to discuss human-level AI – let alone superhuman thinking machines – at a professional AI conference without someone laughing at you. Then 5 to 8 years ago, occasional brief workshops or special tracks relevant to human-level AI or AGI started to turn up at AAAI and IEEE conferences. And then I started the AGI conference series in 2006, and it’s flourished pretty well, drawing in not only young researchers but also a lot of old-timers who were interested in AGI all along but afraid to admit it for fear of career suicide.
Yes, the situations are fairly similar – and related, because AGI and SIM share a lot of conceptual ground.
And I guess the situations are correlated on a broader level, also. As science and technology accelerate, more and more people – in and out of science – are getting used to the notion that ideas traditionally considered science fiction are gradually becoming realities. But academia shifts slowly. The “academic freedom” provided by the tenure system is counterbalanced, to an extent, by the overwhelming conservatism of so many academic fields, and the tendency of academics to converge on particular perspectives and ostracize those who diverge too far. Of course, a lot of incredibly wonderful work comes out of academia nonetheless, and I’m grateful it exists – but academia has a certain tendency toward conservatism that has to be pushed back against.
In fact I’m still mildly wary to list the names of ASIM-supportive scientists in publications – though I can if pressed. It can still be a slightly risky career move to associate oneself with such ideas, especially for a young researcher. But even so, it is now evident that a significant number of leading scientists, lab PIs, and researchers celebrated in their fields do see brain emulation and even substrate-independent minds as real and feasible goals for research and technology development. I can honestly say that this year, I am in such a conversation, with specific and potentially collaborative aims and within the context of SIM, on a bi-weekly basis. That development is quite novel and very promising.
So, let’s name some names. Who do you think are some current researchers doing really interesting work pushing directly toward substrate-independent human minds?
Knife-edge scanning microscope, from Texas A&M
I will try to keep this relatively short, so only one or two sentences for each:
- Ken Hayworth and Jeff Lichtman (Harvard) are the guiding forces behind the development of the ATLUM, and of course Jeff also has developed the useful Brainbow technique.
- Winfried Denk (Max-Planck) and Sebastian Seung (MIT) popularized the search for the human connectome and continue to push its acquisition, representation and simulations based on reconstructions forward, including recent publications in Science.
- Ed Boyden (MIT) is one of the pioneers of optogenetics, a driver of tool development in neural engineering (including novel recording arrays), and a strong proponent of brain emulation.
- George Church (Harvard), previously best known for his work in genomics, has entered the field of brain science with a keen interest in developing high-resolution large-scale neural recording and interfacing technology. Based on recent conversation, it is my belief that he and his lab will soon become important innovators in the field.
- Peter Passaro (Sussex) is a driven researcher with the personal goal to achieve whole brain emulation. He is doing so by developing means for functional recording and representation that are influenced by the work of Chris Eliasmith (Waterloo).
- Yoonsuck Choe (Texas A&M) and Todd Huffman (3Scan) continue to improve the Knife-Edge Scanning Microscope (KESM), which was developed by the late Bruce McCormick with the specific aim of acquiring structural data from whole brains. The technology operates at a lower resolution than the ATLUM, but is presently able to handle acquisition at the scale of a whole embedded mouse brain.
- Henry Markram (EPFL) has publicly stated his aim of constructing a functional simulation of a whole cortex, using his Blue Brain approach, which is based on statistical reconstruction from data obtained in studies conducted in many different (mostly rat) brains. Without a tool such as the ATLUM, the Blue Brain Project will not develop a whole brain emulation in the truest sense, but the representational capabilities, functional verification and functional simulations that the project produces can be valuable contributions towards substrate-independent minds.
- Ted Berger (USC) was the first to develop a cognitive neural prosthetic. His prosthetic hippocampal CA3 replacement is small and has many limitations, but the work forces researchers to confront the actual challenges of functional interfacing within core circuitry of the brain.
- David Dalrymple (MIT/Harvard) is commencing a project to reconstruct the functional and subject-specific neural networks of the nematode C. elegans. He is doing so to test a very specific hypothesis relevant to SIM, namely whether data acquisition and reimplementation can be successful without needing to go to the molecular level.
Well, great, thanks. Following those links should keep our readers busy for a while!
Following up on that, I wonder if you could say something about what you think are the most important research directions for SIM and WBE right now? What do the fields really need, to move forward quickly and compellingly?
Overall, I think that what SIM needs the most at this time are:
1. To show convincingly that SIM is fundamentally possible.
2. To create the tools that make SIM possible and feasible.
3. To have granular steps towards SIM that are themselves interesting and profitable for entrepreneurial and competitive effort.
The first point is being addressed, as noted above. Points 2 and 3 are closely related, because the tools that we need to achieve SIM can be built and improved upon in a manner where their successive capabilities enable valuable innovations. Carboncopies.org will be organizing a workshop aimed specifically at that question in the fall.
I think that technologies able to represent and carry out in parallel the functions needed by a substrate-independent mind are highly valuable and may require improvements in neuromorphic hardware or beyond. And yet, my main concerns at this stage are at the level of acquisition, the analysis of the brain. Any technology that leads to greater acquisition at large scale and high resolution will have enormous potential to lead us further toward substrate-independent minds. That is why I pointed out large scale recording arrays and mixed opto-electronic/bio approaches such as optogenetics above. If we were to attempt whole brain emulation at a scale beyond the nematode at this time, then a structure-only approach, with attempted mapping from structure to function, using the ATLUM would be the best bet.
Stepping back a bit from the nitty-gritty, I wonder what’s your take on consciousness? It’s a topic that often comes up in discussions of mind-uploading — er, ASIM. You know the classic question. If you copied your brain at the molecular level into a digital substrate, and the dynamics of the copy were very similar to the original, and the functional behavior of the copy were very similar to the original — would you consider that the copy genuinely was YOU? Would it have your consciousness, in the same sense that the YOU who wakes up in the morning has the same consciousness as the YOU who went to sleep the previous night? Or would it be more like your identical twin?
You are really asking three questions here. You are asking what I consider sufficient for consciousness to be an emergent property of the reimplemented mind. You are also asking if I would consider a reimplementation to be equal to myself, to be myself. And you are asking if this would be me in specific circumstances, for example if there were multiple copies (e.g. an original brain and a copy) that existed and experienced things at the same time. I will try to answer each of these questions.
I personally believe that consciousness is not a binary thing, but a measure. It is like asking if something is hot. Hot relative to what? How hot? There should be measurable degrees of consciousness, at least in relative terms. I believe that consciousness is a property of a certain arrangement of a thinking entity, which is able to be aware to some degree of its own place in its model of the world and the experiences therein. I do think that such consciousness is entirely emergent from the mechanisms that generate thought. If the necessary functions of mind are present then so is consciousness. Does this require molecular-level acquisition and reimplementation? At this stage, I don’t know.
If a copy of my mind were instantiated as you described and it told me that it was self-aware, then I would tend to believe it as much as I would believe any other person. I would also be inclined to believe that said copy was to any outside observer and to the world at large as much an implementation of myself as my original biological implementation (assuming that it was not severely limited mentally or physically).
Is such a copy indeed me? Is it a continuation of myself to the extent where a loss of my original biological self would imply no significant loss at all? That is where things become philosophically tricky. Those who argue that personal identity and the sense of self-continuity are illusions would probably argue that the means by which a copy was created and whether that copy was the only thing to continue to exist are irrelevant. They would be satisfied that self-continuity was achieved as much in such a case as it is when we wake up in the morning, or even from moment to moment. On the other end of the spectrum, you have those who would argue, even in a quantized universe, that the arrangement of instantiations from one quantum state to its adjacent quantum states has some implication for real or perceived continuity. In that case, it can matter whether your brain was sliced and scanned by an ATLUM prior to reimplementation and reactivation, or if a transition was arranged through large-scale neural interfaces and a gradual replacement of the function of one neuron at a time. I have often described myself as a fence-sitter on this issue, and I still am. If you confront me with one of the two perspectives then I will argue the opposite one, as I see and personally sense relevance in both positions. If my stance in this matter does not change then it will have implications about the sort of uploading procedure that I would find cautiously satisfactory. In such a case, a procedure that assumes a problem of self-continuity would be the safe choice, as it would satisfy both philosophical stances.
Of course, even with such a procedure, or in the event that there is no self-continuity problem, you can still end up with something akin to the identical twin situation. You could have two initially linked and synchronous implementations of a mind and identity. You could then sever the synchronizing connection and allow each to experience different events. The two would legitimately have been identical to begin with, but would gradually become more different. That is not really a problem, but rather an interesting possibility.
Yes, I tend to agree with your views.
But just to return to the gradual uploading scenario for a moment – say, where you move yourself from your brain to a digital substrate neuron by neuron. What do you think the process would feel like? Would it feel like gradually dying, or would you not feel anything at all… or some more subtle shift? Of course none of us really knows, but what’s your gut feeling?
What would it feel like? Do you currently feel it when a bunch of your neurons die or when your neural pathways change? The gradual process is something we are very much accustomed to. You are not the same person that you were when you were 5 years old, but there is a sense of continuity that you are satisfied with. Our minds have a way of making sense out of any situation that they are confronted with, as for example in dreams. Assuming that a gradual process were sufficiently gradual and did not in effect seem like a series of traumatic brain injuries, I don’t think that you would feel at all strange.
Well, I’m certainly curious to find out if you’re right! Maybe in a couple decades…
Moving on – I wonder, what do you see as the intersection between AGI (my own main research area) and substrate-independent human minds, moving forward? Of course, once a human mind has been ported to a digital substrate, we’ll be able to study that human mind and learn a lot about intelligent systems that way, including probably a lot about how to build AI systems varying on the human mind and perhaps more generally intelligent than the human mind. (At least, that’s my view, please say so if you disagree!). But I’m wondering about your intuition regarding how general will be the lessons learned from digital versions of human minds. One point of view (let’s call it A) could be that human minds are just one little specialized corner of mind-space, so that studying digital versions of human minds won’t tell us that much about how to make vastly superhuman intelligence, or even about how to make nonhumanlike (but roughly human level) general intelligences. Another point of view (let’s call it B) could be that the human mind embodies the main secrets of general intelligence under limited computational resources, so that studying digital versions of human minds would basically tell us how general intelligence works. Do you lean toward A or B or some other view? Any thoughts in this general direction?
I think I lean a little bit towards A and a little bit towards B. I think that the human mind probably is very specialized, given that it evolved to deal with a very specialized set of circumstances. I look forward to being able to explore beyond those constraints. At the same time, I think that much of the most interesting work in AI so far has been inspired directly or indirectly by things we have discovered about thinking carried out in biological brains. I don’t think we have reached the limits of that exploration or that exploring further within that domain would impose serious constraints on AGI. It is also true that much of what we think of when we think of AGI are in fact capabilities that are demonstrably within the bounds of what human intelligence is capable of. It seems that we would be quite happy to devise machines that are more easily able to interact with us in domains that we care about. So, from that point of view also, I see no problem with learning from the human mind when seeking to create artificial general intelligence. The existence of the human mind and its capabilities provides a reassuring ground-truth to AGI research.
Actually, I believe that there are many more areas of overlap between AGI and SIM research. In fact, they are close kin. It is not just that an AGI is a SIM and a SIM an AGI, but also that the steps needed to advance toward either include a lot of common ground. That effort, those steps required, will lead to insights, procedures, tools and spin-offs that impact both fields. The routes to AGI and to SIM are ones with many milestones. It is no coincidence that many of the same researchers who are active in one of the two fields are also active or strongly interested in the other.
Your latter point is an issue I’m unclear about, as an AGI researcher. I would suppose there are some approaches to AGI that are closely related to human cognition and neuroscience, and these have obvious close relatedness with WBE and human-oriented SIM. On the other hand, there may be other approaches to AGI owing relatively little to the human mind/brain – then these would relate to SIM broadly speaking, but not particularly to WBE or human-oriented SIM, right? So, partly, the relationship between AGI and SIM will depend on the specific trajectory AGI follows. My own approach to AGI is heavily based on human cognitive science and only loosely inspired by neuroscience, so I feel like its relatedness with WBE and human-oriented SIM is nontrivial but also not that strong. On the other hand, some folks like Demis Hassabis are pursuing AGI in a manner far more closely inspired by neuroscience, so there the intersection with WBE and human-oriented SIM is more marked.
Next, what about carboncopies.org? It seems an important and interesting effort, and I’m wondering how far you intend to push it. You’ve done workshops – do you envision a whole dedicated conference on ASIM at some point? Do you think ASIM is taken seriously enough that a lot of good neuroscientists and scientists from other allied fields would come to such a conference?
Carboncopies.org has four main reasons for existence:
1. To explain the fundamentals of SIM, demonstrating its scientific basis and feasibility.
2. To create and support a human network, especially of those who can together bring about SIM. Among those, to facilitate multi-disciplinary for-profit and non-profit projects.
3. To create and maintain a roadmap, relying in part on access enabled through affiliation with Halcyon and such, but also relying in part on its independence as a non-profit.
4. To provide and maintain a public face for the field of SIM. That includes outreach activities such as publications and meetings.
SIM is obviously a multi-disciplinary field, so one of the challenges is to carry out points 1-4 across the whole spectrum, in what our co-founder has called a polymath and initially technology-agnostic approach.
We spent some time and effort on point 4 during the past year, but it looks like some of that activity may be well taken care of by others at present (e.g. Giulio Prisco and his teleXLR8 project). For this reason, our near future efforts concentrate more strongly on points 1 to 3. For example, we will ask a select group of experts to help us address the question of ventures and entrepreneurial opportunities in the area of SIM.
But yes, I do believe that at this point SIM has gained enough traction and feasibility that good scientists such as those mentioned above are interested in our efforts. We have confirmed the intention of several to participate in the expert sessions.
OK, next I have to ask the timing question. (I hate the question as a researcher, but as an interviewer it’s inevitable.) When do you think we will see the first human mind embodied in a digital substrate? Ray Kurzweil places it around 2029, do you think that’s realistic? If not what’s your best guess?
And how much do you think progress toward this goal would be accelerated if the world were to put massive funding into the effort, say tens of billions of dollars reasonably intelligently deployed?
I work for Halcyon. 🙂 …. Therefore, I cannot give an unbiased estimate of the time to SIM or the acceleration that large-scale funding would provide. Achieving such funding is an explicit aim of mine, and the acceleration should be such that SIM is achieved within our lifetimes. The reason why carboncopies.org exists is also to ensure a roadmap that lays the groundwork for several possible routes to SIM. The preferred routes are those that are achievable in a shorter period of time, and they will require the attraction and allocation of greater resources and effort. Perhaps 2029 is optimistic, but I would hope that SIM can be achieved not long after that.
Note that it is also a bit difficult to say exactly when SIM is achieved, unless SIM were achievable only by means of a Manhattan-style project with no significant intermediary results until a human brain is finally uploaded into a SIM. I think that is an unlikely approach. It is much more likely that there will be stages and degrees of SIM, from a gradual increase in our dependence on brain enhancements to eventual total re-implementation.
Ah, that’s a good point. So what do you think are some of the most important intermediate milestones on the path between here and a digitally embodied human mind? Are there any milestones so that, when they’re reached, you’ll personally feel confident that we’re on the “golden path” technologically to achieving digitally-embodied human minds?
We are already on the golden path. 🙂
One of the most important milestones is to establish a well-funded organization that is serious about supporting projects that address challenges that must be overcome to make SIM possible. The inception of such organizations is now a fact, though their maturation to the point where the necessary support is made available is still a few years away.
Technologically, the most important milestones are:
- Tools that allow us to acquire brain data at high resolution (at least individual neurons) and at large scale (entire regions and multiple modal areas of a brain), and to do so functionally and structurally. There are smaller milestones as these tools are developed, and we will notice those as they enable valuable spin-off creations.
- Platforms that enable brain-size processing at scales and rates at least equivalent to those required for the human experience. Again, the development of those platforms brings with it obvious signs of progress as their early versions can be used to implement previously impossible or impractical applications.
Applications of brain-machine interfaces straddle both development areas and may be indicators or milestones that demonstrate advances.
Hmmm. Another question I feel I have to ask is: what about the quantum issue? A handful of scientists (Stu Hameroff among the most vocal) believe digital computers won’t be adequate to embody human minds, and that some sort of quantum computer (or maybe even something more exotic like a “quantum gravity computer”) will be necessary. And of course there are real quantum computing technologies coming into play these days, like D-Wave’s technology, with which I know you’re quite familiar.
And there is some evidence that quantum coherence is involved in some biological processes relevant to some animals’ neural processes, e.g. the magnetic field based navigation of birds. What’s your response to this? Will classical digital computers be enough? And if not, how much will that affect the creation of digitally embodied human minds, both the viability and the timeline?
I don’t personally care much about debates that focus on digital computing platforms. I am not trying to prove a point about what digital computers can do. I am trying to make functions of the mind substrate-independent, i.e. to make it possible to move them to other processing substrates, whatever those may be.
Are there platforms that can compute functions of mind? Yes. We have one such platform between our ears.
Can other platforms carry out the computations? Can they carry out ALL of the processing in the same space and in the same time that it takes to carry them out in the original substrate? No.
I think it is probably impossible to do better than elementary particles already do at doing what they naturally do, and all that emerges from that. But do we care?
I would say that we do not.
By analogy, let us consider the emulation of one computing platform (a Macintosh) on another platform (a PC). Do we care if we can reproduce and emulate all of the aspects of the Mac platform, such as its precise pattern of electrical consumption, or the manner and rate at which a specific MacBook heats up portions of its environment? We really don’t. All we care about is whether the programs that we run on the Mac also produce the same results when run on a Mac emulator on the PC. In the same sense, there are many levels at which the precise emulation of a brain is quite irrelevant to our intentions.
The interesting question is exactly where to draw the lines as we choose to re-implement structure and function above a certain level and to regard everything below that level as a black box for which specific transformations of input to output must be accomplished. What if some aspects do require quantum coherence? In that case one must decide if the particular feature that depends on quantum coherence is one that we care to re-implement, one that is essential to our intentions. If so, then a platform capable of the re-implementation will be needed. Would that change the time-line to SIM? Possibly, or possibly not if D-Wave makes rapid progress! 😉
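The line-drawing idea above can be sketched concretely in code (a purely hypothetical illustration, not any lab's actual model; every name and parameter value here is invented). Suppose we draw the line at spiking neurons: each neuron is then represented only by its input-to-output transformation, and all sub-threshold machinery, ion channels, molecules, and the rest, is collapsed into one black-box update rule.

```python
# Illustrative sketch of emulation at a chosen abstraction level.
# Everything below the chosen level (ion channels, molecular detail)
# is a "black box" summarized by a single input->output transformation.
# Class name, threshold, and leak values are hypothetical.

class LeakyIntegrateFireNeuron:
    """A neuron reduced to its input-output transformation."""

    def __init__(self, threshold=1.0, leak=0.9):
        self.threshold = threshold  # spike when potential reaches this
        self.leak = leak            # fraction of potential retained per step
        self.potential = 0.0

    def step(self, input_current):
        # The black box: this one update rule stands in for all the
        # lower-level detail we chose not to re-implement.
        self.potential = self.potential * self.leak + input_current
        if self.potential >= self.threshold:
            self.potential = 0.0  # reset after a spike
            return 1              # spike emitted
        return 0                  # no spike

neuron = LeakyIntegrateFireNeuron()
spikes = [neuron.step(0.4) for _ in range(10)]
print(spikes)  # -> [0, 0, 1, 0, 0, 1, 0, 0, 1, 0]
```

At this level of description, two implementations count as equivalent if they produce the same spike trains for the same inputs, regardless of what happens inside the box; choosing a deeper line (morphology, molecules) simply replaces the black box with a more detailed one.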
Personally, I am quite interested in the possibilities that quantum computing may provide, even if it is unnecessary for a satisfactory SIM. That is, because I am interested in enhancements that take us beyond our current mental capabilities. It is nice to be able to use a quantum computer, but it may be even more interesting to experience quantum computation as an integral part of being, just as we experience neuromorphic parallel computation.
Thanks a lot Randal, for the wonderfully detailed answers. I look forward to my upload interviewing your upload sometime in the future!