Porting Digital Memory

As human neural networks and electronic digital networks converge, there is some debate over how best to move data from neural to digital formats. Since the human brain is equipped with high-resolution sensory organs, there are many obvious routes for digital-to-neural input, but organic systems lack sufficient neural outputs for porting thoughts, memories, and dreams directly to digital memory. A variety of methods have been pioneered to capture digital thought, including embedded sensor wires, embedded sensor grids, and non-invasive dermal sensors. Initial success with neural interfaces indicates that progress in this field is possible, but each method has distinct functional limitations and problems. Given the severity of modification needed for high-resolution neural-to-digital memory capture, analysis suggests a mix of invasive and non-invasive methods will be customized to meet specific end-user needs, and that the popular market will lean towards minimally invasive “good-enough” technologies, leaving radically invasive technologies to the high-end consumer.

EEG Helmets and Headbands
Based on the history of the videogame console market, we can expect commercial neural interfaces to diverge into a handful of proprietary and incompatible platforms. In the field of electroencephalogram (EEG) brainwave sensors, a small number of contenders have emerged in niche markets. Mattel’s MindFlex is geared for low-end gaming and personal consumer use; the Emotiv EPOC is designed to be a high-end gaming and biomedical device controller; and the g.tec Intendix is designed as an EEG-based typing system for the disabled. EEG helmets do not provide precise control over output: the data captured by an EEG helmet is a crude aggregation of surface brain activity and responds only to general commands. The Emotiv EPOC, for instance, is capable of only 12 distinct control options, making it comparable to a console game controller that takes some training and practice to master. EEG interfaces are well suited to playing video games, controlling wheelchairs and computers, or performing tasks like choosing YES or NO while scrolling through a list of options, but they can’t read internal thoughts or interpret internal visual data. While EEG hardware platforms are still proprietary, their simple USB interfaces make them highly extensible. For instance, the Emotiv EPOC has already been tested as a thought-based remote control for a spy robot, and the MindFlex has been hacked by Harcos Labs into the worst toy ever: a meditation/torture device that rewards you for keeping your mind still and shocks you for thinking.
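
A small library of discrete control options, as described above, amounts to picking the most confident label from a classifier and mapping it to a device action. The sketch below illustrates that idea; the command names, confidence values, and threshold are hypothetical, not the API of any real headset.

```python
# Hypothetical sketch: dispatching a headset's trained "mental commands"
# to device actions, in the spirit of the ~12 discrete controls above.

COMMANDS = {"push": "move_forward", "pull": "move_back",
            "lift": "jump", "neutral": None}

def dispatch(confidences, threshold=0.6):
    """Pick the highest-confidence mental command; ignore weak or neutral reads."""
    label, score = max(confidences.items(), key=lambda kv: kv[1])
    if score < threshold or COMMANDS.get(label) is None:
        return None
    return COMMANDS[label]

# A simulated classifier frame in which "push" dominates.
frame = {"push": 0.82, "pull": 0.10, "lift": 0.05, "neutral": 0.03}
```

The threshold is what makes such a controller feel like it "takes practice": until the user can produce a clean, dominant signal, most frames fall through as no-ops.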

Image Courtesy of: mindflexgames.com

The upside to EEG helmets is that they all employ similar technology, they are non-invasive, and they can be removed at will; the downside is that they lack fine control, take some training and practice to use, and require a high level of concentration to master. The price of commercial EEG headsets is still well over $200, and their use is fairly rare, but it is reasonable to assume that EEG controllers will become more common once the price point drops under $100, or perhaps when Nintendo comes out with an EEG game package for the Wii. Using an EEG headset as a personal memory device (PMD) to monitor daily brain activity may, over time, generate software-based solutions for extrapolating complex thoughts from broad surface-level readings. Intel recently demonstrated software that can predict what a subject is thinking by scanning their fMRI readings. Functional MRI scans are higher powered and more detailed than EEG readings, but the general model for brainwave mind-reading is coming together. In the future, non-invasive EEG scanners may be improved with better sensors, like electric potential sensors (EPS) that can read brainwaves from a few meters away, even through walls. A mix of EEG and EPS sensors integrated into a single headset may allow finer degrees of control, monitoring, and output.
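
The "software-based solutions" mentioned above would start with something like the sketch below: reduce a window of raw readings to a crude feature vector, then match it against a personal library of previously labeled brain states. The signals, labels, and centroids here are entirely synthetic, and real pipelines use far richer features.

```python
# Illustrative PMD sketch: nearest-centroid matching of crude EEG features
# against a personal library of labeled states. All data is synthetic.
import math

def features(window):
    """Mean absolute amplitude per channel -- a deliberately crude summary."""
    return [sum(abs(s) for s in ch) / len(ch) for ch in window]

def classify(window, library):
    """Return the label whose stored centroid is closest to this window."""
    feat = features(window)
    return min(library, key=lambda label: math.dist(feat, library[label]))

# Two-channel library built from earlier labeled recordings (invented values).
library = {"relaxed": [1.0, 1.2], "focused": [4.0, 3.8]}
calm = [[0.9, -1.1, 1.0], [1.3, -1.2, 1.1]]   # two channels, three samples
```

The interesting engineering problem is the library itself: surface readings are noisy aggregates, so extrapolating complex thoughts means accumulating many labeled windows per state over time.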

Neural Conduction Sensors
Neural conduction sensors monitor electrical activity through the skin, through the nerves, or through muscles. The most common form of neural sensor is the transdermal electrode array, a grid of pins stuck directly into the skin, nerves, and muscle tissue, much like you would stick a microchip into a circuit board. Kevin Warwick famously used a transdermal electrode array embedded in his forearm to control a bionic hand and share non-verbal neural pulses with his wife, who was also connected to a transdermal electrode array. Even though transdermal sensor arrays require sticking pins into the skin, they allow a very fine level of control. A typical Utah electrode array is a ten-by-ten grid of 100 pins embedded directly into tissue. This allows for fine signal detection and aggregation from a relatively small patch of interface, which can be further optimized by having multiple embedded sensors working in unison. Embedded electrode arrays are crude solutions for testing neural interfaces, but organic problems such as infection and scarring make them problematic for long-term use.
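
The "fine signal detection and aggregation" from a ten-by-ten array can be pictured as per-pin spike detection followed by regional pooling. This is a toy sketch under assumed units and thresholds, not a real acquisition pipeline.

```python
# Sketch of aggregating a 10x10 Utah-style array: threshold each pin's
# voltage, then pool spike counts per 5x5 quadrant. Values are illustrative.

def detect_spikes(grid, threshold=50.0):
    """Return a 10x10 boolean grid of pins whose voltage (uV) crossed threshold."""
    return [[v > threshold for v in row] for row in grid]

def quadrant_rates(spikes):
    """Pool spike counts into four 5x5 quadrants, a crude regional readout."""
    rates = {}
    for name, (r0, c0) in {"NW": (0, 0), "NE": (0, 5),
                           "SW": (5, 0), "SE": (5, 5)}.items():
        rates[name] = sum(spikes[r][c]
                          for r in range(r0, r0 + 5)
                          for c in range(c0, c0 + 5))
    return rates

# A quiet grid with one active patch of nine pins in the north-east corner.
grid = [[0.0] * 10 for _ in range(10)]
for r in range(0, 3):
    for c in range(6, 9):
        grid[r][c] = 80.0
```

Pooling is also where multiple arrays "working in unison" pay off: each array contributes its own regional readout, and the union sharpens localization.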

Image Courtesy of: news.cnet.com

A non-invasive alternative to embedded electrodes is being explored by Ambient Corporation in a device called the Audeo. The Audeo was developed to measure energy conduction around the laryngeal muscles for controlling speech production, and uses stainless steel electrodes to read differential voltage across the skin at the front of the throat. A user with the Audeo strapped to their neck can activate a phoneme-recognition dictionary just by thinking clearly in sub-vocalized speech: thinking about speaking. Using this technique, Michael Callahan of Ambient made the world’s first voiceless phone call at the 2008 Texas Instruments Developer Conference in Dallas, using an Audeo sensor to “think” through a Bluetooth connection and have it translated to speech on his cell phone. Like an EEG sensor, the Audeo takes surface-level electrical activity and aggregates it into a library of control options. With the laryngeal muscles, there is a precise predictability to the rhythm of speech, making the task of aggregating sub-vocalized impulses at the muscle easier than capturing a similar brainwave library. With the proper software interface, users with a device like the Audeo strapped to their neck should be able to send text messages by thought, make mental notes and save them to a file, or capture what they are thinking in a running journal or phone conversation with a high degree of accuracy. There is a subtle difference between thinking internal thoughts and producing sub-vocalized speech, but that difference is minor enough to overcome with some training. With the proper laryngeal sensors, learning how to turn verbal thought into mental commands is a matter of practice and mastering a new library of subtle verbal motor skills. Non-invasive dermal contact plates, like those used in the Audeo, can be applied to any muscle group for exerting sub-motor control over external devices just by thinking about moving.
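
A phoneme-recognition dictionary of the kind described can be sketched as template matching: compare the captured sub-vocal voltage trace against stored per-phoneme templates and pick the closest. The templates and traces below are toy values; the actual Audeo pipeline is proprietary.

```python
# Hedged sketch of sub-vocal phoneme matching: nearest template by
# sum-of-squares distance over a short voltage trace. Toy data throughout.

def match_phoneme(trace, templates):
    """Return the phoneme label whose template is closest to the trace."""
    def sse(label):
        return sum((a - b) ** 2 for a, b in zip(trace, templates[label]))
    return min(templates, key=sse)

# Invented two-phoneme dictionary of normalized voltage samples.
templates = {"ah": [0.1, 0.8, 0.6, 0.2], "ee": [0.9, 0.3, 0.1, 0.7]}
reading = [0.15, 0.75, 0.55, 0.25]   # a noisy capture of an "ah"
```

The predictable rhythm of speech is what makes this tractable: trace boundaries line up with muscle activation, so the matcher sees cleanly segmented units rather than a continuous brainwave stream.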

Embedded Neural Interfaces
Early work with embedded wires or electrodes in the brain has shown that neurons fire in direct response to electrical stimulation, and that electrical stimulation from embedded electrodes can be translated directly into perception. Most embedded electrode research has focused on correcting behavioral and motivational problems (such as depression, Parkinson’s disease, and lack of libido) by stimulating glands in the brainstem or basal forebrain to promote fine-tuned transmitter release, typically focusing on the dopamine pathways. Embedded electrodes are inserted into deep brain tissue through a hole in the skull and targeted to stimulate a very small group of neurons. They are designed to be unidirectional controllers for sending current into the brain; they do not sense or decode neural activity for digital output, but there is nothing stopping an embedded electrode from being designed to do both.

Image Courtesy of: Emotiv

A visual prosthesis has been proposed that relies on a Utah sensor or similar electrode array implanted directly into the visual cortex, allowing visual input and output of cortical activity through something like a video card or digital camera interface. An eight-by-eight array of 64 electrode points per hemisphere of the visual cortex has been estimated as an easy starting point for maintaining referential integrity in cross-porting the visual field. At this bit depth, visual representations will be blurred and pixelated, but can still help blind people navigate around real-world obstacles. Topographical mapping of visual activity through fMRI monitoring can predict the best locations and depths of electrode insertion needed to promote precise perceptual input and output, and electrode placement can be customized to the visual mapping patterns of each individual. Obviously this procedure would involve something like a craniotomy, where a section of the skull is removed to allow direct access to the cortex. A visual prosthetic sensor array may be embedded as a mesh or grid of electrodes placed directly onto the visual cortex, or may be attached to the interior surface of the skull. A specialized Bluetooth adapter or USB port can be drilled through the skull plate for plug-and-play neural I/O access. When you plug in your brain interface, your eyes become an instant webcam; your imagination becomes an instant computer monitor.
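
The 64-points-per-hemisphere estimate implies downsampling the visual field to two 8×8 stimulation maps, one per hemisphere (each half of the visual field projects to the opposite cortex). The sketch below block-averages a toy grayscale field under that assumption.

```python
# Sketch of the 8x8-per-hemisphere estimate: block-average a grayscale
# "visual field" into two coarse stimulation maps. Toy data, no real optics.

def downsample(image, rows=8, cols=8):
    """Block-average an image (list of equal-length rows) to rows x cols."""
    h, w = len(image), len(image[0])
    bh, bw = h // rows, w // cols
    return [[sum(image[r * bh + i][c * bw + j]
                 for i in range(bh) for j in range(bw)) / (bh * bw)
             for c in range(cols)] for r in range(rows)]

def split_field(image):
    """Left visual field -> right hemisphere map; right field -> left."""
    half = len(image[0]) // 2
    left_field = [row[:half] for row in image]
    right_field = [row[half:] for row in image]
    return downsample(left_field), downsample(right_field)

# A 16x32 field, dark on the left half and bright on the right.
field = [[0.0] * 16 + [1.0] * 16 for _ in range(16)]
right_hemi, left_hemi = split_field(field)
```

This is why early prosthetic vision is navigable rather than readable: 64 averaged blocks preserve obstacle-sized contrast but erase all fine detail.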

Digital visual prosthetics will be developed first for the blind, but as the technology is perfected, cheap access to multisensory porting will drive demand for military, academic, industrial, and recreational applications. The concepts applied to a visual electrode array can be applied to a verbal array embedded in Broca’s area of the frontal cortex, where language and internal speech originate. A sensor located in this area could monitor the spontaneous production of internal thought, and may also be a good target for digitally monitoring the emotional content of experience. An array of embedded electrodes in both the vision and speech areas of the brain would provide almost complete control over input and output of multisensory awareness. Specialized pathways for controlling machinery may also be exploited through these methods, as demonstrated in the experiment where a monkey was able to manipulate a mechanical arm with a 100-pin Utah electrode array the size of a freckle placed directly on its motor cortex. Taking these techniques a step further, electrodes implanted in the hippocampus and medial temporal lobe may also be able to monitor dreams, sense memory visualization, generate spiritual experiences, and port music directly into and out of the brain.


Next Generation Interfaces
Early experiments have demonstrated that neural tissue can quickly adapt to communicate through embedded digital sensors. Judging from the experiment with the monkey and the robotic arm, mastering control of an embedded device takes days, not weeks or months. Training with embedded interfaces drives neuroplasticity and new signaling pathways to promote fast and robust connections with the device. Just as motor pathways grow to promote more precise control over machines, sensory pathways will grow to precisely capture, route, and analyze signals from sensory prosthetics. Neuroplasticity ensures that human networks can adapt to digital sensors through training, but digital sensors cannot currently adapt to human physiology. This causes problems. Embedded sensors are not organic, so they can cause infection and scarring, and are susceptible to slow corrosion in an organic environment. Because of these limiting factors, even well-designed synthetic sensors will have a useful life of a few months to a year before scarring or slow degeneration means they must be removed or replaced. The obvious solution to these long-term problems is to build a better interface.

Kevin Warwick, the man who embedded an electrode array into his forearm to control a robotic hand, has also demonstrated that rat neurons can be adapted in vitro (in a dish) to interface with a circuit board and control a small robot car. Over time, these embedded neural controllers test and learn the operations of the vehicle until they begin to pilot themselves in insect-like fashion around the environment. Taking this experiment from the motor to the sensory level, it is possible that a similar neuron-chip wired into a sensory feedback loop with a digital camera and video output electrode array would learn to use the camera and start navigating via visual cues. If device control can be grown spontaneously in a dish through simple trial-and-error plasticity, there is a good possibility that neural sensory interfaces for any digital device can be ported directly from the cortex in a similar manner. To attach a neural video controller to the human brain, an array of micro-fine holes would be laser-drilled into the back of the skull in a ten-by-ten grid for each hemisphere of the cortex. These holes would then be fitted with gold or stainless steel sleeves that allow pre-grown neural fibers to be inserted and removed through micropipettes. The micropipettes would pass through the sleeves and pierce the fine outer layer of cortex at a depth of under a millimeter, allowing neural signals to pass through the skull via electrode-capped terminals or transdermal/transcranial contact points under the skin. Embedded cranial terminals would then connect via a cap or device worn around the back of the skull containing a pre-grown neural device-controller with the same bit resolution established for the camera’s visual field. When terminal contact was made with an external device controller, the neural interface would begin generating input and output directly from device to cortex. With synaptic plasticity generated by both the input and output terminals of the controller and cortex, the subject would quickly wire robust digital-visual pathways simply by practicing and learning how to use the interface.

Image Courtesy of: singularityhub.com

A neural electrode, or neurode, offers several significant advantages over a metal electrode or a wire. A neural bundle passed through a sleeve in the skull will not present the same risk of infection, scarring, or corrosion as an embedded wire. A neural bundle on the interior of the skull can feed and power itself from blood oxygen and glucose; it does not need batteries or electrical stimulation to send and receive signals, and it can self-repair normal wear and tear. A neural controller capped with a conductive terminal on the outside of the skull and fiber-optic bundles on the interior can interface with electrical sensors and optogenetic controllers that send neural packet information in pulses of green or yellow light, as opposed to pulses of electricity. Optogenetic controllers offer much finer control over direct neural signaling than electrical controllers. Optogenetic neurons are genetically altered to contain light-sensitive signaling mechanisms; green LEDs send signal and yellow LEDs inhibit signal. A combination of fiber-optic, optogenetic, and transcranial neurode controllers offers the highest-bandwidth and highest-fidelity potentials of any interface that can be constructed with existing technology. An optogenetic neurode controller with a stereo bit depth of 1,800 is estimated to be enough for neural-to-digital output with the initial visual fidelity of a cheap digital camera. This would equal 1,800 neurode contact points in 30×30 arrays on each side of the visual cortex. Over time, this array would adapt to higher-fidelity input and output, reaching HDTV levels and then full 3D topographical rendering within weeks to months of cross-training and mutual plasticity between subject and controller.

Although an optogenetic neurode array promises seamless transfer of neural-to-digital data, the technology poses some interesting ethical and physiological questions. Admittedly, no one wants rat neurons growing into their brains, so neural donors will have to be sourced and genetically typed to resist rejection. If a neurode array is custom built for an individual, the custom controllers would optimally be grown from neurons or stem cells harvested from that individual; this would make the neurode controller a true extension of that individual’s physiology. Neurode arrays can be attached through micro-channels drilled through the skull, or they can be embedded like a screen or mesh along the interior of the skull during a craniotomy; the latter method would allow for full-hemisphere cortical access through a single drilled transcranial access port. Because neurodes are organic and wire themselves directly into the sensory network, they can be defined as a type of exo-cortex, or a digital extensibility layer of the neo-cortex, making them very difficult to remove once they are attached. Any exo-cortical controllers using live neurons or neurode arrays to interface with cortical structures should come with a genetic kill code that causes the neurodes to instantly sever their synaptic connections. This code would be applied before the array was removed to prevent synaptic tearing, and would presumably kill most of the interface neurons in the controllers, making them useless. Perfecting a clean neurode removal process may be more difficult than building the actual interface, so any embedded neurode solution must be considered permanent or semi-permanent from the outset.

Image Courtesy of: braingate.com

Although the concept of a neurode array is applied here to a full visual cortex controller, it is possible to imagine a simpler model scaled to the size of an eyeball, where an artificial pre-grown retinal photo-controller array feeds through a neurode ganglion directly into the optic nerve, forming the basis of an integrated artificial eye. Artificial sensory organs may also be attached through a neural cuff, like the recently developed flat interface nerve electrode (FINE), which uses a clamp that flattens the nerve to capture impulses and carry them across severed nerve pathways. Theoretically, if you were to attach FINE clamps to both optic nerves and then to both cochlear nerves, your eyes and ears would become stereo audio-visual I/O ports for capturing and receiving sensory data. A multi-sensory wiretap like this would provide a seamless upstream interface for basic sensory input and output, but it would not be able to render downstream cortical information like thoughts, feelings, and dreams. If the process of attaching cuffs or patch cables to existing nerve bundles is perfected, hard neural I/O ports could be created, allowing artificial organs and sensory peripherals to be swapped between existing neural inputs without surgery.

The final piece of embedded neural interfaces is the power source, which in the past has been a problem, but in next-generation devices will be completely organic. Instead of using batteries, embedded devices will run on bioelectric power from the body’s cellular metabolism. Aleksandr Noy has recently demonstrated a carbon nanotube transistor that can generate electric current in the presence of adenosine triphosphate (ATP), the chemical messenger that carries metabolic energy through living tissue. Assuming this technique can be integrated into embedded devices, cellular activity at the prosthetic interface should also generate the current needed to power the device’s embedded electronics. To minimize the hassle of having bulky hardware permanently strapped or plugged into the skull, embedded devices would optimally enable wireless connectivity with something like a Bluetooth controller worn over the ear. The use of bioelectric transistors that generate their own current could potentially enable low-power wireless device connectivity through the skull without the need to change implanted batteries.

Image Courtesy of: nytimes.com

Ongoing Concerns
The ability to port data in and out of consciousness has been demonstrated in multiple capacities, with interfaces ranging from low-fidelity non-invasive to high-fidelity radically invasive. Although these technologies seem like science fiction, they are being vigorously explored by academic, medical, and commercial interests, with companies like BrainGate seeking patents on multiple neural interfaces and software platforms simultaneously. While the primary purpose of neural interface research is putatively therapeutic, the functional potentials and ethical concerns of neural porting loom in the future. Right now these are hypothetical concerns, but if a single-access embedded neurode procedure could be perfected, automated, and performed at a local clinic in two hours for around a thousand dollars, and it was covered by insurance, the temptation for cosmetic and personal use of such a procedure becomes clear. Neural interfaces can obviously be abused: they can be hacked to enslave and torture minds, drive people intentionally insane, or turn them into sleeper assassins or mindless consumers. Security is an inherent problem of any extensible exo-cortical system and must be addressed early in the engineering and testing stages, or anyone with an exo-cortical input would be ripe for exploitation. Sensory discrimination is an ongoing problem in any media environment, so individual channel selection, manual override, and the ability to shut down device input should be an integral part of any embedded system.
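
The safety features argued for above (channel selection, manual override, and a hard shutdown) can be sketched as a simple input gate that every inbound payload must pass through. The channel names below are hypothetical.

```python
# Minimal sketch of an exo-cortical input gate: per-channel enable flags,
# a manual toggle, and a kill switch that drops all input unconditionally.

class InputGate:
    def __init__(self):
        # Hypothetical channels; "text" starts disabled by user choice.
        self.enabled = {"vision": True, "audio": True, "text": False}
        self.shutdown = False

    def toggle(self, channel, on):
        """Manual override: the user selects which channels may deliver."""
        self.enabled[channel] = on

    def kill(self):
        """Hard shutdown: no channel may deliver input until re-armed."""
        self.shutdown = True

    def deliver(self, channel, payload):
        """Return the payload only if the gate is live and the channel enabled."""
        if self.shutdown or not self.enabled.get(channel, False):
            return None
        return payload

gate = InputGate()
```

The design point is that the gate defaults to dropping anything it does not recognize, and the kill switch overrides every other setting, which is the property a user under attack actually needs.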

12 Comments

  1. This begs the question – how long before the exo-cortex shunt you’ve had drilled into your cranium becomes obsolete?

    While I’d love to have greater embedded processing power (not to mention something resembling objective, accurate memory), I can’t see the mainstream adoption of some of the really invasive procedures. Maybe I’m wrong on that, though, who knows. I want my exo-cortex to see widespread adoption, so I can depend on the hacker community to extend its functionality with apps and stuff!

  2. Imagine how amazing it would be to direct a movie into your favorite video editing software directly from your imagination. No more cameras and sets, no more expensive actors and special effects, anyone with a brain-video interface can create a piece of visual narrative just by thinking it. If you can get it to work on one person it really opens up the possibilities in media.

  3. I want my cat to start talking to me (-:. Imagine if an animal starts thinking about food and it tells you to feed it. We could start augmenting and upgrading our pets. Pet 2.0 LE (-: Why not?

    At some points it seems the technology is already developed, but at others it seems it will take some time before we start applying it to our lives. How much time till we advance?

  4. Who says it is possible to port “… thoughts, memories, and dreams directly to digital memory.”?

    Of course, we already put thoughts to paper (or screen as here) in the form of character sequences or images and the information is readily stored in digital memory as binary representations. In this sense, a more direct and efficient method of transfer is to be welcomed. This would help people with motor difficulties, stimulate better communication and could lead to more creativity.

    In the stronger sense, this transfer seems to me impossible. We are talking chalk – binary data – and cheese – our experiences. I just don’t see that the latter could be squeezed into the former, but then I’m no fan of the theory of computational functionalism (brain as computer).

  5. It doesn’t beg the question, it raises the question.
    http://begthequestion.info/
    Spread the word.

  6. Stone Age, Iron Age, … Information Age, and now the Cyborg Age. Bring it on.

  7. Don’t be such a grammar-nazi.

    Society chooses its own meaning for language. The minority who understand the correct meaning and usage of ‘beg the question’, or, as your referenced page so appallingly abbreviates it, ‘BTQ’, can be commended for their understanding – but they are a minority nonetheless, and it is the majority’s understanding of language that is important.

    Language evolves. Embrace that evolution.

  8. ‘When language becomes sloppy, words can mean anything.’

    –George Orwell

  9. You didn’t mention SQUIDs (superconducting quantum interference devices) in your article.

  10. This was an awesome article. It makes me wonder, “under what conditions would I be willing to strap my brain up into these types of technology?”

    Seems like such a fine line between being enhanced by these technologies and being fundamentally changed by them.

  11. Invasive or otherwise, I’d like to see this technology made accessible. It’s already pretty easy and dirt cheap to homebrew simple brain-to-computer interfaces. As for running the other way up the street – although I haven’t dug around for a while, and it might exist already – I reckon you could design an external memory reading mechanism à la seeing-tongue without too much fuss.

    I’m not overly concerned anyone would be enslaved by nefarious mind police. I imagine we don’t all encode our memories in quite the same way. Memory can be damaged, but I doubt it can be written arbitrarily. So while it is true one’s implanted extra capacity could be mangled transcutaneously, I’d still be more wary of the humble lump hammer’s record as an efficient means for staving heads in.

    On the other hand, one thing that does worry me; what happens if I disconnect and forget where I parked the car?

    Make mine a RAID-5.

  12. The benefits of brain-to-digital information transfer would be an increase in speed, efficiency, and transparency over current methods. For information storage, as already mentioned, we typically use text. Text as a medium serves this purpose fairly well. For brain-to-digital information storage to be better, it would have to either create text faster or store information in a new medium. Since a lot of people can already formulate text by typing faster than they can formulate their thoughts, I don’t think brain-to-digital text transfer is worth it. Therefore developing a new medium better than text is necessary. This medium will have to be universal, information dense, and unambiguous. Imagining what this medium will be right now is like imagining text before written language was used.