With each passing year, the ability to alter our minds and bodies through technology grows. Advances in biotechnology, neuroengineering, robotics and myriad other fields are steadily changing the human condition. Many of these changes will be for the better, but there will be a downside too. In the course of augmenting our physical and mental abilities, we’re also introducing new vulnerabilities, opening ourselves up to invasive attacks that could threaten our finances, our identities, even our lives. In short, we’re quickly approaching a time when we’ll have to protect against the hacking of Human 2.0.
Hacking is defined as accessing or manipulating a system in ways other than its developers originally intended, often exploiting flaws in the system’s design. At first, computer hacking and phone phreaking were activities born of curiosity and exploration. But over time, the methods and security flaws that were discovered came to be used by criminals and spies for other, darker purposes. It’s a natural progression; anytime conditions present the opportunity to steal or cause wanton damage, there will be some who want to take advantage of them. Transhumanism will soon have to contend with this very problem.
The Body Electric
The potential for vulnerabilities exists in many places. For instance, any time electronics integrate with the body, we have to consider the potential for unauthorized access to them. Wireless reprogrammable implantable medical devices (IMDs) such as pacemakers, implantable cardioverter defibrillators (ICDs), neurostimulators and implantable drug pumps are all potentially vulnerable to reprogramming attacks. Such potentially lethal acts are possible because these devices lack any built-in authentication system. A Medical Device Security Center study released in 2008 demonstrated the ability to reverse-engineer an ICD’s communications protocol, then access the device and reprogram it. Because of size and power limitations, including authentication and encryption in IMDs isn’t going to be feasible for several years. Therefore, we’ll need other kinds of solutions. A new jamming system invented by an MIT and University of Massachusetts-Amherst group could be one possible approach. It uses a small transmitter, worn externally like a piece of jewelry, to jam unauthorized communications with the patient’s implant while allowing authorized signals through. This has the added benefit that emergency response teams can remove the jamming device if necessary.
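To see what the missing authentication amounts to, here is a minimal sketch of the kind of challenge-response check a future implant might perform before accepting a command. Everything here is hypothetical: the shared key, the command strings and the session protocol are illustrative assumptions, not any real device’s interface.

```python
import hashlib
import hmac
import secrets

# Hypothetical shared secret, provisioned when the device is implanted.
DEVICE_KEY = secrets.token_bytes(32)

def device_issue_nonce() -> bytes:
    """Device side: issue a fresh random challenge for each session."""
    return secrets.token_bytes(16)

def programmer_sign(key: bytes, nonce: bytes, command: bytes) -> bytes:
    """Programmer side: bind the command to this session's challenge."""
    return hmac.new(key, nonce + command, hashlib.sha256).digest()

def device_accept(key: bytes, nonce: bytes, command: bytes, tag: bytes) -> bool:
    """Device side: reject any command whose tag doesn't verify."""
    expected = hmac.new(key, nonce + command, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

nonce = device_issue_nonce()
tag = programmer_sign(DEVICE_KEY, nonce, b"SET_RATE:70")
assert device_accept(DEVICE_KEY, nonce, b"SET_RATE:70", tag)         # authorized command
assert not device_accept(DEVICE_KEY, nonce, b"DISABLE_SHOCKS", tag)  # reusing the tag for a different command fails
```

The fresh nonce per session is what defeats simple replay of an eavesdropped command; the catch, as noted above, is that even this much cryptography strains the size and power budget of today’s implants.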
In coming years, numerous devices and technologies will become available that make all manner of wireless communications possible in or on our bodies. The standards for Body Area Networks (BANs) are being established by the IEEE 802.15.6 task group. Such networks will link low-power in-body and on-body nodes for a variety of medical and non-medical applications. For instance, medical uses might include vital signs monitoring, glucose monitors and insulin pumps, and prosthetic limbs. Non-medical applications could include life logging, gaming and social networking. Clearly, all of these have the potential for informational and personal security risks. While IEEE 802.15.6 establishes different levels of authentication and encryption for these types of devices, this alone is no guarantee of security. As we’ve seen repeatedly, unanticipated weaknesses in program logic can come to light years after equipment and software are in place. Methods for safely and securely updating these devices will be essential due to the critical nature of what they do. Obviously, a malfunctioning software update for something as critical as an implantable insulin pump could have devastating consequences.
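The update problem, at minimum, means a device should refuse any image it cannot verify. The sketch below shows the simplest version of that idea, checking an update’s digest against a trusted manifest; the filename, manifest and image contents are invented for illustration, and a real device would more likely verify a public-key signature so it need not store any secret at all.

```python
import hashlib

# Hypothetical trusted manifest, assumed to arrive over an authenticated channel.
TRUSTED_MANIFEST = {
    "pump-firmware-2.1.bin": hashlib.sha256(b"firmware image v2.1 contents").hexdigest(),
}

def verify_update(name: str, image: bytes) -> bool:
    """Install only images whose SHA-256 digest matches the trusted manifest."""
    expected = TRUSTED_MANIFEST.get(name)
    if expected is None:
        return False  # unknown image: never install
    return hashlib.sha256(image).hexdigest() == expected

assert verify_update("pump-firmware-2.1.bin", b"firmware image v2.1 contents")
assert not verify_update("pump-firmware-2.1.bin", b"tampered image")
```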
Blueprint for Trouble
There are many other threats Human 2.0 may face. For instance, as we become increasingly capable of manipulating our biology in order to live healthier, longer lives, there’ll also be a greater potential to do significant, intentional harm. The genomes of all manner of virulent organisms are already posted on the internet, including smallpox and the 1918 H1N1 influenza virus. Sequences of DNA – portions of genetic code – can routinely be ordered online. Second-hand DNA synthesizers, used to assemble genetic material from scratch, are readily available on eBay. The ingredients are quickly coming together to make some very unpleasant DIY terrorism possible.
It’s been argued that biotech processes are too complex and require too much specialized knowledge to worry about homebrewed threats. But this is changing. DIY biotech is a rapidly growing field and today even junior high and high school students can successfully splice a gene from one organism to another. Though the procedures for building an entire genome are considerably more difficult, we shouldn’t underestimate the likelihood that these will become much more accessible in time.
As biotech author Robert Carlson observed in his recent book, “Biology Is Technology”: “Today writing a gene from scratch within a few weeks costs a few thousand dollars. In five to ten years that amount should pay for much larger constructs, perhaps a brand-new viral or microbial genome.”
A good analogy for thinking about DIY biotechnology risks might be found in hacking and the creation of computer viruses. Computer programming was once the domain of highly skilled scientists who were intimately familiar with the systems they worked on. Over time, programming tools abstracted more and more of the processes, making programmers’ jobs easier and tasks more user-friendly. Today, it’s entirely possible for users with no real coding skills (derisively known as script kiddies) to use virus toolkits to point-and-click their way to creating a brand new computer virus. Similarly, wannabe hackers can use downloadable user-friendly programs to perform remote denial-of-service attacks, develop exploit code and crack passwords. Given the informational nature of biotechnology, it seems very feasible that it could one day follow a similar pattern.
When thinking about biotechnology threats, especially DIY biotechnology, it’s important to remember that a lot of good will come from these advances. But not everyone wears a white hat and just as occurred in computing, inevitably there will be some black hat behavior too.
While biotechnology has long touted the dream of personalized medicine, that same personalization cuts both ways. The cost of sequencing a person’s entire genome is fast approaching $1,000, a decline of more than six orders of magnitude in a little over a decade. (This cost is likely to drop considerably further in years to come.) Such technology makes highly targeted approaches to curing disease possible. Unfortunately, it also holds the potential for causing targeted afflictions. Targeting individuals, families or entire races becomes a real possibility when the point of attack is the genome. If such behavior seems far-fetched, consider the genocides of the past century. If the perpetrators of those atrocities had had such tools at their disposal, do you think they wouldn’t have used them? As techniques grow more sophisticated, it should theoretically become increasingly easy to identify a particular sequence of nucleotides or genes and, based on this, deploy a “payload” that acts on that sequence in a predetermined way.
One of the most straightforward methods of attack might be the switching of a single nucleotide polymorphism (SNP). Certain SNPs are associated with resistance to specific diseases or conditions such as arteriosclerosis. Switching the nucleotide at these sites could make targeted individuals more susceptible to the associated disease. For SNPs that fall within exons (the regions of a gene that carry the protein-coding sequence), switching a single nucleotide can actually result in a harmful, even fatal, amino acid substitution.
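How much a single base can matter is easy to see in the genetic code itself. Sickle-cell disease, for example, arises from one A-to-T substitution in the HBB gene that converts a glutamic acid codon (GAG) to a valine codon (GTG). The toy sketch below, with a codon table deliberately trimmed to this one example, shows the mechanics:

```python
# Minimal codon table covering this example only (the full table has 64 entries).
CODON_TABLE = {"GAG": "Glu", "GTG": "Val"}

def substitute(codon: str, position: int, base: str) -> str:
    """Return the codon with a single base swapped at the given position."""
    return codon[:position] + base + codon[position + 1:]

original = "GAG"                        # codes for glutamic acid
mutated = substitute(original, 1, "T")  # A -> T at the second position gives "GTG"
print(CODON_TABLE[original], "->", CODON_TABLE[mutated])  # Glu -> Val
```

One changed letter out of three billion, and the resulting protein folds differently; that is the asymmetry that makes the genome such a worrisome attack surface.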
The genome, the proteome (the entire set of proteins expressed by the genome) and the metabolome (the complete set of metabolites found in an organism) are probably the most complex systems humankind has ever explored. But as our tools and analytical methods improve, we will come to have a considerably better understanding of them. As we do so, it will be critical that we also understand the types of vulnerabilities we are revealing so that we can take steps to protect against them.
Our senses are essentially the interface between our brain and the rest of the world. How we see, hear, touch, taste and smell the world directly affects our experience of it. As we move into the era of augmented reality and virtual reality, our experiences and how we think about them are going to change.
This transition won’t happen all at once. For instance, today’s augmented reality apps are usually experienced on a smartphone. But later this decade, virtual retinal displays (VRDs) may become the viewing method of choice. In the early part of the next decade, we may forgo these for “bionic” contact lenses, such as those being developed by a University of Washington research team led by Babak Parviz. Lenses such as these could one day provide us with continual overlays of images and data. Looking at a similar progression in auditory technology, Bluetooth headsets already offer hands-free, wireless communication. Cochlear implants provide direct stimulation of the auditory nerve and have restored hearing to over 200,000 users worldwide. All of these technologies could one day converge into a unified reality interface. But regardless of the exact progression, it seems very likely that such technologies will become increasingly ubiquitous. In time, they could even become the default mode of human experience.
As we’ve already seen, such devices will almost certainly be hackable in one way or another. Given a sufficiently sophisticated attack, it may be possible to interfere with a user’s sensory system, potentially even planting false sights and sounds. Such assaults could threaten not only a person’s physical and financial security, but their sense of reality as well.
Therefore steps will need to be taken to protect ourselves. Sophisticated, self-modifying firewalls and virus protection will be essential parts of our personal security package. At the same time, marketers and semi-legitimate spammers will be trying to win our favor, directing us to buy their product or service with contextual, experience-driven offers. Many of us may not want to block these out entirely, but instead allow specific types of products or offers through. Specialized self-learning AIs could provide for highly personalized filtering. But not everyone takes “No” for an answer. An advertising-filtering tit-for-tat could result in an escalation of techniques, leading to progressively more sophisticated and intelligent software and tactics. Such techniques would only add to the arsenals of criminals and aggressor nations.
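What such personalized filtering might look like, in its very simplest form, is a policy that admits only offers matching a user’s stated interests. The advertisers, categories and offers below are invented, and a rule-based class stands in for the self-learning AIs imagined above, which would infer these preferences rather than have them hard-coded:

```python
# Hypothetical incoming offers: (advertiser, category, message)
OFFERS = [
    ("local-cafe", "food", "Espresso half price this morning"),
    ("spam-co", "pharma", "Miracle pill, act now"),
    ("bookshop", "books", "New arrivals in science fiction"),
]

class OfferFilter:
    """Rule-based stand-in for a learned, personalized offer filter."""

    def __init__(self, allowed_categories, blocked_advertisers=()):
        self.allowed = set(allowed_categories)
        self.blocked = set(blocked_advertisers)

    def admit(self, advertiser: str, category: str) -> bool:
        # Let through only wanted categories from non-blocked advertisers.
        return category in self.allowed and advertiser not in self.blocked

f = OfferFilter(allowed_categories={"food", "books"}, blocked_advertisers={"spam-co"})
admitted = [msg for adv, cat, msg in OFFERS if f.admit(adv, cat)]
print(admitted)  # the café and bookshop offers get through; the pharma spam doesn't
```

The escalation described above begins exactly here: once advertisers learn the filter’s rules, they mislabel categories and rotate identities, and the filter must grow smarter in response.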
Eventually, the use of our sensory system as intermediary may give way to brain-computer interfaces (BCIs). Already, BCIs are beginning to allow for thought-directed control of devices such as wheelchairs and computers. As this technology improves, our ability to interface with any type of connected device will almost certainly develop.
But this is output. Developing input technology is going to be much more difficult. Nonetheless, work is already well under way toward developing such technologies. For example, in 2011, researchers from Wake Forest University and the University of Southern California restored memories in rats using previously recorded neural signals. Obviously, these are still early days, but as the ability to beam words, images, even thoughts into our mind becomes possible, what threats will we need to consider? Will we have to resort to some sort of encrypted identifiers in order to be sure our thoughts are our own? Will the task groups responsible for establishing standards for thought transfer require protocols that track key information about a thought’s originator? If they did, how easy would this information be to spoof? And what would all of these countermeasures mean for our civil liberties? If our experience of a half century of networked computing is anything to go by, it will be a continuing escalation of security and intrusion, amidst an ongoing dialog about our personal freedoms.
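A provenance protocol of the sort those questions imagine might, at its core, look like the sketch below: each originator tags a message with a key only they hold, and a verifier checks the tag against the claimed identity. The originator IDs, keys and registry are all hypothetical, and the sketch also shows the spoofing question directly: claiming a different originator without that originator’s key fails verification.

```python
import hashlib
import hmac

# Hypothetical per-originator keys, held by some trusted registry.
KEYS = {"alice-bci-01": b"alice-secret", "bob-bci-07": b"bob-secret"}

def tag_message(originator: str, payload: bytes) -> bytes:
    """Originator attaches a keyed tag binding the payload to their identity."""
    return hmac.new(KEYS[originator], originator.encode() + payload, hashlib.sha256).digest()

def verify_origin(originator: str, payload: bytes, tag: bytes) -> bool:
    """Check that the tag matches the claimed originator's key."""
    key = KEYS.get(originator)
    if key is None:
        return False
    expected = hmac.new(key, originator.encode() + payload, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

tag = tag_message("alice-bci-01", b"remember to buy milk")
assert verify_origin("alice-bci-01", b"remember to buy milk", tag)
# Spoofing attempt: claiming Bob as originator without Bob's key fails.
assert not verify_origin("bob-bci-07", b"remember to buy milk", tag)
```

Note what this buys and what it costs: forgery requires stealing a key, but every verifiable tag is also a traceable one, which is precisely the civil-liberties tension raised above.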
There are those who take issue with openly discussing security vulnerabilities, but this is exactly how such problems can best be resolved. In time, these weaknesses would be discovered anyway, often by those people who are most likely to exploit them. Open discussion identifies security problems as quickly as possible, allowing the risks to become known and the holes to be patched. Even more importantly, if potential holes can be identified before formal standards are established, steps can be taken to avoid them entirely.
The day is fast approaching when we’ll need to protect ourselves against many new kinds of technological vulnerability. As methods of exploitation are developed, many will follow a similar progression: from theoretical to those requiring specialized expertise to highly available user-friendly applications. In the end, it seems likely our many technological advances will march hand-in-hand with an accelerating evolution of security measures. While these measures may prove inconvenient, even to the point of impinging on civil liberties, we may deem some of them necessary if we’re to keep the future safe for Human 2.0.
Richard Yonck is a foresight analyst with Intelligent Future LLC. Writing and speaking about future technology and its potential impacts on society, Richard is especially interested in the evolving relationship between technology and intelligence.
This article originally appeared July 12, 2011