Should we crowdsource malicious technology remediation?

Technology cuts both ways

Most powerful tools are double-edged swords. Since information wants to be free, disruptive technologies that put huge power into the hands of individuals are going to be difficult to control. We already see this in the computer world with botnets: a handful of hackers can literally control millions of infected machines at a time. Computer security is a mess right now, but there are lessons to be learned in there somewhere. Notably, these computer security problems are going to bleed into meatspace more and more as we haphazardly stick everything in sight on the internet and leave the default passwords in place.

But we are also going to see different sorts of asymmetrical attacks in the future. Consider the scenario of bad actors using cheap autonomous drones in malicious attacks against the public, as Suarez describes in Kill Decision (which is a good book, by the way). Another scary attack would be a bioweapon created in a home lab. Even citizen science could be a double-edged sword. Instead of trying to blunt the sword, we should take advantage of the fact that there are generally more benevolent actors than malevolent ones in any given field. I don’t know if I am ready to fully subscribe to the “intelligence implies benevolence” idea, but it does seem to have some merit. After all, why mess around with randomly terrorizing a bunch of people with disruptive tech when there are so many more lucrative opportunities for intelligent sociopaths in our society? Hmm, I might need to think that idea through more.

Nonetheless, we can certainly assume that there are numerous individuals and groups currently working with cheap, widely available, dual-use technology who could be considered benevolent. It’s inevitable that they would argue amongst themselves about what benevolence actually means, but I am sure they could form alliances along a spectrum from “drones that invade privacy are bad, let’s interfere with them” to “eh, a little involuntary genetic hacking isn’t going to kill you (as long as it doesn’t actually kill you).” I’m suggesting that these benevolent groups should have coordination protocols so that they can work together to directly address problems that arise. We should basically crowdsource malicious technology remediation.

An Open-Source Citizens’ Autonomous Drone Security Protocol
I am pretty sure that something like an open-source citizens’ autonomous drone security protocol will be incredibly useful someday. The laws on drones in US airspace are loosening up. And since makers and hobbyists are already building autonomous drones, there will be a natural user base to help counteract bad actors. Imagine a protocol that lets citizens register their homemade drones to be activated when a problem arises, similar to the way people donate computer cycles to the folding@home project.

It might work like Reddit, where anyone (or designated spotters) can create a post to report suspicious drone behavior. If enough reviewers upvote the report to confirm its authenticity, the protocol kicks in and all registered citizen drones in the area take to the sky and execute some sort of swarm-based target-location algorithm. Once the offending drone is located, it can be surveilled by reviewers and reported to the authorities. One might even consider a more aggressive protocol (with teeth) that provides a mechanism for the citizen drones to disable or even destroy the offending drone.
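To make the idea concrete, here is a minimal sketch of how the report/upvote/activation loop might look. Everything here is an assumption for illustration: the names (DroneRegistry, Report, REVIEW_THRESHOLD), the upvote threshold, and the crude proximity check are all made up, not an existing system or API.

```python
from dataclasses import dataclass

# Hypothetical number of reviewer upvotes needed before the swarm launches.
REVIEW_THRESHOLD = 5

@dataclass
class Report:
    """A citizen's post reporting suspicious drone behavior."""
    location: tuple  # (lat, lon) of the sighting
    upvotes: int = 0

@dataclass
class CitizenDrone:
    """A homemade drone opted in to the protocol, folding@home-style."""
    owner: str
    location: tuple  # (lat, lon) of its home base
    active: bool = False

class DroneRegistry:
    """Tracks registered drones and open reports; activates nearby drones
    once a report has been confirmed by enough reviewers."""

    def __init__(self):
        self.drones = []
        self.reports = []

    def register(self, drone):
        self.drones.append(drone)

    def post_report(self, report):
        self.reports.append(report)
        return report

    def upvote(self, report):
        report.upvotes += 1
        if report.upvotes >= REVIEW_THRESHOLD:
            self._activate_swarm(report)

    def _near(self, a, b, radius=0.1):
        # Crude flat-earth bounding-box check; fine for a sketch.
        return abs(a[0] - b[0]) <= radius and abs(a[1] - b[1]) <= radius

    def _activate_swarm(self, report):
        for drone in self.drones:
            if self._near(drone.location, report.location):
                # Drone takes to the sky and joins the search swarm.
                drone.active = True
```

A real version would obviously need authentication, spam resistance, and airspace deconfliction, but the shape is the same: registration, a confirmation threshold, then localized activation.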

Biohackers Unite!
Now consider the recent ban on H5N1 research, triggered by the development of bird flu strains that would be transmissible between mammals. Maybe that particular ban worked and maybe it didn’t. In computer security this approach would be called security through obscurity, and it is not considered very effective. The coordinated network of labs worldwide that currently work together to identify and sequence diseases ever more rapidly could be viewed as part of a more rigorous defense-in-depth strategy against biological malware. Why couldn’t independent biohacker spaces like BioCurious be linked into these networks, or form networks of their own, to respond to problems? There are a lot of smart people out there playing with this stuff.

Some of my friends have objected to this idea and think a top-down approach is better. They might suggest that drones simply be banned. But my argument is: first of all, good luck with enforcement; and second, only criminals would end up with drones then. I guess I fall in with the gun advocates on that one. Ouch. The layers of defense are fewer without citizen involvement, and if the official defenders screw up, then we all get screwed. It’s hard for big organizations like the US government to keep up with new tech (at least operationally; the research side is good). Christ, they don’t even encrypt all their drone video feeds yet. Hackerspaces on the ground are already hosting, toying with, and breaking advanced tech; that’s where I saw my first 3D printer, for example. Hackerspaces represent a global asset that could be tapped to help defend humanity from malicious actors.

[This post originally appeared on Scott's great blog here: http://oaklandfuturist.com/crowdsource-malicious-technology-remediation/]

1 Response

  1. Mark Waser says:

    Good article.

    Another approach is to add enough intelligence to every tool (drones, etc.) to have them evaluate whether actions are socially acceptable or not — or to consult trusted outside sources if something is questionable. Obviously, this leads to all sorts of governance and security issues, but, as Peter pointed out earlier, we’re reaching the stage where tools are becoming advanced enough that even idiots can quickly and easily do a lot of damage — and we *need* to do something to ameliorate this.
