Starting with his seminal book, Out of Control: The New Biology of Machines, Social Systems, & the Economic World, Kevin Kelly has been showing us how biological evolution and technological evolution follow similar, intersecting patterns.
His current book, What Technology Wants, more closely explores his concept of “the technium” which Kelly describes as “…a word I’ve reluctantly coined to designate the greater sphere of technology – one that goes beyond hardware to include culture, law, social institutions, and intellectual creations of all types. In short, the Technium is anything that springs from the human mind. It includes hard technology, but much else of human creation as well. I see this extended face of technology as a whole system with its own dynamics.”
But let’s let Kelly tell it. I interviewed him via email.
H+: If I understand it, the essence of the book is that technological evolution, like biological evolution, follows patterns that make their developments inevitable and irresistible. Can you comment on that and is there a mystical element to how you view this? Is there a “logos” or some force calling evolution forward to some inevitable end point?
Kevin Kelly: I don’t think there is a mystical force calling evolution forward to some end point. In fact, I don’t see any end point in evolution. I would go even further and say the whole point of evolution is that there is no single end point. There are trajectories in evolution, but they are more like explosions outward than a climb up a ladder or a race along a line. I think these directions in evolution (which are then accelerated by technology) are pushed by self-organization. In physical terms, this long-term self-organization in the universe is driven by the accelerated local production of entropy. It begins with the billion-year self-perpetuation of stars, which are machines for creating heavy atoms out of light atoms, and continues with the self-organization of life and its self-made increasing complexity. This force does not escape entropy. It is not supernatural. But it is real. In the times when we are able to re-run history, we see the outlines of its trajectories.
H+: You go into what Freud called “civilization’s discontents” and you use the extreme example of the Unabomber and the anti-civilization anarchists. What about smaller discontents? Do you see individuals continuing to be able to opt out of the various pressures to do “what technology wants” or are we headed for a borg-like totalism?
KK: Personally I have not found it difficult to opt out of technology that I want to opt out of. Our three kids grew up in a house without TV. I’m still not twittering. I’m not much of a cell phone user (dumb phone only). I don’t own a gun. I only owned a bicycle, no car, until I was 35. Those are all technologies I wanted to minimize. It was not hard at all. There are folks (Christian Scientists) who opt out of larger systems like medical technology, or strict locavore vegans who opt out of mechanical food production, or the Amish who opt out of electricity – so these relinquishments are certainly possible. Does the technium make it easy to opt out? I would say it is biased toward opting in rather than opting out, but why not? It is interesting to me that there is actually very little technology that I want to opt out of. I would like to opt out of the technologies of warfare – say nuclear missiles – if I could. But I probably don’t have a strong enough desire, because if that were really important to me, I would move to Canada, or Switzerland, or somewhere like that. I will grant that opting out is primarily an act of identity, and so it requires a person comfortable in who they are. Technological advance is constantly challenging the nature of who we are. Every new invention in robots or AI or genetics nibbles away at our previous conceptions of what it means to be a human, and so the technium makes it harder to know who we are and in that sense may make it harder to opt out. So I would say the technium is biased against opting out, but it remains possible for those who want to minimize the amount of technology in their own lives, as I do.
H+: I kept thinking of the famous “It steam-engines when it comes steam-engine time” quote from Charles Fort. There are these simultaneous emergences in technological invention. Can you say a bit about this? And is there any way we can use this to help guide us toward a desirable future?
KK: I love Fort’s quote and wish I had used it in the book! I think it is accurate. It does indeed network when it comes network time. We can use these kinds of convergences in the technium to help guide us toward a desirable future in this way: rather than deny or refuse technologies whose time has come, let us embrace them with reasonable regulation, adequate education, sensible norms, mindful vigilance, and constant smart engagement. Otherwise, these emergent technologies will be left to the completely ad hoc pressures of pure marketism. By engaging with embryonic technologies as soon as possible, we can draw out their most convivial expressions while they are still easy to “move.”
H+: You indicate that most leapfrogging is impossible… that technology proceeds through natural stages of evolution. Can we really afford to have the whole world go through industrialism? Do you think we can consciously leapfrog, at least in some cases?
KK: I certainly hope we can find a way to deliberately leapfrog. It would be wonderful, but so far I have been unable to find any evidence that we can skip stages. We have not so far, even in biology. Some might hope that human adults can be made by skipping adolescence, but I have my doubts. There are some definitional issues, too. What is “it” that is leapfrogging? Is it a people, a tribe, a culture, or a place? I think small units of all of the above can “leapfrog” by borrowing the technology of a larger tribe, culture, or place, without ever making it theirs. A station in Antarctica (a place) that has never had industry can acquire a digital economy with a minimum of mass production, but even that requires some diesel engines, pre-fab construction, etc.
I think the best we may be able to do is to fast-forward through a stage with great velocity. Zip through industrialism at an accelerated and guided pace, or more likely, in tandem with the new.
H+: You make the point that as much as technology increases choices, it also increases dangers and that living in this future will require constant vigilance. Sounds like a high anxiety lifestyle. Can we hope to outsource some of this stuff to trustworthy intelligent machines? Do we really personally need to maintain a state of hyper-vigilance?
KK: Most of the problems in the world today are caused by old technologies, and most of the problems in the future will be caused by technologies of today. Therefore most problems are technogenic. But at the same time I believe that the solution to every problem will be a new technology (thus perpetuating the never-ending cycle). It must be so, just as the solution to every bad idea is a better new idea rather than fewer ideas. So we need new technologies to help us remain vigilant over old technologies. And we’ll use new technologies (such as recommendation engines) to handle the overabundance of new technologies. These new technologies of oversight will of course breed new problems (in shared responsibility), and the only way we’ll overcome those new problems is… with more improved technology.
H+: A related question… this whole movement toward the quantified self and self-tracking… does the future favor the sort of person who wants to constantly chart and keep track of everything?
KK: Currently the only folks who are tracking themselves quantitatively are slightly obsessive, slightly nerdy. But like many nerdy pursuits before it – say typography, cartography, statistics – self-tracking is on its way to becoming the new normal. Everyone knows about kerning and fonts today, or about mapping coordinates, or about baseball statistics. Soon everyone will be self-tracking.
H+: H+ Magazine represents the transhumanist or human enhancement movement. What does that mean to you and what do you think about it?
KK: As far as I can tell I am a transhumanist in that I believe that humans are self-created – that we invented our own humanity, that our humanity is our greatest invention, and that we are not done inventing ourselves yet. I see humans as a mid-point in the long-term evolution of order in this corner of the universe. We are a transitional species.
H+: In the book, you discuss many people who have charted the potential evolution of technologies into possible futures, among them Ray Kurzweil. What do you think of his view of the singularity?
KK: I am not a fan of the strong definition of Kurzweil’s singularity – that at some specific year (2039?) we’ll make an AI smarter than us that will in turn make an AI smarter than itself, and that this cycle will accelerate so that within a blink the godlike super-smart AI will solve all our problems, including giving us immortality. There are a number of reasons why I reject this premise, but chief among them is that this idea is a type of “thinkism,” which says that we can solve problems simply by thinking about them, when I believe that we not only have to think but have to do actual experiments, in biological time and geological time. Ray’s response to this is that we can speed up experiments by doing them in simulation, but simulations only speed things up if they are simplified; you can speed up a simulation only if it is not as complex as what you are simulating. So the price of speed in simulations is simplification, which can lead to wrong answers. In short, we can’t “think” our way to immortality or even super-intelligence. We have to do real stuff in real time.
But I do agree with the weaker version of the singularity – that we will go through some kind of phase transition as we make a global super-organism of humanity and machines. And what it is cannot be seen from our present vantage. However, I also think the phase change itself can only be viewed in retrospect. It will be invisible to us during the transition.