Top Five Errors in Predicting the Future

by Valkyrie Ice

Predictions of the future are always wrong. It’s a fact of life every person has to accept. No matter how thorough your research, how good your data, or how cutting-edge the technology behind your predictions, they will be wrong. Nothing will ever turn out quite the way you predicted, and history is filled with predictions that failed utterly. Even today, with some of the best minds in the world looking forward and bookshelves filled with their predictions, we know that all of them will be wrong.

But, like anything, some will be far more wrong than others. In my many years of research into technology, I’ve noticed some common mistakes that many very educated, very intelligent, and very convincing writers make; mistakes that more or less ensure their predictions, despite all the logic and research behind them, never come close to reality. Some of these errors are unavoidable to a degree, but the more aware you are of them, the better you can evaluate how “on track” a prediction might be.

Tunnel Vision

Tunnel vision is what you get when a futurist focuses on a single technological innovation and then extrapolates how the world will be changed by it. We’ve had a lot of this kind of futurism, and to be honest, it’s often necessary to limit the scope of an article to the subject under discussion. But excessive focus on a single technological development almost guarantees not only that the predictions will be far off the mark, but that five or ten years down the road, even the author may look back, scratch their head, and wonder what they were thinking.

The reason this happens is that nothing exists in a vacuum. It doesn’t matter how earthshaking a technological development might be; it’s part of a vast tapestry of technological advance that affects not only how that single technology will be developed, but even whether it will be developed. Case in point: flying cars. The fact is that we have had flying cars since the 1950s. The Avrocar was built in the late 1950s, and the Moller Skycar should have been FAA approved for testing well over a decade ago, except that there seems to be some issue about whether the vehicle is a “helicopter” or an “airplane” that has been delaying things. Flying cars are easy. Flying cars that can be driven by the average driver without a year of training, in any weather conditions, and with a high degree of safety are not. The innovation of “a flying vehicle the size of a car” was made a long time ago, but the interaction of that innovation with the real world failed to produce the “Metropolis” vision of a world where everything flies. And there are thousands of similar examples.

No one can focus on everything, but recognizing that numerous factors and other developments can and will affect any prediction can make the difference between a prediction that is merely wrong and one that is spectacularly wrong.

Ideological Slanting

I’ve said this countless times, but it never seems to make an impression on anyone: one man’s “Evil” is another man’s “Good.”

Most futurists seem to ignore this fact, but it’s a quick way to turn a prediction into wishful thinking that is never going to come true. To an extent, some bias is unavoidable. We are told from the day we are born what is “Good” and what is “Bad,” and as we grow older, some of these biases become so deeply ingrained that it’s nearly impossible to recognize them in play. Read a book by a “conservative” and the predictions will inevitably reflect one set of ideological viewpoints; one by a “progressive” will reflect another. And by and large, predictions made by someone with an ideological bias will tend to overplay any technologies they see as supporting their ideological views, and downplay any technologies they see as threatening them.

The problem is that these kinds of bias tend to be “invisible” to readers, because the readers are likely to share them. To give an example of how deeply hidden a bias can be, I’m going to describe a historical figure, then ask you to identify him based on that description.

This man commanded armies of conquest, invaded and took possession of several nations, slaughtered their entire populaces, and justified doing it by claiming that his “race” was morally and ethically superior, and that he could not allow their “impure blood” to contaminate his “Chosen People”.

It’s pretty simple and clear-cut to decide that this man is “Evil,” right? Now, what was his name?

If you said Hitler, you just became an example of a hidden ideological bias. That man was named Joshua, and his genocides are excused because of a shared ideological bias that he was doing “God’s work.”

The same holds true of most other ideological biases as well. People once believed we would never successfully create airplanes, since God hadn’t given man wings, so we weren’t “meant” to fly. In much the same way, a lot of “futurism” tends to reflect the political leanings of the author, and the more political the author is, the more slanted their predictions will tend to be. Regardless of what political ideology you hold, being aware of the biases that lurk in a prediction will allow you to better evaluate its probability of actually coming true.

Linearism

Kurzweil covered linear thinking pretty well in “The Singularity Is Near” by pointing out that most people cannot really think in terms of exponentials, which makes it hard for them to understand just how quickly technology can advance. Linearism is related to this way of thinking: many futurists tend to think in “straight lines,” where technology A leads to technology B, which leads to technology C, ad infinitum.
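
To make that gap concrete, here is a minimal sketch in Python; the numbers (a quantity that doubles every period, a ten-period horizon) are illustrative assumptions, not data from any real trend:

    def linear_forecast(current, recent_gain, periods):
        # Linearism: freeze the most recent per-period gain and project it forward.
        return current + recent_gain * periods

    def exponential_forecast(current, growth_factor, periods):
        # Exponential thinking: compound the same trend instead of freezing it.
        return current * growth_factor ** periods

    current, growth_factor, periods = 100.0, 2.0, 10
    recent_gain = current - current / growth_factor  # gain over the last period: 50.0

    print(linear_forecast(current, recent_gain, periods))         # 600.0
    print(exponential_forecast(current, growth_factor, periods))  # 102400.0

The linear projection expects a sixfold improvement over ten periods; the compounding trend, if it holds, delivers more than a thousandfold.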

The problem is that technology isn’t being developed in a linear fashion, but in a highly PARALLEL fashion. Take nanotechnology, for example. Despite Drexler’s repeated statements that diamondoid-based nanotech is not the only potential pathway to functional nanotechnology, you will still find large numbers of futurists who think that nanotech development is at a “standstill” and still many decades down the road. Because they are looking only at a single development track, they are missing the hundreds of technological breakthroughs in DNA manipulation, graphene and carbon nanotube creation and manipulation, stem cell “programming,” and even “controllable bacteria” that could be used to manipulate matter on an atomic scale. And by remaining focused on the “holy grail” of nanotech, many futurists fail to see the implications of 3D printing technologies that can build objects a layer at a time. These technologies may not be “nanotech” in the sense discussed in “Engines of Creation,” but they are still a massive step toward the “universal fabricators” of nanotech dreams, and they will only grow more capable, and able to build at smaller and smaller scales, over the coming decades. Long before we can build the diamondoid assemblers of Drexler’s original descriptions, we are very likely to have machines able to manufacture almost any arbitrary product, organic or inorganic, using a variety of techniques.

So basically, the more a futurist follows a single linear developmental path, the more likely his predictions are to stray from reality. A million lines of research are running in parallel, and any of them can cross in ways that could take a technological development from A to B, or even straight to Z, in unexpected ways.

Static Worldview

A static worldview is one of the most common failings in futurism. It’s basically what you get when a futurist projects a prediction into a future where nothing changes but the technologies he’s talking about. No matter how “world changing” the technology may be, they ignore any possible social, political, or cultural changes that may come about because of it. Kurzweil talks about VR as if it’s just a glorified video conference call. Drexler carefully avoids any mention of nanotech being used to radically change human appearance. And science fiction writers? You would think the future is just the present with day-glo special effects. A particularly amusing example is a writer who describes advanced medical abilities allowing DNA modification and wholesale body replacement, then talks about “advanced retinal scanners” and “DNA ID security,” having just described technology that makes both utterly useless.

Population is a particularly common static worldview issue I come across; in fact, it was recently brought up in an article here on H+. The static worldview assumes that POPULATION WILL ALWAYS GROW AT THE SAME RATE IT DID IN THE PAST. The reality is that population is growing rapidly only in less developed areas; in highly developed countries, growth has dropped to mere replacement, or even below it. Study after study has shown that rising standards of living lead to lower birthrates, and yet the static assumption of ever-increasing exponential population growth remains as strong as ever.
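
To see how much that assumption matters, here is a minimal sketch in Python comparing a growth rate frozen at its past value against one that tapers off as development spreads; the starting population, rate, and horizon are illustrative assumptions, not demographic data:

    population_static = population_dynamic = 7.0  # billions, illustrative start
    base_rate = 0.02                              # 2% annual growth
    years = 50

    for year in range(years):
        population_static *= 1 + base_rate                        # rate never changes
        population_dynamic *= 1 + base_rate * (1 - year / years)  # rate tapers to zero

    print(round(population_static, 1))   # ~18.8 billion
    print(round(population_dynamic, 1))  # ~11.6 billion

After fifty years, the frozen-rate model projects roughly seven billion more people than the tapering one, and the gap only widens over longer horizons.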

And population is far from the only static assumption common among futurists. While an exhaustive list is beyond the scope of this article, being aware of the “unchanging” assumptions that may lie behind a prediction will let you better judge how probable it really is.

Unrealistic Models of Human Nature

And now we come to the single most common error made by futurists: unrealistic beliefs about humanity itself; that humans are “Noble” or “Evil” or “Good” or “Bad.” The problem is that we are all of those things, and more. You see this kind of bias all the time in futurism articles: either humanity will “naturally do the right thing” and create a utopia, or it will “wallow in stupidity and greed” and create a dystopia.

Yeah, they will. All at the same time. Corporations will seek to make the world a paradise for themselves, which will be a hell for everyone else. The Wealthy will buy up every rejuvenation technology and try to hoard it. The Eugenicists will try to wipe out everyone they feel is “useless” to their visions of society.

And the naturists will try to build their paradisiacal “sustainable environment,” and the Luddites will try to prevent technological change, etc., etc., ad nauseam.

The simple truth is that humans are individuals, and every single one of us has a different view of what “paradise” is. We will undoubtedly see some of the most noble and most ignoble of human behavior on all sides as we walk into the future, and as I pointed out above, what some of us feel is “good” is what others of us view as “evil.”

But the reality is always going to end up somewhere in the middle. It will tilt radically one way or the other for a time, I won’t deny that, but one only has to study history to see that no matter how far to one side or the other things go, sooner or later they begin to move back toward the center, though what that “center” is shifts as well.

But whenever a futurist begins painting a prediction in which someone “wins,” be aware that they’ve passed far beyond the bounds of objectivity and become an “Advocate.”

Time never stands still, and any prediction that includes an “at this point we’ve won/lost” moment is a prediction with very little probability of actually coming to pass.

So the next time you read a futurist prediction, or the next time you write one, first ask yourself whether it has fallen prey to one of these all-too-easy missteps. And always remember: no prediction will ever be right; the best you can hope for is one that is less wrong.
