I first encountered Jaron Lanier’s work when I taught his essay “One-Half of A Manifesto” to computer science students at the University of Texas at Austin. In it he argues, against most of his fellow computer scientists, that humans are not digital or biological computers and are unlikely to be replaced by computers anytime soon. Lanier rejects what he calls “cybernetic totalism,” a view comprising six beliefs:
1) That cybernetic patterns of information provide the ultimate and best way to understand reality.
2) That people are no more than cybernetic patterns.
3) That subjective experience either doesn’t exist, or is unimportant because it is some sort of ambient or peripheral effect.
4) That what Darwin described in biology, or something like it, is in fact also the singular, superior description of all creativity and culture.
5) That qualitative as well as quantitative aspects of information systems will be accelerated by Moore’s Law.
And finally, the most dramatic:
6) That biology and physics will merge with computer science (becoming biotechnology and nanotechnology), resulting in life and the physical universe becoming mercurial; achieving the supposed nature of computer software. Furthermore, all of this will happen very soon! Since computers are improving so quickly, they will overwhelm all the other cybernetic processes, like people, and will fundamentally change the nature of what’s going on in the familiar neighborhood of Earth at some moment when a new “criticality” is achieved, maybe in about the year 2020. To be a human after that moment will be either impossible or something very different than we now can know.
Lanier concludes his piece by explaining why he disagrees with many of his colleagues.
Lanier is probably correct that some rationally unjustified enthusiasm affects the transhumanist movement. Some of us may be guilty of mistaking what we wish we had, immortal consciousness, for what we actually have. As for the six basic ideas comprising cybernetic totalism, Lanier is correct that all are philosophically or scientifically debatable. His final point is difficult to assess. If humans merge with their creations, as Rodney Brooks and others suggest, then the line between machine autonomy and human responsibility will be blurred. But Lanier is right to stress that we are responsible for creating the future, and he is a welcome voice in the transhumanist movement.
Of course many people find no affinity with, or actively oppose, transhumanism. As far as I’m concerned, others can live as Amish or hunter-gatherers if they’d like, but they shouldn’t prevent the rest of us from availing ourselves of technology. As I’ve argued before in this blog and in my books, technologically guaranteed immortality will end most people’s opposition to life-extending technology anyway. When immortality is real, most will choose it rather than dying and hoping for a heavenly reward.
But for now we probably have to die. Marcus Aurelius’s words from almost two thousand years ago, written during a military campaign in the far-flung reaches of the Roman Empire, are still true for at least a little longer: “…life is warfare and a stranger’s sojourn, and after fame is oblivion.”
It is up to us to create a world where our descendants do not suffer this fate.
John G. Messerly, Ph.D. taught for many years in both the philosophy and computer science departments at the University of Texas at Austin. His most recent book is The Meaning of Life: Religious, Philosophical, Scientific, and Transhumanist Perspectives. He blogs daily on issues of futurism and the meaning of life at reasonandmeaning.com.