2019 Recap: “Vernor Vinge on the Singularity” (two decades ago)
by Natasha Vita-More
Has the technological singularity, as we understand it in 2019, changed since the late 1990s?
As a theoretical concept it has become more widely recognized. As a potential threat, it is written and talked about extensively. Because the field of narrow AI is growing, machine learning has found a place in academia, and entrepreneurs are investing in the growth of AI, tech leaders have come to the table and voiced their concerns, notably Bill Gates, Elon Musk, and the late Stephen Hawking. The concept of existential risk has taken a central position in discussions about AI, and machine ethicists are preparing their arguments toward a consensus that near-future robots will force us to rethink the exponential advances in the fields of robotics and computer science. Here it is crucial for leaders in philosophy and ethics to address what an ethical machine means and what the true goal of machine ethics is. Rather than succumbing to the flaws identified in the field of bioethics, such as moral biases and academic positioning, machine ethics can, and should, consider the Proactionary Principle.
_______________________________________________________
I met with the intriguing Vernor Vinge a little over two decades ago. Vinge (pronounced Vin-jee) is a delight. My first impression was marked by the sparkle in his eyes and welcoming smile. Suddenly, the Singularity seemed not such a distant and unsettling event, but one that was certainly plausible, given its author. We met just north of San Diego at his favorite Greek restaurant. Sitting in a warm and hospitable atmosphere, listening to insightful analogies, while admiring his acknowledgement of others whose vision influences his own, Vinge talked to me about the Singularity.
NVM: There have been some misinterpretations about the Singularity. Can you explain what you mean by the term?
VV: My use of the word “Singularity” is not meant to imply that some variable blows up to infinity. My use of the term comes from the notion that if physical progress with computation gets good enough, then we will have creatures that are smarter than human. At that point the human race will no longer be at center stage. The world will be driven by those other intelligences. This is a fundamentally different form of technical progress. The change would be essentially unknowable, unknowable in a different way than technological change has been in the past. An analogy I like to use is Mark Twain and the goldfish. If you had a time machine, you could bring Mark Twain forward into 1997. In a day or two, you could explain everything to him about how the world works (and I think he would love it!). On the other hand, if you were to bring in a goldfish and give the goldfish the same treatment — try to explain to the goldfish what’s going on in 1997 — the goldfish would remain permanently clueless. That is the difference that I see between current notions of technical progress and the transformation represented by the Singularity. Progress after the Singularity will be fundamentally and qualitatively different from progress in the past.
NVM: Have you changed your mind about when the Singularity might occur?
VV: My estimates for when the Singularity might occur have not changed. I’d be surprised if it happened before 2004 or after 2030. But I do not regard the event as being a certainty; it is simply one of the more plausible scenarios. There are a number of symptoms that we can watch for that would indicate that it is happening — and there are some symptoms that might indicate that it is not to be. I spend a lot of time thinking about mechanisms by which the Singularity might not happen.
NVM: Could anything prevent the Singularity?
VV: Of course, there are catastrophic possibilities that could prevent the Singularity. We might be hit by an asteroid, or a war could conceivably kill us all. But leaving aside such dull unpleasantness, it is interesting to imagine that it is 2050 and you have to write an essay explaining why, in retrospect, it is obvious that there never was a Singularity. I have this essay written out in my mind in several ways, because it relates to stories that I am writing. I think the most likely non-catastrophic scenario preventing the Singularity is that we fail to solve the “software complexity problem”. That is, we never figure out how to automate the solution of large, distributed problems. In this scenario, our hardware continues to improve for a while, but more and more the software engineers fail to keep pace. Eventually, we don’t have the supporting software tools needed to develop the new generations of hardware, and hardware progress gradually levels off. We get better hardware and larger memories, but only for very regular types of semiconductor designs. We get very nice consumer electronics and digital signal processors and computer memory, but we just can’t take advantage of it all to do better. And so the progress that automation is driving in medicine and in a host of other fields also begins to damp out. In this version of the Year 2050, we have something very much like the scenario Gunther Stent talks about in his book The Coming of the Golden Age: A View of the End of Progress. And in this version of the Year 2050, it is only the doddering old fogies who continue to talk (plaintively) about the Singularity.
NVM: What might happen to our transhuman culture if we don’t hit the Singularity? How might this affect aesthetics and creativity?
VV: If we do not get to the Singularity, we may come to a state in communication and cultural self-knowledge where artists can see almost everything that any human has done before — and can’t do any better. Aesthetics and artistic progress will be in a trap. Artists will be left to refine and cross-pollinate what has already been refined and cross-pollinated beyond all bounds of good taste. One of the features of such an age would be an appearance of greater and greater artistic jumble. (Stent does a very good job of describing this situation.) For as long as civilization survives, it could be a golden age. Artistically, some might consider it golden also, where every creative person can share in all humanity’s artistic history. But at the same time, art will segue into a kind of noise. This noise is a symptom of complete knowledge — where the knowers are forever trapped within the human compass.
NVM: Shall we push the Singularity Curve, or attempt to prevent it?
VV: If the Singularity can happen, then I doubt if it can be prevented. There are certain things that may or may not be possible to do with technology. If they can be done, they grant immediate and enormous advantages to the groups that bring those advances about. Trying to prevent the Singularity by laws or public disfavor is essentially futile. If such measures guarantee anything, it is that the law passers and the abstainers will be the losers.
(At the same time, a crash government program to force progress to go faster than what a myriad of independent researchers can do would probably not be helpful. There have been some spectacular examples of large sums of money being spent trying to force the technological curve, with unsatisfactory results.)
NVM: Are there constants that might remain after the Singularity?
VV: One of the reasons that I use the term “Singularity” is to invoke the notion that it is something that you can’t see into or beyond. Nevertheless, I like to think about what things would be like afterwards! (Call me inconsistent, what the heck!) There are a variety of analogies that I can come up with to imagine the situation afterwards.
After the technological Singularity, there would still be a physical world and physical laws. On the other hand, there would be technologies that apparently subvert those physical laws in ways we would have a difficult time understanding. From the standpoint of ordinary humans, it might not mean very much to say that there are still physical laws. Similar comments apply to economic laws. The place where economics (the logic of scarcity) would really apply would be among the superhumans; I think that their reach would exceed their grasp. But for ordinary humans, the world would be the creation of the superhumans, and the “economics” could be whatever the superhumans chose to make it. (We humans might have everything we can imagine — or much less.)
NVM: What might creative issues be for early posthumans? Will icons such as da Vinci be immeasurably and incomprehensibly surpassed by a posthuman “creativity augmentum?”
VV: Imagining what creativity and aesthetic issues might be for early posthumans is very intriguing. For these creatures, creativity and art might be among the most pleasurable aspects of the new existence. I believe that emotions would still be around, though more complicated and perhaps spread across distributed identities. In writing stories, I have tried to imagine emotions superhumans have that humans don’t have. Creativity may be entirely different from before, and this would depend in part on what types of emotions are available. A more concrete conclusion comes from our own past: before the invention of writing, almost every insight was happening for the first time (at least to the knowledge of the small groups of humans involved). When you are at the beginning, everything is new. In our era, almost everything we do in the arts is done with an awareness of what has been done before. In the early posthuman era, things will be new again, because anything that requires greater-than-human ability has not already been done by Homer or da Vinci or Shakespeare. (Of course, there may be other, higher creatures that have done better, and eventually the first posthuman achievements will be far surpassed. Nevertheless, this is one sense in which we may appreciate the excitement of the early post-Singular years.)
NVM: Even if we easily reach human level AI, might we not find it difficult to go further to superhuman intelligence?
VV: When I give talks about the Singularity, I cite the apparently steady improvements in computer hardware leading to a point where human-equivalent machine intelligences are possible. Then I say, if we get that far, it is plausible that quite soon thereafter the really important thing will happen: the creation of superhumanly intelligent beings (and therefore, the Singularity). Normally, the audience is most skeptical of the idea that we could ever make a machine that is a “person.” If that point is accepted (or stipulated), most people find it relatively plausible that we — or our creations — would quickly build a machine that is much smarter than a person. But occasionally I meet people who find this second step to be the less believable one. When I have to make specific arguments for it, I point to the simplest case: if we can build a human-equivalent machine, then surely we could simply make it run faster. (The analogy is with present-day CPU development, and it may have hidden flaws.) If a human-equivalent mind can run at a thousand or a million times our speed, then that seems like a kind of “weak superhumanity”. It is not unknowable; we could imagine ourselves in that context, just being able to think a lot faster than the outside world thinks. In that sense, it is an accessible way of thinking about superhuman intelligence — not an especially attractive one, but achievable.
In fact, I regard “weak superhumanity” as a limited proof-of-principle for the most literal-minded. I bet a fast-minded dog would forever be less than an ordinary human. Thus, I think there is also such a thing as “strong superhumanity”, but what could that be like —
(Wherein a wise dog speculates on what it is to be human:) Most likely, strongly superhuman critters would be highly distributed. Their notion of self would be extremely labile, perhaps a little like the hierarchical lability of corporations.
But here is a very different possibility: maybe the self-aware part of a superhuman would not be bigger than human. This situation is fairly imaginable. It’s an extension of our own situation: the part of us that is self-aware is probably a very small part of everything that is going on inside ourselves. We depend on our non-conscious facilities for many things. (It’s amusing that some of these non-self-aware facilities, such as creativity, are also cited as things that differentiate us from the “nonsentient” world of machines.)
NVM: What type of machines might we engineer and how will we better understand them?
VV: So we might get a creature that would be a lot like a human, but with extraordinarily good “intuition”. The creature would be coordinating and correlating vast amounts of information, but not in its top-level consciousness. Looking at it from the point of view of Marvin Minsky’s Society of Mind, the creature would possess an internal society extraordinarily more complicated than our own. At the top level there might be something that actually makes decisions on the basis of what comes in from the lower level agents. That apex agent itself might not appear to be much deeper than a human, but the overall organization that it is coordinating would be more creative and competent than a human.
Once we have a human-equivalent machine, I believe something better will be created very soon. Such creation does not imply understanding of course; we make things we don’t understand all the time.
NVM: And our creativity?
VV: Our own creativity is something that we will never exactly understand. (On the other hand, I’ll wager that a creature much smarter than we are could model human creativity in a way that it would find precisely predictable. But of course, its own creative powers would be murky to itself and its peers!)