One Singularity…Or Many?

Singularity is one of those wonderful words, like democracy or virtue, which are sort of like the old ditty about the pelican, “whose beak can contain more than his belly can.” That is, these words contain within themselves an extraordinary amount of meaning, far more than even their fiercest partisans (or critics) are aware.

In the case of Singularity, those meanings begin with the obvious. For most everyone in the transhumanist community, it is defined (to quote Wikipedia) as “the hypothetical future emergence of greater-than-human intelligence through technological means.” It is that moment, probably not excessively far away, when the human will be confronted with the superhuman…a superhumanity which may be organic, or non-organic, or, and perhaps most likely of all (at least at first), an amalgam of both.

But, I think, there is an unexamined nuance here. I suspect most of us think of the Singularity as being a, well, “singular” thing. We have somewhere in the back of our minds the idea that the Singularity comes along once and only once in the lifetime of a civilization or of a species. It is like evolving warm-bloodedness, developing a central cortex, or inventing language—once done, it isn’t repeated.

Yet, I wonder if that’s the case.

I’m going to argue that a possible (albeit not inevitable) future for humanity involves not one but many Singularities, with super-intelligence obtained repeatedly, in regular cycles, perhaps interspersed with fairly dramatic Dark Ages, for so long as Homo sapiens exists.

To support this proposition, which is admittedly not intuitive, I’m going to have to work my way through several basic premises. So, please bear with me, and let’s begin with the first of them.

Premise #1: When the Singularity occurs, not everyone will take advantage of it.

Let’s assume that when super-intelligence is possible, it will be widely and easily available. That may not be the case, but I see no reason why it couldn’t be fairly democratic. It might be possible, for instance, to cheaply produce nanotechnical devices that you could ingest, and which could then swim through the bloodstream to the brain and link you directly to machine or other human intelligences. After that, particularly if the nanobots were self-replicating, the transformation of the whole human race would be largely a matter of handing out pills. The Singularity would happen in the streets…or, at least, on the street corners, in free samples given out to passersby.

But will they take them?

I submit that many people won’t. It is, after all, a fairly scary thing. Imagine having someone say to you, “Hey, here’s a cup of nanobot cola. Swig it down and by morning you’ll be completely, utterly, and totally remade. You’ll have powers beyond your wildest dreams, but you’ll never…ever…ever be the same again.” I’m betting a lot of folks would respond to that suggestion with another proposal, far more forcefully stated, about where you could stick your damn ‘bots.

Oh, by the way, that’s just ordinary folks. There will be lots of other people for whom increasing human intelligence will be not only frightening, but actually hateful, even something to be violently opposed. Think about the opposition that already exists, from Right, Left, and Center to stem cell research, or even childhood vaccinations. Or think about the more unhinged of our Neo-Luddites.

And I’ve not touched on our elite groups who would see transhumanism and the Singularity as serious threats to their own positions. You can almost already hear the leaders of various religions, thundering down from a hundred thousand different pulpits, decrying the “secular religion” of the godless transhumanists, which could actually deliver things like near immortality and the end of want, rather than promising them in some unspecified future. As for our presidents, kings, CEOs, and university deans…surely, they would be all for the enhancement of human intelligence so long as it was their monopoly, but once it was not…once the peasants could participate…’twould be a different kettle of neurons.

So, I predict that after the Singularity, we’ll see the human race divide between those people who decide to remain as they were and those who don’t. Which brings me to…

Premise #2: Transhumans will be hard to see.

The individuals who elect to be more than human will evolve very, very rapidly. They’ll be able to improve themselves continuously, after all. They’ll be adding new powers, new abilities, new facets of themselves, all the time. Which means, I suspect, that they’ll soon evolve right out of sight.

What do I mean by that? Well, as a character in one of my short stories says, consider the ant. It is a very successful being. It lived long before us. It will exist long after we’re gone. But does it see us? Does it think about us as we think about it? As a living being? As an entity with its own goals and aims?

I submit that it doesn’t. First, I’m not sure it has concepts like “entity” and “living,” but, second, even if it did, I’m pretty sure it would see us as just big, warm, moving, something-or-others. And our constructions? Our cities and farms? No different from any mountain or field.

My point, then, is that once transhumans were as far removed from us as we are from insects, they might be pretty much invisible to us. Or, more precisely, we’d see them, but not know what we were looking at.

Premise #3: The Singularity will bring social chaos.

When the Singularity occurs, particularly if those who elected to become transhuman were basically disappearing from view (a la Premise #2), society as we know it might well collapse. The population would be dividing into two radically different sorts of people. Nations, social institutions, economies, even families would be ripped apart. Indeed, there might be fairly dramatic conflicts between humans and the early transhumans, as the former attempted to prevent the latter from consolidating their place in the world.

But, whatever the process, the result will be (to quote Mr. Yeats) that “things fall apart; the centre cannot hold; / Mere anarchy is loosed upon the world.” Normal humans might well regress all the way to something like pre-industrial society.

Then, after that, they (meaning normal humans) could easily forget what had happened, particularly if records were lost during the chaos following the Singularity. After all, what would people see if they looked back in time, as it were, a century or two later? Well, they’d see governments and economies falling apart. A population crash. Incidents of civil violence. Maybe barbarians of some sort invading from the peripheries of civilization. Given that evidence, historians might look back and say, “oh, there was a period of social chaos at the end of the twenty-first century, but it wasn’t much different from similar periods of chaos that have happened before and will doubtlessly happen again.”

Which brings me to…

Premise #4: Industrial society would reappear.

No matter how total the chaos, no matter how much “anarchy is loosed upon the world,” eventually things would settle down again. Eventually, normal humans would rebuild their civilization.

That might not be easy. Depending on the degree of the chaos following the Singularity, H. sapiens could regress all the way to the Neolithic, or at least to agrarian empires. They might also be laboring under an anti-technical bias taught by their elites—machines are evil, etc. (It wouldn’t be the first time something like that happened. One of the things that kept our own, post-Roman Dark Ages so depressingly dark was the doctrine, preached widely enough by Church Fathers, that excessive concern about the physical world detracted from one’s chances for salvation.)

But, sooner or later, the pressures of the real world…of making a living…of easing one’s troubles and increasing one’s pleasures…would make themselves felt. Technologies would be rediscovered or reinvented. Civilization would once again become urban. Farmers would be once more transformed into factory workers. Oxen and horse-drawn carriages would once again give way to steam and internal combustion. And, eventually, in some lab or university, some latter-day Babbage would reinvent computing, some second Lady Ada would recreate programming, and, before much longer, a few hundred years at most, someone would have the bright idea of linking brain and CPU…

In short, a second Singularity, a second family of Transhumans, another period of chaos, and the whole cycle would start all over again.

Of course, I’m leaving out the possibility that one or more groups of Transhumans might elect to intervene and make certain that the entire human race made the upgrade at one point or another. But, they might not. Maybe there will be something in Transhuman mores that would prevent them from doing so, say, the idea that super-intelligence is like freedom, and cannot be imposed without the consent of those who receive it. Or, maybe, they’ll elect to preserve us as potential replacements should anything go wrong with one or another particular path to transhumanity. Or, finally, maybe they’d keep us around for curiosity’s sake, the way we do zoo animals. (In one of my stories, a character suggests that transhumans will move off the planet entirely, leaving normal humans behind in a sort of world-wide nature preserve, with ourselves as H. sapiens sapiens in the mist.)

In short, if any of these factors prevented transhumans from forcibly extending super-intelligence to everyone, and if any of my four premises are at all right, then I can see no reason why a cycle of Singularities would not be pretty likely. Maybe not inevitable, but likely.

Which means that the Singularity is a multiplicity, not a thing but a cycle, not a consummation but a recurrent phenomenon. Singularities will appear every few generations, as regular as clockwork, until either all of humanity is “transhumanized” or goes extinct. The analogy that almost instantly springs to mind is that other Singularity, the astrophysicist’s Big Bang, which many a scientist now suggests was only one Creation among many, and that it will be repeated in time.

That, in turn, brings up a disturbing question. To wit, how would you know whether or not you were living in a post-Singularity society? How could you tell if it had happened, particularly if, as I’ve suggested, transhumans were more or less invisible to you?

I think we can safely assume that humanity hasn’t yet had a Singularity. If there had been one, I suspect we would find the remains of a previous industrialized culture in our environment. Indeed, given the human propensity for littering on a titanic scale, it would be impossible to escape the knowledge that someone had been there before us. The tons of discarded soft drink cans, landfills full of broken computer keyboards, and pounds of pennies piled up in wishing wells, or the equivalent of all these things, would tip us off. (Though it would make a great high concept for fantasy fiction. The lost civilizations of Atlantis and Lemuria did not sink beneath the waves. They ascended into posthumanity.)

But human cultures in the future may not be so lucky.

Perhaps, then, if only as a first exercise in a brand new science, the Sociology of the Singularity, we could establish a kind of rubric by which a culture could tell if its past included a Singularity. It might look something like this:

  • If you are living in a society which emerged from a Dark Age of some sort, and,
  • You have overwhelming evidence that societies more rather than less technically advanced preceded your own, and,
  • Your society has strong (but fading) taboos regarding technology and computers and/or the use of those things to improve human intellects, and,
  • Not often, but every once in a while, there are creditable (as opposed to merely popular) reports of artifacts or individuals that appear and demonstrate inexplicable powers and behaviors, then,

You have had a Singularity.
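
Half in the spirit of that new science, the rubric is simply a conjunction of four observations, so it can be written down as a one-line test. Here is a toy sketch in Python (the function and argument names are my own invention, purely illustrative):

```python
def had_singularity(post_dark_age: bool,
                    predecessors_more_advanced: bool,
                    fading_tech_taboos: bool,
                    credible_anomalies: bool) -> bool:
    """Toy version of the rubric: a culture's past included a
    Singularity only if all four signs are present at once."""
    return all([post_dark_age,
                predecessors_more_advanced,
                fading_tech_taboos,
                credible_anomalies])

# A society showing every sign has had a Singularity:
print(had_singularity(True, True, True, True))   # True
# Remove any one sign (say, no fading taboos) and the verdict flips:
print(had_singularity(True, True, False, True))  # False
```

Each argument stands for one bullet above; a real Sociology of the Singularity would, of course, weigh evidence rather than demand a strict conjunction.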

And almost certainly, another one’s coming.

Victor Storiguard is a writer and editor whose career stretches back for over a quarter of a century. Originally a journalist writing about computers, he has now moved on to science fiction. Much of his work deals with “transhumanism” and “posthumanism.” Critics have said that his tales combine hard technology with the appeal of fairy tales and myth. He can be reached via his blog at victor-storiguard.blogspot.com

14 Comments

  1. It’s funny that a lot of the discussion boils down to a dispute over semantics. I guess that’s the problem with any newly developing subject: simply trying to get wide agreement on what the words mean. Either way, it’s a good discussion.

  2. I am highly gratified by the quantity and quality of discussion my small essay has generated. Even the negative comments are welcome. The fact of the matter is that if one says something with which no right-thinking person can honestly disagree then one is saying nothing of importance.

    Still, I will continue to hold to my thesis that a cycle of Singularities is a possible and, indeed, probable scenario. It seems to have historical precedent. Think, for example, of another revolution, industrialization. Any time a culture has shifted from the farm and the field to the factory and the city, that culture has known enormous dislocation. Civil wars, riots, revolutions, intense poverty, all of these and more are the inevitable attendants of urbanization and industrialization.

    Quite predictably, some cultures have elected to avoid industrialization entirely. However, a few centuries later, those cultures find themselves forced into it for one reason or another (if only to save themselves from colonization). And, also predictably, those cultures go through precisely the same dislocations that afflicted the nations and places that had already traveled down that same road. China is, of course, the classic case.

    In other words, we have something that looks very much like a predictable pattern of acceptance and rejection of technologies. The technologies behind the Singularity will, I suspect, know a similar fate.

    But, whether my little story is a valid prediction or merely another of my exercises in science fiction, I think at least one of my premises is proved, i.e., that the word “Singularity” is a copious tent with room for many elephants. Whatever its original meaning, it now contains several different concepts: artificial intelligence, super-intelligence, the enhancement of the human intellect, and much else besides.

    What, of course, will be interesting from a historical perspective is which of these definitions will survive the Singularity itself. Let us hope we all live long enough to find out.

  3. I think you may be overestimating the number of people who will refuse these transformations. I frequently see people fiercely advocate a Thoreau-like lifestyle and shun new technological advancements, and then reach into their pockets excitedly to read a newly received text message, not to mention that they frequently cite information to defend their argument, which they found via the internet. I believe that the anti-singularitarians will be predominantly composed of these people… “Man, it would just be great to live in Walden Pond…. Holy McFuckChrist, GTA16 just came out with a full immersion virtual reality neural hookup. Now when I run over prostitutes in my stolen corvette it looks just like real life!”. My point is that these are mostly people who will not practice what they preach.
    Which isn’t to say that I don’t think they will exist, and I actually found your analogy of the ant/human relationship very interesting. Instantly, I thought of how, to an old person who hadn’t been exposed to computers on a regular basis, to see someone operate a computer is really just to see someone stare at a screen pressing buttons. They aren’t really accomplishing anything, in the view of the non-tech oriented. I believe your prediction of technology resisters not being aware of what is happening with the rest of the population will manifest especially in our minds connecting to the internet. For people who have resisted the singularity successfully, there will be an entire world of transmitted thought that they will be completely unaware of. Again though, I don’t believe that these people will be numerous enough to be a significant population.
    These few resisters will merely be the first generation of them, and through newer generations it seems that a process of Darwinism and culture will happen. I use the word Darwinism because the generations of resisters will be generations that die, while non-resisters may remain alive indefinitely. We should also be aware of what is going on with new generations now. Every new acquaintance of members of my generation seems to be further evidence that Catholic and other religious families fail in convincing their children to follow in their footsteps. The singularity will be no different.
    I see that many at H+ disapprove of the author’s use of the word “singularity,” and I do feel compelled to think of objectors newly embracing technology as catching up to the Singularity, rather than experiencing their own private singularity, although that would be happening in a way. I think that you guys are getting a bit too distracted by trying to define the singularity, and that your disagreement comes from the fact that because the singularity hasn’t happened, it has no concrete definition, and that while the original and first-used definition may be specific, it is useless to us because our main concern is what will happen, as opposed to what Vinge or any other specific person predicted.
    While I wouldn’t think of an Amish community suddenly going cyborg as a separate singularity, I don’t think the jury should be out on whether or not there will be more than one singularity. What we call the singularity now will bring us to heights that can’t be comprehended at the present moment, but that doesn’t mean that more advanced states of being won’t exist that we, even at that point, won’t be able to comprehend. I believe this because it would seem that:

    There will be one singularity for every exponential trend.
    Every exponential trend has that crazy breaking point of rapid acceleration.

    Our ability to further technology on an exponential trend has thus far depended greatly on our ability to build things on a smaller and smaller scale.

    If Moore’s law continues until atomic precision, we will have to rely on the progress of our ability to manipulate matter on a sub-atomic level to continue this trend.

    And lastly, while promising technologies for manipulating matter on a sub-atomic level are emerging, for them to continue the current exponential trend, the performance of our sub-atomic technologies will have to be equal or near equal to our atomic technologies at the point at which atomic precision is met. Because these technologies (quantum computing, etc.) are very new, I think it’s a bit early to say that they will simply pick up where atomic computing ended, and unless that happens, there will be a gap followed by a new and exciting exponential trend, with a singularity 2.0 included.
    I’m no Ray Kurzweil and I understand that there are details of this beyond my comprehension. I don’t mean to argue that there will be more than one singularity, but merely that that depends on what happens after atomic precision, which has plenty of room for debate.

  4. 4. Why would there be a dark age? Of course the 1st post-humanity would look almost Neolithic to the next… But how would the barbarians/Luddites secede from something so superior?

    Who says they need to secede at all? They will be allowed to remain in their ‘regressed’ state and become, as the author said, a zoo exhibit of sorts. Those who shamelessly climb aboard at once are in fact lacking in individuality; it is an application of ethical superiority which further humanises.

    Transhumanising when one has not yet experienced the evolution of technology is cheating somehow. Who wants to become a super being when they have not developed the spirit, not yet learnt the tribulations of the pre-tech/industrial era ‘flavours’? Life or awareness is about experience, not ‘skipping’ classes. That itself can only breed quality through plodding; it is the journey that matters, not the destination – omnipresent and godlike but meaningless except for the journey itself, which will separate the men from the kids.

    There are 7 billion people looking for individuality even this day, and the very neo-luddite state, barbarian state or caveman state is a form of individuality for like-minded people, existing at the grace and ‘desire for variety’ of ‘something-so-superior’ (HR Giger’s stuff comes to mind, but looks poorly maintained as opposed to the grey-goo-dependent beings, which seem even less sustainable, or ‘fragmented’ and homogenous).

    Are we living in a form of matrix? Well, ‘something-so-superior’ has shown a severe surfeit of ethics and tolerance for autonomy. It is like that pet that never survived to adulthood because its owners were prodding and petting it so much the stress killed it.

    Any advice from where you stand btw on the nature of government and the ability to distribute land and wealth?

  5. I agree wholeheartedly that the buzz-word “Singularity” is very inappropriate to indicate the very profound changes resulting from the exponential development of technology.

    I prefer the use of the far more descriptive term “phase transition” for this event, which may well be upon us in only a couple of decades.

    I also agree that many of the social structures with which we are familiar will undergo radical changes. I would also add that the extinction of our species is a real possibility.

    However, Victor, you do seem to be falling into the same anthropocentric trap as the transhumanists in assuming that the new “intelligent” entities will arise at the level of the individual, whether humanoid or “robot”.

    There is very good evidence from chemistry and biology that the direction of the evolutionary process in which we find ourselves points, instead, to what is now the Internet as the next phase of this on-going process.

    This broad evolutionary model (which extends from stellar nucleosynthesis right through to the current development of technology) is outlined in “The Goldilocks Effect: What Has Serendipity Ever Done For Us?” (free download in e-book formats from the “Unusual Perspectives” website)

  6. 1. If they don’t take them, contaminate the water-supply 😉 The inequality that would result from not taking the pills would justify the means. Society would not manage to function with both normal and superior humans… Linking into the civil war scenario in 3.

    3. (skipping 2) Compare the 20th century to the 19th century and you will notice the hundreds of millions of humans that died through human intention… We’re only 11 years into the 21st and look at how many have died already… That, and a new ideology or world-changing idea is created every couple of months recently. I feel that the 21st will be even ‘worse’ if I’m honest, whether or not post-humanity is involved.

    4. Why would there be a dark age? Of course the 1st post-humanity would look almost Neolithic to the next… But how would the barbarians/Luddites secede from something so superior?

  7. I’ve always considered the Singularity ‘singular’ because it is about the creation of super-intelligence in general, not because it involves any people becoming superintelligent, or even has anything to do with people. There can only be one ‘first’ creation of super-intelligence, which, it is argued, is the kind of event that demarcates past and present like no other. Thus, I regard this article as engaging on the basis of a fundamental misapprehension regarding the concept.

    • This is an interesting point, but absolutely nothing in Vinge’s original definition precludes the possibility of more than one Singularity. (For instance there are multiple black hole singularities in the universe.)

      My view would be that this article doesn’t display a fundamental misapprehension of the Singularity, just maybe that we disagree with the author on the totality and permanence of the initial Singularity. (If you view the coming Singularity as all-encompassing and self-preserving then the cyclical view presented in this article tends to be discarded.)

      In other parts of the uni/multiverse, surely smarter-than-human intelligence has already been created. Does that mean the Singularity has already happened?

      • Yes, I maintain my point, which is more philosophical I suppose.

        The Singularity happens regardless of who witnesses it. This is like the classic tree-falling-in-the-forest problem in that way. No matter which major school of Singularity thought one adheres to, they must agree that a Singularity would make some change in the universe, even if that change is simply that at one moment the smartest being on planet earth was ‘x’, and the next the smartest being was ‘y’.
        The misapprehension is that people need to be involved at all, or notice the change. The event itself does not entail that anyone notices; it entails that an event of a specific kind happens – that being a sudden, higher level of intelligence being created on planet ‘x’. The singularity is, in the most fundamental way (as the author states), the creation of superintelligence.

        But yes, there is also disagreement between myself and the author on the implications of the Singularity and how it could be possible that people didn’t realize. It seems to me that if one follows the implications of superintelligence logically, perhaps even by deductive reasoning alone, then they will recognize that it would very likely leave a lasting and indelible mark on the universe such that nothing short of total extinction would erase the evidence of it. Further yet, anyone supposedly ignorant of the phenomenon happening (as tribes are ignorant of the Industrial Revolution), through seeking out superintelligence would undoubtedly notice that it had already happened, like tribes coming out of the bush notice that the Industrial Revolution already happened.

        Regarding Singularities happening elsewhere, yes, this article brought up this thought in my head too. I guess a conversation on that topic would have to discuss to what extent a Singularity means the whole universe waking up, as Ray Kurzweil suggests. Kurzweil thinks that if another species somewhere had reached a Singularity, we would probably know about it due to the rapidly expansive nature of an intelligence explosion and its effects. So one would have to make a case that that wouldn’t be likely, if they are to maintain the likelihood of elsewhere Singularities already passed.

        • To make my point clearer, the fundamental misapprehension is that humans can repeatedly become the second smartest species on the planet. Becoming the second smartest species on the planet is the kind of event that by its very definition can only happen once. Thus the Singularity, the creation of superintelligence, is the kind of event that can only happen once.

          • So I guess my fundamental objection here is with the use of the word ‘Singularity’ to apply to anything other than the first instance of it. If we expand the definition of the word to entail every event of creating superintelligence, then every superintelligent mind created as a consequence of the first singularity (not just later down the road, as the author portrays) would also be, by this logic, a singularity – and this kind of conceptual inflation is problematic for the very notion.

            What I suggest instead is that the author distinguish somehow between a singularity happening and the experience of a singularity happening, since what is actually described in this piece are ‘subjective singularities’ – the perception of a singularity, rather than the actual thing. The really interesting question this blog raised for me is the difference between objective and subjective singularities in this sense, and then applying that logic to the question of elsewhere singularities. If ours isn’t the first, then is it a ‘true’ singularity? I would still say yes, because the definition is that it is the creation of smarter-than-‘human’ intelligence, where by ‘human’ we mean the exact DNA of present humans, which would be near impossible to exist elsewhere; and if it did, then it would likely be that there is an exact duplicate of our scenario, in which case the elsewhere singularity would occur at the exact same time – thus it didn’t already happen, and is still a singular event, multiply instantiated.

            I am highly concerned regarding the misuse of the word, for reasons that will be published shortly on H+.

        • I reckon there could be a few reasons why we don’t see other post-singularity civilizations/beings in the universe, even though they might exist. Off the top of my head:
          (1) Super-intelligent beings are not interested in conquering their surroundings the same way sub-super-intelligent beings are. Maybe post-singularity beings find it easier to simply imagine what the rest of the universe is like than bothering to go out exploring and conquering.
          (2) Super-intelligent beings choose not to communicate with us. Maybe all super-intelligent beings see it as an obviously correct decision not to bother with sub-super-intelligent beings. Maybe they have the ability to stay hidden from us and choose to exercise that ability.
          (3) Super-intelligent beings drive down through smaller and smaller orders of magnitude, exploring the universe of small, instead of expanding out through larger and larger orders of magnitude.
          (4) Super-intelligent beings discover superluminal travel and communication, shoot up in orders of magnitude well beyond what we can presently see, and thereby become simply too big for us to see. That’s one way of interpreting Storiguard’s ant-and-humans analogy.

  8. Nice points. I like the ant-looking-at-humans analogy. Yet, it IS just an analogy. We humans can understand that some analogous situation could exist (ants looking at us –> us looking at super-intelligence). Ants cannot understand by analogy because they’re too dumb. So, we can at least grasp an idea of what super-intelligence would be like (invisible). Ants can’t even ask the question because they are totally clueless. Even so, even if we can think of super-intelligence by analogy, we still might not actually be able to perceive it directly.

    Anyhow, I don’t think you really established premise #1) When the Singularity occurs, not everyone will take advantage of it.

    Sure, there are small enclaves of humans that eschew technology (the Amish, for example). But they make up only a minuscule portion of humanity. And even they DO use technology. It’s just old technology. History shows, more or less, that eventually everybody comes around. Nobody’s standing around screaming about how bow-and-arrow and fire technology are the devil’s work anymore.

    I reckon we’ll all become super-intelligent particularly because post-human super-intelligent beings will be very, very charming and convincing and empathic. Goers will very easily convince the stayers that going is best.

    Clearly, the average human might have trouble going from 100% organic human to a 100% super-intelligent bio-mechanical computer/human thingy, all in a lazy afternoon. So, they’ll take it a bit more gradually. But they will take it. Little by little, everybody will end up through the singularity. (I write a lot about why this is so in my blog, if anybody’s interested.)

    In fact, it’s not just (post-)humans that will become super-intelligent. We’ll be bringing our pets and all other animals and plants and inanimate matter along with us. Once we can make humans super-intelligent, we’ll surely be able to do the same thing with our dogs and cats and even our GI Joes and Cabbage Patch dolls. How about bonsai-style intelligent safari animals? Narnia, anyone?

Leave a Reply