Anthropic Principles and Existential Risks

Existential risks are more likely than you think.

Imagine that God is going to toss a coin. If it comes up heads, he will create 10 people in isolated numbered rooms. If it’s tails, he will create 1,000 people, similarly housed.

You learn all this from a poster on the wall of the room you have suddenly found yourself in. Though interested in the deeper meaning of your strange origin, you find yourself most curious about how God’s coin came up. Are you one of only ten people or one of a thousand?

At first you think there is a 50 percent chance of either. The coin was fair, and everyone who was created was bound to find themselves in a room like this either way. You have no new evidence.

But then you think about it a different way. All of the people who could be created, under either heads or tails, had the same chance of existing — 50 percent. Intuitively, it seems then that you have the same chance of being any of them. And virtually all of them only exist if tails came up, so your existence is strong evidence that tails did in fact come up. Further to this, you reflect that if God repeats this experiment, most people ever born will be born on those occasions when tails comes up.

Just as you are rejoicing at the realization that you are very likely to have 999 brothers and sisters, you suddenly find the first argument persuasive again. And then the second and the first. You may be relieved to read at the bottom of your poster that the answer to this question is a matter of debate among those who have spent much longer thinking about such things.

The thought experiment you just found yourself in is God’s Coin Toss, invented by the philosopher Nick Bostrom. The two answers correspond to two popular kinds of anthropic principles.
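
The second argument’s update can be made concrete with a little arithmetic. Here is a minimal sketch, using only the world labels and populations from the story: weighting each world by its number of observers turns the 50–50 prior into odds of about 99 to 1 in favour of tails.

```python
# God's Coin Toss: posterior probability of "tails" under the second
# argument, which weights each possible world by its population.
populations = {"heads": 10, "tails": 1000}
prior = {"heads": 0.5, "tails": 0.5}

# Weight each world by how many people exist in it, then normalize.
weights = {world: prior[world] * populations[world] for world in prior}
total = sum(weights.values())
posterior = {world: weights[world] / total for world in weights}

print(posterior["tails"])  # 500/505 ≈ 0.99
```

The first argument corresponds to leaving the prior untouched, since everyone observes a numbered room in either world.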

A diverting puzzle, you may say, but who cares? God is apparently not a gambling man, so this situation will never arise. And even if you do find yourself in such a situation, the world is unlikely to end if you guess wrong. But there you could be mistaken. To see the deadly fix your anthropic reasoning may illuminate, you need to know that you are in the Great Filter, and also what that is.

If you look very closely at the sky, you will see a fantastically huge number of stars, about 10^22. If you look among these stars for signs of vast alien colonization explosions sweeping out from them, you will see none. This means the chance of a given star giving rise to a civilization which visibly colonizes beyond its home star is less than minuscule.

Now picture the path of necessary steps between a lifeless star and a civilization with the power to colonize other stars. First, the lifeless star needs to have planets. At least one planet needs to give rise to life. That life needs to develop high intelligence. We just worked out that there is a tiny chance of a given star reaching the end of this path. This means at least one of these steps is very hard, and most likely several of them are. This set of steps is called the Great Filter, because stars get filtered out as they progress along the path.
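
To get a rough sense of how hard “hard” must be, here is some illustrative arithmetic. The nine-step count is an assumption for the sketch, not something the argument specifies; the only fixed input is an overall pass chance of at most 10^-22.

```python
# Illustrative Great Filter arithmetic. Assumptions: nine independent
# steps, and an overall chance of at most 1e-22 of passing them all.
n_steps = 9
total_chance = 1e-22

# Even if every step were equally hard, each would be passed rarely.
p_equal = total_chance ** (1 / n_steps)
print(p_equal)  # ≈ 0.0036, i.e. fewer than 4 in 1000 pass each step

# If eight steps were easy (90% each), the ninth must be crushing.
p_last = total_chance / (0.9 ** (n_steps - 1))
print(p_last)  # ≈ 2.3e-22
```

Either way the product is fixed: the more forgiving you make some steps, the harder at least one remaining step must be.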

The big question about the Great Filter is whether most of the hard bits are between steps that humanity has already passed, or whether a lot of difficulty remains in our future. If we have passed most of the hard parts, our star is incredibly unusual. We are alone amongst a vast number of primitive stars. This is actually the optimistic case, because if we have passed most of the filter already, we are quite likely to make it to the end of the path from here. We can expect to colonize space without much difficulty.

On the other hand, if much of the Great Filter’s strength is in steps we have yet to pass, this star’s cargo is ordinary. Aliens at our level are two-a-penny, but most of them don’t last. They get filtered later. Since in that case we have no known advantage over the masses, humanity is also very unlikely to make it through the filter.

It’s very important to make it through. The worst ways to get filtered in future steps include many human extinction risks such as disasters from nanotechnology or biotechnology. Some extinction risks could not be filters because they would destroy such a huge chunk of the universe at once that we can tell they did not obstruct life in the past. For instance, vacuum-state decay would destroy everything including the prevailing physical constants, so it is not one of the filters we are looking for.

Other future filters we may face would not immediately destroy us, but would still prevent the vast future of happy lives possible if humanity can expand beyond its home star. For instance spacefaring technology may be much harder than we imagine. Or ecological or social collapse may send technological civilization into a dark age from which it turns out to be virtually impossible to reascend.

Aliens may be colonizing the galaxy imperceptibly. This can be thought of as a large filter at the last step, the step of being visible after expanding into space. If this is the only large filter in our future, we may yet be safe, though perhaps someone else is already enjoying the universe’s lush resources. This unlikely scenario is the best available among those in which most of the filter lies in our future; such futures are much more worrying than filters in our past.

We want to know whether most of the filter is in our past or our future. Preferably, we want to know that it’s in our past. The anthropic principles I mentioned earlier can help answer this question.

Picture an array of possible universes we might be in, some with most of their filter strength before us, some with most of it after us and some with a more even mixture. Some of these worlds are more likely than others, from what we know about things like evolution and space travel. Some of these worlds have more creatures at our stage along the path to space colonization. Those where less of the filter’s strength is in steps we have passed will generally have more creatures at our level, because there will be more solar systems containing such creatures.

You are actually in something very much like God’s Coin Toss again. This time, there are many possible universes you might be in instead of just two possible worlds, and their populations span many orders of magnitude. The main difference that might matter is that these possible universes contain many other creatures at different stages of development whom you know you are not: pterodactyls, Neanderthals, or space worms, for instance. Do the anthropic principles tell us anything interesting here? Since we don’t know which, if any, of these principles is correct, let’s try both of them.

The second principle is the easier to apply. In God’s Coin Toss, it said that you are a hundred times more likely to be in the world with a hundred times more people. If you apply the same reasoning here, you find yourself much more likely to be in one of those universes where our stage of development is teeming with occupants. That is, a universe where less of the filter’s strength is in our past. This is a universe where we are more likely to be doomed.
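
A sketch of that update, with entirely made-up universes and populations (the counts and equal priors below are illustrative assumptions, not estimates): weighting by the number of creatures at our stage makes the populous worlds, the ones with most of their filter still ahead, dominate the posterior.

```python
# Second-principle update over hypothetical universes. "doomed" marks
# worlds where most of the filter's strength lies ahead of our stage.
universes = [
    {"name": "filter mostly past",  "at_our_stage": 1,         "doomed": False},
    {"name": "filter mixed",        "at_our_stage": 1_000,     "doomed": True},
    {"name": "filter mostly ahead", "at_our_stage": 1_000_000, "doomed": True},
]
prior = 1 / len(universes)  # equal priors, purely for illustration

# Weight each universe by its count of creatures at our stage.
weights = [prior * u["at_our_stage"] for u in universes]
total = sum(weights)
posterior = [w / total for w in weights]

p_doomed = sum(p for p, u in zip(posterior, universes) if u["doomed"])
print(round(p_doomed, 6))  # ≈ 0.999999: populous, doomed worlds dominate
```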

The first principle is a little more complex to apply here. In God’s Coin Toss, it said that since everyone sees what I see, I haven’t learned anything. In general, it says I’m more likely to be in worlds where a greater fraction of people see what I’m seeing. In God’s Coin Toss that fraction was one hundred percent in either world, so we didn’t update our probabilities. The situation is more complex in the Great Filter case, since there are other creatures. The fraction of observers who see what I see depends on whom I count as an observer. If I count only creatures at our own stage of development, those with technology at about our current level, then the situation is identical to God’s Coin Toss: I learn nothing and find myself no closer to doom than I previously thought. However, if I use a wider reference class, counting more kinds of creatures as observers, I will generally begin to receive bad news again. I will explain this step by step.

Suppose I include some creatures from past stages of the filter in the reference class, such as intelligent creatures with significantly less technology than us. Now, in worlds that have larger filters between them and us, there are fewer of us relative to them. That means such worlds are less likely: intuitively, in those worlds you would expect to be a common earlier creature, not a rare late one. Since we know the total strength of the filter is pretty big, past filters being weaker means future filters are likely to be stronger. So just looking at past filters, this principle says we are more likely to be in trouble.
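
The same style of calculation works here with the first principle, once the reference class includes earlier creatures. The populations below are illustrative assumptions. The likelihood of my evidence in a world is the fraction of that world’s reference class at our stage, so the world with the weak past filter (which, for a fixed total filter, must carry more strength in its future) comes out far more likely.

```python
# First-principle update with earlier creatures in the reference class.
# World A: weak filter between earlier creatures and us (many reach
# our stage). World B: strong past filter (almost none do).
worlds = {
    "A (weak past filter)":   {"earlier": 1_000, "our_stage": 100},
    "B (strong past filter)": {"earlier": 1_000, "our_stage": 1},
}
prior = {name: 0.5 for name in worlds}

def fraction_at_our_stage(counts):
    # Likelihood of seeing what we see: the share of the reference
    # class that is at our stage of development.
    return counts["our_stage"] / (counts["earlier"] + counts["our_stage"])

weights = {name: prior[name] * fraction_at_our_stage(c)
           for name, c in worlds.items()}
total = sum(weights.values())
posterior = {name: w / total for name, w in weights.items()}

print(round(posterior["A (weak past filter)"], 3))  # ≈ 0.989
```

The future-filter case works the same way, with the count of earlier creatures replaced by a count of creatures at later stages.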

Now let’s look at future filters. Suppose I include some creatures from future stages in the reference class and none from the past, such as creatures similar to us but with nanotechnology. Now worlds with larger filters between our stage and the later stages become more likely. This is because worlds with larger filters after us contain fewer creatures at those future stages relative to creatures at our current stage, so our experience of being early is proportionally more common. Again, we find ourselves more likely to be in a world with bigger filter steps in the future, and so in more danger.

We have seen that if we extend our reference class in either direction, danger looms. Naturally, if we extend it in both directions, we get a combination of these effects and danger still looms, at least larger than you previously thought. Remember that the absolute likelihood of existential risk still depends on the prior probabilities you held over the various universes, including which future filters you consider more likely, as well as on which anthropic principle you use; you have merely shifted your expectations enormously in the direction of pessimism. If, for instance, you originally thought it virtually certain that we are alone in the universe, you should continue to be fairly optimistic about our chances.

The two principles I showed you make different predictions about details such as the extent of the probability shift and the effect of timing. However, since both principles agree that we have underestimated the risk of future filters, we need not resolve the disagreement over which principle is correct to update our expectations of danger. We need only be confident that something like one of these principles is correct. While we look for a more promising principle, perhaps we should direct some of our attention to the urgent task of investigating and averting extinction risks.

9 Comments

  1. It’s curious that we need to construct such arguments to convince people that it’s a good idea to protect humanity against extinction events.

    “Regardless of how it is stated, eschatology is probably not the sanest way to think about the world ”

    So… no matter what reasons we have to think we are in danger of extinction, it would be insane to take those reasons seriously? Really?

  2. Regardless of how it is stated, eschatology is probably not the sanest way to think about the world 🙂

>>This means the chance of a given star giving rise to a civilization which visibly colonizes beyond its home star is less than minuscule.

    No, it doesn’t. The majority of those 10^22 stars are far, far away, so the information we get from them is outdated by millions or billions of years. The article implies that there should be some extremely old civilizations that we could see. It looks to me like the same old geocentric viewpoint – humanity is supposed to be an exceptional, young civilization among “normal” old ones. It is no better than to suppose that humanity is the first of civilizations to appear.

    • I thought in these terms as well, but I’ve somewhat adjusted my perspective. The universe is indeed incredibly huge, but it is still “just” 13.7 billion years old.

      Compared to that scale, our Earth has existed for a whopping third of this whole time (4.5 billion years), while the earliest cells appeared about 4 billion years ago.

      In other words, it took evolution almost a third of the time since the big bang to go from the most primitive cells to the first “intelligent” species – at least here on earth. We have little idea how likely or unlikely intelligent life actually is… a big and complex brain is energetically very costly and evolution isn’t exactly in a hurry to produce clever things.

      And apart from brains it seems that there are many other things like opposable thumbs, complex speech and social structure that are a prerequisite to get a civilization started. It may be exceptionally hard for natural selection to actually hit the “jackpot” combination required for the kind of intelligence that can produce a somewhat robust civilization.

      I don’t consider this to be a geocentric viewpoint, I am quite sure that the universe is teeming with life… but not with intelligence.

      Not long ago there were at least 5-6 “smart” hominid species on this planet, and all of them went extinct. And genetic evidence shows that at one point even our human ancestors were down to something like 3000-6000 individuals, so just one little shift in weather could have wiped us out… and then what?

      The great apes living nowadays aren’t exactly in a hurry to become clever, so if by the slightest whim of fate we would have gone extinct, it is quite possible that evolution would take another 2 billion years to hit the “jackpot” once more. Or it may even never happen again.

      In other words, the amount of stars in the universe tells us it should be teeming with life. But it may still be true, that our tiny human brains may in fact be unlikely galactic lottery winners – especially if we remember that it took an incredibly long time for us to evolve.

      Still, it feels a little presumptuous and geocentric to suppose we are the first intelligent species in the universe and I don’t really believe it – but intelligence may be very rare nonetheless.

      • I think the longer evolution goes on, the higher the chances for intelligent life to appear. Warm-blooded animals are much smarter than reptiles, and reptiles are smarter than fish. And modern fish are not the same as those in the Jurassic – today some big predatory fish are able to keep higher temperatures in the brain and the most important muscles. So the fact that we do not see any signs of alien civilization suggests that we are among the first to appear, but later we are likely to see more of them.
        —————————————————————–
        Those “smart” hominids you are talking about went extinct due to competition with our direct ancestors btw.

  4. Magnificent article, thanks for your efforts.

    I didn’t understand it completely, but let’s assume that’s because I’m tired. (I’m an optimist, so any other explanation would make me stupid or you less of a writer). I will make a second attempt to take it all in later.

  5. I’m super geeked out on the latest Journal of Cosmology. When looking at anthropomorphic consciousness shaping the planet and existential dangers in deep probability, I can’t help but think of this paper:

    “Electromagnetic Bases of the Universality of the Characteristics of Consciousness” http://journalofcosmology.com/Consciousness115.html

    It links the electromagnetic characteristics of human consciousness to the magnetosphere and interstellar dynamics.

  6. you can put all the parts of a wind up watch in a box and shake it til hell freezes but you will never get a working watch. but if you shake a box of magnets you will get a predictable configuration of parts.

    since i may be out there to some degree when i say matter is all nanobots/machines i probably disagree with the drexlerian proposal but i also might disagree when i include vast amounts of time in the mix and point to the terraforming of earth by biological machines leading to an environment leading to virtually the shape of the land and the weather and the composition of the ocean. this biological action has formed our reality as our reality has formed us. it has built a terrarium and we have evolved to exist in it. and prior to that the stars built the machines we are formed from

    these biological nanobots are hardy enough for space travel and known to exist outside the atmosphere and are blasted into space by collisions. they are everywhere and terraforming new worlds now. if viewed in time lapse it would seem to be a continuous adaptable process. and watching it you would not believe it was not engineered.

    if you include the actions of large collaborations of nanobots and given time you would see that these nanobots will build all possible things. we are those collaborations of nanobots and we will build everything possible and if it is efficient to build things at the atomic scale we will and if it is efficient to build things with large robots we will but having said all that it is still nanomachines collaborating to build a world for us to exist in and do whatever we are here for which i believe is to produce advanced intelligence and if biological is too slow we will use nano and any other to produce it. and above all we are only a collaboration of machines and everything we touch or work with is machines from smallest to largest.. machines making machines. you might say everything is biology or you might say nothing is biology but i dont think there is another choice.
