“Biological intelligence is a fleeting phase in the evolution of the universe.” – Paul Davies
Imagine it is the year 3000 C.E., and everything is the same. Our planet and our species remain fundamentally similar to the way we are today. Politics, religion, science, medicine, communication, transportation, and our socio-sexual lives are all similar. The problems and opportunities we face individually and collectively have not changed very much.
If you are like me, this would be quite a strange future.
In a 2007 Long Now Foundation article, computer scientist Vernor Vinge proposed a few thought experiments about our future. One was called “The Wheel of Time” and one was called “The Age of Failed Dreams”. Both hypothetical futures were designed to seem intuitively wrong:
1) “The Wheel of Time” scenario – humanity experiences a series of endless cycles. We would construct a global civilization similar to the one existing today, only to have it destroyed every couple of thousand years or more by a natural mega-disaster. Our system would be periodically disrupted, before we restored it to its previous level of complexity.
2) “The Age of Failed Dreams” scenario – humanity slowly realizes throughout the 21st century that we can’t accomplish advanced space travel or artificial intelligence no matter how much money we invest into either project. In this scenario, we realize that we are essentially “stranded” on Earth until our Sun dies or something else truly catastrophic causes our extinction (e.g., nearby supernova, flood basalt eruption).
Both of these futures seem intuitively wrong to me. They should seem intuitively wrong to you as well. The first scenario, I think, is possible… but just highly improbable. It would take a string of bad luck that mathematically might as well be considered impossible. The second scenario, I think, is actually out of whack with what we know about cosmic evolution. So I would contend that “The Age of Failed Dreams” is actually an impossible future.
Now let’s remember that both of these futures would not have seemed intuitively wrong to the average person born in, say, the 10th century, or the 5th century C.E. And neither one would have been difficult for someone in the Paleolithic to imagine. People in the Paleolithic never expected change, because no discernible change occurred within the scale of one’s own short life of ~20-30 years.
Although the pace of cultural and technological (from here on out “technocultural”) change has been occurring exponentially since the birth of both processes, we are just now hitting the knee of the evolutionary curve. Technocultural change now happens on scales of decades, years, and even months… and the rate of change is unquestionably accelerating.
As a result, the year 2013 C.E. would be an alien “timescape” to someone in the year 1813 C.E. But the year 20,000 B.C.E. would not be an alien timescape for someone in the year 40,000 B.C.E., even though 20,000 years separate the latter example and only 200 years separate the former example.
I think a good analogy for this is with the expansion of the universe itself. Cosmologists now know that the universe isn’t only expanding… but that the rate of the expansion is itself accelerating. In order to explain this they have had to propose the idea of dark energy to describe the cause of this increased expansion rate. Well, in a similar way, evolutionary theorists have realized that not only is technological change occurring… but the rate of that change is itself accelerating. The reasons why the rate is increasing, I believe, have a lot to do with a few well-developed anthropological, evolutionary, and cybernetic models, which I will discuss below.
But before I move on, I want to leave you with this question while you’re reading. What does this rapid technocultural change mean for the future of our species and planet? And maybe more importantly for you, what does it mean for our own lives, our institutions, our countries, and… for the problems we are likely to encounter… and the opportunities we are likely to explore?
The Future of the Biosphere
Now consider for a second the future of the biosphere and what we can say about the future of life from our knowledge of evolution. Barring a mega-catastrophe (like the ones I previously mentioned), we actually have a good enough understanding of what to expect from a complex biosphere over the course of millions of years. I believe evolutionary biologist Richard Dawkins explained our understanding of this quite well:
What I would say is that if you asked me what life is going to look like in say, ten million years or twenty million years, […] what there will be is a whole lot of different species doing pretty much the same thing as the present species are, but they’ll all be different. […] What you can predict is that there will be a similar range of species, doing a similar range of things, and that’s a fascinating thought.
Now I think this is a fascinating point, but an endless cycle of different species adapting to a similar range of predictable abilities (i.e., swimming, running, climbing, etc.) seems a little… redundant… at least to the human mind. But it is definitely true. In ten million years’ time, you wouldn’t have chimpanzees, dolphins, bats, kangaroos, penguins, etc.… but you would have something quite similar to chimpanzees, dolphins, bats, kangaroos, penguins, etc. Perhaps the future versions would have slightly different morphology, behavioural repertoires, etc. They would be quite similar, but if you entered the biosphere ten million years in the future you would notice that things had changed and that you were encountering species that were a little different.
Of course, if there was some natural catastrophe things would become evolutionarily unstable. A continent killer like a supervolcano would dramatically change biogeographic distribution and perhaps threaten entire clades of species. For example, if a supervolcano went off in Antarctica you could say goodbye to all the emperor penguins. And something like the asteroid that hit 65 million years ago would significantly alter the entire biosphere; it would then be tougher to predict the future biosphere, because entirely new clades would likely emerge with no contemporary analogue. Although I’m sure they would still be adapted to things that were surprisingly predictable.
But there is one problem with the above Dawkins quote… it doesn’t control for intelligence. And high intelligence is here now.
High intelligence is very strange. We are very strange. We have lots of evidence that adaptations like very good sight, hearing, running, swimming, etc. have evolved several times. Even more bizarre is that methods for detecting reality like echolocation or magnetoreception have evolved several times independently. But high intelligence has never evolved before on Earth. There is no need for archaeologists in the Cretaceous or the Eocene.
And don’t forget the negative data from studying our universe. We see no evidence of any intelligent patterns in our cosmos. This is why physicist Enrico Fermi asked:
Where is everyone?
There are a lot of interesting possibilities to contemplate, but we unfortunately don’t have a good answer for Fermi. Either way, I hope the picture I’m trying to paint is coming into focus. We don’t have any examples with which to compare our system’s existence. And having a sample size of one is problematic in science.
At the same time, having a well developed understanding of how a system like ours is going to evolve given deep time is necessary for our own survival. Clearly we have a big problem here.
And the problem is stranger the more you think about it. At a systems level, cosmologists have a pretty good idea of what our entire universe is going to look like in a billion, and even a trillion years. Again, biologists even have a good understanding of how our entire biosphere would look given millions of years of evolution (as long as you control for our existence). In contrast to pretty much every other known system, we just don’t have any good idea of what the human system will look like in even 100 years time!
What is going on here? Is it even possible to develop a theory that will help us explain ourselves?
The Global Brain
In my opinion, the leading hypothesis for what our system is becoming is a Global Brain. A Global Brain is a distributed intelligence emerging from the worldwide network of people and machines. In the analogy of the brain and the human system, humans would be the neurons. This is not just a metaphor. The distributed interactions of neurons produce thoughts, and these thoughts are organized in a “global workspace” (i.e., our consciousness as “global brain”). It is odd that from this distributed self-organizing network a unified and seemingly centralized stream of consciousness emerges at all. Yet here we all are with our individual streams of consciousness.
As it turns out, the individual neurons in their system behave in quite similar ways to how individual humans in our system behave. And with the emergence of the Internet as a global medium for human (and eventually cyborg/robot) communication, it is becoming quite tempting to hypothesize that when our species reaches some critical density of connection, some higher-level consciousness is going to emerge that will not only be intelligently mediating our own interactions but will also, potentially, possess a quantitatively and qualitatively different consciousness of its own. Evolution will have literally woken up. With the entire universe to follow?
If the brain metaphor is of great practical use, we could speculate about what critical density would need to be reached before we bring it into existence. For example, there are approximately 7 billion humans on the planet, but there are approximately 86 billion neurons in our brains. However, the total number of agents might not be as important, when discussing the emergence of a higher-level consciousness, as the connection density, the types of connections, or the level of activity in the system. We still don’t really know how many connections or what type of connection density will cause a transition to a Global Brain, and we don’t know how quick the “take-off” would be or whether we would even recognize it when it was here.
Global Brain as Metaphor
Various scientists and academics have – knowingly or unknowingly – started to envision humanity as forming a type of Global Brain. However, this has always only been as metaphor. As an example, here are philosopher Daniel Dennett’s thoughts on humanity as a phenomenon:
Now, for the first time in its billions of years of history, our planet is protected by far-seeing sentinels, able to anticipate danger from the distant future–a comet on a collision course, or global warming–and devise schemes for doing something about it. The planet has finally grown its own nervous system: us.
Now this is essentially what brains themselves are for. Brains are future machines. Organisms have brains so that they can anticipate danger from the future. And organisms grew their own “global brains” through the process of evolution in order to be better at anticipating dangers. This has just taken on a new level of complexity in our species.
Can you tell which images are cities and which ones are neurons? I’m sure you can, but it is impossible to miss the systems level pattern similarity here. And it doesn’t really matter what cities you use in this comparative analysis. All cities follow their own self-organizing evolutionary logic – in the same way patterns of neurons do.
But we need to go beyond metaphor. If the Global Brain idea is to be real science, we need an actual model or theory to explain it. And if we are to go beyond metaphor, the science that can help us the most is undoubtedly cybernetics.
Going “beyond metaphor” for the concept of the Global Brain is something that the Global Brain Institute in Belgium is working towards. And from what I’ve read, there has been tremendous progress in the direction of a theory. So what I want to do is incorporate this within an evolutionary anthropological framework to discuss the pathway to the Global Brain. I feel that by using this approach we will be able to understand our own system complexity and evolutionary trajectory in greater depth, and then we can make meaningful projections for what is likely to happen in the future.
As an anthropologist, I believe it makes sense that anthropological, evolutionary, and cybernetic theory be combined. In fact, anthropology was a major contributor to the formation of cybernetics as a discipline. One of the most prominent anthropologists of the 20th century, Gregory Bateson, described the field of cybernetics as:
A branch of mathematics dealing with problems of control, recursiveness, and information, focuses on forms and the patterns that connect.
And for understanding the Global Brain (and the human future), this is exactly the type of analysis we need.
The discipline of cybernetics is concerned with metasystem transitions. Cybernetics tackles the issue of metasystem transitions because evolutionary theory explores a process with no inherent directionality. Functional complexity can increase and it can decrease. Oftentimes there is a trade-off: if functional complexity increases with one particular adaptation, it will come at the expense of functional complexity in some other adaptive ability.
I can give a simple example of this with human evolution. Our distant ancestors became better adapted to walking bipedally across flat and hilly terrain over the course of millions of years. This increased functionality in regards to bipedality came at the expense of our adaptability to an arboreal lifestyle. And if you want to test this just try and race a chimpanzee up a tree and see how well you do.
But of course we must explain how higher-level order and complexity does emerge. How can different systems increase their order? This has happened several times in the history of life, and cybernetics has revealed that it is always connected to a continuum of competition and cooperation. Even though we are composed of selfish genes and we are selfish organisms, cooperation can still evolve from this baseline of selfishness. Given enough time, reciprocal altruism tends to predominate over selfishness in a system. Cooperation can be selectively advantageous.
Throughout nature there seems to be a universal quid pro quo (e.g., a favour for a favour; you scratch my back, and I’ll scratch yours). The degree to which a system cooperates is usually dictated by the communication and transportation medium the agents are within and that in turn is typically dictated by the energy source available. When the system has achieved a high enough degree of cooperation, a new level of order emerges. Individual agents in a system become more and more interconnected until they form a new higher entity.
Here are a few examples of metasystem transitions throughout the history of life:
- non-life to life
- prokaryotes to eukaryotes
- decentralized nervous system to centralized brain
- bands of fifty to societies of billions
The Infinite Adaptation
Going beyond metaphor must include a fundamental understanding of the adaptation in question: intelligence. In the past, intelligence has been overly complicated by social scientists, poorly defined by life scientists, and not seriously considered by physical scientists. But as far as I am concerned, intelligence is a problem-solving computation. Simple as that.
For an intelligent agent there will be an initial state. And then, since the entire universe is information, the agent will receive information from it (the environment). If this information could deteriorate or even threaten the existence of the agent, it will be perceived as a “problem”. The agent must process this information and solve the problem by drawing on information it already possesses (i.e., a problem that has been encountered and overcome before), by cooperating with another agent, or by creating the information (these last two depend on the ability of the agent in question). If the problem cannot be overcome, the agent will not reach the goal state, and as a result could experience deterioration (or in many cases die).
Expressed as a computation:
- Input information: Initial state, problem, question
- Output information: Goal state, solution, answer
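The input/output framing above can be sketched as a toy program. This is my own illustrative simplification, not a formal model from the literature; the state names and the `solve` function are entirely hypothetical. The agent searches its stored information (known state transitions) for a path from the initial state to the goal state, and returning `None` stands in for "the agent lacks the information and must cooperate or create it":

```python
from collections import deque

def solve(initial_state, goal_state, known_transitions):
    """Toy 'intelligence as computation': breadth-first search over the
    agent's stored information for a path from initial state to goal state."""
    frontier = deque([[initial_state]])
    visited = {initial_state}
    while frontier:
        path = frontier.popleft()
        for nxt in known_transitions.get(path[-1], []):
            if nxt in visited:
                continue
            if nxt == goal_state:
                return path + [nxt]   # problem solved: goal state reached
            visited.add(nxt)
            frontier.append(path + [nxt])
    return None  # information insufficient: cooperate with another agent or create it

# The agent's stored information: which states it knows how to move between.
transitions = {"hungry": ["forage", "ask_neighbour"],
               "forage": ["fed"],
               "ask_neighbour": ["fed"]}
print(solve("hungry", "fed", transitions))  # a shortest path to the goal state
```

The point of the sketch is only that "intelligence" here reduces to a search from input (initial state, problem) to output (goal state, solution), exactly as in the two-line computation above.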
This seems to be a universality of intelligence. And because it really is just a simple computation, it makes it all the more convincing (to me at least) that our universe is a computer simulation. But discussing that is for another day.
The intelligence adaptation is unique in the sense that in some ways it can be thought of as what separates life from non-life. Life is anti-entropic. All non-life is entropic. This nicely separates the living from the non-living, because reproduction and metabolism are not good enough descriptors of life. Fire seems to do something similar to reproduction and metabolism, but we don’t consider it living. We don’t consider it living because it is entropic (i.e., random). All life is non-random. For non-life there is no computation, because there is an initial state but no goal state. The one strange outlier that I am aware of is crystals. Crystals are considered non-living but they also exhibit non-random behaviour.
Science writer Christopher Potter wrote a great descriptor of this fuzzy “grey-area” between life and non-life in his fantastic book You Are Here:
Life is not a hard boundary between the animate and the inanimate but something diffuse like the edge of the solar system or, indeed, the edge of the universe. Life begins to look like it may be some arbitrary label we impose on a phenomenon that is not entirely discrete, and whose meaning only gradually emerges out of an evolutionary process that must, ultimately, merge with whatever descriptors we have for the smallest structures in the universe.
Well said, Potter. But however this diffuse and gradual phenomenon emerges, there seem to be two general sides to the universe coin. Physical law dictates entropy and “intelligence law” dictates anti-entropy. These two “laws” can be further unified/bridged by the force of evolution, which explains how things change. With these three phenomena together… time emerges.
I think this idea was also summarized poetically by Potter:
The universe is light that has evolved.
Back to intelligence. I think that within this framework, intelligence is infinite. The universe acts as a problem generator because of entropy, and this fundamentally causes intelligence to become more complex. Life will generate order for a period of time by creating information. Competition emerges naturally (before cooperation) because there are scarce resources (i.e., scarce order to borrow) and agents in a system have overlapping problems. When their problems overlap, competition results.
Basically, the universe is always going to be a dick. It is always going to throw problems at you. This is inescapable. And in fact, the more complex your system becomes the more problems are going to be coming your way and the more information you’re going to need to stabilize the system, which is the pressure that will create a Global Brain.
Intelligence is Distributed
The way intelligent agents go about solving problems is always in a distributed fashion. Think about our system first, and then we will use that in an example of how the brain works. Think about a collective species problem: the problem of global warming. So we have our initial state and then we have our goal state of stabilizing our climate. The information we need to stabilize our climate is distributed (i.e., one person does not have the information to stop global warming). We need to draw on the collective intelligence of the entire system. That is why we train ecologists, biologists, engineers, and a whole range of other professionals. With their collective and self-organizing distributed intelligence – the problem of global warming can, in principle, be solved. We can reach the “goal state”. And by reaching the goal state we get objectively more intelligent, because we strengthen the links between important agents. We realize that building an energy economy on fossil fuels was a bad idea and we won’t do it again (although at the time the decision was made in the 18th century, it was necessary).
In the same way, the brain solves problems in a distributed fashion. Even if you feel as though you are the centralized solver of problems for yourself… this is an illusion. The brain’s information uses you to solve problems, not the other way around. When there is input information the brain will, in a self-organizing fashion, decide which regions have the information necessary to reach the goal state.
It is interesting to note that all negative emotions (regardless of the severity) come from a failure of the brain to find the information required to reach the goal state. And your brain collectively learns in the same way as the human system does, as links between different agents (neurons) in your brain become strengthened or weakened based on previous problems solved. In fact, just as all negative emotions in our brains emerge from a failure to reach the “goal state”, so all pessimism about the state of the world emerges from our collective failure to reach the goal state (e.g., stopping global warming).
In 2012, Francis Heylighen, the director of the Global Brain Institute, began work on developing a conceptual and mathematical model for the Global Brain. It has been called the theory of challenge propagation.
The theory acknowledges that not all intelligent behaviour is centered around problem solving. Agents also follow an “in-built” value system that evolved based on the improvement of their own system (i.e., it’s good to exercise, to play, to go to a concert, to be creative, to travel, to build social connections, etc.). These aren’t “problems”. These are opportunities. When I was developing a similar model, I called them “options”. But it is the same thing.
Exploring options allows the agent to “progress” or “grow”. I have a feeling that options actually make you more conscious or self-aware. The ultimate cause of what options/opportunities an agent wants to explore is determined by the evolution of that system. For most of us some form of socialization is seen as an opportunity (to varying degrees) because we are evolved social animals.
And as intelligent agents on this continuum between problems/opportunities, we don’t like remaining at “zero” (i.e., doing nothing). If we aren’t solving a problem (challenge relaxing), we are seeking an opportunity (challenge seeking). This is why the theory is called “challenge propagation”. All intelligent agents “propagate” challenges in the system they are contained within.
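To make the propagation idea concrete, here is a toy sketch of a challenge travelling through a weighted network of agents. This is my own illustrative simplification, not Heylighen’s actual mathematical model; all agent names, weights, and the `propagate` function are hypothetical. Each agent either relaxes the challenge (if it holds the needed information) or forwards it along its strongest link, and the links on a successful path get strengthened, making the system objectively better at similar future challenges:

```python
def propagate(network, knowledge, start, challenge, learning_rate=0.2):
    """Forward a challenge along the strongest untried links until some
    agent can relax (solve) it; then strengthen the links on the path used."""
    path, agent = [start], start
    while challenge not in knowledge[agent]:
        candidates = {a: w for a, w in network[agent].items() if a not in path}
        if not candidates:
            return None, path  # challenge dies out in the network, unsolved
        agent = max(candidates, key=candidates.get)  # follow the strongest link
        path.append(agent)
    for a, b in zip(path, path[1:]):  # reinforcement: successful links strengthen
        network[a][b] += learning_rate
    return agent, path

# A tiny system: link weights stand in for attention/trust between agents.
network = {"alice": {"bob": 0.5, "carol": 0.9},
           "bob":   {"alice": 0.5},
           "carol": {"alice": 0.9, "bob": 0.3}}
knowledge = {"alice": set(), "carol": set(), "bob": {"climate_model"}}
solver, path = propagate(network, knowledge, "alice", "climate_model")
```

Here alice’s challenge reaches bob via carol, and the alice–carol and carol–bob links are both strengthened, so the next, similar challenge will travel the same route faster. That is the “getting the right information to the right agents” dynamic described below, reduced to a few lines.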
I’ll give you an example from my own life. Out of necessity I spent most of my time trying to get out of the negative (i.e., I’m trying to solve problems to reach relaxation). These problems are usually related to small challenges like writing a good paper or script or narration or acquiring a new job. They are part of larger problems I have set for myself like trying to get a Ph.D. or become a professor or start a successful YouTube channel. I’m trying to reach the “goal state”. If all of these challenges were met I wouldn’t stay at “zero”. I would challenge seek. Perhaps I would go on a trip through Asia (just to experience it – to challenge myself in a positive way).
Of course, in the past I have tried to both relax a challenge and seek a challenge simultaneously through my research field trips to Cameroon and St. Catherines Island. For me, combining challenges is ideal.
But this all depends on the agent. Some people will prioritize challenge seeking (opportunities) over challenge relaxing (solving problems). But remember that challenge relaxing must take priority when it comes to problems related to natural selection. So if I decide to say “fuck it I don’t want to do a Ph.D.”… that is perfectly fine and I immediately relax that challenge. But if I say “fuck it I don’t want to pay rent or buy groceries”… well that is just going to create more problems for me, and would first lead to my homelessness and second to my death (unless other agents relaxed the challenge for me… which is what parents do for children or what modern society does for adults who can’t take care of themselves).
Our neurons actually work in the same way in regards to challenges. And when you give it any degree of thought, how could it be otherwise? Thoughts never stop. They just simply keep coming. If our neurons just solved problems, thoughts would stop coming when the challenge had been relaxed. In this bizarre world you would only be conscious while solving problems. But that is not the case. Neurons seek challenges as well: they want to strengthen links between each other, for their own sake, just as we do. As a result, the thoughts never stop. We have actually come up with an interesting concept for when the thoughts stop… death.
So challenge propagation can actually make a system more intelligent. This is kind of what I meant earlier when I said that it doesn’t matter what city you use as a picture, because they all follow their own evolutionary logic. And it is basically summed up by saying that agents, in order to solve problems and explore opportunities, will over time find the right information and get it to the right agents. This makes the system objectively more intelligent (i.e., able to solve more problems and explore more opportunities).
Let’s explore challenge propagation a little deeper with another personal example. Think about what is happening right now. I have a problem. My problem is establishing myself as a credible scientific authority on the Global Brain. I want to become a recognized expert on the subject. But I need to challenge propagate to do this. In order to do this successfully, I need you, another intelligent agent.
But you (presumably) don’t care about me becoming an expert on the Global Brain. This is not a “problem” for you. So why are you reading this article? Why would you share it for me? Well, the reason is because you would be exploring an opportunity. It is an opportunity to perhaps learn information you hadn’t encountered before. You may “grow” as a person a little bit from reading. If you did “grow” maybe you will be more inclined to “share” it. To continue propagating my challenge in the system.
It is important to remember that this challenge propagation is weighted within the system. What this means is that different agents have differing access to other agents which could increase their problem solving abilities or increase their opportunity having abilities. That is indeed the difference between me writing and publishing this, and someone like Richard Dawkins writing and publishing this. He has a more advanced challenge propagation network established than me. So when his brain propagates a challenge, more agents will pay attention. I am still building my challenge propagation network, so fewer will pay attention. I have to earn the attention of the system. How am I doing so far?
You can do this with anything – I am simply using myself as an example. You could use this theory and apply it to your own life as well. It describes everything quite perfectly. Also, remember that neurons are doing the same thing. They are organized in weighted networked connections. Some neurons have the potential to reach millions of other neurons and some neurons have the potential to reach a few thousand or less. This will be important to remember when we talk about the actual pathway to the Global Brain.
Finally, this theory allows us to build a real model of the Global Brain. We can model our system’s intelligence. We can model our problems and opportunities in the system, and we may be able to predict when the Global Brain should emerge.
From this one thing should already be obvious… the Global Brain will be non-local.
Not only will the Global Brain not be Big Brother (as has been suggested by some)… but the Global Brain cannot be Big Brother, because that is not how higher-ordered intelligence emerges. In fact, we may have culturally constructed the fear of a “Big Brother”-like entity out of the fundamental premise that such a system would be highly unintelligent. Big Brother would suppress the action potential of us all.
All you have to do is look at North Korea and South Korea to see what distributed intelligence can do. We have a naturally occurring human science experiment right there.
And look at how unintelligent it is. It is literally a dying brain. North Korea is suffocating its neurons, all because of the selfishness of one ultra-centralized challenge propagation center.
More thoughts on North Korea later…
And more thoughts on the Global Brain later as well. Stay tuned for parts 2-5.
Cadell Last is a science writer and evolutionary theorist. He is currently working on an animated science channel with PBS Digital Studios and attempting to merge evolutionary anthropology and cybernetic theory. You can contact him on Twitter @cadelllast or Facebook fb.com/TheAdvancedApes