An Interview by Ben Goertzel
Francis Heylighen started his career as yet another physicist with a craving to understand the foundations of the universe – the physical and philosophical laws that make everything tick. But his quest for understanding has led him far beyond the traditional limits of the discipline of physics. Currently he leads the Evolution, Complexity and COgnition group (ECCO) at the Free University of Brussels, a position involving fundamental cybernetics research cutting across almost every discipline. Among the many deep ideas he has pursued in the last few decades, one of the most tantalizing is that of the Global Brain – the notion that the social, computational and communicative matrix increasingly enveloping us as technology develops, may possess a kind of coherent intelligence in itself.
I first became aware of Francis and his work in the mid-1990s via the Principia Cybernetica project – an initiative to pursue the application of cybernetic theory to modern computer systems. Principia Cybernetica began in 1989, as a collaboration between Heylighen, Cliff Joslyn, and the late great Russian physicist, dissident and systems theorist Valentin Turchin. Then in 1993, very shortly after Tim Berners-Lee released the HTML/HTTP software framework and thus created the Web, the Principia Cybernetica website went online. For a while after its 1993 launch, Principia Cybernetica was among the largest and most popular sites on the Web. Today the Web is a different kind of place, but Principia Cybernetica remains a unique and popular resource for those seeking deep, radical thinking about the future of technology, mind and society. The basic philosophy presented is founded on the thought of Turchin and other mid-century systems theorists, who view the world as a complex self-organizing system in which complex control structures spontaneously evolve and emerge.
The concept of the Global Brain has a long history, going back to ancient ideas about society as a superorganism, and the term was introduced in Peter Russell’s 1982 book “The Global Brain”. However, the Principia Cybernetica page on the Global Brain was the first significant online resource pertaining to the concept, and remains the most thorough available resource for matters Global-Brain-ish. Francis published one of the earliest papers on the Global Brain concept, and in 1996 he founded the “Global Brain Group”, an email list whose membership includes many of the scientists who have worked on the concept of emergent Internet intelligence.
In the summer of 2001, based partly on a suggestion from yours truly, Francis organized a workshop at the Free University of Brussels – The First Global Brain Workshop (GBrain 0). This turned out to be a fascinating and diverse collection of speakers and attendees, and for me it played a critical role in helping me understand how other researchers conceived the Global Brain. My own presentation at the workshop was based on my book Creating Internet Intelligence (submitted to the publisher the previous year), which outlined my own vision of the future of the Global Brain, centered on using powerful AI systems to purposefully guide the overall intelligence of global computer networks.
In our discussions before, during and after the GB0 workshop, Francis and I discovered that our respective views of the Global Brain were largely overlapping yet significantly different, leading to many interesting conversations. So when I decided to interview Francis on the Global Brain for H+ Magazine, I knew the conversation would touch many points of agreement and also some clear issues of dissension – and most importantly, would dig deep into the innards of the Global Brain concept, one of the most important ideas for understanding our present and future world.
The global brain means many things to many people. Perhaps a good way to start is for you to clarify how you conceive it – bearing in mind that your vision has been one of those shaping the overall cultural evolution of the concept in the last decades….
The global brain (GB) is a collective intelligence formed by all people on the planet together with their technological artifacts (computers, sensors, robots, etc.) insofar as they help in processing information. The function of the global brain is to integrate the information gathered by all its constituents, and to use it in order to solve problems, both for its individual constituents and for the global collective. By “solving problems” I mean that each time an individual or collective (including humanity as a whole) needs to do something and does not immediately know how to go about it, the global brain will suggest a range of more or less adequate approaches. As the intelligence of the GB increases, through the inclusion of additional sources of data and/or smarter algorithms to extract useful information from those data, the solutions it offers will become better, until they become so good that any individual human intelligence pales in comparison.
Like all complex systems, the global brain is self-organizing: it is far too complex to be fully specified by any designer, however intelligent. On the other hand, far-sighted individuals and groups can contribute to its emergence by designing some of its constituent mechanisms and technologies. Some examples of those are, of course, the Internet, the Web, and Wikipedia.
What about the worry that the incorporation of the individual mind into the global brain could take away our freedom? Many people, when they hear about these sorts of ideas, become very concerned that the advent of such a “cognitive superorganism” above the human level would reduce their personal freedom, turning them into basically slaves of the overmind, or parts of the borg mind, or whatever.
One standard counterargument is that in the presence of a global superorganism we would feel just as free as we do now, even though our actions and thoughts would be influenced on a subtle unconscious level by the superorganism — and after all, the feeling of freedom is more a subjective construct than an objective reality.
Or if there is a decrease in some sorts of freedom coming along with the emergence of the global brain, one could view this as a gradual continuation of things that have already been happening for a while. It’s not clear that we do – in every relevant sense – feel just as free now as our ancestors did in a hunter-gatherer society. In some senses we may feel more free, in others less.
Or, you could argue that the ability to tap into a global brain on command gives a massive increase in freedom and possibility beyond the individually-constrained mental worlds we live in now.
What’s your take on all this?
For me the issue of freedom in the GB is very simple: you will get as much (or as little) as you want. We do not always want freedom: often we prefer that others make decisions for us, so that we can just follow the lead. In those situations, the global brain will make a clear recommendation that we can just follow without too much further worry. In other cases, we prefer to think for ourselves and explore a variety of options before we decide what we really want to do. In such a case too, the GB will oblige, offering us an unlimited range of options, arranged approximately in the order of what we are most likely to prefer, so that we can go as far as we want in exploring the options.
A simple illustration of this approach is how a search engine such as Google answers a query: it does not provide a single answer that you have to take or leave, it provides an ordered list of possibilities, and you scroll down as deep as you want if you don’t like the first suggestions. In practice, the search technology used by Google is already so good that in many cases you will stick with the first option without even looking at the next ones.
In practice, this means an increase in individual freedom. The global brain will not only offer more options for choice than any individual or organization before it, it will even offer the option of not having to choose, or of choosing in a very limited, relatively unreflective way, where you look at the first three options and intuitively take the third one, thinking “that’s the one!”.
Of course, in such decisions you would have been influenced to some degree at an unconscious level, but only because you didn’t want to make the effort to become conscious of it.
In principle, the GB should be able to explain or motivate its ordering and selection of options, so that you can rationally and critically evaluate it, and if necessary ignore it. But most of the time, our bounded rationality means that we won’t investigate so deeply. This is nothing new: most of our decisions have always been made in this way, simply by doing what the others do, or following the implicit hints and trails left by others in our environment. The difference characterizing the GB is that those implicit traces from the activity of others can in principle be made explicit, as the GB should maintain a trace of all the information it has used to make a decision.
Well, we don’t actually have time or space resources to become conscious of everything going on in our unconscious (even if we become mentally adept and enlightened enough to in principle extend our reflective consciousness throughout the normally unconscious portions of our minds). A person’s unconscious is influenced by all sorts of things now; and as the GB gets more and more powerful and becomes a more major part of our lives, our unconscious will be more and more influenced by the GB, and we don’t have the possibility of being conscious of all this influence, due to resource limitations.
So it does seem we will be substantially “controlled” by the GB — but the question is whether this is merely in the same sense that we are now unconsciously “controlled” by our environments.
Or alternately, as the GB gets more mature and coherent and organized, will the GB’s influence on our unconscious somehow be more coherent and intentional than our current environment’s influence on our unconscious? This is my suspicion.
In short, my suspicion is that we may well FEEL just as free no matter how mature, reflective and well-organized and goal-oriented is the GB that nudges our unconscious. But if so, this will be because our minds are so good at manufacturing the story of human freedom. In reality, it seems to me there is a difference between a fairly chaotic environment influencing our unconscious, and a highly conscious, reflective, well-organized entity influencing our unconscious. It seems to me in the latter case we are, in some sense, less free. This may not be experienced as a problem, but still it seems a possibility worth considering.
Interesting thought. My comment: our present environment, especially in the way that it is determined socially, i.e. by the reigning culture, government, market and morals, is much less chaotic than it may seem. In a given culture, we all drive on the right (or left in other countries) side of the road, speak the same language, use the same rules for spelling or grammar, follow the law, greet people by shaking their hands, interpret a smile as a sign of good will or cheerfulness, walk on the sidewalk, follow the signs, use a fork to bring food to our mouths, etc. etc.
Most of these things we do without ever thinking about it. On the cognitive level, 99.9…% of our beliefs, concepts and attitudes we have gotten from other people, and it is only exceptionally that we dare to question such seemingly obvious assumptions as “dogs bark”, “murder is bad”, “eating shit is disgusting”, “the sun comes up in the morning”, “I shouldn’t get too fat”, “1 + 1 = 2”, “a house has a roof”, etc.
All these things are implicit decisions for one interpretation or reaction rather than an infinite number of possible other ones. After all, if you think about it, it is possible to eat shit, to build houses without roofs, or to find dogs that don’t bark. “Implicit” means unconscious, and unconscious means that we cannot change them at will, since will requires conscious reflection. Therefore, culture very strongly limits our freedom, with hardly anybody being aware of it.
I myself became aware of the existence of these subconscious biases (or “prejudices” as I called them) when I was 14 years old. This led me to develop a philosophy in which anything can be (and from time to time should be) questioned, including “1 + 1 = 2” and “the sun comes up in the morning”.
Culture in that sense is already a collective intelligence or GB, except that it reacts and evolves much more slowly than the one we envisage as emerging from the Internet. As you hint at, the risk of having a more interactive GB is that people will have less time to question its suggestions. On the other hand, the GB as I envisage it is by design more explicit than the subconscious conditioning of our culture, and therefore it is easier (a) to remember that its opinions are not our own; (b) to effectively examine and analyse the rationale for these opinions, and if necessary reject them.
Again, I come to the conclusion I mentioned in my first series of answers: the degree to which the GB will limit your freedom will depend on how much you are willing to let it make decisions for you. Given my nearly lifelong habit of questioning every assumption I hear (or develop myself), I have little fear that I will turn into an unwitting slave of the GB!
Hmmm… even if that’s true, it brings up other issues, right? Your personality, like mine, was shaped when the GB was much less prominent than it is now. Maybe a society more fully dominated by the GB will be less likely to lead to the emergence of highly willful, obsessive assumption-questioners like you and me. But I hasten to add that’s not clear to me – so far the Net hasn’t made people more conformist, particularly. It’s encouraged some forms of conformism and trendiness, but it’s also fostered eccentricity to a great extent, by giving “outlier people” a way to find each other.
For instance there’s an online community of people who communicate with each other in Lojban, a speakable form of predicate logic. Before the Net, there was no practical way for such a community to thrive (even though Lojban’s predecessor Loglan was invented back in the 1950s). On the other hand, if you look at the trending topics on Twitter on a random day, it’s easy to conclude that the GB is becoming a kind of collective imbecilic mind.
It might be that the GB will give more freedom to those few who want it, but will also urge the emergence of psychological patterns causing nearly all people not to want it.
Actually this reminds me of one comment you made: “In principle, the GB should be able to explain or motivate its ordering and selection of options, so that you can rationally and critically evaluate it, and if necessary ignore it.”
But the principle isn’t necessarily the practice, in this case. Google, for instance, doesn’t want to provide this kind of explanation, because this would reveal its proprietary formulas for search ranking. So, as long as the GB is heavily reliant on commercial technologies, this sort of transparency is not likely to be there. And of course, most people don’t care that the transparency’s not there – the number of people who would make good use of an explanation of the reasoning underlying Google’s search results would be fairly small (and would consist mainly of marketers looking to do better Search Engine Optimization of their web pages). Do you see this lack of transparency as a problem? Do you think the GB would develop in a more broadly beneficial way if it could somehow be developed based on open technologies?
Commercial secrecy is indeed a major obstacle I see to the emergence of a true GB. Just as Google doesn’t reveal the justification for its search results, the algorithms Amazon et al. use to make recommendations are closely guarded. This implies:
- that it is difficult to detect self-serving manipulation (e.g. Google or Amazon might rank certain items higher because their owners have paid for the privilege),
- that it is difficult for the GB to improve itself (I have the strong suspicion that the collaborative filtering algorithms used by YouTube etc. could be made much more efficient, but I cannot show that without knowing what they are).
Again, Wikipedia, together with all the other open source communities, stands as a shining example of the kind of openness and transparency that we need.
Well, when you dig into the details of its operation, Wikipedia has a lot of problems, that I’m sure you’re aware of. It’s not particularly an ideal to be aspired to. But, I agree, in terms of its open, collaborative nature, it’s a fantastic indication of what’s possible.
Mention of transparency naturally reminds me of David Brin’s book The Transparent Society, by the way – where he develops the notion of surveillance versus sousveillance (the latter meaning that everyone has the capability to watch everyone, if they wish to; and in particular that the average person has the capability to watch the government and corporate watchers). Currently Google, for instance, can surveil us — but we cannot sousveil them or each other, except in a much more limited sense. Do you think the GB would develop in a more broadly beneficial way if it were nudged more toward sousveillance and away from surveillance?
Seems a step in the right direction, but I must admit I haven’t thought through the whole “sous-veillance” idea…
Ah – you should read Brin’s book, it’s really quite provocative. I gave a talk at AGI-09 exploring some of the possibilities for the intersection between sousveillance and AI in the future.
Moving on, then – we’ve dealt with the GB and free will, so I guess the next topic is consciousness. What about consciousness and the GB? Setting aside the problem of qualia, there’s a clear sense in which human beings have a “theater of reflective, deliberative consciousness” that rocks lack and that, for instance, worms and fish seem to have a lot less of. Do you think the Internet or any sort of currently existing “global brain” has this sort of theater of reflective consciousness? If so, to what extent? To what extent do you think a global brain might develop this kind of reflective consciousness in the future?
As the term “consciousness” is very confusing, I would immediately want to distinguish the three components or aspects that are usually subsumed under this heading: 1) subjective experience (qualia); 2) conscious awareness and reflection, as best modelled by the theory of the “global workspace” in which the brain makes decisions; 3) self-consciousness, as being critically aware of, and reflecting about, one’s own cognitive processes.
(1) is in my view much less mysterious than generally assumed, and relatively easy to implement in the Global Brain. For me, subjective experience is the implicit anticipation and evaluation that our brain makes based on (a) the presently incoming information (sensation, perception); (b) the associations we have learned through previous experience in which similar perceptions tended to co-occur with particular other phenomena (other perceptions, thoughts, emotions, evaluations…). This creates an affectively colored, fuzzy pattern of expectation in which phenomena that are associated in this way are to some degree “primed” for possible use in further reflection or action.
So you’re basically equating qualia with a certain informational process…. But doesn’t this ignore what Chalmers has called the “hard problem” — i.e. the gap between subjective experience and physical reality? Or are you saying the qualia are associated with an abstract process, which is then instantiated in physical reality in a certain way? A little more clarification on your view toward the so-called “hard problem of consciousness” might be helpful for understanding your remarks….
I’d rather not get into that, or we will be gone for hours of discussion. I consider the “hard problem” as merely a badly formulated problem. Of course, things feel different from the inside and from the outside: I cannot feel what you can feel, but as long as you behave more or less similarly to how I might behave in similar circumstances, I will assume that you have feelings (qualia) similar to mine. It really does not matter whether you are “in reality” a human being, a zombie, a robot, or a GB: what counts is how you behave…
If you really want to go deeper into this, here are some of my recent writings in which I discuss the “hard problem”: Cognitive Systems: a cybernetic perspective on the new science of the mind, and Self-organization of complex, intelligent systems: an action ontology for transdisciplinary integration .
Yes, I see…. Like our dear departed friend Valentin Turchin, you basically make the hard problem go away by assuming a monist ontology. The “action ontology” you describe is quite similar to things Val and I used to talk about (and I’m sure you guys evolved these ideas together, to some extent). You assume action as a primary entity, similarly to Whitehead with his process metaphysics (or Goethe, whose Faust said “In the Beginning was the Act!”), and then you think about states and objects and people and so forth ultimately as collections of actions.
This quote from your second link seems a critical one:
The ontology of action has no difficulty with subjective experience, and therefore it denies that there is an intrinsically “hard” problem of consciousness. First, it is not founded on the existence of independent, material objects obeying objective laws. Therefore, it has no need to reduce notions like purpose, meaning or experience to arrangements of such mechanical entities. Instead, it takes actions as its point of departure. An action, as we defined it, immediately entails the notions of awareness or sensation (since the agent producing the action needs to sense the situation to which it reacts), of meaning (because this sensation has a significance for the agent, namely as the condition that incites a specific action), and of purpose (because the action is implicitly directed towards a “goal”, which is the attractor of the action dynamics).
I can buy that, though I may interpret it a little differently than you do – to me it feels like a form of panpsychism, really. Mind is everywhere, matter is everywhere, and qualia are an aspect of everything.
Indeed, as I point out in that paper, panpsychism is a possible interpretation of my position. So is animism, the belief associated with “primitive” cultures, according to which all entities, including rocks, clouds and trees, are intentional agents. But I find such interpretations, while logically not incorrect, misleading, because they come with a baggage of irrational, mysterious, and semi-religious associations. The action ontology is really very simple, concrete and practical, and is intended for application in everyday life as well as in advanced agent-based technologies. In principle, it can even be formulated in mathematical form, as Valentin Turchin had started to do in his papers.
But what does this tell us about the qualia of the GB?
The process by which qualia emerge in the brain is not essentially different from the way collaborative filtering algorithms, after watching the choices we make (e.g. examining a number of books and music CDs on Amazon), produce an anticipation of which other items we might be interested in, and offer those as a recommendation. This is a purely subjective, fuzzy and constantly shifting list of potentially valuable options, which changes with every new choice we make. It may at this moment not “feel” anything like the qualia of our own concrete experiences, but that is mainly because these qualia reside inside our own brain, while those of the GB by definition are distributed over an amalgam of databases, hardware and people, so that no individual agent can claim to have direct access to them.
OK — so then if we deal with qualia in this sort of way, we’re still left with the problem of the theater of reflective consciousness – with the “global workspace” and with self-reflection.
The global workspace is based on the idea that difficult problems require full attention (i.e. maximal processing power) in which all specialized modules of the brain may need to be mobilized to attend to this particular problem. Reaching or interconnecting all modules at once requires a “global” (at the level of the individual brain, not at the planetary level) workspace through which such an important problem is “broadcast”, so that all modules can work on it. This implies a bottleneck, as only one problem can be broadcast at a time in the human brain. This explains the sequential nature of conscious reflection, in contrast with the fact that subconscious processing in the brain is essentially parallel in nature.
At this time, I don’t see any particular reason why the GB would develop such a bottleneck: its processing resources (billions of people and their computers) are so huge that they can deal with many difficult problems in parallel.
Hmmm…. The current GB is not organized in such a way as to explicitly attack problems massively harder than those individual humans could attack. (Though it may implicitly attack them.) But I wonder if a future GB could explicitly try to solve problems significantly bigger and harder than those that any human can solve. These would then give rise to bottlenecks such as those you describe….
This is indeed an area worthy of further investigation….
On the other hand, whether or not serious bottlenecks ever arise in GB information processing, the GB does seem to have use for some form of broadcasting: some problems may be so universal or urgent that ALL parts of the GB may need to be warned of it simultaneously. An example would be a terrorist attack of the scale of 9/11, an emerging pandemic, or contact made with an alien civilisation.
In practice, the level of broadcasting will scale with the relative importance of the problem. A revolution in a Middle Eastern country, for example, will catch the attention of most people in the Middle East, and of political and economic decision makers in most other parts of the world, but probably not of Latin American farmers. This selective broadcasting is what news media have been doing for decades, but their selection is rather biased by short-term political and economic motives. Hopefully, the emerging GB will do a better job of alerting us to events and problems outside our immediate realm of interest… One example of how this may happen is how Google or other search engines select the “most important” websites or news items (as pointed out to me by Rome Viharo).
Right – but this kind of broadcasting seems fairly heterogeneous at the moment, rather than flowing through a common hub like the global workspace or the brain’s executive networks. But as the GB evolves and deals with more complex problems on the global level, it seems possible some sort of global workspace might arise. Related to this, an idea I had some time ago – and presented at the GB0 workshop – was to use an advanced AI system as basically an engineered global workspace for the GB.
But it’s probably best not to diverge onto my AGI schemes and visions! So let’s proceed with the aspects of consciousness and their manifestation in the GB. You’ve talked about qualia, broadcasting and the global workspace — what about self-reflection in the GB?
Certainly, self-reflection appears to be a useful feature for the GB to have. Again, this does not seem so tricky to implement: we, in our role as components of the GB, are at this very moment reflecting on how the GB functions and how this functioning could be improved… Moreover, AI researchers developed programs decades ago that exhibited a limited form of self-improvement by monitoring and manipulating their own processing mechanisms.
Any specific thoughts about how self-reflection might be implemented in the GB?
Not really, except that in an older paper I sketched a simple methodology for “second-order learning”, i.e. learning not only the best values for associations between items, but the best values for the different parameters that underlie the learning algorithm, by comparing the predictions/recommendations made for different values of the parameters and seeing which fit best with reality/user satisfaction.
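As a toy illustration of this second-order learning idea (everything below is invented for illustration, not taken from the paper): a first-order learner tracks a drifting “user satisfaction” signal with a given learning rate, while a second-order loop picks the learning-rate value whose predictions best fit the observed data.

```python
# Toy sketch of "second-order learning": the first-order learner updates
# an estimate (an "association strength"); the second-order loop tunes the
# learner's own parameter by checking which value best fits reality.

def predict_series(observations, learning_rate):
    """First-order learner: exponentially smoothed one-step-ahead predictions."""
    estimate = observations[0]
    predictions = []
    for value in observations:
        predictions.append(estimate)
        estimate += learning_rate * (value - estimate)  # update the association
    return predictions

def second_order_learn(observations, candidate_rates):
    """Second-order learner: pick the parameter whose predictions fit best."""
    def squared_error(rate):
        preds = predict_series(observations, rate)
        return sum((p - v) ** 2 for p, v in zip(preds, observations))
    return min(candidate_rates, key=squared_error)

# A drifting "user satisfaction" signal: a fast learning rate tracks it best.
signal = [1.0, 1.2, 1.1, 1.4, 1.6, 1.5, 1.8, 2.0]
best_rate = second_order_learn(signal, [0.1, 0.3, 0.5, 0.7, 0.9])
print(best_rate)  # → 0.9
```

The outer loop never touches the data directly: it only evaluates the first-order learner under different parameter settings, which is exactly the “learning about learning” structure described above.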
Another possible approach may be Valentin Turchin’s approach of “metacompilation,” a direct application of metasystem transitions to programming languages (which may be extendable to sufficiently powerful AI inference engines).
Metacompilation takes a computer program and represents its run-time behavior in an abstracted form that lets it be very powerfully optimized. As you know I worked with Val and his colleagues a bit on the Java supercompiler, which was based on these principles. But to apply that sort of idea to the GB would seem to require some kind of very powerful “global brain metacompiler” oriented toward expressing the dynamics of aspects of the GB in an explicit form. Maybe something like what I was talking about before, of making a powerful AI to explicitly serve as the GB’s global workspace….
But one thing that jumps out at me as we dig into these details, is how different the GB is from the human brain. It’s composed largely of humans, yet it’s a very very different kind of system. That brings up the question of how you might compare the degree of intelligence of a global brain to that of a human. How smart is the Internet right now? How can one devise a measure of intelligence that would span different levels in this way — or do you think that’s a hopeless intellectual quest?
I rather consider it a hopeless quest. Intelligence, like complexity, is at best represented mathematically as a partial order: for two random organisms A and B (say, a hedgehog and a magpie), A may be more intelligent than B, less intelligent, or equally intelligent, but most likely they are simply incomparable. A may be able to solve problems B cannot handle, but B can find solutions that A would not have any clue about.
For such a partial order, it is impossible to develop a quantitative measure such as an IQ, because numbers are by definition fully ordered: either IQ(A) < IQ(B), IQ(A) > IQ(B), or IQ(A) = IQ(B). IQ only works for people because people are pretty similar in the type of problems they can in principle solve, so by testing large groups of people with questions that do not demand specialized knowledge you can get a relatively reliable statistical estimate of where someone is situated with respect to the average (avg(IQ) = 100), in terms of standard deviations (sigma(IQ) = 15).
There is no average or standard deviation once you leave the boundaries of the human species, so there is no basis for us to evaluate the intelligence of something as alien as a Global Brain. At most, you might say that once it is fully realized, the GB will be (much, much) more intelligent than any single human…
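A toy way to see the partial-order point (the ability sets below are invented): if each organism’s intelligence is identified with the set of problems it can solve, set inclusion orders some pairs but leaves most pairs incomparable.

```python
# Hypothetical illustration: model each agent's "intelligence" as the set
# of problems it can solve. Set inclusion then yields only a partial order.

def compare(a, b):
    """Return '<', '>', '=', or 'incomparable' for two problem sets."""
    if a == b:
        return "="
    if a < b:  # a solves strictly fewer problems than b
        return "<"
    if a > b:
        return ">"
    return "incomparable"

hedgehog = {"find grubs", "roll into ball", "hibernate"}
magpie = {"find grubs", "mimic sounds", "recognize mirror image"}

print(compare(hedgehog, magpie))             # incomparable
print(compare(hedgehog, hedgehog | magpie))  # '<'
```

No single number can encode such a relation: any numeric score would force the incomparable hedgehog/magpie pair into a false total order.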
In Shane Legg and Marcus Hutter’s definition of Universal Intelligence, they define intelligence by basically taking a weighted average over all possible problems. So the intelligence of a creature is the average over all problems of its capability at solving that problem (roughly speaking; they give a rigorous mathematical definition). But of course, this means that intelligence is relative to the mathematical “measure” used to define the weights in the weighted average. So relative to one measure, a hedgehog might be more intelligent; but relative to another, a magpie might be more intelligent. In some cases system A might be better than system B at solving every possible problem, and in that case A would be smarter than B no matter what measure you choose.
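A toy numerical sketch of that measure-relativity (this is not Legg and Hutter’s actual construction, which weights all computable environments by a complexity prior; the scores and weights below are invented): the same two agents flip rank depending on which measure defines the weighted average.

```python
# Toy illustration of measure-relative intelligence: a weighted average of
# per-problem competence, relative to a chosen probability measure.
# (Invented scores/weights; not Legg & Hutter's actual formalism.)

def intelligence(scores, measure):
    """Weighted average of per-problem scores under a given measure."""
    return sum(measure[p] * scores.get(p, 0.0) for p in measure)

hedgehog = {"foraging": 0.9, "defense": 0.9, "tool use": 0.0}
magpie = {"foraging": 0.5, "defense": 0.3, "tool use": 0.9}

survival_measure = {"foraging": 0.5, "defense": 0.4, "tool use": 0.1}
cognitive_measure = {"foraging": 0.1, "defense": 0.1, "tool use": 0.8}

# Under one measure the hedgehog ranks higher; under the other, the magpie.
print(intelligence(hedgehog, survival_measure) > intelligence(magpie, survival_measure))   # True
print(intelligence(magpie, cognitive_measure) > intelligence(hedgehog, cognitive_measure))  # True
```

Only if one agent dominated the other on every single problem would the ranking survive every choice of measure.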
This definition will not only be relative to the measure you choose, but also to the set of “all possible problems” to which you apply that measure. I do not see any objective way of establishing what exactly is in that set, since the more you know, the more questions (and therefore problems) you can conceive. Therefore, the content of that set will grow as your awareness of “possible problems” expands…
Well, from a mathematical point of view, one can just take the set of problems involved in the definition of intelligence to be the space of all computable problems – but indeed this sort of perspective comes to seem a bit remote from real-world intelligence.
But the practical key point is, you think the human brain and the GB right now are good at solving different kinds of problems, right? So in that case the assessment of which is more intelligent would depend on which problems you weight higher – and their intelligences aren’t comparable in any objective sense….
OK, so if that’s a hopeless quest, let’s move on to something else – let’s get a little more practical, perhaps. I’m curious, what technologies that exist right now do you think are pushing most effectively toward the creation/emergence of an advanced global brain?
I would mention three technologies that have been deployed extremely quickly and effectively in the last ten years:
- wikis (and related editable community websites) provide a very simple and intuitive medium for people to develop collective knowledge via the mechanism of stigmergy (activity performed by individuals leaving a trace on a shared site that incites others to add to that activity). Wikipedia is the most successful example: in ten years' time it developed from nothing into the largest public knowledge repository ever conceived, which may soon contain the sum of all human knowledge.
- collaborative filtering or recommendation systems. This is the technology (based on closely guarded algorithms) used by sites such as YouTube and Amazon to recommend additional books, videos or other items on the basis of what you liked, and what others like you have liked previously. Unlike wikis, this is a collective intelligence technology that relies on implicit data: information that was rarely consciously entered by any individual, but that can be derived relatively reliably from what that user did (such as ordering certain books, or watching certain videos rather than others). If wiki editing is similar to the rational, conscious reflection in the brain, collaborative filtering is similar to the subconscious, neural processes of selective strengthening of links and spreading activation.
- smartphones such as the iPhone, that make it possible to tap into the global brain at any time and any place. From simple person-to-person communication devices, these have morphed into universal, but still simple and intuitive interfaces that connect you to all the information that is globally available. This adds a very practical real-time dimension to GB problem-solving: when you need to get from A to B at time T, you want to know which means of transport you should take here and now; you are not interested in a full bus schedule. Thanks to built-in sensing technologies, such as a GPS, a compass, a camera and a microphone, a smartphone can first determine your local context (e.g. you are standing in front of the Opera building at sunset facing West while hearing some music playing in the background), then send that information to the GB together with any queries you may have (e.g. what is that melody? who designed that building? where can I get a pizza around here?), and finally relay the answer back to you.
Such ubiquitous access to the GB will not only help you to solve problems more quickly, but help the GB to gather more detailed and realistic data about what people do and what they need most (e.g. if many people wonder who designed that building, it may be worth installing a sign with the name of the architect, and if many people come to watch that building around sunset, it may be worth setting up bus lines that reach that destination just before sunset, and go back shortly afterwards).
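The collaborative-filtering mechanism mentioned above can be sketched minimally as follows. The users, items, and similarity measure are invented for illustration and bear no relation to the proprietary algorithms of YouTube or Amazon; the point is only that useful recommendations fall out of implicit interaction data.

```python
# Minimal user-based collaborative filtering on implicit data:
# an "interaction" simply records that a user bought/watched an item.
# All data is invented for illustration.
from math import sqrt

interactions = {  # user -> set of items they interacted with
    "alice": {"book_a", "book_b", "book_c"},
    "bob":   {"book_b", "book_c", "book_d"},
    "carol": {"book_a", "book_c"},
}

def similarity(u, v):
    """Cosine similarity between two users' interaction sets."""
    shared = len(interactions[u] & interactions[v])
    return shared / sqrt(len(interactions[u]) * len(interactions[v]))

def recommend(user):
    """Score unseen items by the similarity of the users who have them."""
    scores = {}
    for other in interactions:
        if other == user:
            continue
        for item in interactions[other] - interactions[user]:
            scores[item] = scores.get(item, 0.0) + similarity(user, other)
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("alice"))  # ['book_d'] — surfaced because similar user bob has it
```

Nobody consciously entered "alice would like book_d" anywhere; the recommendation is derived purely from traces of behaviour, which is what makes this the "subconscious" layer of the Global Brain in Heylighen's analogy.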
Note that I more or less forecast the spread of technologies (2) and (3) in my first (1996) paper on the Global Brain, but somehow neglected the (in hindsight pretty obvious) contribution of (1). On the other hand, I forecast the spread of something more akin to Semantic Web supported, AI-type inference, but that has yet to make much of a splash…
Hmmm … so why do you think the Semantic Web hasn’t flourished as you thought it would?
My own suspicion is that not many Web page authors were willing to mark up their web pages with meta-data, lacking any immediate practical reason to do so. Basically, the semantic web is only useful if a large percentage of websites use it. The more websites use it, the more useful it is, and the more incentive there is for a new website to adopt it. But there was never enough incentive to assemble the critical mass needed to kick off an exponential growth in semantic web usage.
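That critical-mass dynamic can be sketched as a toy adoption model. All parameters here (cost, benefit, step size) are invented; this is an illustration of the network effect being described, not an empirical model of semantic-web adoption.

```python
# Toy network-effect model: the benefit of adopting a standard grows
# with the fraction of sites already using it, while the markup cost
# is fixed. Below a critical mass, adoption never takes off.

def simulate(initial_fraction, benefit=2.0, cost=0.5, steps=20):
    """Deterministic adoption dynamics; parameters are illustrative."""
    f = initial_fraction
    for _ in range(steps):
        incentive = benefit * f - cost  # net payoff of adopting now
        if incentive > 0:
            f = min(1.0, f + 0.1 * incentive)  # adoption accelerates
        # if incentive <= 0, no new site adopts: adoption is frozen
    return f

print(simulate(0.05))  # 0.05 — stuck below critical mass, nothing happens
print(simulate(0.40))  # 1.0  — above critical mass, adoption snowballs
```

With these numbers the critical mass sits at a 25% adoption fraction (where benefit × fraction equals cost): start below it and the system is frozen, start above it and growth is self-reinforcing.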
On the other hand, if we had sufficiently powerful AI to mark up Web pages automatically, thus making the Web “semantic” without requiring extra human effort, that would be a different story, and we’d have a different sort of semantic Web.
What’s your take on the reasons?
I think you are partly right. Another reason is that semantic web people have generally underestimated the difficulty of building a consensual, formally structured ontology. The world tends to be much more contextual and fuzzy than the crisp categories used in logic or ontology.
Ontologies only work well in relatively restricted formal domains, such as names, addresses and telephone numbers. It already becomes much more difficult to create an ontology of professions, since new types of occupations are constantly emerging while old ones shift, merge or disappear. But if you stick to the formal domains, the semantic web approach does not do much more than a traditional database does, and therefore the added intelligence is limited.
I see the solution in some kind of a hybrid formal/contextual labelling of phenomena, where categories are to some degree fuzzy and able to adapt to changing contexts. An example of such a hybrid approach is user-added “tags”, where the same item may get many different tags that are partly similar, partly overlapping, partly independent, and where tags get a weight simply by counting the number of people who have used a particular tag. But reasoning on tag clouds will demand a more flexible form of inference than the one used in semantic networks, and more discipline from the users to come up with truly informative tags…
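The tag-weighting scheme described here (a tag's weight on an item is simply the number of users who applied it) can be sketched in a few lines; the taggings below are invented for illustration.

```python
# Weighted tag cloud: each tag's weight is the count of distinct
# taggings that used it. Data is invented for illustration.
from collections import Counter

# (user, item, tag) triples as they might come from a tagging site
taggings = [
    ("u1", "photo42", "sunset"),
    ("u2", "photo42", "sunset"),
    ("u3", "photo42", "opera house"),
    ("u4", "photo42", "sunset"),
]

def tag_cloud(item):
    """Return an item's tags ordered by how many users applied them."""
    counts = Counter(tag for _, i, tag in taggings if i == item)
    return counts.most_common()

print(tag_cloud("photo42"))  # [('sunset', 3), ('opera house', 1)]
```

Unlike a formal ontology entry, such a cloud carries graded, overlapping labels: "sunset" is strongly associated with the item, "opera house" weakly, and nothing forces the two into a single crisp category.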
And what sort of research are you working on these days? Anything global brain related, and if so in what sense?
I am presently working on three related topics:
- the paleolithic, hunter-gatherer lifestyle as a model of what humans have evolved to live like, and thus a good starting point if you want to understand how we can optimize our physical and mental health, strength and well-being;
- the concept of challenge as the fundamental driver of action and development in all agents, human as well as non-human;
- the problem of coordination in self-organization: how can a collective of initially autonomous agents learn to collaborate in the most productive way without any central supervisor telling them how to do it?
The three topics are related in that they are all applications of what I call the “ontology of challenge and action”, which sees the world as being constituted out of actions and their agents, and challenges as situations that elicit those actions. The life of a hunter-gatherer is essentially a sequence of (mostly unpredictable) challenges, mostly minor, sometimes major. In contrast, our modern civilized life has tried to maximally suppress or exclude uncontrolled challenges (such as accidents, germs, hot and cold temperatures, wild animals…). Without these challenges, the various human subsystems that evolution has produced to deal with these challenges (e.g. the immune system, muscles, fast reflexes…) remain weak and underdeveloped, leading to a host of diseases and mental problems.
The link with self-organization is that the action of one agent will in general change the environment in such a way as to produce a challenge to one or more other agents. If these agents react “appropriately”, their interaction may become cooperative or synergetic; otherwise it is characterized by friction. In the best case, patterns of synergetic interaction propagate via the challenges they produce to the whole collective, which thus starts to act in a coordinated fashion.
This topic is obviously related to the Global Brain, which is such a self-organizing collective, but whose degree of coordination is obviously still far from optimal. I don’t yet know precisely how, but I am sure that the notion of challenge will help me to better envision the technologies and requirements for such a collective coordination. One relevant concept I have called “mobilization system”: a medium that stimulates people to act in a coordinated way by providing the right level of challenge. Again, Wikipedia is a prime example. The challenge here is: can you improve in some way the page you have in front of you?
Hmmm. The notion of coordinating the GB reminds me of a broader issue regarding the degree of human coordination and planning and engineering required to bring about a maximally intelligent GB.
At the GB0 workshop in 2001, there seemed to be two major differences of opinion among participants on this (as well as very many smaller differences!). The first was whether the global brain was already present then (in 2001) in roughly the same sense it was going to be in the future; versus whether there was some major phase transition ahead, during which a global brain would emerge in a dramatically qualitatively stronger sense. The second was whether the emergence of the global brain was essentially something that was going to occur “spontaneously” via general technological development and social activity; versus the global brain being something that some group of people would specifically engineer (on top of a lot of pre-existing technological and social phenomena). Of course I’ve just drawn these dichotomies somewhat crudely, but I guess you understand the ideas I’m getting at. What’s your view on these dichotomies and the issues underlying them?
My position is nicely in the middle: either position on each of the dichotomies seems too strong, too reductionistic to me. I believe that the GB to some degree is already there in essence, and to some degree still reserves a couple of spectacular surprises for us over the coming decades. Similarly, it will to some degree emerge spontaneously from the activities of many, relatively clueless people, and to some degree be precipitated by clever engineering, inspired by the ideas of visionary thinkers such as you or me!
Another, related question is the connection between the GB and the Singularity. I take it you’re familiar with Ray Kurzweil’s and Vernor Vinge’s notion of the Singularity. What’s your current take on this notion? Is the Singularity near? As I vaguely recall, when we discussed this once before you were a little skeptical (but please correct me if I’m wrong). Max More likes to talk about a Surge rather than a Singularity — a steady ongoing growth of advanced technology, but without necessarily there being any point of extremely sudden and shocking advance. His Surge would ultimately get us to the same (radically transhuman) point as Kurzweil’s Singularity, but according to a different slope of progress. Are you perhaps more friendly to Max’s Surge notion than Ray and Vernor’s Singularity? Or do you find them both unjustifiably techno-optimistic?
I have just been invited to write a paper for a special volume that takes a critical look at the Singularity. I do not know what exactly Max More means by his Surge, but it does sound more realistic than a true Singularity. In the paper, I plan to argue that the transition to the Global Brain regime is more likely to resemble a logistic or S-curve: it starts growing nearly exponentially, then slows down to a near-linear expansion (constant growth), and finally comes to a provisional halt (no more growth).
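The S-curve trajectory being described is the standard logistic function; the parameter values below are illustrative only, chosen to show its two regimes.

```python
# Logistic (S-curve) growth: near-exponential at first, then
# near-linear, then levelling off at the carrying capacity K.
# Parameter values are illustrative, not a forecast.
from math import exp

def logistic(t, K=1.0, r=1.0, t0=0.0):
    """Logistic curve: the solution of dx/dt = r * x * (1 - x/K)."""
    return K / (1.0 + exp(-r * (t - t0)))

# Early on, each unit of time multiplies growth by roughly e**r...
early = logistic(-4.0) / logistic(-5.0)
# ...while late in the curve, growth has essentially stalled.
late = logistic(5.0) / logistic(4.0)

print(round(early, 2), round(late, 2))  # 2.69 1.01
```

The same dynamics, read off the curve: the "exponential" and "saturated" phases are not two different laws but two ends of one logistic process.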
In my own (decidedly subjective) experience, we may already be in the phase of constant growth: I have the feeling that since about the year 2000, individuals and society have been so overwhelmed by the on-going changes that their creativity and capacity for adaptation suffer, thus effectively slowing down further innovation. This doesn’t mean that we should no longer expect spectacular innovations, only that they will no longer come at an ever increasing speed.
That may seem defeatist to Singularitarians and other transhumanist enthusiasts, but I believe the basic infrastructure, technical as well as conceptual, for the Global Brain is already in place, and just needs to be further deployed, streamlined and optimized. We have only glimpsed a mere fraction of what the GB is capable of, but realizing its further potential may require fewer revolutionary innovations than one might think…
Yes, I see…. You recently wrote on a mailing list that
In summary, my position is:
- I believe in the Singularity as a near-term transition to a radically higher level of intelligence and technological power, best conceived of as a “Global Brain”
- I don’t believe in the Singularity as a near-term emergence of super-intelligent, autonomous, computers
- I don’t believe in the Singularity as a near-term acceleration towards a practically infinite speed of technological and economic progress
I think that puts it pretty clearly.
And as you know, I don’t fully agree. I agree that a Global Brain is emerging, but I see this as part of the dawning Vingean Singularity, not as an alternative. I think superintelligent computers will emerge and that eventually they will be able to operate quite autonomously of humans and human society – though initially our first superintelligent computers will probably be richly enmeshed with the Global Brain. And I do think we’ll have acceleration toward an incredibly rapid speed of technological and economic progress – though I also think that, from the perspective of human society, there’s a limit to how fast things can progress, because there’s a limit to how fast human beings can absorb and propagate change. There’s also probably a limit to how MUCH things can change for humans, given the constraint of humans remaining humans. The way I see it, at some point future history is likely to bifurcate – on the one hand you’ll have advanced AIs integrated with humans and the Global Brain, advancing at an impressive but relatively modest pace due to their linkage with humans; and on the other hand you’ll have advanced AIs detached from the human context, advancing at a pace and in a direction incomprehensible to legacy humans. Some people fear that if AIs advance in directions divergent from humanity, and beyond human ken, this will lead to the destruction of humankind; but I don’t see any clear reason why this would have to be the case.
In practical terms, I think that in the next few decades (possibly sooner!), someone (maybe my colleagues and me) is going to create human-level (and then transhuman) artificial general intelligence residing in a relatively modest-sized network of computers (connected into and leveraging the overall Internet as a background resource). Then, I think the integration of this sort of AGI into the GB is going to fundamentally change its character, and drastically increase its intelligence. And then after that, I think some AGIs will leave the Global Brain and the whole human domain behind, having used humanity and the GB as a platform to get their process of self-improvement and learning started….
I’m particularly curious for your reaction to this possibility…
My personal bias is to consider the “background resource” of knowledge available on the Internet more important than the localized AGI. Such an AGI would definitely be very useful and illuminating to have, but without the trillions of (mostly) human-generated data available via the net, it wouldn’t be able to solve many real-life problems. This perspective comes from the situated and embodied cognition critique of AI (and, by extension, AGI): real intelligence only emerges in constant interaction with a truly complex and dynamic environment. The higher the bandwidth of that interaction, the more problems can be solved, and the more pragmatically meaningful the conclusions reached by your intelligent system become.
The only practical way I see at the moment to maximize that bandwidth is to use all the globally available sensors and effectors, i.e. all human individuals supported by their smartphone interfaces, plus a variety of autonomous sensors/effectors built into the environment, as envisaged by the “ambient intelligence/ubiquitous Internet” paradigm. That means in effect that your intelligent system should be firmly rooted into the GB, extending its “feelers” into all its branches and components.
Whether your AGI system runs locally on a modest-sized network of computers, or in distributed form on the Internet as a whole, seems rather irrelevant to me: this is merely a question of hardware implementation. After all, nobody really cares where Google runs its computers: what counts is the way they sift through the data…
By the way, when discussing these issues with my colleague Mark Martin, he mentioned Watson, an IBM system perhaps not unlike what you envisage. While the IBM website is extremely vague about how Watson is supposed to answer the questions posed to it, I suspect that it too is firmly rooted into an Internet-scale database of facts, texts and observations gathered by millions of people.
Of course, IBM has reason to downplay the role of those (publicly available) data, and to emphasize the great strides they made on the level of hardware (and, to a lesser degree, software), just like you would rather focus on the advanced AGI architecture underlying your system. But my impression is that neither system would be of much practical use without that humongous database of human-collected information behind it…
Well, as you know, Watson is not an artificial general intelligence – Watson is just a fairly simple question-answering system that responds to questions by looking up information that’s already present on the Web in textual form. So, for sure, in the case of a system like Watson, the AI algorithms play a secondary role to the background knowledge. But that’s because Watson is not based on a serious cognitive architecture that tries to learn, to self-reflect, to create, to model the world and its place in the world and its relationship to others.
Systems like Watson are relatively easy to build and fun to play with, precisely because they’re just tools for leveraging the knowledge on the Web. But they’re a completely different animal from the kind of AGI system we’re trying to build in the OpenCog project, for example (or from the human brain, which also is capable of wide-ranging learning and creativity, not just matching questions against a database of previously-articulated answers).
The knowledge available on the Web will also be of tremendous use to real AGI systems – but unlike Watson these systems will do something besides just extract knowledge from the Web and respond with it appropriately. They will do more like humans do – feed the knowledge from the Web into their own internal thought processes, potentially creating ideas radically different from anything they read. Like you or me, they will question everything they read and even whether 1+1=2. What happens when systems like this become very powerful and intelligent and interact intensively with the GB is an interesting question. My view is that, at a certain point, AGI minds will come to dominate the GB’s dynamics (due to the AGIs eventually becoming more generally intelligent than humans); and that the GB will in essence serve as an incubator for AGI minds that will ultimately outgrow the human portion of the GB.
But, I know you don’t share that particular portion of my vision of the future of the GB – and I’d be the first to admit that none of us knows for sure what’s going to happen….