Cliff Joslyn on the Global Brain

Interview with Ben Goertzel

Cliff Joslyn is one of the most truly and thoroughly interdisciplinary researchers I know – and that’s saying a bunch!

Currently Cliff serves as the Chief Scientist for Knowledge Sciences at the Pacific Northwest National Laboratory (PNNL) in Seattle, Washington, and also as an Adjunct Professor of Systems Science at Portland State University. He came to PNNL from the Los Alamos National Laboratory where he also served for many years.

I first became aware of Cliff via his role in creating the Principia Cybernetica website, which in the early years of the Internet was one of the biggest and best sites online, containing wonderfully structured information about complex systems of all sorts, from basic definitions to in-depth cutting-edge research. The other co-creators of Principia Cybernetica were Francis Heylighen, whom I interviewed for H+ Magazine last year — and the late great Valentin Turchin, an amazing Russian pioneer of AI, supercompilation, the Global Brain, futurism, cybernetics, and many other things, whom I profiled for the German newspaper Frankfurter Allgemeine (and to whom I dedicated my Cosmist Manifesto).

I then intersected with Cliff at the Global Brain 0 Workshop in Brussels in 2001, where he and Francis and I and the other participants enthusiastically and wide-rangingly explored the region of idea-space surrounding the “Global Brain” concept. And shortly after that we had some wonderful opportunities for face-to-face brainstorming, when I was a professor at University of New Mexico in 2001-2002, not such a far drive from Los Alamos where Cliff was a researcher.

Cliff’s research focuses on “semantic information systems and database analysis, including computational semiotics, qualitative modeling, and generalized information theory, with applications in computational biology, infrastructure protection, homeland defense, intelligence analysis, and defense transformation.” That’s quite a mouthful, but the essence is fairly simple – he’s above all a systems theorist, who is concerned with complex systems of all sorts, especially intelligent systems. He’s working with some specific mathematical tools, as a way of modeling complex systems – including aspects of the branches of math called order theory and information theory. And he’s applying his ideas to some specific areas of importance in the world today, especially biology and national security.

Systems theory, and the closely related field of cybernetics, have contributed greatly to the development of modern science and philosophy. They are reasonably well known in Europe, but are relatively obscure in the US, at least in the sense of their role as distinct and coherent intellectual disciplines. However, systems thinking has played a big role in the development of contemporary, international understanding of the world, albeit often with slightly different nomenclature – e.g. the “complex systems” research themes promoted by the Santa Fe Institute, Stephen Wolfram and others. Modern systems theory – as practiced by Cliff and others — often combines the theoretical bent of the traditional European systems theorists with the computational approach of the American complex systems researchers.

After interviewing Francis Heylighen regarding his current take on the Global Brain and related ideas, I decided it would be fun and appropriate to interview Cliff as well, touching on the same themes. Here goes:

Ben:
The global brain means many things to many people. How do you conceptualize the notion of the global brain? Both in its nearest-term manifestations, and in its most ambitious future manifestations?

Cliff:
While the Global Brain (GB) has a number of roots in modern thought, our conceptualization of it began through work I was doing with colleagues in the Principia Cybernetica project (PCP) primarily in the 1990’s, leading up to the 2001 Global Brain workshop. We were advancing a general evolutionary cybernetic philosophy based around the concept of the Meta-System Transition (MST) and the origins of levels of control in complex systems spanning from individual organisms to the collective global system.

In that context, multiple concepts of organization above the human level were apparent, based on this analysis in terms of the emergence of control relations, and a number of terms like “meta-organism” and “super-organism” were offered.

Thus we had a lot of discussions about collections of different types, with the fundamental distinctions being those between different modes and levels of structure. A completely unorganized collection is just a set, while in a population there can be a differentiation of labor, income, social roles, etc., but still, in our philosophy, lacking a sufficient identity at the level of the whole to manifest specifically control relations at this new collective level.

Certainly political organizations exercise control relations over citizens to an extent through their leaders, and economic and other dynamical systems can be seen as having limited control relations through feedback effects (e.g. free-market pricing and attendant economic activity). But the fundamental concept of the global brain would be of a (global) human collective, which had its own agency, that is, its own ability to exercise “willful” (a term with some attendant philosophical development) control. In our cybernetic model, this in turn requires the ability to represent goals, take measurements, take actions to bring about those goals, and then finally judge the value of those actions towards those goals in order to potentially change those actions.

As a general matter, it is less than clear whether such relations are possible in any social collective. They are clearly present in the individual organism, including the human, where this function belongs precisely to the central nervous system, and the brain in particular.

The universal forces of evolution have acted on organisms to create e.g. metazoan multicellularity and its attendant specialization of cells, tissues, and organs. Similarly, human social roles are differentiated, and there are many specialized human “organs” (firms, governments, etc.). And eusociality in insects (bees, ants, and termites) and even in mammals (mole rats) is a model of a different form of meta-organism which produces a form of agency.

But such internal differentiation is only a necessary, and not a sufficient, step for the MST to bring about a global brain. The ability of a metazoan, even a simple worm, to engage in distinct relations of perception, decision, and action in the world is clear, but this is much less so for a beehive, and hotly debated, let alone a human social organization or whole society. These particular phenomena would have to be identified in a collective entity in order to posit the emergence of a true collective intelligence.

Ben:
Many people, when they hear about the notion of a global brain, are concerned that the advent of such a “cognitive superorganism” above the human level would reduce their personal freedom (or their sense of personal freedom). One standard counterargument is that in the presence of a global superorganism we would feel just as free as we do now, but our actions and thoughts would be influenced on a subtle unconscious level by the superorganism — and after all, the feeling of freedom is more a subjective construct than an objective reality. On the other hand, it’s not clear that we do feel just as free now as our ancestors did in a hunter-gatherer society. In some senses we may feel more free, in others less. Or, you could argue that the ability to tap into a global brain on command gives a massive increase in freedom and possibility beyond the individually-constrained mental worlds we live in now. What’s your take on all this?

Cliff:
The theory of Meta-System Transitions is fundamentally concerned with the interplay of freedom and constraint amongst systems and the entities they comprise. This interplay is conceived of in information theoretical terms, such that variety or variation is an expression of freedom, as distinct from selection, which is an expression of constraint. All organization involves both of these factors, usually interleaved in complex ways at multiple levels. An example is communication systems, where it is the selection (constraint) from the range of possible (varied) symbols that produces meaningful utterances as opposed to random strings.
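Cliff’s variety/selection distinction can be made concrete in Shannon’s terms: entropy measures the variety of a symbol source, and selection (constraint) shows up as reduced entropy. The sketch below is purely illustrative — the four-symbol alphabet and the skewed probabilities are made-up numbers, not drawn from any real communication system.

```python
from math import log2

def entropy(probs):
    """Shannon entropy in bits: a measure of the 'variety' of a source."""
    return -sum(p * log2(p) for p in probs if p > 0)

# Maximum variety: four symbols, all equally likely -- no constraint at all.
uniform = [0.25, 0.25, 0.25, 0.25]

# Selection at work: the same four symbols, but constrained toward one of
# them (illustrative probabilities), as in a meaningful code vs. random noise.
constrained = [0.7, 0.1, 0.1, 0.1]

print(entropy(uniform))      # 2.0 bits: pure variety
print(entropy(constrained))  # ~1.36 bits: constraint has reduced the variety
```

The point of the toy numbers is just that organization always trades some variety (freedom) for selection (constraint) — a source with no constraint maximizes entropy but carries no structure.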

In an MST, a new level of control emerges at the level of the whole from the interaction of a variety of parts. This mechanism itself is both necessary and sufficient to see a great proliferation both in the number and in the functional roles (specializations) of those parts. Examples are the relationship between cell types in a multi-cellular organism, neurons in a brain, and individual social functions in a society.

Thus in the transition to the global brain we will see both trends operating simultaneously at different levels. On the one hand, we are seeing a vast increase in the variety of kinds of activities and information available to individuals. This does not just facilitate human freedom, it effectively is human freedom in the sense of there being an increasing variety of possible states of human experience and actions. But on the other hand, constraints are introduced from the global level through economic and technological processes, and “canalization” and self-organizing around norms, protocols, and economic structures. One can start to see this already as the IT landscape evolves and forces of centralization, e.g., doom MySpace while crowning Facebook. Note that this is not new specifically to the GB in the sense of the penetration of global IT, but rather is a general feature of social evolution greatly exaggerated by the GB experience.

Ben:
What technologies that exist right now do you think are pushing most effectively toward the creation/emergence of an advanced global brain?

Cliff:
Technology forecasting is, of course, fraught with peril, and my observations about current change are probably pretty plebeian, with, e.g., ubiquitous wireless and social computing, etc. (I eagerly await my heads-up contact lenses, and later my plugin to the matrix). More concretely, building on my prior work with PCP, for my recent technical work I’ve been developing mathematical methods for working with Semantic Web technologies. The ability to encode meaningful information into Web data is one of the necessary conditions to drive the GB. But there are many such necessary conditions, and the sufficient conditions are unknown. More to the point, like natural language understanding and other methods, semantic information technologies threaten to join the flying car and fusion power on the ever-receding asymptotic horizon. . .

Ben:
How might one compare the degree of intelligence of a global brain to that of a human? How smart is the Internet right now? How can one devise a measure of intelligence that would span different levels in this way — or do you think that’s a hopeless intellectual quest?

Cliff:
I think this begs the question as to whether or not the global IT system can be considered to be an agent at all. If so, then give it a Turing test. . .

Ben:
OK, so how would we test empirically whether the global IT system could be considered an agent? Even if a rigorous test is difficult, how would we evaluate this matter intuitively? What would a global IT infrastructure with agency look like, specifically? How likely is it that such agency might exist, yet we humans be unable to recognize it? Could there be special tools we might develop, that would help us to recognize it better (and if so what might they be)?

Cliff:
As a general matter, tests for agency, like tests for life or consciousness, are not decidable in the same way that the presence of physical properties would be. It’s not like there’s a “consciousness meter”, or some device you can stick in a system to measure the presence of agency. So like the Turing test for intelligence, any results will be relative to a given interpretational frame, or model. But in this case, cybernetic principles are available to provide at least necessary, if not sufficient, conditions for the presence of agency. Here we can turn specifically to cybernetic control theory, and say that any agent will be acting as a control system. In turn, when a control system is present, it takes actions in the environment to maintain, in the face of perturbations, some property at some state far from what would otherwise be understood to be its equilibrium. Basically, control systems take variable means to achieve constant ends, as my kitten Lulu will resist me physically as I try to push her off the counter. My equilibrium model of her is that if I push her one way, she’ll move that way and fall off the counter. But no, she has a mind of her own, and literally squirms and pushes back to maintain her perch, and thus her own internal representation of her goal state: to remain there. No doubt such properties can be hypothesized as candidates in the global socio-economic-ecological milieu, and if so, then relative to the model of equilibrium employed, that can be strong evidence for the presence of a deciding agent which interprets signals from its environment and selects variable actions based on knowledge in order to steer the system in the desired direction.
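The “variable means to constant ends” signature Cliff describes can be sketched as a toy control loop. The sketch below assumes a simple proportional controller and random perturbations — all parameters (gain, perturbation range, step count) are arbitrary illustrative choices, not a model of any real system.

```python
import random

def run_control(setpoint, steps=200, gain=0.5, seed=1):
    """A minimal cybernetic control loop: at each step the agent measures
    the gap between its goal (setpoint) and the current state, and acts
    to close it, whatever the environmental perturbation happens to be."""
    rng = random.Random(seed)
    state = 0.0
    for _ in range(steps):
        perturbation = rng.uniform(-1.0, 1.0)  # the environment pushes
        error = setpoint - state               # measurement against the goal
        action = gain * error                  # variable means...
        state += action + perturbation
    return state

# ...constant ends: despite 200 random pushes, the final state hovers
# near the goal, rather than drifting like an uncontrolled random walk.
print(run_control(10.0))
```

The detection idea follows directly: if a hypothesized “equilibrium model” of a system keeps being violated in a goal-restoring direction — as with Lulu squirming back onto the counter — that is evidence of a controller, and hence a candidate agent.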

Ben:
You mentioned consciousness – so what about consciousness? Setting aside the problem of qualia, there’s a clear sense in which human beings have a “theater of reflective, deliberative consciousness” that rocks lack and, for instance, worms and fish seem to have a lot less of. Do you think the Internet or any sort of currently existing “global brain” has this sort of theater of reflective consciousness? If so, to what extent? To what extent do you think a global brain might develop this kind of reflective consciousness in the future?

Cliff:
I regret that I’m not a student of consciousness, although I enjoy discussions and speculations about it. Certainly the prior conditions for autonomy and agency are necessary before this can be considered seriously.

Ben:
OK, so switching gears back to the concrete, then … what sort of research are you working on these days? Anything global brain related, and if so in what sense?

Cliff:
As I mentioned above, I’ve been working on mathematical methods for specifically semantic information processing in systems. I do believe that there is potential significance for the global brain, in that the ability to represent meaningfulness is a necessary condition for anything like cognitive agency at the global level. That said, I have been thinking recently about the value, perhaps even the necessity, of agency at the global level in order to save ourselves from apparent creeping doom. The ability of people to be able to consciously (there’s that word again) value future states over current optimization and exploitation of resources may be able to be mediated by the kinds of cybernetic feedbacks which we’ve been discussing, and which the GB also anticipates at a massive scale.

Ben:
That brings us back to some basic questions about the Global Brain, I suppose. At the Global Brain 0 workshop in 2001, there seemed to be two major differences of opinion among participants (as well as very many smaller differences!). The first was whether the global brain was already present then (in 2001) in roughly the same sense it was going to be in the future; versus whether there was some major phase transition ahead, during which a global brain would emerge in a dramatically qualitatively stronger sense. The second was whether the emergence of the global brain was essentially something that was going to occur “spontaneously” via general technological development and social activity; versus the global brain being something that some group of people would specifically engineer (on top of a lot of pre-existing technological and social phenomena). Of course I’ve just drawn these dichotomies somewhat crudely, but I guess you understand the ideas I’m getting at. What’s your view on these dichotomies and the issues underlying them?

Cliff:
I don’t believe that the first distinction is that well founded as a question, especially not back in 2001. The definition and nature of the GB is not sufficiently advanced yet (to my knowledge) to be able to appreciate whether it could engender either a continuous or discrete transition. On the latter, I’m of the firm opinion that it will be both: in our cybernetic model, the collective intelligence of the whole is exactly composed of the informed, intelligent, and free actions of its components.

Ben:
And what about the Singularity – how does that tie in with the Global Brain in your view? You’re familiar with Ray Kurzweil’s and Vernor Vinge’s notion of the Singularity, obviously. When we discussed this years ago you seemed skeptical and talked about how development tends to follow an S-curve rather than an exponential curve. Ray’s standard refutation to this is that if you pile a lot of S-curves on top of each other in the right way you get an exponential, and that’s how he perceives human history. What’s your current take on this? Is the Singularity near? Max More likes to talk about a Surge rather than a Singularity — a steady ongoing growth of advanced technology, but without necessarily there being any point of extremely sudden and shocking advance. His Surge would ultimately get us to the same (radically transhuman) point as Kurzweil’s Singularity, but according to a different slope of progress. Are you perhaps more friendly to Max’s Surge notion than Ray and Vernor’s Singularity?

Cliff:
I recall that discussion, but didn’t follow through on the math of it, how an ascending sequence of logistic curves can act. I don’t know More’s work, but I’ve been impressed by what I’ve heard of Kurzweil’s theory. A relatively nearer-term prediction of his which could really be significant, would be the development of effective photo-voltaic technology. And it would be good to identify precursors of his predictions concerning biological nanobots. I think it’s a good observation that engineering of the human form may be instrumental in advancing a GB. We’ve certainly been seeing how recent IT advances are quite dependent on human form factors (next stop: iWatch). Vinge’s recent book – Rainbows End – suggests a world approaching the singularity, and one completely consistent with current IT: I do want my heads-up contact lenses, now!

Ben:
Hmmm… so, hearing you talk about these things now, it seems to me that you’ve become more amenable to the Singularity hypothesis over the last decade? Would you say this is accurate? If so, what would you say have been the main reasons for the shift in your opinion?

Cliff:
Well, the basic structure of the argument is hard to avoid, that exponential efficiency improvements have been dominant over decades, a situation which cannot be projected forward indefinitely. But it’s a quantitative issue: if you’re on a slope being measured at a time scale where noise is a dominant factor, how can it be known whether you’re observing exponential, polynomial, or logistic behavior over the long run? Or to say it another way, how fast is Moore’s law approaching physical limits (quantum, information theoretical), say barring a breakthrough in e.g. quantum computing? No doubt something “different” will be happening in a few more decades, but whether we push through a singularity, bounce (or crash!) off a limit, or ease into a logistic deceleration, I think does not admit of any easy models.
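The “pile of S-curves” argument Ben mentions, and Cliff’s point that the hypotheses are hard to distinguish mid-slope, can both be seen in a toy model. The sketch below stacks logistic curves with invented parameters (midpoints 5 apart, each ceiling 10x the last) — purely illustrative, not fitted to any real technology data.

```python
from math import exp

def logistic(t, midpoint, ceiling):
    """One technology paradigm: S-curve growth toward a fixed ceiling."""
    return ceiling / (1 + exp(-(t - midpoint)))

def stacked(t, n=6):
    """A Kurzweil-style cascade: each successive paradigm kicks in later
    (midpoint 5*k) and reaches a higher ceiling (10**k)."""
    return sum(logistic(t, midpoint=5 * k, ceiling=10 ** k) for k in range(n))

# While paradigms keep arriving, the cascade grows roughly 10x every
# 5 time units -- i.e., it looks exponential...
for t in range(0, 30, 5):
    print(t, stacked(t))

# ...but once the last paradigm saturates, growth stops at the sum of the
# ceilings. Noisy measurements taken mid-window cannot tell the two
# long-run hypotheses (exponential vs. logistic) apart.
print(stacked(100))
```

This is exactly Cliff’s quantitative caveat: the same mid-window data is consistent with pushing through, crashing into a limit, or easing into deceleration.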

Ben:
Fair enough. As you know I assign significantly higher odds to something like a Singularity than to those other options, but I admit we can’t know that with anything like certainty, at this stage. So now for the last question, I’ll indulge myself by poking a bit into your views on my own main research interests…..

As you know my own main research focus is on trying to engineer human-level AGI systems. If my own work succeeds it will lead to human-like AGI systems that interact with the global brain, perhaps with greater flexibility and facility than humans can — and then grow (in ways that would be hard for humans to predict now) based on what they learn from the global brain.

Sooo…. when I interviewed Francis Heylighen, he expressed a fairly skeptical attitude about AGI work like this, figuring that the first advanced AGIs will largely be “wrappers” [my paraphrase] for the knowledge and wisdom in the Global Brain. On the other hand I think the first AGIs are going to be their own autonomous agents, interacting with the GB but also separate from it to much the same extent that individual humans are (and meaningful even in the absence of any explicit infusion of knowledge from the GB). What are your thoughts on this friendly difference of opinion between Francis and myself?

Cliff:
I wish I had a more informed response, as I regret I’ve not tracked the AGI arguments as well as I would have liked over the years. I suppose that my prejudice is more towards an embodiment perspective, and thus that high-level cognitive phenomena such as consciousness need not be dependent on large information stores such as the GB. To the extent that one’s AI model leans towards Cyc and Watson, then they would. But to the extent that one’s AI model leans towards Lulu’s manic response to just the sound of the switch of the laser pointer being depressed, and all that entails about her internal knowledge model of what rewards her pleasurably, itself deeply rooted in her evolutionary genetic-cognitive mechanisms to better catch mice, then they would not. In other words, I do believe that intelligence depends on very large information stores. And the GB may provide one such information store for one kind of AI, perhaps an AGI. But stores at the neural or genetic levels may be sufficient, and I don’t see how it can be argued on principle that those at the socio-linguistic level are necessary for either. It’s certainly not for my cat Lulu. And she’s getting smarter every day.

Ben:
Yes, I agree. And I’m sure Francis wasn’t trying to argue against the possibility or excitingness of making robot cats either! I guess he was just positing that, given where our technologies are at today, it may be easier to work toward semantic Web powered GB flavor AGI systems, than robot cats or android robots, etc. (or virtually embodied AGI systems like we’re building in the OpenCog project). But I have to come back here to my oft-repeated notion that there may be many different kinds of AGI – and with many different kinds of relation to the emerging Global Brain. And to your comments about the difficulties of detailed technology forecasting! I’m certainly happy there’s research going on in multiple directions now, including semantic analysis, the Global Brain, and robotic animals and people… We live in interesting times etc. etc. … And if we’re lucky maybe systems theory can help us understand and shape them!

Cliff:
Agreed! Thanks so much for such an intelligent and entertaining dialog, and I hope that your readers will find it rewarding!
