What Will Come After Language?

December 27, 2012

A few weeks ago now I gave a talk, via Skype from Hong Kong, at the Humanity+ San Francisco conference….  Here are some notes I wrote before the talk, basically summarizing what I said in the talk (though of course, in the talk I ended up phrasing many things a bit differently…).

I'm going to talk a bit about language, and how it relates to mind and reality … and about what may come AFTER language as we know it, when mind and reality change dramatically due to radical technological advances.

Language is, obviously, one of the main things distinguishing humans from other animals. Dogs and apes and so forth do have their own languages, which have their own kinds of sophistication — but these animal languages seem to lack some of the subtler aspects of human languages. They don't have the recursive phrase structure that lets us construct and communicate complex conceptual structures.

Dolphins and whales may have languages as sophisticated as ours — we really don't know — but if so, their languages may be very different. Their language may have to do with continuous waveforms rather than discrete entities like words, letters and sentences. Continuous communication may be better in some ways — I can imagine it being better for conveying emotion, just as, for us humans, tone and gesture can be better at conveying emotion than words are. Yet our discrete, chunky human language seems to match naturally with our human cognitive propensity to break things down into parts, and with our practical ability to build stuff out of parts, using tools.

I've often imagined the cavemen who first invented language, sitting around in their cave speculating and worrying about the future changes their invention might cause. Maybe they wondered whether language would be a good thing after all — whether it would somehow mess up their wonderful caveman way of life. Maybe these visionary cavemen foresaw the way language would enable more complex social structures, and better passage of knowledge from generation to generation. But I doubt these clever cavemen foresaw Shakespeare, William Burroughs, YouTube comment spam, differential calculus, mathematical logic or C++ …. I suppose we are in a similar position to these hypothetical cavemen when we speculate about the future situations our current technologies might lead to. We can see a small distance into the future, but after that, things are going to happen that we utterly lack the capability to comprehend…

The question I want to pose now is: What comes after language? What's the next change in communication?

My suggestion is simple but radical: In the future, the distinction between linguistic utterances and minds is going to dissolve.

In the not too distant future, a linguistic utterance is simply going to be a MIND with a particular sort of cognitive focus and bias.

I came up with this idea in the course of my work on the OpenCog AI system. OpenCog is an open-source software system that a number of us are building, with the goal of eventually turning it into an artificial general intelligence system with capability at the human level and beyond. We're using it to control intelligent video game characters, and next year we'll be working with David Hanson to use it to control humanoid robots.

What happens when two OpenCog systems want to communicate with each other?  They don’t need to communicate using words and sentences and so forth.  They can just exchange chunks of mind directly.  They can exchange semantic graphs – networks of nodes and links, whose labels and whose patterns of connectivity represent ideas.
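To make this concrete, here is a minimal sketch, in Python, of what such an exchangeable semantic graph might look like. To be clear, this is not OpenCog's actual AtomSpace API; the node and link types are invented purely for illustration.

```python
# A toy semantic graph: labeled nodes plus typed links, where the
# pattern of connectivity carries the meaning. NOT OpenCog's real
# API -- just an illustration of the idea in the text.

from dataclasses import dataclass, field

@dataclass(frozen=True)
class Node:
    label: str                  # e.g. "cat", "animal"

@dataclass(frozen=True)
class Link:
    kind: str                   # e.g. "inherits_from", "eats"
    source: Node
    target: Node

@dataclass
class SemanticGraph:
    nodes: set = field(default_factory=set)
    links: set = field(default_factory=set)

    def add_link(self, kind: str, src: str, dst: str) -> None:
        a, b = Node(src), Node(dst)
        self.nodes |= {a, b}
        self.links.add(Link(kind, a, b))

# Mind A packages a chunk of its knowledge as a subgraph and sends
# it; mind B merges the received nodes and links into its own graph.
mind_a_chunk = SemanticGraph()
mind_a_chunk.add_link("inherits_from", "cat", "animal")
mind_a_chunk.add_link("eats", "cat", "mouse")
```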

But you can't just take a chunk of one guy's mind and stick it into another guy's mind. When you're merging a semantic graph from one mind into another, some translation is required — because different minds will tend to organize knowledge differently. There are various ways to handle this.

One way is to create a sort of “standard reference mind” — so that, when mind A wants to communicate with mind B, it first expresses its idiosyncratic concepts in terms of the concepts of the standard reference mind.   This is a scheme I invented in the late 1990s — I called it “Psy-nese.”   A standard reference mind is sort of like a language, but without so much mess.  It doesn’t require thoughts to be linearized into sequences of symbols.  It just standardizes the nodes and links in semantic graphs used for communication.
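Reusing the toy SemanticGraph sketched above, a minimal version of the Psy-nese idea might look like this; the standard vocabulary and the mapping are, again, invented for illustration.

```python
# Sketch of "Psy-nese": before sending, a mind re-expresses its
# idiosyncratic concept labels in terms of a shared standard
# reference mind. The vocabulary and mapping below are invented.

# Concepts the standard reference mind is assumed to define.
STANDARD_CONCEPTS = {"feline", "animal", "rodent"}

# Mind A's private labels -> standard reference concepts.
A_TO_STANDARD = {"cat": "feline", "mouse": "rodent"}

def to_psynese(graph: SemanticGraph, mapping: dict) -> SemanticGraph:
    """Re-express a graph using standard reference concepts.

    Labels with no standard equivalent pass through unchanged; a
    real system would need a fallback, e.g. shipping a defining
    subgraph along with the unknown concept."""
    out = SemanticGraph()
    for link in graph.links:
        out.add_link(link.kind,
                     mapping.get(link.source.label, link.source.label),
                     mapping.get(link.target.label, link.target.label))
    return out

# Mind B applies its own mapping in reverse on receipt, so neither
# mind needs to know anything about the other's internals.
message = to_psynese(mind_a_chunk, A_TO_STANDARD)
```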

But Psy-nese is a fairly blunt instrument. Wouldn't it be better if a semantic graph created by mind A had the savvy to figure out how to translate itself into a form comprehensible by mind B? What if a linguistic utterance contained not only a set of ideas created by the sender, but the cognitive capability to morph itself into a form comprehensible by the recipient? This is weird relative to how language currently works, but it's a perfectly sensible design pattern…
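Continuing the toy sketch from above: such an utterance would bundle its payload graph with active translation machinery. The recipient protocol used here (knows_concept / learn_concept) is hypothetical, just enough to show the design pattern.

```python
# Sketch of a "self-translating utterance": a message that carries
# not just a payload graph but logic -- a tiny "mind" -- for adapting
# that payload to whoever receives it. The recipient protocol
# (knows_concept / learn_concept) is hypothetical.

class SelfTranslatingUtterance:
    def __init__(self, payload: SemanticGraph, glossary: dict):
        self.payload = payload
        # Defining material for the sender's private concepts,
        # shipped along inside the utterance itself.
        self.glossary = glossary

    def render_for(self, recipient) -> SemanticGraph:
        """Negotiate a form the recipient can absorb."""
        out = SemanticGraph()
        for link in self.payload.links:
            out.add_link(link.kind,
                         self._ground(link.source.label, recipient),
                         self._ground(link.target.label, recipient))
        return out

    def _ground(self, label: str, recipient) -> str:
        # Use the recipient's existing concept if it has one;
        # otherwise teach it the concept from the shipped glossary.
        if not recipient.knows_concept(label):
            recipient.learn_concept(label, self.glossary.get(label))
        return label
```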

That's my best guess at what comes after language: impromptu minds, synthesized on the fly, with the goal of translating particular networks of thought into the internal languages of various recipients.

If I really stretch my brain, I can dimly imagine what such a system of thought and communication would be like. It would weave together a group of minds into an interesting kind of global brain. But we can't foresee the particulars of what this kind of communication would lead to, any more than a bunch of cavemen could foresee Henry Miller, reddit or loop quantum gravity.

Finally, I’ll pose you one more question, which I’m not going to answer for you.  How can we write about the future NOW, in a way that starts to move toward a future in which linguistic utterances and minds are the same thing?






10 Responses

  1. Dave W Baldwin says:

    Good article.

    You need to include inner mind images in the language, since that is where our attempts at language originated.

    Postulating that, matched with the "two brains merging" idea, would require a trigger that allows the "thinker" to know which vague notion is backed up by proof. This would eliminate obstacles that would retard both the speed and magnitude of new thought. This will come as we move toward the "assistant" that searches for us, matching what we're vaguely after and determining which results matter.

    No matter what, establish that "comforting" trigger signalling that the info from a new source (brain, machine, whatever) is factual; then you can add brains.

    Last, I think it is important to establish parameters related to what is commonly wished for among the population. I know this breaks into fragments, but if it is agreed that all of us would want to "hook into" a common brain that speeds progress toward a peaceful world, placing the relative good of all as the top priority, then we can get somewhere.

    On that note, something like that could probably get backing from some kind of Kickstarter.

  2. Bruce Jakeway says:

    Could it be that human language as we know it is the best way of transferring the contents of one mind to another?

    It's interesting to think of mind-melds a la Spock in Star Trek, but then you'd have to have a common semantic framework on which to place your new data. Imagine a mind-meld from someone today who understands quantum mechanics and relativity to someone from the early 1800s, before the wave-particle nature of matter was understood. The recipient may very well miss most of the transference because his or her mental framework is not ready for it. I wonder if our language, clumsy though it may be, is what is necessary for us to transmit ideas from one mind to another.

  3. That’s a thought-provoking article.

    I reckon any symbols that convey information make up a language, no? I mean, by "language," we should include the machine code itself (the manner in which, for example, brain cells interact with one another). Neural patterns themselves (virtual or in brains) reflect an internal language of some sort.

    Anyhow, maybe you could conceptualize a scale. On one end, you have brain-states being communicated through macroscopic symbols (symbols other than the machine code itself, i.e. what we’d normally understand as “language”). On the other end, you have brain states being communicated without any translation whatsoever … that is, in the brain’s own machine code. (It sounds like the partial transfer of brain-state information generally (always?) requires some sort of translation.) If two brains were totally linked, they’d be in essence a single brain and therefore a single mind.

    And then we could think about how those two things would subjectively feel. With language, when you receive a communication, it feels like you’re receiving information from the outside. On the other hand, with direct linking of brain states, we’d just feel like we were having those thoughts ourselves.

  4. JoukoSalonen says:

    re: new interfaces+adapting brains
    — Eray: …"you want to match at least the bandwidth of corpus callosum…"
    — Joscha: …”For the time being, we cannot follow Eray’s suggestion…”

    just 2 reminders:
    20 years ago
    http://www.frc.ri.cmu.edu/~hpm/project.archive/general.articles/1992/WildPalms.html

    6 years ago – on nerve cell integration http://cacophone.blogspot.fi/2006/04/first-hybrids.html

    re: "what comes after language?" – existential graphs? semantic graphs? – hmm… you can use graphs as analytic and mathematical tools to better understand the expanding polyphonic dimensionality and geometry of existing natural languages – I believe this is a better approach than building "better" languages like Ithkuil, as Joscha is proposing. How to "blend" these graphs? – I think that is not the same question as how to transfer them from one mind to another, as Ben seems to think. I think that morphing graphs is more like bringing gravity and motion into the nodes, and expanding fields, interaction and curved surfaces into the links, of graph theory. Therefore Ben's idea of some kind of standard reference graph feels strange to me.

  5. Sanjeev says:

    I can think of a few possible scenarios:

    1- The human and non-human AGIs continue on their own separate paths. In this case the humans would enhance their bodies (brain-to-brain interfaces or super vocal cords/ears) with devices that enable faster and better communication, while retaining privacy (separation). For example, languages will evolve into a more complex form, with thousands of sounds instead of the few that we can now utter, and with millions of symbols instead of the tiny number of letters we manage at present.
    There will be eye/ear/vocal cord enhancements capable of receiving/transmitting all these sounds and symbols, and converters to directly stimulate the corresponding neural firing pattern (or whatever causes the knowing). With such tech, a newscast of a whole day of world news might take only a few seconds.

    2- Human and machine AGI merge. The machines become so similar to biology, using molecular-level hardware, and the human bodies are enhanced so much that they resemble these machines, that the difference is in name only. Still, there is a separation, and individuals exist. Where there is separation, there is a need to communicate. In that case, something similar to Ben's idea can happen. The biological part of the human brain will simply share its states via a shared machine-mind, which the machines and other humans will immediately access as a whole (a "knowledgegram"). There will be no need for a language, only a standard mind.

    3- Humans go extinct and machines manage to survive and evolve (at superfast rates). In this scenario it is only an assumption that the machines will prefer to remain separated, or that it will even be possible to retain some sort of individual and private "life". The most probable thing is that they will always be connected (via some sort of field or q-entanglement, maybe) and there will be no need to communicate. All parts of the AGI-world will be always and instantly aware of all other parts. There cannot really be any parts or separation here. Even if a machine evolves with a preference to remain separate, the other variants will figure out how to read its "mind" in a matter of seconds.

  6. Paul Tiffany says:

    Hey Dr. Goertzel, great food for thought!

    A couple things come to my mind…

    Firstly, language as the inner voice. When athletes transcend this they go into "the zone." I know a common speed-reading technique is to push your eyes past the rate of one's inner voice, and try to shut it off. There are apps that force this by flashing words or groups of words at you at a specific rate. I can see the dystopia now… friendly machines strap us into some serious sensory overload while misconstruing our twitch response.
    Secondly, language as the memetic replicator. Some species used their ability to hunt, or their cuteness, to coevolve with humans. Language, like art, dance, and music, used its ability to help humans correlate. If we are to see something new, it must be useful to us humans, and it must evolve with us.

  7. Joscha Bach says:

    In a way, we already make use of a reference mind; languages themselves are a semi-static part of it (the translator between universal mental representations and one-dimensional strings of discrete symbols), and shared knowledge (including tacit knowledge) serves as the contextual reference of our utterances.

    For the time being, we cannot follow Eray's suggestion and will have to stick with one-dimensional symbol strings as a medium of exchange. I wonder if we could at least have better languages. Sapir-Whorf might be a little unfashionable, but programming languages at least demonstrate that learning languages does indeed enable new thoughts and improve cognitive performance.

    Within and around the linguist community, people occasionally invent new languages, sometimes even with the goal of improving cognition. An example is Ithkuil, which, while being a philosophical experiment, compares to an ordinary natural language as Scala does to Basic. Ithkuil combines many nifty features from human languages (and even comes up with new ones), and makes statements super-compact, rich and concise. On the other hand, not even its inventor can fluently think and dream in it.

    The only domain where new languages to think in are systematically invented, compared and improved is computing. I think that the profundity of what programming language inventors are doing is largely overlooked by linguists and cognitive scientists: programming language development is an empirical discipline that tries not to make computers more powerful, but to give more expressive and analytical power to our poor little human programmer minds.
    The main quality of a programming language is how well it adapts to the limitations and properties of human analytical cognition.

    Of course, computing applications have different demands than human world-knowledge management, mental reflection, physical construction, social cognition, aesthetics etc., so computer science won't give us a better everyday language.

    I would love to get my hands on a better language than English, German, French, Japanese etc. Not Esperanto (which is like the Dvorak replacement for QWERTY), but a radical improvement. I want mental just-in-time compilation systematically utilized, with sub-routines that are adapted to the properties of my working memory. I want vastly improved parseability. I'd like to see tail recursion and generative patterns. I want standard mnemonics libraries included with the language's dictionary, and better hierarchical chunking to increase the capacity of the phonological loop. I want better separation of content and concern. I want adequate representation of psychological and cognitive parameters… Oh, imagine all the goodness that could come with a better natural language!

  8. Eray Ozkural says:

    If we can create a sufficiently high-bandwidth interface between two brains (I'm thinking gigabit or more), then I think the brains may adapt to sharing semantic content on their own. If the right regions are connected, we might find that content in all modalities can be shared; it would be a big improvement to be able to just share sensory/actuator maps, and perhaps an AI program could help tune all the codes to a common format/scheme. It'd be great if you could calibrate once, and then just use it with anyone :)

    In the end, a terabit/sec interface may be required (I once even made a serious calculation, yes :) ). Logically, you want to match at least the bandwidth of the corpus callosum.
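    For instance, a rough back-of-envelope version of that kind of calculation (a sketch only, not the original one: the ~2e8 fiber count is the commonly cited anatomical figure, and the per-axon bit rates are pure assumptions):

    ```python
    # Back-of-envelope corpus callosum bandwidth. Fiber count is the
    # commonly cited ~200 million; per-axon bit rates are assumptions.
    AXONS = 2e8

    for label, bits_per_axon_per_sec in [("sparse coding", 5),
                                         ("dense coding", 5000)]:
        total = AXONS * bits_per_axon_per_sec   # bits per second
        print(f"{label}: ~{total:.0e} bit/s")

    # Prints roughly 1e+09 bit/s (gigabit range) and 1e+12 bit/s
    # (terabit range), bracketing the two figures mentioned above.
    ```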

    If all modalities could be shared, *then* it would be a simple matter to share natural language semantics, syntax, or semantic context, but also things like giving control of your arm to another person, or looking through the eyes of another, or gauging their emotional state, or accessing their memory, however freaky these might sound. Obviously, the ultimate brain2brain interface would need a lot of privacy control (luckily such complex software can't be crafted by web hobos).

    OTOH, simpler modes of communication will be possible long before such ultimate interfaces. Telepathy through coupling of vocalization decoders and projection onto the auditory cortex would be the norm, but visual interfaces may also be possible, and using such interfaces complex computer data might be interchanged through appropriate visual UIs.

    Furthermore, going beyond two brains would likely require an artificial neural space allowing multiplexing of neural code, perhaps combining the brains in a kind of "ensemble system" in which a neuro-based artificial intelligence would form the top-level control, so that a coherent "self" would emerge from the co-operation of n biological brains.

    More simply, such a system can be thought of as an extension of the voting-based ensemble systems often used in machine learning. Possibly, the participants would only expose some parts of their wetware to the system, and would want to monitor both collective and individual decision making.

  9. Richard Nordin says:

    In the 1950s, the French priest Pierre Teilhard de Chardin, a paleontologist and a firm believer in evolution (beliefs that were not embraced by his own church), maintained that we have not stopped evolving, which is the accepted view nowadays. He maintained that the next big thing (thank you, Steve Jobs) would be our ability to communicate through each other's thought processes.

  10. Robert Gold says:

    Ben,

    I am comforted by your thinking and expression. Practically speaking, emotion is a domain that we comprehend and measure effectively, so its utility as a reference for communication is obvious. My work, while it parallels yours, might lend itself to what is next.

    If there were three distinct and dissimilar emotional planes for all life, and we label what occurs on them as discernible (appearing to an extent in language), then maybe a very simple new model of "communication" has appeared.

    If, at the same time, not only this one ancient paradigm but two others were influenced as well (technology and accounting), then the environment and global society might be radically influenced too. I look forward to your response.

    Best regards,

    Robert Gold
