WEB, AWAKE!

In his paper on the technological Singularity, Vernor Vinge outlined several pathways that could lead to superhuman intelligence. The one which gets talked about most often is the ‘AI scenario: We create superhuman artificial intelligence (AI) in computers’. In fact, this pathway is referred to so often that it has become synonymous with the Technological Singularity in many people’s minds. This seems a pity, as there are other ways in which the Singularity could be brought about, and we ought to be aware of these alternative routes to superintelligence.

THE INTERNET SCENARIO

The first half of Vinge’s 1993 paper on the Technological Singularity concentrates on the familiar AI pathway, but the second part focuses on IA, that is, ‘Intelligence Amplification’: combining human intelligence and other cognitive abilities with computing and information processing in order to produce something we would class as ‘superintelligent’. Vinge cites one particular form of human-computer symbiosis as being most likely to achieve sufficient levels of intelligent capability:

 

“Exploit the worldwide Internet as a combination human/machine tool. Of all the items on the list, progress in this is proceeding the fastest and may run us into the Singularity before anything else”.

 

This is the ‘Internet Scenario’, defined thus:

 

“The Internet Scenario: Large computer networks (and their associated users) may ‘wake up’ as a superhumanly intelligent entity”.

 

(In a later paper for IEEE Spectrum, Vinge added another pathway. “The Digital Gaia Scenario: The network of embedded microprocessors becomes sufficiently effective to be considered a superhuman being”. In what follows, I shall consider these two scenarios as if they were one.)

People often labour under a false impression when considering this scenario. They think it suggests that if we connect enough computers together and write or breed enough of the right kind of software, a ‘critical mass’ will be achieved and, behold! The Internet comes alive. But the scenario is not concerned with computer networks alone; it is concerned with how those networks are used as part of human groups. It is those humans, after all, who help create the link structure Google depends upon when it trawls the Web for relevant search results, who engage in the ongoing arguments from which Wikipedia’s articles are created and revised, and who organise social-media-led revolutions like the Arab Spring.

Also, that network of digital devices can only function thanks to the existence of other, older networks. We plug our devices (or their battery chargers) into electric outlets, drawing power from electric grids. The hardware we buy comes from production plants, all of which rely on other factories and mines around the world to supply the parts they need, and on a global network of transportation to ship those parts to their required destinations. The skills needed to design the software and hardware rely on networks like the education system and scientific research (without which, for example, we would not have the laws of electromagnetism that underpin so much of the modern world). All this requires capital, provided by economic systems, and full bellies, provided by a global agricultural system.

 

Greg Stock believes that, when we consider all the physical and intangible networks woven throughout the world today, we can indeed perceive the existence of a planet-sized super-organism. He refers to it as ‘Metaman’:

 

“Metaman processes huge amounts of information by combining human thought and computer calculation within the various organised networks of human activity”.

 

 

THE EVOLUTION OF NETWORKS

As the complexity and number of problems a growing populace faces increase, it becomes ever more necessary to divide tasks up into specialised skills. In today’s world, especially in developed countries, people rely on the skills of others to provide nearly everything they need. Because of the complexity of most modern products, hierarchical organisation is required, in which the manufacturing process is broken down into a series of micro-tasks overseen by layers of management.

 

But hierarchical organisations must also face the problem of increasing complexity, and the ultimate solution is to fundamentally alter the way in which society is organised, and how we think about technological and economic systems. In a hierarchy, there is always a ‘head’ who must make the final decisions, but once complexity grows beyond what any individual can hope to hold in their head, hierarchies have to give way to distributed decision-making facilitated by networks. As Kevin Kelly observed:

 

“We find you don’t need experts to control things. Dumb people, who are not as smart in politics, can govern things better if all the lower rungs, the bottom, is connected together. And so the real change that’s happening is that we’re bringing technology to connect lots of dumb things together”.

 

By the way, when Kelly calls people dumb he does not mean they are stupid. He means that networks of human activity, and the technological networks facilitating them, can handle problems and make decisions beyond the capabilities of any individual. As Greg Stock put it:

 

“When I speak not of ‘humans’ or ‘society’ but of ‘Metaman’ accomplishing something, I do so to acknowledge the role played by these immense and complex collaborations that are ubiquitous in the developed world”.

 

THE INTERNET OF THINGS

The technologies we are relying on to connect ‘dumb’ things together, in order to expand and deepen the sensory awareness of the planetary super-organism, are mostly digital technologies. The emergence of digitisation has had a profound effect on how technology, and the socio-economic systems supporting (and supported by) it, are perceived.

Walk through any urban area and the prevalence of digital devices is apparent. Almost everyone you pass is either holding a smartphone to their ear or gazing at its screen. If current rates of consumption are maintained, by 2015 there should be some 4.5 billion smartphones in the world. And this is but one example of the plethora of digital devices that are expected. As the cost of computing, sensing and communicating decreases, it becomes feasible to add connectivity to more and more everyday things.

 

To give some idea of the scale of this ‘Internet of Things’, consider the number of addresses the latest revision of the Internet’s primary communications protocol is designed to handle. IPv6 provides up to 340 trillion trillion trillion addresses, enough to give every atom on the surface of the Earth its own unique IP address.
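
As a quick back-of-the-envelope check (a minimal sketch in Python, assuming nothing beyond the 128-bit address length that IPv6 defines), the figure quoted above follows directly from the size of the address field:

```python
# IPv6 addresses are 128 bits wide, so the address space holds 2**128 addresses.
total_addresses = 2 ** 128

print(f"{total_addresses:.3e} addresses")   # ~3.403e+38
# Expressed in 'trillion trillion trillion' units (10**12 cubed = 10**36):
print(total_addresses / 10 ** 36)           # ~340.3, i.e. roughly 340 trillion trillion trillion
```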

 

OK, so we probably will not go quite as far as turning every atom into a web-enabled object. But we should definitely expect a future in which the Internet expands to cover more and more of the globe, and its web becomes increasingly tightly woven as more and more nodes are added.

 

The increasing presence of the Web and the ubiquity of digital devices are altering our perception of a great many things. One such change was anticipated back in 1995 by Eric Schmidt, then the CTO of Sun Microsystems:

“When the network becomes as fast as the processor, the computer hollows out and spreads across the network”.

 

This phenomenon is now happening with ‘cloud computing’, in which more and more of the files and apps once stored locally are instead kept in data farms like the ones Google operates, streamed to personal digital devices as and when needed. Google’s services require its growing cluster of servers to act as one machine, and that requires many parallel operations to be carried out at once. This move can be likened to the shift in manufacturing ushered in by the industrial age, in which factories broke production up into thousands of parts to be performed simultaneously, rather than relying on workers in separate shops turning out finished products step by step.

 

Kevin Kelly reckoned that, some time around 2015, desktop operating systems would become obsolete. He wrote:

 

“The Web will be the only operating system worth coding for. You will reach the same distributed computer whether you login by phone, PDA, laptop or HDTV”.

 

TECHNOLOGY BECOMING BIOLOGICAL

The act of turning objects into digital devices will dramatically speed up recombination. Recombination has always been the essence of invention. No new technology ever appeared out of thin air; each was created by combining bits and pieces that already existed. When devices become digital they are all, at heart, objects of the same type: data strings. Therefore, as W. Brian Arthur (author of ‘The Nature Of Technology’) pointed out:

 

“Digitisation allows functionalities to be combined, even if they come from different domains”.

 

Moreover, the fact that these devices communicate over networks means that recombinations can happen remotely. The effect of all this is likely to be a very rapid increase in the rate of invention, as we configure and reconfigure various digital objects into new combinations. The economics of the past were built on assumptions of predictability and order, befitting a world in which mechanical systems behaved with clockwork predictability. The digital age is ushering in a perception of technology as a kind of chemistry, one always recreating itself in new combinations. According to W. Brian Arthur:

 

“Economics is beginning to respond to these changes and reflect that the object it studies is not a system in equilibrium, but an evolving, complex system whose elements- consumers, investors, firms, governing authorities- react to patterns those elements create”.

 

When talking about digital devices one finds oneself using words like ‘communicating’, ‘sensing’, and in some cases ‘self-configuring’ and ‘self-healing’. These are terms that used to apply exclusively to biological systems. Perhaps, though, it is not surprising that we need to use more and more biological terms to describe the behaviour of our networks of digital devices. After all, we have learned from studies of the origin of life that there is no fundamental divide between the animate and the inanimate. There are only systems of increasing complexity that gradually acquire more and more lifelike characteristics. We should therefore expect that, as technology becomes more sophisticated, it will become less mechanistic and more biological: more sensitive to, and cognisant of, its surroundings.

 

However, this increase in the number of digital devices comes with a cost. Together with the growth of high-speed communications networks and high-capacity storage systems, it has resulted in vast amounts of data being generated every second. The inevitable consequence is that human attention becomes the scarce resource, since it is impossible for us to do more than scratch the surface of such vast quantities of data.

 

More and more we must turn to machine assistance. One way of dealing with the data deluge is to automate the process of scientific discovery as far as possible. The popular image of astronomers looking through telescopes is not a particularly accurate portrayal of modern astronomy. Instead, we use robotic instruments with sufficient intelligence to, say, tell a star from a galaxy, and which can detect phenomena too subtle for human senses (such as a star blinking for an instant as an asteroid passes in front of it). We also rely on automated processes. Most of the galaxy images collected by the Sloan Digital Sky Survey were never viewed by humans but were instead extracted from wide-field images reduced in an automatic pipeline.
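
To make the idea concrete, here is a minimal sketch in Python of the kind of automated filtering step such a pipeline performs. The classification rule (comparing a point-source measurement with an extended-source measurement) is in the spirit of how surveys like the SDSS separate stars from galaxies, but every number, field name and threshold below is invented for illustration:

```python
# Hypothetical, simplified pipeline step: classify each detection as star or
# galaxy, and flag sudden brightness changes as candidate transients, so humans
# only ever see a small, pre-filtered fraction of the raw detections.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Detection:
    source_id: str
    psf_mag: float              # brightness measured assuming a point source
    model_mag: float            # brightness measured with an extended-source model
    prev_mag: Optional[float]   # brightness at the same position in an earlier image

def classify(d: Detection) -> str:
    # Point sources (stars) look much the same under both measurements;
    # extended sources (galaxies) differ noticeably.
    return "star" if abs(d.psf_mag - d.model_mag) < 0.1 else "galaxy"

def is_transient_candidate(d: Detection, threshold: float = 1.0) -> bool:
    # Flag anything that has brightened or faded sharply since the last epoch.
    return d.prev_mag is not None and abs(d.psf_mag - d.prev_mag) > threshold

detections = [
    Detection("obj-001", psf_mag=18.20, model_mag=18.21, prev_mag=18.19),
    Detection("obj-002", psf_mag=19.50, model_mag=18.70, prev_mag=None),
    Detection("obj-003", psf_mag=17.00, model_mag=17.02, prev_mag=19.50),
]

for d in detections:
    flag = " <- transient candidate" if is_transient_candidate(d) else ""
    print(f"{d.source_id}: {classify(d)}{flag}")
```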

 

So, modern astronomy employs autonomous, semi-intelligent instruments which relay data to datacentres, and those datacentres use various techniques to filter the data further before finally relaying it to the screens that professional astronomers actually look at.

 

DATA-INTENSIVE SCIENCE

Some have argued that science itself is undergoing a dramatic change thanks to the petabyte age, giving rise to ‘Data-Intensive Science’. Traditionally, science has been built around testable hypotheses, and crucial to this method are models that describe underlying mechanisms. With such a model in hand, correlation can be confidently connected with causation. But Chris Anderson of Wired Magazine argued:

 

“Petabytes allow us to say: ‘Correlation is enough’…We can analyse the data without hypotheses about what it might show. We can throw the numbers into the biggest computing clusters the world has ever seen and let statistical algorithms find patterns where science cannot”.
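
The approach Anderson describes can be sketched very simply. The following Python fragment (with an entirely invented dataset and variable names) scans every pair of variables for strong correlations without any prior hypothesis about which relationships should exist; real data-intensive pipelines do the same thing at vastly greater scale:

```python
# 'Correlation is enough': exhaustively look for strongly correlated pairs of
# variables in a dataset, with no model of why they might be related.

import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 1000
rainfall = rng.normal(50, 10, n)
data = {
    "rainfall": rainfall,
    "river_level": rainfall * 0.8 + rng.normal(0, 3, n),  # genuinely related
    "web_searches": rng.normal(200, 40, n),                # unrelated noise
}

# No hypothesis in advance: just scan every pair and report the strong correlations.
for a, b in itertools.combinations(data, 2):
    r = np.corrcoef(data[a], data[b])[0, 1]
    if abs(r) > 0.5:
        print(f"{a} ~ {b}: r = {r:.2f}")
```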

 

It should be emphasised that we are not talking about AIs pushing human experts towards obsolescence here. Rather, we are talking about an approach to ultra-intelligence involving cooperation between networks of machines with ‘non-humanlike intelligence’ capable of exploring datasets in ways impossible for humans, and humans employing skills like pattern recognition that machines struggle with. The trick is for these to interoperate effectively, such that the strengths of one compensate for the weaknesses of the other.

 

No human, for example, can comprehend an equation with several hundred million variables, but Google’s clusters handle such datasets without difficulty (Google effectively converts the entire web into one big equation with several hundred million variables, which are the page ranks of all the web pages, plus billions of terms, which are all the links). But, equally, the web contains lots of information humans comprehend easily, such as the context of visual images, which is profoundly hard for machines to make sense of. So, collectively, Google and its associated users form an entity that can mine vast sets of data for relevant information and extract useful knowledge from it.
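
To see what that ‘big equation’ looks like in miniature, here is a toy PageRank calculation in Python over a hypothetical four-page web. The real system solves the same kind of fixed-point equation, only with hundreds of millions of unknowns:

```python
# Toy PageRank by power iteration over a tiny, made-up web of four pages.
import numpy as np

links = {0: [1, 2], 1: [2], 2: [0], 3: [0, 2]}   # links[i] = pages that page i links to
n = len(links)
damping = 0.85

rank = np.full(n, 1.0 / n)
for _ in range(50):
    new_rank = np.full(n, (1.0 - damping) / n)
    for page, outgoing in links.items():
        share = rank[page] / len(outgoing)       # each page shares its rank among its links
        for target in outgoing:
            new_rank[target] += damping * share
    rank = new_rank

print(rank)   # each entry is one 'variable' in the giant equation
```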

 

KNOWLEDGE-MANAGEMENT

The most important contribution computers and software tools can bring in this context is not intelligence per se, but rather knowledge management. This is necessary because science is becoming increasingly reliant on large, distributed teams of specialists collaborating around the world.

 

The ‘Human Brain Project’, for instance, will rely on collaborations from teams in Switzerland, Germany, Spain, France (to name a few of the countries involved) drawing on expertise in areas like ‘clinical neuroscience’, ‘pharmacology’, ‘numerical analysis’, ‘animal physiology’ and ‘robotics and mechatronics’.

 

Multidisciplinary science faces a grand challenge, in that science throughout the 20th century fragmented into more and more specialised disciplines, with vocabularies largely incomprehensible to outsiders. This ultra-specialisation means that a scientist in one field might need to access the same data as another scientist, but from a very different perspective. The challenge, then, is to organise the world’s data so that it is easily accessible and simple to share across boundaries of specialised knowledge.

 

Fundamental to this approach is a drive to ‘objectify knowledge’, organising it into standard, machine-understandable representations. Whereas today’s cloud computing services are chiefly focused on scalable platforms for computing, tomorrow’s will be much more concerned with the management of knowledge, driven by semantic approaches such as machine encodings of terms, concepts, and relationships. Contemporary examples of this ‘knowledge layer’ include the ‘Open Web Alliance’, an “open collaborative community (seeking) to organise the massive amounts of information flooding the biological sciences and other sciences”. Another example is Wolfram Alpha, an “online service that computes answers and relevant visualisations from a knowledge base of curated, structured data”.
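
What ‘machine-understandable representations’ means in practice can be illustrated with a minimal sketch: facts stored as subject-predicate-object triples, in the spirit of semantic-web encodings. The triples, names and queries below are invented purely for illustration:

```python
# A tiny 'knowledge layer': facts as (subject, predicate, object) triples that
# specialists from different fields can query in their own terms.

triples = [
    ("TP53", "is_a", "gene"),
    ("TP53", "associated_with", "tumour suppression"),
    ("ibuprofen", "is_a", "drug"),
    ("ibuprofen", "inhibits", "COX-2"),
    ("COX-2", "is_a", "enzyme"),
]

def query(subject=None, predicate=None, obj=None):
    """Return every triple matching the given pattern (None acts as a wildcard)."""
    return [
        (s, p, o)
        for (s, p, o) in triples
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    ]

# A geneticist and a pharmacologist can ask the same store different questions:
print(query(predicate="is_a", obj="gene"))   # which entities are genes?
print(query(subject="ibuprofen"))            # everything recorded about ibuprofen
```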

 

Ultimately, the goal is to organise the world’s data so that it is a simple matter to look at some data and find all the information relevant to it, and to gain insights by fusing data from multiple disciplines and domains. Jeannette Wing, professor of computer science at Carnegie Mellon University, has talked about how computer science techniques and technologies are being applied to different disciplines, resulting in ‘computational thinking’. So, we have ‘computational ecology’ (concerned with simulating ecologies) and ‘eco-informatics’ (concerned with collecting and analysing ecological information). We have ‘computational biology’ (concerned with simulating biological systems) and ‘bioinformatics’ (concerned with the study of methods for storing, retrieving, and analysing biological data). Wing wrote:

 

“Computational methods and models give us the courage to solve problems and design systems that no one would be capable of tackling alone”.

 

Today, if you search for images on Google, it does a pretty good job of finding relevant results. This is not thanks to AI alone, but to a combination of human knowledge, choices about that knowledge recorded in simple acts like clicking on a hyperlink or altering a search query, and computer networks mining that data so as to organise it more effectively.
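
A minimal sketch of how those recorded choices can feed back into ranking is given below. The pages, scores and blending rule are all invented; the point is only that many individually tiny signals accumulate into a reordering:

```python
# Hypothetical click-feedback ranking: results people actually click on are
# nudged up in future result lists.

from collections import defaultdict

base_score = {"page_a": 0.50, "page_b": 0.48, "page_c": 0.45}   # e.g. text-match relevance
clicks = defaultdict(int)                                        # accumulated human signal

def record_click(page):
    clicks[page] += 1

def ranked_results(weight=0.01):
    # Blend the static relevance score with the accumulated click signal.
    return sorted(base_score,
                  key=lambda p: base_score[p] + weight * clicks[p],
                  reverse=True)

# Many individually tiny acts (single clicks) add up:
for _ in range(10):
    record_click("page_c")

print(ranked_results())   # page_c (0.45 + 0.10) now outranks page_a and page_b
```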

 

Whereas before we relied upon hierarchical organisations to produce things like vast collections of images and encyclopaedias, now we can rely on a kind of automatic pooling of knowledge, in which patterns of user activity lay down trails and systems of knowledge self-organise into categories richer and more complex than the relatively simplistic categories we used to order our knowledge by. We see the rise of ‘meganiches’, in which social networking enables individuals with rare and specialised interests to find like-minded souls, organising into groups as large as any previously achieved by mainstream media.

 

A lot of this collaborative effort is conducted freely, without expectation of extrinsic reward. Kevin Kelly noted:

 

“One study found that only forty percent of the web is commercial. The rest runs on duty or passion”.

 

One result of this freely-given effort is a reduction in the cost of failure. By and large, organisations that have employees are biased toward steady producers. But with something like Wikipedia we see a huge imbalance in participation. A typical article will have hundreds of people contributing one edit each, and only a few contributing a substantial portion of the main body of text. But since nobody is being paid, that is absolutely fine, and there is no temptation to try to address this inequality. Individually, of course, single edits amount to negligible improvement. But those simple acts accumulate. Wikipedia harnesses different levels of effort and different skills and organises it all into what is probably the foremost reference source of our time. Remember, it is not the technology of Wikipedia alone that achieved this, but that technology and the society of human users it supports.
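
The arithmetic behind that accumulation is easy to sketch. The numbers below are invented, but they show how a long tail of one-edit contributors, each individually negligible, can end up responsible for a large share of an article:

```python
# Invented participation figures illustrating the long tail of small contributions.
heavy_contributors = [400, 250, 150]   # words contributed by a handful of regulars
one_edit_contributors = [5] * 300      # 300 people adding roughly 5 words each

total = sum(heavy_contributors) + sum(one_edit_contributors)
tail_share = sum(one_edit_contributors) / total

print(f"Article length: {total} words")
print(f"Share contributed by one-edit contributors: {tail_share:.0%}")  # ~65%
```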

 

Similarly, to ask Google something is not simply to rely on large clusters of computers in some data farm somewhere. It is also to rely on human effort, much of it negligible when considered individually but producing powerful effects once those individual efforts are pooled together.

 

THE NEED FOR INTERDISCIPLINARY KNOWLEDGE

At some point in history we crossed a threshold, from technologies whose design could, in principle, be undertaken by an individual, to technologies that absolutely require interdisciplinary knowledge spread across a great many people. Compare the Large Hadron Collider to the Great Pyramid. Obviously, the construction of both was of a scale no individual could undertake. But I do believe an individual could draw up a complete blueprint of the Great Pyramid, whereas no single person could ever design a machine as complex as the LHC. Such machines absolutely require collaborative creation supported by networks of communications and information technologies.

 

So, if we now have technologies whose complexity rules out their being designed by a single human mind, are they not, by definition, the result of superhuman effort? In a private conversation, J. Storrs Hall told me:

 

“I think it should be clear that the Internet is already a superhuman entity. Hell, even a ten-person company is a superhuman entity. The question is, is it one that can cause a singularity?”.

WHY THE INTERNET SCENARIO LEADS TO A SINGULARITY

I think so, for a couple of reasons. One was described by Luis von Ahn (the inventor of CAPTCHA) in a TED talk called ‘Massive-Scale Online Collaboration’. You might have heard of ‘Dunbar’s Number’, which refers to the maximum number of individuals with whom one can maintain stable social relationships. A similar ceiling seems to apply to organised projects: if you look at the number of people involved in large-scale undertakings such as the Panama Canal or the Apollo Moon landing, they all involved roughly the same number of participants, somewhere in the region of 120,000. This is because it has always been impossible to coordinate, let alone pay, teams whose number of participants exceeded the hundreds of thousands.

 

However, the Internet is enabling us to assemble teams numbering in the hundreds of millions. It is likely that you yourself have been part of some such massive-scale online collaboration. Every time you type a reCAPTCHA, for instance, you are one of hundreds of millions of people helping to digitise the world’s books.
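
The mechanism can be sketched in a few lines. Each challenge pairs a word the system already knows with a scanned word that OCR could not read; answering the known word correctly earns trust for the unknown one, and agreement across many users settles its reading. Everything below (word images, thresholds, function names) is invented for illustration:

```python
# Simplified sketch of the idea behind reCAPTCHA-style book digitisation.
from collections import Counter, defaultdict

votes = defaultdict(Counter)   # unknown word image -> candidate readings

def submit(known_word, known_answer, unknown_image, unknown_answer):
    if known_answer.strip().lower() == known_word:        # user proved they can read
        votes[unknown_image][unknown_answer.strip().lower()] += 1

def transcription(unknown_image, min_votes=3):
    if not votes[unknown_image]:
        return None
    reading, count = votes[unknown_image].most_common(1)[0]
    return reading if count >= min_votes else None        # not settled yet

# Hundreds of millions of tiny contributions look like this:
for answer in ["tranquil", "tranquil", "tranqui1", "tranquil"]:
    submit("upon", "upon", "scan_0042.png", answer)

print(transcription("scan_0042.png"))   # 'tranquil'
```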

 

Equipped with the right technological aids, ordinary people can achieve great things. It took teams of gamers playing ‘Foldit’ just ten days to model the Mason-Pfizer monkey virus retroviral protease, a feat that had eluded scientists for fifteen years.

 

If a hundred thousand people working together can put a man on the moon, what might a hundred million, working together with vast computing resources and data-intensive science, be capable of?

 

The other reason this could lead to a Singularity is that the plethora of objects entering the digital domain does not only enable a dramatic speedup in the recombination of things. Thanks to an ever-denser communications network and increasingly efficient search technologies, group formation is also becoming easier and easier. Moreover, a machine-curated knowledge layer would go some way to meeting Vernor Vinge’s challenge:

 

“We need to extend the capabilities of search engines and social networks to produce services that can bridge barriers created by technical jargon and forge links between unrelated specialities, bringing research groups with complementary problems and solutions together”.

 

With many of the costs of group formation greatly reduced, it would be viable to pursue real blue-sky thinking and explore multiple possibilities. Mega-teams with interdisciplinary expertise would form, break apart, reform in different combinations, as the projects they are involved in fail to take off or show signs of advancing toward some goal. As Clay Shirky reasoned:

 

“Open systems, by reducing the cost of failure, enable their participants to fail like crazy, building on the successes as they go”.

 

When we combine this more rapid exploration of possibility space, via recombinations of specialised knowledge, with an increasingly efficient assessment of our worldviews against an objective reality we can now measure so powerfully (thanks to the network of sensors monitoring the planet’s various systems), the result should be paradigm shifts in scientific theory happening more often, and faster.

 

It will not just be scientific research that is improved by the increasing effectiveness of group formation, data analysis and the sensing of global systems. In private correspondence, David Brin told me:

 

“One important aspect is that we will see better and better tools for discourse that allow more rapid building of ad-hoc teams of humans and AI that directly solve problems in real time: “Smart mobs” that bypass slower tools like corporations and governments”.

There are multiple pathways to a technological singularity, from building artificial superintelligence to genetically engineering humans to be super-geniuses. But it seems to me that the ‘Internet Scenario’ is the one most likely to get us there first, because it relies on trends already well underway, driven by basic human needs to organise into groups and communicate knowledge. This scenario does not rely on designing machines to do everything people are good at (a profoundly difficult challenge), nor does it involve turning people into machines (a moral and ethical minefield if ever there was one). It relies only on the further co-evolutionary development of humanity and its technology. Human brains are particularly suited to this form of symbiosis.

 

NATURAL-BORN CYBORGS

One reason why this is so can be found by considering vision. The strange thing about vision is that there is a contradiction between the world we see and what we should see, given the construction of the eye. Our daily experience is of a full-colour, highly detailed scene. But the structure of the eye is such that what we should actually see is a visual scene in which only the centre is in sharp focus and full colour, while the edges are blurry and devoid of colour.

 

It is believed that the visual system does not construct a detailed model of what is ‘out there’ at all, but settles instead on encoding a rough gist of the scene. But, at any moment, by repositioning the fovea via sequences of rapid eye movements known as saccades, we can acquire detailed information from any particular point ahead of us. According to Andy Clark, where possible the brain prefers to rely on ‘meta-knowledge’, which basically means ‘knowing how to find out’. In his own words:

 

“Having a super-rich, stable inner model of the scene could enable you to answer certain questions rapidly and fluently, but so could knowing how to rapidly retrieve the very same information as soon as the question is posed”.

 

In Clark’s view, the belief that the brain is the source of human intelligence is only partially correct. In fact, human intelligence can only be understood by considering interactions between the brain, the body, and cultural and technological environments. Clark explained:

 

“What the human brain is best at is learning to be a team player in a problem-solving field of nonbiological props, scaffoldings, instruments and resources- natural-born cyborgs ever-eager to dovetail their activity to the increasingly complex envelopes in which they develop, mature and operate”.

 

Human brains are poised to incorporate ubiquitous, invisible-in-use technologies into their mental models. To illustrate this point, Clark pointed out that, when asked “do you know the time?”, a person with a watch would say “yes”. But if you ask someone whether they know what such-and-such a word means, they would reply “no, but I can find out” and go and consult a dictionary. Notice, though, how similar the two scenarios are. A person is asked something they do not know, and they consult some tool in order to find out.

 

The difference lies in the ease with which that information can be retrieved. The more ‘invisible-in-use’ a technology becomes, the more akin to our neural substrates it is. While writing, for example, an author is using the posterior parietal subsystems, which make appropriate adjustments to hand orientation and finger placement. Only, nobody uses such systems in any conscious sense. Similarly, if you asked me, “can you define the word ‘happy’?” I would not reply, “no, but I can retrieve the information from my memory systems”. I would just tell you.

 

Equipped with a watch, then, a person is a hybrid biotechnological system whose conscious self represents a fairly thin layer, sitting between unconscious neural subsystems ‘below’ and cultural/technological systems ‘above’. These systems all operate harmoniously to enable ‘you’ (this system that includes the wristwatch and the knowledge of how to use it) to know the time. It seems reasonable to assume, then, that if a dictionary could be accessed as easily, we would incorporate it into our mental models of who we are and what we are capable of doing.

 

YOUR EXO-CORTEX

Increasingly, of course, we are inhabiting cultural and technological environments that enable us to access all kinds of information whenever we need it. When asked how we would know if the Internet and its human users had ‘woken up’ as a superorganism, Valkyrie Ice told me:

 

“The creation of a ubiquitous device that contains a personal tutor/ assistant/memory manager/researcher…Oh, wait, that’s what smartphones are becoming. Gee, looks like the scenario is already underway. It’s just going to take a few more years to improve upon. Once Watson and Siri develop into something more akin to [John] Smart’s ‘digital twin’, and enable every individual to have all-the-time access to the full realm of human knowledge, along with an interface that optimises to fit each individual’s learning and thinking patterns, this will be the most likely outcome”.

 

Valkyrie is talking about mobile or wearable devices that offer near-constant access to cloud-based apps, and knowledge-management software that ‘learns to be me’, in other words, learns how best to complement an individual’s strengths and weaknesses. It has long been known that the brain is highly plastic. Neural constructivists believe the brain’s adaptability extends beyond merely fine-tuning existing circuitry and involves the actual construction of new neural circuitry. This would make the brain a constructive learning system, in which the basic computational resources alter and expand (or contract) as the system learns. As it is experience that drives this process, it would mean we come to have designer brains purpose-built to dovetail with reliable problem-solving systems.

 

At the same time, those external systems are also becoming increasingly adaptable, ‘learning’ from human users so as to provide better services. Google captures the search behaviour of its users, using everything from how we punctuate our queries to how often we click on the first result, along with many other patterns of behaviour, to guide future improvements to the system. We are progressing from external cognitive systems that evolved over a period of generations to systems evolving in near realtime, as petabytes of data from a plethora of networked sensors capture user behaviour to be analysed by Google-sized computing resources.

 

TURNED INTO A COMMODITY?

We are offloading more and more aspects of our thinking to external systems. But, who really benefits? The individual? Or those vast systems we are plugged into? It is rightly pointed out that services which appear free are actually paid for in data about ourselves. As media theorist Douglas Rushkoff pointed out, a Facebook user is not really a consumer. Rather, the user is the commodity in which the company ‘Facebook’ trades. In ‘The Blind Giant’, Nick Harkaway wrote:

 

“Being a consumer, a customer, implies a measure of control over the relationship…The commodity, on the other hand, gets the minimum necessary attention to keep it in a marketable state”.

 

In this context, being in a marketable state means being somebody who is a good target for advertising. The more the individual can be pigeonholed into categories, the more effective advertising will be. Are the friend recommendations you receive and the search results you get serving to expand your horizons and open your mind, or are they serving to put you in a bubble that narrows your view, making you a more convenient commodity?

 

It must surely be the case that companies like Google, fed daily with petabytes of data on social behaviour and possessing the combined computing and brain power to analyse it, know far more than the individual does about what influences us to buy and what psychological drives push us toward that final decision. In a world in which we will depend so much on services like Google Now to help organise our lives, it would behove us to learn more about what influences us, so we can apply those systems in ways that help us make better, more informed decisions.

 

We need to know what can safely be unlearned: which knowledge was once vital but is now irrelevant in the digital age. We need to be sure which aspects of cognition can be offloaded to external systems and which should remain ‘within the brain’ if we do not wish to grow less intelligent. Perhaps most importantly, we need to encourage the use of social networks to create smart mobs, to become members of groups that are truly much more than the sum of their parts, rather than trap ourselves in bubbles that merely reinforce our prejudices.

 

WEB, AWAKE!

At the macroscale, where do we stand right now? Mike Wing, IBM’s Vice President of Strategic Communications, reckoned, “the planet itself- natural systems, human systems, physical objects- has always generated an enormous amount of data, but we weren’t able to see it, hear it, capture it. Now we can, because all of this stuff is instrumented. And it’s all interconnected… So, in effect, the planet has grown a central nervous system”.

 

This central nervous system is enabling us, as components of a superorganism, to tune in on the heartbeat of nations, to organise smart mobs that can help bring down corrupt regimes, that can track weather patterns and help reduce the human cost of hurricanes. It is bringing the world and its people into our homes, and exposing us (for better, for worse, and certainly both) to the world.

 

When, though, will the final push that sends us over the threshold to a post-singularity era happen? More importantly, how will we know it has happened? If we consider that the Internet scenario involves a symbiotic relationship, an alliance of mutual benefit between human and technological systems, I would say that Michael Chorost provided the best answer. He wrote:

 

“There may come a day when we start to see behaviour that simply does not make sense in terms of what we know about hardware, software, and human behaviour”.

 

That would indeed be a sign that the Web had ‘woken up’ as a fundamentally new kind of entity.