There may be a bio-nano-IT-cognitive revolution in the making but most of us who were used to the relative good life in the last cycle are not in these businesses. The disruption caused by the internet will continue to intensify. We will be in a chaotic system for some time to come. The globalisation revolution of the last cycle has resulted in our human brain capacity being completely outrun by ‘complexity’. Any mathematics of control that we attempt is still dependent on systems being linear, yet nearly all the systems that we have created are chaotic and non-linear. As tool-using creatures, we can make things happen but that does not mean that we understand the tools we use. No engineer actually understands how the Dreamliner works in total but only how each component of it works (I am indebted to Dr. Peter Cochrane, former BT CTO, for that insight). Business and ‘wealth creation’ are definitely non-linear and they require non-linear, ‘chaotic’ and intuitive ways of thinking. The CIA will spend $1bn on just one key AI-related application for precisely that reason but the chaos will always exceed the amount of resource available to manage it.
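To make the linear-versus-non-linear point concrete, consider the logistic map, the textbook example of a deterministic but chaotic system: two starting points that agree to six decimal places soon produce trajectories that bear no resemblance to each other, so linear extrapolation from a measurement is useless. This is a toy sketch of my own, not anything from Cochrane's talk; the parameter values are illustrative.

```python
def logistic_trajectory(x0, r=3.9, steps=60):
    """Iterate the logistic map x -> r*x*(1-x), a minimal non-linear system.

    With r around 3.9 the map is in its chaotic regime: tiny differences
    in the starting point are amplified at every step.
    """
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

# Two starting points that agree to six decimal places...
a = logistic_trajectory(0.200000)
b = logistic_trajectory(0.200001)

# ...diverge until the trajectories share nothing: the largest gap
# grows from one part in a million to order one.
gap = max(abs(x - y) for x, y in zip(a, b))
print(gap)
```

The same sensitivity to initial conditions is why ‘control’ of markets, politics or internet-scale systems by linear models fails: the model's input error, however small, swamps the forecast.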
Our biggest cultural problem now is what Dr. Peter Cochrane has called ‘stove pipe’ education. This teaches linear thinking (without understanding) as a preparation for a world that is chaotic. Such an approach might work in rule-based systems for a while, but not forever, and not for long at all in chaotic ones. The result is sclerosis. Our entire further education system may not be fit for purpose, especially when ‘facts as facts’ (as opposed to problem-solving) become wholly unimportant in our memories so long as we can retrieve them as true from machine memory. Cochrane’s solution was to ‘trust in the machine’ and not be frightened of it – only machine intelligence can be expected to develop sufficient computational power to deal with the computer modelling, war-gaming and artificial intelligence requirements of a world that is necessarily probabilistic and not deterministic.
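The probabilistic style of modelling alluded to here is typically Monte Carlo: rather than deriving one deterministic answer, you simulate many random runs and read off frequencies. A minimal illustration of my own (all the probabilities and names are invented for the example):

```python
import random

def simulate_engagement(p_success_per_round=0.3, rounds=5, trials=100_000):
    """Monte Carlo estimate of the chance an objective is achieved at
    least once within a fixed number of rounds.

    Each trial simulates one 'war game'; the estimate is simply the
    fraction of trials in which the objective was ever achieved.
    """
    wins = 0
    for _ in range(trials):
        if any(random.random() < p_success_per_round for _ in range(rounds)):
            wins += 1
    return wins / trials

estimate = simulate_engagement()
exact = 1 - (1 - 0.3) ** 5  # closed form for this simple case: ~0.832
print(estimate, exact)
```

In this toy case a closed-form answer exists to check against; in realistic war games and economic models it does not, which is exactly why the probabilistic, simulation-heavy approach demands machine-scale computation.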
But the fundamental problem with all existing proto-AI is the veracity of its inputs – in other words, for the internet (for example) to be a reliable AGI, it needs to be cleaned up (which, of course, will not happen to data in an interactive system in which human beings are the providers of data). A future AGI that is fundamentally useful has to be based on a ‘veracity machine system’, combining its innate algorithmic base with an experiential base – in other words, effectively reproducing the non-linear nature of the human being and becoming, eventually, a ‘person’. But that is a very long way away.
This is why we are now subject to so many de-stabilising and unexpected system collapses and crashes. Our technological and socio-political complexity is beyond human capacity to control, so we will have to use increasing computing power (machine intelligence) even to approximate an understanding of what is going on. And yet these same advanced machine systems are likely to be just as flawed as we are. Human beings, markets, politics and natural systems are always non-linear and become more so to the degree that individuals increase in number, live longer, acquire more data themselves and become self-reflexive as subjects of control and management. The conspiracy theory, for example, may be argued to be a rational mode of resistance to any attempt at totalising control of humanity. The more pure reason is imposed on the masses, the more grimoires and conspiracies become reasonable. A world of AI might see ever-increasing irrationality simply for that reason.
Similarly, sclerosis and a failure to innovate take place when there is an attempt to make non-linear systems linear. This may be the classic result (leading to collapse) of ‘bean counter’ attempts to limit creativity in technology-driven corporations but it is also the error of States when they try to respond to the human desire for order with planning that makes things worse. The current economic crisis is going to see many attempts to put brakes on chaos but these may simply slow down the return to growth. Since anything human is certainly chaotic and non-linear, we may have to encourage more human chaotic responses if we are going to see things settle more quickly into more stable patterns.
If transformations include ‘shipping designs and not goods’ or algal and bacterial conversion into energy or the use of thorium, it is more than possible that global trade will be massively disrupted by loss of demand for export manufacturing and the improved ability to manufacture and produce sufficient energy on site. We, as humans, are about to enter a territory where we stand at the interface between major changes in bio-engineering, nano-technology and AI/information technology. Is it, for example, inevitable that the internet system will become to all intents and purposes a form of free-floating AGI into which we are inserted? We are not so sure that we should have such faith in machine intelligence. Why? Precisely because the chaos extends to the data from which machine intelligence makes its analyses, creating naturally error-driven calculations that become part of the problem rather than part of the solution. The internet of things and the mobile internet apparently make matters different because data can be measured ‘autonomically’ in real time and in movement. Subject to differential privacy terms, a mobile-based AGI could become reflective of the chaotic non-linear state of the general population. But this is not a total system by any means. German-style privacy obsessions and fears might hand a competitive advantage to both American-style freedom and Chinese indifference to privacy, and yet those same German obsessions might produce administrative arrangements that challenge the ability of Americans and Chinese to exploit those theoretical advantages.
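The ‘differential privacy terms’ mentioned above have a standard concrete form: the Laplace mechanism, which adds noise calibrated to a query's sensitivity so that population-level measurement can coexist with individual privacy. A minimal sketch of my own; the epsilon value, the simulated population and the count query are all illustrative assumptions, not anything proposed in the source.

```python
import math
import random

def laplace_sample(scale):
    """Draw from a Laplace(0, scale) distribution via inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(records, predicate, epsilon=0.5):
    """Release a count with epsilon-differential privacy (Laplace mechanism).

    A counting query has sensitivity 1 (adding or removing one person
    changes the count by at most 1), so Laplace noise with scale
    1/epsilon suffices.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_sample(1.0 / epsilon)

# Illustrative only: how many of 10,000 simulated users are on mobile?
population = [{"mobile": random.random() < 0.6} for _ in range(10_000)]
noisy = private_count(population, lambda r: r["mobile"])
print(round(noisy))
```

The released figure is accurate to within a few counts at this scale, yet no individual record can be confidently inferred from it – which is the trade the text is gesturing at between measurement ‘in real time and in movement’ and German-style privacy concerns.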
The point is that machine intelligence merely adds to the complexity rather than resolving it. It might be extremely useful for the improved organisation of material science – notably healthcare and information access (insofar as information is bytes) – but it may be worse than useless in dealing with social organisation. Ironically, in the German case, where all are private, the consequent data might be more reliable than where none seem to be private but where many remove themselves from the system or manipulate their privacy in response to lack of privacy so as to skew the data. Some analysts now speak of a ‘conscious’ machine intelligence emerging within a very few years but the essence of human processing power is its economy of effort. Evolution has taught it to avoid useless data to maximise the interpretation of the data that it needs. Too much data, even analysed at close to light speed, is still not all data in real time – better but not perfect. Many ‘futurists’ and philosophers are driven here by their horror of irrational cognitive biases in humanity (ascribing war and many horrors to such flaws) and they look forward to machines that do not have these biases but it is these very cognitive biases that also make humans such formidable survivors, innovators, creators and chancers. And I take ‘being a chancer’ to be a positive attribute of humanity in terms of survival.
When it comes to ‘value’, Cochrane has suggested that human value to itself lies in three zones: problem solving (using tools to hand such as machine intelligence), where humans both define the problem and make intuitive leaps that ask the right questions of the tools; risk analysis along similar lines; and quality control to ensure that technological intelligence meets human needs. He has spoken eloquently against ‘optimisation’ strategies as precisely those strategies, like ‘bean counter’ strategies in business, that result in collapse. The best is, truly, the enemy of the good insofar as we solve problems in multiple ways over time where all variables can never be understood – in other words, functional pragmatism is the way forward. Tools are amplifications of muscle, amplifications of intellect and amplifications of mind, and this last in particular is much exercising the philosophers interested in human enhancement. Amplification of minds will be the most challenging in terms of what it is to be a human being but, whatever may happen in the long term, intelligence is always a tool and not an end in itself. There is, however, a broader consensus emerging about technology and the current crisis. The crisis in our existing socio-economic system is no simple adjustment to technology but a massive shift closer to that caused by printing – but bigger. As we have seen in Italy this week, our parliamentary democracy is no longer fit for purpose but neither are our legal structures and educational systems.
Our managerial and administrative class is out of its depth. Technology offers immense structural benefits in terms of ‘sustainability’ (more effective use of resources) and delivery of freedom, knowledge and health but the disruption of huge vested interests on which so many depend could break apart social cohesion.
But most disturbing of all perhaps, while some serious thought is going on about this in the US and China – thoughts that should, perhaps, scare us but are thoughts nevertheless – in Europe, and the UK in particular, our political class is sleepwalking into a future which may have no place for them. And how will we cope without our esteemed political class, if you will excuse a tinge of sarcasm?
This posting derives from a critical analysis of Dr. Peter Cochrane’s recent talk at the London Futurists. Needless to say, these are very much our views and not Cochrane’s except where I have referred to him directly and we take responsibility for any unintended misinterpretation. We have no connection to Cochrane except through membership of the London Futurists and you are referred to www.cochrane.org.uk and www.ca-global.org.
This article originally appeared here: http://tppr-analysis.blogspot.co.uk/2013/02/thinking-about-future-technology-crisis.html
For more information see also http://www.tppr.co.uk/