Thinking About the Future – Technology & Crisis

It is becoming clear that the tactic of simply holding on, hoping that an ‘inevitable’ upturn in economic conditions will arrive just in time to prevent massive dislocation of the political culture created in the 1980s, is no longer working.

The 1930s and the Twenty-First Century

Something more serious is going on: this is not just a ‘blip’ which will permit a return to ‘business as usual’. The analogy with the 1930s is not a foolish one. We see the same fundamental inability to get out of the mire, similar shifts of regional power and similar doubts about the competence of the ruling order to handle matters wisely. What we do not have, in the age of the internet, is the option of rearmament and war, or of tolerating creative-destructive inflation within a total war economy. An attempt, for example, to close borders to capital, conscript labour or sequester assets in the national interest would soon become part of the problem for elites rather than the solution. So this is at once the same and different. If we think back to what brought us out of the misery of the Great Depression, it was not so much the State itself as the State’s central role in mobilising innovation through war, something we forget that Ronald Reagan did as well as, or better than, FDR. The war is not the issue; the mobilisation is.
The reason to be ultimately optimistic lies in the fact that the new technologies now emerging will be truly transformative, provided that the dead weight of NGO conservatism, liberal hysteria and environmentalist anxiety does not increase the power of the State in order to block what it once enabled. But this will not be an easy revolution. 3D printing will do to manufacturing what the internet has done to services. Robotics, nanotechnology and bio-engineering will transform our lives, but none of this may happen on the time-scale of our own short- to medium-term commercial ambitions.

 

The Age of Disruption

There may be a bio-nano-IT-cognitive revolution in the making, but most of us who were used to the relative good life in the last cycle are not in these businesses. The disruption caused by the internet will continue to intensify, and we will be in a chaotic system for some time to come. The globalisation revolution of the last cycle has left our human brain capacity completely outrun by ‘complexity’. Any mathematics of control that we attempt still depends on systems being linear, yet nearly all the systems we have created are chaotic and non-linear. As tool-using creatures, we can make things happen, but that does not mean we understand the tools we use. No engineer actually understands how the Dreamliner works in total, only how each component of it works (I am indebted to Dr. Peter Cochrane, former BT CTO, for that insight). Business and ‘wealth creation’ are definitely non-linear and require non-linear, ‘chaotic’ and intuitive ways of thinking. The CIA will spend $1bn on just one key AI-related application for precisely that reason, but the chaos will always exceed the resources available to manage it.
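To make that point concrete, here is a minimal illustration of our own (nothing Cochrane presented): the logistic map is a one-line, fully deterministic rule, yet two starting points differing by one part in a billion produce trajectories that bear no resemblance to one another within a few dozen steps. This is exactly the situation in which a linear ‘mathematics of control’ gives false comfort.

```python
# Our illustrative sketch, not anything from Cochrane's talk: the logistic map,
# a simple deterministic but non-linear rule whose trajectories diverge
# exponentially from almost identical starting points.

def logistic_map(x, r=4.0):
    """One step of the logistic map: x -> r * x * (1 - x)."""
    return r * x * (1 - x)

x_a, x_b = 0.400000000, 0.400000001  # initial conditions differing by one part in a billion
for step in range(1, 41):
    x_a, x_b = logistic_map(x_a), logistic_map(x_b)
    if step % 10 == 0:
        print(f"step {step:2d}: a={x_a:.6f}  b={x_b:.6f}  gap={abs(x_a - x_b):.6f}")
```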

 

Stove Pipe Education

Our biggest cultural problem now is what Dr. Peter Cochrane has called ‘stove pipe’ education. This teaches linear thinking (without understanding) as a preparation for a world that is chaotic. Such an approach might work in rule-based systems for a while, but not forever, and not for long in chaotic ones. The result is sclerosis. Our entire further education system may not be fit for purpose, especially when ‘facts as facts’ (as opposed to problem-solving) become unimportant to hold in our own memories so long as we can retrieve them as true from machine memory. Cochrane’s solution was to ‘trust in the machine’ and not be frightened of it: only machine intelligence can be expected to develop sufficient computational power to handle the computer modelling, war gaming and artificial intelligence requirements of a world that is necessarily probabilistic and not deterministic.
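By way of illustration only (ours, not Cochrane’s), the difference between deterministic and probabilistic thinking is easy to show: a deterministic plan adds up the ‘most likely’ figures and gets one answer, while a Monte Carlo run over the same assumed uncertainties yields a distribution of outcomes and a sense of how wrong the single answer can be. The task names and durations below are hypothetical.

```python
# A hedged sketch of probabilistic rather than deterministic planning,
# using hypothetical task durations (in weeks).
import random

def simulate_project(n_runs=10_000):
    """Sample total duration of three tasks with uncertain (triangular) durations."""
    totals = []
    for _ in range(n_runs):
        design = random.triangular(2, 8, 4)   # optimistic, pessimistic, most likely
        build = random.triangular(4, 12, 6)
        test = random.triangular(1, 6, 2)
        totals.append(design + build + test)
    return sorted(totals)

totals = simulate_project()
print("deterministic 'plan' (most likely values):", 4 + 6 + 2, "weeks")
print("median simulated duration:", round(totals[len(totals) // 2], 1), "weeks")
print("90th percentile:", round(totals[int(len(totals) * 0.9)], 1), "weeks")
```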

But the fundamental problem with all existing proto-AI is the veracity of its inputs. For the internet (for example) to become a reliable AGI, it would need to be cleaned up, which, of course, will not happen to data in an interactive system in which human beings are the providers of the data. A future AGI that is fundamentally useful would have to be based on a ‘veracity machine system’, combining its innate algorithmic base with an experiential base, in other words effectively reproducing the non-linear nature of the human being and becoming, eventually, a ‘person’. But that is a very long way away.

 

Back to the Total System

This is why we are now subject to so many destabilising and unexpected system collapses and crashes. Our technological and socio-political complexity is beyond human capacity to control, so we will have to use ever greater computing power (machine intelligence) even to approximate an understanding of what is going on. And yet these same advanced machine systems are likely to be just as flawed as we are. Human beings, markets, politics and natural systems are always non-linear, and they become more so as the number of individuals increases and as those individuals live longer, acquire more data of their own and become self-reflexive about themselves as subjects of control and management. The conspiracy theory, for example, may be argued to be a rational mode of resistance to any attempt at totalising control of humanity. The more pure reason is imposed on the masses, the more grimoires and conspiracies become reasonable. A world of AI might see ever-increasing irrationality for precisely that reason.

Similarly, sclerosis and a failure to innovate occur when there is an attempt to make non-linear systems linear. This may be the classic result (leading to collapse) of ‘bean counter’ attempts to limit creativity in technology-driven corporations, but it is also the error of States when they respond to the human desire for order with planning that makes things worse. The current economic crisis is going to see many attempts to put brakes on chaos, but these may simply slow down the return to growth. Since anything human is certainly chaotic and non-linear, we may have to encourage more chaotic human responses if we are to see things settle more quickly into stable patterns.

 

Cautious Faith in Technology

If the coming transformations include ‘shipping designs and not goods’, algal and bacterial conversion into energy or the use of thorium, it is more than possible that global trade will be massively disrupted by the loss of demand for export manufacturing and by the improved ability to manufacture and produce sufficient energy on site. We, as humans, are about to enter territory where we sit at the interface between major changes in bio-engineering, nanotechnology and AI/information technology. Is it, for example, inevitable that the internet system will become, to all intents and purposes, a form of free-floating AGI into which we are inserted? We are not so sure that we should have such faith in machine intelligence. Why? Precisely because the chaos extends to the data from which machine intelligence makes its analyses, creating naturally error-driven calculations that become part of the problem rather than part of the solution. The internet of things and the mobile internet apparently make matters different because data can be measured ‘autonomically’, in real time and in movement. Subject to differential privacy terms, a mobile-based AGI could become reflective of the chaotic, non-linear state of the general population. But this is not a total system by any means. German-style privacy obsessions and fears might result in competitive advantage for both American-style freedom and Chinese indifference to privacy, and yet those same German obsessions might produce administrative arrangements that challenge the ability of Americans and Chinese to exploit those theoretical advantages.
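Since we lean on the phrase, it is worth saying what ‘subject to differential privacy terms’ could mean in practice. The sketch below is ours and purely illustrative: the standard Laplace mechanism adds calibrated noise to an aggregate count so that the published figure reveals almost nothing about any single contributor. The query and the numbers are hypothetical.

```python
# Our illustrative sketch of the Laplace mechanism for differential privacy;
# not a description of any system mentioned in the article.
import math
import random

def laplace_noise(scale):
    """Sample Laplace(0, scale) noise by inverse transform sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def private_count(true_count, epsilon=0.5, sensitivity=1.0):
    """Release a count with epsilon-differential privacy: smaller epsilon, more noise."""
    return true_count + laplace_noise(sensitivity / epsilon)

# e.g. publishing how many devices reported a location without exposing any one user
print(round(private_count(12_345)))
```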

 

The Conscious Machine

The point is that machine intelligence merely adds to the complexity rather than resolving it. It might be extremely useful for the improved organisation of material science, notably healthcare and information access (insofar as information is bytes), but it may be worse than useless in dealing with social organisation. Ironically, in the German case, where all are private, the resulting data might be more reliable than in systems where no one seems to be private but where many remove themselves or manipulate their privacy in response to that lack of privacy, skewing the data. Some analysts now speak of a ‘conscious’ machine intelligence emerging within a very few years, but the essence of human processing power is its economy of effort: evolution has taught us to discard useless data in order to maximise the interpretation of the data we need. Too much data, even analysed at close to light speed, is still not all data in real time; better, but not perfect. Many ‘futurists’ and philosophers are driven here by their horror of irrational cognitive biases in humanity (ascribing war and many horrors to such flaws), and they look forward to machines that do not have these biases, but it is these very cognitive biases that also make humans such formidable survivors, innovators, creators and chancers. And I take ‘being a chancer’ to be a positive attribute of humanity in terms of survival.

 

The Best as Enemy of the Good

When it comes to ‘value’, Cochrane has suggested that human value to itself lies in three zones: problem-solving (using tools to hand, such as machine intelligence), where humans both define the problem and make the intuitive leaps that ask the right questions of the tools; risk analysis along similar lines; and quality control to ensure that technological intelligence meets human needs. He has spoken eloquently against ‘optimisation’ strategies as precisely those strategies, like ‘bean counter’ strategies in business, that result in collapse. The best is, truly, the enemy of the good insofar as we solve problems in multiple ways over time in situations where all the variables can never be understood; in other words, functional pragmatism is the way forward. Tools are amplifications of muscle, of intellect and of mind, and this last in particular is much exercising the philosophers interested in human enhancement. Amplification of minds will be the most challenging in terms of what it is to be a human being, but whatever may happen in the long term, intelligence is always a tool and not an end in itself. There is, however, a broader consensus emerging about technology and the current crisis. The crisis in our existing socio-economic system is no simple adjustment to technology but a massive shift closer to that caused by printing, only bigger. As we have seen in Italy this week, our parliamentary democracy is no longer fit for purpose, but neither are our legal structures and educational systems.

 

Our managerial and administrative class is out of its depth. Technology offers immense structural benefits in terms of ‘sustainability’ (more effective use of resources) and the delivery of freedom, knowledge and health, but the disruption of the huge vested interests on which so many depend could break apart social cohesion.

But most disturbing of all, perhaps, is that while some serious thought is going on about this in the US and China (thoughts that should perhaps scare us, but thoughts nevertheless), in Europe, and the UK in particular, our political class is sleepwalking into a future which may have no place for it. And how will we cope without our esteemed political class, if you will excuse a tinge of sarcasm?


This posting derives from a critical analysis of Dr. Peter Cochrane’s recent talk at the London Futurists. Needless to say, these are very much our views and not Cochrane’s, except where I have referred to him directly, and we take responsibility for any unintended misinterpretation. We have no connection to Cochrane except through membership of the London Futurists; you are referred to www.cochrane.org.uk and www.ca-global.org.

This article originally appeared here: http://tppr-analysis.blogspot.co.uk/2013/02/thinking-about-future-technology-crisis.html

For more information see also http://www.tppr.co.uk/