Humans need not apply: The economics of AI

An interesting series of discussions has appeared across the blogosphere over the last few days, focused on the question of economics in a future dominated by Artificial Intelligence. An article by Kevin Drum in Mother Jones magazine ignited the debate by claiming the human race would soon be out of work.

In his article – Welcome, Robot Overlords. Please Don’t Fire Us? – Drum put forward the case that we humans are increasingly forced to compete against machines designed to perform better and smarter than we can. The idea that machines may one day take over is not new, but the article argues that this trend has already started and that it is merely the blindness of statisticians and economists that has prevented it from being seriously discussed so far.

The economics community just hasn’t spent much time over the past couple of decades focusing on the effect that machine intelligence is likely to have on the labor market. Now is a particularly appropriate time to think about this question, because it was two centuries ago this year that 64 men were brought to trial in York, England. Their crime? They were skilled weavers who fought back against the rising tide of power looms they feared would put them out of work.

The crux of the article is that the trend toward human unemployment has already started and that, while we may discuss issues such as job training or access to opportunities, we are ultimately on a one-way path that can only lead to fewer and fewer jobs for humans.

There was general consensus that the rise of AI capable of doing human jobs was inevitable, but the proposed solutions varied. In his article for Forbes – Inequality In The Robot Future – Karl Smith writes that no matter what sophisticated machines are developed, there remains hope in the form of legal frameworks that recognize humans as fundamentally different from intelligent machines.

Birth is something that happens to a minority of beings who are special, flesh and blood humans.

An article in The Atlantic by Noah Smith – The End of Labor: How to Protect Workers From the Rise of Robots – pointed to the potential for inequality between the largely unemployed masses, with little hope of earning money, and the owners of the machines, who would have a virtual license to print money. He suggests a solution that would endow all humans with rights to the robotic means of production.

What if, when each citizen turns 18, the government bought him or her a diversified portfolio of equity? 

An article on the topic published in The Economist – In the long run, we are telepathic androids – echoed many of these concerns and suggested that the solution to the problem lay in a clear delineation of intelligence in a distributed system. While the article predicted that there was scope for humans to survive financially, it raised the question of what exactly separates a human from a machine in a world where brains and servers are super-connected.

The question of what the rise of artificial intelligence means for economies around the world has driven some vigorous debate, and that debate looks unlikely to recede, with many definitions and areas of legislation still to be explored.


Lochlan Bloom lives in London and does not have a cat or a dog. He is a writer of fiction and non-fiction and has completed recent projects for BBC Radio Scotland, H+ Magazine, Ironbox Films and Calliope, the official publication of the Writers’ Special Interest Group (SIG) of American Mensa, amongst others.

The BBC Writersroom describe his writing as ‘unsettling and compelling… vivid, taut and grimly effective work’. He currently has a feature length script in production with Porcelain Film. His novella, Trade, is out now.

For more details visit www.lochlanbloom.com.

24 Comments

  1. Norbert Wiener in 1950 said “Let us remember that the automatic machine [computer driven production machinery] …is the precise economic equivalent of slave labour. Any labour which competes with slave labour must accept the economic conditions of slave labour. It is perfectly clear that this will produce an unemployment situation in comparison with which …even the depression of the 1930’s will seem a pleasant joke”.

    We have recently announced HIE. Are you aware of these comments?

    http://www.kurzweilai.net/thought-experiment-build-a-supercomputer-replica-of-the-human-brain#comment-151460

    http://blog.limetreeonline.com/2013/05/07/human-intelligence-has-been-replicated-the-singularity-is-here-3963

    Apologies for the brevity, but I welcome comment, even skeptical comment.

    • Interesting stuff – only had a quick look just now but plan to read in more detail tomorrow.

      I think on the topic of slave labour the argument is valid to an extent but one of the questions we will have to face is how we define labour, employment and other related concepts.

      In general most people would want to be ’employed’ doing something useful with their days. This does not necessarily have to equate to industrial labour. If machines provide the basics sufficient for human life then our definitions may be forced to change. If robotic factories can produce what we need, it may be that we no longer need people whose job it is to work in factories, but that doesn’t necessarily mean people can’t be ’employed’ doing anything.

  2. Madness. Income inequality is a must for a functioning market! What everyone is missing is that increased automation of labour will lead to lower and lower prices.

    That, in turn, will enable a family on welfare to buy more and more for the money they receive.

    In the long term, what will happen naturally is that the distribution of the remaining jobs will change. People will switch from working 5 days per week to 4 days or 3 days, just like people switched from working 7 days per week in the Middle Ages to 5 days a week today, thanks to the industrial revolution.

    So there you have it. The remaining jobs might be fewer in total, as a percentage of the world population, but the compensatory factor is that people will transition to 3 or 4 day work weeks. And thus, there will be jobs for most people who want them.

    Controls and regulations lead straight to h*ll, and I think it is time we realized that, since it is political control and regulations that brought on the financial crisis.

    On the other hand, humanity does love to repeat its mistakes forever and ever, so let’s see if we can learn this time.

  3. I, for one, welcome our new robot overlords.

  4. The problem of AI, robots, and all technology is inequality. Though in the long term it becomes unemployment for many, the problem we face today is less about unemployment (not to downplay our current unemployment issues) and more about the ever growing levels of inequality enabled by advancing technology. Technology enables the best of the best to reach ever wider markets with their skills and their technology, and in so doing marginalize all their competition. Though a technology like the internet gives the small guy ever greater market reach, it also gives the best of the best ever greater market share. More people get to compete, but fewer are able to win big.

    The industrial revolution was highly disruptive to society as human and animal muscle power was displaced by steam, gasoline, and electricity. But we moved forward without too much disruption, other than a few major world wars and economic depressions that killed millions, because all humans still had something essential to sell to each other – our brain power. Because we have our brains to share with each other, we transformed from a world of laborers to a world of thinkers. But now we are about to lose that last power, and it means society is going to flip upside down in a way never before seen. Because most people have no clue this is happening, and little mental and emotional ability to cope with such a major paradigm change, it’s going to lead to massive amounts of social turbulence as mankind adjusts to the changes ahead.

    The economic problem is inequality – with an increasing minority controlling the wealth of the automated machine economy, as well as the bulk of natural resources, while billions are left out of the wealth. The economic solution is actually very easy in concept but very hard in practice, because it will be met with such distrust. The solution is a basic income guarantee for everyone in the world. Those lucky enough to have the skill, and the timing, to strike it rich must be forced to share some of the wealth with the rest of the world. The fix to economic inequality is to tax everyone fairly and share a percentage of the wealth with everyone evenly. It’s a simple small-government fix to all the problems created by AI and robots, both today and for the future.

    Everyone must understand that this is a HUMAN society, and that our contribution to society is nothing more than the fact that we are alive. We are a society of living humans, one of many animal species that live on this planet, and we have the power, and the need, to take care of each other.

    Our worth is not how strong we are, or how beautiful we are, or how rich we are, or how many people like us, or what we “do” in life. Our value in society is, and should be, nothing more and nothing less than the fact that we are human and we are alive. The one and only purpose of our society should be to take care of all the living humans – to make the lives of every human as full, and as happy, as we can.

    A Basic Income Guarantee for every human on the planet is the simple, and fair, way to implement this.

    • I agree on every point. However, I don’t believe that this change will be immediate in any way. I reckon that the future holds many promises, especially technologically, but we are light years from achieving true AI. So far, we haven’t invented a computer that actually changes itself (something that biological beings do regularly), a crucial part of creating a true artificial intelligence.

      Innovations in education, neuroscience, cognitive augmentation and biotech will almost certainly delay this, maybe even prevent the day from ever coming.

      “Normal” machines (machines designed for a singular task) will continue to displace human labour, though, and will in my opinion turn out to be a bigger problem in the coming decades.

      • Don’t unsupervised Machine Learning systems “change themselves”? I think they do.

        FWIW: I use such a system to produce the CLIO infographics and links that are posted here from time to time.
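        To make the point concrete, here is a minimal sketch of one sense in which an unsupervised learner “changes itself”: a generic online k-means toy in Python whose parameters shift with every new observation. It is illustrative only, not the actual CLIO pipeline.

        ```python
        import numpy as np

        # Illustrative online k-means: the model's parameters (the centroids)
        # drift with every new observation, with no human in the loop.
        rng = np.random.default_rng(0)
        k, dim = 3, 2
        centroids = rng.normal(size=(k, dim))  # initial guess
        counts = np.zeros(k)                   # observations seen per cluster

        def observe(x):
            """Assign x to its nearest centroid, then move that centroid toward x."""
            nearest = np.argmin(np.linalg.norm(centroids - x, axis=1))
            counts[nearest] += 1
            step = 1.0 / counts[nearest]       # decaying learning rate
            centroids[nearest] += step * (x - centroids[nearest])

        # Stream data: each point arrives and the model updates itself.
        for _ in range(1000):
            observe(rng.normal(size=dim) + rng.choice([-3.0, 0.0, 3.0]))

        print(centroids)  # cluster centres discovered without supervision
        ```

        Whether that kind of parameter self-adjustment counts as a computer “actually changing itself” in the stronger sense meant above is, of course, the open question.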

        • I agree with the point that there is scope for inequality, but I am not sure that the advent of intelligent machines will create any more need or desire within society for income sharing.

          Wealth is an inherently human invention and will remain so even if machines are doing work or controlling markets. Machines may be able to run organizations or amass money according to algorithms but the economy as we understand it ultimately stems from human demand.

          There are certain basic human needs – food, water, heat, etc – that will continue to drive economies, and these certainly can be controlled by machines and their owners, but this does not imply ongoing control. Imagine an agribusiness that can produce food more efficiently than human farmers: its human owner might set up a chain of farms entirely run by robotic harvesters that could till the land and produce enough food to feed a continent.

          The owner of these machines would get very wealthy, as they could produce the cheapest food. However, this would only hold as long as people had income to spend. Without income there may be desire for food but no economic demand, and the business would have to downsize. However efficient the business, it will always be cheaper for people to grow their own food for free, creating a cut-off point where even a highly efficient agribusiness cannot compete, as the toy sketch below illustrates.
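          A toy Python sketch of that cut-off (all numbers hypothetical): one unit is sold per consumer who can afford the price, so however cheap robotic production gets, revenue collapses once incomes fall below even a minimal price.

          ```python
          # Toy model: desire without income is not economic demand.
          def revenue(price, incomes):
              # One unit sold per consumer who can afford the price.
              return price * sum(1 for income in incomes if income >= price)

          price = 2.0                                 # ultra-cheap robotic food
          for avg_income in (100.0, 10.0, 1.0, 0.0):  # wages shrinking to zero
              consumers = [avg_income] * 1000
              print(f"income {avg_income:6.1f} -> revenue {revenue(price, consumers):7.1f}")
          ```

          Past that point the agribusiness, however efficient, has no market left to sell into.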

          The counter-argument may be that these humans would not have any land to grow their own food on, but this is a question of resources rather than technology. Throughout human history there has been struggle over access to resources, and in this respect technology is unlikely to change things. Human society has created numerous situations where only certain people could own land – e.g. serfs or servants have had limited rights as to the resources they could own.

          In a market controlled by machines, all we might expect is that efficiency will be higher, such that competition will drive prices lower at a greater speed than in a market run by humans. More abstract economies – say Hollywood or the music industry – can also ultimately be controlled by intelligent machines, but the motivation for the product still stems from people. If no one has any money then there may be desire but not economic demand.

          The argument seems to boil down to the claim that the people who own the intelligent machines will be the only ones well placed to have wealth while everyone else is left out in the cold, but the level of control is likely to be significantly less than that of, say, a medieval king. Such a king could make any arbitrary decision he wanted, whereas machines are more likely to follow protocols that have at least some rational basis.

          • Machines can be optimized for physical violence, too. When the machine controllers no longer need human servants, workers, guards, they can outsource violence to machines as well.

            In such a world, outside of charity, ownership of resources will be contingent on ability to control machine violence, not just machine productivity.

            • It could be argued that it would be easier to maintain control over a human army than robotic soldiers or weapons.

              In historical times entire nations were motivated to form armies to fight for abstract ideals on behalf of their leaders. Citizens could be conscripted to fight wars that enriched the aristocracy while the citizens themselves retained very few rights over land or wealth.

              In a future world where intelligent machines can fight each other or humans, the rules of battle are likely to be more rational at least. It is unlikely that robot warfare would lead to such a pointless waste of life as the 1915 Western Front, for example.

              Having said that, if machines were turned on human soldiers they may well give their owner an advantage. A brutal dictatorship could certainly be formed with the use of machine violence, but then again it wouldn’t be the first time in history.

              • It could be argued that it would be easier to maintain control over a human army than robotic soldiers or weapons.

                If true, then humanity will probably go extinct or become completely marginalized very rapidly. We are talking about intelligent machines, after all. There is no reason why the ultimate control to make choices could not end up in their (virtual) hands.

                Already, machines are consumers and producers (yes, machines do consume). Once they can plan strategically and use physical violence in its various forms, what means are left for humans to retain control?

                What reason is there to believe that, once even those abilities are best performed by machines, humans will not be killed off or marginalized to near extinction?

                • In the case where intelligent machines control the weaponry or machinery involved in meting out violence, we have to ask what the motivation for that violence is.

                  Humans – as beings with finite life spans – have definite motivations to gain power and resources whatever the cost within a finite time period. A dictator for example needs to gain power and retain it within the period of their own lifetime. Would intelligent machines have the same outlook? Would it make sense for an intelligent machine to drop a nuclear bomb or its equivalent and potentially damage its own environment for a long time into the future?

                  If machinery is controlled by humans we could expect the motivations to be similar to those we currently see in human warfare, but if warfare is instigated by intelligent machines themselves then there may be entirely different motivations.

                  • It doesn’t have to be as drastic as a nuclear bomb (even though it could be). It could be a slow process in which resources are bought up by entities that use them not to make more humans but to make more machines. Theoretically, all humans could die of old age, but the human-to-intelligent-machine ratio could marginalize humans (and their practical power to affect the future) very quickly.

                    Furthermore, there doesn’t have to be one singular machine motivation. There could be a radiation of machine minds, planning modules, quasi-emotional simulations and goal systems that mix and mingle, are copied and pasted, hacked and mutated, and so on.

                    Unless humanity plans all this meticulously from the outset, it would be extremely dumb luck if that Darwinian process didn’t create highly intelligent competition for humans – competition that places little to no intrinsic value on the mere existence or happiness of humans.

                    • I think that is certainly true. The biggest problem is that while certain human quality-of-life indicators – say access to xx kJ of nutritious food per person per day – could be built into a machine system, other more abstract terms are likely to be problematic.

                      It is clear that we ourselves have very little grasp on what constitutes happiness, for example – just look at the plethora of self-help books with wildly different, contradictory ideas. We might feel we know what makes us happy but the evidence does not suggest so – certainly from an empirical point of view. Therefore it is unlikely we could hardwire a respect for human happiness into any intelligent system from the outset.

                      What would a machine measure? What can we, as humans, measure to determine happiness?

                      As you say, whatever is not explicitly included will tend to be eroded in favour of greater efficiency or indeed any other aims that an intelligent machine decides are important.

                    • say access to xx kJ of nutritious food per person per day
                      Even if you do this well, for all machines, it can lead to human extinction if it is defined only for existing humans, and then the machines somehow end up blocking human reproduction. But of course this is just one example.

                      In theory, I think you could give an intelligent machine the instruction to “maximize the difference between happiness over suffering in quantity and duration” or some such. The algorithm may well be smart enough to extrapolate something meaningful from the definitions and scientific data. But any such attempt will leave out other human values (which could be a good thing, if you care only about a subset, but of course most people don’t).
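                      Just as a hypothetical formalization (the symbols are mine, not from any real system): with h_i(t) and s_i(t) the momentary happiness and suffering of individual i under the machine’s policy π, the instruction might read

                      \[ \max_{\pi} \sum_{i} \int_{0}^{T} \big( h_i^{\pi}(t) - s_i^{\pi}(t) \big) \, dt \]

                      where T is the time horizon and π ranges over the machine’s possible courses of action.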

                      But the core issue, imo, is that the transition will probably be chaotic and unplanned – humanity has never successfully planned something like this without error – and the long-term result may look very alien and unsatisfying to most of us (if we could see it).

                    • I think you could give an intelligent machine the instruction to “maximize the difference between happiness over suffering in quantity and duration”

                      I’m not convinced this could be done, even in theory. There is very little to suggest that happiness is something that can be quantified. We can get rough approximations but there is pretty much no baseline. Lack of food or sleep will generally reduce happiness very rapidly, but not always (the case of somebody willingly on hunger strike, for example). Equally, it is generally in the more comfortable Western world, where resources are easily obtained, that depression is more rife.

                      Furthermore, we cannot assume that a machine could just ask a person if they are happy, as societal structures will tend to distort answers. Take certain current work environments, for example, where admitting unhappiness or mental dissatisfaction is taken as weakness.

                      At core, I think, is the issue that no human on the planet really understands what makes them happy, and the glimpses they get are always in retrospect.

                    • I agree happiness is tricky and maybe impossible to quantify. But this doesn’t mean it’s a complete figment of our imagination either.

                      I conjecture that a truly intelligent algorithm could find scientific proxies that are very close to human happiness indicators. Human scientists certainly can; AI scientists may lack human intuitions but could probably make up for it in analytic skills. There could also be neuroscientific approaches; we already know that agony and pleasurable laughter look different in brain scans, etc. So at least approximations of non-zero value would plausibly be achievable, i.e. there is a difference between a maximizer who doesn’t care at all about what we typically express as moral values, and a maximizer who cares at least approximately. (Even though the latter approach could go awry in unforeseen catastrophic ways.)

                      Generally, I think people are way too optimistic about the post-scarcity/robot paradise idea.

              • Yes – it is certainly possible to get first-order approximations, but this may do more harm than good if the motivation is not correct.

                For example, there is a difference between a first-order approximation of something empirical and of an abstract concept.

                I can estimate the distance from Mumbai to New York, say it is more than 1 km and less than 500,000 km, and be confident this is correct (if somewhat vague).

                It is different, for example, to estimate which people are members of the communist party. The communist party is an abstract concept, and as such a first-order approximation may be worse than no knowledge at all. What’s worse, this ‘little knowledge’ can lead to totalitarian tendencies, as seen with McCarthyism or indeed any other movement to boost or eradicate an abstract concept held by a populace.

                • I agree that faulty motivations can be problematic – e.g. it would be great if sadists couldn’t tell who suffers and who doesn’t (it would practically neutralize their malicious intent).

                  But I think whenever we navigate complexity, such as in neuroscience, politics, but also geography, we are forced to use abstract concepts. If you think about it, even the “distance from New York to Mumbai” is an abstract concept. You have to define where the boundaries of the cities end, whether you count only land distance or direct distance between the points in space, and so on. This increases with complexity, but when motivations are good, I think approximative information is usually better than nothing.

                  Let’s say you have two methods to produce pork, one in which the pigs are boiled alive and one in which they are stunned, bled out and then boiled. We will never have perfect knowledge of what the animals feel, but if you have benevolent motivations, it is still better to have approximative models of their suffering than no models (consider flipping a coin as the alternative).

                  • I agree absolutely and I would argue that the ‘motivations’ that drive intelligent machines will mirror our own intentions in building them or their predecessors.

                    Any organism or indeed object is shaped by its environment. In the early stages of AI development it is humans that will to a large extent decide what the parameters of that ‘environment’ are.

                    It doesn’t take much to realise that if we invest in designing intelligent machines to fight then those machines will develop to become better and better at certain activities all designed to destroy their target. If we spend similar time developing machines to clean up pollution, cultivate food or write operas then those machines will grow to have different skills.

                    In a sense every generation of humans is another wave of intelligent machines that starts from scratch, and the values we imbue them with can clearly be seen to be a mix of good and bad – why should we expect any different from AI?

    • I like the emotional appeal of treating humans as an end rather than a means, but I can’t really share it consciously.

      First, it’s speciesism. If we discovered an alien race of intelligent, self-aware beings, would you treat them like cattle? I hope not. In fact, we are inflicting suffering on non-human animals by action and omission, and a moral society should stop that.

      Second, from a personal perspective, I can’t say I share the motivation to work for strangers just because they are human and they exist. Humans are highly inefficient in creating happiness per resource, they often fiercely hate each other and many of them would torture me just for fun if they secretly could, or maybe for a small personal advantage. Knowing this fact does not motivate me to share my wealth, or to call for universal sharing of wealth.
