The Corporatization of AI is a Major Threat to Humanity
By Ben Goertzel
This “editorial” article represents the personal views of the author, shaped by his 3 decades in the AI research community and 2 decades as an AI entrepreneur. It absolutely does NOT represent an official position of H+ Magazine or Humanity+.
I’ve been working in the AI field since the late 1980s, and hoo boy have things changed!
A joke I sometimes tell in conference talks is as follows: “5 or 10 years ago, when I told people I was working on building human-level thinking machines, people told me I was crazy. Now they just reply ‘Wait a minute, didn’t Google already do that last year? Why are you still working on it?'”
It’s not the best joke in the world, or even the best joke I’ve ever told, but it’s based soundly on reality. And it packs in two highly relevant facts about the present era:
1) Artificial General Intelligence (AGI) is increasingly taken seriously
2) AI is increasingly being corporatized — sucked into big companies, which in turn are sometimes closely cooperative with governments
The first of these facts is, I think, predominantly a Very Good Thing. AGI has tremendous potential to help people in so many different ways — to cure aging and disease, to enhance human intelligence and emotional satisfaction, to eliminate the need to work for a living, to open up unimaginable and exciting new frontiers.
The second of these facts I am not terribly thrilled with. In fact, I think it’s something we would do well to counteract as best we can.
Regarding the first fact — actually, for those of us who have been involved in the AI field for a long time, it’s incredibly striking how big a shift there has been. It wasn’t that long ago that the vision of human-level and transhuman AGI was purely the province of a few wild-eyed techno-seers and science fiction fanatics. Only a decade ago, when I first launched the AGI conference series, AGI was still quite the maverick pursuit. At that time, an AI professor with an interest in AGI needed to hide that interest in order to get tenure … they needed to pretend their main interest was instead in highly specialized algorithms and the solution of narrow domain-specific problems. Today most academic AI papers are still on highly specialized stuff, but it’s OK to talk about advanced AGI in the faculty seminar, not just at the bar after work when everyone is suitably hammered. At this point the world has almost forgotten that the AGI concept and dream was once so fully ostracized and marginalized.
And it’s not surprising or bad that, alongside this increasing acceptance of the viability of Artificial General Intelligence, there has recently evolved a dramatically accelerating interest in the commercial applications of AI … and even in the future commercial applications of AGI. This is largely for good reasons — due to advances in computer hardware and software and AI and cognitive science, AI software is now delivering dramatic business value across numerous industry sectors. Some of this is very visible to average people — like face recognition on Facebook or self-driving cars — and other applications are more back-end, like intelligent supply-chain management or fraud detection or machine learning in genomics for drug discovery. But the valuable practical applications are not in the dozens, they’re in the tens of thousands at least — and growing. There’s a fair bit of hype attached to AI in the business world today, but I see the situation as much like the Internet in the late 1990s. There was a lot of hype in the dot-com bubble, but underneath it all there was a fundamentally transformative technology.
What disturbs me a bit, though, is the way the commercial applications of AI are getting to be dominated by a fairly small set of large corporations — and the impact this is having on the directions and nature of AI research and applications.
Big Companies Are Increasingly Dominating the AI Field
Economies of scale exist throughout the tech business for fairly generic reasons; and they exist in AI for some additional, particular reasons as well. Contemporary AI technology is heavily based on training AI systems using large amounts of data, and this favors organizations that have been able to accumulate large amounts of data. There is an obvious increasing-returns phenomenon here: more data yields smarter AI applications, which gets more customers and more money, which gets more data, etc.
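To make this feedback loop concrete, here is a minimal toy simulation (in Python) of two hypothetical firms competing for newly generated customer data. The functional form and all the numbers are illustrative assumptions of mine, not a model of any real company; the point is simply that once model quality is assumed to rise faster than linearly with a firm’s share of accumulated data, even a modest initial data advantage snowballs.

```python
# Toy sketch of the increasing-returns loop: more data yields a better model,
# which wins more customers, which yields more data. All parameters and the
# super-linear "quality" assumption below are illustrative, not empirical.

def simulate(initial_data, years=10, new_data_per_year=1_000_000):
    """Each year the firms split the newly generated customer data in
    proportion to the square of their current data share (a stand-in for
    'better models attract disproportionately more users')."""
    data = list(initial_data)
    for _ in range(years):
        total = sum(data)
        weights = [(d / total) ** 2 for d in data]   # assumed super-linear advantage
        weight_sum = sum(weights)
        for i, w in enumerate(weights):
            data[i] += (w / weight_sum) * new_data_per_year
    return data

incumbent, newcomer = simulate([900_000, 100_000])
print(f"After 10 years: incumbent {incumbent:,.0f} data points, newcomer {newcomer:,.0f}")
```

With these made-up numbers the incumbent ends up holding the overwhelming majority of the accumulated data, which is exactly the kind of dynamic that favors the large data-holders discussed below.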
What we see in the tech world right now is a relatively small number of large companies gobbling up a very significant chunk of new AI PhDs and experienced AI developers. Furthermore, the default fate for an AI startup these days is to get bought by a big tech company. In effect, tech startups are serving as stealth recruiting tools for big companies, used to pull in young developers and researchers who aren’t particularly drawn to big-company careers. These young nerds sign up to work for some exciting startup, but then the startup inevitably gets sold to a big company, and they wind up cashing out to a small degree (unless they were founders or very early employees, in which case they may actually get rich) and stuck in the grip of a big-company job. Some escape and go back to some other sort of life; many stay in the big-company world. Some bounce back and forth between big companies and startups over and over.
We haven’t yet seen a lot of exciting fundamental new AI innovations come out of all this corporate money that’s gone into the AI field. The key algorithmic and conceptual innovations still seem to be coming out of universities, mostly in the US and Western Europe, some in Japan or Russia. China is doing a tremendous amount of AI R&D, but is still largely refining and applying concepts that originated, and were first applied, in the West. What we have seen coming out of big corporate AI, though, is a lot of impressive, elegant, scalable practical applications of AI methods that diverse academics have created.
The channeling of AI expertise into big corporations has a significant impact on what kinds of problems AI primarily gets applied to. Advertising, for example, gets an awful lot of attention. Cambridge Analytica’s relatively crude methods of social media engineering, applied to political campaigning, got a lot of press in the last US presidential election cycle. But Google, Facebook and Baidu (among others) have vastly more sophisticated manipulation machinery, which is used not to elect candidates but rather to direct people to buy products and services.
The amount of technical brilliance funneled into the application of AI to advertising — as opposed to for instance medical research or combating agricultural disease or improving early childhood education — would be shocking if it weren’t so firmly in line with the general spirit of modern society.
The connections between the corporate AI community and the more dubious aspects of government are also right there to see. It’s very obvious in China, where Baidu, Tencent and Alibaba don’t have any reason to hide their government connections. In the US we have Palantir, which is applying Silicon Valley methods — and the expertise of various Silicon Valley veterans — to optimize the IT infrastructure of US intelligence agencies.
Don’t get me wrong — I don’t mind that these companies exist and do their things. I just would rather they didn’t encompass such a large percentage of the AI community.
Government Regulation of AI Would Likely Benefit Large Corporations
I was a bit bemused to see Elon Musk, one of everyone’s favorite tech entrepreneurs, recently calling for government regulation of AI development.
I don’t doubt Musk’s sincerity — he is clearly worried about the potential downsides of advanced AI, and hoping that somehow the collective mind of humanity can come together to squelch baddies questing to deploy AI for destructive ends.
On the other hand, it also comes across as a bit questionable for someone who is a former tech advisor to President Trump, and the CEO of a sizeable company receiving significant government subsidies, to call for government regulation.
I mean — if the US government were to regulate AI, who do you think would get permission to develop AI? Perhaps companies run by folks with inside contacts in Washington? Perhaps companies already receiving lots of government subsidies?
In principle, the idea of the people of Earth coming together to choose what direction AI development should take obviously makes sense, just as the people should come together to determine how animals should be treated, what lands should be developed and what lands should be left wild, and so forth. Advanced AI development has sufficiently broad implications that, like nanotech and biotech and air and water pollution and climate change, it’s clearly a matter of public concern rather than something that can be left wholly to private individuals and organizations to deal with for themselves.
However, our current governmental systems have a very strong tendency to bend regulations toward whoever can pay for the best lobbyists. Look at the US’s financial regulations, which are substantially written by Wall Street firms to ensure their own benefit. Look at China’s Great Firewall, which serves a social regulation role and also ensures Baidu and Tencent and Alibaba (all firms with close government ties) their dominance over foreign competitors.
Can there really be much doubt that, if the government were to regulate AI, said regulations would get bent to favor those AI companies with the best-paid and smartest bands of lobbyists, and the best government connections? If the companies with the best government connections happened also to be the most ingenious and ethical firms, this wouldn’t be such a bad problem. But I see no reason to believe this is the case now, or will come to be the case in the near future.
So it’s not enough to say that we need some kind of government regulation of AI, as we move from AI toward AGI. Of course we do. But we need this in the context of a fundamentally less screwed-up sort of government. Putting the regulation of AI first, and the fundamental improvement of government after, is quite likely a recipe for disaster. It’s a recipe for putting the world’s most powerful technology in the hands of big companies that are focusing their efforts on things like advertising and optimizing government spy operations, which are surely not the species of human endeavor yielding maximum general benefit. It’s a recipe for using AI to increase global wealth and income inequality, and thus sowing suffering and conflict in the developing world.
The Risks of Global Wealth Inequality
Elon Musk, like Nick Bostrom and Eliezer Yudkowsky and others in the same community, seems especially concerned about the potential risks posed by massively superhuman AGIs once they appear. These risks are real and shouldn’t be ignored; I have given some views on this in a series of recent articles (see Infusing AIs with Humanlike Value Systems, and Superintelligence: Fears, Promises and Potentials). On the other hand, we should be at least equally worried about the risks posed by global wealth inequality and the anger and discontent it fosters.
As I’ve pointed out in a recent essay (How To Save The World), the developing world is home to an increasing number of tech geeks with solid science and engineering chops but a very justified sense of disenfranchisement from the modern world economy. This fact represents a large opportunity lost for humanity (everyone with a background in AI or mechatronics or biotech who ends up working in a low-tech job because of the country they were born in, is a loss for humanity’s progress toward a positive Singularity). And it also has all sorts of destructive potentials.
As technology advances, it becomes easier and easier for smaller and smaller groups of people to wreak more and more havoc. The reason more advanced-tech havoc is not sown in the world right now is that, by and large, the people with the most tech knowledge and chops don’t WANT to wreak havoc. But as there appear more and more marginalized science and tech geeks in the developing world — who can’t get visas to Silicon Valley, London or Beijing, and who get paid $4/hour on Upwork for doing the exact same work that someone in Silicon Valley gets paid $80/hour for — there may well appear more and more people with both the chops and the psychological/cultural/situational motivation to wreak havoc.
Government regulation that allows only well-connected corporations to create AI would not help mitigate this sort of risk. Rather, it would just cause wealth and opportunity to become more and more centralized and localized.
Some will argue that wealth inequality is just as it should be, because individuals of unequal ability and contribution deserve unequal compensation from society — just as individuals of varying ability may be able to reap varying rewards from the natural world. But still — it’s very hard to argue the morality or fairness of a new university graduate in, say, mechatronics earning 20x more if they happen to have been born in Los Angeles instead of Lalibela….
What we need, in order to avoid the immorality as well as the risk posed by increasing wealth inequality, is for AI to be developed in a way that
- focuses on tasks of deeper and broader importance to humanity, rather than tasks oriented to increase the differential wealth or status or military power of some particular social group
- includes a wide variety of humans around the world in its process of development.
At the current time, government regulation and close government control do not seem the best way to achieve these goals, because the major national governments in the world, and most of the minor ones, appear to be largely in the thrall of relatively small numbers of big corporations and high-net-worth individuals.
Open source software and hardware development seems to be one route toward achieving the above goals. Democratizing educational media such as open online courseware is another part of the story. I have discussed the importance of these and other decentralizing technologies in the essay I mentioned above, “How To Save The World.”
But the modern corporatocracy makes even intrinsically democratizing phenomena like open source complicated. Big tech companies now routinely release open source software, such as Google’s TensorFlow deep learning toolkit. TensorFlow is in many ways slicker and easier to use than comparable deep learning tools created by university teams — which is not surprising, as Google has used highly paid professionals to work on documentation, tutorials, user interfaces and other very useful bells and whistles that low-budget open source tools often lack. On the other hand, while TensorFlow is open source code, it is developed in a rather “closed” manner. Periodically Google emits fully formed code embodying new functionality into the open source TensorFlow codebase. The development process does not occur out in the open, as it does with classic open source projects. Google’s own developers and their managers get to make all the strategic decisions and core design choices — not random interested developers outside Google’s employ.
A cynic might argue that initiatives like Google’s TensorFlow, and Elon Musk’s OpenAI, are efforts to capture the open source AI community into the orbit of certain big companies. These corporate-based open source initiatives dangle shiny interfaces and slick APIs in front of open source AI developers, thus cajoling them to work on problems and tools of interest to the corporations involved. These companies then have a large population of potential new hires to choose from, already trained in applying software created by said companies to problems of practical interest to said companies.
Another hallmark of current big-company AI efforts is that ambitious management talk that touches on AGI is often coupled with technical work focused almost entirely on very narrow, application-specific problems. There are a few islands of genuine AGI-oriented work within big corporations — but not nearly as much as a non-expert might think, given the rhetoric bouncing around. IBM Watson is an expert system and a data mining framework, but from the marketing prose one might think it was a serious attempt to make a thinking machine. Google and Facebook and Baidu do have some fairly small teams doing directly AGI-ish work, but the vast majority of their AI staff are working on highly specific problems aimed at increasing corporate revenue.
The near disappearance of viable business models for journalism has seriously affected the ability of the general public to understand what kinds of progress big company AI divisions are and are not making. Since more and more people want to read news online now for reasons of convenience, and since most online ad revenue from news-reading individuals goes to big tech companies and not to news media companies, there is not much money around to pay journalists to carefully evaluate AI-related PR claims made by big companies. For instance, last year I read a bunch of incredibly glowing articles about Google’s revolutionary advances in machine translation, made via transfer learning between different languages. For a brief period I was almost snowed by the press-releases-cum-news-articles, and started to wonder if machine translation had been substantially solved. But a few examples sufficed to clarify that Google Translate’s English-Chinese translation remained pretty terrible. It even mangled “Long Live Chairman Mao”! But the average reader of the glowing articles probably would never have checked.
Of course media hype breeds more media hype. I myself have admittedly been complicit in various acts of media hype, in seeking wider attention for my own work on AI and robotics. But ultimately, while hyping technology can help out startups (especially if there is some reality underlying the hype, which has always been the case with my own tech publicity efforts), the big companies are almost always going to win due to their amply funded PR and marketing departments.
Some small companies hit the jackpot and become big companies. This is great and keeps the universe of big companies from becoming utterly stale. But even so, once a fresh and exciting startup becomes a megacorporation, it almost always becomes bureaucratized and starts acting a lot like the behemoths it displaced. Pete Townshend’s classic line “Meet the new boss, same as the old boss” applies in the tech-corporation world just as it does in politics.
Let’s Push For Free, Open and Beneficial Applications of AI and AGI
The world is complex and rapidly changing, and it’s not at all easy to predict which aspects of today’s society are going to seem positive, negative, ridiculous or beautiful in hindsight. On balance, though, it appears to me that the odds of a positive outcome for humanity — and for the creation of transhuman AGI minds that are positive according to human values — will be higher if we can nudge things so that advanced AI development is not heavily dominated by large corporations.
Big companies are part of our world and have contributed a lot as well as caused a lot of problems, and it’s unavoidable in context that they are going to play a big role in AI development. Big companies can do certain things really, really well; and some of these things are undeniably useful for AI.
However, big companies are not great at global inclusiveness; nor at increasing fairness and combating wealth inequality; nor at fostering creativity and imagination; nor at taking care of the Earth or the human psyche. To achieve these goals, we need our advanced AI development to be coupled with other forms of social organization.
Today’s “free and open source” software movement points in a promising direction — though obviously just open-sourcing of AI and AGI is not anywhere near a solution to the major challenges we face.
Applications of AI to domains of dramatic positive value such as medicine, education, elder-care and scientific research are also clearly pointing in the right direction.
I happen to be personally working on projects of this nature: an AI-based biomedical researcher’s assistant, an AI-powered teacher avatar for African children, and an AI-powered home health advisor for chronically ill elderly Chinese. And I can’t help mentioning here the “Loving AI” project I’m involved with, aimed at creating robots and avatars that will display and ultimately feel unconditional love toward humans.
But my own particular projects are not the main point here — the point is that if we want to increase the odds of a radically positive future, we want a large percentage of the world’s AI efforts put into this sort of project, not into advertising or killing or spying.
Also very positive are initiatives aimed at putting advanced AI on low-cost hardware (such as the Raspberry Pi and other embedded boards) that can be used throughout the developing as well as the developed world.
In general, in terms of near term courses of action, it seems to me that if we want to create broadly beneficial outcomes for human beings and other sentiences, we should be developing AGI and other advanced AIs in a way that
- is open-source, and also open-process … i.e. open-source the way Apache projects are rather than the way TensorFlow is
- makes efforts to draw individuals from every part of the world and every socioeconomic class and culture into the process of developing and deploying AI
- focuses AI development energy on applications of broadly positive value (rather than on applications aimed at differentially increasing the wealth or status of relatively small groups).
This attitude does not imply government regulation has no place. But it does oppose types of government regulation that give differential rights for AI development to large corporations with big lobbyist budgets and tight government connections.
We don’t need AI of the big corporations, by the big corporations and for the big corporations. We need AI of the people, by the people and for the people. This is the best way to increase the odds that, as people are joined on the planet by AGIs with equal and greater intelligence, these AGIs will be for the people as well as for the AGIs.