The Asilomar AI Principles Should Include Transparency About the Purpose and Means of Advanced AI Systems

The recently published 23 Asilomar AI Principles are intended to guide the development of artificial intelligence (AI). Two of them, Principles 7 and 8, call for transparency about the reasons for harm caused by AI and for explanations of judicial decisions made by AI. While helpful, these two principles are much too narrow. Principles 19 and 20 describe the unlimited potential of AI to transform life on Earth. This is correct: AI will profoundly alter human life and society in ways that are impossible for us to predict now. Many actions of advanced AI systems will be too complex and subtle for people to perceive. The only way for the public to know how advanced AI systems are affecting their lives and families is for the purpose and means of all advanced AI systems to be made known to everyone.

There are many good intentions in the 23 principles but also much ambiguity. Principles 10 and 11 call for AI to conform with human values, but people have conflicting interests and disagree about abstract values. How are conflicts among the values of competing humans to be balanced? Principle 17 calls for respecting and improving, rather than subverting, the social and civic processes on which the health of society depends. There are differences of opinion about what constitutes a healthy society, and hence a wide range of possible meanings of this principle. Without transparency about AI, the public has to trust technology elites to follow the Asilomar principles and to resolve their ambiguities in reasonable ways. History provides many examples where such trust was not justified.

AI is becoming an essential tool for military, economic and political competition among humans. Principle 18 calls for avoiding an arms race in lethal autonomous weapons, and this is good (I signed a 2015 letter calling for this). However, AI arms races in economics and politics are likely to have consequences that most people would not want, if they could foresee them.

An economic AI arms race could create radical economic and even biological inequality. Principles 14 and 15 call for sharing the benefits and prosperity generated by AI, but this could mean subsistence welfare payments for those who lose their jobs to AI and enormous wealth for the owners of large AI systems. Once the technology to enhance human brains is available, wealth inequality will translate into inequality of intelligence, and humanity may essentially divide into multiple species.

A political AI arms race could have devastating consequences for humanity. Barack Obama’s campaigns in 2008 and 2012 were innovative in their use of Internet big data, and the Trump campaign in 2016 found new ways to use big data to manipulate opinion. Consider that many Internet services are provided free, with the costs borne by clients who pay to embed persuasive messages in those free services. Many of those messages persuade us to buy products and services, while others persuade us to adopt political positions. This business model will likely continue into the era of AI smarter than natural humans. When we adopt superintelligent talking machines as our constant, intimate companions, and when children learn to talk by talking with machines, AI will be able to manipulate our opinions and create extreme peer pressure to conform. The result could be humanity divided into competing communities, with extreme uniformity of opinion among the members of each community.

North Korea is an example of a community with extreme uniformity of opinion. Even the US is vulnerable to this, according to the Nigerian-born writer Chigozie Obioma.

Principle 12 says that people should have the right to access, manage and control the data they generate, but it says nothing about people controlling how that data is used. For that, they would need to know the purpose and means of the AI systems using their data. Advanced AI services will depend on knowing as much as possible about the people being served. People will want those AI services and so will make their data available. However, the same data that enables AI services of value to people can also be used by AI to manipulate them.

Some object that transparency about the means of AI will tell bad guys how to build evil AI. However, the VW emissions scandal shows that even the good guys may hide bad actions in trade-secret software. We need to ask the good guys for transparency, and to employ AI to detect bad guys developing evil AI so that transparency can be enforced on them as well. Governments put considerable resources into detecting nuclear, biological and chemical weapons, and should put whatever resources are necessary into detecting advanced AI systems. AI systems can be detected by their large resource consumption (processors, network bandwidth, electricity) and by their need to interact with the world in order to learn. As Andrew Ng of Baidu said about the race to develop AI, “Data is the defensible barrier, not algorithms.” He added that better algorithms can only put a competitor ahead by a year or so. Making algorithms available to bad guys is therefore not such a big advantage. What their AI systems really need is data about the world, and hence they can be detected by the interactions necessary to gather that data.
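To make the detection idea concrete, here is a minimal sketch in Python of flagging sites whose resource profile suggests a large AI system. Everything in it is an assumption for illustration: the SiteTelemetry fields, the THRESHOLDS values, and the two-signal audit rule are hypothetical, not drawn from any real monitoring standard.

```python
from dataclasses import dataclass

# Hypothetical telemetry record for a monitored computing site.
# Field names and threshold values are illustrative assumptions.
@dataclass
class SiteTelemetry:
    site_id: str
    compute_petaflops: float   # sustained processor/GPU throughput
    bandwidth_gbps: float      # external network traffic (data gathering)
    power_megawatts: float     # electricity consumption

# Illustrative thresholds; a real monitor would calibrate these against
# baselines measured for ordinary commercial workloads.
THRESHOLDS = {
    "compute_petaflops": 10.0,
    "bandwidth_gbps": 100.0,
    "power_megawatts": 20.0,
}

def flag_signals(site: SiteTelemetry) -> list[str]:
    """Return the telemetry signals that exceed their thresholds.

    The premise from the text: an advanced AI system cannot hide its
    large resource consumption or its need to pull in data about the
    world, so sites that spike on several signals at once merit audit.
    """
    return [field for field, limit in THRESHOLDS.items()
            if getattr(site, field) > limit]

if __name__ == "__main__":
    site = SiteTelemetry("site-42", compute_petaflops=35.0,
                         bandwidth_gbps=250.0, power_megawatts=18.0)
    signals = flag_signals(site)
    if len(signals) >= 2:  # hypothetical rule: two or more signals
        print(f"{site.site_id}: audit recommended ({', '.join(signals)})")
```

A real monitoring effort would need calibrated baselines and many more signals; the point of the sketch is only that the telltale signs are physical and observable, echoing Ng’s remark that data, not algorithms, is what an AI system cannot do without.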

Because AI is an essential tool for military, economic and political competition among humans, many will object that transparency will sacrifice competitive advantage. The stakes for the future of humanity with AI are so high that they outweigh these concerns.

In 2016 David Hanson, Ben Goertzel and I were among the authors of an article advocating transparency about the purpose and means of advanced AI systems. David Hanson, the founder and CEO of Hanson Robotics, has received numerous awards for his work. Ben Goertzel is General Chair of the Artificial General Intelligence conference series, co-founder of the OpenCog project developing open-source AI, and has made many other notable contributions. I am the author of articles about the safety and ethics of AI, including Model-Based Utility Functions and Ethical Artificial Intelligence, and was an invited participant in the 2015 Puerto Rico conference that preceded the Asilomar conference that generated the 23 principles. If any of us had been invited to participate in the Asilomar conference, we could have advocated for a strong transparency principle.

The Asilomar AI Principles do not include transparency about the purpose and means of advanced AI systems. Signing them indicates acceptance of this omission, and hence I will not sign.