IA|AI – The Rise of Intelligence Amplification & Artificial Intelligence

[dropcap]I[/dropcap]t was recently my great pleasure to interview economist James Miller (who spoke at the Singularity Summit in 2008). We spoke about Intelligence Amplification and Artificial Intelligence and their economic impacts. James said:
[quote]Intelligence Amplification, either through getting smarter people or having really smart machines – it just seems overwhelmingly likely that this is going to be the dominant economic force once you get 20-30 years out into the future. Virtually no economists are studying it.[/quote]

James D. Miller, Ph.D., J.D. is Associate Professor of Economics at Smith College and writes the Game Theorist feature for TechCentralStation.com and the Random Walk Column for Better Investing Magazine. He teaches introductory microeconomics, introductory microeconomics with calculus applications, intermediate microeconomics, law and economics, game theory, and interpreting financial news.

James Miller’s presentation at the Singularity Summit in 2008

Below are the [highlight]videos of the interview[/highlight], and further below is the [highlight]introduction to his book ‘Singularity Rising’[/highlight].

Interview part 1

Interview part 2

Below is the introduction to James Miller’s book…

[heading]

Singularity Rising

[/heading]

Chapter 1

Introduction

Economic prosperity comes from human intelligence. Consider some of the earliest and most basic human inventions — the wheel, the alphabet, the printing press — and later, more complex and advanced inventions such as indoor plumbing, automobiles, radio, television and vaccines. All are products of the human brain. Had our species been a bit less bright, these inventions might have escaped us. Yet we can imagine the many additional wondrous technologies we might now possess had evolution made us even smarter.

In the past, human intelligence was a gift of natural selection, but no more. We are now using our intelligence to figure out ways of increasing our brain power.

The rapidly falling cost of gene sequencing will soon let us unlock the genetic basis of intelligence. Combining this knowledge with already existing fertility treatments will allow parents to raise the average intelligence of their children, while merging this genetic data with future reproductive techniques might yield children smarter than any who have ever existed. Even if democratic countries reject these biotechnologies, the historically pro-eugenic Chinese won’t. As I predicted, China has embarked on a program to identify some of the genes behind genius.

Artificial intelligence, or AI, offers an even more radical path to expanding the sum of intelligence available to mankind. Over the coming decades engineers may take advantage of continuing exponential improvements in computing hardware either to create stand-alone general-purpose machine intelligences or to integrate AI into our own brains.

Vast increases in biological and machine intelligence will create what’s being called the Singularity—a period in which AIs at least as smart as humans, augmented human intelligence, or both radically remake civilization.

A belief in a coming Singularity is slowly gaining credibility among the technological elite. As the New York Times wrote in 2010, “Some of Silicon Valley’s smartest and wealthiest people have embraced the Singularity.” These include two self-made billionaires: Peter Thiel, a financial backer of the Singularity Institute for Artificial Intelligence, and Larry Page, who helped found Singularity University. Peter Thiel was one of the founders of PayPal and used some of his money from its sale to eBay to become the key early investor in Facebook. Larry Page co-founded Google. Thiel and Page obtained their riches by successfully betting on technology. Famed physicist Stephen Hawking is so concerned about a bad Singularity-like event that he warned that computers might become so intelligent that they could “take over the world.” Hawking also told the President of the United States that “unless we have a totalitarian world order, someone will design improved humans somewhere.”

Five facts you’re already aware of support the likelihood of the Singularity:

1. Rocks exist!

Strange as it seems, the existence of rocks actually provides us with evidence that it should be possible to build computers powerful enough to take us to a Singularity. There are around ten trillion trillion atoms in a one-kilogram rock, and as inventor and leading Singularity scholar Ray Kurzweil writes:

[quote]Despite the apparent solidity of the object, the atoms are all in motion, sharing electrons back and forth, changing particles’ spins, and generating rapidly moving electromagnetic fields. All of this activity represents computing, even if not very meaningfully organized.[/quote]

If the particles in the rock were organized in a more “purposeful manner” it would be possible to create a computer trillions of times more computationally powerful than all the human brains on earth combined. Our eventual capacity to accomplish this is established by our second fact.

2. Biological cells exist!

The human body makes use of tiny biological machines to create and repair cells. Once mankind masters this nanotechnology we will be able to cheaply create powerful molecular computers. Our third fact proves that these computers could be turned into general purpose thinking machines.

3. Human brains exist!

Suppose this book claimed that scientists would soon build a human teleportation device. Given that many past predictions of scientific miracles—like cheap fusion power, flying cars or a cure for cancer—have come up short, you would rightly be suspicious of my teleportation prediction. But my credibility would jump if I discovered a species of apes that had the inborn ability to instantly transport themselves across great distances.

In some alternate universe with different laws of physics, it’s perfectly possible that intelligent machines couldn’t be created. But human brains provide absolute proof that our universe allows the construction of intelligent, self-aware machines. And because the brain already exists, scientists can probe, dissect, scan, and interrogate it. We’re even beginning to understand the brain’s DNA- and protein-based ‘source code’. Also, many of the tools used to study the brain have been growing exponentially more powerful, which explains why engineers might be only a couple of decades away from building a working digital model of the brain, even though today we seem far from understanding all of the brain’s operations. Would-be creators of AI are already using neuroscience research to help them create machine-learning software. Our fourth fact shows the fantastic potential of AI.

4. Albert Einstein existed!

It’s extremely unlikely that the chaotic forces of evolution just happened to stumble on the best possible recipe for intelligence when they created our brains, especially since our brains have many constraints imposed on them by biology: they must run on energy obtained from mere food; must fit in a small space; and can’t use materials, such as metals and plastics, that engineers employ all the time.

We share about 98% of our genes with some primates, but that 2% difference was enough to produce creatures that can assemble spaceships, sequence genes, and build hydrogen bombs. What happens when mankind takes its next step, and births lifeforms who have a 2% genetic distance from us?

But even if people such as Albert Einstein and his almost-as-theoretically-brilliant contemporary John von Neumann had close to the highest possible level of intelligence allowed by the laws of physics, creating a few million people or machines possessing these men’s brain power would still change the world far more than the Industrial Revolution did. To understand why, let me tell you a bit about von Neumann:

Although a fantastic scientist, a path-breaking economist, and one of the best mathematicians of the twentieth century, von Neumann also possessed fierce practical skills. He was, arguably, the creator of the modern digital computer. The computer architecture he developed, now called “von Neumann architecture”, lies at the heart of most computers. Von Neumann’s brains took him to the centers of corporate power, and he did high-level consulting work for many private businesses, including Standard Oil, whom he helped extract more resources from dried-out wells. Johnny was described as having “the invaluable faculty of being able to take the most difficult problem, separate it into its components, whereupon everything looked brilliantly simple, and all of us wondered why we had not been able to see through to the answer as clearly as he.”

During the Second World War he became the world’s leading expert on explosives and used this talent to help build better conventional bombs, thwart German sea mines, determine the optimal altitude for airborne detonations, and assist in the development of fission bombs. Johnny functioned as a human computational device at the Manhattan Project. Whereas atomic weapon developers today use computers to decipher the many mathematical equations that challenge their trade, the Manhattan Project’s scientists had to rely on human intellect alone. Fortunately for them (although not for the Japanese) they had access to Johnny, perhaps the best person on earth at doing mathematical operations quickly.

Unlike many scientists, Johnny had tremendous people skills, and he put them to use after the Second World War when he coordinated American defense policy among nuclear weapons scientists and the military. Johnny became an especially important advisor to President Eisenhower, and for a while he was “clearly the dominant advisory figure in nuclear missilery.”

Johnny developed a reputation as an advocate of “first strike” attack and preemptive war because he argued that the United States should have tried to stop the Soviet Union from occupying Eastern Europe. When critics pointed out that such resistance might have caused a war Johnny said, “If we are going to have to risk war, it will be better to risk it while we have the A-bomb and they don’t.”

After Stalin acquired atomic weapons, Johnny helped put in place incentives to prevent Stalin from wanting to start another war. By the atomic age, Stalin had demonstrated through his purges and terror campaigns that he placed little value on the lives of ordinary Russians. Von Neumann made Stalin unwilling to risk war by shaping U.S. weapons policy, in part by pushing the United States to develop hydrogen bombs, so that Stalin knew that the only human life he actually valued would almost certainly perish in World War III.

Johnny helped develop a super weapon, played a key role in integrating it into his nation’s military, advocated that it be used, and then made sure that his nation’s enemies knew that in a nuclear war they would be personally struck by this super weapon. John von Neumann could reasonably be considered the most powerful weapon ever to rest on American soil.

Consider the strategic implications if the Chinese high tech sector and military acquired a million computers with the brilliance of John von Neumann, or even if through genetic manipulations they produced a few thousand von Neumann-ish minds every year. Contemplate how many resources the United States military would pour into artificial intelligence if it thought that a multitude of digital or biological von Neumanns would someday power the Chinese economy and military. The economic and martial advantages of having a von Neumann or above level intellect are so enormous that if it proves practical to mass produce them, they will be mass produced. A biographer of John von Neumann wrote “The cheapest way to make the world richer would be to get lots of his like.” A world with a million Johnnies, cooperating and competing with each other, has a decent chance of giving us something spectacular, beyond what even science fiction authors can imagine, at least if mankind survives the experience. Von Neumann’s existence highlights the tremendous variance in human intelligence and so illuminates the minimum potential gains of raising a new generation’s intelligence to the maximum of what our species’ phenotype can sustain.

John von Neumann and a few other Hungarian scientists who immigrated to the United States were jokingly called “Martians” because of their strange accents and seemingly superhuman intelligences. If von Neumann really did have an extraterrestrial parent, whose genes arose, say, out of an advanced eugenics program that Earth couldn’t hope to replicate for a million years, then I wouldn’t infer from his existence that we could get many of him. But since von Neumann was (almost certainly) human we have a good chance of making a lot more Johnnies.

One Possible Path To the Singularity: Lots of von Neumann-level minds

Before he died in 1957, von Neumann foresaw the possibility of a Singularity. We know this because, in reference to a conversation he had with von Neumann, mathematician Stanislaw Ulam wrote:
[quote]One conversation centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.[/quote]

Von Neumann, not a modest man, knew what he could accomplish compared to what the average mortal was capable of. I bet that when contemplating the future destiny of mankind, von Neumann tried to think through what would happen if machines even smarter than he started shaping our species’ affairs, which leads us to our fifth and final fact in support of a Singularity.

5. If we were smarter, we would be smarter!

Becoming smarter enhances our ability to do everything, including our ability to figure out ways of becoming even smarter, because our intelligence is a reflective superpower, able to turn on itself to decipher its own workings. Consider, for example, a college student taking a focus-improving drug such as Adderall, Ritalin, or Provigil to help her learn genetics. After graduation, this student might get a job researching the genetic basis of human intelligence, and her work might assist pharmaceutical companies in making better cognitive-enhancing drugs that will help future students acquire an even deeper understanding of genetics. Smarter scientists could invent ways of making even smarter scientists, who could in turn… Now throw the power of machine intelligence into this positive feedback loop, and we will end up at technological heights beyond our imagination.

Further Implications of a Singularity

From the time of Alexander the Great down to that of George Washington, the lot of the average person didn’t change much, because there was little economic growth. But shortly after Washington’s death, an industrial revolution swept England that married science to business. The Industrial Revolution was the most important event since the invention of agriculture because it created sustained economic growth arising from innovation—the creation of new and improved goods and services. Innovation, and therefore economic growth, comes from human brains.

Think of our economy as a car. Before the Industrial Revolution, the car was as likely to move backward as forward. The Industrial Revolution gave us an engine powered by human brains. Technologies that increase human intelligence could supercharge this engine. Beyond-human-level AI could move our economy out of its human-brain-powered car and into an AI-propelled rocket. An ultra-intelligent AI might even be able to push our economy through a wormhole, and we would end up God knows where.

Let me tell you a story, one that might soon come true, and that should put the importance of the Singularity beyond all doubt:

Imagine it’s the year 2029, and Intel has just made the most significant technological breakthrough in human history: The corporate tech giant has developed an AI that does independent scientific research. Over the past month the program wrote an article on computer design describing how to marginally improve computer performance.

What, you might ask, is so spectacular about this program? It wasn’t the superiority of the article it produced, because a human scientist would have taken only a month to do work of equivalent quality. The program, therefore, is merely as good as one of the many scientists Intel employs. Yet because the program succeeded in independently accomplishing the work of a single scientist the program’s designers believe that within a couple of decades technological progress will proceed at least 1 million times faster than it does today!

Intel scientists have such tremendous hope for their program because of Moore’s Law. Moore’s Law, a pillar of this book, was formulated by Intel co-founder Gordon Moore, and has an excellent track record of predicting computer performance. Moore’s Law implies that the quantity of computing power you can buy, for a given amount of money, doubles about every year. Repeated doubling makes things very big, very fast. Twenty doublings yield about a million-fold increase.
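The doubling arithmetic above can be sketched in a few lines of Python (an illustration of the calculation only; the function name and parameters are my own, not anything from the book):

```python
# Sketch of the repeated-doubling arithmetic behind Moore's Law as stated
# here: computing power per dollar doubles about every year. The function
# and its parameters are hypothetical, chosen just for this illustration.

def computing_power(initial: float, years: float, doubling_period: float = 1.0) -> float:
    """Computing power per dollar after `years`, doubling every `doubling_period` years."""
    return initial * 2 ** (years / doubling_period)

# Twenty annual doublings yield about a million-fold increase:
print(computing_power(1.0, 20))  # 1048576.0, i.e. 2**20
```

On these assumptions, a program that today does the work of one scientist, run on hardware a million times faster, does the work of about a million scientists, which is the jump the Intel story turns on.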

Let’s imagine that Intel’s AI program runs on a $1 million computer. Because of Moore’s Law, in twenty years a $1 million computer would run the program a million times faster. This program, remember, currently does the work of one scientist. So if the program is running on a computer one million times faster, it will accomplish the work of a million human scientists.

And of course in twenty years other businesses would eagerly use Intel’s program. A pharmaceutical company, for example, might buy a thousand copies of the program to replace a thousand researchers and make as much progress in one year as a thousand scientists would in a million years, and this doesn’t even include the enhancements the AIs would garner from improved software.

If Intel really does create this human-level AI program in 2029, then humans may well achieve immortality by 2049. Because of this, I sometimes end my economics classes at Smith College by telling students that if civilization doesn’t collapse, they probably won’t die. Intel’s breakthrough, unfortunately, wouldn’t necessarily go well for mankind, because to do stuff, you need stuff.

Regardless of your intelligence, to achieve anything you must use resources, and the more you want to do, the more resources you need. AI technologies would, at first, increase the resources available to mankind, and the AIs themselves would gain resources from trading with us. But a sufficiently smart AI could accomplish anything present-day people can do at much lower cost. If the ultra-AIs are friendly, or if we upgrade ourselves to merge with them, then these machine intelligences will probably bring us utopia. If, however, the ultra-AIs view mankind the way most people view apes, with neither love nor malice but rather indifference, then they will take our resources for their own projects, leaving us for dead.

Why Read a Singularity Book By an Economist?

I hope that I have convinced you that learning about intelligence enhancement is well worth your time. But why should you read this particular book, given that its author is an economist rather than a scientist or engineer? One reason is that I will use economic analysis to predict how probable changes in technology will affect society. For example, the theories of nineteenth-century economists David Ricardo and Thomas Malthus provide insights into, respectively, whether robots might take all of our jobs and why the creation of easy-to-copy emulations of human brains might throw mankind back into a horrible pre-Industrial Revolution trap. Economics also sheds light on many less significant effects of an advanced AI, such as the labor-market consequences if sexbots cause many men to forgo competing for flesh-and-blood women.

Furthermore, the economics of financial markets show how stock prices will change on our road to the Singularity. The economics of game theory elucidate how conflict will affect the Singularity-inducing choices that militaries will make. The economic construction called the “Prisoners’ Dilemma” establishes that rational, non-evil people might find it in their self-interest to risk propelling mankind into a Singularity even if they knew that it had a high chance of annihilating mankind. Robin Hanson, one of the most influential Singularity thinkers, is an economist. Science shows us the possibilities; economic forces determine the possibilities we achieve.
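The Prisoners' Dilemma logic invoked above can be made concrete with a small sketch. The framing of two labs choosing whether to race toward risky AI, and the payoff numbers themselves, are my own illustration of the standard game, not anything taken from the book:

```python
# A minimal Prisoners' Dilemma sketch: two labs each choose to "race" toward
# risky AI or "hold" back. Racing is each lab's dominant strategy even though
# both would be better off if both held back. Payoff numbers are illustrative.

payoffs = {  # (lab_a_choice, lab_b_choice): (lab_a_payoff, lab_b_payoff)
    ("hold", "hold"): (3, 3),
    ("hold", "race"): (0, 5),
    ("race", "hold"): (5, 0),
    ("race", "race"): (1, 1),
}

def best_response(options, other_choice, player):
    """The choice maximizing this player's payoff, given the other's choice."""
    def my_payoff(choice):
        pair = (choice, other_choice) if player == 0 else (other_choice, choice)
        return payoffs[pair][player]
    return max(options, key=my_payoff)

# Whatever the other lab does, racing pays more for both players,
# so mutual racing is the equilibrium despite being worse for everyone:
for other in ("hold", "race"):
    assert best_response(("hold", "race"), other, player=0) == "race"
    assert best_response(("hold", "race"), other, player=1) == "race"
```

The equilibrium outcome (1, 1) is worse for both labs than mutual restraint (3, 3), which is exactly the sense in which rational, non-evil actors might find it in their self-interest to risk a dangerous Singularity.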

But despite my understanding of economics I admit that sometimes I get confused when thinking about a Singularity civilization. In the most important essay ever written on the Singularity, Vernor Vinge, a science fiction writer and former computer scientist, explained that accelerating technology was making it much harder for him to write science fiction because as we progress towards the Singularity “even the most radical will quickly become commonplace.” Vinge has said that just as our models of physics break down when they try to understand what goes on at the Singularity of a black hole, so might our models fail to predict what happens in a smarter world. Vinge told me that in the absence of some large-scale catastrophe, he would be surprised if there isn’t a Singularity by 2030.

A father can much better predict how his six-year-old will behave in kindergarten than the child can predict what the father will do at work. If the future will be shaped by the actions of people much smarter than us, then to predict it we must know how people considerably brighter than we are will behave. While challenging, economics might make this possible. A high proportion of economic theory is premised on the assumption that individuals are rational. If decision makers in the future, human or otherwise, are smarter and more rational than we are, then economic theory might actually describe their behavior better than it describes our own.

Former United States Secretary of Defense Donald Rumsfeld famously said:
[quote][T]here are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns—the ones we don’t know we don’t know.[/quote] The Singularity will undoubtedly deliver many unknown unknowns. But economics still has value in estimating how unknown unknowns will affect society. Much of economic behavior in capitalist countries is based on an expectation that property rights will be valuable and respected in the future. If you’re saving for a retirement that you don’t expect will start for at least another 30 years, you must believe that in 30 years money will still have value, your investments won’t have been appropriated, and our solar system will still be inhabitable. But these three conditions simultaneously hold only under extremely special circumstances, and for only a minuscule percentage of all the possible arrangements of society and configurations of molecules in our solar system. The greater the number of unknown unknowns you expect to occur, the less you should save for retirement and the fewer investments businesspeople should make, showing that expectations of certain types of Singularity will damage the economy.

We don’t know how mankind will increase its available intelligence but, as I will show, there are so many paths to doing this and such incredible economic and military benefits of intelligence enhancements that we will almost certainly enter a smarter world. This book will serve as your guide to this world. We will discuss the economic forces that will drive intelligence enhancements and consider how intelligence enhancements will impact economic forces. Along the way you should pick up some helpful advice for how to live in the age of the coming Singularity.

This book has one recommendation that, if followed, could radically improve your life. It’s a concrete, actionable recommendation, not something like “seek harmony through becoming one with Creation.” But the recommendation is so shocking, so seemingly absurd that if I tell you now without giving you sufficient background you might stop reading.

References
Vinge (1993).
Miller (2007).
Frank (2001).
Vance (2010).
Walsh (2001); Hawking (2000).
Kurzweil (2005), p. 131.
Kurzweil (2005), p. 131.
Legg (2010).
See Elango (2005). Sentence structure similar to a sentence in Anissimov (2007).
Macrae (1992), p. 4.
Macrae (1992), p. 309.
Macrae (1992), p. 334.
Macrae (1992), p. 29. Quoting Lewis Strauss.
Macrae (1992), pp. 208–209.
Macrae (1992), p. 235.
Macrae (1992), p. 357.
Macrae (1992), p. 332.
Macrae (1992), p. 28.
Macrae (1992), p. 3.
Hargittai (2008).
Ulam (1958).
Kurzweil (2005), p. 41.
SIAI (2011).
SIAI, “What is the Singularity?” http://singinst.org/overview/whatistheSingularity
Vinge (2010).
Rumsfeld (2002).

Singularity Rising: Surviving and Thriving in a Smarter, Richer, and More Dangerous World – on Amazon

Reviews:

[quote cite="Luke Muehlhauser, Executive Director, MIRI" url="http://intelligence.org"]Many books are fun and interesting, but Singularity Rising is fun and interesting while focusing on some of the most important pieces of humanity’s most important problem.[/quote] [quote cite="Vernor Vinge, computer scientist, Hugo Award-winning author" url="http://www-rohan.sdsu.edu/faculty/vinge/misc/singularity.html"]There are things in this book that could mess with your head.[/quote] [quote cite="Peter Thiel, self-made technology billionaire and co-founder of the Singularity Summit"]The arrow of progress may kick upwards into a booming curve or it may terminate in an existential zero. What it will not do is carry on as before. With great insight and forethought, Miller’s Singularity Rising prepares us for the forking paths ahead by teasing out the consequences of an artificial intelligence explosion and by staking red flags on the important technological problems of the next three decades.[/quote] [quote cite="Aubrey de Grey, leading biomedical gerontologist and former AI researcher" url="http://sens.org"]We’ve waited too long for a thorough, articulate, general-audience account of modern thinking on exponentially increasing machine intelligence and its risks and rewards for humanity. Miller provides exactly that, and I hope and expect that his book will greatly raise the quality of debate and research in this critical area.[/quote] [box title="Video Interviews"]For more video interviews please Subscribe to Adam Ford’s YouTube Channel

[/box]