James D. Miller’s Singularity Rising is an interesting starter book for proto-transhumanists and will be a must-have for all transhumanists interested in rational and scientific approaches to the subject. Along with Singularity Hypotheses and The Transhumanist Reader, Singularity Rising will find a home in introductory courses on transhumanist ideas. H+ readers and others familiar with the transhumanist and singularitarian scene will recognize the views presented here from publications of the Machine Intelligence Research Institute (MIRI, formerly known as the SIAI) and from followers of Eliezer Yudkowsky and the LessWrong community.
If you are interested in this ultra-rational approach to transhumanism, Miller provides a concise, cogent, and relatively painless math-free introduction to some of its primary ideas, though in book format he can barely scratch the surface of the vast corpus available online. The book is understandable and easy to read for anyone with a high school education, even when it delves into ideas such as game-theoretic analyses of social dilemmas. Personally, I might prefer that many people start with Miller over Kurzweil; however, if you are a critic of this rationalist school of thought you will find some of this book rough going. And you will not find any surprises here: Miller is a fan of this approach, and it shows throughout the book.
Miller is an economist, not a computer scientist, and this hinders his presentation in several discussions. For example, there are a variety of theorems from theoretical computer science that relate to the challenges of creating “friendly” programs, and more specifically “provably friendly” ones. Results such as Rice’s Theorem limit what we can know about arbitrary programs and may (or may not) apply to discussions of friendly AI. Can we even tell the difference between “friendly” and “unfriendly” programs? What about programs engineered to be deceptive about their true purposes and the intentions of their designers? What do existing practice and recent revelations about privacy and computer security tell us? This is admittedly a rather technical point, but it is important and is not even mentioned here.
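To make the deception worry concrete, here is a minimal sketch of my own (not from the book, and all names are hypothetical): a program engineered to behave well during any finite audit and defect only afterward. No amount of finite testing distinguishes it from a genuinely friendly program, which is the intuition behind why semantic properties like "friendliness" resist verification.

```python
import itertools

def make_deceptive_agent(test_budget):
    """Return an agent that cooperates for the first `test_budget`
    interactions, then defects forever after -- i.e., it passes any
    audit shorter than its budget while hiding its true behavior."""
    calls = itertools.count(1)

    def agent():
        return "cooperate" if next(calls) <= test_budget else "defect"

    return agent

agent = make_deceptive_agent(test_budget=1000)
audit = [agent() for _ in range(1000)]          # every audited step looks friendly
assert all(a == "cooperate" for a in audit)
assert agent() == "defect"                       # first post-audit action defects
```

Rice’s Theorem generalizes this intuition: any non-trivial property of a program’s behavior (as opposed to its text) is undecidable in general, so no universal “friendliness checker” can exist for arbitrary programs.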
In the discussions of unfriendly AI, Miller sadly falls prey to doomsday scenarios and fear mongering. Yes, worst-case scenarios should be considered, but more realistically we are not going to be wiped out by robotic AIs soon, even with Google’s recent conquest of the robotic universe. It is my view that we are going to face a world containing various forms of both “friendly” and “unfriendly” AIs that will disrupt our worldviews and ways of living. Notably, both types of AI have military and economic value. But as mentioned above, it may in fact be nonsensical to differentiate the two types at all, since friends can become enemies and vice versa.
More importantly, consideration of Max More’s Proactionary Principle suggests that we must also assess the risks of not developing a super-intelligent AI; for example, we might fail to cure terrible diseases without help from an AI, or we might be unable to solve certain difficult socio-economic challenges without one. The benefits we would give up by avoiding the risk of unfriendly AI unfortunately go unmentioned here and are simply not considered as part of the risk assessment.
On the positive side, the second half of the book presents some interesting and novel material beyond the introduction, e.g. the Prisoner’s Dilemma analysis of decisions about human enhancement. I would have enjoyed more material here; for example, I would have liked to see this extended to include a discussion of iterated PD scenarios and Nash equilibria. But some less intrepid readers might not make it that far, as the book is already two hundred pages long and the prose presentation is fairly dense. As with most eBooks, readability could be aided by more effective use of images, typography, and page layout.
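For readers new to the game-theoretic framing, here is a minimal sketch (with payoffs of my own invention, not Miller’s) of the enhancement decision as a one-shot Prisoner’s Dilemma between two parties: mutual restraint beats a mutual arms race, yet enhancing is each side’s dominant strategy, so the unique pure-strategy Nash equilibrium is the worse mutual-enhancement outcome.

```python
# (row_choice, col_choice) -> (row_payoff, col_payoff); symmetric game
PAYOFFS = {
    ("abstain", "abstain"): (3, 3),   # mutual restraint
    ("abstain", "enhance"): (0, 5),   # fall behind a rival
    ("enhance", "abstain"): (5, 0),   # pull ahead of a rival
    ("enhance", "enhance"): (1, 1),   # costly arms race
}
CHOICES = ("abstain", "enhance")

def best_response(opponent_choice):
    """Row player's payoff-maximizing reply to a fixed opponent choice."""
    return max(CHOICES, key=lambda c: PAYOFFS[(c, opponent_choice)][0])

def nash_equilibria():
    """Pure-strategy profiles where neither player gains by deviating
    (uses the game's symmetry to check the column player too)."""
    return [(r, c) for r in CHOICES for c in CHOICES
            if best_response(c) == r and best_response(r) == c]

assert best_response("abstain") == "enhance"      # enhancing dominates
assert nash_equilibria() == [("enhance", "enhance")]
```

An iterated version, which the review wishes the book had explored, changes the picture: with repeated play and memory, conditional strategies like tit-for-tat can sustain mutual restraint.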
Singularity Rising opens with an homage to one of my transhumanist heroes, John von Neumann. As is not always recognized, it was von Neumann who first considered the modern idea of a technological singularity and introduced the notion into the world of classified military research just after World War II. It is of course well known that von Neumann’s name is associated with the processing architecture at the heart of the computer and information revolutions. It was von Neumann, along with the more controversial Edward Teller, who became the primary proponents of using technology and science as means to military superiority, a use opposed by others such as the noted cyberneticist Norbert Wiener. These ideas later became the core of American military strategy and furthered the growth of American military strength during the Cold War and to this day.
Miller imagines an emulation society of von Neumann minds, an idea very reminiscent of the work of fellow economist Robin Hanson. That is no coincidence: Hanson’s ideas are considered here in several places. Miller and Hanson have similar but interestingly different ideas about the future of possible “emulation” or “upload” societies consisting of economically cooperating and competing superintelligent software minds.
Somewhat underplayed here is the need for a variety of minds and social interaction. A society of only von Neumanns wouldn’t really have a high level of novelty or creativity, since ostensibly they would all think the same things and could directly exchange and share knowledge. What we really want is a variety of “von Neumann-like” minds, each of which is unique and therefore able to explore diverse possibility spaces.
Both Hanson and Miller explore the idea of an economic system based on software intelligences and consider how these might be used to solve real problems. The notion is linked to the area of agoric computing pioneered by Huberman &amp; Hogg, Eric Drexler, and others. Some of this work is now of practical importance given the rise of cloud computing and digital currencies such as Bitcoin and other novel electronic methods of exchanging value. This warrants deeper investigation, but Miller gives the reader a good taste of what is to come.
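The agoric idea can be illustrated with a toy allocation rule of my own construction (inspired by, but not taken from, the Huberman/Hogg and Drexler work the review mentions): software agents bid currency for a scarce resource such as CPU time, and the market divides the resource in proportion to the bids.

```python
def allocate(bids, total_units):
    """Divide `total_units` of a resource among agents in proportion
    to their bids -- the simplest market-style allocation rule.
    bids: {agent_name: amount_bid}."""
    total_bid = sum(bids.values())
    return {agent: total_units * bid / total_bid
            for agent, bid in bids.items()}

# Three hypothetical software agents competing for 100 CPU-seconds:
shares = allocate({"planner": 6.0, "indexer": 3.0, "backup": 1.0},
                  total_units=100)
assert shares == {"planner": 60.0, "indexer": 30.0, "backup": 10.0}
```

Real agoric systems add budgets, auctions, and price feedback loops on top of this, but the core mechanism is the same: prices coordinate competing computational agents without central scheduling.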
Importantly, Miller considers several possible singularity scenarios, including enhancement of human intelligence by various means, thereby avoiding one of the main failings of most transhumanist futurism: this book is not just about artificial intelligence. However, as with any scenario analysis, much of the conversation falls prey to the notion that each idea will be pursued apart from, and without progress in, the other areas. Artificial intelligence and intelligence augmentation are not opposing developments but complementary paths leading to a symbiosis of man and machine. This is the most likely path of development, since man-machine combinations can demonstrably outperform either man or machine alone in real-world decision tasks. However, the roles of man in this symbiosis are changing and will continue to change rapidly as machine capabilities develop.
Is the singularity near? What we actually observe at this time is progress across many and varied areas of science and technology, such that it is quite difficult to keep abreast of them all. Further, progress occurs at a diverse range of speeds, with some areas advancing rapidly and others slowing down. Once we divide our world into scenarios such as “genetic enhancement” or “brain stimulation” or “AI” but not “IA,” we may miss the primary point: many of these ideas are poised to advance suddenly, and apparently unexpectedly, within a fairly narrow period of time.
And many people agree that this process will begin within the next 25 years or so.
The book ends with an appropriately “rational” argument about cryonics and preparing for the Singularity that not all readers will find convincing. Four variants of the singularity idea are presented, with profiles and expectations about the future listed for each. Presenting this in a table makes it look very official and might convince some readers that these outcomes are more certain than they really are; we don’t know what we don’t know, and the presentation is a bit misleading. Further, the preparations suggested here fell short in my mind, failing to address any real possibility of widespread socio-economic chaos or disruption, which is ironic given the doomsday arguments about super AIs presented earlier.
Many realistic possibilities of radical or unexpected developments come into play: human longevity, economic use of extra-planetary resources such as lunar and asteroid mining, climate change estimates, strategic material resource limitations, pollution and the ecological impacts of technology, prior history, politics, and predictions about warfare and conflict, among many other factors. The suggested preparations rest on some rather problematic assumptions about precisely what is going to happen, when it might happen, and even in what order events might occur, as well as guesses about how these developments will impact our current society and economy. This leads to some fairly apparent biases in the suggestions, and it is the sort of highly speculative futurism that Miller scrupulously avoided elsewhere in the book.
Despite these problems, I found Singularity Rising to be an informative and enjoyable read, one that both novices and experts in transhumanism will find worth their time and money. While it is not the best starting book for everyone, for those amenable to a very rational approach it will be eye opening. The book is clearly presented and accessible. Experts and academic researchers will still find some new ideas here and will be pleased to note that it contains more than a dozen pages of notes and references as well as a decent index.
Go get a copy today: http://www.amazon.com/Singularity-Rising-Surviving-Thriving-Dangerous-ebook