In the wake of a tragedy like the nuclear incidents we’re currently seeing in Japan, one of the questions that rises to the fore is: What can we do to prevent similar problems in the future?
This question can be addressed narrowly, by analyzing the specifics of nuclear reactor design, or by simply resolving to avoid nuclear power (a course that some Western nations may take, but one unlikely to be taken by China or India, for example). But the question can also be addressed more broadly: What can we do to prevent unforeseen disasters arising as the result of malfunctioning technology, or of unforeseen interactions between technology and the natural or human worlds?
It’s easy to advocate being more careful, but careful attention comes with costs in both time and money, which means that in the real world care is necessarily compromised to avoid excessive conflict with other practically important requirements. For instance, the Japanese reactor designs could have been carefully evaluated in scenarios similar to the one that has recently occurred; but this was not done, most likely because it was judged too unlikely a situation to be worth spending scarce resources on.
What is really needed, to prevent being taken unawares by “freak situations” like what we’re seeing in Japan, is a radically lower-cost way of evaluating the likely behaviors of our technological constructs in various situations, including those judged plausible but unlikely (like a magnitude 9 earthquake). Due to the specialized nature of technological constructs like nuclear reactors, however, this is a difficult requirement to fulfill using human labor alone. It would appear that the development of advanced artificial intelligence, including Artificial General Intelligence (AGI) technology, has significant potential to improve the situation.
An AI-powered "artificial nuclear scientist" would have been able to take the time to simulate the behavior of Japanese nuclear reactors under large earthquakes, tsunamis, and so forth. Such simulations would very likely have led to improved reactor designs, averting the recent calamity as well as many other possible ones that we haven't seen yet (but may see in the future).
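The kind of low-cost scenario evaluation described above can be illustrated with a toy Monte Carlo sketch. To be clear, everything here is a hypothetical placeholder of my own devising, not real reactor engineering: the failure model, the thresholds (`design_basis_quake`, `seawall_height_m`), and the scenario sampler are all invented purely to show the shape of the computation an automated scientist might run cheaply at massive scale.

```python
import random

def reactor_survives(quake_magnitude, wave_height_m):
    """Toy failure model (illustrative only): backup power is lost if the
    quake exceeds the design-basis magnitude AND the wave tops the seawall."""
    design_basis_quake = 8.2   # hypothetical design limit
    seawall_height_m = 5.7     # hypothetical seawall height
    return not (quake_magnitude > design_basis_quake and
                wave_height_m > seawall_height_m)

def estimate_failure_probability(n_trials=100_000, seed=42):
    """Estimate the chance of failure across randomly sampled rare scenarios."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_trials):
        # Crude scenario sampler: strong quakes with loosely correlated waves.
        quake = rng.uniform(7.0, 9.5)
        wave = max(0.0, (quake - 7.0) * rng.uniform(0.0, 10.0))
        if not reactor_survives(quake, wave):
            failures += 1
    return failures / n_trials

if __name__ == "__main__":
    print(f"Estimated failure probability: {estimate_failure_probability():.3f}")
```

The point of the sketch is economic, not physical: once the simulation loop exists, checking a design against a magnitude-9 scenario costs no more than checking it against a magnitude-7 one, which is exactly the property human-labor-based safety review lacks.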
Of course, AGI may also be useful for palliating the results of disasters that do occur. For instance, cleanup around a nuclear accident site is often slowed by the risk of exposing human workers to radiation. Robots can already be designed to function in the presence of high radiation; what's currently underdeveloped is the AI to control them. Moreover, most of the negative health consequences of radiation from a nuclear accident such as the recent one are long-term rather than immediate: sufficiently irradiated individuals face increased cancer risk, for example. AI modeling of biological systems and genomic data could very plausibly lead to novel therapies remedying this damage. The reason relatively low levels of radiation can give us cancer is that we don't understand the body well enough to instruct it how to repair relatively minor radiation-incurred damage. AGI systems integratively analyzing biomedical data could change this situation in relatively short order, once developed.
Finally, the creation of advanced intelligences with different capabilities than the human mind could quite plausibly lead to new insights, such as the development of alternative power sources without the same safety risks. Safe nuclear fusion is one possibility, but there are many others; to take just one relatively pedestrian example, perhaps intelligent robots capable of operating easily in space could perfect some of the existing designs for collecting solar energy with huge orbital solar collectors.
There is no magic bullet for remedying or preventing all disasters, but part of the current situation seems to be that the human race’s ability to create complex technological systems has outstripped its ability to simulate their behavior, and foresee and remedy the consequences of this behavior. As the progress of technology appears effectively unstoppable, the most promising path forward may be to progressively (and, hopefully, rapidly) augment the human mind with stronger and stronger AI.
P.S. [added a day after the original article was posted]
A Forbes blogger picked this article up and wrote a critique of it, to which I responded in the comments to his blog post. His main point (very loosely paraphrasing) was that just listing amazing things AGI can do isn’t very useful, and it would be better to describe a path to actually making these things work. Well, yes! A detailed discussion of how to get from current tech to robots capable of solving or preventing nuclear disasters would surely have made a great article, but would have taken a bunch more space, and this was just intended as a short evocative piece. But anyway, the Forbes article succeeded in prodding me to add a few more thoughts about how to get there from here…
This IEEE Spectrum article describes in detail some of the reasons why current robot technology is only of limited use for helping with nuclear disasters. The basic reasons come down to
- lack of radiation shielding (a plain old engineering problem rather than directly an AGI problem — though hypothetically an AGI scientist could solve the problem, I’m sure human scientists can do so also).
- relative physical ineptitude at basic tasks like climbing stairs and opening doors (problems to be solved by a combination of engineering advances and advances in intelligent control, ultimately AGI)
- the need for tele-operation, which is awkward when robots are moving around inside shielded buildings and so forth. This is purely an AGI problem: the whole goal of applying AGI to robotics is to let robots operate autonomously.
There are many approaches in the research community aimed at creating AGI robots of this sort, and if things go well one of these may lead to robots capable of providing serious aid in nuclear disasters within, say, a 5-25 year timeframe. For brevity, I'll list just three current approaches:
- the European IM-CLEVER (“Intrinsically Motivated Cumulative Learning Versatile Robots”) project which aims to make a robot capable of autonomously learning various practical tasks, initially childlike ones and then growing more sophisticated
- the OpenCog project I’m involved in, which is aimed at creating human-level intelligent robots by 2023
- Juyang Weng’s developmental robotics project at Michigan State University
These exemplify the kind of work that I think will eventually lead to robots capable of remedying nuclear disasters. Of course, other relevant capabilities, like inventing new energy sources, may be realized by AGI systems without robot embodiment; for some relevant current work, see the recent automated scientific discovery system Eureqa … which is admittedly a far cry from a system capable of obsoleting nuclear reactors and oil wells, but is serious scientific work explicitly pushing in that direction.