Could AGI Prevent Future Nuclear Disasters?

In the wake of a tragedy like the nuclear incidents we’re currently seeing in Japan, one of the questions that rises to the fore is: What can we do to prevent similar problems in the future?

This question can be addressed narrowly, via analyzing specifics of nuclear reactor design, or by simply resolving to avoid nuclear power (a course that some Western nations may take, but is unlikely to be taken by China or India, for example). But the question can also be addressed more broadly: What can we do to prevent unforeseen disasters arising as the result of malfunctioning technology, or unforeseen interactions between technology and the natural or human worlds?

It’s easy to advocate being more careful, but careful attention comes with costs in both time and money, which means that in the real world care is necessarily compromised to avoid excessive conflict with other practically important requirements. For instance, the Japanese reactor designs could have been carefully evaluated in scenarios similar to the one that has recently occurred; but this was not done, most likely because it was judged too unlikely a situation to be worth spending scarce resources on.

What is really needed, to prevent being taken unawares by “freak situations” like what we’re seeing in Japan, is a radically lower-cost way of evaluating the likely behaviors of our technological constructs in various situations, including those judged plausible but unlikely (like a magnitude 9 earthquake). Due to the specialized nature of technological constructs like nuclear reactors, however, this is a difficult requirement to fulfill using human labor alone. It would appear that the development of advanced artificial intelligence, including Artificial General Intelligence (AGI) technology, has significant potential to improve the situation.
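The kind of cheap, automated stress-testing described above can be illustrated with a toy Monte Carlo sketch. Everything here is a hypothetical illustration, not a real reactor model: the quake distribution, the one-line "design margin" fragility model, and the function names are all made up for the example.

```python
import random

def cooling_fails(rng, design_margin):
    """Toy fragility model: backup cooling fails whenever a randomly
    drawn earthquake magnitude exceeds the design margin."""
    magnitude = rng.gauss(5.0, 1.5)  # hypothetical regional quake distribution
    return magnitude > design_margin

def failure_probability(design_margin, trials=100_000, seed=42):
    """Estimate the chance of a cooling failure by brute-force simulation,
    the kind of exhaustive scenario sweep that is cheap for software but
    prohibitively expensive for human analysts."""
    rng = random.Random(seed)
    failures = sum(cooling_fails(rng, design_margin) for _ in range(trials))
    return failures / trials

# A design certified only against "likely" quakes vs. one also
# stress-tested against rare magnitude-9 events:
p_margin7 = failure_probability(design_margin=7.0)
p_margin9 = failure_probability(design_margin=9.0)
assert p_margin9 < p_margin7  # the rare-event-tested design fails less often
```

A real evaluation would of course substitute a physics-based reactor simulator for the one-line fragility model; the point is only that once such a simulator exists, sweeping it across huge numbers of rare scenarios is nearly free.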

An AI-powered “artificial nuclear scientist” would have been able to take the time to simulate the behavior of Japanese nuclear reactors in the case of large earthquakes, tidal waves, etc. Such simulations would have very likely led to improved reactor designs, avoiding this recent calamity plus many other possible ones that we haven’t seen yet (but may see in the future).

Of course, AGI may also be useful for palliating the results of disasters that do occur. For instance, cleanup around a nuclear accident area is often slowed by the risk of exposing human workers to radiation. Robots can already be designed to function in the presence of high radiation; what’s currently underdeveloped is the AI to control them. Moreover, most of the negative health consequences of radiation from a nuclear accident such as the recent ones are long-term rather than immediate: sufficiently irradiated individuals will have increased cancer risk, for example. AI modeling of biological systems and genomic data could very plausibly lead to novel therapies remedying this damage. The reason relatively low levels of radiation can give us cancer is that we don’t understand the body well enough to instruct it how to repair relatively minor radiation-incurred damage. AGI systems integratively analyzing biomedical data could change this situation in relatively short order, once developed.

Finally, the creation of advanced intelligences with capabilities different from those of the human mind could quite plausibly lead to new insights, such as the development of alternative power sources without the same safety risks. Safe nuclear fusion is one possibility, but there are many others; to take just one relatively pedestrian example, perhaps intelligent robots capable of operating easily in space could perfect some of the existing designs for collecting solar energy with huge orbital collectors.

There is no magic bullet for remedying or preventing all disasters, but part of the current situation seems to be that the human race’s ability to create complex technological systems has outstripped its ability to simulate their behavior, and foresee and remedy the consequences of this behavior. As the progress of technology appears effectively unstoppable, the most promising path forward may be to progressively (and, hopefully, rapidly) augment the human mind with stronger and stronger AI.

P.S. [added a day after the original article was posted]

A Forbes blogger picked this article up and wrote a critique of it, to which I responded in the comments to his blog post. His main point (very loosely paraphrasing) was that just listing amazing things AGI could do isn’t very useful, and that it would be better to describe a path to actually making these things work. Well, yes! A detailed discussion of how to get from current tech to robots capable of solving or preventing nuclear disasters would surely have made a great article, but it would have taken a lot more space, and this was intended as a short, evocative piece. Anyway, the Forbes article succeeded in prodding me to add a few more thoughts about how to get there from here…

This IEEE Spectrum article describes in detail some of the reasons why current robot technology is of only limited use for helping with nuclear disasters. The basic reasons come down to:

  1. lack of radiation shielding (a plain old engineering problem rather than directly an AGI problem; though hypothetically an AGI scientist could solve it, I’m sure human scientists can do so as well);
  2. relative physical ineptitude at basic tasks like climbing stairs and opening doors (problems to be solved by a combination of engineering advances and intelligent-control advances, ultimately AGI);
  3. the need for tele-operation, which is awkward when robots are moving around inside shielded buildings and so forth. This is purely an AGI problem: the whole goal of applying AGI to robotics is to let robots operate autonomously.

There are many approaches in the research community aimed at creating AGI robots of this sort, and if things go well, one of them may lead to robots capable of providing serious aid in nuclear disasters within, say, a 5–25 year timeframe. For brevity, I’ll list just three current approaches:

  • the European IM-CLEVER (“Intrinsically Motivated Cumulative Learning Versatile Robots”) project, which aims to make a robot capable of autonomously learning various practical tasks, initially childlike ones and then progressively more sophisticated ones
  • the OpenCog project I’m involved in, which is aimed at creating human-level intelligent robots by 2023
  • Juyang Weng’s developmental robotics project at Michigan State University

These exemplify the kind of work that I think will eventually lead to robots capable of fixing nuclear disasters. Of course, other relevant intelligent capabilities, like inventing new energy sources, may be realized by AGI systems without robot embodiment; for some relevant current work, see the recent automated scientific discovery system Eureqa, which is admittedly a far cry from a system capable of obsoleting nuclear reactors and oil wells, but is serious scientific work explicitly pushing in that direction.


12 Responses

  1. Steve Richfield says:

    The disaster is the child of two incredibly simple human errors:

    1. A simple toilet-filling mechanism connected to an outside inlet for use in disasters would have provided a way to keep the nuclear fuel submerged despite technical failures. No one would ever install such a simple thing because it would be an admission of engineering weakness.
    2. Spent fuel is kept on site rather than being reprocessed for purely political reasons – no one wants the reprocessing done in their own back yard.

    An AGI would be faced with these same two pressures. Only through the exercise of very un-democratic power could it get past them.

    As I have pointed out on other forums, the obvious, reasonable, expected things that an AGI should and would mandate are COMPLETELY socially unacceptable to just about everyone.

    AGI may be about creating our own God, but I expect that everyone would view such “success” as creation of their own devil.

    BTW, I am on the side of the AGIs here. 99.99% of the population wouldn’t take any uncomfortable action to save the human race.


    • Well Steve … those observations might be true, I don’t know…

      But if so, it doesn’t obviate the potential value of AGI repair robots.

      And nor does it rule out the possibility that an AGI could find politically acceptable technical solutions. Heck, just inventing a cheap material much stronger than those currently used, which could then be deployed to store the waste, would be a big help. I bet this could be done in short order by an AGI scientist with sensors and actuators at the nano-scale, capable of manipulating nano-fibers as readily as we stack blocks or tie knots in rope.

  2. Ben Goertzel says:

    A Forbes blogger critiqued this article

    and my response to him resides in two comments at the end of his article…

    — Ben Goertzel (the author of the article)

  3. Anonymous says:

    Creating AGI to prevent nuclear disasters is like creating teleporters to prevent plane crashes. The potential disruptive impact of these technologies is so extreme that it outweighs any such specific suggested use.

    AGI will affect far more people (and utility) than all expected nuclear power hazards, even after our latest awareness-raising due to Japan. Similarly, teleporters would affect the world to a far greater degree than all the plane crashes in history combined. The problem is, we don’t yet know how to build any of these things.

  4. K. says:

    Concentrated solar energy in a desert, built from raw materials found in the desert,

    or algae reactors (for hydrogen, methane, oil, biomass) elsewhere, and why not on the oceans,

    would give exponential energy growth.

    Automate everything, the production and the maintenance,

    and this is “free” energy.

    The energy is everywhere, in abundance.

    Could AGI prevent us, stupid monkeys, from killing ourselves?

    This should have been done after the Second World War; why do we follow a stupid story?

    A spectacle of pain.

    Because of politics, because of a global elite.

    Could AGI prevent us, stupid monkeys, from killing ourselves?

  5. K. says:

    The biggest threat to human beings is human beings.

    Could AGI prevent us, stupid monkeys, from killing ourselves?

  6. K. says:

    The 2nd economy in the world, and maybe one of the most advanced economies, one that could have built the Singularity,

    is dead; the yen is dead.

    Rationing energy

    is rationing IT,

    is rationing food,

    is killing people.

    The yen is dead, and the Asian economic network is impacted.

    The idea of the nuclear reactor is impacted: France, and so Europe, is affected; China and Russia are stopping the reactors they had planned to build.

    Without cheap energy (oil, nuclear, or THE SUN), people will die.

    That is not a problem.

    Don’t you remember: “when you want to be a transhumanist god, if you want to pass the Singularity and be a winner,”

    everything is allowed.

    War in Africa, in the Middle East, in Iran, in South America,

    and against China, where, as Obama said, there is a race for AGI.

    OK, don’t you see the picture?

    I find it difficult to accept that people who say they are geniuses are so stupid about politics and economics.

    • Michelle says:

      So, you think this was done on purpose? I’m sure that if the US were capable of creating an earthquake, they wouldn’t hesitate to do it. Far more cruel military tactics have been employed throughout history. If it were possible, that would show us that there are some twisted people with this technology, and that they aren’t too bright when it comes to common sense.

      “As the progress of technology appears effectively unstoppable, the most promising path forward may be to progressively (and, hopefully, rapidly) augment the human mind with stronger and stronger AI.”

      If there were a technology that could make people more intelligent and enhance the body, do you think it would be available to everyone? I mean, do you think governments would want to enhance people? I for one would love to be smarter! 😉

      Many sociopaths have a high IQ. It would be great if emotional intelligence could be enhanced as well. Whoever has access to these future technologies should bump up their ability to reason, empathize, and use common sense before they go making any Pinky and the Brain plans.

      • Dave Baldwin says:

        No matter the time forward in achieving AGI, we will have the constraints of sloth, greed and vanity.

        What the acceleration affords is the ability for those not concerned with obscene profit to pull off the game changer.

        That way, the “Haves” do not have everything.

  1. October 6, 2011

    […] take another example.  Let’s say that artificial intelligences are developed, as Ben Goertzel proposes, that are capable of designing safer nuclear power plants and performing all sorts of […]
