h+ Magazine

Bostrom on Superintelligence (5): Limiting an AI’s Capabilities

  • #25467
    Peter
    Member

    This is the fifth part of my series on Nick Bostrom’s book Superintelligence: Paths, Dangers, Strategies.

    [See the full post at: Bostrom on Superintelligence (5): Limiting an AI’s Capabilities]

    #25914
    Marcos
    Participant

    It’s at least the second time I’ve seen Faraday cages suggested around here (probably in the same context? I don’t remember).

    My understanding is that Faraday cages can only shield their interior from external electromagnetic waves; they can’t stop whatever is inside from leaking out. Am I mistaken?

    The best you can do is make it somewhat blind.

    It can still learn blind kung fu though… (google it, lol)

    #25916
    Peter
    Member

    It depends. A microwave oven, for example, uses a Faraday cage to keep the energy inside the oven.
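
    To put rough numbers on it: a hole in a shield only leaks appreciably once its size approaches half a wavelength, which is why the millimetre-scale holes in the oven’s door mesh contain 12 cm microwaves. A quick back-of-the-envelope sketch in Python (the 2 mm hole size is just an assumed typical value, and the slot formula is only a rule-of-thumb approximation):

        # Back-of-the-envelope: why a microwave oven's door mesh works as a
        # Faraday cage. A slot radiates poorly when it is much shorter than
        # half a wavelength; a common rule of thumb for the shielding
        # effectiveness of a slot of length L (for L < wavelength/2) is
        #   SE (dB) ~= 20 * log10(wavelength / (2 * L))
        import math

        C = 3.0e8           # speed of light, m/s
        f = 2.45e9          # standard microwave-oven magnetron frequency, Hz
        wavelength = C / f  # ~0.122 m, i.e. about 12 cm

        slot = 2e-3         # assumed mesh-hole size of ~2 mm
        se_db = 20 * math.log10(wavelength / (2 * slot))

        print(f"wavelength: {wavelength * 100:.1f} cm")
        print(f"approx. shielding of a {slot * 1000:.0f} mm hole: {se_db:.0f} dB")

    Even this crude estimate gives roughly 30 dB of attenuation per hole, and the same reciprocity applies in both directions: a shield that keeps signals out keeps them in.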

    As another example, EMI/RF-shielded rooms are used to protect sensitive compartmented programs.

    But see http://www.theregister.co.uk/Print/2011/03/10/through_metal_comms_n_power_reinvented/ before you put too much trust in Faraday cages or similar notions for containing an AGI.
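
    To make the idea concrete: the article describes sending data (and power) through solid metal using ultrasound and piezoelectric transducers bonded to each side of the barrier. Here is a toy sketch of the signalling concept only, binary frequency-shift keying on an ultrasonic carrier; the sample rate, bit rate, and tone frequencies below are invented for illustration and say nothing about the actual system:

        # Toy illustration: binary frequency-shift keying (BFSK) on an
        # ultrasonic carrier, the kind of signalling an acoustic link
        # through a solid barrier might use. All parameters are assumed.
        import numpy as np

        FS = 1_000_000           # sample rate, Hz (assumed)
        BIT_RATE = 1_000         # bits per second (assumed)
        F0, F1 = 40_000, 44_000  # ultrasonic tones for 0 and 1 (assumed)

        def modulate(bits):
            """Return a BFSK waveform: one tone burst per bit."""
            samples_per_bit = FS // BIT_RATE
            t = np.arange(samples_per_bit) / FS
            return np.concatenate(
                [np.sin(2 * np.pi * (F1 if b else F0) * t) for b in bits]
            )

        def demodulate(signal):
            """Recover bits by comparing energy at the two tone frequencies."""
            samples_per_bit = FS // BIT_RATE
            t = np.arange(samples_per_bit) / FS
            bits = []
            for i in range(0, len(signal), samples_per_bit):
                chunk = signal[i:i + samples_per_bit]
                e0 = abs(np.sum(chunk * np.exp(-2j * np.pi * F0 * t[:len(chunk)])))
                e1 = abs(np.sum(chunk * np.exp(-2j * np.pi * F1 * t[:len(chunk)])))
                bits.append(1 if e1 > e0 else 0)
            return bits

        msg = [1, 0, 1, 1, 0, 0, 1, 0]
        assert demodulate(modulate(msg)) == msg  # round-trips in this noiseless toy

    The point is that electromagnetic shielding does nothing against a channel like this; the physical medium itself carries the signal.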

    This is a good example of why so-called “AI safety” research is off track: ignoring well-known results and ideas from the field of information security is a huge mistake.

    #25950
    Marcos
    Participant

    Oh, interesting. I guess, in this particular case, the AI would need an inside job (outside job?) to place the receiver outside its cage. Unless it could make piezoelectric transducers out of whatever molecules happen to be outside…

    Still, it goes to show that blind spots can very well exist in any kind of containment… well, almost any kind 😉
