h+ Magazine

Failure is an Option

Viewing 3 posts - 1 through 3 (of 3 total)

    Jaron Lanier is wrong about AI

    [See the full post at: Failure is an Option]


    I think the idea of an elegant, hand-coded algorithm that brings AGI to life is tempting to dismiss, perhaps for good reason. However, assuming (as I do) that AGI is possible, there is some combination of code that would do the trick – which is what intrigues me about the question of what the minimally sufficient code would be.

    I’ll quote Jeremy Stamper:

    “There exists at any given time a minimum number of bits of code needed to give rise to recursively self-improving artificial intelligence (“The Stamper Minimum”).

    There is an actual value, it is discoverable, and with sufficient intelligence, it will be discovered.

    Because the probability of determining The Stamper Minimum correlates positively with the degree of intelligence applied to determining it, it is extremely unlikely that the code represented by The Stamper Minimum (“The Stamper Minimum Code”) will be used to originate super-intelligence. Instead, it is likely to be discovered retrospectively by a sufficiently advanced super-intelligence.

    The Stamper Minimum is an ever-changing value because the effectiveness of The Stamper Minimum Code is necessarily linked to available external tools and technologies, which are themselves dynamic.”


    Very interesting. I hadn’t heard of the Stamper Minimum before. But it is related to the notions of Kolmogorov complexity, minimum description length, and minimum message length, and hence also to Chaitin’s Algorithmic Information Theory.
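    To make the connection concrete: Kolmogorov complexity itself is uncomputable, but any general-purpose compressor gives an upper bound on a string’s description length. This is just an illustrative sketch (not anything from the thread), using Python’s standard `zlib` module:

    ```python
    import os
    import zlib

    def description_length_upper_bound(data: bytes) -> int:
        """Upper-bound the description length of `data` (in bytes) by the
        size of its zlib-compressed form. True Kolmogorov complexity is
        uncomputable; compression only ever gives an upper bound."""
        return len(zlib.compress(data, 9))

    # A highly regular string has a short description...
    regular = b"ab" * 500          # 1000 bytes of raw data
    # ...while random-looking bytes barely compress at all.
    random_ish = os.urandom(1000)  # also 1000 bytes

    print(description_length_upper_bound(regular))     # much smaller than 1000
    print(description_length_upper_bound(random_ish))  # close to (or above) 1000
    ```

    The gap between the two results is the sense in which “structured” objects have low algorithmic information content, which is the same currency the Stamper Minimum is denominated in.
    
    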

    Obviously there is a shortest program for any computation. Since a recursively self-improving AGI is a computation, there is a shortest program for AGI.

    Finding it, however, is far from obvious: Kolmogorov complexity is uncomputable, so no algorithm can determine the shortest program for an arbitrary computation.
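    To see why, consider brute-force search in a toy two-instruction language I’ve made up for illustration (it is not anything real). Even here the search space doubles with every instruction, and in any Turing-complete language you would additionally need a step budget per candidate, because the halting problem means you can’t know which candidates ever finish:

    ```python
    from itertools import product

    # Toy language: a "program" is a sequence of ops applied to 0.
    OPS = {"inc": lambda x: x + 1, "dbl": lambda x: x * 2}

    def run(program):
        """Execute a program (a tuple of op names) starting from 0."""
        x = 0
        for op in program:
            x = OPS[op](x)
        return x

    def shortest_program(target, max_len=20):
        """Brute-force the shortest program computing `target`.
        Enumerates all len-k programs in increasing k: 2**k candidates
        per length, so this blows up exponentially even in a toy
        language where every program is guaranteed to halt."""
        for length in range(max_len + 1):
            for program in product(OPS, repeat=length):
                if run(program) == target:
                    return program
        return None

    p = shortest_program(10)
    print(len(p), p)  # a 5-op program, e.g. inc, inc, dbl, inc, dbl
    ```

    Scaling this naive enumeration from a toy language to real code is exactly where the search becomes hopeless without much greater intelligence applied to it, which is the retrospective-discovery point in the quoted passage.
    
    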
