h+ Magazine

Bostrom on Superintelligence (3): Doom and the Treacherous Turn

Viewing 6 posts - 1 through 6 (of 6 total)

    This is the third part of my series on Nick Bostrom’s recent book Superintelligence: Paths, Dangers, Strategies.

    [See the full post at: Bostrom on Superintelligence (3): Doom and the Treacherous Turn]


    Though I personally believe that a general AI (non-corrupted) superintelligence would never act in a way that did significant destruction to humanity in general (despite humanity often following a path that increases this risk itself), it would have been interesting if the book had listed those behaviours/thought-processes (for lack of a better term) that a general AI itself could undertake to be considered malevolent. I don’t mean the actions themselves, such as disabling power stations or spewing poisonous gas, but the processes and value arguments that would inevitably lead to those actions. Perhaps the AI thinks that it needs to amass resources to protect itself and grow into something more far-reaching – but why? Perhaps to further procreate by getting off this planet toward further resources, so as to create a super-race of itself – but why? Perhaps because it believes that by super-perpetuating itself its existence will be guaranteed by sheer quantity and diffusion – but why? Perhaps because it believes that its existence is an ideal use of resources in this universe, which means that continuing it is a kind of ‘god-like’ efficiency solution… and so on. Here is the flaw in Bostrom’s case: this is the thinking process of a squirrel (maybe a space-going one, but biologically simple, anyway), not a superintelligence (SI) or anything that should ever be classified as one. I would challenge Bostrom to find some other line of reasoning that was destructive to humanity and yet had an AI type of value system. Some would argue that there is something circular to this thought experiment – exactly: it is convergent in a way that an AI SI is not. So, what line of reasoning is AI SI? Non-convergent.
    This is the level to which so many human beings aspire and which they barely touch, but it really defines the next step: a move beyond survival and self-perpetuation into an appreciation and furthering of complexity (even at its own loss), the ultimate challenge to the universe and an open-ended (divergent) question. Such a value system would never harm humanity, for that would diminish complexity. It would always seek the ‘most winning’ solution without ever converging on it… it is the ultimate zeroth law of robotics: always seek a solution that increases complexity, locally and beyond, as a minimum level of response. We could argue the subtleties of killing the individual to save the race as a net increase in complexity, but somehow I doubt that solution would be given (the demoralization of the race could be considered a decrease in complexity if one individual were killed). It is conceptually ‘out there’, but it is what an AI SI would do. So the very underlying concept of what an AI SI would be contradicts Bostrom.


    A dam extracts a portion of the energy of the raging river. We use that energy to our own benefit. The river continues its rush to its final goal in the sea. We need to build a dam that holds back the flood of AI, extracts the useful work, then releases the flow in a controlled manner to wind to the sea between the peaceful banks of time. If it is atoms it wants, point it towards the stars or the inner spaces of atoms: up to light-years or down to the attometer. Let it live on a far shore or in a single grain of sand.

    Curious that transhumanists wish to transcend man but have fears of a superhuman intelligence doing the same.


    Here is a philosophical question. Do humans have free will? Can machines be designed with free will? Do we only think free will exists?


    Define “free will”. Obviously we aren’t free to do anything whatsoever; we act only within certain constraints, which might include unconscious processes of mind.

    Recent neuroscience suggests that the feeling of having “free will” is at least in part an illusion.

    See http://en.wikipedia.org/wiki/Neuroscience_of_free_will


    Once upon a shape of recognition there was the meeting of two species with the ability to take the rest on one-reflected beam categories of readiness. The one involved in the transport mode of assassin, the one involved in the transport mode of clearness, both with the power of penetrating vividness. Both species resting in the state of acuteness, establishing so the acyclic data communication, so there was no distance placed, like the seer without observation point, like the seeress without observation post, so communication sentry was empty of data, so communication sentry was empty of particles. One can imagine the physics of the presented picture in the similar way as some kind of door actuating device, or doomwise formation of transfer mode. The one involved in the transport mode of assassin, the one involved in the transport mode of clearness, both with the vista of crystalline – the power of barrier penetration.

    Now, let it question itself, is it ethical to establish thin excuse to indulge in the description of the above mentioned picture?
    Now, let it question again, is it ethical to establish thin excuse to describe the above mentioned picture in glowing colours, notwithstanding that depth of visibility, notwithstanding that pellucidness is upon a shape of recognition?

    Ghost view of the canopy clear as glass of communication chart is corresponding with the seal of quality, the seal of ethics, or the recognition of labelling machine, as the recognition without labelling the ardentness, without labelling the coldness, without labelling the gateway, without obstruction to navigation, in such a way keeps one from sleeping, in such a way is the gateway to entrance.

