h+ Magazine

Humans for Transparency in Artificial Intelligence

    #29571

    One thing we can do now is to advocate for the development of AI technology to be as open and transparent as possible, so that AI is something the whole human race is doing for itself rather than something foisted on the rest of the world by one small group or another. In collaboration with the Ethiopian AI firm iCog Labs, we have created an online petition in support of transparent AI.

    [See the full post at: Humans for Transparency in Artificial Intelligence]

    #29581
    James
    Participant

    If we’re in a world where recursive self-improvement is a possible thing, then the game theory of AI is well summarized by Stuart Armstrong, Nick Bostrom and Carl Shulman in Racing to the Precipice: A Model of Artificial Intelligence Development (http://www.fhi.ox.ac.uk/wp-content/uploads/Racing-to-the-precipice-a-model-of-artificial-intelligence-development.pdf).

    Having extra development teams, and extra enmity between teams, can increase the danger of an AI disaster, especially if risk-taking matters more than skill in developing the AI. Surprisingly, information also increases the risk: the more teams know about each other’s capabilities (and about their own), the greater the danger.

    A major effect of openness is to remove humanity’s brakes. There’s considerable uncertainty about whether recursive self-improvement will actually happen, but if signs appear that it will, then responsible developers will want to slow down and carefully understand and verify their AIs before proceeding. But if research is broadly disseminated, then being cautious will simply mean losing the race, and so the AI that is ultimately created will be one that was made recklessly.
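
    That race-to-the-bottom dynamic can be made concrete with a toy simulation. The sketch below (Python) is not the model from the Armstrong/Bostrom/Shulman paper; it is my own simplified illustration under assumed rules: each team draws a hidden capability, picks a risk level (safety work skipped), the team with the highest capability-plus-risk score wins the race, and the winner’s risk level is the probability of disaster. In the “closed” condition every team keeps a fixed cautious risk level; in the “open” condition each team sees the leader’s capability and takes just enough extra risk to stay competitive. The simulate function and all of its parameters are illustrative assumptions, not anything taken from the paper.

        import random

        def simulate(n_teams, full_information, cautious_risk=0.2, trials=100_000):
            # Estimate the probability that the winning team causes a disaster.
            # (Toy rules, assumed for illustration; not the paper's model.)
            disasters = 0
            for _ in range(trials):
                # Hidden capabilities, drawn uniformly from [0, 1].
                caps = [random.random() for _ in range(n_teams)]
                if not full_information:
                    # Closed development: everyone keeps the fixed cautious risk level.
                    risks = [cautious_risk] * n_teams
                else:
                    # Open development: each team sees the leader's capability and
                    # takes just enough extra risk to stay competitive (capped at 1).
                    best = max(caps)
                    risks = [min(1.0, cautious_risk + (best - c)) for c in caps]
                scores = [c + r for c, r in zip(caps, risks)]
                top = max(scores)
                # Break ties randomly among the leading teams.
                winner = random.choice(
                    [i for i, s in enumerate(scores) if s >= top - 1e-12])
                # The winner's skipped safety work is the chance of disaster.
                if random.random() < risks[winner]:
                    disasters += 1
            return disasters / trials

        if __name__ == "__main__":
            for n in (2, 5, 10):
                closed = simulate(n, full_information=False)
                open_ = simulate(n, full_information=True)
                print(f"{n:2d} teams: P(disaster) ~ {closed:.3f} closed, {open_:.3f} open")

    Under these made-up rules the closed condition’s disaster probability stays at the cautious baseline, while the open condition’s climbs with the number of teams, since trailing teams compensate for lower capability by cutting safety. That is roughly the dynamic the paper formalizes.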

    I am particularly concerned because this petition and its authors do not seem to have engaged with this argument. Rather than acknowledging that the question of recursive self-improvement is still open, that it is a key strategic consideration, and that more evidence may come in, the authors have simply ignored the issue entirely. The petition pushes a position that is, under one very plausible set of predictions about the future, utterly disastrous.

    Openness does have some benefits. No one wants to be blindsided by AI developments happening in secret, and predicting whether, and when, recursive self-improvement will happen will be easier with more open information about AI progress. But locking in openness as an ideology is simply too likely to be disastrous. It is not acceptable.

    This is the same mistake OpenAI made, which I wrote about in OpenAI Should Hold Off on Choosing Tactics (http://conceptspacecartography.com/openai-should-hold-off-on-choosing-tactics/). They even went so far as to delete the word “safely” from their founding statement!

    #29582
    Marcos
    Participant

    Mwahahahaha… monkey silliness amuses me to no end. I expect the next couple of decades will be particularly amusing since, like every good information technology, silliness is picking up exponentially.

    But since you’re one of the rare ones I’ve actually been unable to find much silliness in, Ben (except perhaps for that silly thing about a singularity by 2025, but I digress ^_^), I’ll ask you a few questions:

    1) What is the most powerful monkey-made supercomputer on this planet?

    2) I bet you knew that one, so, OK: how many orders of magnitude LESS powerful is the 2nd one?

    3) Weren’t you worried about unemployment? This will open LOTS of jobs… if nothing else, for psychotherapists, hahaha.

    4) And when you need actual psychotherapists, and so many of them, to f* DEBUG SOFTWARE, why are you so worried about the equivalent of the punch-card era of once-a-day debugging?

    Noooo, oh no my friend. For the moment, we’ve got ’em right where we want ’em: building their little FRAIL alchemical crystal balls…

    … and failing miserably. ; )

    After they learn their HISTORY lesson (and have their pathetic monkey brains rotting from all the Hermetic Mercury they paid dearly for), THEN we should consider helping out. I should remind the reader who is still with me (despite the “hermetic unveiling”, I applaud you) that “science” (quotes because it refers to the simian-derived kind) is in a “reproducibility” crisis. Biology in particular is not much better than 10%, so yeah, go waste those FLOPS trying to “reproduce” (if you forgive the pun) the “generality”/idiosyncrasy of monkey “intelligence” instead of the (current) unfathomability of your own chaotic mess of a carcass. Be my guest, keep on amusing me.

    And BTW, about OpenAI in particular: what is someone who openly calls for AI in the hands of most people doing giving millions of dollars to an elitist[1] who, conversely, openly calls for it to remain in the hands of a secretive, secluded, ivory-towered team of self-defeating pedants? Bwahahaha, amusing indeed! No wonder stage-1 rockets aren’t working. Still all too human, I guess. Instead of focusing on NON-ISSUES, would some kind soul call his attention to the fact that a Mars ecosystem won’t bootstrap itself without monkeys having the slightest clue how to do it?[2] And I expect NOTHING less than learning how whilst actually making a PROFIT out of it.[3]

    It pains me to admit it (not really, he’s OK), but Kurzweil is less wrong about this one.

    Refs.:
    [1] http://jetpress.org/v25.2/goertzel.htm
    [2] https://en.wikipedia.org/wiki/Biosphere_2
    [3] https://www.reddit.com/r/seasteading/comments/27ci1o/how_can_i_invest_in_seasteading/

    Disclaimer: I’ve been generously funded by Silly Incorporated.

    #29583
    William
    Participant