
bengoertzel

Forum Replies Created

Viewing 3 posts - 1 through 3 (of 3 total)
    #25154

    Peter — I wrote it as a book review because I felt the topic really deserved a book, not just an article … but I didn’t have time to write the book, at least not for the next year or so ;D

    Maybe someone else will do some “reverse engineering” and write the book corresponding to the book review — that would be pretty funny ;D …. Book review as requirements specification!!

    #23872

    Serban: Yeah, I have read lots of Nick Bostrom’s stuff and also debated/discussed with him and his FHI colleagues in person at Oxford.

    See my dialogue with MIRI honcho Luke Muehlhauser at https://hplusmagazine.com/2012/05/05/how-dangerous-is-artificial-general-intelligence-muelhauser-interviews-goertzel/ … he thinks basically the same way as Bostrom

    I don’t think Bostrom’s argument is insane. It’s somewhat paranoid, in the sense that it takes **possibilities** (with unclear probabilities attached) and then systematically talks about them as if they were **high probabilities**….

    My experience with Bostrom and his chums is that they tend to start out with arguments like “AGI will probably kill all humans, if we don’t specifically design it not to, in some way that we have no idea how to do now.” … and then when you push them, they end up with weaker arguments effectively amounting to “AGI might kill all humans, and you can’t say this is impossible, so we should all be really scared.” But the latter argument can be made about an awful lot of technologies.

    I would like to emphasize, though, that I like Nick personally and have a lot of respect for his intellect and his leadership skills.

    #23230

    The following are some comments on the post, sent to me via email from Julia Mossbridge…

    ***
    FROM JULIA:
    ***

    As to your three thoughts on re-reading…

    1) The argument that essentially runs, “well, we use technology for purpose X so why not for purpose Y” — which I think is your point here — seems a bit silly to me, in that there are reasons we don’t use technology for purpose Y. If purpose Y does more damage than good, for instance. So in general the argument is specious. However, in this specific case, I think you’d argue that purpose Y (extending life/eliminating death) does more good than damage. That’s the central argument, I think. And I can see things both ways, as a good scientist… we would need some data to figure it out for sure, of course. But the thing that scares me is that, given the way the human mind works, we generally need the old generation to die before new ideas can flourish. That’s part of the evolution of human thought. I’m afraid progress would slow considerably if we had all these old ideas sitting around on servers or in extended-life bodies!

    2) You have more expertise than I do on what AI/bots can and cannot solve. But I have more expertise than you do on what human psychology/neuroscience is like. Solving problems in a way that works for human psychology is likely to be very different from solving problems in a way that works for AI, unless that AI becomes as unreasonable as a human, in which case… why not just have more human babies?

    3) GBS quotes — I don’t believe the only way to make progress is to adapt the world to yourself. It’s one way. Another way to make psychological and spiritual progress is to adapt to the world. Both are necessary for our evolution as a species. Love the quote about reason enslaving you if you try to master it…we can both agree there!

    Thanks, as always, for your respectful and thoughtful ways.
