Forum Replies Created
- December 6, 2014 at 1:54 am #25154
Peter — I wrote it as a book review because I felt the topic really deserved a book, not just an article … but I didn’t have time to write the book, at least not for the next year or so ;D
Maybe someone else will do some “reverse engineering” and write the book corresponding to the book review — that would be pretty funny ;D …. Book review as requirements specification!!
- October 28, 2014 at 1:30 am #23872
Serban: Yeah, I have read lots of Nick Bostrom’s stuff and also debated/discussed with him and his FHI colleagues in person at Oxford.
See my dialogue with MIRI honcho Luke Muehlhauser at https://hplusmagazine.com/2012/05/05/how-dangerous-is-artificial-general-intelligence-muelhauser-interviews-goertzel/ … he thinks basically the same way as Bostrom
I don’t think Bostrom’s argument is insane. It’s somewhat paranoid, in the sense that it takes **possibilities** (with unclear probabilities attached) and then systematically talks about them as if they were **high probabilities**….
My experience with Bostrom and his chums is that they tend to start out with arguments like “AGI will probably kill all humans, if we don’t specifically design it not to, in some way that we have no idea how to do now.” … and then when you push them, they end up with weaker arguments effectively amounting to “AGI might kill all humans, and you can’t say this is impossible, so we should all be really scared.” But the latter argument can be made about an awful lot of technologies.
I would like to emphasize, though, that I like Nick personally and have a lot of respect for his intellect and his leadership skills.
- October 3, 2014 at 3:24 pm #23230
The following are some comments on the post, sent to me via email from Julia Mossbridge…
As to your three thoughts on re-reading…
1) The argument that essentially runs, “well, we use technology for purpose X so why not for purpose Y” — which I think is your point here — seems a bit silly to me, in that there are reasons we don’t use technology for purpose Y. If purpose Y does more damage than good, for instance. So in general the argument is specious. However, in this specific case, I think you’d argue that purpose Y (extending life/eliminating death) does more good than damage. That’s the central argument, I think. And I can see things both ways, as a good scientist…we would need some data to figure it out for sure, of course. But the thing that scares me is that the way the human mind works, we generally need the old generation to die before new ideas can flourish. That’s part of the evolution of human thought. I’m afraid progress would slow considerably if we had all these old ideas sitting around on servers or in extended-life bodies!
2) You have more expertise than I do on what AI/bots can solve and not solve. But I have more expertise than you do on what human psychology/neuroscience is like. Solving problems in a way that works for human psychology is likely to be very different than solving problems in a way that works for AI, unless that AI becomes as unreasonable as a human, in which case…why not just have more human babies?
3) GBS quotes — I don’t believe the only way to make progress is to adapt the world to yourself. It’s one way. Another way to make psychological and spiritual progress is to adapt to the world. Both are necessary for our evolution as a species. Love the quote about reason enslaving you if you try to master it…we can both agree there!
Thanks, as always, for your respectful and thoughtful ways.