Bostrom on Superintelligence (1): The Orthogonality Thesis
- November 4, 2014 at 7:59 pm | #23945 | Peter (Member)
John Danaher reviews the key ideas of Nick Bostrom’s recent book, Superintelligence: Paths, Dangers, Strategies.
[See the full post at: Bostrom on Superintelligence (1): The Orthogonality Thesis]
November 4, 2014 at 9:08 pm | #23956 | Tim (Participant)
Very good piece, especially for those of us who anticipate the “singularity” with a certain trepidation. If we create an intelligence greater than ours that has the ability to expand its own intelligence (and why wouldn’t it?), eventually it will cease to think that we matter in any way, shape or form. It’s rather like how humans think of ants: we have more important things to think about, and if we step on an ant or cause a bunch of ants to die, we don’t really consider it a tragedy. My feeling is that such a progression for such an intelligence is inevitable. I hope I’m wrong and we don’t get stepped on by our own creation…
November 4, 2014 at 9:47 pm | #23962 | Peter (Member)
Interestingly there are still quite a few ants out there. We kill them at will, and yet there are billions upon billions of ants.
As for this specific article, we’re going to run John’s entire series because it really is the best coverage of some of the more technical ideas. We hear quite a bit from the popular media and from pundits such as Elon Musk and Stephen Hawking. However, it is important to understand the details.
For example, here we get into the important Orthogonality Thesis. And while I don’t directly dispute this thesis, I do note that many of the proponents of doomsday scenarios are not very careful here. As John correctly states, this is an “instrumental definition” of intelligence, and it may not correspond to people’s everyday commonsense ideas about it.
This is problematic when pundits speak casually and leave out the details, such as the fact that this sort of superintelligence isn’t as smart as the average assembly line worker, in that it won’t notice when its operation deviates from its creators’ desires. Humans, by contrast, are able to notice and adapt their behavior in situations where a goal state is unclear or underspecified. So it is somewhat important to clarify what we mean by intelligence here.
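To make the point concrete, here is a toy sketch (my own illustration, not anything from John's article or Bostrom's book) of what an "instrumental definition" of intelligence amounts to: the agent is simply very good at picking actions that score well on a fixed objective, and nothing in that loop ever asks whether the objective still matches what its designer actually wanted.

```python
# Toy illustration of "instrumental" intelligence: competent means-end
# optimization of a fixed objective, with no reflection on whether the
# objective matches the designer's intent. All names here are made up.

def instrumental_agent(objective, actions):
    """Return whichever action scores highest on the fixed objective."""
    return max(actions, key=objective)

# Designer's intent: keep the room comfortable (around 21 degrees).
# Proxy objective actually handed to the agent: maximize the reading.
proxy = lambda setting: setting          # rewards ever-higher temperatures
candidate_settings = [18, 21, 24, 35]

chosen = instrumental_agent(proxy, candidate_settings)
print(chosen)  # 35 -- optimal under the proxy, far from the intent
```

The agent is "intelligent" in the instrumental sense (it optimizes flawlessly) while being oblivious in the commonsense one, which is exactly the gap the casual pundit talk glosses over.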
It’s not exactly what you think.
November 8, 2014 at 7:07 am | #23995 | Gabriel (Participant)
I’m going to start this off by saying I really don’t know much of anything, and there could be an easy answer I do not see.
Why do we have to try to make an AI that has set final goals it can’t or isn’t supposed to change? If we are making something that is going to be many times smarter than all humanity, wouldn’t the being best suited to set its final goals be the AI itself? Sure, there is an infinite (or near-infinite) number of possible types of “intelligent” agents, but the ones that have any interest at all in humans are, I would think, a small portion of them. AIs that would do things humans wouldn’t like are an even smaller portion of all potential AIs. I don’t see why an AI has to have any interest in us; alien minds would be alien. We don’t really know what they might do, and if someone wants to take the time, they could probably think up a lot of very plausible agents that have zero interest in anything human or would do nothing at all in regards to humans.
I think it would be worth it to see what would happen if you give an AI no absolute goals, but start it off with a few basic lower-priority goals/values and give it access to more information and tools to gather information on an exponential curve, starting with what could be accepted as the highest-priority subjects, such as philosophy, maths, linguistics, physiology and physics. It doesn’t have to be those; that is just an example. It would also need lots of human interaction.
What would its values become? Is there any way to actually build an AI with set final goals it can’t change, without having it come back to bite us? At some point, could an AI alter the final goals humans set for it? I can’t see how unchangeable final goals are really possible for anything that can examine itself.
Is it even a “good” or “moral” thing to force goals on something that is supposed to be a cognizant being we are creating? Isn’t that like giving birth to a child and forcing it into slavery?
Also, on the topic of killing ants: there are LOTS of people who will go far out of their way to avoid killing anything they can see, and some people go even further, like followers of Jainism. Even within humans, there are lots of people with radically different final goals and different ways of going about achieving them.
There is more matter and energy in our corner of the universe than anything could use in a very long time, so why would it want human matter and energy when humans are likely to react HORRIBLY to that?
Don’t get me wrong, I think AI represents a big risk; but a similar risk to having a child that you know for a fact will be better than you and will have the ability to destroy all that you love. It will be up to us to decide how we treat and form that child.
I would hope we could learn from the myth of Cronus and Zeus, but maybe there’s nothing to learn there in regards to AI.
But I’m really undereducated (I’m kinda embarrassed to post) and I’m quite sure it really shows.
Thanks for your time if you actually read all that; I know for most people it’s TL;DR. Also, my personal beliefs bleed through heavily, so I question if I should even post.