What is Intelligence?
Intelligence is a very difficult concept (maybe that's the reason why many people try to avoid it or narrow it down). I've worked on this question for many years now, and we went through the literature (psychology, philosophy, AI) to see what definitions individuals, researchers, and groups came up with, and they are very diverse. But there seems to be one recurrent theme, and if you want to put it in one sentence, you could define intelligence as "an agent's ability to achieve goals in a wide range of environments", or to succeed in a wide range of environments.
If you now look at this sentence and ask, "How can this single sentence capture the complexity of intelligence?", there are two answers. First: many aspects of intelligence are emergent properties of this definition, like being able to learn. If I want to succeed or solve a problem, I need to acquire new knowledge, so learning is an emergent phenomenon of this definition.
And the second answer is: this is just a sentence containing a few words. What you really have to do, and that's the hard part, is transform it into meaningful equations and then study those equations. And that's what I have done over the last 12 years.
Bounded Rationality: It is an interesting question whether resource bounds should be included in any definition of intelligence, and the natural answer is of course they should. But there are several problems. The first is that nobody has ever come up with a reasonable theory of bounded rationality (people have tried), so it seems to be very hard. And this is not specific to AI or intelligence; it seems to be symptomatic in science. If you look at several fields (physics being the crown discipline), theories have been developed: Newton's mechanics, General Relativity, Quantum Field Theory, the Standard Model of Particle Physics. They are more and more precise, but they get less and less computable, and having a computable theory is not a guiding principle in developing these theories. Of course, at some point you have to test these theories and you want to do something with them, and then you need a computable theory; this is a very difficult issue, and you have to approximate them or do something about it. But building computational resources into the fundamentals of a theory is not how things work, at least in physics, and the same holds if you look at other disciplines.
You design theories so that they describe your phenomenon as well as possible, and the computational aspect is secondary. Of course, if a theory is incomputable and you can't do anything with it, you have to come up with another one, but that always comes second. Only in computer science (and this comes naturally) do researchers first think about how to design an efficient algorithm to solve a problem, and since AI traditionally sits in the computer science department, the mainstream thought is "how can I build a resource-bounded artificial intelligent system?" I agree that ultimately this is what we want. But the problem is so hard, I think, that we (or at least a large fraction of scientists) should take this approach: model and define the problem first, and once we are confident that we have solved it, go to the second phase and try to approximate the theory, to make a computational theory out of it. Then there are many possibilities: you could still try to create a resource-bounded theory of intelligence, which will be very hard if you want it to be very principled, or you could use heuristics, or many other options. Or the short answer may be: I am not smart enough to come up with a resource-bounded theory of intelligence, therefore I only developed one without resource constraints.
OK, so now we have this informal definition: intelligence is an agent's ability to succeed, or achieve goals, in a wide range of environments. The point is that you can formalize this, and we have done that; it is called AIXI. Or rather, Universal AI is the general theory, and AIXI is the particular agent which acts optimally in this sense.
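For reference, the AIXI agent can be written as a single equation; one common way to write it, following the notation of the paper linked below, is (here a_i are actions, o_i observations, r_i rewards, U is a universal Turing machine, q ranges over candidate environment programs of length ℓ(q), and m is the agent's horizon):

```latex
a_k \;:=\; \arg\max_{a_k} \sum_{o_k r_k} \;\cdots\; \max_{a_m} \sum_{o_m r_m}
  \big( r_k + \cdots + r_m \big)
  \sum_{q \,:\, U(q,\,a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

The inner sum is a Solomonoff-style mixture over all environment programs consistent with the history so far, so shorter (simpler) programs get exponentially higher weight; the alternating max and sum over future actions and percepts is expectimax planning.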
So that works as follows: it has a planning component and a learning component. For the learning component, think of a robot walking around in an environment. At the beginning it has no data or knowledge about the world, so what it has to do is acquire data and knowledge and then build its own model of how the world works. There are very powerful general theories on how to learn a model from data, even in very complex scenarios. This theory is rooted in Kolmogorov complexity and algorithmic information theory; the basic idea is that you look for the simplest model which describes your data sufficiently well.
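As a rough illustration (not part of the theory itself, since true Kolmogorov complexity is incomputable), compressed length can serve as a crude, computable stand-in for "description length": data with a simple underlying model admits a short description, while structureless noise does not. A minimal sketch in Python:

```python
import os
import zlib

def description_length(data: bytes) -> int:
    """Compressed size in bytes: a crude, computable proxy for
    Kolmogorov complexity (the true quantity is incomputable)."""
    return len(zlib.compress(data, 9))

structured = b"0123456789" * 100   # 1000 bytes generated by a tiny "program"
noise = os.urandom(1000)           # 1000 bytes with no exploitable structure

# The structured sequence has a short description; the noise does not.
print(description_length(structured), description_length(noise))
```

In the same spirit, a learner comparing candidate world models would prefer the one giving the shortest total description of its observations so far.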
Hutter’s AIXI: http://www.hutter1.net/ai/aixigentle.pdf