Growing Robot Minds

One way to increase the intelligence of a robot is to train it with a series of missions, analogous to the missions (aka levels) in a video game.

In a developmental robot, the training would not be simply learning—its brain structure would actually change. Biological development shows some extremes that a robot could go through, like starting with a small seed that constructs itself, or creating too many neural connections and then in a later phase deleting a whole bunch of them.

As another example of development vs. learning, a simple artificial neural network is trained by changing its weights over a series of training inputs (with error correction if it is supervised). In contradistinction, a developmental system changes itself structurally. It would be like growing completely new nodes, network layers, or even entire new networks during each training level.
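To make the contrast concrete, here is a toy sketch (the Network and Layer types are invented for illustration, not a real framework): learning nudges values inside a fixed structure, while development adds structure.

#include <cstddef>
#include <iostream>
#include <vector>

// A toy "network" used only to contrast the two kinds of change.
struct Layer {
    std::vector<double> weights;
};

struct Network {
    std::vector<Layer> layers;

    // Learning: the structure stays fixed; only the stored numbers change.
    void learn(double error, double rate) {
        for (auto& layer : layers)
            for (auto& w : layer.weights)
                w -= rate * error;  // crude illustrative weight update
    }

    // Development: the structure itself changes; a whole new layer is grown.
    void growLayer(std::size_t width) {
        layers.push_back(Layer{std::vector<double>(width, 0.0)});
    }
};

int main() {
    Network net;
    net.growLayer(4);     // development: new structure appears
    net.learn(0.5, 0.1);  // learning: existing weights shift
    net.growLayer(8);     // more development at a later "training level"
    std::cout << "layers: " << net.layers.size() << "\n";
}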

Or you can imagine the difference between decorating a skyscraper (learning) and building a skyscraper (development).

construction (development) vs. interior design (learning)

What Happens to the Robot Brain in These Missions?

Inside a robotic mental development mission or stage, almost anything could go on depending on the mad scientist who made it. It could be a series of timed, purely internal structural changes. For instance:

  1. Grow set A[] of mental modules
  2. Grow mental module B
  3. Connect B to some of A[]
  4. Activate dormant function f() in all modules
  5. Add in pre-made module C and connect it to all other modules
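Here is a minimal sketch of such a pre-planned schedule (the Brain and Module types and the stage contents are assumptions made up for illustration):

#include <functional>
#include <iostream>
#include <string>
#include <vector>

// Hypothetical containers for a developing robot brain.
struct Module {
    std::string name;
    std::vector<std::string> links;  // names of connected modules
    bool dormantFunctionActive;
};

struct Brain {
    std::vector<Module> modules;
};

int main() {
    Brain brain;

    // Each stage is a purely internal structural change, applied in order.
    std::vector<std::function<void(Brain&)>> stages = {
        // 1. Grow set A[] of mental modules.
        [](Brain& b) {
            for (int i = 0; i < 3; ++i)
                b.modules.push_back({"A" + std::to_string(i), {}, false});
        },
        // 2. Grow mental module B.
        [](Brain& b) { b.modules.push_back({"B", {}, false}); },
        // 3. Connect B to some of A[].
        [](Brain& b) { b.modules.back().links = {"A0", "A2"}; },
        // 4. Activate dormant function f() in all modules.
        [](Brain& b) {
            for (auto& m : b.modules) m.dormantFunctionActive = true;
        },
        // 5. Add pre-made module C and connect it to all other modules.
        [](Brain& b) {
            Module C{"C", {}, true};
            for (const auto& m : b.modules) C.links.push_back(m.name);
            b.modules.push_back(C);
        },
    };

    for (auto& stage : stages) stage(brain);
    std::cout << "modules after development: " << brain.modules.size() << "\n";
}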

Instead of (or in addition to) pre-planned timed changes, the stages could be based in part on environmental interactions. I think that is a potentially useful tactic for making a robot adjust to its own morphology and to the particular range of environments it must operate and survive in. It also makes the stages more like the aforementioned missions found in computer games.

Note that learning is most likely going to be happening at the same time (unless learning abilities are turned off as part of a developmental level). In the space of all possible developmental robots, one would expect some mental changes to fall into a gray area between development and learning.

Given the input and triggering roles of the environment, each development level may require a special sandbox world. The body of the robot may also undergo changes during each level.

The ordering of the levels/sandboxes would depend on what mental structures are necessary going into each one.

A Problem

One problem that I have been thinking about is how to prevent cross-contamination of mental changes. One mission might nullify a previous mission.

For example, let’s say that a robot can now survive in Sandbox A after making it through Mission A. Now the robot proceeds through Mission B in Sandbox B. You would expect the robot to be able to survive in a bigger new sandbox (e.g. the real world) that has elements of both Sandbox A and Sandbox B (or requires the mental structures developed during A and B). But B might have messed up A, and now you have a robot that’s good at B but not A, or, even worse, not good at anything.

Imagine some unstructured cookie dough. You can form a blob of it into a special shape with a cookie cutter.

cookie cutter

But applying several cookie cutters in a row might result in an unrecognizable shape, maybe even no better than the original blob.

As a mathematical example, take a four-stage developmental sequence where each stage is a different function, numbered 1-4. This could be represented as:

y = f_{4}(f_{3}(f_{2}(f_{1}(x))))

where x is the starting cognitive system and y is the final resulting cognitive system.

This function composition is not commutative; in general,

f_{4}\circ f_{3}\circ f_{2}\circ f_{1} \neq f_{1}\circ f_{2}\circ f_{3}\circ f_{4}
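To see why order matters, consider a trivial numeric stand-in (purely illustrative, not actual developmental transforms): let f_{1}(x) = x + 1 and f_{2}(x) = 2x. Then

f_{2}(f_{1}(x)) = 2(x + 1) = 2x + 2 \neq 2x + 1 = f_{1}(f_{2}(x))

so even two simple transforms produce different results depending on the order in which they are applied.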

A Commutative Approach

There is a way to make an architecture and a type of transform function that are commutative. You might think that would solve our problem; however, it only works under certain constraints that we might not want. To explain, I will show you an example of a special commutative configuration.

We could require all the development stages to have a minimal required integration program. That is, f1(), f2(), etc. are all sub-types of m(), the master function. Or in object oriented terms:

The example here would have each mission result in a new mental module. The required default program would automatically connect this module to all of the other modules using the same protocol.
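A minimal sketch of that arrangement might look like the following (the class names, module names, and wiring protocol here are my own placeholder assumptions):

#include <iostream>
#include <memory>
#include <string>
#include <vector>

struct Module {
    std::string name;
    std::vector<Module*> connections;
};

struct Brain {
    std::vector<std::unique_ptr<Module>> modules;
};

// m(): the master integration function that every stage must pass through.
class MasterStage {
public:
    virtual ~MasterStage() = default;

    void apply(Brain& brain) {
        auto added = grow();  // the only stage-specific part
        // Required default program: connect the new module to every
        // other module, always with the same protocol.
        for (auto& existing : brain.modules) {
            added->connections.push_back(existing.get());
            existing->connections.push_back(added.get());
        }
        brain.modules.push_back(std::move(added));
    }

protected:
    // Each stage decides only *what* module to grow, never how to wire it.
    virtual std::unique_ptr<Module> grow() = 0;
};

// f1(), f2(), ... are sub-types of the master function m().
class F1 : public MasterStage {
    std::unique_ptr<Module> grow() override {
        auto m = std::make_unique<Module>();
        m->name = "module from mission 1";
        return m;
    }
};

class F2 : public MasterStage {
    std::unique_ptr<Module> grow() override {
        auto m = std::make_unique<Module>();
        m->name = "module from mission 2";
        return m;
    }
};

int main() {
    Brain brain;
    F2 f2;
    F1 f1;
    // Because integration always happens the same way, applying the
    // stages in either order produces the same fully connected set.
    f2.apply(brain);
    f1.apply(brain);
    std::cout << brain.modules.size() << " modules, all mutually connected\n";
}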

So in this case:

f_{4}\circ f_{3}\circ f_{2}\circ f_{1} = f_{1}\circ f_{2}\circ f_{3}\circ f_{4}

I don’t think this is a good solution since it seriously limits the cognitive architecture. We would not even be able to build a simple layered control system where each higher layer depends on the lower layers. We cannot have arbitrary links and different types of links between modules. And it does not address how conflicts are arbitrated for outputs.

However, we could add in some dynamic adaptive interfaces in each module that apply special changes. For instance, module type B might send out feelers to sense the presence of module type A, and even if A is added afterwards, B will eventually find it once all the modules have been added. But we will not be able to actually unleash the robot into any of the environments it should be able to handle until the end, and this is bad. It removes the power of iterative development. And it means that a mission associated with a module will be severely limited.

The most damning defect of this approach is that there’s still no guarantee that a recently added module won’t interfere with previous modules as the robot interacts in a dynamic world.

A Pattern Solution

A non-commutative solution might reside in integration patterns. These would preserve the functionality from previous stages as the structure is changed.

a multipole switch to illustrate mental switching

For instance, one pattern might be to add a switching mechanism. The resulting robot mind would be partially modal—in a given context, it would activate the most appropriate developed part of its mind, but not all of the parts at the same time.

A similar pattern could be used for developing or learning new skills—a new skill doesn’t overwrite previous skills; instead, it is added to the relevant set of skills, which are triggered or selected by other mechanisms.
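A small sketch of that kind of skill-preserving switch (the contexts and skills are invented placeholders):

#include <functional>
#include <iostream>
#include <map>
#include <string>

int main() {
    // Skills accumulated across missions; a new mission adds an entry
    // instead of overwriting the ones developed earlier.
    std::map<std::string, std::function<void()>> skills;

    skills["sandbox A"] = [] { std::cout << "use structures grown in Mission A\n"; };
    skills["sandbox B"] = [] { std::cout << "use structures grown in Mission B\n"; };

    // The switching mechanism: given a perceived context, activate the
    // most appropriate developed part of the mind, leaving the rest dormant.
    auto act = [&](const std::string& context) {
        auto it = skills.find(context);
        if (it != skills.end())
            it->second();
        else
            std::cout << "no developed skill matches this context\n";
    };

    act("sandbox A");  // Mission A's abilities still work...
    act("sandbox B");  // ...after Mission B added its own.
}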

Mental Development

Adaptability and uninterrupted, continuous operation are important features of mental development.

An organism that can’t adapt enough is too rigid and brittle—and dies. The environment will never be exactly as expected (or exactly the same as any other previous time during evolution). Sure, in broad strokes the environment has to be the same (e.g. gravity), and the process of reproduction has limits, but many details change.

During its lifetime (ontogeny), starting from inception, all through embryogeny and through childhood and into adulthood, an organism always has to be “on call.” At the very least, homeostasis is keeping it alive. I’d say that, for the most part, all other informational control/feedback systems, including the nervous system (which in turn includes the brain), have to be operational at all times. There are brief gaps; for instance, highly altricial animals depend on their parents to protect them as children. And of course there’s the risk when one is asleep. But even in those vulnerable states, organisms do not suddenly change their cognitive architectures or memories in a radically broken way. Otherwise they’d die.

There are a couple computer-ish concepts that might be useful for analyzing and synthesizing these aspects of mental development:

  1. Parameter ranges
  2. Polymorphism

These aren’t the only relevant concepts; I just happen to be thinking about them lately.

Parameter Ranges

Although it seems to be obvious, I want to point out that animals such as humans have bodies and skills that do not require exact environmental matches. For example, almost any human can hammer a nail regardless of the exact size and shape of the hammer, or the nail for that matter.

Crows have affordances with sticks and string-like objects. Actually, crows aren’t alone—other bird species have been seen pulling strings for centuries [1]. Anyway, crows use their beaks and feet to accomplish tasks with sticks and strings.

crow with a stick

crow incrementally pulling up a string to get food

Presumably there is some range of dimensions and appearances of sticks and strings that a crow can:

  1. Physically manipulate
  2. Recognize as an affordance

And note that those items are required regardless of whether crows have insight at all or are just behaving through reinforcement feedback loops [1].

Organisms fit into their niches, but there’s flexibility to the fit. However, these flexibilities are not unlimited—there are ranges. E.g., physically a person’s hand cannot grip beyond a certain width. The discipline of human factors (and ergonomics) studies these ranges of the human body as well as cognitive limits.

For an AI agent, there are a myriad of places in which there could be a function f with a parameter x and some range R of acceptable values for x. Perhaps the levels of abstraction at which this is useful are exactly those involved with perception of affordances and related skill-knowledge.
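As a toy sketch of that idea (the grip affordance and the numbers are invented, not actual ergonomic data), a range R can simply gate whether a skill f is applicable to a perceived parameter x:

#include <iostream>

// An acceptable range R of values for some perceived parameter x.
struct Range {
    double min, max;
    bool contains(double x) const { return x >= min && x <= max; }
};

// f: a grasping skill that is only afforded when the object's width
// falls inside the hand's workable range.
bool tryGrip(double objectWidthCm) {
    const Range gripRange{0.5, 9.0};  // invented numbers for a hand's grip
    if (!gripRange.contains(objectWidthCm)) return false;
    std::cout << "gripping object of width " << objectWidthCm << " cm\n";
    return true;
}

int main() {
    tryGrip(3.0);   // within range: the affordance is recognized and used
    tryGrip(25.0);  // beyond the range: no grip is possible
}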

Polymorphism

Polymorphic entities have the same interfaces. Typically one ends up with a family of types of objects (e.g. classes). At any given time, if you are given an object from that family, you can interface with it in the same way.

In object oriented programming this is exemplified by calling functions (aka methods) on pointers to a base class object without actually knowing what particular child class the object is an instance of. It works conceptually because all the child classes inherit the same interface.

For example, in C++:

#include <iostream>
#include <memory>

class A {
public:
    virtual ~A() = default; // virtual destructor so deletion through an A* is well defined
    virtual void foo() = 0;
};
class B : public A
{
public:
    void foo() override { std::cout << "B foo!\n"; }
};
class C : public A
{
public:
    void foo() override { std::cout << "C foo!\n"; }
};
int main()
{
    std::unique_ptr<A> obj(new B);
    obj->foo(); // does the B variation of foo()
    std::unique_ptr<A> obj2(new C);
    obj2->foo(); // does the C variation of foo()
}

 

That example is of course contrived for simplicity. It’s practical if you have one piece of code which should not need to know the details of what particular kind of object it has—it just needs to do operations defined by an interface. E.g., the main rendering loop for a computer game could have a list of objects to call draw() on, and it doesn’t care what specific type each object is—as long as it can call draw() on each object it’s happy.

So what does this have to do with mental development? Well, it could be a way for a mind to add new functions to existing concepts without blowing away the existing functions. For instance, at a high level, a new skill S4 could be learned and connected to concept C. But skills S1, S2, and S3 are still usable for concept C. In a crisp analogy to programming polymorphism, the particular variant of S that is used in a given context would be based on the sub-type of C that is perceived.
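Continuing the programming analogy in a hedged sketch (concept C, its sub-types, and the S skills below are placeholders, not a proposed architecture):

#include <iostream>
#include <memory>
#include <vector>

// Concept C, with each perceived sub-type carrying its own skill variant.
class ConceptC {
public:
    virtual ~ConceptC() = default;
    virtual void skill() const = 0;  // the S variant bound to this sub-type
};

class SubTypeWithS1 : public ConceptC {
public:
    void skill() const override { std::cout << "apply skill S1\n"; }
};

class SubTypeWithS4 : public ConceptC {
public:
    void skill() const override { std::cout << "apply skill S4\n"; }
};

int main() {
    // Learning S4 adds a new sub-type/variant; S1 remains intact and usable.
    std::vector<std::unique_ptr<ConceptC>> percepts;
    percepts.push_back(std::make_unique<SubTypeWithS1>());
    percepts.push_back(std::make_unique<SubTypeWithS4>());

    // The variant that runs depends on which sub-type of C was perceived.
    for (const auto& p : percepts) p->skill();
}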

On the other hand, polymorphism could be fuzzier; in other words, the mechanism of the switch could be anything. For example, a switch could be influenced by global emotional states instead of by the active sub-type of C.

One can imagine polymorphism both at reflexive levels and at conceptual, symbol-based levels. Reflexive (or “behavioral”) networks could be designed to effectively arbitrate based on the “type” of an external situation via the mapping of inputs (sensor data) to outputs (actuators). Concept-based mental polymorphism would presumably categorize a perception (or perhaps an input from some other mental module), which would then be mapped by subcategory to the appropriate version of an action. And maybe it’s not an action, but just the next mental node to visit.


References
[1] Taylor, A., Medina, F., Holzhaider, J., Hearne, L., Hunt, G., & Gray, R. (2010) An Investigation into the Cognition Behind Spontaneous String Pulling in New Caledonian Crows. PLoS ONE, 5(2). DOI: 10.1371/journal.pone.0009345

###

Samuel H. Kenyon is an amateur AI researcher, software engineer, interaction designer, actor, and writer. He blogs at http://synapticnulship.com/ where this post previously appeared. Republished under creative commons license.

 

Image credits:

  1. Nintendo
  2. Georgia State University Library via Atlanta Time Machine
  3. dezeen
  4. crooked brains
  5. diagram by the author
  6. MyDukkan
  7. University of Oxford
  8. David Schulz