Recent experiments

On neural architecture and phase difference

I am now getting to grips with what sort of neural architecture becomes optimal for swimming behaviour. In a recent chat with Andrea Soltoggio, we discovered after some careful analysis over Skype that the connectivity pattern originally conceptualised is suboptimal. Let us first recall the conceptualised architecture:

snakeArch

This actually means that the waves of excitation propagating down each half can be out of phase by at most PI/2 (for maximum efficiency, the two waves should be out of phase by PI). Using the above connectivity regime resulted in architectures that emerged like the following (best architectures from three independently seeded evolutionary runs):

oldArchs

The yellow spheres are inhibitory neurons; the red spheres are excitatory. The gray lines are connections. Note that the ‘connection’ between the top inner yellow-red pair is actually a symmetric connection (as conceptualised in the top diagram on this page). As can be observed, the overall shape of the neural architecture is somewhat ‘depressed’ on one side (with the exception of the final two circuits). It was discovered that this ‘depression’ would always be on the side on which the yellow inhibitory neuron of the head CPG was located. Thus on the depressed side, the weights feeding into the motors were lower than those feeding into the motors on the opposite side (since in the model, weights are inversely proportional to the Euclidean distance between a pair of neurons). What has happened is that the architecture has arranged itself to try to boost the phase difference: by decreasing the weights on the depressed side, the phase difference between the waves propagating down the left and right sides of the animat could be increased.
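Concretely, the distance-to-weight mapping can be sketched as below. This is a minimal sketch of the model assumption stated above (weight inversely proportional to Euclidean distance); `connection_weight` and the `gain` parameter are illustrative names of my own, not identifiers from the actual model.

```python
import math

def connection_weight(pos_a, pos_b, gain=1.0):
    # Model assumption from the post: the weight between two neurons is
    # inversely proportional to the Euclidean distance between them.
    return gain / math.dist(pos_a, pos_b)

motor = (0.0, 0.0)
near = (0.0, 1.0)   # neuron on the undeformed side
far = (0.0, 2.0)    # neuron displaced further from its motor ('depressed' side)

# Displacing a neuron away from its motor lowers the incoming motor weight,
# which is how a purely spatial deformation can modulate the dynamics.
assert connection_weight(motor, far) < connection_weight(motor, near)
```

So by shifting neuron positions, evolution can indirectly tune motor weights, which is exactly the loophole the ‘depressed’ architectures exploited.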

Towards a larger phase difference

As mentioned above, the phase difference between the two waves should really be PI. This led Andrea Soltoggio and me to devise the following conceptual architecture, which should generate nicer dynamics (and indeed it does, having been prototyped in MATLAB):

fullPIArch

(To make things clearer, excitatory units have been coloured red and inhibitory units yellow; thus connections arising from a yellow unit are negative.) In this conceptual example, the waves propagating down the descending connections will be phase shifted by PI. The top-left yellow inhibitory unit has the effect of sign-inverting the output of the red CPG unit: when the red CPG unit is positive, the top-left inhibitory yellow unit will be correspondingly negative. Conversely, the right excitatory descending unit will instead be in phase with the red CPG unit (since it is also excitatory). The overall effect is that the left and right descending units will be in direct anti-phase. Having a potential phase difference of PI (of course, it also depends on the time constants, weight values, and inhibitory assignments that emerge) will generate more thrust, and, since the architecture no longer has to compensate for a lack of phase difference, the emerging neural architectures are more likely to be bilaterally symmetric than in the prior example. Indeed this is the case (examples omitted for brevity).
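The anti-phase effect can be checked with a toy simulation: two leaky integrators driven by the same sinusoidal CPG signal, one through a negative (inhibitory) weight and one through a positive (excitatory) weight. The 1 Hz frequency, time constant, and Euler step size here are arbitrary choices for illustration, not values from the actual model.

```python
import math

def simulate(w, tau=0.1, dt=0.01, steps=2000):
    """Euler-integrate a leaky integrator tau * x' = -x + w * drive,
    where drive is a 1 Hz sinusoidal CPG output."""
    x, trace = 0.0, []
    for k in range(steps):
        drive = math.sin(2 * math.pi * 1.0 * k * dt)
        x += dt * (-x + w * drive) / tau
        trace.append(x)
    return trace

left = simulate(w=-1.0)   # descending unit behind the inhibitory yellow unit
right = simulate(w=+1.0)  # descending unit behind the excitatory red unit

# Because the dynamics are linear, the two traces are exact mirror images:
# a phase difference of PI between the left and right descending waves.
assert all(abs(l + r) < 1e-9 for l, r in zip(left, right))
```

Since this toy system is linear, flipping the sign of the weight flips the whole trace, which is the PI phase shift in its simplest form; in the full model the tanh nonlinearities and differing time constants make the relationship only approximate.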

Allowing evolution to decide on the connectivity

As a further step, the connectivity regime is also evolved. First, observe that in the second conceptualised connectivity regime there exists a ‘head CPG’ unit; let us call this the central CPG, or c-CPG for short. We can then have a boolean gene to decide whether or not a given agent has a c-CPG unit. Secondly, we can specify genes to decide whether or not each left-right pair of descending neurons has connections between them. Finally, we can have boolean genes to determine the existence of the descending connections. One simulation has shown the second of the two conceptual architectures above emerging as the most optimal. This is highlighted by the following video, in which the neural architecture of the best agent is visualised (the number signifies the generation, sampled every 10 generations):
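A minimal sketch of such a boolean connectivity genome is given below. The class and field names, and the 50% initialisation probability, are my own illustrative choices; the post does not specify the actual encoding.

```python
import random
from dataclasses import dataclass

@dataclass
class ConnectivityGenome:
    has_c_cpg: bool            # boolean gene: central (head) CPG unit present?
    lateral_links: list        # one boolean per left-right descending pair
    descending_links: list     # one boolean per descending connection

def random_genome(n_pairs, rng=None):
    """Initialise each structural gene uniformly at random (assumed 50/50)."""
    rng = rng or random.Random()
    return ConnectivityGenome(
        has_c_cpg=rng.random() < 0.5,
        lateral_links=[rng.random() < 0.5 for _ in range(n_pairs)],
        descending_links=[rng.random() < 0.5 for _ in range(n_pairs)],
    )

genome = random_genome(n_pairs=6, rng=random.Random(42))
```

Evolution can then switch whole structural features on and off with single-bit flips, which is what allows the second conceptual architecture to be discovered rather than hand-imposed.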


As also shown, there is a natural tendency for bilateral symmetry within the architecture to emerge.

Clearly the above network is hierarchical. Indeed the paper...

How Hierarchical Control Self-organizes in Artificial Adaptive Systems (article)
Authors: R. W. Paine and J. Tani
Journal: Adaptive Behavior
Year: 2005

...has just been pointed out to me by Andrea Soltoggio. In this paper, the authors explain how optimal neural controllers for simple wheeled agents emerge to be partitioned such that higher-order functioning occurs in one network (analogous to the top circuit in the above video), which then drives the lower-level circuits (analogous to the motor-control descending units in the above video).

I have not actually read the paper yet; it was just explained to me. I'm going to have a look at it now...

Comments

motor primitives in simple biophysical agents

I am attempting to investigate the emergence of motor primitives in a biophysical swimming agent. My aim is to find the simplest circuits that can produce simple behaviours such as swimming forwards, turning, and altering swimming speed.

In the first stage, a very simple CPG network composed of analogue leaky integrator units with the following connectivity was established to be optimal:

snakeArch

The top two neurons form the central pattern generator. The signal then propagates down the length of the agent and a wave of excitation is formed. The outer lateral neurons are motors which drive the agent. The signs represent whether a connection is excitatory or inhibitory.
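For reference, a single analogue leaky-integrator unit can be sketched as below. Euler integration and a tanh output are common choices for such units, but the post does not state the exact transfer function, so treat both as assumptions.

```python
import math

def leaky_step(x, presyn_outputs, weights, tau, dt=0.01):
    """One Euler step of tau * dx/dt = -x + sum_i w_i * o_i,
    where o_i are the presynaptic outputs and w_i the connection weights."""
    drive = sum(w * o for w, o in zip(weights, presyn_outputs))
    return x + dt * (-x + drive) / tau

def output(x):
    # Assumed squashing nonlinearity on the unit's state.
    return math.tanh(x)

# With constant positive drive the state relaxes towards its fixed point:
# -x + 1 = 0, i.e. x approaches 1.
x = 0.0
for _ in range(5000):
    x = leaky_step(x, presyn_outputs=[1.0], weights=[1.0], tau=0.1)
```

Chained down the body with appropriate signs and delays, units of this kind produce the travelling wave of excitation described above.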

In an evolutionary experiment, the weights, inhibitory assignments, and time constants of the above network were evolved. The weights were derived from the Euclidean distances between neurons. This was so that neural properties could be partly governed by the spatial architecture (allowing one to then investigate the coupling between neural architecture and body-plan morphology).
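Under this encoding, one plausible mutation operator would jitter neuron positions and time constants and occasionally flip an excitatory/inhibitory sign, with the weights then re-derived from the new distances. This is illustrative only; the post does not give the actual GA operators or parameters.

```python
import random

def mutate(positions, signs, taus, sigma=0.05, flip_p=0.02, rng=None):
    """Gaussian jitter on 2-D neuron positions and time constants, with rare
    sign flips. Weights are not mutated directly: under the model's
    distance-based scheme they are re-derived from the new positions."""
    rng = rng or random.Random()
    new_positions = [(x + rng.gauss(0, sigma), y + rng.gauss(0, sigma))
                     for x, y in positions]
    new_signs = [-s if rng.random() < flip_p else s for s in signs]
    new_taus = [max(1e-3, t + rng.gauss(0, sigma)) for t in taus]
    return new_positions, new_signs, new_taus
```

Because positions stand in for weights, a purely spatial mutation is enough to explore the weight space, which is what couples the neural and morphological search.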

Having achieved some very nice swimming behaviour, a couple of perceptrons were then added as turning mechanisms (entirely decoupled from the above CPG network). The output of each perceptron adds a small amount of force to a given motor to achieve turning. Essentially, a Braitenberg vehicle was developed. The perceptron weight values were all fixed to 1. The overall neural topology then looks like this:
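The turning pathway can be sketched as follows. The crossed wiring and the force scale `k` are assumptions on my part; the post only states that the perceptron weights are fixed to 1 and that each perceptron's output adds a small force to a motor.

```python
def turning_forces(sensor_left, sensor_right, k=0.1):
    # Each perceptron sums its sensor input with weight 1 (as in the post)
    # and adds a small force k * activation to one motor. The crossed,
    # Braitenberg-style wiring here (left sensor drives right motor and
    # vice versa) is an assumption about which motor each perceptron feeds.
    left_motor_force = k * (1.0 * sensor_right)
    right_motor_force = k * (1.0 * sensor_left)
    return left_motor_force, right_motor_force

# Target stimulus on the left: the right motor receives the extra force,
# producing the differential thrust that turns the agent.
fl, fr = turning_forces(sensor_left=1.0, sensor_right=0.0)
```

Since this pathway is entirely decoupled from the CPG, it simply biases the thrust on one side rather than modulating the swimming wave itself, which is what the next step aims to change.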

snakeArchB

The resulting neural dynamics allow the agent to follow a target moving through the environment, as shown by the video below:



The next step will attempt to add the turning mechanism directly to the CPG pathway.

Comments