On neural architecture and phase difference

I am just now getting to grips with what sort of neural architecture becomes optimal for swimming behaviour. In a recent chat with Andrea Soltoggio, we discovered after some careful Skype analysis that the connectivity pattern originally conceptualised is suboptimal. Let us first recall the conceptualised architecture:


This actually means that the waves of excitation propagating down each half can be out of phase by at most PI / 2 (for maximum efficiency, the two waves should be out of phase by PI). Using the above connectivity regime resulted in architectures like the following (best architectures from three independently seeded evolutionary runs):


The yellow spheres are inhibitory neurons and the red spheres are excitatory. The grey lines are connections. Note that the ‘connection’ between the top inner yellow-red pair is actually a symmetric connection (as conceptualised in the top diagram on this page). As can be observed, the overall shape of the neural architecture is ‘depressed’ on one side (with the exception of the final two circuits). This ‘depression’ was found always to lie on the side on which the yellow inhibitory neuron of the head CPG was located. Consequently, weights feeding into the motors on the depressed side were lower than those feeding into the motors on the opposite side (since, in the model, weights are inversely proportional to the Euclidean distance between a pair of neurons). What has happened is that the architecture has arranged itself to boost the phase difference: by decreasing the weights on the depressed side, the phase difference between the waves propagating down the left and right sides of the animat could be increased.
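Since weights are inversely proportional to Euclidean distance, the geometric ‘depression’ translates directly into weaker motor weights on that side. A minimal sketch of that relationship (the scale constant and neuron positions are purely illustrative, not values from the model):

```python
import math

def connection_weight(pos_a, pos_b, scale=1.0):
    """Weight inversely proportional to Euclidean distance (hypothetical scale)."""
    return scale / math.dist(pos_a, pos_b)

# A neuron displaced toward the 'depressed' side sits further from the
# opposite motors, so the weights feeding into them come out weaker.
w_near = connection_weight((0.0, 0.0), (1.0, 0.0))  # -> 1.0
w_far = connection_weight((0.0, 0.0), (2.0, 0.0))   # -> 0.5
assert w_far < w_near
```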

Towards a larger phase difference

As mentioned above, the phase difference between the two waves should really be PI. This led Andrea Soltoggio and me to devise the following conceptual architecture, which should generate nicer dynamics (and indeed it does, having been prototyped in MATLAB):


(To make things clearer, excitatory units are coloured red and inhibitory units yellow; connections arising from a yellow unit are therefore negative.) In this conceptual example, the waves propagating down the descending connections will be phase shifted by PI. The top left yellow inhibitory unit represses the output of the red CPG unit, so when the red CPG unit is positive, the top left inhibitory unit will be correspondingly negative. Conversely, the right excitatory descending unit will be in phase with the red CPG unit (since it is also excitatory). The overall effect is that the left and right descending units will be in direct anti-phase. Having a potential phase difference of PI (which of course also depends on the time constants, weight values, and inhibitory connections that emerge) will generate more thrust, and, since the architecture no longer has to compensate for a lack of phase difference, it is more likely that neural architectures will emerge to be more bilaterally symmetric than in the prior example. Indeed this is the case (examples omitted for brevity).
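The anti-phase mechanism can be sketched with two leaky units driven by a single sinusoidal CPG signal, one through an inhibitory (negative) weight and one through an excitatory (positive) weight. This is only an illustration of the idea, not the actual MATLAB prototype; the time constant, frequency, and weights are arbitrary:

```python
import math

def simulate(steps=2000, dt=0.01, tau=0.1):
    """Two leaky units driven by one CPG: left via weight -1, right via +1.
    All parameters are illustrative placeholders."""
    left = right = 0.0
    out_l, out_r = [], []
    for i in range(steps):
        cpg = math.sin(2 * math.pi * 0.5 * i * dt)  # CPG oscillation at 0.5 Hz
        left += dt / tau * (-cpg - left)    # inhibitory drive
        right += dt / tau * (cpg - right)   # excitatory drive
        out_l.append(left)
        out_r.append(right)
    return out_l, out_r

l, r = simulate()
# The two descending outputs are in direct anti-phase: l[t] == -r[t]
```

Because the two units are identical linear leaky integrators receiving equal and opposite drive from zero initial conditions, their outputs mirror each other exactly, giving the PI phase shift described above.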

Allowing evolution to decide on the connectivity

As a further step, the connectivity regime is also evolved. First, observe that in the second conceptualised connectivity regime there exists a ‘head CPG’ unit; let us call this the central CPG, or c-CPG for short. We can then have a boolean gene deciding whether or not a given agent has a c-CPG unit. Secondly, we can specify genes deciding whether or not each left-right pair of descending neurons has connections between them. Finally, we can have boolean genes determining the existence of the descending connections themselves. One simulation has shown the second of the two conceptual architectures above emerging as the most optimal. This is highlighted by the following video, in which the neural architecture of the best agent is visualised (the number signifies the generation, sampled every 10 generations):

As also shown, there is a natural tendency for bilateral symmetry within the architecture to emerge.
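The boolean connectivity genes described above could be encoded along the following lines. This is a sketch under assumptions: the genome class, field names, and number of segments are hypothetical, not taken from the actual implementation.

```python
import random
from dataclasses import dataclass

N_SEGMENTS = 5  # hypothetical number of left-right descending pairs

@dataclass
class ConnectivityGenome:
    """Boolean genes controlling which parts of the connectivity regime exist."""
    has_c_cpg: bool           # does the agent have a central (head) CPG unit?
    lateral_links: list       # per segment: is the left-right pair connected?
    descending_links: list    # per segment: is the descending connection present?

def random_genome():
    """Draw each boolean gene uniformly at random (initial population)."""
    return ConnectivityGenome(
        has_c_cpg=random.random() < 0.5,
        lateral_links=[random.random() < 0.5 for _ in range(N_SEGMENTS)],
        descending_links=[random.random() < 0.5 for _ in range(N_SEGMENTS)],
    )

g = random_genome()
```

Evolution can then flip these genes like any other, letting selection decide whether the c-CPG and each connection class survive.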

Clearly the above network is hierarchical. Indeed the paper...

How Hierarchical Control Self-organizes in Artificial Adaptive Systems (article)
R. W. Paine and J. Tani
Adaptive Behavior

...has just been pointed out to me by Andrea Soltoggio. In this paper, the authors explain how optimal neural controllers for simple wheeled agents emerge to be partitioned such that higher-order functioning occurs in one network (analogous to the top circuit in the above video), which then drives the lower-level circuits (analogous to the motor-control descending units in the above video).

I have not actually read the paper yet, it was just explained to me. I’m gonna have a look at it now...