CTRNN Parameter Space

October 2006

During the summer I worked with Randy Beer at Indiana University on a project involving Continuous-Time Recurrent Neural Networks (CTRNNs). Simulated on a computer, these networks consist of any number of nodes, each of which has an internal state that changes in time in a simple but nonlinear way. When you connect them to each other, the behavior of each node also depends on the states of all the other nodes, and you can thus end up with very complicated behavior.
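
To make this concrete, here is a minimal sketch of the standard CTRNN equations integrated with a simple Euler step. The network size and parameter values below are made up purely for illustration; they aren't taken from any particular experiment.

```python
import numpy as np

def sigmoid(x):
    """Standard logistic activation function used in CTRNNs."""
    return 1.0 / (1.0 + np.exp(-x))

def ctrnn_step(y, tau, theta, w, external_input, dt=0.01):
    """One Euler step of the standard CTRNN equation:
        tau_i * dy_i/dt = -y_i + sum_j w_ij * sigmoid(y_j + theta_j) + I_i
    where y is the vector of node states, tau the time constants,
    theta the biases, and w[i, j] the weight from node j to node i."""
    outputs = sigmoid(y + theta)
    dydt = (-y + w @ outputs + external_input) / tau
    return y + dt * dydt

# A tiny two-node network with made-up parameters, run for 10 time units.
rng = np.random.default_rng(0)
n = 2
y = rng.normal(size=n)                  # initial internal states
tau = np.ones(n)                        # time constants
theta = rng.normal(size=n)              # biases
w = rng.normal(scale=2.0, size=(n, n))  # weights (including self-connections)
I = np.zeros(n)                         # external inputs

for _ in range(1000):
    y = ctrnn_step(y, tau, theta, w, I)
```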

Randy's work has explored how, when combined with an evolutionary algorithm, CTRNNs can perform simple cognitive tasks without any other instructions. For example, a simulated "bug" can independently discover that the best way to walk is by swinging its legs in a certain coordinated way. The idea is this: 1) make a bunch of random CTRNNs to control your bug, 2) see which ones make your bug crawl the furthest, 3) use these to make new CTRNNs, 4) repeat. After thousands of generations, this process creates CTRNNs that are amazingly good at making bugs crawl. Even though the initial, randomly picked CTRNNs produce hopelessly random swings of legs and feet, the final CTRNNs beautifully coordinate a smooth walking behavior. Moreover, networks have been evolved to perform all sorts of tasks, from chemotaxis to simple learning.
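
The loop itself is conceptually simple. Here is a generic sketch of steps 1-4, with a toy stand-in for the fitness function; the algorithms actually used for this kind of work are more sophisticated (real genetic algorithms with carefully chosen selection and mutation schemes), so treat this only as an illustration of the idea.

```python
import numpy as np

rng = np.random.default_rng(0)

POP_SIZE = 100          # number of CTRNNs per generation
N_PARAMS = 20           # total number of evolvable CTRNN parameters
N_GENERATIONS = 200
MUTATION_SCALE = 0.1

def distance_crawled(params):
    """Toy stand-in for the real fitness function.  In the actual setup
    this would decode `params` into a CTRNN, attach it to the simulated
    bug's legs, run the simulation, and return how far the bug crawled."""
    return -np.sum((params - 1.0) ** 2)

# 1) make a bunch of random CTRNNs (here, just random parameter vectors)
population = rng.uniform(-5, 5, size=(POP_SIZE, N_PARAMS))

for generation in range(N_GENERATIONS):
    # 2) see which ones make the bug crawl the furthest
    fitness = np.array([distance_crawled(p) for p in population])

    # 3) use the best half to make new CTRNNs (copy and mutate)
    parents = population[np.argsort(fitness)[-POP_SIZE // 2:]]
    children = np.repeat(parents, 2, axis=0)
    children += rng.normal(scale=MUTATION_SCALE, size=children.shape)

    # 4) repeat
    population = children
```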

So how does this work? This is a huge question, with at least two big parts. First: How do the evolved networks solve the problems they are given? In the language of dynamical systems, how are the best networks set up to produce the desired trajectories in phase space? (And even, which trajectories produce the desired behavior?) These are the questions that Randy and his group have focused on in the past.

Second, we'd also like to know how the evolutionary algorithm accomplishes its goal: How can it find the needle of the solution in the daunting haystack of all possible CTRNNs? In reality, even with all the successes of evolving CTRNNs that perform perfectly, many evolutionary "searches" turn up empty-handed, for reasons that are not well understood. There are lots of ad hoc procedures people use when they hit a dead end, but there is no universal way to increase the likelihood of an evolutionary search finding its goal. My work over the summer set out to change that, or at least to understand the question better.

One way to look at the problem an evolutionary algorithm faces is to imagine the space of all possible CTRNNs with some fixed number of nodes. This space has one axis for every parameter that can be altered to give a different network: the parameters that control each node's internal behavior (its time constant and bias), along with the connection strengths between different nodes. Every point in this space specifies one CTRNN; the evolutionary algorithm's job, then, is to find a point in this space that represents a good solution to a given task. A simple algorithm might, for example, start at one point in parameter space, sample nearby points to see how well they perform at the task, and move toward the best-performing networks.
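
In code, a "point in parameter space" is nothing more than a vector holding every evolvable number, and the simple algorithm just described is a hill climber over such vectors. A rough sketch, with a toy stand-in for the task score:

```python
import numpy as np

rng = np.random.default_rng(0)

def flatten_ctrnn(tau, theta, w):
    """A CTRNN as a single point in parameter space: one coordinate per
    time constant, bias, and connection weight."""
    return np.concatenate([tau, theta, w.ravel()])

def local_search(fitness, start, step=0.1, n_neighbors=20, n_iters=500):
    """The simple search described above: repeatedly sample nearby points
    in parameter space and move to the best-performing one."""
    current, current_score = start, fitness(start)
    for _ in range(n_iters):
        neighbors = current + rng.normal(scale=step,
                                         size=(n_neighbors, current.size))
        scores = np.array([fitness(p) for p in neighbors])
        if scores.max() > current_score:
            current, current_score = neighbors[scores.argmax()], scores.max()
    return current, current_score

# Toy stand-in for a real task score (e.g. distance crawled by the bug).
toy_fitness = lambda p: -np.sum(p ** 2)

start = flatten_ctrnn(np.ones(2), np.zeros(2), rng.normal(size=(2, 2)))
best, best_score = local_search(toy_fitness, start)
```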

If you examine the mathematics behind CTRNNs, it turns out that portions of this parameter space represent networks that inherently produce less interesting behavior. In some areas, the state of every node will always saturate to a stable and unchanging value--not so good for making bugs walk. In other areas, only some of the nodes--or none at all--are intrinsically saturated. My goal was to map out all of these areas, and eventually to use this information to guide evolutionary algorithms toward the most interesting regions of parameter space.
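
I won't try to reproduce the actual mathematical conditions here, but the brute-force version of the question is easy to state in code: sample networks at random from the parameter space, test whether every node's output gets pinned near 0 or 1, and count. The sketch below uses a crude, simulation-based test for saturation as a stand-in for the real analytical conditions, and the sampling ranges are only illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def all_nodes_saturate(tau, theta, w, n_trials=5, t_max=50.0, dt=0.05, eps=0.05):
    """Crude, simulation-based stand-in for the analytical test: starting
    from several random initial states (with no external input), does every
    node's output always settle within `eps` of 0 or 1?"""
    n = len(tau)
    for _ in range(n_trials):
        y = rng.uniform(-10.0, 10.0, size=n)
        for _ in range(int(t_max / dt)):
            y += dt * (-y + w @ sigmoid(y + theta)) / tau
        outputs = sigmoid(y + theta)
        if not np.all((outputs < eps) | (outputs > 1.0 - eps)):
            return False
    return True

# Monte Carlo estimate of the fraction of a (made-up) parameter space in
# which every node saturates -- the "less interesting" networks.
n_nodes, n_samples, n_saturated = 3, 100, 0
for _ in range(n_samples):
    tau = rng.uniform(0.5, 10.0, size=n_nodes)
    theta = rng.uniform(-16.0, 16.0, size=n_nodes)
    w = rng.uniform(-16.0, 16.0, size=(n_nodes, n_nodes))
    if all_nodes_saturate(tau, theta, w):
        n_saturated += 1

print("estimated fraction of fully saturated networks:", n_saturated / n_samples)
```

Sampling like this gets expensive as networks grow, which is part of what made analytical estimates of these regions worth working out.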

The parameter space question turned out to be a complicated mathematical problem that was interesting in its own right, involving combinatorics, probability distributions, and high-dimensional spaces. In the end, I was able to figure out some useful approximations that make it easier to estimate the percentage of a given parameter space that contains "interesting" CTRNNs. (The start of a more technical account can be found in this rough draft.) At the tail end of the summer, we discussed possible ways this information could be used to construct smarter evolutionary algorithms, but with busy schedules looming, we didn't get much further. It remains to be seen whether these seeds will bear any fruit.