In biology, we often find large collections of relatively simple interacting elements that combine into complicated structures with complex behavior. Neural networks and gene interaction networks are two such examples.
The communities studying these systems share a common problem: the available measurements cannot resolve all of the individual components or interactions (nor do we expect them to be able to do so soon, given the large numbers of components typically involved). This leaves us unable to fill in the middle-scale details. We know how a single neuron fires, and how a single transcription factor can alter the rate of protein production. We also have a feeling for the broad behavior of information transfer in the brain, and for the switching abilities of entire gene networks. What is lacking is an understanding of exactly how the components fit together into the whole.
The problem, then, is to take incomplete information about a network's behavior and infer something about how it works. The goal is, at the least, to match the output of the network with a model, and, at the most, to gain a full understanding of the individual players and the interactions among them that produce the behavior of the actual system.
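As a toy illustration of the "match the output with a model" end of this spectrum, here is a minimal sketch (all names and numbers are hypothetical, not from the exam response): we record a noisy output trace from a "network" whose internal parameters are hidden, posit a simple two-parameter effective model, and do a brute-force parameter search for the values that best reproduce the trace.

```python
import numpy as np

# Hypothetical setup: the "true" network produces an output trace we can
# record, but its internal parameters are hidden from us.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 50)

def model(t, gain, decay):
    # A two-parameter effective model of the network's output.
    return gain * np.exp(-decay * t)

# The recorded trace: the true system plus a little measurement noise.
true_output = model(t, gain=2.0, decay=3.0) + 0.01 * rng.normal(size=t.size)

# Brute-force parameter search: pick the (gain, decay) pair whose model
# output best matches the recorded trace in the least-squares sense.
gains = np.linspace(0.5, 4.0, 40)
decays = np.linspace(0.5, 6.0, 40)
best = min(
    ((g, d) for g in gains for d in decays),
    key=lambda p: np.sum((model(t, *p) - true_output) ** 2),
)
print(best)
```

Even this crude search recovers parameters near the true values, but note what it does not give us: the fitted model says nothing about the hidden components that actually generated the trace, which is the harder end of the inference problem.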
For my graduate A exam, Veit Elser suggested that I look into the methods the neuroscience community uses to deal with the fact that studies of biological neural networks are limited by how much data they can record simultaneously. The question got me thinking about parameter search algorithms, prior beliefs in Bayesian statistics, and effective network models. You can read my response here [pdf].