Appendix to M.A. Janssen, J.M. Anderies, and B.H. Walker (2003) Robust Strategies for Managing Rangelands with Multiple Stable Attractors, Journal of Environmental Economics and Management, in review.

This appendix provides a brief introduction to genetic algorithms, describes the version we used in the paper, and provides links to software and tutorials.

Genetic Algorithms (GAs) were invented by John Holland in the early 1960s and developed further by him and his students and colleagues. This work led to Holland's book "Adaptation in Natural and Artificial Systems", published in 1975. Holland had a double aim: to improve the understanding of natural adaptation processes, and to design artificial systems with properties similar to natural systems. Since the early 1980s, genetic algorithms have been used increasingly for optimization problems, especially those with complex search spaces.

The basic idea is as follows: the genetic pool of a given population potentially contains the solution, or a better solution, to a given adaptive problem. This solution is not "active" because the genetic combination on which it relies is split among several individuals. Only the association of different genomes can lead to the solution. No single individual has such a genome, but during reproduction and crossover new genetic combinations occur and, eventually, an individual can inherit a "good gene" from both parents.

GAs exploit the idea of survival of the fittest and an interbreeding population to create a novel and innovative search strategy. A population of strings, each representing a candidate solution to a specified problem, is maintained by the GA. The GA then iteratively creates new populations from the old by ranking the strings and interbreeding the fittest to create new strings, which are (hopefully) closer to the optimum solution of the problem at hand. In each generation, the GA creates a set of strings from the bits and pieces of the previous strings, occasionally adding random new data to keep the population from stagnating. The end result is a search strategy tailored for vast, complex, multimodal search spaces. Which strings are chosen and combined is a stochastic process, and this is a radically different approach from the problem-solving methods used in many more traditional algorithms, which tend to be more deterministic in nature, such as gradient methods.
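To make this generational cycle concrete, the following is a minimal sketch in Java (the language of our implementation). All settings here, including the population size, string length, operator rates, and the simple bit-counting fitness function, are illustrative placeholders only and not the settings used in the paper.

    import java.util.Random;

    public class GaLoopSketch {
        static final int POP_SIZE = 50, STRING_LEN = 30, GENERATIONS = 100;
        static final double CROSSOVER_RATE = 0.6, MUTATION_RATE = 0.01;
        static final Random RNG = new Random();

        // Placeholder fitness: counts the number of 1-bits in the string.
        static double fitness(boolean[] s) {
            int ones = 0;
            for (boolean b : s) if (b) ones++;
            return ones;
        }

        public static void main(String[] args) {
            boolean[][] pop = new boolean[POP_SIZE][STRING_LEN];
            for (boolean[] s : pop)                            // random initial population
                for (int i = 0; i < STRING_LEN; i++) s[i] = RNG.nextBoolean();

            for (int gen = 0; gen < GENERATIONS; gen++) {
                boolean[][] next = new boolean[POP_SIZE][];
                for (int k = 0; k < POP_SIZE; k += 2) {
                    boolean[] a = select(pop), b = select(pop); // fitness-biased selection
                    boolean[][] kids = crossover(a, b);         // recombine the two parents
                    next[k] = mutate(kids[0]);                  // occasional random change
                    next[k + 1] = mutate(kids[1]);
                }
                pop = next;                                     // replace the old generation
            }

            // Report the best string found in the final population.
            boolean[] best = pop[0];
            for (boolean[] s : pop) if (fitness(s) > fitness(best)) best = s;
            System.out.println("best fitness = " + fitness(best));
        }

        // Roulette-wheel selection: probability proportional to fitness.
        static boolean[] select(boolean[][] pop) {
            double total = 0;
            for (boolean[] s : pop) total += fitness(s);
            double r = RNG.nextDouble() * total, acc = 0;
            for (boolean[] s : pop) {
                acc += fitness(s);
                if (acc >= r) return s;
            }
            return pop[pop.length - 1];
        }

        // Single-point crossover, applied with probability CROSSOVER_RATE.
        static boolean[][] crossover(boolean[] a, boolean[] b) {
            boolean[] c1 = a.clone(), c2 = b.clone();
            if (RNG.nextDouble() < CROSSOVER_RATE) {
                int point = 1 + RNG.nextInt(STRING_LEN - 1);
                for (int i = point; i < STRING_LEN; i++) { c1[i] = b[i]; c2[i] = a[i]; }
            }
            return new boolean[][] { c1, c2 };
        }

        // Bit-flip mutation, applied independently to each position.
        static boolean[] mutate(boolean[] s) {
            boolean[] m = s.clone();
            for (int i = 0; i < STRING_LEN; i++)
                if (RNG.nextDouble() < MUTATION_RATE) m[i] = !m[i];
            return m;
        }
    }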

The idea of survival of the fittest is of great importance to genetic algorithms. GAs use what is termed a fitness function to select the fittest strings, which will be used to create new, and conceivably better, populations of strings. The fitness function takes a string and assigns a relative fitness value to it. The method by which it does this and the nature of the fitness value do not matter. The only thing the fitness function must do is rank the strings in some way by producing the fitness value. These values are then used to select the fittest strings. The concept of a fitness function corresponds to the objective function in optimization problems.
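As an illustration of that correspondence, a minimization objective can be converted into a fitness value by any transformation that preserves the ranking of strings. A common textbook choice is to subtract the objective from a large constant; the short sketch below does this in Java, with the constant C_MAX and the example function chosen purely for illustration.

    public class FitnessSketch {
        static final double C_MAX = 1.0e6;   // arbitrary constant keeping fitness positive

        // Objective to be minimized (an illustrative example function).
        static double objective(double x) {
            return x * x * x + x * x + 5.0;
        }

        // Fitness to be maximized: smaller objective values get larger fitness,
        // so the ranking of candidate solutions is preserved.
        static double fitness(double x) {
            double f = objective(x);
            return f < C_MAX ? C_MAX - f : 0.0;
        }

        public static void main(String[] args) {
            System.out.println(fitness(0.0) > fitness(2.0));  // true: x = 0 is the fitter point
        }
    }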

There are numerous tutorials on the internet that show in an interactive way how genetic algorithms work. A good starting point is the following genetic algorithm portal.

The Differences between Genetic Algorithms and Traditional Methods

A brief list of the essential differences between GAs and other forms of optimization, based on Goldberg (1989), is the following:

  1. Genetic algorithms work with a coded form of the parameter set, rather than with the parameter values themselves. So, for example, if we want to find the minimum of the function f(x) = x^3 + x^2 + 5, the GA would not deal directly with x or f(x) values, but with strings that encode these values. In this case, strings representing the x values in binary would be used (see the sketch after this list).
  2. Genetic algorithms use a set, or population, of points to conduct a search, not just a single point in the problem space. This gives GAs the power to search noisy spaces littered with local optima. Instead of relying on a single point to search through the space, the GA looks at many different areas of the problem space at once, and uses all of this information to guide itself.
  3. Genetic algorithms use only payoff information to guide themselves through the problem space. Many search techniques need a variety of information to guide themselves. Hill climbing methods require derivatives, for example. The only information a GA needs is some measure of fitness about a point in the space (sometimes known as an objective function value). Once the GA knows the current measure of "goodness" about a point, it can use this to continue searching for the optimum.
  4. GAs are probabilistic in nature, not deterministic. This is a direct result of the randomization techniques used by GAs.
  5. GAs are inherently parallel. Here lies one of the most powerful features of genetic algorithms. GAs, by their nature, are very parallel, dealing with a large number of points (strings) simultaneously. Holland has estimated that a GA processing n strings in each generation in reality processes on the order of n^3 useful substrings.
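As referenced in point 1, the sketch below illustrates points 1 and 3 together: a 10-bit binary string is decoded into a real value x, and the only information handed back to the GA is the payoff f(x) at that point. The 10-bit length and the decoding interval [-2, 2] are illustrative assumptions, not values taken from the paper.

    public class EncodingSketch {
        static final int BITS = 10;
        static final double X_MIN = -2.0, X_MAX = 2.0;

        // Map a bit string to a real value on [X_MIN, X_MAX].
        static double decode(boolean[] bits) {
            long value = 0;
            for (boolean b : bits) value = (value << 1) | (b ? 1 : 0);
            long max = (1L << bits.length) - 1;               // largest encodable integer
            return X_MIN + (X_MAX - X_MIN) * ((double) value / max);
        }

        // Payoff information only: the objective value at the decoded point
        // (the example function f(x) from point 1).
        static double objective(double x) {
            return x * x * x + x * x + 5.0;
        }

        public static void main(String[] args) {
            boolean[] s = new boolean[BITS];
            s[0] = true;                                      // the string 1000000000
            double x = decode(s);
            System.out.println("x = " + x + ", f(x) = " + objective(x));
        }
    }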

Our Implementation for the JEEM paper

We implemented a very basic genetic algorithm in Java 2 version 1.4.0, mainly following the Pascal source code of a simple genetic algorithm as described in Goldberg (1989), pages 343-349. We used a 10-digit binary string to code each control parameter, leading to a 30-digit string that represents a search space of 2^30 candidate solutions. The crossover rate was 0.2 and the mutation rate 0.01, in line with applications discussed in the literature, such as Goldberg (1989). A formal description of the components of the genetic algorithm is given in the following attachment.
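For illustration, the sketch below shows the decoding step implied by this coding scheme: three control parameters, each coded as a 10-bit binary substring of one 30-bit string. It is a simplified sketch rather than the code used for the paper, and the [0, 1] parameter ranges are placeholders; the actual ranges depend on the rangeland model in the main text.

    public class ParameterCoding {
        static final int BITS_PER_PARAM = 10;
        static final int N_PARAMS = 3;
        // Operator rates quoted above; they are used by the crossover and
        // mutation operators, which are not shown in this decoding sketch.
        static final double CROSSOVER_RATE = 0.2;
        static final double MUTATION_RATE = 0.01;

        // Split the 30-bit string into three 10-bit substrings and decode each
        // to a real value on the placeholder range [0, 1].
        static double[] decode(boolean[] string) {
            double[] params = new double[N_PARAMS];
            long max = (1L << BITS_PER_PARAM) - 1;
            for (int p = 0; p < N_PARAMS; p++) {
                long value = 0;
                for (int i = 0; i < BITS_PER_PARAM; i++) {
                    value = (value << 1) | (string[p * BITS_PER_PARAM + i] ? 1 : 0);
                }
                params[p] = (double) value / max;
            }
            return params;
        }

        public static void main(String[] args) {
            boolean[] s = new boolean[BITS_PER_PARAM * N_PARAMS]; // all-zero 30-bit string
            s[0] = true;                                          // set the first bit
            double[] p = decode(s);
            System.out.println(p[0] + " " + p[1] + " " + p[2]);   // roughly 0.5, 0.0, 0.0
        }
    }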

More information on Genetic Algorithms and Optimization

Basic references:

D. E. Goldberg. Genetic Algorithms in Search, Optimization and Machine Learning. Addison-Wesley, 1989.
J. H. Holland. Adaptation in Natural and Artificial Systems. The University of Michigan Press, Ann Arbor, Michigan, 1975.
Z. Michalewicz. Genetic Algorithms + Data Structures = Evolution Programs. Springer-Verlag, Berlin, third edition, 1996.
M. Mitchell. An Introduction to Genetic Algorithms. MIT Press, Cambridge, MA, 1996.

There is a huge amount of information available about genetic algorithms: software, applications, tutorials, demo programs, and more. A good portal for finding the relevant information is the genetic algorithm section of the AI Depot.