I am an Assistant Professor of Computer Science at the University of Wyoming and direct the Evolving Artificial Intelligence Lab.
I study evolutionary computation, a technology that harnesses natural selection to evolve, instead of engineer, artificial intelligence, robots, and physical designs.
Here are videos on some areas of my research.
Evolutionary computing simulates natural selection using a 'survival of the fittest' rule. The difference is that, instead of plants and animals competing, different versions of software battle for their place in the next generation. A Darwinian process is set up so that the better programs leave many copies (identical or nearly identical versions) in the next generation and less desirable software is eliminated. What makes some programs 'better' than others is determined by the person setting up the experiment (e.g., the ability to get out of mazes, drive a car without crashing, control a legged robot, etc.). Over time the software gets better and better, since mutations (random changes in the programs) and 'sex' (combining a portion of the code of one program with a portion of another) will occasionally produce a program that is a slight improvement over its parents. This slightly better software thrives for a while until it too is replaced by the next slight improvement. Given enough generations, these small changes can add up to produce jaguars, whales, Olympians and poets.
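The loop described above (selection of the fittest, crossover, and mutation, repeated over generations) can be sketched in a few lines. This is a minimal, illustrative toy, not code from my lab; the fitness function, population size, and mutation rate are all arbitrary assumptions.

```python
import random

def evolve(fitness, genome_length=10, pop_size=20, generations=100):
    """Minimal sketch of a Darwinian process over lists of numbers."""
    pop = [[random.random() for _ in range(genome_length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: fitter genomes earn places as parents of the next generation.
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size:
            mom, dad = random.sample(parents, 2)
            cut = random.randrange(genome_length)
            child = mom[:cut] + dad[cut:]          # 'sex': combine two parents
            i = random.randrange(genome_length)
            child[i] += random.gauss(0, 0.1)       # mutation: small random change
            children.append(child)
        pop = children
    return max(pop, key=fitness)

# Toy fitness: evolve genomes whose numbers sum as high as possible.
best = evolve(lambda g: sum(g))
```

In a real experiment the fitness function would instead score behavior, such as how far a simulated robot walks before falling over.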
Natural selection, plus a lot of time, produced all of the "endless forms most beautiful and most wonderful" on this planet, to quote Darwin. In a computer world, because generations can happen in microseconds, we don’t need millions of Earth years to pass before interesting things begin to happen.
While evolutionary computation has frequently come up with better designs than any human engineer, it still pales in comparison to natural evolution. Simply put, our simulations do not evolve hawks and tigers. I am interested in figuring out how we can get evolution in computers to evolve things as complex and impressive as the creatures of the natural world, including ourselves. I think that some of the keys to this task are the use of generative encodings and the evolution of modularity, phenotypic plasticity, and evolvability itself. I have performed investigations in all of these areas and continue to be interested in them.
Current Research Focus
Recently, I have been studying generative encodings that evolve neural networks. Generative encodings involve the reuse of code in genomes for multiple parts of a phenotype. For example, the same genetic code is reused to make each segment of a millipede. Neural networks are digital versions of brains. A complex web of neurons can produce surprisingly intelligent behavior, whether it is in your head or in a computer simulation. Designing complex brains, however, is difficult and time-consuming for us humans. That's where evolution comes in. However, traditional evolutionary methods for evolving neural networks do not produce large-scale, complex, regular, modular, and hierarchical brains. I investigate the processes in nature that produce such structural organization in animal brains in order to evolve such properties in the brains of artificially intelligent robots.
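The millipede example above can be made concrete with a tiny sketch of the generative-encoding idea: one short piece of "genetic code" is applied once per segment, so a small genome produces a large, regular phenotype. The names and the leg-length rule below are purely illustrative assumptions, not an encoding from any of my papers.

```python
def develop(genome, num_segments):
    """Apply the same genome rule to every segment; regularity comes for free."""
    return [genome(i) for i in range(num_segments)]

# A tiny 'genome': each segment gets two legs whose length varies smoothly
# along the body. A direct encoding would have to specify all 50 segments
# one by one; the generative encoding specifies the rule once and reuses it.
millipede = develop(lambda i: {"legs": 2, "leg_length": 1.0 + 0.1 * i}, 50)
```

The payoff is that a mutation to the shared rule changes every segment at once, which is one route to the regularity seen in natural body plans and brains.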
Other Research Interests
I am also interested in, and have performed research in, the following areas:
• The evolution of evolvability, especially mutation rates
• The evolution of altruism
• The evolution of phenotypic plasticity
• Using digital evolution to teach evolution
You can read more about my work in my publications.
Source Code
Free, open source code is available for the following:
- my HyperNEAT experiments (maintained by Joost Huizinga)
- our work with evolved soft robots
- Sferes code used for our paper The evolutionary origins of modularity.
- The Developmental Symbolic Encoding (DSE) code from the paper A novel generative encoding for evolving modular, regular and scalable networks
Advisors and Collaborators
My postdoc advisor was Hod Lipson.
The principal advisors for my Ph.D. were Robert T. Pennock, Charles Ofria and Richard Lenski.