Abstracts and Speaker Biographies
for Tutorials for the Genetic Programming 1998 Conference
July 22 - 25 (Wednesday - Saturday), 1998
University of Wisconsin - Madison, Wisconsin USA
23 Tutorials:
Wednesday July 22 - 9:15 AM - 11:30 AM
Introduction to Genetic Algorithms - David E. Goldberg, University of Illinois
Genetic Programming with Linear Genomes - Wolfgang Banzhaf, University of
Dortmund
Design of Electrical Circuits using Genetic Programming - Forrest H
Bennett III, Visiting Scholar, Stanford University
Introduction to Genetic Programming - John Koza, Stanford University
Wednesday July 22 - 1:00 PM - 3:15 PM
Large-Scale Discrete Optimization via Parallel Genetic Algorithms - Robert
R. Meyer, University of Wisconsin
Computing with DNA: An Introduction - Russell Deaton, University of Memphis
and Stephen Karl, University of South Florida
Cellular Encoding - David Andre, University of California - Berkeley
Intelligent Agents - Vasant Honavar, Iowa State University
Wednesday July 22 - 3:45 PM - 6 PM
Inductive Logic Programming - Luc De Raedt, Katholieke Universiteit Leuven
and Nada Lavrac, Jozef Stefan Institute
Introduction to Evolutionary Robotics - Takashi Gomi, Applied AI Systems,
Inc.
Constrained Genetic Programming - Cezary Z. Janikow, University of Missouri
- St. Louis
Bio-inspired Hardware Systems: A Phylogenetic, Ontogenetic, Epigenetic
Approach - Moshe Sipper, Swiss Federal Institute of Technology, Lausanne
Thursday July 23 - 3:25 PM - 5:40 PM
Reinforcement Learning - Richard S. Sutton, University of Massachusetts
Cellular Programming: The Evolution Of Parallel Cellular Machines - Moshe
Sipper, Swiss Federal Institute of Technology, Lausanne
Evolutionary Programming - Kumar Chellapilla, University of California -
San Diego
Inherent Characteristics and Biases of Evolution in Genetic Programming
- Justinian Rosca, Siemens Corporate Research
Friday July 24 - 3:25 PM - 5:40 PM
Cellular Neural Networks, the new CNN Universal Machine as a spatiotemporal
array computer, and CNN applications - Tamas Roska, Hungarian Academy of
Sciences and the University of California at Berkeley
Probabilistic Incremental Program Evolution (PIPE) - Rafal Salustowicz,
IDSIA, Lugano, Switzerland
Evolution Strategies - Michael Herdy, Technical University of Berlin
Saturday July 25 - 9:15 AM - 11:30 AM
An Introduction to Evolutionary Computation - David B. Fogel, Natural Selection,
Inc.
Advanced Genetic Programming - John Koza, Stanford University
Strongly Typed Genetic Programming - Thomas Haynes, Wichita State University
Ant Colony Optimization - Marco Dorigo, Université Libre de Bruxelles
------------------------------------------------------------------
Cellular Encoding - David Andre, University of California - Berkeley
Cellular encoding is the practice of using genetic programming to evolve
programs that construct non-tree representations to solve problems. Although
tree-based programs are powerful, it is often not obvious how to apply them
to problem domains where a distinctly non-tree solution is required. Inspired
by the principles of developmental biology, cellular encoding solves this
dilemma by using tree-based genetic programs to construct complex representations
from embryonic structures. From a simple embryo, the developmental program
specifies the construction steps to build or grow the complex system. In
the last five years, cellular encoding has been used to evolve a variety
of complex representations, including electronic circuits, neural networks,
finite state machines, mechanical parts, robot architectures, and parallelized
programs. In some problem domains, cellular encoding has allowed genetic
programming to produce solutions that are competitive with human performance.
The tutorial will introduce the basic concepts and mechanisms of cellular
encoding and show its applicability to a variety of domains. The theory
of cellular encoding as well as the issues surrounding when to use it rather
than tree based methods will also be covered. The tutorial will be centered
around the domains of evolving electronic circuits and neural networks,
but will cover the other applications as well. In addition, advanced issues
such as Lamarckian learning, the Baldwin effect, and dealing with invalid
individuals will be discussed.
Tutorial participants will be expected to have a basic understanding of
genetic programming, but need otherwise have no special preparation.
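A toy sketch of the embryo-growing idea may help: a tree of construction instructions is executed against a one-cell embryo, with each instruction dividing a cell in series or in parallel. The operation names and graph encoding below are simplified stand-ins for illustration, not the actual cellular encoding instruction set.

```python
class Node:
    """A node of the developmental program tree."""
    def __init__(self, op, left=None, right=None):
        self.op, self.left, self.right = op, left, right

def develop(program):
    """Grow a directed graph from an embryo: IN -> cell 0 -> OUT."""
    edges = {("IN", 0), (0, "OUT")}
    next_id = [1]
    def grow(cell, instr):
        if instr.op == "END":             # this cell stops developing
            return
        new = next_id[0]; next_id[0] += 1
        if instr.op == "SEQ":             # series division: cell -> new
            for (a, b) in list(edges):
                if a == cell:             # new cell takes over the outputs
                    edges.remove((a, b)); edges.add((new, b))
            edges.add((cell, new))
        elif instr.op == "PAR":           # parallel division: copy all links
            for (a, b) in list(edges):
                if a == cell: edges.add((new, b))
                if b == cell: edges.add((a, new))
        grow(cell, instr.left)            # both daughter cells keep developing
        grow(new, instr.right)
    grow(0, program)
    return edges

# series split, then the second cell splits in parallel
prog = Node("SEQ", Node("END"), Node("PAR", Node("END"), Node("END")))
print(sorted(develop(prog), key=str))
```

The evolved individual is the program tree, not the graph; genetic programming operates on such trees while fitness is measured on the grown structure.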
David Andre is currently researching genetic programming and machine
learning at the University of California at Berkeley. He has published more
than 45 papers and journal articles on genetic programming over the past
5 years, and has given two previous tutorials at the genetic programming
conference over various aspects of cellular encoding. He is an inventor
of the genetic programming system to evolve electrical circuits, and has
written a public domain code base for genetic programming. His research
topics include learning behavioral hierarchies, evolving electrical circuits,
applying genetic programming to problems in molecular biology, methods for
speeding up evolutionary computation, and investigating theoretical issues
in genetic programming. In addition, he is currently working on a book on
how to apply genetic programming to difficult, real world problems.
Genetic Programming with Linear Genomes - Wolfgang Banzhaf, University
of Dortmund
Traditionally, Genetic Programming is done using highly non-linear data
structures such as expression trees. In our tutorial we shall consider Genetic
Programming using means other than trees. Although this is mainly a question
of representation, we know from other evolutionary computation methods that
the choice of representation is an important determinant of the success or
failure of an algorithm.
The tutorial will be self-contained in that it will start with evolutionary
computation in general, then go on to the traditional method of Genetic
Programming using trees, until we come to linear representations of programs.
We shall provide an overview of different methods used so far before discussing
in more detail the induction of native machine code programs.
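As a concrete illustration of a linear representation, here is a minimal sketch of an interpreted register-machine genome. The instruction format is invented for illustration; the tutorial's native machine code approach executes real processor instructions rather than an interpreter.

```python
import random

# A linear genome is a flat list of register-machine instructions
# (op, dest, src1, src2) rather than an expression tree.
OPS = {
    "add": lambda a, b: a + b,
    "sub": lambda a, b: a - b,
    "mul": lambda a, b: a * b,
}

def execute(genome, x, n_regs=4):
    regs = [x] + [0.0] * (n_regs - 1)    # r0 holds the program input
    for op, dst, s1, s2 in genome:
        regs[dst] = OPS[op](regs[s1], regs[s2])
    return regs[0]                       # r0 also holds the result

def random_genome(length, n_regs=4):
    return [(random.choice(list(OPS)), random.randrange(n_regs),
             random.randrange(n_regs), random.randrange(n_regs))
            for _ in range(length)]

genome = [("mul", 1, 0, 0),   # r1 = x * x
          ("add", 0, 1, 0)]   # r0 = x^2 + x
print(execute(genome, 3.0))   # → 12.0
```

Because the genome is flat, crossover is a simple exchange of instruction segments between two parents, and mutation replaces a single instruction; no tree manipulation is needed.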
Wolfgang Banzhaf holds a PhD in physics and is presently an associate
professor in Applied Computer Science at the University of Dortmund, Germany.
His research interests are Genetic Programming, Evolutionary Computation
in general, Artificial Life, Self-organization, and Artificial Neural Networks.
Earlier he was a research associate in Synergetics at the University of
Stuttgart, Germany and later became a visiting researcher at Mitsubishi
Electric's Central Research Laboratory in Japan. Before coming to Dortmund,
he worked as a Senior Researcher at Mitsubishi Electric's Research Laboratory
(MERL) in Cambridge, Mass., USA.
Design of Electrical Circuits using Genetic Programming - Forrest
H Bennett III, Visiting Scholar, Stanford University
The design (synthesis) of analog electrical circuits starts with a high-level
statement of the circuit's desired behavior and requires creating a circuit
that satisfies the specified design goals. Analog circuit synthesis entails
the creation of both the topology and the sizing (numerical values) of all
of the circuit's components. The difficulty of the problem of analog circuit
synthesis is well known and there is no previously known general automated
technique for synthesizing an analog circuit from a high-level statement
of the circuit's desired behavior. This tutorial presents a single uniform
approach using genetic programming for the automatic synthesis of both the
topology and sizing for numerous different prototypical analog circuits,
including a lowpass filter, a crossover (woofer and tweeter) filter, a source
identification circuit, an amplifier, a computational circuit, a time-optimal
controller circuit, a temperature-sensing circuit, and a voltage reference
circuit. The problem-specific information required for each of the eight
problems is minimal and consists primarily of the number of inputs and outputs
of the desired circuit, the types of available components, and a fitness
measure that restates the high-level statement of the circuit's desired
behavior as a measurable mathematical quantity. Use of genetic programming
for field-programmable digital gate arrays will also be covered.
Forrest H Bennett III is visiting scholar in the Computer Science
Department at Stanford University. He received his B. S. degree in Applied
Mathematics at the University of Colorado in 1985. He ran a software consulting
business for 5 years, during which he designed the industry's leading
industrial drive shaft design system. He then became the Chief Engineer
at Manco Systems, where he designed and implemented the company's primary
software product, which is used for data collection in manufacturing environments.
He has done research on using functional languages for programming parallel
computers. His current research involves using genetic programming to solve
problems; he has published more than 24 papers in areas such as automatic
programming of multi-agent systems, analog circuit design, and programming
of field-programmable gate arrays.
Evolutionary Programming - Kumar Chellapilla, University of California
- San Diego
Evolutionary programming (EP) was initially proposed as a learning process
to generate artificial intelligence. Since then, EP has evolved into a generally
applicable problem-solving technique with a broad range of applications
in medicine, industry, and defense. The tutorial will start with an introduction
to EP and the underlying theory. Several EP applications in the areas of
continuous and discrete parameter optimization will be presented. The role
of adaptive and self-adaptive techniques for optimizing evolutionary search
in the context of static and dynamic objective functions will be discussed.
The design of representation-specific variation operators will be addressed
in detail in light of the optimization of variable-length structures such
as finite state machines, neural networks, fuzzy systems, and computer programs
represented as parse trees.
The tutorial will acquaint attendees with the basics of EP. Attendees
will become aware of a wide range of problems that have been successfully
addressed using EP and will be able to judge the utility of EP in
their own work.
The tutorial will cover the following:
- The common requirement for optimizing (minimizing/maximizing) an objective
or cost function
- Simulated Evolution - Evolutionary Algorithms
- History of EP, including evolving Finite State Machines as predictive
models, parameter optimization, self-adaptation, etc.
- Introduction to EP, including the basic algorithm, the theory relating
to the property of asymptotic convergence to the global optimum and the
results on the rates of optimization on simple functions
- Philosophical issues concerning EP and other EAs - similarities and
differences
- Applying EP to optimize cost functions, including continuous parameter
optimization (a complete walkthrough with the Bohachevsky and Rosenbrock
functions), discrete parameter optimization (a walkthrough with the
traveling salesman problem), and variable-length structure optimization
(a walkthrough using EP to evolve a computer program, represented as a
parse tree, to back up a truck-and-trailer system)
- Self-adaptation techniques (Gaussian, log-normal, covariance updates)
- Issues in optimizing noisy and dynamic objective functions
- Applications, including combinatorial optimization, function optimization
(constrained and unconstrained), connectionist systems learning using EP,
time series modeling and prediction, and medical diagnosis (Breast Cancer
Detection). The applications addressed in detail will be based on the attendees'
areas of interest. However, all of them will be included in the handouts
with an exhaustive set of references, so that those interested will have
a clear starting point in using EP for their application.
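The continuous-parameter walkthrough above can be sketched in a few lines. This is a hedged illustration rather than the tutorial's code: it minimizes the Rosenbrock function mentioned above, uses log-normal self-adaptation of step sizes, and for simplicity substitutes deterministic (mu + mu) truncation selection for EP's customary stochastic tournament.

```python
import math, random

def rosenbrock(x):
    return sum(100 * (x[i + 1] - x[i] ** 2) ** 2 + (1 - x[i]) ** 2
               for i in range(len(x) - 1))

def ep_minimize(f, dim=2, mu=30, generations=300, seed=0):
    rng = random.Random(seed)
    tau = 1.0 / math.sqrt(2.0 * math.sqrt(dim))
    # an individual is a pair: (object variables, self-adapted step sizes)
    pop = [([rng.uniform(-2, 2) for _ in range(dim)], [0.5] * dim)
           for _ in range(mu)]
    for _ in range(generations):
        offspring = []
        for x, s in pop:
            # log-normal self-adaptation of step sizes, then Gaussian mutation
            s2 = [si * math.exp(tau * rng.gauss(0, 1)) for si in s]
            x2 = [xi + si * rng.gauss(0, 1) for xi, si in zip(x, s2)]
            offspring.append((x2, s2))
        # (mu + mu) truncation: keep the best mu of parents plus offspring
        pop = sorted(pop + offspring, key=lambda ind: f(ind[0]))[:mu]
    return pop[0][0]

best = ep_minimize(rosenbrock)
print(best, rosenbrock(best))   # best approaches the optimum at (1, 1)
```

The same loop applies to discrete problems once the Gaussian mutation is replaced by a representation-specific variation operator, which is the design question the tutorial addresses.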
Kumar Chellapilla earned the M.S. in electrical engineering from V.U
in 1997 and is now a Ph.D. student in the Dept. of ECE at UCSD. His research
interests include adaptive and self-adaptive techniques in evolutionary
programming and their application to the design and optimization of intelligent
systems. He has over 15 journal and conference publications.
Computing with DNA: An Introduction - Russell Deaton, University of
Memphis and Stephen Karl, University of South Florida
The use of real biological phenomena to perform computation came of age
with Adleman's experimental proof of concept (Science, 1994) that biotechnology
is now mature enough to solve computational problems that are provably hard
for VLSI, using the natural processes of DNA recombination. The subsequent
enthusiasm for DNA computing is based on its potential to solve optimization
problems through the massive parallelism of naturally occurring DNA template-matching
reactions. This tutorial describes and analyzes Adleman's result and the
methods, challenges, and promise of DNA computing from the points of view
of computation, biology, and thermodynamics. Emphasis will be placed on
the reliability of the approach, in particular, fault-tolerance and the
encoding problem, as well as its relationship to evolutionary computation.
Several open problems of interest to both computer science and biology will
be discussed.
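An in-silico analogue of Adleman's protocol conveys the idea: random generation plays the role of massively parallel DNA ligation, and the filters mirror the lab separation steps. The graph below is an invented 7-vertex example (not Adleman's instance); its unique Hamiltonian path is 0-1-2-3-4-5-6.

```python
import random

# Edges of a small directed graph; vertex 6 has no outgoing edges.
EDGES = {(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6),
         (0, 3), (2, 0), (3, 1), (4, 2), (5, 1)}
N, START, END = 7, 0, 6

def random_walk(rng, max_len=N):
    path = [START]
    while len(path) < max_len:
        nxt = [b for (a, b) in EDGES if a == path[-1]]
        if not nxt:                       # reached the sink vertex
            break
        path.append(rng.choice(nxt))
    return tuple(path)

rng = random.Random(1)
library = {random_walk(rng) for _ in range(50000)}    # "ligation" step
keep = [p for p in library
        if p[0] == START and p[-1] == END             # correct endpoints
        and len(p) == N                               # correct length
        and len(set(p)) == N]                         # every vertex visited
print(keep)   # → [(0, 1, 2, 3, 4, 5, 6)]
```

In the laboratory the "library" is generated in one massively parallel reaction, and each filter is a physical separation (PCR amplification, gel electrophoresis, affinity purification); reliability of those separations is the encoding problem the tutorial emphasizes.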
Dr. Stephen Karl of University of South Florida works on evolution,
population genetics, and conservation biology. His goal is to evaluate global
processes operating within and among natural populations through the use
of _molecular_ techniques (mainly the analysis of mitochondrial and nuclear
DNA genotypes), in order to obtain information on the evolution and ecology
of animal populations that would otherwise be difficult or impossible to
obtain by classical methodologies. Current projects deal with a wide variety
of animals, and particularly marine invertebrates. Applications of DNA computing
are particularly relevant to many problems in evolutionary biology. For
example, the evaluation of relationships among organisms is a central goal
of systematic biology. In this endeavor, it is useful to be able to determine
the shortest of all possible bifurcating relationships between organisms.
The total number of possible trees to evaluate can be quite large for even
a small number of taxa (20 taxa produce 2 x 10^20 trees), currently beyond
computational means.
Russell Deaton is in the Electrical Engineering Department at the
University of Memphis. He received his Ph.D. in 1992 from Duke University
in the area of solid state device physics and fabrication. Since then, his
research has included DNA computing, application of artificial neural networks
to the analysis of magnetic resonance images, and performance analysis of
software for network applications. In DNA computing, his research has focused
on questions of reliability and efficiency, and on the relation between
DNA chemistry and computational capability.
------------------------------------------------------------------
Ant Colony Optimization - Marco Dorigo, Université Libre de Bruxelles
Ant Colony Optimization (ACO for short) is a novel approach to distributed
combinatorial optimization inspired by the foraging behavior of ant colonies.
In this tutorial I will present the basic ideas on which ACO is based, I
will give a detailed explanation of the way it works using the traveling
salesman problem as running example, and will then briefly discuss some
applications to problems like sequential ordering, quadratic assignment,
graph colouring, vehicles routing, and routing in telecommunications networks.
The tutorial will cover the following subjects:
- Why real ant behavior is interesting to computer scientists. Here
I will give a few facts about ant behavior which I used to devise ant colony
algorithms.
- Examples of problems which can be solved by ant colonies: combinatorial
optimization problems and telecommunications networks routing problems.
Here I will briefly survey the problems which can be attacked by ant colony
algorithms.
- Detailed description of a paradigmatic ant colony algorithm. Here I
will use the travelling salesman problem as running example to introduce
the details of Ant System, the first ant colony algorithm introduced in
1991.
- Developments and extensions of Ant System. Here I will explain how Ant
System has been extended in various ways to improve its performance. I will
also show some results of comparisons between ACS (currently the best performing
ant colony algorithm) and other general purpose heuristics like simulated
annealing, evolutionary computation, and tabu search.
- Hybridization of ant colony algorithms with local search. Here I will
show results obtained with ant colony algorithms plus local search applied
to the symmetric and asymmetric travelling salesman problems. The hybrid
ant colony algorithm will be compared with state-of-the-art specialized
heuristics for the same problems.
- Ant colony algorithms in combinatorial optimization. Here I will explain
how ant colony algorithms can be applied to combinatorial problems other
than the travelling salesman. I will cover applications to some of the following
combinatorial problems: the quadratic assignment problem, the sequential
ordering problem, the vehicle routing problem, the graph colouring problem,
etc.
- Ant colony algorithms for telecommunications networks. Here I will cover
ant colony algorithms that solve the routing problem for both circuit switched
and packet switched telecommunications networks.
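As a concrete illustration of the paradigmatic algorithm described above, here is a minimal Ant System sketch on a toy five-city symmetric TSP. The instance and parameter values are invented for illustration and are not tuned.

```python
import random

CITIES = [(0, 0), (0, 1), (1, 0), (1, 1), (2, 0.5)]
N = len(CITIES)

def dist(i, j):
    (x1, y1), (x2, y2) = CITIES[i], CITIES[j]
    return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5

def tour_length(tour):
    return sum(dist(tour[k], tour[(k + 1) % N]) for k in range(N))

def ant_system(iters=50, n_ants=10, alpha=1.0, beta=2.0, rho=0.5, seed=0):
    rng = random.Random(seed)
    tau = [[1.0] * N for _ in range(N)]           # pheromone trails
    best, best_len = None, float("inf")
    for _ in range(iters):
        tours = []
        for _ in range(n_ants):
            tour = [rng.randrange(N)]
            while len(tour) < N:                  # build a tour city by city
                i = tour[-1]
                cand = [j for j in range(N) if j not in tour]
                # transition weight ~ pheromone^alpha * (1/distance)^beta
                w = [tau[i][j] ** alpha / dist(i, j) ** beta for j in cand]
                tour.append(rng.choices(cand, weights=w)[0])
            tours.append(tour)
        tau = [[t * (1 - rho) for t in row] for row in tau]   # evaporation
        for tour in tours:                        # deposit ~ tour quality
            L = tour_length(tour)
            if L < best_len:
                best, best_len = tour, L
            for k in range(N):
                i, j = tour[k], tour[(k + 1) % N]
                tau[i][j] += 1.0 / L
                tau[j][i] += 1.0 / L
    return best, best_len

best, best_len = ant_system()
print(best, best_len)
```

The extensions discussed in the tutorial (such as ACS) change the transition rule and restrict pheromone deposit to the best tours, but the construct-evaporate-deposit loop above is common to the whole family.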
Marco Dorigo received the Laurea (Master of Technology) degree in
industrial technologies engineering in 1986 and the Ph.D. degree in information
and systems electronic engineering in 1992, both from Politecnico di Milano, Milan,
Italy, and the title of Agrégé de l'Enseignement Supérieur from the Université
Libre de Bruxelles, Belgium, in 1995. From 1992 to 1993 he was a research
fellow at the International Computer Science Institute in Berkeley, CA.
In 1993 he was a NATO-CNR fellow, and from 1994 to 1996 a Marie Curie fellow.
Since 1996 he has been a Research Associate with the FNRS, the Belgian National
Fund for Scientific Research. His research areas include evolutionary computation,
distributed models of computation, and reinforcement learning. He is interested
in applications to autonomous robotics, combinatorial optimization, and
telecommunications networks. Dr. Dorigo is an Associate Editor for the IEEE
Transactions on Systems, Man, and Cybernetics, and for the IEEE Transactions
on Evolutionary Computation. He is a member of the Editorial Board of the
Evolutionary Computation journal and of the Adaptive Behavior journal. He
was awarded the 1996 Italian Prize for Artificial Intelligence.
An Introduction to Evolutionary Computation - David B. Fogel, Natural
Selection, Inc.
Evolution is the primary unifying principle of modern biological thought.
Classic Darwinian evolutionary theory, combined with the selectionism of
Weismann and the genetics of Mendel, has now become an almost universally
accepted set of arguments known as the neo-Darwinian paradigm.
Evolutionary thought, however, extends beyond the study of life. Evolution
is an optimization process that can be simulated using a computer or other
device and put to good engineering purpose. The interest in such simulations
has increased dramatically in recent years as applications of this technology
have been developed to supplant conventional technologies in power systems,
pattern recognition, control systems, factory scheduling, pharmaceutical
design, and diverse other areas.
Evolutionary computation has become the standard term that encompasses methods
for simulating evolution, typically including evolution strategies, evolutionary
programming, genetic algorithms, and related areas of genetic programming,
classifier systems, and also artificial life. The term is still relatively
new and represents an effort to bring together researchers who have been
working in these and other closely related fields but following different
paradigms.
This tutorial will provide an introduction to evolutionary computation.
Basic concepts in modeling evolution will be discussed, along with prototypical
examples of genetic algorithms, evolutionary programming, and evolution
strategies applied to optimization problems. Attention will be given to
fundamental convergence and other mathematical properties of these algorithms.
Emphasis will be placed on the issues involved in choosing appropriate representations,
variation operators, and selection mechanisms when designing an evolutionary
algorithm. Brief consideration will also be given to advanced topics such
as self-adaptation and the use of operator fitness distributions. Attendees
will receive the essential knowledge required to understand the fundamental
concepts presented in the vast majority of papers and tutorials at the
Genetic Programming 1998 Conference.
David B. Fogel, Ph.D. is Chief Scientist of Natural Selection, Inc.,
in La Jolla, CA. He has over 13 years of experience applying evolutionary
computation to real-world problems in industry, medicine, and defense. Dr.
Fogel received the Ph.D. in engineering sciences (systems science) from
UCSD in 1992. He is the founding Editor-in-Chief of the IEEE Transactions
on Evolutionary Computation, co-Editor-in-Chief of Oxford University Press'
Handbook of Evolutionary Computation, author of two books, including most
recently Evolutionary Computation: Toward a New Philosophy of Machine Intelligence
(IEEE Press, 1995), and editor of The Fossil Record of Evolutionary Computation
(IEEE Press, forthcoming). Dr. Fogel has over 100 publications in evolutionary
computation and is the Program Chairman of the 1998 IEEE International Conference
on Evolutionary Computation (ICEC), held as part of the World Congress on
Computational Intelligence in Anchorage, Alaska, May, 1998.
Introduction to Evolutionary Robotics - Takashi Gomi, Applied AI Systems,
Inc.
The field of Evolutionary Robotics (ER) emerged in the early 1990s as an
important addition to intelligent robotics research. It currently deals mostly
with the evolution of control software for mobile robots, with the goal of
developing competent robots that perform their tasks well and adapt
continuously to a given operational environment.
The tutorial is partly based on the results of an ongoing survey of key
Evolutionary Robotics research activities conducted worldwide today. Included
in the survey
are research activities in such places as Sussex University, Stanford University,
University of Zurich, University of Aarhus (Denmark), Massachusetts Institute
of Technology (MIT), Ecole Normale Superieure (France) and Ecole Polytechnique
Federale De Lausanne (EPFL). The latest information on Evolutionary Hardware
will also be included.
The tutorial highlights the principles of evolving complete software for
intelligent robots; differences in the ways various methods of Evolutionary
Computation are applied to create robots which evolve in their operational
environment; recent noteworthy achievements in the field; and prognosis
for the application of the technology to industry, business, and life in
general in the near future. A large number of video clips will be used to
highlight performance of various evolved robots.
Participants should have a good introductory knowledge of GA, GP, and EC,
in general. Beginners in robotics are welcome.
Takashi Gomi was born in Tokyo in 1940. He obtained his M.Eng. degree
in Electrical and Electronic Engineering from Waseda University in 1964
and his D.Eng. in 1997 from Hokkaido University. From 1971-73 he worked
as a Graduate Research and Teaching Assistant at the University of Alberta
in Computing Science. He became a member of the Scientific Staff at Bell
Northern Research in 1973. After a brief experience at Atomic Energy of
Canada, Dr. Gomi established Applied AI Systems, Inc. (AAI) in 1983 as a
company dedicated to research and development of intelligent system technology;
it is the oldest specialty AI company in Canada and is known widely in Japan,
Europe, and the USA. In his company he conducts application research, intelligent
system development including intelligent robots, marketing of a wide range
of AI and artificial life products, and training of AI research engineers.
Despite its small size (18 people), AAI is recognized as the top-level intelligent
system R&D organization in Canada. Since 1992, AAI has undertaken a series
of R&D projects aimed at the transfer of behavior-based AI, or New AI, technology
to government, business, and industry.
of a newly established AI and artificial life company in Japan, called AAI
Japan. Dr. Gomi is a member of the IEEE Service Robot Technical Committee.
Introduction to Genetic Algorithms - David E. Goldberg, University of
Illinois
This tutorial introduces genetic algorithms, shows numerous applications
of GAs, reports on recent research results in the field, and outlines pending
problems and areas of research.
David E. Goldberg is Professor of General Engineering at the University
of Illinois at Urbana-Champaign. He holds a Ph.D. from the University of
Michigan and has written numerous papers on the application and foundations
of genetic algorithms. His book Genetic Algorithms in Search, Optimization,
and Machine Learning (Addison-Wesley, 1989) is widely used and his recent
studies have considered the speed, convergence, and accuracy of the traditional
genetic algorithm as well as a nontraditional approach, called the messy
genetic algorithm, that works faster, better, and more reliably at solving
problems.
Strongly Typed Genetic Programming - Thomas Haynes, Wichita State University
For GP, a prime consideration in designing the function and terminal sets
is closure, which requires all of the functions to accept arguments of a
single data type (e.g., a float) and to return values of that same data type.
A consequence is that all functions must return values that can be used
as arguments for any other function. Hence, closure entails that any element
can be a child node in a parse tree for any other element without creating
conflicting data types.
A problem with using GP to solve large and complex problems is the considerable
size of the state-space to be searched for generating good solutions. Even
for small terminal and function sets and tree depths, search spaces on the
order of 10^30 to 10^40 are not uncommon. To address this pressing problem,
researchers have been investigating various means to reduce the GP state-space
size for complex problems. Notable work in this area includes automatically
defined functions (ADF), module acquisition (MA), and strongly typed genetic
programming (STGP). The first two methods utilize function decomposition
to reduce the state-space. The STGP method utilizes structuring of the GP
S-expression to reduce the state-space.
This tutorial will
- define STGP (the work of David Montana)
- examine the utility of STGP
- show how to add strong typing to GP
- review case studies:
  - predator/prey work by Thomas Haynes
  - type hierarchy by Thomas Haynes
  - line segment detection by Chris Harris
  - functional GP by Tina Yu
  - object-oriented programs by Wilker Bruce
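To illustrate the core mechanism, here is a small sketch of type-respecting tree generation: each function declares its return type and argument types, and generation only chooses nodes whose return type matches the type requested by the parent. The primitive set below is hypothetical, not taken from Montana's work.

```python
import random

FUNCTIONS = {
    # name: (return type, argument types)
    "add_v": ("vec", ("vec", "vec")),
    "scale": ("vec", ("float", "vec")),
    "norm":  ("float", ("vec",)),
}
TERMINALS = {"pos": "vec", "goal": "vec", "0.5": "float"}

def gen(req_type, depth, rng):
    terms = [t for t, ty in TERMINALS.items() if ty == req_type]
    funcs = [f for f, (ret, _) in FUNCTIONS.items() if ret == req_type]
    if depth == 0 or not funcs or rng.random() < 0.3:
        return rng.choice(terms)          # grow a leaf of the right type
    f = rng.choice(funcs)                 # grow an interior node and recurse
    return [f] + [gen(arg, depth - 1, rng) for arg in FUNCTIONS[f][1]]

tree = gen("vec", 3, random.Random(2))
print(tree)   # every subtree is type-correct by construction
```

Crossover preserves the same guarantee by swapping only subtrees whose roots have identical return types; ruling out ill-typed trees up front is what shrinks the search space relative to closure-based GP.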
Thomas Haynes received his BSEP (Engineering Physics) from the University
of Oklahoma in 1986. In 1994, he received a MS in Computer Science from
the University of Tulsa. His thesis was entitled "A Simulation of Adaptive
Agents in a Hostile Environment" which combined STGP and multiagent
learning. He is currently a PhD candidate at the University of Tulsa, with
his dissertation being "Collective Adaptation: The Sharing of Building
Blocks". In 1997, he was a Visiting Assistant Professor at the University
of Missouri, St. Louis. He is currently an Assistant Professor at Wichita
State University.
Evolution Strategies - Michael Herdy, Technical University of Berlin
The tutorial will cover the design of mutation operators and adaptation
mechanisms for strategy parameters in evolution strategies. The tutorial
will show the application of evolution strategies to different problem classes.
A necessary condition for the applicability of evolution strategies is
strong causality: small variations of the variables must lead to small changes
in the value of the fitness function. If the original formulation does not
satisfy strong causality, the creation of suitable
mutation operators can often help. The tutorial will show how to build operators
for some combinatorial optimization problems (e.g. traveling salesman problem,
Rubik's cube, true magic square), and the effect of these operators will
be shown in computer demonstrations. Suitable step-size adaptation mechanisms
play an important role in convergence to an optimal solution of a given
problem. This holds not only for real-coded problems but also for combinatorial
problems such as the true magic square problem. In addition,
the adaptation of the population size is important for the solution of such
problems. These mechanisms will also be explained and demonstrated on
the computer. Evolution strategies with subjective selection
of the offspring can be applied to optimization problems, when no objective
evaluation of the offspring is possible. The application of ES with subjective
selection and automatically adapted step size to optimization problems with
no possibility of objective evaluation will be explained. The optimization
of coffee blends, honey mixtures, and glaze mixtures for ceramic tiles all
belong to this category of problems. The application of ES with subjective
selection will likewise be shown in computer demonstrations.
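As a small illustration of step-size adaptation, here is a sketch of a (1+1)-ES with Rechenberg's 1/5 success rule on a simple quadratic. The tutorial covers richer mechanisms; all parameter choices here are illustrative.

```python
import random

def one_plus_one_es(f, x0, sigma=1.0, iters=2000, seed=0):
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    successes = 0
    for t in range(1, iters + 1):
        y = [xi + sigma * rng.gauss(0, 1) for xi in x]
        fy = f(y)
        if fy <= fx:                  # successful mutation: offspring survives
            x, fx = y, fy
            successes += 1
        if t % 50 == 0:               # 1/5 rule: if more than 1 in 5 mutations
            rate = successes / 50.0   # succeed, enlarge the step size;
            sigma *= 1.5 if rate > 0.2 else 1 / 1.5   # otherwise shrink it
            successes = 0
    return x, fx

sphere = lambda v: sum(vi * vi for vi in v)
x, fx = one_plus_one_es(sphere, [5.0, -3.0, 2.0])
print(fx)   # converges to (very nearly) zero
```

The adaptation keeps mutations at a scale where strong causality holds: steps large enough to make progress, small enough that fitness changes remain informative.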
Michael Herdy received his diploma as a college engineer in telecommunications
(1980) and as an academically trained electrical engineer (1989). From 1981
to 1989 he worked as an engineer in the laboratory of Ingo Mueller at the
Technical University of Berlin (Department of Thermodynamics and Fluid Dynamics).
From 1989 to 1994 he worked as a researcher and teacher in the field of the
theory and applications of evolution strategies in the lab of Ingo Rechenberg
at the Technical University of Berlin (Department of Bionics and Evolution
Techniques), and since 1994 he has worked on a research project in the same
field and the same lab. His special interests are the broad fields of bionics
(the transfer of results from biological evolution to technology) and
evolutionary algorithms.
Intelligent Agents - Vasant Honavar, Iowa State University
The tutorial will present an overview of intelligent agents and their applications.
Specific topics to be covered include: intelligent agent architectures,
languages and tools for design and implementation of intelligent agents,
mobile agents, sample applications of intelligent agents in software customization,
information retrieval, knowledge discovery and data mining, robotics, and
design and manufacturing.
Dr. Vasant Honavar is an associate professor of Computer Science
and Neuroscience and Director of the Artificial Intelligence Laboratory
at Iowa State University. He received his Ph.D. in Computer Science and
Cognitive Science from the University of Wisconsin - Madison in 1990. Honavar's
current research and teaching interests include: artificial intelligence,
cognitive science, machine learning, neural networks, intelligent agents,
and multi-agent systems, adaptive systems, data mining and knowledge discovery,
computational and cognitive neuroscience, evolutionary computation and artificial
life, machine perception, intelligent manufacturing systems, intelligent
systems, knowledge-based systems, robotics, pattern recognition, medical
informatics, computational biology, intelligent multi-media information
systems, artificial intelligence applications in science and engineering,
and parallel and distributed computing. Honavar has published over 80 refereed
articles in journals, books, and conference proceedings, 10 invited book
chapters, and two edited books on these topics. He is presently writing
a book on Adaptive and Learning Systems.
Constrained Genetic Programming - Cezary Z. Janikow, University of Missouri
- St. Louis
Genetic Programming (GP) relies on the closure property, which states that
every function can call any other function and that any terminal can provide
values for any function argument. Unfortunately, this property admits many
structures (e.g., programs) that are invalid with respect to domain-specific
syntax and semantics, or simply unwanted given available heuristics. This
enlargement of the search space (aggravated by sufficiency) can slow down
the simulated evolution.
In GP, this is dealt with either by penalizing such structures (e.g., by
setting their evaluation to 0) to ensure their extinction, or by extending
interpretations (e.g., protected division) to validate otherwise invalid
structures. In general, however, these choices are arbitrary. Koza has introduced
structure-preserving crossover, an ad-hoc, domain-specific approach aimed
at preventing crossover from generating undesired structures.
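The protected-division workaround mentioned above is easy to make concrete. In Koza's convention, division returns 1 when the divisor is zero, so every argument combination yields a value and closure is preserved (a minimal Python sketch):

```python
def protected_div(a, b):
    # Protected division: return 1.0 on a zero divisor, so any
    # subtree may safely supply either argument (closure property).
    return a / b if b != 0 else 1.0

print(protected_div(6, 3))   # 2.0
print(protected_div(5, 0))   # 1.0 instead of a ZeroDivisionError
```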
The objective of Constrained Genetic Programming (CGP) is to provide general
and domain-independent means for dealing with these problems in crossover,
as well as in initialization and mutation. Arising from this need, CGP has
now evolved into a more elaborate means to dictate strong (weak) constraints
(heuristics) on the solution space. In fact, we are currently working on
using similar mechanisms to evolve GP representation in the process of the
same evolution.
CGP was developed in 1995 at NASA/JSC and UMSL, simultaneously and independently
of Montana's Strongly Typed GP (STGP). CGP v1 in fact implemented the same
ideas as STGP, except that 1) rather than using type names for inducing
constraints, it expected explicit constraints, which allows CGP to specify
semantic constraints, 2) CGP's crossover was shown to be stronger (i.e.,
able to generate more valid programs), and 3) STGP practically merged type
constraints with tree depths. But the biggest difference lies in CGP's formal
methodology. Constraints can be entered in multiple forms, reflecting the
different forms in which they arise. These constraints are transformed into
a normal form, which is shown to be a minimal description of the same constraints.
Constrained initialization, mutation, and crossover are shown to satisfy the
normal constraints and to have the same asymptotic complexity as the unconstrained
operators. CGP v2.1 extends v1 by also introducing data typing and overloading
(as in programming languages). Overloading allows a function such as plus
to compute different types according to the types of its instance's arguments.
CGP v2.1 also allows weak constraints, which are heuristic (weighted) preferences
for some evolved forms over others. Another advantage of CGP v2.1 is its
extensive implementation: it has been built into lil-gp (a public-domain
tool for developing GP), so all of lil-gp's capabilities remain available
(currently lil-gp 1.02 is used, but the plug-in approach allows easy merging
of CGP with future lil-gp releases).
The only exception is that even though CGP provides for propagating constraints
between modules, the implementation has not been extended to ADFs.
In the tutorial we will present the constraint language, formal analysis
of constraints and constrained operators, and comparison of CGP to STGP.
We will illustrate CGP v2.1 with application examples. We will discuss the
implications of being able to control the search space (theoretical and
practical). We will also discuss current/future developments: ADFs, recursive
functions, merging with depth limitations, and evolving GP representation.
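The actual CGP constraint language is presented in the tutorial itself; purely as a hypothetical illustration of the idea, explicit constraints on which symbols may fill which argument slots can be checked before crossover accepts a subtree swap (all names below are invented for the example):

```python
# Hypothetical constraint table: for each (function, argument-slot) pair,
# the set of symbols allowed as the root of the subtree filling that slot.
# These are explicit constraints, not type names, so domain semantics
# (e.g., "only booleans in an if-condition") can be expressed directly.
ALLOWED = {
    ("if", 0): {"and", "or", "not", "x>0"},   # condition slot: booleans
    ("if", 1): {"plus", "times", "x", "1"},   # branch slots: numerics
    ("if", 2): {"plus", "times", "x", "1"},
    ("plus", 0): {"plus", "times", "x", "1"},
    ("plus", 1): {"plus", "times", "x", "1"},
}

def swap_is_valid(parent_symbol, arg_index, new_root_symbol):
    # Constrained crossover accepts a subtree swap only when the incoming
    # subtree's root symbol is allowed in the target argument slot.
    return new_root_symbol in ALLOWED.get((parent_symbol, arg_index), set())

print(swap_is_valid("if", 0, "not"))    # True: boolean into a condition
print(swap_is_valid("if", 0, "plus"))   # False: numeric rejected
```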
Cezary Z. Janikow received a B.A. (1986) and an M.S. (1987) from the
University of North Carolina at Charlotte, and a Ph.D. (1991) from the University
of North Carolina at Chapel Hill, all in computer science. Currently, he
is an Associate Professor of CS at UMSL. His research interests include
evolutionary computation, representation and constraints in genetic algorithms
and genetic programming, machine learning, and most recently fuzzy representation
for machine learning. He co-authored GENOCOP (linear constraints in GAs),
he authored and developed GenET (generic-GA tool), CGP (Constrained GP),
GIL (GA for inductive learning) and FID (fuzzy decision tree). His other
interests include software design and development (both structured and object-oriented),
C/C++/Java programming, and UNIX productivity tools.
Introduction to Genetic Programming - John Koza, Stanford University
This introductory tutorial is intended primarily for people attending the
Genetic Programming Conference for the first time. It will provide basic
information about genetic programming, how it works, and how to apply it
to a particular problem. Genetic programming will be illustrated using problems
of system identification (symbolic regression), control, classification,
optimization, pattern recognition, and design.
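To give a flavor of the symbolic-regression illustration, here is a deliberately tiny, hypothetical GP-style sketch (not the tutorial's material). It uses subtree mutation only, where full GP adds crossover and other operators, and evolves an expression tree toward the target x^2 + x:

```python
import random, operator

# Tiny, mutation-only GP-style sketch for symbolic regression of x**2 + x.
FUNCS = [operator.add, operator.sub, operator.mul]
TERMS = ['x', 1.0]

def rand_tree(depth=3):
    # Grow a random expression tree: (function, left, right) or a terminal.
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMS)
    f = random.choice(FUNCS)
    return (f, rand_tree(depth - 1), rand_tree(depth - 1))

def evaluate(t, x):
    if t == 'x':
        return x
    if isinstance(t, float):
        return t
    return t[0](evaluate(t[1], x), evaluate(t[2], x))

def error(t):
    # Sum of squared errors against the target on x = -5..5.
    e = sum((evaluate(t, x) - (x * x + x)) ** 2 for x in range(-5, 6))
    return e if e == e else float('inf')   # guard against NaN

def mutate(t):
    # Descend into a random child, or replace the subtree entirely.
    if isinstance(t, tuple) and random.random() < 0.7:
        i = random.choice((1, 2))
        parts = list(t)
        parts[i] = mutate(parts[i])
        return tuple(parts)
    return rand_tree(2)

random.seed(3)
pop = [rand_tree() for _ in range(60)]
start = min(error(t) for t in pop)
for gen in range(60):
    pop.sort(key=error)                    # best trees first (elitism)
    pop = pop[:20] + [mutate(random.choice(pop[:20])) for _ in range(40)]
best = min(pop, key=error)                 # never worse than `start`
```

Because the 20 best trees survive each generation unchanged, the best error can only decrease over time.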
The tutorial will include discussion of promising application areas for
GP, directions for possible future research, and a bibliography, conferences,
E-Mail lists, WWW and FTP sites.
Automatically defined functions and other more advanced topics will be briefly
surveyed. These more advanced topics will each be covered in greater detail
in the "Advanced Genetic Programming" tutorial (see below). This
tutorial should provide sufficient background for the "Advanced Genetic
Programming" tutorial.
John R. Koza
is a consulting professor in the Computer Science Department and Symbolic
Systems Program at Stanford University. At Stanford, he teaches courses
on Genetic Algorithms and Genetic Programming, Artificial Life, and Computational
Issues in Molecular Biology. He is author of the 1992 book Genetic Programming:
On the Programming of Computers by Means of Natural Selection from the MIT
Press, a 1994 book Genetic Programming II: Automatic Discovery of Reusable
Programs from the MIT Press, over 100 articles, and is currently working
on a third book on genetic programming. He received his Ph.D. in Computer
Science from the University of Michigan under John Holland in 1972.
Advanced Genetic Programming - John Koza, Stanford University
This tutorial covers more advanced topics of genetic programming, including
Automatically Defined Functions (ADFs)
Memory, State, Data structures, Mental Models
Evolutionary Selection of Architecture
Evolution of Architecture using Architecture-Altering Operations
Implementation on parallel computers
Automatically Defined Macros (ADMs)
Automatically Defined Iterations (ADIs)
Automatically Defined Loops (ADLs)
Recursion
Rapidly Reconfigurable field-programmable gate arrays (FPGAs) and evolvable
hardware
Implementation of GP in assembly code
Cellular encoding (Developmental GP)
Selected problems where GP is competitive with human solutions
Automated design of analog circuits
The tutorial will include discussion of promising application areas for
GP, directions for possible future research, and a bibliography, conferences,
E-Mail lists, WWW and FTP sites.
John R. Koza
is a consulting professor in the Computer Science Department and Symbolic
Systems Program at Stanford University. At Stanford, he teaches courses
on Genetic Algorithms and Genetic Programming, Artificial Life, and Computational
Issues in Molecular Biology. He is author of the 1992 book Genetic Programming:
On the Programming of Computers by Means of Natural Selection from the MIT
Press, a 1994 book Genetic Programming II: Automatic Discovery of Reusable
Programs from the MIT Press, over 100 articles, and is currently working
on a third book on genetic programming. He received his Ph.D. in Computer
Science from the University of Michigan under John Holland in 1972.
Large-Scale Discrete Optimization via Parallel Genetic Algorithms -
Robert R. Meyer, University of Wisconsin
Large-scale optimization models arise in a broad variety of scientific,
engineering, and industrial applications and provide exciting challenges
in both the development of new algorithms and in the efficient implementation
of these algorithms on advanced computer architectures. The focus of this
tutorial will be parallel and distributed genetic algorithms that provide
effective approaches for problems that are often intractable for standard
optimization techniques because of their size or difficulty. For example,
the GA techniques to be described have been successfully used in minimum
perimeter domain decomposition problems that have more than one billion
binary variables in standard quadratic assignment formulations. Successful
applications of GA's to large-scale optimization problems arising in network
design and biocomputing will also be considered.
These structured problems of enormous size can be solved by employing a
very high-level view that utilizes decomposition into subproblems of manageable
size followed by distributed coordination operations that appropriately
combine subproblem solutions (building blocks in a genetic algorithm context)
to achieve feasible solutions of the original problem. An additional advantage
of this high-level view is the compression of the problem representation,
which, combined with the "island" GA paradigm, makes the cost
of the communication operations of the algorithm small relative to computation
in parallel and distributed computing environments. Computational results
on a variety of advanced computer architectures will be presented.
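The "island" paradigm mentioned above can be sketched on a toy problem. The following hypothetical example (not the tutorial's codes, and on OneMax rather than a billion-variable model) shows the structure: islands evolve independently and exchange only their best individuals, so inter-island communication stays small relative to computation:

```python
import random

fitness = sum  # OneMax toy problem: maximize the number of 1-bits

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(x, rate=0.05):
    return [bit ^ (random.random() < rate) for bit in x]

def evolve(pop, steps=200):
    # Steady-state GA: each step, a child replaces the worst if no worse.
    for _ in range(steps):
        a, b = random.sample(pop, 2)
        child = mutate(crossover(a, b))
        worst = min(range(len(pop)), key=lambda i: fitness(pop[i]))
        if fitness(child) >= fitness(pop[worst]):
            pop[worst] = child

def migrate(islands):
    # Ring migration: each island's best replaces a neighbor's worst;
    # only one individual per island crosses island boundaries.
    bests = [max(pop, key=fitness) for pop in islands]
    for i, pop in enumerate(islands):
        worst = min(range(len(pop)), key=lambda j: fitness(pop[j]))
        pop[worst] = bests[i - 1]

random.seed(0)
islands = [[[random.randint(0, 1) for _ in range(20)] for _ in range(10)]
           for _ in range(4)]
for epoch in range(10):
    for pop in islands:
        evolve(pop)          # in a real system, one process per island
    migrate(islands)
best = max((max(pop, key=fitness) for pop in islands), key=fitness)
```

In a parallel implementation each island would run in its own process, with `migrate` as the only communication step.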
Robert Meyer received a B.S. in Mathematics from Cal Tech in 1964
and a Ph.D. in Computer Sciences from the University of Wisconsin-Madison
in 1968. After five years as a research analyst for the Shell Oil Company
in Emeryville (California) and Houston, he joined the faculty of the University
of Wisconsin in 1973, and is currently Professor of Computer Sciences there.
His research interests include large-scale optimization (with emphasis on
decomposition methods and network-related applications), genetic algorithms,
and parallel and distributed computation, and are represented by more than
sixty technical papers and five co-edited volumes of conference proceedings.
Inductive Logic Programming - Luc De Raedt, Katholieke Universiteit
Leuven and Nada Lavrac, Jozef Stefan Institute
Since the very start of machine learning (and artificial intelligence),
logic has been very popular as a representation language for inductive concept-learning
and the possibilities for learning in a first order representation have
been investigated. Early research involved structural matching, least general
generalization, model inference, and theory restructuring. Recently, the
area of logic and learning has coalesced into the lively research field
of Inductive Logic Programming, which studies inductive machine learning
within the framework of logic programming.
The tutorial aims at providing a complete survey of these developments,
especially those concerning first order logic. First order logic significantly
extends propositional representations and therefore enlarges the scope of
machine learning applications. In terms of knowledge discovery in databases,
propositional techniques learn from a single relation or table in a relational
database, whereas first order approaches typically cope with multiple relational
tables. Since learning in first-order logic also allows one to induce (logic)
programs, there is a shared interest with genetic programming.
The course gives an overview of the historical developments, fundamental
techniques and applications of Inductive Logic Programming. Logical and
algorithmic foundations of this field are emphasized, and illustrated by
means of well-known systems and algorithms. This includes: learning as search,
logical representations, operators and settings for induction, a survey
of the state-of-the-art, and applications in knowledge discovery (including
case-studies) and logic programming. Furthermore, pointers to many of the
available systems in the public domain are provided.
The course assumes basic knowledge of artificial intelligence. Basic knowledge
of logic (e.g., some notions about Prolog and/or the predicate calculus)
is beneficial but not required.
Luc De Raedt is a post-doctoral fellow of the Belgian National Fund
for Scientific Research and an assistant professor at the Katholieke Universiteit
Leuven (Belgium). His main interest is in inductive logic programming. He
is a coordinator of the European ESPRIT III and IV projects on Inductive
Logic Programming, was (co-)chair of the ECML-94 conference, ILP-95 workshop,
and IJCAI-97 Workshop on "Frontiers of ILP" and he has published
two books on inductive logic programming, and various papers in journals
such as Artificial Intelligence, Machine Learning, and Journal of Logic
Programming. He has given tutorials and invited talks on ILP at ISMIS-93,
ILPS-93, MSL-96, LOPSTR 96, IJCAI-97, ICML-97, PADD-97, EUROVAV-97, etc.
He is a member of the editorial board of Machine Learning, New Generation
Computing, AI Communications and Intelligent Data Analysis.
Nada Lavrac is a research associate at the Jozef Stefan Institute,
Ljubljana, Slovenia. Her main research interest is in inductive logic programming
and medical applications of machine learning. She is co-author and editor
of several books published by Sigma Press, MIT Press, Kluwer and Springer,
including "Inductive logic programming: techniques and applications"
(Ellis Horwood). She was a coordinator of ILPnet, the European Scientific
Network in Inductive Logic Programming (1993-96) and co-chair of ILP-97.
She is a member of the editorial boards of Applied Artificial Intelligence, New
Generation Computing, AI in Medicine and AI Communications.
Inherent Characteristics and Biases of Evolution in Genetic Programming
- Justinian Rosca, Siemens Corporate Research
Genetic Programming (GP) has gained increasing acceptance as a useful tool
in a wide variety of applications, including complex design, control, data mining,
structure learning, and pattern recognition. Getting the most out of efforts
to apply GP to hard problems relies on an understanding of the specific
characteristics and limitations of GP search. Such characteristics arise
from choices about the types of representations used and biases in generating
and processing representations.
In addition, GP has sparked numerous ideas about hybrid approaches to problem
solving. Further advances along these lines also necessitate an understanding
of the inherent characteristics and biases of evolution in GP. This tutorial
will highlight, analyze, and exemplify the important characteristics and
biases of evolution specific to GP.
The four main goals of the tutorial are to:
Summarize theoretical and experimental knowledge about
characteristics and biases of GP evolution
Describe the main results on static analysis of the distribution
of program behaviors, the role of the problem representation and the set
of primitives, dynamic biases induced by tree-shaped expressions, the role
of variable size in GP, and a statistical perspective on the dynamics of GP.
Present practical information about how you can exploit the inherent
characteristics of GP in your own problem
Provide a list of references for future investigation into this
topic.
Justinian Rosca received his PhD at the University of Rochester and
is now with Siemens Corporate Research in Princeton, New Jersey.
Cellular Neural Networks, the new CNN Universal Machine as a spatiotemporal
array computer, and CNN applications - Tamas Roska, Hungarian Academy of
Sciences
The Cellular Neural/nonlinear Network (CNN) was invented by L.O. Chua
and L. Yang in Berkeley in 1988. It is a dynamic nonlinear processing array
composed of continuous valued dynamical systems with geometrically local
interactions. These interactions are described by a cloning template. In
1992-93, T. Roska and L. O. Chua invented the CNN Universal Machine (CNN-UM)
architecture, in which the CNN array is embedded in a stored program computer,
combining spatio-temporal nonlinear dynamics with local and global logic.
The first two fully operational chips implementing the CNN-UM were fabricated
in Seville and Berkeley; the Seville chip has on-chip optical input (focal
plane). Meanwhile, the computational infrastructure was developed in Budapest,
and these two chips were functionally tested. A trillion equivalent operations
per second (per square centimeter) marks a new speed record for a single
chip (under stored programming). It has been shown that
neuromorphic models of vision and other modalities can be combined by using
the CNN-UM architecture. This work led to the concept of the Bionic Eye.
This tutorial will also contain some mission critical application case studies.
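The cell dynamics behind the CNN can be sketched compactly. Each cell obeys the Chua-Yang state equation x' = -x + A*y + B*u + z with a piecewise-linear output y = f(x), where A and B are the cloning templates. The following 1-D, forward-Euler sketch is only a hypothetical illustration (real CNN chips are 2-D analog arrays):

```python
def f(x):
    # Standard CNN output nonlinearity: saturates to the range [-1, 1].
    return 0.5 * (abs(x + 1) - abs(x - 1))

def cnn_step(x, u, A, B, z, dt=0.1):
    # One forward-Euler step of x' = -x + A*y + B*u + z on a 1-D array
    # with zero boundary cells; A and B are 3-entry cloning templates.
    n = len(x)
    y = [f(v) for v in x]
    nxt = []
    for i in range(n):
        acc = -x[i] + z
        for k in (-1, 0, 1):
            j = i + k
            if 0 <= j < n:
                acc += A[k + 1] * y[j] + B[k + 1] * u[j]
        nxt.append(x[i] + dt * acc)
    return nxt

# A pure self-feedback template A = [0, 2, 0] drives every cell to
# binary saturation: positive states settle at output +1, negative at -1.
x, u = [0.2, -0.3, 0.4], [0.0, 0.0, 0.0]
for _ in range(200):
    x = cnn_step(x, u, A=[0.0, 2.0, 0.0], B=[0.0, 0.0, 0.0], z=0.0)
outputs = [f(v) for v in x]
```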
Tamas Roska received the Diploma in electrical engineering from the
Technical University of Budapest in 1964 and the Ph.D. and D.Sc. degrees
in Hungary in 1973 and 1982, respectively. Since 1964 he has held various
research positions. During 1964-1970 he was with the Measuring Instrument
Research Institute, Budapest, between 1970 and 1982 with the Research Institute
for Telecommunication, Budapest (serving also as the head of department
for Circuits, Systems and Computers), and since 1982 he has been a scientific adviser
at the Computer and Automation Institute of the Hungarian Academy of Sciences
where he is presently head of the Analogic (dual) and Neural Computing Systems
Research Laboratory. Professor Roska has taught several courses at the Technical
University of Budapest; presently he teaches a graduate course on "Electronic
operators and neural circuits". In 1974, and each year since 1989,
he has been a Visiting Scholar at the Department of Electrical Engineering
and Computer Sciences and the Electronics Research Laboratory of the University
of California at Berkeley. His main research areas in electronic circuits
and systems have been: active circuits, computer-aided design, nonlinear
circuit and systems, neural circuits and analogic spatiotemporal supercomputing.
He has published several papers and four textbooks (partly as a co-author),
and held several guest seminars at various universities and research institutions
in Europe, USA, and Japan. Dr. Roska is a co-inventor of the CNN Universal
Machine (with L. O. Chua), a US Patent of the University of California with
worldwide protection and the analogic CNN Bionic Eye (with F. Werblin and
L. O. Chua), a filed US patent of the University of California. He has contributed
also to the development of the various implementations of these inventions
making this Cellular Analogic Supercomputer a reality. Professor Roska is
a member of several Hungarian and international scientific societies.
Since 1975 he has been a member of the Technical Committee on Nonlinear Circuits
and Systems of the IEEE Circuits and Systems Society. Between 1987-89 he
was the founding Secretary and later he served as Chairman of the Hungary
Section of the IEEE. Recently, he has been Associate Editor of the IEEE
Transactions on Circuits and Systems, Guest Co-Editor of special issues
on Cellular Neural Networks of the International Journal of Circuit Theory
and Applications (1992, 1996) and the IEEE Transactions on Circuits and
Systems (1993). He is a member of the Editorial Board of the International
Journal of Circuit Theory and Applications. Dr. Roska received the IEEE
fellow award for contributions to the qualitative theory of nonlinear circuits
and the theory and design of programmable cellular neural networks. In 1993
he was elected to be a member of the Academia Europaea (European Academy
of Sciences, London) and the Hungarian Academy of Sciences. For technical
innovations he received the D. Gabor Award; for establishing a new curriculum
in information technology and for his scientific achievement he was awarded
the A. Szent-Györgyi Award and the Széchenyi Award, respectively. In 1994, Dr.
Roska became an elected active member of the Academia Scientiarum et Artium
Europaea (Salzburg). He is currently affiliated with the
Hungarian Academy of Sciences and the University of California at Berkeley.
Probabilistic Incremental Program Evolution (PIPE) - Rafal Salustowicz,
IDSIA, Lugano, Switzerland
Probabilistic Incremental Program Evolution (PIPE) is a novel technique
for automatic program synthesis. PIPE iteratively generates successive populations
of programs according to an adaptive probability distribution over all possible
programs that can be constructed from a predefined instruction set. In each
iteration it uses the best program(s) to refine the distribution and thus
to stochastically generate better and better programs.
Unlike Genetic Programming, PIPE does not store domain knowledge in a population,
but in a probability distribution. It also does not use a crossover operator
to generate solution candidates, but a new method based on Population-Based
Incremental Learning. PIPE additionally works with different program representations,
such as simple parse trees, hierarchically structured multi-dimensional
trees and directed acyclic graphs that in principle allow for automatic
evolution of modules.
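PIPE's probability-tree update is more elaborate, but the core Population-Based Incremental Learning idea it builds on can be sketched, hypothetically, on fixed-length bitstrings: sample candidates from a probability vector, then nudge the vector toward the best sample so that better programs become more likely:

```python
import random

def pbil_step(p, fitness, samples=20, lr=0.1):
    # Sample candidates from the probability vector p, then move each
    # p[i] a fraction lr toward the corresponding bit of the best sample.
    pop = [[int(random.random() < pi) for pi in p] for _ in range(samples)]
    best = max(pop, key=fitness)
    return [pi + lr * (b - pi) for pi, b in zip(p, best)]

random.seed(1)
p = [0.5] * 16                 # start with a uniform distribution
for _ in range(100):
    p = pbil_step(p, sum)      # OneMax: fitness = number of 1-bits
# p now concentrates probability mass on mostly-ones strings.
```

PIPE applies the same sample-and-refine loop, but over a probabilistic prototype tree whose nodes hold distributions over instructions rather than a flat bit vector.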
This tutorial is designed to introduce GP researchers to the foundations
of PIPE and to bring them up to date with the state of the art in the design
of different program representations and update rules that can be used by
PIPE. It will present how PIPE can be applied to standard problems, multi-agent
learning tasks, and time-series analysis. It will discuss "the tricks
of the trade" in applying PIPE and include a short overview of existing
easy-to-use software.
Rafal Salustowicz received the "Diplom" (M.S.) degree in
computer science from the Technical University of Berlin, Germany, in 1995. From
1992 to 1994 he was a research and teaching assistant at the Technical University
of Berlin, where he taught classes in computer vision and conducted research
on 3D-scene, color, and motion analysis. In 1994 he spent half a year as
a visiting research assistant at the department of psychology, Stanford
University, California designing a genetic algorithm for the construction
of topologies of neural networks. Since 1995 he has been pursuing
a "Dr.-Ing." (Ph.D.) degree at IDSIA, the Istituto Dalle Molle
di Studi sull'Intelligenza Artificiale in Lugano, Switzerland. His research
interests are in probabilistic search through program space, neural networks,
genetic algorithms, learning and optimization in highly dynamic environments,
time-series analysis, and multiagent learning.
Bio-inspired Hardware Systems: A POEtic (Phylogenetic, Ontogenetic,
Epigenetic) Approach - Moshe Sipper, Swiss Federal Institute of Technology,
Lausanne
If one considers life on Earth since its very beginning, three levels of
organization can be distinguished: the phylogenetic level concerns the temporal
evolution of the genetic programs within individuals and species, the ontogenetic
level concerns the developmental process of a single multicellular organism,
and the epigenetic level concerns the learning processes during an individual
organism's lifetime. In analogy to nature, the space of bio-inspired hardware
systems can be partitioned along these three axes, phylogeny, ontogeny,
and epigenesis, giving rise to the POE model. This tutorial is an exposition
and examination of bio-inspired systems within the POE framework, the goals
being: (1) to present an overview of current-day research, (2) to demonstrate
that the POE model can be used to classify bio-inspired systems, and (3)
to identify possible directions for future research, derived from a POE
outlook, involving systems endowed with evolutionary, reproductive, regenerative,
and learning capabilities. Emphasis will be placed on the phylogenetic axis:
evolvable hardware and evolutionary electronics.
Moshe Sipper is
a senior researcher in the Logic Systems Laboratory at the Swiss Federal
Institute of Technology, Lausanne, Switzerland. He received a B.A. in Computer
Science from the Technion -- Israel Institute of Technology, an M.Sc. and
a Ph.D. from Tel Aviv University. His chief interests involve the application
of biological principles to artificial systems, including evolutionary computation,
cellular automata (with an emphasis on evolving cellular machines), bio-inspired
systems, evolving hardware, complex adaptive systems, artificial life, and
neural networks. Dr. Sipper has authored and co-authored several scientific
papers in these areas, as well as his recent book Evolution of Parallel
Cellular Machines: The Cellular Programming Approach (Heidelberg: Springer-Verlag,
1997). He was co-organizer of a special session entitled ``Toward Evolware,''
held as part of the IEEE International Conference on Evolutionary Computation
(ICEC'97), and is Program Chairman of the Second International Conference
on Evolvable Systems: From Biology to Hardware (ICES98), to be held in Lausanne
in September 1998.
Cellular Programming: The Evolution Of Parallel Cellular Machines -
Moshe Sipper, Swiss Federal Institute of Technology, Lausanne
Nature abounds in systems involving the actions of simple, locally interacting
components, that give rise to coordinated global behavior. These collective
systems have evolved by means of natural selection to exhibit striking problem-solving
capacities, while functioning within a complex, dynamic environment. In
recent years we have witnessed a rapidly growing interest in such complex
adaptive systems, addressing, among other issues, the major problem
of designing them to exhibit a specific behavior or solve a given problem.
One possible approach, which we explore in this tutorial, is to employ artificial
evolution. The systems studied are essentially generalizations of the well-known
cellular-automata model, originally introduced by Stanislaw Ulam and John
von Neumann in the 1940s. Introducing the cellular programming
evolutionary algorithm, we apply it to a number of computational tasks,
demonstrating that high-performance systems can be evolved. We also show
that one can evolve connectivity architectures for such systems. Evolving
cellular systems hold potential both scientifically, as vehicles for studying
phenomena of interest in areas such as complex adaptive systems and artificial
life, and practically, with a range of potential applications ensuing from
the construction of adaptive systems, and in particular evolving ware,
evolware.
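The cellular-automata substrate underlying these systems admits a very small sketch: a synchronous update in which every cell computes its next state from its local neighborhood. In cellular programming, each cell would carry its own evolvable rule table instead of the single global rule used in this hypothetical example:

```python
def ca_step(state, rule):
    # One synchronous update of a 1-D binary cellular automaton with
    # periodic boundaries; `rule` maps each 3-cell neighborhood to 0/1.
    n = len(state)
    return [rule[(state[i - 1], state[i], state[(i + 1) % n])]
            for i in range(n)]

# Elementary rule 110 written out as a neighborhood table:
RULE_110 = {(1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
            (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}

state = [0, 0, 0, 1, 0, 0, 0, 0]
state = ca_step(state, RULE_110)
print(state)   # [0, 0, 1, 1, 0, 0, 0, 0]
```

The cellular programming algorithm evolves such per-cell rule tables under local fitness and local genetic operators, in the same spirit of strictly local interaction.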
Moshe Sipper is
a senior researcher in the Logic Systems Laboratory at the Swiss Federal
Institute of Technology, Lausanne, Switzerland. He received a B.A. in Computer
Science from the Technion -- Israel Institute of Technology, an M.Sc. and
a Ph.D. from Tel Aviv University. His chief interests involve the application
of biological principles to artificial systems, including evolutionary computation,
cellular automata (with an emphasis on evolving cellular machines), bio-inspired
systems, evolving hardware, complex adaptive systems, artificial life, and
neural networks. Dr. Sipper has authored and co-authored several scientific
papers in these areas, as well as his recent book Evolution of Parallel
Cellular Machines: The Cellular Programming Approach (Heidelberg: Springer-Verlag,
1997). He was co-organizer of a special session entitled ``Toward Evolware,''
held as part of the IEEE International Conference on Evolutionary Computation
(ICEC'97), and is Program Chairman of the Second International Conference
on Evolvable Systems: From Biology to Hardware (ICES98), to be held in Lausanne
in September 1998.
Reinforcement Learning - Richard S. Sutton, University of Massachusetts
Reinforcement learning is learning about, from, and while interacting with
an environment in order to achieve a goal. In other words, it is a relatively
direct model of the learning that people and animals do in their normal
lives. In the last two decades, this age-old problem has come to be much
better understood by integrating ideas from psychology, optimal control,
artificial neural networks, and artificial intelligence. New methods and
combinations of methods have enabled much better solutions to large-scale
applications than had been possible by any other means. This tutorial will
provide a top-down introduction to the field, covering Markov decision processes
and approximate value functions as the formulation of the problem, and dynamic
programming, temporal-difference learning, and Monte Carlo methods as the
principal solution methods. Relationships to genetic programming and evolutionary
methods will be highlighted. The role of neural networks and planning will
also be covered. The emphasis will be on understanding the capabilities
and appropriate role of each class of methods within an integrated
system for learning and decision making.
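As a taste of the solution methods, temporal-difference learning admits a very small sketch: tabular TD(0) on the 5-state random-walk chain, a standard illustrative example rather than code from the tutorial. Episodes start in the middle state, moves are random, and reward 1 is given only for exiting on the right; the learned values approximate the right-exit probabilities:

```python
import random

def td0(episodes=500, alpha=0.1, gamma=1.0):
    # Tabular TD(0) on a 5-state random-walk chain (states 0..4).
    # Each step: V[s] += alpha * (r + gamma * V[s'] - V[s]),
    # with terminal value 0 and reward 1 only on the right exit.
    V = [0.0] * 5
    for _ in range(episodes):
        s = 2
        while True:
            s2 = s + random.choice((-1, 1))
            if s2 < 0:                          # exit left, reward 0
                V[s] += alpha * (0 - V[s]); break
            if s2 > 4:                          # exit right, reward 1
                V[s] += alpha * (1 - V[s]); break
            V[s] += alpha * (gamma * V[s2] - V[s])
            s = s2
    return V

random.seed(0)
V = td0()   # approximates the true values 1/6, 2/6, 3/6, 4/6, 5/6
```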
Richard S. Sutton received the B.A. degree in psychology from Stanford
University in 1978 and the M.S. and Ph.D. degrees in Computer Science from
the University of Massachusetts in 1980 and 1984. He then worked for nine
years at GTE Laboratories in Waltham, where he was principal investigator
of their connectionist machine learning project. Since 1995 he has been
a senior research scientist in the computer science department at the University
of Massachusetts, Amherst. Dr. Sutton's research interests center on the
learning problems facing a decision-maker interacting with its environment.
He is one of the founders of the field of reinforcement learning and authored
the original paper on temporal-difference learning. He is the author with
Andrew Barto of the textbook Reinforcement Learning: An Introduction. He
is also interested in animal learning psychology, neural networks, and systems
that continually improve their representations and models of the world.
www.genetic-programming.org