Paper Presentation Schedule  
   

Paper presentation schedule PDF File



Paper Presentations                                                      Friday 10 July 8:30 – 10:10

 

GP-2: Building Blocks

Room: Cartier A

Session Chair: Riccardo Poli (University of Essex)                                          

8:30–8:55         Functional Modularity for Genetic Programming

Krzysztof Krawiec, Bartosz Wieloch

Genetic Programming, Modularity, Problem Decomposition

In this paper we introduce, formalize, and experimentally validate a novel concept of functional modularity for Genetic Programming (GP). We rely on the module definition that is most natural for GP: a piece of program code (subtree). However, as opposed to syntax-based approaches that abstract from the actual computation performed by a module, we also analyze its semantics using a set of fitness cases. In particular, the central notion of this approach is the subgoal, an entity that embodies a module's desired semantics and is used to evaluate module candidates. As the cardinality of the space of all subgoals is exponential with respect to the number of fitness cases, we introduce monotonicity to assess a subgoal's potential utility for searching for good modules. For a given subgoal and a sample of modules, monotonicity measures the correlation between the subgoal's distance from a module's semantics and the fitness of the solution the module is part of. In the experimental part we demonstrate how these concepts may be used to describe and quantify the modularity of two simple problems of Boolean function synthesis. In particular, we conclude that monotonicity usefully differentiates two problems with a different nature of modularity, allows us to tell apart the useful subgoals from the other ones, and may potentially be used for problem decomposition and to enhance the efficiency of evolutionary search.
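
As an illustration of the monotonicity measure described above, the sketch below computes a correlation between a module's distance to a subgoal and the fitness of the enclosing solution. Hamming distance on Boolean semantics and Pearson correlation are assumptions for the sketch, not necessarily the authors' exact choices:

    from statistics import correlation  # Python 3.10+

    def hamming(a, b):
        """Distance between two semantics, given as tuples of outputs on the fitness cases."""
        return sum(x != y for x, y in zip(a, b))

    def monotonicity(subgoal, modules):
        """`modules` is a sample of (module_semantics, fitness_of_enclosing_solution) pairs.
        Returns the negated correlation between distance-to-subgoal and fitness, so a high
        value means: the closer a module's semantics is to the subgoal, the fitter the solution."""
        distances = [hamming(subgoal, semantics) for semantics, _ in modules]
        fitnesses = [fitness for _, fitness in modules]
        return -correlation(distances, fitnesses)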

8:55–9:20         How Online Simplification Affects Building Blocks in Genetic Programming

David Kinzett, Mark Johnston, Mengjie Zhang

Genetic Programming, Simplification, Code Bloat, Building Blocks

This paper investigates the effect of two online program simplification methods in genetic programming on building blocks during evolution. The two simplification methods considered are algebraic simplification and numerical simplification. The building blocks considered are of a more general form (two- and three-level subtrees) than numeric constants only. Unlike most existing work, which often uses simple symbolic regression tasks, this work considers classification tasks as examples. We develop a new method for encoding possible building blocks for the analysis. The results show that the two online program simplification methods can generate new, diverse building blocks during evolution, although they also destroy existing ones, and that many of the existing building blocks are retained during evolution. Compared with the canonical genetic programming method, the two simplification methods can generate much smaller programs, use much shorter evolutionary training time and achieve comparable effectiveness performance.

9:20–9:45         Discovering a Domain Alphabet

Michael D Schmidt, Hod Lipson

Building Blocks, Domain Alphabet, Symbolic Regression

A key to the success of any genetic programming process is the use of a good alphabet of atomic building blocks from which solutions can be evolved efficiently. An alphabet that is too granular may generate an unnecessarily large search space; an inappropriately coarse grained alphabet may bias or prevent finding optimal solutions. Here we introduce a method that automatically identifies a small alphabet for a problem domain. We process solutions on the complexity-optimality Pareto front of a number of sample systems and identify terms that appear significantly more frequently than merited by their size. These terms are then used as basic building blocks to solve new problems in the same problem domain. We demonstrate this process on symbolic regression for a variety of physics problems. The method discovers key terms relating to concepts such as energy and momentum. A significant performance enhancement is demonstrated when these terms are then used as basic building blocks on new physics problems. We suggest that identifying a problem-specific alphabet is key to scaling evolutionary methods to higher complexity systems.

9:45–10:10       Program Optimization by Random Tree Sampling

Makoto Tanji, Hitoshi Iba

Genetic Programming, Program Sampling, Fragment Preservation, Recombination Operator

This paper describes a new program evolution method named PORTS (Program Optimization by Random Tree Sampling), which is motivated by the idea of preservation and control of tree fragments. We hypothesize that to reconstruct building blocks efficiently, tree fragments of any size should be preserved into the next generation according to their differential fitnesses. PORTS creates a new individual by sampling from the promising trees, traversing and transitioning between trees instead of applying subtree crossover and mutation. Because the size of a fragment preserved during a generation update follows a geometric distribution, merits of the method are that it is relatively easy to predict the behavior of tree fragments over time and to control sampling size by changing a single parameter. Our experimental results on three benchmark problems show that the performance of PORTS is competitive with SGP (Simple Genetic Programming). We also observed a significant difference in fragment distribution between PORTS and simple GP.
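
Since the abstract states that preserved fragment sizes follow a geometric distribution controlled by a single parameter, the control knob can be sketched as below; the parameter name, the stopping rule, and the cap are assumptions for illustration, not the authors' notation:

    import random

    def sample_fragment_size(p, max_size=50):
        """Number of nodes kept from one tree before transitioning to another tree;
        the size is geometrically distributed with continuation probability p."""
        size = 1
        while random.random() < p and size < max_size:
            size += 1
        return size

    # Larger p -> longer preserved fragments, smaller p -> more mixing between trees.
    sizes = [sample_fragment_size(p=0.8) for _ in range(5)]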

 

GA-1: Best Paper Nominees

Room: Cartier B

Session Chair: Günther R. Raidl (Vienna University of Technology)                       

8:30–8:55         On the Significance of the Permutation Problem in Neuroevolution ★

Stefan Haflidason, Richard Neville

Genetic Algorithms, Neural Networks, Permutation Problem

In this paper we investigate the impact of the Permutation Problem on a standard Genetic Algorithm evolving neural networks for a difficult control problem. Through the use of Price's equation and an explicit enumeration of permutations within the population we demonstrate that for the given problem and representation the Permutation Problem is not as serious a concern as previously thought. In addition we present the concept of incompatible representations as a more useful guide for what to avoid in the evolution of neural networks.

8:55–9:20         Analysis of Coevolution for Worst-Case Optimization ★

Philipp Stuermer, Anthony Bucci, Juergen Branke, Pablo Funes, Elena Popovici

coevolution, coevolutionary algorithms, worst-case optimization, best worst-case, Minimax, dynamics analysis

The problem of finding entities with the best worst-case performance across multiple scenarios arises in domains ranging from job shop scheduling to designing physical artifacts. In spite of previous successful applications of evolutionary computation techniques, particularly coevolution, to such domains, little work has examined utilizing coevolution for optimizing worst-case behavior. Previous work assesses certain algorithm mechanisms using aggregate performance on test problems. We examine fitness and population trajectories of individual algorithm runs, making two observations: first, that aggregate plots wash out important effects that call into question what these algorithms can produce; and second, that none of the mechanisms is generally better than the rest. More importantly, our dynamics analysis explains how the interplay of algorithm properties and problem properties influences performance. These contributions argue in favor of a reassessment of what makes for a good worst-case coevolutionary algorithm and suggest how to design one.

9:20–9:45         Maximal Age in Randomized Search Heuristics with Aging ★

Christian Horoba, Thomas Jansen, Christine Zarges

aging, evolutionary algorithms, genetic algorithms, immune algorithms, runtime analysis

The concept of aging has been introduced and applied in many different variants in many different randomized search heuristics. The most important parameter is the maximal age of search points. Considering static pure aging, known from artificial immune systems, in the context of simple evolutionary algorithms, it is demonstrated that the choice of this parameter is both crucial for the performance and difficult to set appropriately. The results are derived in a rigorous fashion and given as theorems with formal proofs. An additional contribution is the presentation of a general method to combine fitness functions into a function with stronger properties than its components. By applying this method we combine a function where the maximal age needs to be sufficiently large with a function where the maximal age needs to be sufficiently small. This yields a function where an appropriate age lies within a very narrow range.

9:45–10:10       Tunneling Between Optima: Partition Crossover for the Traveling Salesman Problem ★

Darrell Whitley, Adele Howe, Doug Hains

fitness landscape, Traveling Salesman Problem, recombination

A new recombination operator is introduced for the Traveling Salesman Problem called partition crossover. Theoretical and empirical results indicate that when two local optima are recombined using partition crossover, two offspring are produced that are highly likely to also be local optima. Thus, the operator is capable of jumping or tunneling from two local optima to two new and distinct local optima without searching intermediate solutions. The operator is respectful and it transmits alleles which means that 1) all common edges from the two parents are inherited and 2) the offspring are constructed using only edges inherited from the two parents. Partition crossover is not always feasible: sometimes two new Hamiltonian circuits cannot be constructed by the operator using only edges inherited from the two parents. But empirical results indicate that partition crossover is feasible 95 percent of the time when recombining randomly selected local optima. Furthermore, from a sample of local optima that are within a short random walk of the global optimum, partition crossover typically relocates the global optimum in a single move when crossover is feasible.
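
The two properties named in the abstract, respect and transmission of edges, can be expressed as simple set checks on tour edge sets. A minimal sketch with tours represented as city sequences (a simplified stand-in for the authors' data structures, not their implementation):

    def edges(tour):
        """Undirected edge set of a tour given as a sequence of cities."""
        n = len(tour)
        return {frozenset((tour[i], tour[(i + 1) % n])) for i in range(n)}

    def respectful_and_transmitting(parent1, parent2, child):
        """Check the two properties claimed for partition crossover offspring."""
        e1, e2, ec = edges(parent1), edges(parent2), edges(child)
        respects = (e1 & e2) <= ec    # all edges common to both parents are inherited
        transmits = ec <= (e1 | e2)   # the child uses only edges taken from its parents
        return respects and transmits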

 

Evolutionary Computation in Practice-1

Room: Bonsecours

Session Chair: To be announced                                                                       

8:30–10:10

 

RWA-2: Mechanical Engineering, Networks & Optimisation

Room: Victoria

Session Chair: Gregory S Hornby (UC Santa Cruz)                                         

8:30–8:55         Multi Material Topological Optimization of Structures and Mechanisms

Jonathan D Hiller, Hod Lipson

Topological optimization, Multi-material composites, Shape optimization, Solid freeform fabrication

Multi-material 3D-printing technologies permit the freeform fabrication of complex spatial arrangements of materials in arbitrary geometries. This technology has opened the door to a large mechanical design space with many novel yet non-intuitive possibilities. This space is not easily searched using conventional topological optimization methods such as homogenization. Here we present an evolutionary design process for three-dimensional multi-material structures that explores this design space and designs substructures tailored for custom functionalities. The algorithm is demonstrated for the design of 3D non-uniform beams and 3D compliant actuators.

8:55–9:20         Evolutionary Optimization of Multistage Interconnection Networks Performance

Jiri Jaros

collective communications, communication scheduling, evolutionary design, multistage interconnection networks

The paper deals with optimization of collective communications on multistage interconnection networks (MINs). In the experimental work, unidirectional MINs such as Omega, Butterfly and Clos are investigated. The study is completed by bidirectional binary, fat and full binary trees. To avoid link contentions and associated delays, collective communications are processed in synchronized steps. The minimum number of steps is sought for the given network topology, wormhole switching, minimum routing and given sets of sender and/or receiver nodes. The evolutionary algorithm proposed in this paper is able to design optimal schedules for broadcast and scatter collective communications. The acquired optimum schedules can simplify the subsequent writing of high-performance communication routines for application-specific networks on chip, or the development of communication libraries in the case of general-purpose multistage interconnection networks.

9:20–9:45         A Multiobjective Evolutionary Algorithm For The Task Based Sailor Assignment Problem

Dipankar Dasgupta, Fernando Nino, Deon Garrett, Koyel Chaudhuri, Soujanya Medapati, Aishwarya Kaushal, James Simien

Sailor Assignment problem, Multiobjective evolutionary algorithms, Task Based Sailor Assignment Problem, Hybrid metaheuristics, NSGA-II

This paper investigates a multiobjective formulation of the United States Navy's Task-based Sailor Assignment Problem and examines the performance of a multiobjective evolutionary algorithm (MOEA), called NSGA-II, on large instances of this problem. Our previous work [3, 5, 4] considered the sailor assignment problem (SAP) as a static assignment, while the present work treats it as a time-dependent multi-task SAP, making it a more complex problem, in fact an NP-complete one. Experimental results show that the presented genetic-based solution is appropriate for this problem.

9:45–10:10       Evolutionary Search and Convertible Agents For the Simultaneous Type and Dimensional Synthesis of Planar Mechanisms

John C. Oliva, Erik D. Goodman

Planar Mechanisms, Evolutionary Computing, Mechanism Synthesis, Optimization, Mechanical Engineering

In the field of mechanical engineering, synthesizing a mechanism to perform an intended task is deceptively complex. In this paper, a novel approach to automated mechanism synthesis is described which uses an evolutionary search algorithm and a technique called convertible agents to simultaneously find the most appropriate mechanism type for a given problem, while finding an optimum set of dimensions for that mechanism to complete a specified task. The search was limited to four-bar, Stephenson, and Watt types of planar, single-degree-of-freedom mechanisms, although the method is readily scalable to include any number of different types. Several case studies are described which illustrate the effectiveness of the method. The developed convertible agent approach is well suited for evolutionary design applications in which there are a small number of distinct topological possibilities each with parametric variables to be optimized.

 

SBSE-2: Search-Based Testing

Room: Versailles

Session Chair: Phil McMinn (University of Sheffield)                                       

8:30–8:55         Dealing with Inheritance in OO Evolutionary Testing

Javier Ferrer, Francisco Chicano, Enrique Alba

Software Testing, Object-Oriented, Evolutionary Algorithm, OO Evolutionary Testing, instanceof, Search Based Software Engineering

Most of the software developed in the world follows the object-oriented (OO) paradigm. However, the existing work on evolutionary testing is mainly targeted to procedural languages. All this work can be used with small changes on OO programs, but object orientation introduces new features that are not present in procedural languages. Some important issues are polymorphism and inheritance. In this paper we want to make a contribution to the inheritance field by proposing some approaches that use the information of the class hierarchy for helping test case generators to better guide the search. To the best of our knowledge, no work exists using this information to propose test cases. In this work we define a branch distance for logical expressions containing the instanceof operator in Java programs. In addition to the distance measure, we propose two mutation operators based on the distance. We study the behaviour of the mutation operators on a benchmark set composed of nine OO programs. The results show that the information collected from the class hierarchy helps in the search for test cases.
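
As an illustration of the kind of measure the abstract describes, a branch distance for a failing instanceof condition can be based on how far apart the target class and the object's actual class sit in the inheritance hierarchy. The sketch below is one plausible reading under that assumption (single inheritance, illustrative function names), not the authors' actual definition:

    def ancestors(cls):
        """Chain of classes from cls up to object (single inheritance assumed)."""
        chain = []
        while cls is not None:
            chain.append(cls)
            cls = cls.__base__
        return chain

    def instanceof_branch_distance(obj, target_cls):
        """0 when `obj instanceof target_cls` already holds; otherwise the number of
        inheritance levels separating target_cls from the nearest common ancestor of
        target_cls and the object's actual class."""
        if isinstance(obj, target_cls):
            return 0
        actual = set(ancestors(type(obj)))
        for level, cls in enumerate(ancestors(target_cls)):
            if cls in actual:
                return level
        return len(ancestors(target_cls))  # completely unrelated classes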

8:55–9:20         Insight Knowledge in Search Based Software Testing

Andrea Arcuri

Evolutionary Testing, Theory, Object-Oriented Software, Search Landscape

Software testing can be re-formulated as a search problem, hence search algorithms (e.g., Genetic Algorithms) can be used to tackle it. Most of the research so far has been of an empirical nature, in which novel proposed techniques have been validated on software testing benchmarks. However, little attention has been paid to understanding why meta-heuristics can be effective in software testing. This insight knowledge could be used to design novel, more successful techniques. Recent theoretical work has tried to fill this gap, but it is very complex to carry out. This has so far limited its scope to only small problems. In this paper, we want to get insight knowledge on a difficult software testing problem. We combine an empirical and a theoretical analysis, and we exploit the benefits of both.

9:20–9:45         MC/DC Automatic Test Input Data Generation

Zeina Awedikian, Kamel Ayari, Giuliano Antoniol

Test input data generation, Search based testing, MC/DC

In regulated domains such as aerospace and in safety-critical domains, software quality assurance is subject to strict regulation such as the RTCA DO-178B standard. Among other conditions, the DO-178B mandates the satisfaction of the modified condition/decision coverage (MC/DC) testing criterion for software where failure conditions may have catastrophic consequences. MC/DC is a white-box testing criterion aiming at proving that all conditions involved in a predicate can influence the predicate value in the desired way. In this paper, we propose a novel fitness function inspired by chaining test data generation to efficiently generate test input data satisfying the MC/DC criterion. Preliminary results show the superiority of the novel fitness function, which is able to avoid the plateaus of traditional white-box fitness functions that lead to behavior close to random testing.

9:45–10:10       Using Automated Search to Generate Test Data for Matlab

Sion Ll Rhys, Simon Poulding, John A Clark

Search-Based Software Engineering, Genetic Algorithms, Matlab

The critical functionality of many software applications relies on code that performs mathematically complex computations. However, such code is often difficult to test owing to the compound datatypes used and complicated mathematical operations performed. This paper proposes the use of automated search as an efficient means of generating test data for this type of software. Taking Matlab as an example of widely-used mathematical software, a technical framework is described that extends previous work on search-based test data generation in order to handle matrix datatypes and associated relational operators. An empirical evaluation demonstrates the feasibility of this approach.

 

EMO-2: Advanced Search Techniques

Room: St. Laurent

Session Chair: Joshua D Knowles (University of Manchester)                                

8:30–8:55         Evolutionary Multi-Objective Quantum Control Experiments with the Covariance Matrix Adaptation

Ofer M. Shir, Jonathan Roslund, Herschel Rabitz

Experimental Multi-Objective Optimization, Multi-Observable Quantum Control, MO-CMA-ES, Laser Pulse Shaping

Experimental multi-objective Quantum Control is an emerging topic within the broad physics and chemistry application domain of controlling quantum phenomena. This realm offers cutting edge ultrafast laser laboratory applications, which pose multiple objectives, noise, and possibly constraints on the high-dimensional search. In this study we introduce the topic of Multi-Objective Quantum Control (MOQC), and consider specific systems to be Pareto optimized subject to uncertainty (noise), either experimentally or by means of simulated systems. Unlike the vast majority of other reported systems, the current modeling of noise considers additive Gaussian noise on the input (decision) parameters, which propagates in an unknown manner to the observable (fitness) values. We employ the multi-objective version of the CMA-ES (MO-CMA), which, to the best of our knowledge, is applied here for the first time to a real-world experimental problem, and assess its performance on the investigated systems. In particular, we study its empirical behavior on the MOQC noisy systems, as well as on the Multi-Sphere model landscape, in light of previous theoretical studies on single-objective single-parent Evolution Strategies, and draw some practical conclusions concerning the projection of fitness disturbance on the perceived Pareto front and the need for parental fitness reevaluation in elitist strategies. We show that elitism diminishes the value of the archived Pareto set, even when the perceived Pareto front is well approximated to the true front.

8:55–9:20         Solving Complex High-Dimensional Problems with the Multi-Objective Neural Estimation of Distribution Algorithm

Luis Martí, Jesús García, Antonio Berlanga, José M. Molina

Estimation of Distribution Algorithms, Multi-objective Optimization, Growing Neural Gas

The multi-objective optimization neural estimation of distribution algorithm (MONEDA) was devised with the purpose of dealing with the model-building issues of MOEDAs and, therefore, addressing their scalability. In this paper we put forward a comprehensive set of experiments that intends to compare MONEDA with similar approaches when solving complex, community-accepted MOPs. In particular, we deal with the Walking Fish Group scalable test problem set (WFG). These tests aim to establish the optimizing capacity of MONEDA and its consistency as an optimization method. The fundamental conclusion of this assessment is that we provide strong evidence of the viability of MONEDA for handling hard and complex high-dimensional problems and of its superior performance when compared to similar approaches. Although further studies are obviously necessary, these extensive experiments have provided solid ground for the use of MONEDA in more ambitious real-world applications.

9:20–9:45         On the Hybridization of SMS-EMOA and Local Search for Continuous Multiobjective Optimization

Patrick Koch, Oliver Kramer, Günter Rudolph, Nicola Beume

Hybrid Evolutionary Multiobjective Algorithm, Local Search, Memetic Algorithms, Hybrid Metaheuristics

In the recent past, hybrid metaheuristics have become popular as successful optimization methods. The motivation for hybridization is the notion of combining the best of two worlds: evolutionary black-box optimization and local search. Successful hybridizations in large combinatorial solution spaces motivate transferring the idea of combining the two worlds to continuous domains as well. The question arises: can local search also improve convergence to the Pareto front in continuous multiobjective solution spaces? We introduce a relay and a concurrent hybridization of the successful multiobjective optimizer SMS-EMOA and local optimization methods like Hooke & Jeeves and the Newton method. The concurrent approach is based on a parameterized probability function to control the local search. Experimental analyses on academic test functions show increased convergence speed as well as improved accuracy of the solution set of the new hybridizations.
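
The concurrent variant described here gates local search by a parameterized probability function. A minimal sketch of that control mechanism; the probability function, its parameters and the local searcher are placeholders, not the paper's exact formulation:

    import random

    def maybe_local_search(offspring, generation, local_search, p0=0.1, decay=0.99):
        """Apply local search to an offspring with a probability controlled by two
        parameters: an initial rate p0 and a per-generation decay factor."""
        p = p0 * (decay ** generation)
        return local_search(offspring) if random.random() < p else offspring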

9:45–10:10       Using a Distance Metric to Guide PSO Algorithms for Many-Objective Optimization

Upali K Wickramasinghe, Xiaodong Li

Particle swarm optimization, Many-objective optimization, Multi-objective optimization, User-preference methods, Reference point method, Light beam search

In this paper we propose to use a distance metric based on user-preferences to efficiently find solutions for many-objective problems. We use a particle swarm optimization (PSO) algorithm as a baseline to demonstrate the usefulness of this distance metric, though the metric can be used in conjunction with any evolutionary multi-objective (EMO) algorithm. Existing user-preference based EMO algorithms rely on the use of dominance comparisons to explore the search-space. Unfortunately, this is ineffective and computationally expensive for many-objective problems. In the proposed distance metric based PSO, particles update their positions and velocities according to their closeness to preferred regions in the objective-space, as specified by the decision maker. The proposed distance metric allows an EMO algorithm's search to be more effective especially for many-objective problems, and to be more focused on the preferred regions, saving substantial computational cost. We demonstrate how to use a distance metric with two user-preference based PSO algorithms, which implement the reference point and light beam search methods. These algorithms are compared to a user-preference based PSO algorithm relying on the conventional dominance comparisons. Experimental results suggest that the distance metric based algorithms are more effective and efficient especially for difficult many-objective problems.
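
The core ingredient described here, closeness to a decision-maker's preferred region in objective space, can be sketched as a reference-point distance used to rank particles. This is a simplified stand-in for the paper's metric (function names and the Euclidean choice are assumptions):

    import math

    def preference_distance(objectives, reference_point):
        """Euclidean distance between a particle's objective vector and the
        decision-maker's reference point; smaller means closer to the preferred region."""
        return math.sqrt(sum((f - r) ** 2 for f, r in zip(objectives, reference_point)))

    def select_leader(swarm_objectives, reference_point):
        """Pick the particle closest to the preferred region as the swarm leader."""
        return min(range(len(swarm_objectives)),
                   key=lambda i: preference_distance(swarm_objectives[i], reference_point))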

 

RWA-3: Finance 1

Room: St. Charles

Session Chair: Ian Dempsey (UCD/Pipeline Financial Group, Inc.)                         

8:30–8:55         Soft Memory for Stock Market Analysis using Linear and Developmental Genetic Programming

Garnett Wilson, Wolfgang Banzhaf

linear genetic programming, financial analysis, developmental genetic programming

Recently, a form of memory usage called soft memory was introduced for genetic programming (GP). Rather than having a new value completely overwrite the old value in a register, soft memory combines the new and old register values. This work examines the performance of a soft memory linear GP and developmental GP implementation for stock trading. Soft memory is known to adapt solutions more slowly than traditional GP. Thus, it was expected to perform well on stock data, which typically exhibit local turbulence in combination with an overall longer-term trend. While soft memory and standard memory were both found to provide similarly impressive accuracy in buys that produced profit and sells that prevented losses, the softer memory settings traded more actively. The trading of the softer memory systems produced less substantial cumulative gains than traditional memory settings for the stocks tested with climbing share price trends. However, the trading activity of the softer memory settings had moderate benefits in terms of cumulative profit compared to a buy-and-hold strategy for share price trends involving a drop in prices followed later by gains.
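
The soft-memory idea, blending a register's old value with the newly written one instead of overwriting it, can be sketched as a single update rule. The blend parameter name is an assumption for illustration:

    def soft_assign(registers, index, new_value, gamma=0.5):
        """Soft memory write: the register keeps part of its previous value.
        gamma = 1.0 reduces to the standard (hard) overwrite."""
        registers[index] = gamma * new_value + (1.0 - gamma) * registers[index]
        return registers[index]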

8:55–9:20         Behavioural GP Diversity for Adaptive Stock Selection

W Yan, Christopher D Clack

Genetic Programming, Diversity, Phenotype, Finance, Robustness, Dynamic Environment

We present a new mechanism for preserving phenotypic behavioural diversity in Genetic Programming. We provide a real-world case study for hedge fund portfolio optimization, and experimental results on real-world data that indicate the importance of phenotypic behavioural diversity both in achieving higher fitness and in improving the robustness of the GP population for continuous learning.

9:20–9:45         Robustness of Multiple Objective GP Stock-Picking in Unstable Financial Markets

Ghada Hassan, Christopher D Clack

GP, Multiobjective Optimization, Robustness, Portfolio Optimisation, Finance, Dynamic Environment

Multiple Objective Genetic Programming (MOGP) is a promising stock-picking technique for fund managers, because the Pareto front approximates the risk/reward Efficient Frontier and simplifies the choice of investment model for a given client's attitude to risk. Unfortunately GP solutions don't work well if used in an environment that is different from the training environment, and the financial markets are notoriously unstable, often lurching from one market context to another (e.g. ``bull'' to ``bear''). This turns out to be a hard problem --- simple dynamic adaptation methods are insufficient and robust behaviour of solutions becomes extremely important. In this paper we provide the first known empirical results on the robustness of MOGP solutions in an unseen environment consisting of real-world financial data. We focus on two well-known mechanisms to determine which leads to the more robust solutions: Mating Restriction, and Diversity Preservation. We introduce novel metrics for Pareto front robustness, and a novel variation on Mating Restriction, both based on phenotypic cluster analysis.

 

GBML-2: Learning Classifier Systems - Knowledge Representations

Room: Les Courants

Session Chair: Stewart W Wilson (Prediction Dynamics)                                 

8:30–8:55         The Multi-label OCS with a Genetic Algorithm for Rule Discovery: Implementation and First Results

Rosane Maria Maffei Vallim, Thyago S. P. C. Duque, David E. Goldberg, André C. P. L. F. Carvalho

OCS, multi-label classification, LCS, GA

Learning Classifier Systems (LCSs) are rule-based systems that can be manipulated by a genetic algorithm. LCSs were first designed by Holland to solve classification problems, and a lot of effort has been made since then, resulting in a broad number of different algorithms. One of these is the Organizational Classifier System (OCS), an LCS that tries to organize its rule set by favoring the grouping of good rules in the same organization. However, the original proposal of OCS did not include a rule discovery mechanism. Recently, OCS was applied to multi-label classification, a type of classification where one instance can have more than one associated label. The authors represented the multi-label classification problem as a default hierarchy and combined the organizational capabilities of OCS with Smith's default hierarchy formation theory to solve a simple multi-label problem.

The purpose of this paper is to extend this idea with the inclusion of a genetic algorithm for the discovery of new rules and present some initial results obtained using the new method. The preliminary results obtained show that the method is comparable to other multi-label techniques. Final discussions present the conclusions of the work and some directions for further research.

8:55–9:20         Discrete Dynamical Genetic Programming

Richard Preen, Larry Bull

Learning Classifier Systems, XCS, Random Boolean Networks, Reinforcement Learning, Self-Adaptation

A number of representation schemes have been presented for use within Learning Classifier Systems, ranging from binary encodings to neural networks. This paper presents results from an investigation into using a discrete dynamical system representation within the XCS Learning Classifier System. In particular, asynchronous random Boolean networks are used to represent the traditional condition-action production system rules. It is shown possible to use self-adaptive, open-ended evolution to design an ensemble of such discrete dynamical systems within XCS to solve a number of well-known test problems.

9:20–9:45         Towards Continuous Actions in Continuous Space and Time using Self-Adaptive Constructivism in Neural XCSF

Gerard Howard, Larry Bull, Pier-Luca Lanzi

Constructivism, Learning Classifier Systems, Neural Networks, Self-adaptation, Continuous Environments

This paper presents a Learning Classifier System (LCS) where each classifier condition is represented by a feed-forward multi-layered perceptron (MLP) network. Adaptive behavior is realized through the use of self-adaptive parameters and neural constructivism, providing the system with a flexible knowledge representation. The approach allows for the evolution of networks of appropriate complexity to solve a continuous maze environment, here using either discrete-valued actions, continuous-valued actions, or continuous-valued actions of continuous duration. In each case, it is shown that the neural LCS employed is capable of developing optimal solutions to the reinforcement learning task presented in this paper.

9:45–10:10       uQFCS: QFCS with Unfixed Fuzzy Sets in Continuous Multi-Step Environments with Continuous Vector Actions

José Abdón Ramírez-Ruiz, Manuel Valenzuela-Rendón, Hugo Terashima-Marín

Induction Theory, Genetic Algorithm, Fuzzy Classifier Systems, Fuzzy Logic, Learning Classifier Systems

uQFCS is a generalization of the previously presented QFCS in which the condition of fixed fuzzy sets imposed on QFCS is eliminated. Therefore, these fuzzy sets are evolved together with the action parts of the fuzzy rules. uQFCS can also solve the multi-step reinforcement learning problem in continuous environments and with a set of continuous vector actions. This paper presents results showing that uQFCS can also evolve rules that represent only those parts of the input and action space where the expected values are important for making decisions. Results for uQFCS are compared with those obtained by Q-learning and QFCS. uQFCS has similar performance to QFCS. uQFCS was tested on the Frog Problem and on five versions of the n-Environment Problem, two of which are problems of one inertial particle.

 

COM-2: Applications

Room: Verriere A

Session Chair: Carlos Cotta (University of Málaga)                                          

8:30–8:55         Comparisons between an Exact and a MetaHeuristic Algorithm for the Molecular Distance Geometry Problem

Antonio Mucherino, Leo Liberti, Carlile Lavor, Nelson Maculan

protein molecules, distance geometry, combinatorial optimization, branch and prune, monkey search

We consider the Discretizable Molecular Distance Geometry Problem (DMDGP), which consists of a subclass of instances of the distance geometry problem related to molecular conformations for which a combinatorial reformulation can be supplied. We investigate the performance of two different algorithms for solving the DMDGP. The first one is the Branch and Prune (BP) algorithm, an exact algorithm that is strongly based on the structure of the combinatorial problem. The second one is the Monkey Search (MS) algorithm, a meta-heuristic algorithm that is inspired by the behavior of a monkey climbing trees in search of food supplies, and that exploits ideas and strategies from other meta-heuristic searches, such as Genetic Algorithms, Differential Evolution, and so on. The comparison between the two algorithms is performed on a set of instances related to protein conformations. The instances used simulate data obtained from Nuclear Magnetic Resonance (NMR): the typical distances provided by NMR are considered and a predetermined number of wrong distances are included.

8:55–9:20         A Hybrid Genetic Algorithm for a Variant of Two-Dimensional Packing Problem

Jin Kim, Byung-Ro Moon

breadth-first search, geographic crossover, hybrid genetic algorithm, local search, packing, two-dimensional

A variant of two-dimensional packing problem was given in the GECCO'2008 competition. This paper describes the genetic algorithm that produced the best result and thus won the No. 1 prize. As the problem is naturally represented by a two-dimensional chromosome, two-dimensional crossovers are used to generate more diverse chromosomes and effectively maintain geographical linkage among genes. We developed a local search heuristic based on the breadth-first search algorithm; we describe how to implement the heuristic efficiently using problem-specific knowledge. The local search was combined with a steady-state genetic algorithm and the combination showed strong synergy.

9:20–9:45         A Cooperative and Self-adaptive Metaheuristic for the Facility Location Problem

David Meignan, Jean-Charles Créput, Abderrafiaa Koukam

combinatorial optimization, metaheuristic, multiagent system, facility location problem

This paper presents a coalition-based metaheuristic (CBM) to solve the uncapacitated facility location problem. CBM is a population-based metaheuristic where individuals encapsulate a single solution and are considered as agents. In comparison to classical evolutionary algorithms, these agents have additional capacities of decision, learning and cooperation. Our approach is also a case study presenting how concepts from the multiagent systems domain may contribute to the design of new metaheuristics. The tackled problem is a well-known combinatorial optimization problem, namely the uncapacitated facility location problem, which consists in determining the sites at which facilities must be set up to satisfy the requirements of a client set at minimum cost. A computational experiment is conducted to test the performance of the learning mechanisms and to compare our approach with several existing metaheuristics. The results show that CBM is competitive with powerful heuristic approaches and presents several advantages in terms of flexibility and modularity.

9:45–10:10       Meta-Heuristics for Reconstructing Cross Cut Shredded Text Documents

Matthias Prandtstetter, Günther R. Raidl

ant colony optimization, variable neighborhood search, integer linear programming, document reconstruction

In this work, we present two new approaches based on variable neighborhood search (VNS) and ant colony optimization (ACO) for the reconstruction of cross cut shredded text documents. For quickly obtaining initial solutions, we consider four different construction heuristics. While one of them is based on the well known algorithm of Prim, another one tries to match shreds according to the similarity of their borders. Two further construction heuristics rely on the fact that in most cases the left and right edges of paper documents are blank, i.e. no text is written on them. Randomized variants of these construction heuristics are applied within the ACO. Experimental tests reveal that regarding the solution quality the proposed ACO variants perform better than the VNS approaches in most cases, while the running times needed are shorter for VNS. The high potential of these approaches for reconstructing cross cut shredded text documents is underlined by the obtained results.

 

ALIFE-1: Best Paper Nominees

Room: Verriere B

Session Chair: Kenneth De Jong (George Mason University)                                 

8:30–8:55         How Novelty Search Escapes the Deceptive Trap of Learning to Learn ★

Sebastian Risi, Sandy D Vanderbleek, Charles E Hughes, Kenneth O Stanley

Novelty Search, Neural Networks, Adaptation, Learning, Neuromodulation, Neuroevolution, NEAT

A major goal for researchers in neuroevolution is to evolve artificial neural networks (ANNs) that can learn during their lifetime. Such networks can adapt to changes in their environment that evolution on its own cannot anticipate. However, a profound problem with evolving adaptive systems is that if the impact of learning on the fitness of the agent is only marginal, then evolution is likely to produce individuals that do not exhibit the desired adaptive behavior. Instead, because it is easier at first to improve fitness without evolving the ability to learn, they are likely to exploit domain-dependent static (i.e. non-adaptive) heuristics. This paper proposes a way to escape the deceptive trap of static policies based on the novelty search algorithm, which opens up a new avenue in the evolution of adaptive systems because it can exploit the behavioral difference between learning and non-learning individuals. The main idea in novelty search is to abandon objective-based fitness and instead simply search only for novel behavior, which avoids deception entirely and has shown prior promising results in other domains. This paper shows that novelty search significantly outperforms fitness-based search in a tunably deceptive T-Maze navigation domain because it fosters the emergence of adaptive behavior.
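
The novelty score that replaces objective fitness in novelty search is commonly computed as the average distance to the k nearest behaviors seen so far. A minimal sketch under that standard formulation; the distance function and the value of k are assumptions, not necessarily this paper's settings:

    def euclidean(a, b):
        """Distance between two behavior characterizations (vectors of equal length)."""
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    def novelty(behavior, seen_behaviors, k=15):
        """Average distance from `behavior` to its k nearest neighbours among the
        behaviors seen so far (novelty archive plus current population)."""
        if not seen_behaviors:
            return float("inf")
        nearest = sorted(euclidean(behavior, other) for other in seen_behaviors)[:k]
        return sum(nearest) / len(nearest)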

8:55–9:20         Evolution of Robust Data Distribution Among Digital Organisms ★

David B. Knoester, Andres J. Ramirez, Philip K. McKinley, Betty H.C. Cheng

Digital evolution, cooperative behavior, natural selection, multilevel selection, mutation, germline, biologically-inspired computing, communication, distributed systems

This paper describes a study of the evolution of robust communication, specifically the distribution of data among individuals in a population, using digital evolution. In digital evolution, a population of self-replicating computer programs exists in a user-defined computational environment and is subject to instruction-level mutations and natural selection. To encourage the evolution of this cooperative behavior, we make use of "digital germlines," a form of group-level selection similar to multicellularity in biology. The results of experiments using the Avida platform for digital evolution demonstrate that populations of digital organisms are capable of evolving to distribute data in a network, and that through the application of different selective pressures, these digital organisms can overcome communication obstacles such as message loss, limited bandwidth, and node failure.

9:20–9:45         Sustaining Diversity using Behavioral Information Distance ★

Faustino J Gomez

Genetic Algorithms, behavioral diversity, recurrent neural networks, Tartarus, normalized compression distance, Kolmogorov complexity

Conventional similarity metrics used to sustain diversity in evolving populations are not well suited to sequential decision tasks. Genotypes and phenotypic structure are poor predictors of how solutions will actually behave in the environment. In this paper, we propose measuring similarity directly on the behavioral trajectories of evolving candidate policies using a universal similarity measure based on algorithmic information theory: normalized compression distance (NCD). NCD is compared to four other similarity measures in both genotype and phenotype space on the POMDP Tartarus problem, and shown to produce the most fit, general, and complex solutions.
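
Normalized compression distance has a compact standard definition, NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)), where C is the compressed length. It can be sketched directly with an off-the-shelf compressor; zlib is used here for illustration and is not necessarily the compressor used in the paper:

    import zlib

    def C(data: bytes) -> int:
        """Compressed length, used as a practical approximation of Kolmogorov complexity."""
        return len(zlib.compress(data, 9))

    def ncd(x: bytes, y: bytes) -> float:
        """Normalized compression distance between two behavioral trajectories
        serialized as byte strings."""
        cx, cy, cxy = C(x), C(y), C(x + y)
        return (cxy - min(cx, cy)) / max(cx, cy)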

 

Paper Presentations                                                      Friday 10 July 10:40 – 12:20

 

 

GP-3: Classification and Agents

Room: Cartier A

Session Chair: Sean Luke (George Mason University)                                     

10:40–11:05     Pareto Front Feature Selection: Using Genetic Programming to Explore Feature Space

Kourosh Neshatian, Mengjie Zhang

Genetic Programming, Feature Subset Selection, Filter Approach, Pareto Front

In this paper we use genetic programming (GP) for feature selection in binary classification tasks. Mathematical expressions built by GP transform the feature space in a way that the relevance of subsets of features can be measured using a simple relevance function. We make some modifications to the standard GP to make it explore large subsets of features when necessary. This is done by increasing the depth limit at run-time and at the same time trying to avoid bloating and overfitting by some control mechanism. We take a filter (non-wrapper) approach to exploring the search space. Unlike most filter methods that usually deal with single features, we explore subsets of features. The solution of the proposed search is a vector of Pareto-front points. Our experiments show that a linear search over this vector can improve the classification performance of classifiers while decreasing their complexity.

11:05–11:30     Genetic Programming for Protein Related Text Classification

Marc Segond, Cyril Fonlupt, Denis Robilliard

Evolutionary algorithms, Text mining, Genetic programming

Since the genomics revolution, bioinformatics has never been so popular. Many researchers have investigated with great success the use of evolutionary computation in bioinformatics, for example in the field of protein folding or in determining genome sequences. In this paper, instead of using evolutionary computation as a way to provide new and innovative solutions to complex bioinformatics problems, we use genetic programming as a tool to evolve programs that are able to automatically classify research papers as dealing or not dealing with a given protein. In a second part, we show that the attributes selected by the genetic programming evolved programs can be used efficiently for protein classification.

11:30–11:55     Evolving an Autonomous Agent for non-Markovian Reinforcement Learning

Jae-Yoon Jung, James A. Reggia

Descriptive Encoding, Reinforcement Learning, Evolution Strategy, Genetic Programming

In this paper, we investigate the use of nested evolution in which each step of one evolutionary process involves running a second evolutionary process. We apply this approach to build an evolutionary system for reinforcement learning (RL) problems. Genetic programming based on a descriptive encoding is used to evolve the neural architecture, while an evolution strategy is used to evolve the connection weights. We test this method on a non-Markovian RL problem involving an autonomous foraging agent, finding that the evolved networks significantly outperform a rule-based agent serving as a control. We also demonstrate that nested evolution, partitioning into subpopulations, and crossover operations all act synergistically in improving performance in this context.

11:55–12:20     Evolution of Team Composition in Multi-agent Systems

Joshua Rubini, Robert B Heckendorn, Terence Soule

autonomous vehicles, cooperation, teams

Evolution of multi-agent teams has been shown to be an effective method of solving complex problems involving the exploration of an unknown problem space. These autonomous and heterogeneous agents are able to go places where humans are unable to go and perform tasks that would otherwise be dangerous or impossible to complete. This research tests the ability of the Orthogonal Evolution of Teams (OET) algorithm to evolve heterogeneous teams of agents which can change their composition, i.e. the numbers of each type of agent on a team. The results show that OET can effectively produce both the correct team composition and a team for that composition that is competitive with teams evolved with OET where the composition was fixed a priori.

 

GA-2: Algorithm Design I

Room: Cartier B

Session Chair: Stephanie Forrest (University of New Mexico)                                

10:40–11:05     Improving Genetic Algorithms Performance via Deterministic Population Shrinkage

Juan Luís J. Laredo, Carlos Fernandes, Juan Julián Merelo, Christian Gagné

Adaptation, Self-adaptation, Genetic Algorithms, Parameter Tuning, Performance Analysis

Despite the intuition that the same population size is not needed throughout the run of an Evolutionary Algorithm (EA), most EAs use a fixed population size. This paper presents an empirical study on the possible benefits of a Simple Variable Population Sizing (SVPS) scheme for the performance of Genetic Algorithms (GAs). It consists of decreasing the population size during a GA run following a predetermined schedule, configured by a speed and a severity parameter. The method uses as its initial population size an estimation of the minimum size needed to supply enough building blocks, using a fixed-size selectorecombinative GA converging within some confidence interval toward good solutions for a particular problem. Following this methodology, a scalability analysis is conducted on deceptive, quasi-deceptive, and non-deceptive trap functions in order to assess whether SVPS-GA improves performance compared to a fixed-size GA under different problem instances and difficulty levels. Results show several combinations of speed and severity where SVPS-GA preserves the solution quality while improving performance by reducing the number of evaluations needed for success.
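
The variable population sizing scheme is described as a predetermined shrinkage schedule driven by a speed and a severity parameter. One plausible reading of such a schedule is sketched below; the exact functional form and parameter semantics are assumptions, not the paper's definition:

    def svps_population_size(initial_size, generation, speed=0.05, severity=0.5):
        """Deterministically shrink the population: `severity` bounds the total fraction
        of the initial population that can be removed, and `speed` controls how quickly
        that reduction is applied over the generations."""
        remaining_fraction = 1.0 - severity * min(1.0, speed * generation)
        return max(2, int(initial_size * remaining_fraction))

    # Population decays from 100 toward 50 (severity=0.5) over about 20 generations (speed=0.05).
    sizes = [svps_population_size(100, g) for g in range(0, 30, 5)]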

11:05–11:30     Analysis of Adaptive Operator Selection Techniques on the Royal Road and Long K-Path Problems

Álvaro Fialho, Marc Schoenauer, Michèle Sebag

Parameter Control, Genetic Algorithms, Adaptive Operator Selection

One of the choices that most affect the performance of Evolutionary Algorithms is the selection of the variation operators that are efficient to solve the problem at hand. This work presents an empirical analysis of different Adaptive Operator Selection (AOS) methods, i.e., techniques that automatically select the operator to be applied among the available ones, while searching for the solution. Four previously published operator selection rules are combined to four different credit assignment mechanisms. These 16 AOS combinations are analyzed and compared in the light of two well-known benchmark problems in Evolutionary Computation, the Royal Road and the Long K-Path.

11:30–11:55     An Evolutionary Algorithm with Species-specific Explosion for Multimodal Optimization

Ka-Chun Wong, Kwong-Sak Leung, Man-Hon Wong

Evolutionary Algorithm, Genetic Algorithm, Multimodal Optimization, Function Optimization, Species-specific Explosion, Species Conserving Genetic Algorithm

This paper presents an evolutionary algorithm, which we call Evolutionary Algorithm with Species-specific Explosion (EASE), for multimodal optimization. EASE is built on the Species Conserving Genetic Algorithm (SCGA), and the design is improved in several ways. In particular, it not only identifies species seeds, but also exploits the species seeds to create multiple mutated copies in order to further converge to the respective optimum for each species. Experiments were conducted to compare EASE and SCGA on four benchmark functions. Cross-comparison with recent rival techniques on another five benchmark functions was also reported. The results reveal that EASE has a competitive edge over the other algorithms tested.

11:55–12:20     Adaptation of Expansion Rate for Real-coded Crossovers

Youhei Akimoto, Jun Sakuma, Isao Ono, Shigenobu Kobayashi

Real-coded Genetic Algorithms, Function Optimization, Premature Convergence, Adaptation of Expansion Rate

Premature convergence is one of the most notable obstacles that GAs face. Once it happens, GAs cannot generate candidate solutions globally, and the solutions are finally trapped in local minima. To overcome it, we propose a mechanism that indirectly controls the variety of the population. It is realized by adapting the expansion rate parameter of crossovers, which determines the variance of the crossover distribution. The resulting algorithm is called adaptation of expansion rate (AER). The performance of the proposed method is compared to an existing GA on several benchmark functions, including functions whose landscapes have ridge or multimodal structures. On these functions, existing GAs are likely to exhibit premature convergence. The experimental results show our approach outperforms the existing one on deceptive functions without disturbing the performance on comparatively easy problems.

 

Evolutionary Computation in Practice-2

Room: Bonsecours

Session Chair: To be announced                                                                       

10:40–12:20

 

Competitions

Room: Victoria

Session Chair:                                                                        

10:40–12:20

 

SBSE-1: Best Paper Nominees and Sensitivity Analysis

Room: Versailles

Session Chair: Massimiliano Di Penta (RCOST - University of Sannio)                  

10:40–11:05     Software Project Planning for Robustness and Completion Time in the Presence of Uncertainty using Multi Objective Search Based Software Engineering ★

Stefan Gueorguiev, Mark Harman, Giuliano Antoniol

Pareto optimality, project planning, software engineering management

All large-scale projects contain a degree of risk and uncertainty. Software projects are particularly vulnerable to overruns, due to this uncertainty and the inherent difficulty of software project cost estimation. In this paper we introduce a search-based approach to software project robustness. The approach is to formulate this problem as a multi-objective Search Based Software Engineering problem, in which robustness and completion time are treated as two competing objectives. The paper presents the results of the application of this new approach to four large real-world software projects, using two different models of uncertainty.

11:05–11:30     Search-Based Failure Discovery using Testability Transformations to Generate Pseudo-Oracles ★

Phil McMinn

search-based software testing, oracle, pseudo-oracle, non-testable program, program transformation, testability transformation

Testability transformations are source-to-source program transformations that are designed to improve the testability of a program. This paper introduces a novel approach in which transformations are used to improve testability of a program by generating a pseudo-oracle. A pseudo-oracle is an alternative version of a program under test whose output can be compared with the original. Differences in output between the two programs may indicate a fault in the original program. Two transformations are presented. The first can highlight numerical inaccuracies in programs and cumulative roundoff errors, whilst the second may detect the presence of race conditions in multi-threaded code. Once a pseudo-oracle is generated, techniques are applied from the field of search-based testing to automatically find differences in output between the two versions of the program. The results of an experimental study presented in the paper show that both random testing and genetic algorithms are capable of utilizing the pseudo-oracles to automatically find program failures. Using genetic algorithms it is possible to explicitly maximize the discrepancies between the original programs and their pseudo-oracles. This allows for the production of test cases where the observable failure is highly pronounced, enabling the tester to establish the seriousness of the underlying fault.
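
The search objective described at the end of the abstract, explicitly maximizing the discrepancy between a program and its pseudo-oracle, reduces to a simple fitness definition. A minimal sketch with placeholder program functions (the two callables stand in for the original program and its transformed version, which the paper obtains by program transformation):

    def discrepancy_fitness(test_input, original, pseudo_oracle):
        """Fitness to be maximized by the genetic algorithm: how far apart the outputs
        of the two program versions are for the candidate test input."""
        return abs(original(test_input) - pseudo_oracle(test_input))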

11:30–11:55     Search Based Data Sensitivity Analysis Applied to Requirement Engineering

Mark Harman, Jens Krinke, Jian Ren, Shin Yoo

Pareto Optimality, Data Sensitivity Analysis, The Next Release Problem, Multi-objective Optimisation, Empirical study, Multi-objective Genetic Algorithms

Software engineering is plagued by problems associated with unreliable cost estimates. This paper introduces an approach to sensitivity analysis for requirements engineering. It uses Search-Based Software Engineering to aid the decision maker in exploring the sensitivity of the cost estimates of requirements for the Next Release Problem (NRP). The paper presents both single- and multi-objective formulations of NRP with empirical sensitivity analysis on synthetic and real-world data. The results show a strong correlation between the level of inaccuracy and the impact on the selection of requirements, as well as between the cost of requirements and the impact, as intuitively expected. However, there also exist a few sensitive exceptions to these trends; the paper uses a heat-map style visualisation to reveal these exceptions, which require careful consideration. The paper also shows that such unusual sensitivity patterns occur in real-world data and how the proposed approach clearly identifies them.

 

EMO-1: Best Paper Nominees

Room: St. Laurent

Session Chair: David Corne (Heriot-Watt University)                                       

10:40–11:05     Articulating User Preferences in Many-Objective Problems by Sampling the Weighted Hypervolume ★

Anne Auger, Johannes Bader, Dimo Brockhoff, Eckart Zitzler

Hypervolume indicator, preference articulation, Monte Carlo sampling

The hypervolume indicator has become popular in recent years both for performance assessment and to guide the search of evolutionary multiobjective optimizers. Two critical research topics can be emphasized with respect to hypervolume-based search: (i) the hypervolume indicator inherently introduces a specific preference, and the question is how arbitrary user preferences can be incorporated; (ii) the exact calculation of the hypervolume indicator is expensive, and efficient approaches to tackle many-objective problems are needed. In two previous studies, we addressed both issues independently: one study proposed the weighted hypervolume indicator with which user-defined preferences can be articulated; other studies propose to estimate the hypervolume indicator by Monte Carlo sampling. Here, we combine these two approaches for the first time and extend them, i.e., we present an approach of sampling the weighted hypervolume to incorporate user-defined preferences into the search for problems with many objectives. In particular, we propose weight distribution functions to stress extreme solutions and to define preferred regions of the objective space in terms of so-called preference points; sampling them allows us to tackle problems with many objectives. Experiments on several test functions with up to 25 objectives show the usefulness of the approach in terms of decision making and search.
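
Monte Carlo estimation of a weighted hypervolume amounts to sampling the objective box uniformly, keeping the samples dominated by the front, and averaging their weights. A minimal sketch for minimization problems, with a placeholder weight function standing in for the paper's weight distribution functions:

    import random

    def weighted_hypervolume_mc(front, reference, weight, samples=100_000):
        """Monte Carlo estimate of the weighted hypervolume of `front` (list of
        objective vectors, minimization) with respect to the reference point."""
        dim = len(reference)
        lower = [min(p[i] for p in front) for i in range(dim)]
        box_volume = 1.0
        for lo, hi in zip(lower, reference):
            box_volume *= hi - lo
        total = 0.0
        for _ in range(samples):
            z = [random.uniform(lo, hi) for lo, hi in zip(lower, reference)]
            if any(all(p[i] <= z[i] for i in range(dim)) for p in front):
                total += weight(z)  # the weight function encodes the user's preferences
        return box_volume * total / samples

    # With weight identically 1 this reduces to an estimate of the standard hypervolume.
    hv = weighted_hypervolume_mc([(1, 3), (2, 2), (3, 1)], (4, 4), weight=lambda z: 1.0)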

11:05–11:30     Space Partitioning with Adaptive epsilon-Ranking and Substitute Distance Assignments: A Comparative Study on Many-Objective MNK-Landscapes é

Hernan Aguirre, Kiyoshi Tanaka

Evolutionary Many-Objective Optimization, Non-linear Fitness Functions, Epistasis, Combinatorial Problems

This work compares the performance of objective space partitioning with adaptive epsilon-ranking, subvector dominance assignment, and epsilon dominance assignment, three methods that have recently been proposed for many-objective optimization. These three methods enhance selection using different strategies to recalculate the primary or secondary ranking of solutions and have been implemented using the framework of NSGA-II. The first method focuses on the primary ranking of solutions by partitioning the objective space into lower dimensional subspaces and re-ranking solutions within each subspace using an adaptive epsilon-ranking procedure. The latter two methods instead focus on the secondary ranking of solutions, replacing crowding distance with a substitute distance assignment. As test problems, we use scalable MNK-Landscapes with 4 <= M <= 10 objectives, N=100 bits, varying the number of epistatic interactions per bit K in the range 0 <= K <= 50.

11:30–11:55     Multiplicative Approximations and the Hypervolume Indicator é

Tobias Friedrich, Christian Horoba, Frank Neumann

approximation, evolutionary algorithms, hypervolume indicator, indicator-based algorithms, multi-objective optimization

Indicator-based algorithms have become a very popular approach to solve multi-objective optimization problems. In this paper, we contribute to the theoretical understanding of algorithms maximizing the hypervolume for a given problem by distributing $\mu$ points on the Pareto front. We examine this common approach with respect to the achieved multiplicative approximation ratio for a given multi-objective problem and relate it to a set of $\mu$ points on the Pareto front that achieves the best possible approximation ratio. For the class of linear fronts and a class of concave fronts, we prove that the hypervolume gives the best possible approximation ratio. In addition, we examine Pareto fronts of different shapes by numerical calculations and show that the approximation computed by the hypervolume may differ from the optimal approximation ratio.
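
For reference, the multiplicative approximation ratio of a solution set S with respect to a Pareto front P is commonly defined as follows (standard formulation for maximisation of d objectives; the paper's own notation may differ):

    \alpha(S) = \max_{p \in P} \; \min_{s \in S} \; \max_{1 \le i \le d} \frac{p_i}{s_i}

so S achieves ratio alpha if and only if alpha(S) <= alpha; the question examined is how the hypervolume-maximal set of mu points compares to the mu-point set minimising alpha(S).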

 

RWA-1: Best Paper Nominees

Room: St. Charles

Session Chair: Michael O'Neill (University College Dublin)                              

10:40–11:05     Optimization of the Trading Rule in Foreign Exchange using Genetic Algorithm é

Akinori Hirabayashi, Claus Aranha, Hitoshi Iba

Genetic Algorithms (GA), Optimization, Finance, Foreign Exchange (FX), Technical Analysis.

The generation of profitable trading rules for Foreign Exchange (FX) investments is a difficult but popular problem. The use of Machine Learning in this problem allows us to obtain objective results by using information about past market behavior. In this paper, we propose a Genetic Algorithm (GA) system to automatically generate trading rules based on Technical Indexes. Unlike related research in the area, our work focuses on calculating the most appropriate trade timing, instead of predicting the trading prices.

11:05–11:30     Tracking Multiple Objects in Non-Stationary Video é

Hoang Nguyen, Bir Bhanu

multi-object tracking, swarm intelligence, bacterial foraging optimization, non-stationary video, occlusion

One of the key problems in computer vision and pattern recognition is tracking. Multiple objects, occlusion, and tracking moving objects using a moving camera are some of the challenges that one may face in developing an effective approach for tracking. While there are numerous algorithms and approaches to the tracking problem with their own shortcomings, a less-studied approach considers swarm intelligence. Swarm intelligence algorithms are often suited for optimization problems, but require advancements for tracking objects in video. This paper presents an improved algorithm based on Bacterial Foraging Optimization in order to track multiple objects in real-time video exposed to full and partial occlusion, using video from both fixed and moving cameras. A comparison with various algorithms is provided.

11:30–11:55     Optimizing Low-Discrepancy Sequences with an Evolutionary Algorithm é

François-Michel De Rainville, Christian Gagné, Olivier Teytaud, Denis Laurendeau

Quasi-random, Orthogonal Latin Hypercube, Scrambled Halton, Discrepancy, Evolutionary Algorithms, Combinatorial Optimization

Many fields rely on some stochastic sampling of a given complex space. Low-discrepancy sequences are methods aiming at producing samples with better space-filling properties than uniformly distributed random numbers, hence allowing a more efficient sampling of that space. State-of-the-art methods like nearly orthogonal Latin hypercubes and scrambled Halton sequences are configured by permutations of internal parameters, where permutations are commonly done randomly. This paper proposes the use of evolutionary algorithms to evolve these permutations, in order to optimize a discrepancy measure. Results show that an evolutionary method is able to generate low-discrepancy sequences of significantly better space-filling properties compared to sequences configured with purely random permutations.
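
The permutations the evolutionary algorithm searches over can be pictured with the usual digit-scrambled Halton construction (a sketch under our own assumptions; the authors' exact parameterisation is not given in the abstract):

    def scrambled_radical_inverse(index, base, perm):
        """One coordinate of a scrambled Halton point: the van der Corput radical
        inverse of `index` in `base`, with every digit remapped through `perm`,
        a permutation of 0..base-1 (the object an EA individual would encode)."""
        value, factor = 0.0, 1.0 / base
        while index > 0:
            value += perm[index % base] * factor
            factor /= base
            index //= base
        return value

    def scrambled_halton(n, bases, perms):
        """First n points of the sequence; one (base, permutation) pair per dimension."""
        return [[scrambled_radical_inverse(k + 1, b, p) for b, p in zip(bases, perms)]
                for k in range(n)]

An individual would then be the tuple of permutations, with its fitness given by a discrepancy measure of the resulting point set.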

 

GBML-3: Clustering and Neural Networks

Room: Les Courants

Session Chair: Albert Orriols-Puig (PhD student)                                              

10:40–11:05     Unsupervised Feature Weighting with Multi Niche Crowding Genetic Algorithms

Mihaela Elena Breaban, Henri Luchian

feature selection, unsupervised feature weighting, clustering, crowding genetic algorithm

This paper is concerned with feature weighting/selection in the context of unsupervised clustering. Since different subspaces of the feature space may lead to different partitions of the data set, an efficient algorithm to tackle multi-modal environments is needed. In this context, the Multi-Niche Crowding Genetic Algorithm is used for searching relevant feature subsets. The proposed method is designed to deal with the inherent biases regarding the number of clusters and the number of features that appear in an unsupervised framework. The first one is eliminated with the aid of a new unsupervised clustering criterion, while the second is tackled with the aid of cross-projection normalization. The method delivers a vector of weights which offers a ranking of features in accordance with their relevance to clustering.

11:05–11:30     Free Lunches for Neural Network Search

Riccardo Poli, Mario Graff

Neural Networks, Evolutionary Algorithms, No-Free-Lunch

In this paper we prove that for a variety of practical situations, the no-free-lunch (NFL) theorem does not apply to algorithms that search the space of artificial neural networks, such as evolutionary algorithms. We find, in particular, that, while conditions under which NFL applies exist, these require extremely restrictive symmetries on the set of possible problems which are unlikely to be encountered in practice. In other words, not all algorithms are equally good at finding neural networks that solve problems under all possible performance measures: a superior search algorithm for this domain does exist.

11:30–11:55     Evolving Competitive Car Controllers for Racing Games with Neuroevolution

Luigi Cardamone, Daniele Loiacono, Pier Luca Lanzi

NEAT, Games, TORCS, Simulated Car Racing

Modern computer games are at the same time an attractive application domain and an interesting testbed for evolutionary computation techniques. In this paper we apply NeuroEvolution of Augmenting Topologies (NEAT), a well known neuroevolution approach, to evolve competitive non-player characters for a racing game. In particular, we focus on The Open Racing Car Simulator (TORCS), an open source car racing simulator already used as a platform for several scientific competitions dedicated to games. We suggest that a competitive controller should have two basic skills: it should be able to drive fast and reliably on a wide range of tracks, and it should be able to effectively overtake opponents while avoiding collisions. In this paper we apply NEAT to evolve these skills separately and then combine them in a single controller. Our results show that the resulting controller outperforms the best available controllers on a challenging racing task. In addition, the experimental analysis confirms that both skills are necessary to develop a competitive controller.

11:55–12:20     Agglomerative Genetic Algorithm for Clustering in Social Networks

Marek Lipczak, Evangelos Milios

genetic algorithms, graph clustering, community detection, social networks

Size and complexity of data repositories collaboratively created by Web users generate a need for new processing approaches. In this paper, we study the problem of detecting fine-grained communities of users in social networks, which can be defined as clustering with a large number of clusters. The practical size of social networks makes traditional evolutionary clustering approaches, which represent the entire clustering solution as one individual, hard to apply. We propose an Agglomerative Clustering Genetic Algorithm (ACGA): a population of clusters evolves from the initial state, in which each cluster represents one user, to a high quality clustering solution. Each step of the evolutionary process is performed locally, engaging only a small part of the social network limited to two clusters and their direct neighborhood. This makes the algorithm practically useful regardless of the size of the network. Evaluation on two social network models indicates that ACGA is potentially able to detect communities with accuracy comparable to or better than that of two typical centralized clustering algorithms, even though ACGA works under much stricter conditions.

 

COM-1: Best Paper Nominees and Tree Problems

Room: Verriere A

Session Chair: Christian Blum (Universitat Politècnica de Catalunya)                     

10:40–11:05     Fixed-Parameter Evolutionary Algorithms and the Vertex Cover Problem é

Stefan Kratsch, Frank Neumann

Combinatorial Optimization, Evolutionary Algorithms, Multi-Objective Optimization, Runtime Analysis

In this paper, we consider multi-objective evolutionary algorithms for the Vertex Cover problem in the context of parameterized complexity. We relate the runtime of our algorithms to the input size and the cost of a minimum solution and point out that the search process of evolutionary algorithms creates partial solutions that are similar to the effect of a kernelization (i.e., a special type of preprocessing from parameterized complexity). Based on this, we show that evolutionary algorithms solve the vertex cover problem efficiently if the size of a minimum vertex cover is not too large, i.e., the expected runtime is bounded by O(f(OPT) · n^c), where c is a constant and f is a function that only depends on OPT. This shows that evolutionary algorithms are randomized fixed-parameter tractable algorithms for the vertex cover problem.

11:05–11:30     Exploiting Hierarchical Clustering for Finding Bounded Diameter Minimum Spanning Trees on Euclidean Instances é

Martin Gruber, Günther R. Raidl

Bounded Diameter Minimum Spanning Tree, Construction Heuristics, Greedy Randomized Search, Dynamic Programming, Local Improvement

The bounded diameter minimum spanning tree problem is an NP-hard combinatorial optimization problem arising, for example, in network design when quality of service is of concern. There exist various exact and metaheuristic approaches addressing this problem, whereas fast construction heuristics are primarily based on Prim's minimum spanning tree algorithm and fail to produce reasonable solutions in particular on large Euclidean instances. A method based on hierarchical clustering to guide the construction process of a diameter constrained tree is presented. Solutions obtained are further refined using a greedy randomized adaptive search procedure. Based on the idea of clustering we also designed a new neighborhood search for this problem. Especially on large Euclidean instances with a tight diameter bound the results are excellent. In this case the solution quality can also compete with that of a leading metaheuristic, whereas the computation only needs a fraction of the time.

11:30–11:55     Multiobjective Genetic Programming Approach to Evolving Heuristics for the Bounded Diameter Minimum Spanning Tree Problem

Rajeev Kumar, Bipul Kumar Bal, Peter I Rockett

Optimization methods, multiobjective optimization, genetic algorithm, genetic programming, heuristics, combinatorial optimization, bloat, Pareto front

The bounded-diameter (or diameter-constrained) minimum spanning tree (BDMST) problem is a well-studied combinatorial optimization problem for which several heuristics have been developed, such as one-time tree construction (OTTC), center-based tree construction (CBTC), iterative refinement (IR), and randomized greedy heuristics (RGH). Very little work, however, has been done on producing heuristics for BDMSTs using evolutionary algorithms. In this paper we have used multiobjective genetic programming (MOGP) to explore the evolution of a set of heuristics for the BDMST problem. The quality of the Pareto fronts obtained from the MOGP-evolved heuristics is comparable to, or in some cases better than, the Pareto fronts generated by established, hand-crafted heuristics. MOGP is thus a very promising technique for finding heuristics for BDMSTs and more general multiobjective combinatorial optimizations.

11:55–12:20     New Heuristic and Hybrid Genetic Algorithm for Solving the Bounded Diameter Minimum Spanning Tree Problem

Binh Huynh Thi Thanh, Hoai Xuan Nguyen, RI Bob McKay, Nghia Duc Nguyen

Bounded diameter minimum spanning tree, heuristic algorithm, hybrid genetic algorithm, multi-parent crossover

In this paper, we propose a new heuristic, called Center-Based Recursive Clustering (CBRC), for solving the bounded diameter minimum spanning tree (BDMST) problem. Our proposed hybrid genetic algorithm [12] is also extended to include the new heuristic and a multi-parent crossover operator. We test the new heuristic and genetic algorithm on two sets of benchmark problem instances for the Euclidean and non-Euclidean cases. Experimental results show the effectiveness of the proposed heuristic and genetic algorithm.

 

ALIFE-2: Co-evolution

Room: Verriere B

Session Chair: Giovanni Squillero (ALIFE Chair)                                              

10:40–11:05     Environmental Robustness in Multi-agent Teams

Terence Soule, Robert B Heckendorn

genetic programming, multi-agent systems, teams, cooperation, OET

Evolution has proven to be an effective method of training heterogeneous multi-agent teams of autonomous agents to explore unknown environments. Autonomous, heterogeneous agents are able to go places where humans are unable to go and perform tasks that would be otherwise dangerous or impossible to complete. However, a serious problem for practical applications of multi-agent teams is how to move from training environments to real-world environments. In particular, if the training environment cannot be made identical to the real-world environment how much will performance suffer? In this research we investigate how differences in training and testing environments affect performance. We find that while in general performance degrades from training to testing, for difficult training environments performance improves in the test environment. Further, we find distinct differences between the performance of different training algorithms with Orthogonal Evolution of Teams (OET) producing the best overall performance.

11:05–11:30     Problem Decomposition Using Indirect Reciprocity in Evolved Populations

Heather J Goldsby, Sherri Goings, Jeff Clune, Charles Ofria

Digital evolution, Altruism, Binary String Cover Problem, Evolution of Cooperation

Evolutionary problem decomposition techniques divide a complex problem into simpler subproblems, evolve individuals to produce subcomponents that solve the subproblems, and then assemble the subcomponents to produce an overall solution. Ideally, these techniques would automatically decompose the problem and dynamically assemble the subcomponents to form the solution. However, although significant progress in automated problem decomposition has been made, most techniques explicitly assemble the complete solution as part of the fitness function. In this paper, we propose a digital-evolution technique that lays the groundwork for enabling individuals within the population to dynamically decompose a problem and assemble a solution. Specifically, our approach evolves specialists that produce some subcomponents of a problem, cooperate with others to receive different subcomponents, and then assemble the subcomponents to produce an overall solution. We first establish that this technique is able to evolve specialists that cooperate. We then demonstrate that it is more effective to use a generalist strategy, wherein organisms solve the entire problem themselves, on simple problems, but that a specialist strategy is better on complex problems. Finally, we show that our technique automatically selects a generalist or specialist strategy based on the complexity of the problem.

11:30–11:55     Combined Structure and Motion Extraction from Visual Data Using Evolutionary Active Learning

Krishnanand N Kaipa, Josh C Bongard, Andrew N Meltzoff

Application, Co-evolution, Evolutionary robotics, System identification

We present a novel stereo vision modeling framework that generates approximate, yet physically-plausible representations of objects rather than creating accurate models that are computationally expensive to generate. Our approach to the modeling of target scenes is based on carefully selecting a small subset of the total pixels available for visual processing. To achieve this, we use the estimation-exploration algorithm (EEA) to create the visual models: a population of three-dimensional models is optimized against a growing set of training pixels, and periodically a new pixel that causes disagreement among the models is selected from the observed stereo images of the scene and added to the training set. We show here that using only 5% of the available pixels, the algorithm can generate approximate models of compound objects in a scene. Our algorithm serves the dual goals of extracting the 3D structure and relative motion of objects of interest by modeling the target objects in terms of their physical parameters (e.g., position, orientation, shape, etc.), and tracking how these parameters vary with time. We support our claims with results from simulation as well as from a real robot lifting a compound object.

 

Paper Presentations                                                      Friday 10 July 14:00 – 15:40

 

 

GP-4: Heuristics

Room: Cartier A

Session Chair: Michael O'Neill (University College Dublin)                              

14:00–14:25     Evolving Reusable 3D Packing Heuristics with Genetic Programming

Sam Allen, Edmund K Burke, Matthew Hyde, Graham Kendall

Knapsack Packing, GP, Hyper-Heuristics, Heuristics

This paper compares the quality of reusable heuristics designed by genetic programming (GP) to those designed by human programmers. The heuristics are designed for the three dimensional knapsack packing problem. Evolutionary computation has been employed many times to search for good quality solutions to such problems. However, actually designing heuristics with GP for this problem domain has never been investigated before. In contrast, the literature shows that it has taken years of experience by human analysts to design the very effective heuristic methods that currently exist. Hyper-heuristics search a space of heuristics, rather than directly searching a solution space. GP operates as a hyper-heuristic in this paper, because it searches the space of heuristics that can be constructed from a given set of components. We show that GP can design simple, yet effective, stand-alone constructive heuristics. While these heuristics do not represent the best in the literature, the fact that they are designed by evolutionary computation, and are human competitive, provides evidence that further improvements in this GP methodology could yield heuristics superior to those designed by humans.

14:25–14:50     GP-Rush: Using Genetic Programming to Evolve Solvers for the Rush Hour Puzzle

Ami Hauptman, Achiya Elyasaf, Moshe Sipper, Assaf Karmon

Genetic Programming, Heuristics, Rush-Hour Puzzle, Single-Agent Search

We evolve heuristics to guide IDA* search for the 6x6 and 8x8 versions of the Rush Hour puzzle, a PSPACE-Complete problem, for which no efficient solver has yet been reported. No effective heuristic functions are known for this domain, and, before applying any evolutionary thinking, we first devise several novel heuristic measures, which improve (non-evolutionary) search for some instances, but hinder search substantially for many other instances. We then turn to genetic programming (GP) and find that evolution proves immensely efficacious, managing to combine heuristics of such highly variable utility into composites that are nearly always beneficial, and far better than each separate component. GP is thus able to beat both the human player of the game and also the human designers of heuristics.

14:50–15:15     Incorporating Expert Knowledge in Evolutionary Search: A Study of Seeding Methods

Michael D Schmidt, Hod Lipson

Symbolic Regression, Prior Knowledge

We investigated several methods for utilizing expert knowledge in evolutionary search, and compared their impact on performance and scalability on increasingly complex problems. We collected data over one thousand randomly generated problems. We then simulated collecting expert knowledge for each problem by optimizing an approximated version of the exact solution. We then compared six different methods of seeding the approximate model into the genetic program, such as using the entire approximate model at once or breaking it into pieces. Contrary to common intuition, we found that inserting the complete expert solution into the population is not the best way to utilize that information; using parts of that solution is often more effective. Additionally, we found that each method scaled differently based on the complexity and accuracy of the approximate solution. Inserting randomized pieces of the approximate solution into the population scaled best to high-complexity problems and was the most invariant to the accuracy of the approximate solution. Furthermore, this method produced the least bloated solutions of all methods. In general, methods that used randomized parameter coefficients scaled best with the approximate error, and methods that inserted entire approximate solutions scaled worst with the problem complexity.

15:15–15:40     Evolving an Edge Selection Formula for Ant Colony Optimization

Andrew Runka

Genetic Programming, Ant Colony Optimization, Edge Selection

This project utilizes the evolutionary process found in Genetic Programming to evolve an improved decision formula for the Ant System algorithm. Two such improved formulae are discovered, one which uses the typical roulette wheel selection found in all well-known Ant Colony Optimization algorithms, and one which uses a greedy-style selection mechanism. The evolution of each formula is trained using the Ant System algorithm to solve a small Travelling Salesman Problem (TSP) and tested using larger unseen TSP instances.
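
For context, the baseline decision formula that the evolved formulae compete against is the canonical Ant System rule, sketched here (our illustration with illustrative parameter defaults; tau is the pheromone matrix and eta the heuristic desirability, typically the inverse edge length):

    import random

    def select_next_city(current, allowed, tau, eta, alpha=1.0, beta=2.0, rng=random):
        """Roulette-wheel edge selection proportional to tau^alpha * eta^beta,
        the standard Ant System rule (assumes positive pheromone and heuristic values)."""
        scores = [(tau[current][j] ** alpha) * (eta[current][j] ** beta) for j in allowed]
        threshold = rng.uniform(0.0, sum(scores))
        acc = 0.0
        for city, score in zip(allowed, scores):
            acc += score
            if acc >= threshold:
                return city
        return allowed[-1]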

 

GA-3: Algorithm Design II

Room: Cartier B

Session Chair: Juergen Branke (University of Warwick)                                  

14:00–14:25     Evolutionary Algorithms and Dynamic Programming

Benjamin Doerr, Anton Eremeev, Christian Horoba, Frank Neumann, Madeleine Theile

combinatorial optimization, dynamic programming, evolutionary algorithms

Recently, it has been proven that evolutionary algorithms produce good results for a wide range of combinatorial optimization problems. Some of the considered problems are tackled by evolutionary algorithms that use a representation, which enables them to construct solutions in a dynamic programming fashion. We take a general approach and relate the construction of such algorithms to the development of algorithms using dynamic programming techniques. Thereby, we give general guidelines on how to develop evolutionary algorithms that have the additional ability of carrying out dynamic programming steps.

14:25–14:50     Three Interconnected Parameters for Genetic Algorithms

Pedro A. Diaz-Gomez, Dean F. Hougen

Performance Analysis, Empirical Study, Evolution Dynamics, Genetic Algorithms, Theory, Working Principles of Evolutionary Computing, Parameter Tuning, Schema Theorem

When an optimization problem is encoded using genetic algorithms, one must address issues of population size, crossover and mutation operators and probabilities, stopping criteria, selection operator and pressure, and fitness function to be used in order to solve the problem. This paper tests a relationship between (1) crossover probability, (2) mutation probability, and (3) selection pressure using two problems. This relationship is based on the schema theorem proposed by Holland and reflects the fact that the choice of parameters and operators for genetic algorithms needs to be problem specific.
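
The relationship examined here is rooted in Holland's schema theorem, whose textbook form is (the paper's precise derivation may differ):

    E[m(H, t+1)] \ge m(H, t) \cdot \frac{f(H, t)}{\bar{f}(t)} \left[ 1 - p_c \frac{\delta(H)}{\ell - 1} - o(H) \, p_m \right]

where m(H,t) is the number of instances of schema H at generation t, f(H,t) its observed average fitness, \bar{f}(t) the population mean fitness, p_c and p_m the crossover and mutation probabilities, \delta(H) the defining length, o(H) the order of the schema, and \ell the string length; the interplay of p_c, p_m and the selection-driven ratio f(H,t)/\bar{f}(t) is the three-way relationship the paper tests.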

14:50–15:15     Centric Selection: a Way to Tune the Exploration/Exploitation Trade-off

David Simoncini, Sébastien Verel, Philippe Collard, Manuel Clergue

Cellular evolutionary algorithms, Selective pressure, Selection operators, Theoretical model

In this paper, we study the exploration/exploitation trade-off in cellular genetic algorithms. We define a new selection scheme, centric selection, which is tunable and allows controlling the selective pressure with a single parameter. The equilibrium model is used to study the influence of centric selection on the selective pressure, and a new model which takes into account problem-dependent statistics and selective pressure in order to deal with the exploration/exploitation trade-off is proposed: the punctuated equilibria model. Performance on the quadratic assignment problem and NK-landscapes reveals an optimal exploration/exploitation trade-off on both classes of problems. The punctuated equilibria model is used to explain these results.

15:15–15:40     Classification and Regression-based Surrogate Model-assisted Interactive Genetic Algorithm with Individual's Fuzzy Fitness

Xiao Yan Sun, Dun Wei Gong, Su Bei Li

Optimization, interactive genetic algorithm, fuzzy fitness, surrogate model, support vector machine

Interactive genetic algorithms with individuals' fuzzy fitness portray well the fuzzy uncertainties of a user's cognition. In this paper, we propose an efficient surrogate model-assisted variant to alleviate user fatigue by building a classifier and a regressor to approximate the user's cognition. Two reliable training data sets are obtained based on the user's evaluation credibility. Then a support vector classification machine and a support vector regression machine are trained as the surrogate models with these samples. Specifically, the input training samples are the individuals evaluated by the user, and the output training samples of the classifier and the regressor are the widths and centers, respectively, of these individuals' fuzzy fitness assigned by the user. These two surrogate models are simultaneously applied to the subsequent evolutions with an enlarged population size so as to alleviate user fatigue and enhance the search ability of the algorithm. We constantly update the training data sets and the surrogate models in order to guarantee the approximation precision. Furthermore, we quantitatively analyze the algorithm's performance in alleviating user fatigue and increasing the opportunities to find optimal solutions. We also apply it to a fashion evolutionary design system to show its efficiency.

 

Evolutionary Computation in Practice-3

Room: Bonsecours

Session Chair: To be announced                                                                       

14:00–15:40    

 

Human-Competitive Results

Room: Victoria

Session Chair:                                                                        

14:00–15:40    

 

 

Theory

Room: Versailles

Session Chair: Thomas Jansen (University College Cork)                               

14:00–14:25     On the Performance Effects of Unbiased Module Encapsulation

R. Paul Wiegand, Gautham Anil, Ivan I Garibay, Ozlem O Garibay, Annie S. Wu

module encapsulation, runtime analysis, search space bias

A recent theoretical investigation of modular representations shows that certain modularizations can introduce a distance bias into a landscape. This was a static analysis, and empirical investigations were used to connect formal results to performance. Here we replace this experimentation with an introductory runtime analysis of performance. We study a base-line, unbiased modularization that makes use of a complete module set (CMS), with special focus on strings that grow logarithmically with the problem size. We learn that even unbiased modularizations can have profound effects on problem performance. Our (1+1) CMS-EA optimizes a generalized OneMax problem in Omega(n^2) time, provably worse than a (1+1) EA. More generally, our (1+1) CMS-EA optimizes a particular class of concatenated functions in O(2^(l_m) k n) time, where l_m is the length of module strings and k is the number of module positions, when the modularization is aligned with the problem separability. We compare our results to known results for traditional EAs, and develop new intuition about modular encapsulation. We observe that search in the CMS-EA is essentially conducted at two levels (intra- and extra-module) and use this observation to construct a module trap, requiring super-polynomial time for our CMS-EA and O(n ln n) for the analogous EA.

14:25–14:50     Geometric Differential Evolution

Alberto Moraglio, Julian Togelius

differential evolution

Geometric Particle Swarm Optimization (GPSO) is a recently introduced formal generalization of traditional Particle Swarm Optimization (PSO) that applies naturally to both continuous and combinatorial spaces. Differential Evolution (DE) is similar to PSO but it uses different equations governing the motion of the particles. This paper generalizes the DE algorithm to combinatorial search spaces by extending its geometric interpretation to these spaces, analogously to what was done for the traditional PSO algorithm. Using this formal algorithm, Geometric Differential Evolution (GDE), we formally derive the specific GDE for the Hamming space associated with binary strings and present experimental results on a standard benchmark of problems.

14:50–15:15     Free Lunches in Pareto Coevolution é

Travis C Service, Daniel Tauritz

No Free Lunch, Pareto Coevolution, Multi-Objective Optimization

Recent work in test-based coevolution has focused on employing ideas from multi-objective optimization in coevolutionary domains. So-called Pareto coevolution treats the coevolving set of test cases as objectives to be optimized in the sense of multi-objective optimization. Pareto coevolution can be seen as a relaxation of traditional multi-objective evolutionary optimization. Rather than being forced to determine the outcome of a particular individual on every objective, Pareto coevolution allows the examination of an individual's outcome on a particular objective. By introducing the notion of certifying Pareto dominance and mutual non-dominance, this paper proves for the first time that free lunches exist for the class of Pareto coevolutionary optimization problems. This theoretical result is of particular interest because we explicitly provide an algorithm for Pareto coevolution which has better performance, on average, than all traditional multi-objective algorithms in the relaxed setting of Pareto coevolution. The notion of certificates of preference/non-preference has potential implications for coevolutionary algorithm design in many classes of coevolution as well as for general multi-objective optimization in the relaxed setting of Pareto coevolution.

15:15–15:40     Dynamic Evolutionary Optimisation: An Analysis of Frequency and Magnitude of Change é

Philipp Rohlfshagen, Per Kristian Lehre, Xin Yao

Evolutionary algorithms, dynamic evolutionary computation, runtime analysis

In this paper, we rigorously analyse how the magnitude and frequency of change may affect the performance of the algorithm (1+1) EA_dyn on a set of artificially designed pseudo-Boolean functions, given a simple but well-defined dynamic framework. We demonstrate some counter-intuitive scenarios that allow us to gain a better understanding of how the dynamics of a function may affect the runtime of an algorithm. In particular, we present the function Magnitude, where the time it takes for the (1+1) EA_dyn to relocate the global optimum is less than n^2 log n (i.e., efficient) with overwhelming probability if the magnitude of change is large. For small changes of magnitude, on the other hand, the expected time to relocate the global optimum is e^Omega(n) (i.e., highly inefficient). Similarly, the expected runtime of the (1+1) EA_dyn on the function Balance is O(n^2) (efficient) for high frequencies of change and n^Omega(sqrt(n)) (highly inefficient) for low frequencies of change. These results contribute towards a better understanding of dynamic optimisation problems in general and show how traditional analytical methods may be applied in the dynamic case.

 

EMO-3: Applications

Room: St. Laurent

Session Chair: Oliver Schütze (INRIA Futurs)                                                  

14:00–14:25     A Multiobjective GRASP for Rule Selection

Alan P Reynolds, David W Corne, Beatriz de la Iglesia

Multiobjective optimization, GRASP, Data mining, Rule induction, Rule selection

This paper describes the application of a multiobjective GRASP to rule selection, where previously generated simple rules are combined to give rule sets that minimize complexity and misclassification cost. As rule selection performance depends heavily on the diversity and quality of the previously generated rules, this paper also investigates a range of multiobjective approaches for creating this initial rule set and the effect on the quality of the resulting classifier.

14:25–14:50     Using Behavioral Exploration Objectives to Solve Deceptive Problems in Neuro-evolution

Jean-Baptiste Mouret, Stéphane Doncieux

Neural networks, multiobjective evolutionary algorithm, diversity, deceptive problems

Encouraging exploration, typically by preserving the diversity within the population, is one of the most common methods to improve the behavior of evolutionary algorithms with deceptive fitness functions. Most of the published approaches to stimulate exploration rely on a distance between genotypes or phenotypes; however, such distances are difficult to compute when evolving neural networks due to (1) the algorithmic complexity of graph similarity measures, (2) the competing conventions problem and (3) the complexity of most neural-network encodings. In this paper, we introduce and compare two conceptually simple, yet efficient methods to improve exploration and avoid premature convergence when evolving both the topology and the parameters of neural networks. The two proposed methods, respectively called behavioral novelty and behavioral diversity, are built on multiobjective evolutionary algorithms and on a user-defined distance between behaviors. They can be employed with any genotype. We benchmarked them on the evolution of a neural network to compute a Boolean function with a deceptive fitness. The results obtained with the two proposed methods are statistically similar to those of NEAT and substantially better than those of the control experiment and of a phenotype-based diversity mechanism.

14:50–15:15     Multiobjective Classification with moGEP: An Application in the Network Traffic Domain

Marek Ostaszewski, Pascal Bouvry, Franciszek Seredynski

Gene Expression Programming, Multiobjective classification

The paper proposes a multiobjective approach to the problem of malicious network traffic classification, with specificity and sensitivity criteria as objective functions for the problem. A multiobjective version of Gene Expression Programming (GEP), called moGEP, is proposed and applied to find proper classifiers in the multiobjective search space. The purpose of the classifiers is to discriminate information about the network traffic obtained from an Idiotypic Network-based Intrusion Detection System (INIDS) and transformed into time series. The proposed approach is validated using the network traffic simulator ns2. Classifiers of high accuracy are obtained and their diversity offers interesting possibilities to the domain of network security.

15:15–15:40     Comparison of Similarity Measures for the Multi-Objective Vehicle Routing Problem with Time Windows

Abel Garcia-Najera, John A Bullinaria

Multi-objective optimisation, Vehicle routing problem, Similarity measures

The Vehicle Routing Problem can be seen as a fusion of two well known combinatorial problems, the Travelling Salesman Problem and the Bin Packing Problem. It has several variants, the one with Time Windows being the case studied in this paper. Its main objective is to find the lowest-distance set of routes to deliver goods to customers, which have service time windows, using a fleet of identical vehicles with restricted capacity. We consider the simultaneous minimisation of the number of routes along with the total travel distance. Although previous research has considered evolutionary methods for solving this problem, none of them has concentrated on the similarity of solutions. We analyse here two methods to measure similarity, which are incorporated into an evolutionary algorithm to solve the multi-objective problem. We have applied this algorithm to a publicly available set of benchmark instances, and when these similarity measures are considered, our solutions are seen to be competitive with or better than others previously published.
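
One plausible way to quantify how alike two VRP solutions are is to compare the sets of edges their routes use, e.g. with a Jaccard ratio (an illustrative measure of our own choosing; the abstract does not specify the two measures actually compared):

    def edge_set(routes):
        """Edges travelled by a solution given as a list of routes, each a list of
        customer ids; 0 denotes the depot at both ends of every route."""
        edges = set()
        for route in routes:
            stops = [0] + route + [0]
            edges.update(zip(stops, stops[1:]))
        return edges

    def edge_similarity(solution_a, solution_b):
        """Jaccard similarity of the edge sets of two solutions."""
        ea, eb = edge_set(solution_a), edge_set(solution_b)
        return len(ea & eb) / len(ea | eb) if (ea or eb) else 1.0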

 

Job Shop

Room: St. Charles

Session Chair:                                                                        

14:00–15:40    

 

GBML-4: Learning Classifier Systems - Scalability, Efficiency and Theory

Room: Les Courants

Session Chair: Xavier Llorà (University of Illinois at Urbana-Champaign)               

14:00–14:25     On the Scalability of XCS(F)

Patrick O. Stalph, Martin V. Butz, David E. Goldberg, Xavier Llorà

Learning Classifier Systems, XCS, Function Approximation, Recursive Least Squares, LWPR

Many successful applications have proven the potential of Learning Classifier Systems, and the XCS classifier system in particular, in data mining, reinforcement learning, and function approximation tasks. Recent research has shown that XCS is a highly flexible system, which can be adapted to the task at hand by adjusting its condition structures, learning operators, and prediction mechanisms. However, fundamental theory concerning the scalability of XCS dependent on these enhancements and problem difficulty is still rather sparse and mainly restricted to Boolean function problems. In this article we develop a learning scalability theory for XCSF, the XCS system applied to real-valued function approximation problems. We determine crucial dependencies on functional properties and on the developed solution representation and derive a theoretical scalability model out of these constraints. The theoretical model is verified with empirical evidence. That is, we show that given a particular problem difficulty and particular representational constraints, XCSF scales optimally. In consequence, we discuss the importance of appropriate prediction and condition structures regarding a given problem and show that scalability properties can be improved by polynomial orders, given an appropriate, problem-suitable representation.

14:25–14:50     A Population-Based Approach to Finding the Matchset of a Learning Classifier System Efficiently

Drew Mellor, Steven P Nicklin

Learning classifier systems, LCS, XCS, Efficient matching

Profiling of the learning classifier system XCS has revealed that its execution time tends to be dominated by rule matching; it is therefore important for rule matching to be efficient. To date, the fastest speedups for matching have been achieved by exploiting parallelism, but efficient sequential approaches, such as bitset and "specificity" matching, can be utilised if there is no platform support for the vector instruction sets. Previous sequential approaches have focussed on improving the efficiency of matching individual rules; in this paper, we introduce a population-based approach that partially matches many rules simultaneously. This is achieved by maintaining the rule-base in a rooted 3-ary tree over which a backtracking depth-first search is run to find the matchset. We found that the method generally outperformed standard and specificity matching on raw matching and on several benchmarking tasks. While the bitset approach attained the best speedups on the benchmarking tasks, we give an analysis that shows that it can be the least efficient of the approaches on long rule conditions. A limitation of the new method is that it is inefficient when the proportion of "don't care" symbols in the rule conditions is very large, which could perhaps be remedied by combining the method with the specificity technique.
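
A rough sketch of the population-based matching idea as we read it (class and function names are ours; real XCS conditions may also use interval encodings, but the ternary case illustrates the backtracking search):

    class TrieNode:
        """Node of a rooted 3-ary tree over the condition alphabet {'0', '1', '#'}."""
        def __init__(self):
            self.children = {}   # symbol -> TrieNode
            self.rules = []      # rules whose condition ends at this node

    def insert_rule(root, condition, rule):
        node = root
        for symbol in condition:
            node = node.children.setdefault(symbol, TrieNode())
        node.rules.append(rule)

    def find_matchset(root, state):
        """Backtracking depth-first search: at position i a branch survives if its
        symbol equals state[i] or is the wildcard '#'; all surviving leaves match."""
        matchset, stack = [], [(root, 0)]
        while stack:
            node, i = stack.pop()
            if i == len(state):
                matchset.extend(node.rules)
                continue
            for symbol in (state[i], '#'):
                child = node.children.get(symbol)
                if child is not None:
                    stack.append((child, i + 1))
        return matchset

Rules sharing a condition prefix are tested against that prefix only once, which is where the saving over rule-by-rule matching comes from.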

14:50–15:15     Modeling UCS as a Mixture of Experts

Narayanan Unny Edakunni, Tim Kovacs, Gavin Brown, James A. R. Marshall

Learning Classifier System, Probabilistic Modeling, Mixture of Experts, UCS

We present a probabilistic formulation of UCS (a sUpervised Classifier System). UCS is shown to be a special case of mixture of experts where the experts are learned independently and later combined during prediction. In this work, we develop the links between the constituent components of UCS and a mixture of experts, thus lending UCS a strong analytical background. We find during our analysis that mixture of experts is a more generic formulation of UCS and possesses more generalization capability and flexibility than UCS, which is also verified using empirical evaluations. This is the first time that a simple probabilistic model has been proposed for UCS and we believe that this work will form a useful tool to analyse Learning Classifier Systems and gain useful insights into their working.

15:15–15:40     A Mixed Discrete-Continuous Attribute List Representation for Large Scale Classification Domains

Jaume Bacardit, Natalio Krasnogor

Evolutionary Algorithms, Learning Classifier Systems, Rule Induction, Large-scale Datasets

Datasets with a large number of attributes are a difficult challenge for evolutionary learning techniques. The recently proposed attribute list rule representation has been shown to significantly improve the overall performance (e.g. run-time, accuracy, rule set size) of the BioHEL Iterative Evolutionary Rule Learning system. In this paper we first extend the attribute list rule representation so it can handle not only continuous domains, but also datasets with a very large number of mixed discrete-continuous attributes. Second, we benchmark the new representation with a diverse set of large-scale datasets and, third, we compare the new algorithms with several well-known machine learning methods. The experimental results we describe in the paper show that the new representation is equal to or better than the state-of-the-art in evolutionary rule representations, both in terms of the accuracy obtained with the benchmark datasets used and in terms of the computational time needed to achieve these improved accuracies. The new attribute list representation puts BioHEL on an equal footing with other well-established machine learning techniques in terms of accuracy. In the paper, we also analyse and discuss the weaknesses of the current representation and indicate potential avenues for correcting them.

 

COM-3: Hyper-heuristics

Room: Verriere A

Session Chair: Gabriela Ochoa (University of Nottingham)                              

14:00–14:25     Solving the Sorting Network Problem Using Iterative Optimization with Evolved Hypermutations

Jiří Kubalík

Evolutionary algorithms, sorting networks, optimization

This paper presents an application of the prototype optimization with evolved improvement steps algorithm (POEMS) to the well-known problem of optimal sorting network design. POEMS is an iterative algorithm that seeks the best variation of the current solution in each iteration. The variations, also called hypermutations, are evolved by means of an evolutionary algorithm. We compared POEMS to two mutation-based optimizers, namely the (μ+λ)- and (1+λ)-evolution strategies. For experimental evaluation, 10-input, 12-input, 14-input and 16-input instances of the sorting network problem were used. Results show that the proposed POEMS approach clearly outperforms both compared algorithms. Moreover, POEMS was able to find several perfect networks that are equivalent w.r.t. the number of comparators to the best known solutions for the 10-input, 12-input, 14-input, and 16-input problems. Finally, we propose a modification to the POEMS approach that might further improve its performance.
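
The outer loop of this kind of iterative optimization can be summarised as follows (a bare-bones sketch under our own reading of the abstract; the inner evolutionary algorithm that evolves the hypermutation is passed in as a function):

    def poems(initial, fitness, evolve_best_variation, iterations=100):
        """Keep a single prototype solution; in each iteration an inner EA searches
        for the hypermutation (a short sequence of elementary actions) whose
        application most improves the prototype, and the prototype is replaced when
        the best variation found is at least as good (maximisation and acceptance
        of ties assumed here)."""
        prototype, best = initial, fitness(initial)
        for _ in range(iterations):
            candidate = evolve_best_variation(prototype, fitness)
            score = fitness(candidate)
            if score >= best:
                prototype, best = candidate, score
        return prototype, best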

14:25–14:50     Stable Solving of CVRPs Using Hyperheuristics

Pablo Garrido, Carlos Castro

Hyperheuristics, Vehicle Routing Problem, Heuristic Search, Metaheuristics, Combinatorial Optimisation

In this paper we present a hill-climbing based hyperheuristic which is able to solve instances of the capacitated vehicle routing problem. The hyperheuristic manages a sequence of constructive-perturbative pairs of low-level heuristics which are applied sequentially in order to construct and improve partial solutions. We present some design considerations that we have taken into account to find the most promising sequence and allow the collaboration among low-level heuristics. Our approach has been tested using some standard state-of-the-art benchmarks and we have compared them with several well-known methods proposed in the literature. We have obtained, on average, stable and good quality solutions after solving various types of problems. Thus, we conclude that our collaborative framework is an interesting approach as it has proved to be: (1) able to adapt itself to different problem instances by choosing a suitable combination of low-level heuristics and (2) capable of preserving stability when solving different types of problems.
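
A skeletal version of this kind of hill climbing over heuristic sequences might look as follows (our sketch; the low-level heuristics, the sequence length and the acceptance rule are placeholders, with route cost minimised):

    import random

    def hill_climb_hyperheuristic(constructive, perturbative, evaluate,
                                  seq_len=5, iterations=200, rng=random):
        """Search the space of sequences of (constructive, perturbative) low-level
        heuristic pairs; a sequence is scored by applying its pairs in order to
        build and then locally improve a CVRP solution."""
        def score(sequence):
            solution = None
            for build, improve in sequence:
                solution = build(solution)     # extend the partial solution
                solution = improve(solution)   # perturb / locally improve it
            return evaluate(solution)

        current = [(rng.choice(constructive), rng.choice(perturbative))
                   for _ in range(seq_len)]
        current_cost = score(current)
        for _ in range(iterations):
            neighbour = list(current)
            neighbour[rng.randrange(seq_len)] = (rng.choice(constructive),
                                                 rng.choice(perturbative))
            cost = score(neighbour)
            if cost <= current_cost:           # accept non-worsening moves
                current, current_cost = neighbour, cost
        return current, current_cost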

14:50–15:15     A Weight-Coded Genetic Algorithm For The Capacitated Arc Routing Problem

Matthew J. W. Morgan, Christine L. Mumford

genetic algorithms, heuristics, optimization, transportation

In this paper we present a weight-coded genetic algorithm (GA) based approach to the capacitated arc routing problem (CARP). In comparison to metaheuristic algorithms, simple constructive heuristic algorithms often produce poor quality solutions to the CARP. Using a novel weight coding model in conjunction with a series of standard CARP heuristics, acting as a solution engine, we demonstrate how these simple heuristic procedures can be "duped" into producing better solutions to the CARP. The algorithm is tested on a set of problem instances drawn from the literature. Initial results for our GA show that it is possible to reliably produce an uplift in solution quality of between 7.3% and 14.3% above the standard heuristics, the GA identifying 47 optimum or best known solutions from the 57 problem instances tested.

15:15–15:40     Analyzing the Landscape of a Graph Based Hyper-heuristic for Timetabling Problems

Gabriela Ochoa, Rong Qu, Edmund K Burke

hyper-heuristics, landscape analysis, graph coloring heuristics, timetabling

Hyper-heuristics can be thought of as "heuristics to choose heuristics". They are concerned with adaptively finding solution methods, rather than directly producing a solution for the particular problem at hand. Hence, an important feature of hyper-heuristics is that they operate on a search space of heuristics rather than directly on a search space of problem solutions. A motivating aim is to build systems which are fundamentally more generic than is possible today. Understanding the structure of these heuristic search spaces is, therefore, a research direction worth exploring. In this paper, we use the notion of fitness landscapes in the context of constructive hyper-heuristics. We conduct a landscape analysis on a heuristic search space formed by sequences of graph coloring heuristics for timetabling. Our study reveals that these landscapes have a high level of neutrality and positional bias. Furthermore, although rugged, they have the encouraging feature of a globally convex or big valley structure, which indicates that an optimal solution would not be isolated but surrounded by many local minima. We suggest that using search methodologies that explicitly exploit these features may enhance the performance of constructive hyper-heuristics.

 

ALIFE-3: Robotics

Room: Verriere B

Session Chair: Wolfgang Banzhaf (Department of Computer Science, Memorial University of Newfoundland)                                                                  

14:00–14:25     Evolution of Functional Specialization in a Morphologically Homogeneous Robot

Joshua Auerbach, Josh C Bongard

Evolutionary robotics, embodied cognition, artificial intelligence

A central tenet of embodied artificial intelligence is that intelligent behavior arises out of the coupled dynamics between an agent's body, brain and environment. It follows that the complexity of an agent's controller and morphology must match the complexity of a given task. However, more complex task environments require the agent to exhibit different behaviors, which raises the question as to how to distribute responsibility for these behaviors across the agent's controller and morphology. In this work a robot is trained to locomote and manipulate an object, but the assumption of functional specialization is relaxed: the robot has a segmented body plan in which the front segment may participate in locomotion and object manipulation, or it may specialize to only participate in object manipulation. In this way, selection pressure dictates the presence and degree of functional specialization rather than such specialization being enforced a priori. It is shown that for the given task, evolution tends to produce functionally specialized controllers, even though successful generalized controllers can also be evolved. Moreover, the robot's initial conditions and training order have little effect on the frequency of finding specialized controllers, while the inclusion of additional proprioceptive feedback increases this frequency.

14:25–14:50     Learning Complex Robot Control Using Evolutionary Behavior Based Systems

Yohannes Kassahun, Jakob Schwendner, Jose de Gea, Mark Edgington, Frank Kirchner

Neuroevolution, Kalman Filter, Behavior Based Robotics

Evolving a monolithic solution for complex robotic problems is hard. One of the reasons for this is the difficulty of defining a global fitness function that leads to a solution with desired operating properties. The problem with a global fitness function is that it may not reward intermediate solutions that would ultimately lead to the desired operating properties. A possible way to solve such a problem is to decompose the solution space into smaller subsolutions with a lower number of intrinsic dimensions. In this paper, we apply the design principles of behavior based systems to decompose a complex robot control task into subsolutions and show how to incrementally modify the fitness function so that it (1) results in desired operating properties as the subsolutions are learned, and (2) avoids the need to learn the coordination of behaviors separately. We demonstrate our method by learning to control a quadrocopter flying vehicle.

14:50–15:15     Neuroevolutionary Reinforcement Learning for Generalized Helicopter Control

Rogier Koppejan, Shimon Whiteson

neural networks, evolutionary computation, reinforcement learning, robot control

Helicopter hovering is an important challenge problem in the field of reinforcement learning. This paper considers several neuroevolutionary approaches to discovering robust controllers for a generalized version of the problem used in the 2008 Reinforcement Learning Competition, in which wind in the helicopter's environment varies from run to run. We present the simple model-free strategy that won first place in the competition and also describe several more complex model-based approaches. Our empirical results demonstrate that neuroevolution is effective at optimizing the weights of multi-layer perceptrons, that linear regression is faster and more effective than evolution for learning models, and that model-based approaches can outperform the simple model-free strategy, especially if prior knowledge is used to aid model learning.

 

Paper Presentations and Special Sessions            Friday 10 July 16:10 – 17:50

 

 

GP-1: Best Paper Nominees

Room: Cartier A

Session Chair: Marc Ebner                                                                                

16:10–16:35     A Genetic Programming Approach to Automated Software Repair é

Stephanie Forrest, ThanhVu Nguyen, Westley Weimer, Claire Le Goues

software repair, genetic programming, software engineering

Genetic programming is combined with program analysis methods to repair bugs in off-the-shelf legacy C programs. Fitness is defined using negative test cases that exercise the bug to be repaired and positive test cases that encode program requirements. Once a successful repair is discovered, structural differencing algorithms and delta debugging methods are used to minimize its size. Several modifications to the GP technique contribute to its success: (1) genetic operations are localized to the nodes along the execution path of the negative test case; (2) high-level statements are represented as single nodes in the program tree; (3) genetic operators use existing code in other parts of the program, so new code does not need to be invented. The paper describes the method, reviews earlier experiments that repaired 11 bugs in over 60,000 lines of code, reports results on new bug repairs, and describes experiments that analyze the performance and efficacy of the evolutionary components of the algorithm.

16:35–17:00     Developmental Plasticity in Linear Genetic Programming é

Nicholas Freitag McPhee, Ellery Crane, Sara E. Lahr, Riccardo Poli

Genetic Programming, N-grams, Plasticity, Development

Biological organisms exhibit numerous types of plasticity, where they respond both developmentally and behaviorally to environmental factors. In some organisms, for example, environmental conditions can lead to the developmental expression of genes that would otherwise remain dormant, leading to significant phenotypic variation and allowing selection to act on these otherwise "invisible" genes. In contrast to biological plasticity, the vast majority of evolutionary computation systems, including genetic programming, are rigid and can only adapt to very limited external changes. In this paper we extend the N-gram GP system, a recently introduced estimation of distribution algorithm for program evolution, using Incremental Fitness-based Development (IFD), a novel technique which allows for developmental plasticity in the generation of linear-GP style programs. Tests with a large set of problems show that the new system outperforms the original N-gram GP system and is competitive with standard GP. Analysis of the evolved programs indicates that IFD allows for the generation of more complex programs than standard N-gram GP, with the generated programs often containing several separate sequences of instructions that are reused multiple times, often with variations.

 

 

GA-4: Performance

Room: Cartier B

Session Chair: Carlos Cotta (University of Málaga)                                          

16:10–16:35     Theoretical Analysis of Fitness-Proportional Selection: Landscapes and Efficiency

Frank Neumann, Pietro Simone Oliveto, Carsten Witt

Running time analysis, Genetic algorithms, Selection, Theory

We investigate theoretically how the fitness landscape influences the optimization process of population-based evolutionary algorithms using fitness-proportional selection. Considering the function OneMax, we show that it cannot be optimized in polynomial time with high probability regardless of the population size. This is proved by a generalization of drift analysis. For populations of at most logarithmic size, the negative result transfers to any function with unique optimum. Based on these insights, we investigate the effect of scaling the objective function in combination with a population that is not too small and show that then such algorithms compute optimal solutions for a wide range of problems in expected polynomial time. Finally, relationships with (1+λ)-EAs and (1,λ)-EAs are described.
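As a quick illustration of the setting analysed above, the following Python sketch implements fitness-proportional (roulette-wheel) selection on OneMax, with an optional exponential scaling of the objective function; the parameter names and the base-2 scaling are illustrative assumptions, not the paper's construction.

import random

def onemax(bits):
    # OneMax: the number of ones in the bit string.
    return sum(bits)

def proportional_selection(population, fitness, k, base=None):
    # Fitness-proportional (roulette-wheel) selection.  If `base` is given,
    # fitness values are exponentially scaled (base ** f), one simple form of
    # the objective-function scaling discussed in the abstract.
    weights = [(base ** fitness(x)) if base else float(fitness(x))
               for x in population]
    return random.choices(population, weights=weights, k=k)

pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
parents = proportional_selection(pop, onemax, k=len(pop), base=2.0)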

16:35 – 17:00     Markov Chain Analysis of Genetic Algorithms in a Wide Variety of Noisy Environments

Takehiko Nakama

perturbed fitness functions, genetic algorithms, Markov chain analysis, additive noise, evolutionary computation, convergence, noisy (uncertain) environments

We examine the convergence properties of genetic algorithms (GAs) in a wide variety of noisy environments where fitness perturbation can occur in any form; for example, fitness functions can be concurrently perturbed by additive and multiplicative noise. We reveal the convergence properties of such GAs by constructing and analyzing a Markov chain that explicitly models the evolution of the algorithms. We compute the one-step transition probabilities of the chain and show that the chain has only one positive recurrent communication class. Based on this property, we establish a condition that is necessary and sufficient for GAs to eventually find a globally optimal solution with probability 1. We also identify a condition that is necessary and sufficient for GAs to eventually fail, with probability 1, to find any globally optimal solution. Our analysis also shows that in all the noisy environments, the chain converges to stationarity: It has a unique stationary distribution that is also its steady-state distribution. We describe how this property and the one-step transition probabilities of the chain can be used to compute the exact probability that a GA is guaranteed to select a globally optimal solution upon completion of each iteration.
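A one-line Python sketch of the kind of perturbed fitness covered by this analysis, with additive and multiplicative Gaussian noise applied concurrently; the noise magnitudes and the Gaussian choice are illustrative, not part of the paper's model.

import random

def noisy_fitness(f, x, sigma_mult=0.05, sigma_add=0.1):
    # Concurrent multiplicative and additive Gaussian perturbation of f(x),
    # one instance of the "any form" of noise handled by the Markov chain model.
    return f(x) * (1.0 + random.gauss(0.0, sigma_mult)) + random.gauss(0.0, sigma_add)

sphere = lambda x: sum(v * v for v in x)
print(noisy_fitness(sphere, [0.5, -1.0, 2.0]))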

17:00 – 17:25     Analysis of Evolutionary Algorithms on the One-Dimensional Spin Glass with Power-Law Interactions

Martin Pelikan, Helmut G. Katzgraber

spin glass, power-law interactions, Sherrington-Kirkpatrick spin glass, hierarchical BOA, hBOA, genetic algorithm, GA, crossover, estimation of distribution algorithms, evolutionary computation, hybridization, physics

This paper provides an in-depth empirical analysis of several hybrid evolutionary algorithms on the one-dimensional spin glass model with power-law interactions. The considered spin glass model provides a mechanism for tuning the effective range of interactions, which makes the problem interesting as an algorithm benchmark. As algorithms, the paper considers the genetic algorithm (GA) with two-point and uniform crossover, and the hierarchical Bayesian optimization algorithm (hBOA). hBOA is shown to outperform both variants of GA, whereas GA with uniform crossover is shown to perform worst. The differences between the compared algorithms become more significant as the problem size grows and as the range of interactions decreases. Unlike for GA with uniform crossover, for hBOA and GA with two-point crossover, instances with short-range interactions are shown to be easier. The paper also points out interesting avenues for future research.
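To make the benchmark concrete, here is a minimal Python sketch of a one-dimensional spin glass whose couplings decay with a tunable power of the ring distance; the exact normalization and instance generator used in the paper are not reproduced, so treat this only as an illustration of the idea.

import numpy as np

def powerlaw_couplings(n, sigma, seed=0):
    # Random couplings J_ij proportional to eps_ij / r_ij**sigma on a ring of
    # n spins, where r_ij is the ring distance and eps_ij is standard Gaussian.
    # sigma tunes the effective interaction range (sigma = 0 gives an SK-like,
    # fully connected model).
    rng = np.random.default_rng(seed)
    J = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            r = min(j - i, n - (j - i))
            J[i, j] = J[j, i] = rng.normal() / r ** sigma
    return J

def energy(spins, J):
    # H(s) = -sum_{i<j} J_ij * s_i * s_j, the quantity the compared algorithms minimize.
    return -0.5 * spins @ J @ spins

spins = np.random.default_rng(1).choice([-1, 1], size=32)
print(energy(spins, powerlaw_couplings(32, sigma=2.5)))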

17:25 – 17:50     Performance of Evolutionary Algorithms on NK Landscapes with Nearest Neighbor Interactions and Tunable Overlap

Martin Pelikan, Kumara Sastry, David E. Goldberg, Martin V. Butz, Mark Hauschild

NK fitness landscape, hierarchical BOA, hBOA, genetic algorithm, performance analysis, scalability, crossover, hybridization, fitness landscape, problem difficulty

This paper presents a class of NK landscapes with nearest-neighbor interactions and tunable overlap. The considered class of NK landscapes is solvable in polynomial time using dynamic programming; this allows us to generate a large number of random problem instances with known optima. Several genetic and evolutionary algorithms are then applied to the generated problem instances. The results are analyzed and related to scalability theory for genetic algorithms and estimation of distribution algorithms.
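For orientation, the following Python sketch builds an NK-style landscape in which each bit interacts only with its nearest right-hand neighbours, the structure that makes dynamic programming applicable; the tunable-overlap mechanism and the paper's actual instance generator are not reproduced here.

import random

def make_nk_nearest_neighbor(n, k, seed=0):
    # NK landscape in which the k interacting bits of position i are its k
    # right-hand neighbours (wrapping around), so subfunctions overlap only
    # with adjacent positions.  Random lookup tables; a sketch only.
    rng = random.Random(seed)
    tables = [[rng.random() for _ in range(2 ** (k + 1))] for _ in range(n)]

    def fitness(bits):
        total = 0.0
        for i in range(n):
            idx = 0
            for j in range(k + 1):
                idx = (idx << 1) | bits[(i + j) % n]
            total += tables[i][idx]
        return total / n

    return fitness

f = make_nk_nearest_neighbor(n=20, k=3)
print(f([random.randint(0, 1) for _ in range(20)]))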

 

 

RWA-4: Finance 2

Room: Victoria

Session Chair: Anthony Brabazon (University College Dublin)                                

16:10 – 16:35     Evolutionary Inference of Rule-Based Trading Agents from Real-World Stock Price Histories and Their Use in Forecasting

Louis Charbonneau, Nawwaf Kharma

Artificial market inference, market inversion, artificial market-based forecasting

We propose a representation of the stock-trading market as a group of rule-based trading agents, with the agents evolved using past prices. We encode each rule-based agent as a genome, and then describe how a steady-state genetic algorithm can evolve a group of these genomes (i.e. an inverted market) using past stock prices. This market is then used to generate forecasts of future stock prices, which are compared to actual future stock prices. We show how our method outperforms standard financial time-series forecasting models, such as ARIMA and Lognormal, on actual stock price data taken from real-world archives.

16:35 – 17:00     Multi-objective Optimization with an Evolutionary Artificial Neural Network for Financial Forecasting

Matthew Butler, Ali Daniyal

Genetic Programming, Neural Networks, Multi-objective Optimization, Financial Forecasting, NEAT

In this paper, we attempt to make accurate predictions of the movement of the stock market with the aid of an evolutionary artificial neural network (EANN). To facilitate this objective we constructed an EANN for multi-objective optimization (MOO) that was trained with macro-economic data and its effect on market performance. Experiments were conducted with EANNs that updated connection weights through genetic operators (crossover and mutation) and/or with the aid of back-propagation. The results showed that the optimal performance was achieved under natural complexification of the EANN and that back-propagation tended to overfit the data. The results also suggested that EANNs trained with multiple objectives were more robust than those trained with a single objective. The MOO approach produced superior investment returns during training and testing compared with single-objective optimization (SOO).

17:00 – 17:25     Using Memetic Algorithms To Improve Portfolio Performance In Static And Dynamic Trading Scenarios

Claus de Castro Aranha, Hitoshi Iba

Memetic Algorithm, Portfolio, Operational Research

The Portfolio Optimization problem consists of the selection of a group of assets for a long-term fund in order to minimize the risk and maximize the return of the investment. This is a multi-objective (risk, return) resource allocation problem, where the aim is to correctly assign weights to the set of available assets, which determines the amount of capital to be invested in each asset. In this work, we introduce a Memetic Algorithm for portfolio optimization. Our system is based on a tree-structured genome representation which selects assets from the market and establishes relationships between them, and a local hill-climbing function which uses the information available from the tree structure to calculate the weights of the selected assets. We use simulations based on historical data to test our system and compare it to previous approaches. In these experiments, our system shows that it is able to adapt to aggressive changes in the market, like the crash of 2008, with reduced trading cost.

 

GP-5: Neutrality, Plasticity and Probabilistic Model Building

Room: Versailles

Session Chair: Leonardo Vanneschi (University of Milano-Bicocca)                       

16:10 – 16:35     Binary Encoding for Prototype Tree of Probabilistic Model Building GP

Toshihiko Yanase, Yoshihiko Hasegawa, Hitoshi Iba

Genetic Programming, Probabilistic Model Building Genetic Programming, Probabilistic Prototype Tree

In recent years, program evolution algorithms based on the estimation of distribution algorithm (EDA) have been proposed to improve the search ability of genetic programming (GP) and to overcome GP-hard problems. One such method is the probabilistic prototype tree (PPT) based algorithm. The PPT based method explores the optimal tree structure by using the full tree whose number of child nodes is maximum among possible trees. This algorithm, however, suffers from problems arising from function nodes having different numbers of child nodes. These function nodes cause intron nodes, which do not affect the fitness function. Moreover, the function nodes having many child nodes increase the search space and the number of samples necessary for properly constructing the probabilistic model. In order to solve this problem, we propose binary encoding for PPT. Here, we convert each function node to a subtree of binary nodes such that the converted tree is grammatically correct. Our method reduces the ineffectual search space, and the binary encoded tree is able to express the same tree structures as the original method. The effectiveness of the proposed method is demonstrated through two computational experiments.

16:35 – 17:00     Neutrality and Variability: Two Sides of Evolvability in Linear Genetic Programming

Ting Hu, Wolfgang Banzhaf

Evolvability, Rate of Evolution, Neutrality, Variability

The notion of evolvability has been put forward to describe the ``core mechanism'' of natural and artificial evolution. Recently, studies have revealed the influence of the environment upon a system's evolvability. In this contribution, we study the evolvability of a system in various environmental situations. We consider neutrality and variability as two sides of evolvability. The former makes a system tolerant to mutations and provides a hidden staging ground for future phenotypic changes. The latter produces explorative variations yielding phenotypic improvements. Which of the two dominates is influenced by the environment. We adopt two tools for this study of evolvability: 1) the rate of adaptive evolution, which captures the observable adaptive variations driven by evolvability; and 2) the variability of individuals, which measures the potential of an individual to vary functionally. We apply these tools to a Linear Genetic Programming system and observe that evolvability is able to exploit its two sides in different environmental situations.

17:00 – 17:25     Estimating the Distribution and Propagation of Genetic Programming Building Blocks through Tree Compression

Robert I McKay, Xuan Hoai Nguyen, James R. Cheney, MinHyeok Kim, Naoki Mori, Tuan Hao Hoang

Genetic Programming, Compression, Regularity, Building Blocks

Shin et al and McKay et al previously applied tree compression and semantics-based simplification to study the distribution of building blocks in evolving Genetic Programming populations. However their method could only give static estimates of the degree of repetition of building blocks in one generation at a time, supplying no information about the flow of building blocks between generations. Here, we use a state-of-the-art tree compression algorithm, xmlppm, to estimate the extent to which frequent building blocks from one generation are still in use in a later generation. While they compared the behaviour of different GP algorithms on one specific problem -- a simple symbolic regression problem -- we extend the analysis to a more complex problem, a symbolic regression problem to find a Fourier approximation to a sawtooth wave, and to a Boolean domain, odd parity.

 

EMO-4: Multiobjectivization and Stopping

Room: St. Laurent

Session Chair: Hernan Eduardo Aguirre Duran (Shinshu University, Fiber-Nanotech Young Researcher Empowerment Program)                                                  

16:10 – 16:35     Evolutionary Continuation Methods for Optimization Problems

O. Schuetze, A. Lara, C. A. Coello Coello

continuation method, scalar optimization, multi-objective optimization, root finding

In this paper we develop evolutionary strategies for numerical continuation which we apply to scalar and multi-objective optimization problems. To be more precise, we will propose two different methods---an embedding algorithm and a multi-objectivization approach---which are designed to follow an implicitly defined curve where the aim can be to detect the endpoint of the curve (e.g., a root finding problem) or to approximate the entire curve (e.g., the Pareto set of a multi-objective optimization problem). We demonstrate that the novel approaches are very robust in finding the set of interest (point or curve) on several examples.

16:35 – 17:00     Evolutionary Algorithms and Multi-Objectivization for the Travelling Salesman Problem

Martin Jähne, Xiaodong Li, Jürgen Branke

Multi-objective optimization, multi-objectivization, Travelling Salesman Problem, Genetic Algorithms

This paper studies the multi-objectivization of single-objective optimization problems (SOOP) using evolutionary multi-objective algorithms (EMOAs). In contrast to the single-objective case, diversity can be introduced by the multi-objective view of the algorithm and the dynamic use of objectives. Using the travelling salesman problem as an example, we illustrate that two basic approaches, a) the addition of new objectives to the existing problem and b) the decomposition of the primary objective into sub-objectives, can improve performance compared to a single-objective genetic algorithm when objectives are used dynamically. Based on decomposition we propose the concept "Multi-Objectivization via Segmentation" (MOS), in which the original problem is reassembled. Experiments reveal that this new strategy clearly outperforms both the traditional genetic algorithm (GA) and the algorithms based on existing multi-objective approaches, even without changing objectives.

17:00 – 17:25     A Stopping Criterion Based on Kalman Estimation Techniques with Several Progress Indicators

Jose L Guerrero, Jesus Garcia, Jose M Molina, Luis Marti, Antonio Berlanga

MOOP, MOEAs, Stopping criterion, Kalman filtering, Fusion architectures

The need for a stopping criterion in MOEAs is a repeatedly mentioned matter in the domain of MOOPs, even though it is usually left aside as secondary, while stopping criteria are still usually based on an a priori chosen number of maximum iterations. In this paper we present a stopping criterion for MOEAs based on three different indicators already present in the community. These indicators, some of which were originally designed for measuring solution quality (as a function of the distance to the optimal Pareto front), will be processed so they can be applied as part of a global criterion, based on estimation theory, to achieve a cumulative evidence measure to be used in the stopping decision (by means of a Kalman filter). The implications of this cumulative evidence are analyzed, to get a problem- and algorithm-independent stopping criterion (for each individual indicator). Finally, the stopping criterion is presented from a data fusion perspective, using the different individual indicators' stopping criteria together, in order to get a final global stopping criterion.
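To illustrate the estimation step, here is a minimal scalar Kalman update applied to a single noisy progress indicator; the random-walk process model and the noise settings are illustrative assumptions, and the paper's criterion additionally fuses several indicators before deciding when to stop.

def kalman_step(x_est, p_est, z, q=1e-4, r=1e-2):
    # One scalar Kalman update: fuse the previous estimate of a progress
    # indicator (x_est, variance p_est) with a new noisy reading z.
    p_pred = p_est + q                   # predict (random-walk model)
    gain = p_pred / (p_pred + r)         # Kalman gain
    x_new = x_est + gain * (z - x_est)   # update with the measurement
    p_new = (1.0 - gain) * p_pred
    return x_new, p_new

x, p = 0.0, 1.0
for z in [0.90, 0.50, 0.30, 0.28, 0.27, 0.27]:   # successive indicator readings
    x, p = kalman_step(x, p, z)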

 

RWA-5: Malware & Software Tools

Room: St. Charles

Session Chair: Paul Walsh (Cork Institute of Technology)                                

16:10 – 16:35     Evolvable Malware

Sadia Noreen, Shafaq Murtaza, M. Zubair Shafiq, Muddassar Farooq

Artificial Evolution, Computer Virus, Genetic Algorithm

The concept of artificial evolution has been applied to numerous real world applications in different domains. In this paper, we use this concept in the domain of virology to evolve computer viruses. We call this domain Evolvable Malware. To this end, we propose an evolutionary framework that consists of three modules: (1) a code analyzer that generates a high-level genotype representation of a virus from its machine code, (2) a genetic algorithm that uses the standard selection, cross-over and mutation operators to evolve viruses, and (3) a code generator that converts the genotype of a newly evolved virus to its machine-level code. In this paper, we validate the notion of evolution in viruses on a well-known virus family, called Bagle. The results of our proof-of-concept study show that we have successfully evolved previously unknown viruses as well as known variants of Bagle starting from a random population of individuals. To the best of our knowledge, this is the first empirical work on evolution of computer viruses. In the future, we want to improve this proof-of-concept framework into a full-blown virus evolution engine.

16:35 – 17:00     Bringing Evolutionary Computation to Industrial Applications with GUIDE

Luis Da Costa, Marc Schoenauer

Evolutionary Computation; Software Development; Algorithms.

Evolutionary Computation is an exciting research field with the power to assist researchers in the task of solving hard optimization problems (i.e., problems where the exploitable knowledge about the solution space is very hard and/or expensive to obtain). However, Evolutionary Algorithms are rarely used outside the circle of knowledgeable practitioners, and thus have not achieved the status of a tool useful enough to assist "general" researchers. We think that part of the blame lies in the lack of practical implementations of research efforts reflecting a unifying common ground in the field. In this communication we present GUIDE, a software framework incorporating some of the latest results from the EC research community and offering a Graphical User Interface that allows the straightforward manipulation of evolutionary algorithms. From a high-level description provided by the user it generates the code that is needed to run an evolutionary algorithm in a specified existing library (as of March 2009, EO and ECJ are the possible targeted libraries). GUIDE's GUI allows users to acquire a straightforward understanding of EC ideas, while at the same time providing them with a sophisticated research tool. In this communication we present 3 industrial case studies using GUIDE as one of the main tools in order to perform software testing on large, complex systems.

17:00 – 17:25     IMAD: In-Execution Malware Analysis and Detection

Syed Bilal Mehdi, Ajay Kumar Tanwani, Muddassar Farooq

System Call, Malware, Classification

The sophistication of computer malware is becoming a serious threat to the information technology infrastructure, which is the backbone of modern e-commerce systems. We, therefore, advocate the need for developing sophisticated, efficient, and accurate malware classification techniques that can detect malware on the first day of its launch -- commonly known as ``zero-day malware detection''. To this end, we present a new technique, IMAD, that can not only identify zero-day malware without any a priori knowledge but can also detect a malicious process while it is executing (in-execution detection). The capability of in-execution malware detection empowers an operating system to immediately kill a malicious process before it can cause any significant damage. IMAD is a real-time, dynamic, efficient, in-execution zero-day malware detection scheme, which analyzes the system call sequence of a process to classify it as malicious or benign. We use a Genetic Algorithm to optimize system parameters of our scheme. The evolutionary algorithm is evaluated on real world synthetic data extracted from a Linux system. The results of our experiments show that IMAD achieves more than 90% accuracy in classifying in-execution processes as benign or malicious. Moreover, our scheme can classify approximately 50% of malicious processes within the first 20% of their system calls.

17:25 – 17:50     An Extended Evolution Strategy for the Characterization of Fracture Conductivities from Well Tests

Jérémie Bruyelle, Arnaud Lange

Inverse problem, CMA-ES, Evolutionary algorithm, Fracture, Characterization

The characterization of fractured reservoirs involves: (1) the design of geological models integrating statistical and/or deterministic fracture properties; (2) the validation of flow simulation models by calibrating with dynamic field data, e.g. well tests. The latter validation step is critical since it also validates the underlying geological model, it allows one to reduce some uncertainties among the fracture geometrical and distribution properties, and it is often the only means to characterize fracture conductivities. However this is usually an ill-posed inverse problem: field data are usually not sufficient to fully characterize the fracture system. It is of interest to explore the parameter space effectively, so that multiple solutions may be characterized, and many production development scenarios may be studied. This paper presents a well-test inversion method to characterize fracture-set conductivities. The Covariance Matrix Adaptation-Evolution Strategy (CMA-ES) has been used as the optimization algorithm. It has been tested against some local optimization algorithms for comparison, and extended in order to detect several solutions simultaneously using a local proxy of the response surface. Moreover, uncertainty analyses are performed in regions of interest. Applications are presented for a fracture system with two fracture sets.

 

GBML-1: Best Paper Nominees

Room: Les Courants

Session Chair: Pier Luca Lanzi (Politecnico di Milano)                                     

16:10 – 16:35     Neural Network Ensembles for Time Series Forecasting (Best Paper Nominee)

Victor M Landassuri-Moreno, John A. Bullinaria

Evolutionary Programming, Evolutionary Neural Networks, Ensemble Neural Networks, Time Series Forecasting

This work provides an analysis of using the evolutionary algorithm EPNet to create ensembles of artificial neural networks to solve a range of forecasting tasks. Several previous studies have tested the EPNet algorithm in the classification field, taking the best individuals to solve the problem and creating ensembles to improve the performance. But no studies have analyzed the behavior of the algorithm in detail for time series forecasting, nor used ensembles to try to improve the predictions. Thus, the aim of this work is to compare the ensemble approach, using two linear combination methods to calculate the output, against the best individual found. Since there are several parameters to adjust, experiments are set up to optimize them and improve the performance of the algorithm. The algorithm is tested on 21 time series of different behaviors. The experimental results show that, for time series forecasting, it is possible to improve the performance by using the ensemble method rather than using the best individual. This demonstrates that the information contained in the EPNet population is better than the information carried by any one individual.

16:35 – 17:00     Learning Sensorimotor Control Structures with XCSF (Best Paper Nominee)

Martin V. Butz, Gerulf K.M. Pedersen, Patrick O. Stalph

Learning Classifier Systems, XCS, Function Approximation, Recursive Least Squares, LWPR, Dynamic Systems

XCS has been shown to be an effective genetics-based classification, data mining, and reinforcement learning tool. The system learns suitable, compact, maximally general problem solutions online. In the robotics and cognitive systems domains, however, applications of XCSF are very sparse and mostly restricted to small, symbolic problems. Recently, a sensorimotor XCSF system was applied to cognitive arm control. In this paper, we show how this XCSF-based arm-control mechanism can be extended (1) to efficiently exploit redundant behavioral alternatives and (2) to guide the control of dynamic arm plants. The XCSF system encodes redundant alternatives in its inverse control representations and resolves the encoded redundancies dependent on current constraints---such as arm posture preferences---on the fly. An adaptive PD controller translates the XCSF-based direction and distance commands into actual motor commands for dynamic arm control. We apply the complete system to the control of a simulated, physical arm with three degrees of freedom in a two-dimensional environment and to a simulation of the industrial KR16 Kuka arm with an ODE-based physics engine.

17:00 – 17:25     New Entropy Model for Extraction of Structural Information from XCS Population (Best Paper Nominee)

WonKyung Park, Jae C. Oh

Speaker Identification, Learning Classifier Systems, Information theory

We show that when XCS is applied to complex real-valued problems, the XCS populations contain structural information. This information exists in the underlying classifier space as the degree of uncertainty associated with the problem space. Therefore, we can use structural information to improve the overall system performance. We take an information theoretic approach, introducing a new entropy model for XCS to extract the structural information from dynamically forming substructures. Using this entropy model, we can collectively emphasize or de-emphasize the effect of an individual input. For a complex problem domain, we chose the speaker identification (SID) problem. The SID problem challenges XCS with a complex problem space that may force the learning classifier system to evolve a large and highly overlapping population. The entropy model improved the system performance by up to 5-10% in large-set SID problems. Furthermore, the entropy model has the ability to assist population initialization at the beginning of the learning process by assuring a certain level of overall diversity.

 

COM-4: Theory

Room: Verriere A

Session Chair: Darrell Whitley (Colorado State University)                              

16:10 – 16:35     Partial Neighborhoods of Elementary Landscapes

L. Darrell Whitley, Andrew M. Sutton

Fitness Landscapes, Elementary Landscapes

This paper introduces a new component based model that makes it relatively simple to prove that certain types of landscapes are elementary. We use the model to reconstruct proofs for the Traveling Salesman Problem, Graph Coloring and Min-Cut Graph Partitioning. The same model is then used to efficiently compute the average values over particular partial neighborhoods for these same problems. For Graph Coloring and Min-Cut Graph Partitioning, this computation can be used to focus search on those moves that are most likely to yield an improving move, ignoring moves that cannot yield an improving move. Let $x$ be a candidate solution with objective function value $f(x)$. The mean value of the objective function over the entire landscape is denoted $\bar{f}$. Normally in an elementary landscape one can only be sure that a neighborhood includes an improving move (assuming minimization) if $f(x) > \bar{f}$. However, by computing the expected value of an appropriate partial neighborhood it is sometimes possible to know that an improving move exists in the partial neighborhood even when $f(x) < \bar{f}$.
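For reference, the statement that a full neighborhood of an elementary landscape must contain an improving move whenever $f(x) > \bar{f}$ (for minimization) follows from the standard neighborhood-average identity for elementary landscapes, with $d$ the neighborhood size and $k$ a problem-dependent constant; this identity is quoted here as background, not as a result of the paper:

\operatorname{Avg}_{y \in N(x)} f(y) \;=\; f(x) \;+\; \frac{k}{d}\,\bigl(\bar{f} - f(x)\bigr)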

16:35 – 17:00     A Polynomial Time Computation of the Exact Correlation Structure of k-Satisfiability Landscapes

Andrew M. Sutton, L. Darrell Whitley, Adele E. Howe

Combinatorial Optimization, Fitness Landscapes

The autocorrelation function and related correlation length are statistical quantities that capture the ruggedness of the fitness landscape: a measure that is directly related to the hardness of a problem for certain heuristic search algorithms. Typically, these quantities are estimated empirically by sampling along a random walk. In this paper, we show that a polynomial-time Walsh decomposition of the k-satisfiability evaluation function allows us to compute the exact autocorrelation function and correlation length for any given k-satisfiability instance. We also use the decomposition to compute a theoretical expectation for the autocorrelation function and correlation length over the ensemble of instances generated uniformly at random. We find that this expectation is invariant to the constrainedness of the problem as measured by the ratio of clauses to variables. However, we show that filtered problems, which are typically used in local search studies, have a bias that causes a significant deviation from the expected correlation structure of unfiltered, uniformly generated problems.

17:00 – 17:25     Improved Analysis Methods for Crossover-Based Algorithms

Benjamin Doerr, Madeleine Theile

combinatorial optimization, Crossover, evolutionary algorithm

We deepen the theoretical analysis of the genetic algorithm for the all-pairs shortest path problem proposed by Doerr, Happ and Klein (GECCO 2008). We show that the growth of the paths through crossover operations can be analyzed without the previously used approach of waiting until all paths of a certain length are present in the population. This allows us to prove an improved guarantee for the optimization time of $O(n^{3.25} \log^{1/4}(n))$. We also show that this bound is asymptotically tight. Besides the mere run-time result, our analysis is a step towards understanding how crossover works and how it can be analyzed with rigorous methods.

 

ALIFE-4: Digital Organisms

Room: Verriere B

Session Chair: Stefano Cagnoni (University of Parma, Italy)                                   

16:10 – 16:35     Novelty of Behaviour as a Basis for the Neuro-evolution of Operant Reward Learning

Andrea Soltoggio, Ben Jones

Neuro-evolution, Learning, Adaptation, Neuromodulation, Artificial Life

An agent that deviates from a usual or previous course of action can be said to display novel or varying behaviour. Novelty of behaviour can be seen as the result of real or apparent randomness in decision making, which prevents an agent from repeating exactly past choices. In this paper, novelty of behaviour is considered as an evolutionary precursor of the exploring skill in reward learning, and conservative behaviour as the precursor of exploitation. Novelty of behaviour in neural control is hypothesised to be an important factor in the neuro-evolution of operant reward learning. Agents capable of varying behaviour, as opposed to conservative, when exposed to reward stimuli appear to acquire on a faster evolutionary scale the meaning and use of such reward information. The hypothesis is validated by comparing the performance during evolution in two environments that either favour or are neutral to novelty. Following these findings, we suggest that neuro-evolution of operant reward learning is fostered by environments where behavioural novelty is intrinsically beneficial, i.e. where varying or exploring behaviour is associated with low risk.

16:35 – 17:00     Evolving Quorum Sensing in Digital Organisms

Benjamin E Beckmann, Philip K McKinley

Artificial life, digital evolution, quorum sensing, multi-agent system, cooperative behavior, self-organization

For centuries it was thought that bacteria live asocial lives. However, recent discoveries show many species of bacteria communicate in order to perform tasks previously thought to be limited to multicellular organisms. Central to this capability is quorum sensing, whereby organisms detect cell density and use this information to trigger group behaviors. Quorum sensing is used by bacteria in the formation of biofilms, secretion of digestive enzymes and, in the case of pathogenic bacteria, release of toxins or other virulence factors. Indeed, methods to disrupt quorum sensing are currently being investigated as possible treatments for numerous diseases, including cystic fibrosis, epidemic cholera, and methicillin-resistant Staphylococcus aureus. In this paper we demonstrate the evolution of a quorum sensing behavior in populations of digital organisms. Specifically, we show that digital organisms are capable of evolving a strategy to collectively suppress self-replication when the population density reaches a specific, evolved threshold. We present the evolved genome of an organism exhibiting this behavior and analyze the collective operation of this algorithm. Finally, through a set of experiments we demonstrate that the behavior scales to populations up to 400 times larger than those in which the behavior evolved.

17:00 – 17:25     Liposome Logic

James Smaldon, Natalio Krasnogor, Cameron Alexander, Marian Gheorghe

Biomolecular Computing, Synthetic Biology, Systems Biology, Dissipative Particle Dynamics, CUDA, GPU Computing, Liposome

VLSI research, in its continuous push toward further miniaturisation, is seeking to break through the limitations of current circuit manufacture techniques by moving towards biomimetic methodologies that rely on self-assembly, self-organisation and evodevo-like processes. On the other hand, Systems and Synthetic Biology's quest to achieve ever more detailed (multi)cell models is relying more and more on concepts derived from computer science and engineering such as the use of logic gates, clocks and pulse generator analogs to describe a cell's decision-making behavior. This paper is situated at the crossroads of these two enterprises. That is, a novel method of non-conventional computation based on the encapsulation of simple gene regulatory-like networks within liposomes is described. Three transcription Boolean logic gates were encapsulated and simulated within liposomes self-assembled from DMPC (dimyristoylphosphatidylcholine) amphiphiles using an implementation of Dissipative Particle Dynamics (DPD) created with the NVIDIA CUDA framework, and modified to include a simple collision chemistry in a stochastic environment. The response times of the AND, OR and NOT gates were shown to be positively affected by the encapsulation within the liposome inner volume.

 

 

 

 

Paper Presentations and Special Sessions         Saturday 11 July 8:30 – 10:10

 

 

GP-6: New Paradigms and Grammatical Evolution

Room: Cartier A

Session Chair: Robert I McKay (Seoul National University)                             

8:30 – 8:55         Genetic Programming in the Wild: Evolving Unrestricted Bytecode

Michael Orlov, Moshe Sipper

Java bytecode, Software evolution

We describe a methodology for evolving Java bytecode, enabling the evolution of *extant*, *unrestricted* Java programs, or programs in other languages that compile to Java bytecode. Bytecode is evolved directly, without any intermediate genomic representation. Our approach is based upon the notion of *compatible crossover*, which produces correct programs by performing operand stack-, local variables-, and control flow-based compatibility checks on source and destination bytecode sections. This is in contrast to existing work that uses restricted subsets of the Java bytecode instruction set as a representation language for individuals in genetic programming. Given the huge universe of unrestricted, *as is* Java bytecode programs, our work enables the application of evolution within this realm. We experimentally validate our methodology by both extensively testing the correctness of compatible crossover on arbitrary bytecode, and by running evolution on a program that exploits the richness of the Java virtual machine architecture and type system.

8:55 – 9:20         Graph Structured Program Evolution with Automatically Defined Nodes

Shinichi Shirakawa, Tomoharu Nagao

Automatic Programming, Genetic Programming, Automatically Defined Function, Graph-based Genetic Programming, Graph Structured Program Evolution, Evolutionary Algorithm, Genetic Algorithm, Recursive Program

Currently, various automatic programming techniques have been proposed and applied in various fields. Graph Structured Program Evolution (GRAPE) is a recent automatic programming technique with graph structure. This technique can generate complex programs automatically. In this paper, we introduce the concept of automatically defined functions, called automatically defined nodes (ADN), in GRAPE. The proposed GRAPE program has a main program and several subprograms. We verified the effectiveness of ADN through several program evolution experiments, and report the results of evolution of recursive programs using GRAPE modified with ADN.

9:20 – 9:45         Evolving Stochastic Processes Using Feature Tests and Genetic Programming

Brian J. Ross, Janine Imada

stochastic process, process algebra, time series, feature tests, genetic programming

The synthesis of stochastic processes using genetic programming is investigated. Stochastic process behaviours take the form of time series data, in which quantities of interest vary over time in a probabilistic, and often noisy, manner. A suite of statistical feature tests are performed on time series plots from example processes, and the resulting feature values are used as targets during evolutionary search. A process algebra, the stochastic pi-calculus, is used to denote processes. Investigations consider variations of GP representations for a subset of the stochastic pi-calculus, for example, the use of channel unification, and various grammatical constraints. Target processes of varying complexity are studied. Results show that the use of grammatical GP with statistical feature tests can successfully synthesize stochastic processes. Success depends upon a selection of appropriate feature tests for characterizing the target behaviour, and the complexity of the target process.

9:45 – 10:10       Shape Grammars and Grammatical Evolution for Evolutionary Design

Michael O'Neill, John Mark Swafford, James McDermott, Jonathan Byrne, Anthony Brabazon, Elizabeth Shotton, Ciaran McNally, Martin Hemberg

shape grammars, evolutionary design, grammatical genetic programming, grammatical evolution

We describe the first steps in the adoption of Shape Grammars with Grammatical Evolution for application in Evolutionary Design. Combining the concepts of Shape Grammars and Genetic Programming opens up the exciting possibility of truly generative design assist tools. In this initial study we provide some background on the adoption of grammar-based Genetic Programming for Evolutionary Design, describe Shape Grammars, and give a brief overview of Grammatical Evolution before detailing how Grammatical Evolution used Shape Grammars to successfully rediscover some benchmark target structures.

 

GA-5: Dynamic Environments and Aging

Room: Cartier B

Session Chair: Marc Ebner                                                                                

8:30 – 8:55         Prediction in Evolutionary Algorithms for Dynamic Environments Using Markov Chains and Nonlinear Regression

Anabela Simões, Ernesto Costa

Evolutionary Algorithms, Dynamic Environments, Prediction, Markov Chains, Nonlinear Regression

The inclusion of prediction mechanisms in Evolutionary Algorithms (EAs) used to solve dynamic environments allows the future to be forecast, so that the algorithm can be prepared for the changes. Prediction is a difficult task, but if some recurrence is present in the environment, it is possible to apply statistical methods which use information from the past to estimate the future. In this work we enhance a previously proposed computational architecture, incorporating a new predictor based on nonlinear regression. The system uses a memory-based EA to evolve the best solution and a predictor module based on Markov chains to estimate which possible environments will appear in the next change. Another prediction module is responsible for estimating when the next change will happen. In this work important enhancements are introduced in this module, replacing the linear predictor by a nonlinear one. The performance of the EA is compared using no prediction, using predictions supplied by linear regression and by nonlinear regression. The results show that this new module is very robust, allowing the EA to accurately predict when the next change will occur in different types of change periods.
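As a toy illustration of the nonlinear change-time prediction, the following Python sketch fits a quadratic to the generations at which past changes occurred and extrapolates the next one; the polynomial degree and the use of numpy.polyfit are illustrative assumptions, not the architecture described in the paper.

import numpy as np

def predict_next_change(change_generations, degree=2):
    # Fit a polynomial to the observed change points and extrapolate one step ahead.
    k = np.arange(1, len(change_generations) + 1)
    coeffs = np.polyfit(k, change_generations, degree)
    return float(np.polyval(coeffs, len(change_generations) + 1))

observed = [10, 22, 38, 58, 82]          # past change generations (nonlinear period)
print(predict_next_change(observed))      # approximately 110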

8:55 – 9:20         Improving Prediction in Evolutionary Algorithms for Dynamic Environments

Anabela Simões, Ernesto Costa

Evolutionary Algorithms, Dynamic Environments, Prediction, Markov Chains, Linear Regression

The addition of prediction mechanisms in Evolutionary Algorithms (EAs) applied to dynamic environments is essential in order to anticipate the changes in the landscape and maximize their adaptability. In previous work, a combination of a linear regression predictor and a Markov chain model was used to enable the EA to estimate when the next change will occur and to predict the direction of the change. Knowing when and how the change will occur, the change was anticipated by introducing useful information before it happened. In this paper we introduce mechanisms to dynamically adjust the linear predictor in order to achieve higher adaptability and robustness. We also extend previous studies by introducing nonlinear change periods in order to evaluate the predictor's accuracy.

9:20 – 9:45         Steady-State ALPS for Real-Valued Problems

Gregory S. Hornby

age, premature convergence, numerical optimization, evolutionary algorithm

The objectives of this paper are to describe a steady-state version of the Age-Layered Population Structure (ALPS) Evolutionary Algorithm (EA) and to compare it against other GAs on real-valued problems. Motivation for this work comes from our previous success in demonstrating that a generational version of ALPS greatly improves search performance on a Genetic Programming problem. In making steady-state ALPS, some modifications were made to the method for calculating age and the method for moving individuals up age layers. To demonstrate that ALPS works well on real-valued problems we compare it against CMA-ES and Differential Evolution (DE) on five challenging, real-valued functions and on one real-world problem. While CMA-ES and DE outperform ALPS on the two unimodal test functions, ALPS is much better on the three multimodal test problems and on the real-world problem. Further examination shows that, unlike the other GAs, ALPS maintains a genotypically diverse population throughout the entire search process. These findings strongly suggest that the ALPS paradigm is better able to avoid premature convergence than the other GAs.
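For readers new to ALPS, the following Python sketch shows the core idea of mapping an individual's age to a layer with a growing age limit; the quadratic limit schedule is one common choice rather than the paper's, and the steady-state modifications to age counting and layer movement are not reproduced.

def alps_layer(age, age_gap=10, n_layers=5):
    # Map an individual's age (generations since its genetic material entered the
    # population) to an ALPS layer.  Layer i accepts ages up to age_gap * (i+1)**2;
    # the top layer is unbounded.
    for i in range(n_layers - 1):
        if age <= age_gap * (i + 1) ** 2:
            return i
    return n_layers - 1

print([alps_layer(a) for a in (5, 35, 80, 1000)])   # -> [0, 1, 2, 4]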

9:45 – 10:10       Adaptive Terrain-Based Memetic Algorithms

Carlos R. B. Azevedo, V. Scott Gordon

Memetic algorithms, adaptation, terrain-based models

The Terrain-Based Memetic Algorithm (TBMA) is a diffusion MA in which the local search (LS) behavior depends on the topological distribution of memetic material over a grid (terrain). In TBMA, the spreading of meme values such as LS step sizes emulates cultural differences which often arise in sparse populations. In this paper, adaptive capabilities of TBMAs are investigated by meme diffusion: individuals are allowed to move in the terrain and/or to affect their environment, by either following more effective memes or by transmitting successful meme values to nearby cells. In this regard, four TBMA versions are proposed and evaluated on three image vector quantizer design instances. The TBMAs are compared with K-Means and a Cellular MA. The results strongly indicate that utilizing dynamically adaptive meme evolution produces the best solutions using fewer fitness evaluations for this application.

 

BIO-1: Including Best Paper Nominees

Room: Vitre

Session Chair: Clare Bates Congdon (University of Southern Maine)                     

8:30 – 8:55         Modeling Evolutionary Fitness for DNA Motif Discovery (Best Paper Nominee)

Sven Rahmann, Tobias Marschall, Frank Behler, Oliver Kramer

Computational biology, Motif discovery, DNA, Transcription factor, Evolutionary algorithms, EA, Evolution strategies, ES, Local search

The motif discovery problem consists of finding over-represented patterns in a collection of sequences. Its difficulty stems partly from the large number of possibilities to define both the motif space to be searched and the notion of over-representation. Since the size of the search space is generally exponential in the motif length, many heuristic methods, including evolutionary algorithms, have been developed. However, comparatively little attention has been devoted to the adequate evaluation of motif quality, especially when comparing motifs of different lengths. We propose an evolution strategy to solve the motif discovery problem based on a new fitness function that simultaneously takes into account (1) the number of motif occurrences, (2) the motif length, and (3) its information content. Experimental results show that the proposed method succeeds in uncovering the correct motif positions and length with high accuracy.
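To make the three ingredients of the fitness tangible, here is a toy Python sketch that scores a set of aligned motif occurrences by their count, length, and information content; the weights and the exact combination are illustrative assumptions, not the fitness function proposed in the paper.

import math
from collections import Counter

def information_content(occurrences):
    # Total information content (bits) of the position weight matrix built from
    # aligned occurrences of a candidate DNA motif (4-letter alphabet assumed).
    length = len(occurrences[0])
    total = 0.0
    for pos in range(length):
        counts = Counter(seq[pos] for seq in occurrences)
        n = sum(counts.values())
        total += 2.0 + sum((c / n) * math.log2(c / n) for c in counts.values())
    return total

def motif_fitness(occurrences, alpha=1.0, beta=1.0):
    # Reward many occurrences and high per-column information, normalised by length.
    length = len(occurrences[0])
    return alpha * math.log(len(occurrences)) + beta * information_content(occurrences) / length

print(motif_fitness(["ACGTA", "ACGTA", "ACGAA", "ACCTA"]))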

8:55 – 9:20         Learning Regulation Functions of Metabolic Systems by Artificial Neural Networks (Best Paper Nominee)

Alberto Castellini, Vincenzo Manca

Systems Biology, Metabolic Systems, P Systems, Biological Modeling, Neural Networks, Memetic Algorithms, Regression, Optimization, Evolutionary Algorithms

Metabolic P systems, also called MP systems, are discrete dynamical systems which proved to be effective for modeling biological systems. Their dynamics is generated by means of a metabolic algorithm based on "flux regulation functions". A significant problem related to the generation of MP models from experimental data concerns the synthesis of these functions. In this paper we introduce a new approach to the synthesis of MP fluxes relying on neural networks as universal function approximators, and on evolutionary algorithms as learning techniques. This methodology is successfully tested in the case study of mitotic oscillator in early amphibian embryos.

9:20 – 9:45         Multiobjectivization for Parameter Estimation: a Case-Study on the Segment Polarity Network of Drosophila

Tim Hohm, Eckart Zitzler

Parameter Estimation, Multiobjectivization

Mathematical modeling for gene regulative networks (GRNs) provides an effective tool for hypothesis testing in biology. A necessary step in setting up such models is the estimation of model parameters, i.e., an optimization process during which the difference between model output and given experimental data is minimized. This parameter estimation step is often difficult, especially for larger systems due to often incomplete quantitative data, the large size of the parameter space, and non-linearities in system behavior. Addressing the task of parameter estimation, we investigate the influence multiobjectivization can have on the optimization process. On the example of an established model for the segment polarity GRN in Drosophila, we test different multiobjectivization scenarios compared to a single-objective function proposed earlier for the parameter optimization of the segment polarity network model. Since, instead of a single optimal parameter setting, a set of optimal parameter settings exists for this GRN, the comparison of the different optimization scenarios focuses on the capabilities of the different scenarios to identify optimal parameter settings showing good diversity in the parameter space. By embedding the objective functions in an evolutionary algorithm (EA), we show the superiority of the multiobjective approaches in exploring the model's parameter space.

9:45 – 10:10       Biologically-implemented Genetic Algorithm for Protein Engineering

Hiroshi Someya, Kensaku Sakamoto, Masayuki Yamamura

genetic algorithm, protein engineering, molecular evolution, biological implementation, DNA computing, combinatorial optimization, modeling, simulation

Protein engineering, developing novel proteins with a desired activity, has become increasingly important in many fields. This paper presents two studies in protein engineering: (i) a biological implementation of a genetic algorithm, with an observed in vitro evolution, and (ii) its preliminary computer simulation using a prototypical probabilistic model based on a random walk. The steady evolution of the fitness distribution of the mutant proteins that appeared in the biological experiments has provided some convincing evidence about the search behavior and the fitness landscape. The computer simulation and the simple probabilistic model have indicated their future potential for providing a practical alternative to the time-consuming manual operations in the biological experiments. Successful experimental results in the two studies have raised expectations of their further development and mutually beneficial interactions.

 

Late Breaking Papers 1: New Frameworks

Room: Bonsecours

Session Chair:                                                                        

8:30 – 8:45         Opposition Based Initialization in Particle Swarm Optimization (O-PSO)

Hajira Jabeen, Zunera Jalil, Abdul Rauf Baig

Opposition based learning, PSO, Swarm Intelligence, Optimization, Initialization

Particle Swarm Optimization, a population-based optimization technique, has been used in a wide range of application areas to solve optimization problems. This paper presents a new algorithm for population initialization in standard PSO called Opposition-based Particle Swarm Optimization (O-PSO). The performance of the proposed initialization algorithm is compared with the existing PSO variants on several benchmark functions, and the experimental results reveal that O-PSO outperforms existing approaches to a large extent.
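A minimal Python sketch of opposition-based initialization: draw a random population, form each point's opposite (lower + upper - x per dimension), and keep the fitter half of the union. Velocity initialization and the rest of O-PSO are omitted, and the parameter names are illustrative, so this is only a sketch of the general principle.

import random

def opposition_init(n_particles, lower, upper, fitness):
    # Keep the best half of {random points} union {their opposites}, assuming minimization.
    dim = len(lower)
    pop = [[random.uniform(lower[d], upper[d]) for d in range(dim)]
           for _ in range(n_particles)]
    opposites = [[lower[d] + upper[d] - x[d] for d in range(dim)] for x in pop]
    return sorted(pop + opposites, key=fitness)[:n_particles]

sphere = lambda x: sum(v * v for v in x)
swarm = opposition_init(20, [-5.0] * 3, [5.0] * 3, sphere)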

8:45 – 9:00         PEPPA - A Project for Evolutionary Predator Prey Algorithms

Hendrik Blom, Christiane Küch, Katja Losemann, Chris Schwiegelshohn

Evolutionary Algorithms, Multi-objective Optimization, Object-oriented Programming, Framework

The predator-prey model---based on aspects of the natural interplay of predators and prey---has become an alternative method for tackling multi-objective optimization problems. In this process, each predator targets a single objective, and it is expected that the joint influence of all predators affects the prey population in such a way that good solutions survive. This paper describes PEPPA, a modular software framework for designing and analyzing predator-prey models. It allows the modeling of arbitrary world environments, complex predator behavior and dynamic prey adaptation. Further, PEPPA provides various tools for modeling, visualization and parallelization. We explain the architecture and handling of the framework and provide exemplary results on a simple multi-objective benchmark problem.

9:00 – 9:15         A New Multi-Objective Algorithm, Pareto Archived DDS

Masoud Asadzadeh, Bryan A. Tolson

Pareto Archive, Test Problems, Dynamically Dimensioned Search, Parsimony, Crowding Distance, Convergence, Multi-Objective optimization

The Dynamically Dimensioned Search (DDS) continuous global optimization algorithm [5] is modified to solve continuous multi-objective unconstrained optimization problems. Inspired by the Pareto Archived Evolution Strategy (PAES), the proposed multi-objective optimization algorithm, PA-DDS, uses DDS as a search engine and archives all the non-dominated solutions during the search. In order to maintain the diversity of solutions, PA-DDS, which is single-solution based, samples from less crowded parts of the external set of non-dominated solutions in each iteration. This tool inherits the parsimonious characteristic of DDS, so it has only one algorithm parameter from DDS, which does not need tuning, and one new parameter that defines the portion of the computational budget for finding individual minima. PA-DDS uses a crowding distance measure to sample from less populated parts of the tradeoff. The performance of the proposed tool is assessed on two test problems, ZDT4 and ZDT6 [8], that have multiple local Pareto fronts. Results show that PA-DDS is promising relative to two high-quality benchmark algorithms, NSGA-II [3, 7] and AMALGAM [7].
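For completeness, here is the standard crowding distance measure over a set of objective vectors in Python; PA-DDS uses this kind of measure to bias sampling toward less crowded parts of its archive, but the sketch below is generic and not the authors' code.

def crowding_distance(front):
    # Crowding distance of each point in a list of objective vectors.
    n = len(front)
    if n == 0:
        return []
    dist = [0.0] * n
    for obj in range(len(front[0])):
        order = sorted(range(n), key=lambda i: front[i][obj])
        dist[order[0]] = dist[order[-1]] = float("inf")
        span = front[order[-1]][obj] - front[order[0]][obj] or 1.0
        for k in range(1, n - 1):
            dist[order[k]] += (front[order[k + 1]][obj] - front[order[k - 1]][obj]) / span
    return dist

print(crowding_distance([(0.0, 1.0), (0.3, 0.6), (0.5, 0.5), (1.0, 0.0)]))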

9:15 – 9:30         NEAT in Increasingly Non-Linear Control Situations

Matthias J. Linhardt, Martin V. Butz

Neuroevolution, Adaptive control, Dynamic control, NEAT

Evolution of neural networks, as implemented in NEAT, has proven itself successful on a variety of low-level control problems such as pole balancing and vehicle control. Nonetheless, high-level control problems still seem to trouble neuroevolution approaches. This paper presents such a complex task and explores how different aspects of problem difficulty have influences of varying strength on NEAT's performance. Based on these findings, we discuss why certain problem domains are less amenable to neuroevolution approaches, which may provide useful insights into how to design the next generation of neuroevolution algorithms.

9:30 – 9:45         A Concurrent Evolutionary Approach for Rich Combinatorial Optimization

Teodor Gabriel Crainic, Gloria Cerasela Crisan, Michel Gendreau, Nadia Lahrichi, Walter Rei

combinatorial optimization, multi-attribute problems, concurrent evolution, cooperative algorithms, rich vrp

In this paper, we propose a meta-heuristic method based on the concurrent evolution of heterogeneous populations, decomposition/recomposition principles and specialized operators to address multi-attribute, rich, combinatorial optimization problems. We illustrate the method through an application to a rich Vehicle Routing Problem that considers duration and capacity constraints as well as time windows, multiple periods and multiple depots.

 

PES-2: Implementation

Room: Victoria

Session Chair: Enrique Alba (University of Málaga)                                         

8:30 – 8:55         Coarse Grain Parallelization of Evolutionary Algorithms on GPGPU Cards with EASEA

Ogier Maitre, Laurent A Baumes, Nicolas Lachiche, Avelino Corma, Pierre Collet

Parallelization, evolutionary computation, genetic algorithms, GPGPU, GPU, Graphic Processing Unit, EASEA, many-core, multi-core

This paper presents a straightforward implementation of a standard evolutionary algorithm that evaluates its population in parallel on a GPGPU card. Tests done on a benchmark and a real world problem using an old NVidia 8800GTX card and a newer but not top of the range GTX260 card show a roughly 30x (resp. 100x) speedup for the whole algorithm compared to the same algorithm running on a standard 3.6GHz PC. Knowing that much faster hardware is already available, this opens new horizons to evolutionary computation, as search spaces can now be explored 2 or 3 orders of magnitude faster, depending on the number of GPGPU cards used. Since these cards remain very difficult to program, the know-how has been integrated into the EASEA language, which can now output code for GPGPU (-cuda option).

8:55 – 9:20         pCMALib: a Parallel FORTRAN 90 Library for the Evolution Strategy with Covariance Matrix Adaptation

Christian L. Mueller, Benedikt Baumgartner, Georg Ofenbeck, Birte Schrader, Ivo F. Sbalzarini

CMA-ES, evolution strategies, software library, parallel island model

We present pCMALib, a parallel software library that implements the Evolution Strategy with Covariance Matrix Adaptation (CMA-ES). The library is written in Fortran 90/95 and uses the Message Passing Interface (MPI) for efficient parallelization on shared and distributed memory machines. It allows single CMA-ES optimization runs, embarrassingly parallel CMA-ES runs, and coupled parallel CMA-ES runs using a cooperative island model. As one instance of an island model CMA-ES, the recently presented Particle Swarm CMA-ES (PS-CMA-ES) is included using collaborative concepts from Swarm Intelligence for the migration model. Special attention has been given to an efficient design of the MPI communication protocol, a modular software architecture, and a user-friendly programming interface. The library includes a Matlab interface and is supplemented with an efficient Fortran implementation of the official CEC 2005 set of 25 real-valued benchmark functions. This is the first freely available Fortran implementation of this standard benchmark test suite. We present test runs and parallel scaling benchmarks on Linux clusters and multi-core desktop computers, showing good parallel efficiencies and superior computational performance compared to the reference implementation.

9:20 – 9:45         Characterizing the Genetic Programming Environment for FIFTH (GPE5) on a High Performance Computing Cluster

Kenneth Holladay

Genetic Programming, High Performance Computing

Solving complex, real-world problems with genetic programming (GP) can require extensive computing resources. However, the highly parallel nature of GP facilitates using a large number of resources simultaneously, which can significantly reduce the elapsed wall clock time per GP run. This paper explores the performance characteristics of an MPI version of the Genetic Programming Environment for FIFTH (GPE5) on a high performance computing cluster. The implementation is based on the island model with each node running the GP algorithm asynchronously. In particular, we examine the effect of several configurable properties of the system including the ratio of migration to crossover, the migration cycle of programs between nodes, and the number of processors used. The problems employed in the study were selected from the fields of symbolic regression, finite algebra, and digital signal processing.

9:45–10:10     An Asynchronous Parallel Implementation of a Cellular Genetic Algorithm for Combinatorial Optimization

Gabriel Luque, Enrique Alba, Bernabé Dorronsoro

Asynchronous cellular GAs, parallelism, combinatorial optimization

Cellular genetic algorithms (cGAs) are characterized by their grid-structured population, in which individuals can only interact with their neighbors. These algorithms have been shown to achieve high numerical performance thanks to the good exploration/exploitation balance they maintain in the search space. Although cGAs seem very appropriate for parallelism, few works have proposed or studied parallel models for clusters of computers. This is probably because the model requires a high level of communication between sub-populations due to the tight interactions among individuals. Such parallel versions are nevertheless needed to cope with the high computational requirements of current real-world problems. This article proposes a new parallel cellular genetic algorithm which maintains (or, thanks to its asynchronicity, even improves) the numerical behaviour of a serial cGA, while at the same time achieving an important reduction in the execution time needed to find the optimal solution.
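For illustration only, the grid-constrained interaction that characterizes cGAs can be sketched as follows; this is a minimal serial mock-up with an assumed OneMax fitness and von Neumann neighbourhood, not the authors' asynchronous parallel implementation.

    import random

    SIZE, GENES = 8, 16                      # toy 8x8 grid, hypothetical sizes
    grid = [[[random.randint(0, 1) for _ in range(GENES)]
             for _ in range(SIZE)] for _ in range(SIZE)]

    def fitness(ind):                        # placeholder fitness: OneMax
        return sum(ind)

    def neighbours(i, j):                    # von Neumann neighbourhood on a torus
        return [grid[(i - 1) % SIZE][j], grid[(i + 1) % SIZE][j],
                grid[i][(j - 1) % SIZE], grid[i][(j + 1) % SIZE]]

    def step():
        # Asynchronous sweep: offspring immediately replace their parents in place.
        for i in range(SIZE):
            for j in range(SIZE):
                mate = max(neighbours(i, j), key=fitness)
                cut = random.randrange(GENES)
                child = grid[i][j][:cut] + mate[cut:]      # one-point crossover
                if random.random() < 0.05:                 # bit-flip mutation
                    k = random.randrange(GENES)
                    child[k] ^= 1
                if fitness(child) >= fitness(grid[i][j]):  # replace if not worse
                    grid[i][j] = child

    for _ in range(50):
        step()
    print(max(fitness(ind) for row in grid for ind in row))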

 

ACO-2: Particle Swarm Optimization

Room: Versailles

Session Chair: Riccardo Poli (University of Essex)                                          

8:30–8:55     The Singly-Linked Ring Topology for the Particle Swarm Optimization Algorithm

Angel Eduardo Muñoz Zavala, Arturo Hernández Aguirre, Enrique Raúl Villa Diharce

PSO, Neighborhood, Singly-linked ring, Global Optimization

This paper introduces a new neighborhood structure for Particle Swarm Optimization, called the Singly-Linked Ring. The approach proposes a neighborhood whose members share information at different rates. The objective is to avoid premature convergence of the flock and stagnation in local optima. The approach is applied to a set of global optimization problems commonly used in the literature. The singly-linked structure is compared against state-of-the-art neighborhood structures. The proposal is easy to implement, and both its results and its convergence performance are better than those of the other structures.
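The contrast with the usual ring neighbourhood can be sketched in a few lines of Python; note that the exact neighbour offsets of the singly-linked ring are an assumption here, the point being only that information flows asymmetrically, so particle j may inform particle i without i informing j at the same rate.

    def doubly_linked_ring(i, n):
        # Standard ring: each particle is informed by both adjacent particles.
        return [(i - 1) % n, (i + 1) % n]

    def singly_linked_ring(i, n):
        # Hypothetical singly-linked variant: neighbours taken in one direction only.
        return [(i + 1) % n, (i + 2) % n]

    n = 10
    print(doubly_linked_ring(0, n))   # [9, 1]
    print(singly_linked_ring(0, n))   # [1, 2]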

8:55–9:20     Swarming to Rank for Information Retrieval

Ernesto Diaz-Aviles, Wolfgang Nejdl, Lars Schmidt-Thieme

Learning to Rank, Particle Swarm Optimization, Ranking Function, Swarm Intelligence

This paper presents an approach to automatically optimize the retrieval quality of ranking functions. Taking a Swarm Intelligence perspective, we present a novel method, SwarmRank, which is well-founded in a Particle Swarm Optimization framework. SwarmRank learns a ranking function by optimizing the combination of various types of evidence, such as content and hyperlink features, while directly maximizing Mean Average Precision, a widely used evaluation measure in Information Retrieval. Experimental results on well-established Learning to Rank benchmark datasets show that our approach significantly outperformed standard approaches (i.e., BM25) that only use basic statistical information derived from document collections, and is found to be competitive with Ranking SVM and RankBoost in the task of ranking relevant documents at the very top positions.

9:20–9:45     Visualizing the Search Process of Particle Swarm Optimization

Yong-Hyuk Kim, Kang Hoon Lee, Yourim Yoon

Particle swarm optimization, visualization, data mapping

Understanding the search process of particle swarm optimization over a high-dimensional domain is a hard problem. Visualizing the entire search process allows a better understanding of how to tune the algorithm. For this investigation, we adopt Sammon's mapping, a well-known distance-preserving mapping. We demonstrate the usefulness of the proposed methodology by applying it to several function optimization problems.
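For reference, Sammon's mapping minimises the following stress between the pairwise distances d*_ij in the original high-dimensional space and d_ij in the low-dimensional visualization; this is the standard definition of the mapping, not a detail specific to the paper:

    E = \frac{1}{\sum_{i<j} d^{*}_{ij}} \sum_{i<j} \frac{\left(d^{*}_{ij} - d_{ij}\right)^{2}}{d^{*}_{ij}}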

9:45–10:10     VISPLORE: A Toolkit to Explore Particle Swarms by Visual Inspection

Namrata Khemka, Christian Jacob

Visualization, population-based methods, particle swarm optimization, user interfaces

VISPLORE is an interactive toolkit to visualize data generated from population-based optimization algorithms. In particular, we demonstrate VISPLORE's capabilities by exploring solutions from particle swarm optimization on different levels - from individual solutions, to populations (as sets of solutions), to experiments (as sets of populations), and to collections of experiments. Users can control aspects of the various visual representations to view multi-dimensional data produced over time. Furthermore, our application includes a large range of automatic skimming tools, controlled by manual and automated sliders, and supports interactive manipulations. By using dynamic visualization techniques, we provide instant visualizations customizable by the user for data exploration tasks.

 

RWA-6: Embedded Systems

Room: St. Charles

Session Chair: Giovanni Squillero                                                                      

8:30–8:55     Evolutionary Algorithms for the Mapping of Pipelined Applications onto Heterogeneous Embedded Systems

Marco Branca, Lorenzo Camerini, Fabrizio Ferrandi, Pier Luca Lanzi, Christian Pilato, Donatella Sciuto, Antonino Tumeo

FPGA, pipelining, mapping, BOA, GA, TS, SA

In this paper, we compare four algorithms for the mapping of pipelined applications on a heterogeneous multiprocessor platform implemented using Field Programmable Gate Arrays (FPGAs) with customizable processors. Initially, we describe the framework and the model of pipelined application we adopted. Then, we focus on the problem of mapping a set of pipelined applications onto a heterogeneous multiprocessor platform and consider four search algorithms: Tabu Search, Simulated Annealing, Genetic Algorithms, and the Bayesian Optimization Algorithm. We compare the performance of these four algorithms on a set of synthetic problems and on two real-world applications (JPEG image encoding and ADPCM sound encoding). Our results show that on our framework the Bayesian Optimization Algorithm outperforms the other three methods for the mapping of pipelined applications.

8:55–9:20     A Highly Configurable Test System for Evolutionary Black-Box Testing of Embedded Systems

Peter M Kruse, Joachim Wegener, Stefan Wappler

Testing infrastructure, Evolutionary Testing, Hardware-in-the-loop-testing, Antilock-braking-system, Functional Testing

During the development of electronic control units (ECU) in domains like the automotive industry, tests are performed on various test platforms, such as model-in-the-loop, software-in-the-loop, processor-in-the-loop, and hardware-in-the-loop platforms in order to find faults in early development stages. Test cases must be specified to verify the properties demanded of the system on these test platforms. This is an expensive and non-trivial task. Evolutionary black-box testing, a recent approach to automating the creation of interesting test cases, can solve this task completely automatically. This paper describes our evolutionary test system and how to apply it to the test of functional and non-functional properties of embedded systems. Our system supports the aforementioned test platforms and allows for the reuse of the generated test cases across them. In a case study with an antilock braking system, we demonstrate the operation of the system.

9:20–9:45     Mixed Heuristic and Mathematical Programming Using Reference Points for Dynamic Data Types Optimization in Multimedia Embedded Systems

José L. Risco-Martín, Ignacio Hidalgo, David Atienza, Juan Lanchares, Oscar Garnica

Multi-Objective Optimization, Particle Swarm Optimization, Evolutionary Computation, Mathematical Programming, Embedded Systems Design

New multimedia embedded applications are becoming increasingly dynamic. Thus, they cannot rely only on static data allocation, and must employ Dynamically-allocated Data Types (DDTs) to store their data and efficiently use the limited physical resources of embedded devices. However, the optimization of the DDTs for each target embedded system is a very time-consuming process due to the large design space of possible DDT implementations and selections for the memory hierarchy of each specific embedded device. Thus, new suitable exploration methods for embedded design metrics (memory accesses, usage and power consumption) need to be developed. In this paper we analyze the benefits of two different exploration techniques for DDT optimization: Multi-Objective Particle Swarm Optimization (MOPSO) and a Mixed Integer Linear Program (MILP). Furthermore, we propose a novel MOPSO exploration method, OMOPSO*, which uses MILP solutions as reference points to guide the MOPSO exploration and reach solutions closer to the true Pareto front. Our experiments with two real-life embedded applications show that our algorithm achieves 40% better coverage and better solution sets than state-of-the-art optimization methods for DDTs (MOGAs and other MOPSOs).

9:45–10:10     Optimization of Dynamic Memory Managers for Embedded Systems Using Grammatical Evolution

José L. Risco-Martín, David Atienza, Rubén Gonzalo, Ignacio Hidalgo

Grammatical Evolution, Genetic Programming, Evolutionary Computation, Embedded Systems Design

New portable consumer embedded devices must execute multimedia applications (e.g., 3D games, video players, and signal processing software) that demand extensive memory accesses and memory usage at low energy consumption. Moreover, they must heavily rely on Dynamic Memory (DM) due to the unpredictability of the input data and system behavior. Within this context, consistent design methodologies that can efficiently tackle the complex DM behavior of these multimedia applications are greatly needed. In this article, we present a novel design framework, based on genetic programming, which allows us to design custom DM management mechanisms, optimizing memory accesses, memory usage and energy consumption for the target embedded system. First, we describe the large design space of DM management decisions for multimedia embedded applications. Then, we propose a suitable way to traverse this design space using grammatical evolution and construct custom DM managers that minimize the DM used by these highly dynamic applications. As a result, our methodology achieves significant improvements in memory accesses (23% less on average), memory usage (38% less on average) and energy consumption (reductions of 21% on average) in real case studies over the current state-of-the-art DM managers used for these types of dynamic applications. To the best of our knowledge, this is the first approach to efficiently design DM managers for embedded systems using evolutionary computation and grammatical evolution.

 

GBML-5: Other learning paradigms

Room: Les Courants

Session Chair: Tim Kovacs (University of Bristol)                                             

8:30–8:55     Uncertainty Handling CMA-ES for Reinforcement Learning

Verena Heidrich-Meisner, Christian Igel

reinforcement learning, direct policy search, covariance matrix adaptation evolution strategy, uncertainty handling

The covariance matrix adaptation evolution strategy (CMA-ES) has proven to be a powerful method for reinforcement learning (RL). Recently, the CMA-ES has been augmented with an adaptive uncertainty handling mechanism. Because uncertainty is a typical property of RL problems, this new algorithm, termed UH-CMA-ES, is promising for RL. The UH-CMA-ES dynamically adjusts the number of episodes considered in each evaluation of a policy. It controls the signal-to-noise ratio such that it is just high enough for a sufficiently good ranking of candidate policies, which in turn allows the evolutionary learning to find better solutions. This significantly increases the learning speed as well as the robustness without impairing the quality of the final solutions. We evaluate the UH-CMA-ES on fully and partially observable Markov decision processes with random start states and noisy observations. A canonical natural policy gradient method and random search serve as baselines for comparison.

8:55–9:20     Improving Markov Chain Classification Using String Transformations and Evolutionary Search

Timothy Meekhof, Terence Soule, Robert B. Heckendorn

Markov Chain Modeling, Genetic Algorithm Search

Markov chain classification, or n-gram modeling as it is sometimes called, is a very common and powerful tool for many problems that involve sequences of finite tokens. It has been used in a wide range of tasks, including natural language modeling, author identification, protein similarity searches, and even bird-song recognition. Clearly, an improvement in Markov chain classification will have broad implications in many fields. Our new system, called SCS, improves upon Markov chain classification by introducing a preprocessing step in which an arbitrary set of transformation functions are performed on the input sequences. Since the space of possible transformations is unbounded, a genetic algorithm search is used to search for functions that improve classification. We show that the GA is able to consistently find preprocessing functions that substantially improve the performance of the Markov chain model.
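A minimal Python sketch of the Markov chain (n-gram) classification that SCS builds on: score a sequence under each class's transition model and pick the best. The add-one smoothing, toy alphabet and training data are assumptions, and the evolved preprocessing transformations are omitted.

    import math
    from collections import defaultdict

    def train_markov(sequences):
        # First-order Markov model: counts of (current token -> next token).
        counts = defaultdict(lambda: defaultdict(int))
        for seq in sequences:
            for a, b in zip(seq, seq[1:]):
                counts[a][b] += 1
        return counts

    def log_likelihood(seq, counts, alphabet):
        ll = 0.0
        for a, b in zip(seq, seq[1:]):
            total = sum(counts[a].values())
            ll += math.log((counts[a][b] + 1) / (total + len(alphabet)))  # add-one smoothing
        return ll

    def classify(seq, models, alphabet):
        return max(models, key=lambda c: log_likelihood(seq, models[c], alphabet))

    alphabet = "ab"
    models = {"A": train_markov(["ababab", "abab"]),
              "B": train_markov(["aabb", "aaabbb"])}
    print(classify("ababa", models, alphabet))   # -> 'A'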

9:20–9:45     Evolutionary-Based Learning of Generalised Policies for AI Planning Domains

Michelle Galea, John Levine, Dave Humphreys, Henrik Westerberg

Inductive Learning, Decision List Learning, Rule Order Optimisation, Iterative Rule Learning, Automated Planning

This work investigates the application of Evolutionary Computation (EC) to the induction of generalised policies used to solve AI planning problems. A policy is defined as an ordered list of rules that specifies which action to perform under which conditions; a solution (plan) to a planning problem is a sequence of actions suggested by the policy. We compare an evolved policy with one produced by a state-of-the-art approximate policy iteration approach. We discuss the relative merits of the two approaches with a focus on the impact of the knowledge representation and the learning strategy. In particular, we note that a strategy commonly and successfully used for the induction of classification rules, that of Iterative Rule Learning, is not necessarily an optimal strategy for the induction of generalised policies aimed at minimising the number of actions in a plan.

9:45–10:10     A PSO-Based Framework for Dynamic SVM Model Selection

Marcelo N. Kapp, Robert Sabourin, Patrick Maupin

Particle Swarm Optimization, Dynamic Optimization, Support Vector Machines, Model Selection

Support Vector Machines (SVM) are very powerful classifiers in theory, but their efficiency in practice relies on an optimal selection of hyper-parameters. A naïve or ad hoc choice of values for the latter can lead to poor performance in terms of generalization error and high complexity of the parameterized models obtained, in terms of the number of support vectors identified. This hyper-parameter estimation with respect to the aforementioned performance measures is often called the model selection problem in the SVM research community. In this paper we propose a strategy to select optimal SVM models in a dynamic fashion, in order to address the case where knowledge about the environment is updated with new observations and previously parameterized models need to be re-evaluated, and in some cases discarded in favour of revised models. This strategy combines the power of swarm intelligence theory with the conventional grid-search method in order to progressively identify and sort out potential solutions using dynamically updated training datasets. Experimental results demonstrate that the proposed method outperforms the traditional approaches tested against it, while saving considerable computational time.

 

GDS-1: Best Papers in Generative and Developmental Systems

Room: Verriere A

Session Chair: Kenneth Owen Stanley (University of Central Florida)                    

8:30–8:55     Evolving Symmetric and Modular Neural Networks for Distributed Control

Vinod K Valsalam, Risto Miikkulainen

indirect encoding, modularity, symmetry, group theory, multilegged robots, controllers

Problems such as the design of distributed controllers are characterized by modularity and symmetry. However, the symmetries useful for solving them are often difficult to determine analytically. This paper presents a nature-inspired approach called Evolution of Network Symmetry and mOdularity (ENSO) to solve such problems. It abstracts properties of generative and developmental systems, and utilizes group theory to represent symmetry and search for it systematically, making it more evolvable than randomly mutating symmetry. This approach is evaluated by evolving controllers for a quadruped robot in physically realistic simulations. On flat ground, the resulting controllers are as effective as those having hand-designed symmetries. However, they are significantly faster when evolved on inclined ground, where the appropriate symmetries are difficult to determine manually. The group-theoretic symmetry mutations of ENSO were also significantly more effective at evolving such controllers than random symmetry mutations. Thus, ENSO is a promising approach for evolving modular and symmetric solutions to distributed control problems, as well as multiagent systems in general.

8:55–9:20     The Sensitivity of HyperNEAT to Different Geometric Representations of a Problem

Jeff Clune, Charles Ofria, Robert T Pennock

HyperNEAT, NEAT, Geometry, Generative Encoding, Indirect Encoding, Developmental Encoding, Neuroevolution, Artificial Neural Networks

HyperNEAT, a generative encoding for evolving artificial neural networks (ANNs), has the unique and powerful ability to exploit the geometry of a problem (e.g., symmetries) by encoding ANNs as a function of a problem's geometry. This paper provides the first extensive analysis of the sensitivity of HyperNEAT to different geometric representations of a problem. Understanding how geometric representations affect the quality of evolved solutions should improve future designs of such representations. HyperNEAT has been shown to produce coordinated gaits for a simulated quadruped robot with a specific two-dimensional geometric representation. Here, the same problem domain is tested, but with different geometric representations of the problem. Overall, experiments show that the quality and kind of solutions produced by HyperNEAT can be substantially affected by the geometric representation. HyperNEAT outperforms a direct encoding control even with randomized geometric representations, but performs even better when a human engineer designs a representation that reflects the actual geometry of the robot. Unfortunately, even choices in geometric layout that seem to be inconsequential a priori can significantly affect fitness. Additionally, a geometric representation can bias the type of solutions generated (e.g., make left-right symmetry more common than front-back symmetry). The results suggest that HyperNEAT practitioners can obtain good results even if they do not know how to geometrically represent a problem, and that further improvements are possible with a well-chosen geometric representation.

9:20–9:45     Scalability, Generalization and Coevolution - Experimental Comparisons Applied to Automated Facility Layout Planning

Marcus Furuholmen, Kyrre Harald Glette, Mats Erling Hovin, Jim Torresen

Coevolution, Development, Gene Expression Programming, Facility Layout Problem

Several practical problems in industry are difficult to optimize, both in terms of scalability and representation. Heuristics designed by domain experts are frequently applied to such problems. However, designing optimized heuristics can be a non-trivial task. One such difficult problem is the Facility Layout Problem (FLP) which is concerned with the allocation of activities to space. This paper is concerned with the block layout problem, where the activities require a fixed size and shape (modules). This problem is commonly divided into two sub problems; one of creating an initial feasible layout and one of improving the layout by interchanging the location of activities. We investigate how to extract novel heuristics for the FLP by applying an approach called Cooperative Coevolutionary Gene Expression Programming (CCGEP). By taking advantage of the natural problem decomposition, one species evolves heuristics for pre-scheduling, and another for allocating the activities onto the plant. An experimental, comparative approach investigates various features of the CCGEP approach. The results show that the evolved heuristics converge to suboptimal solutions as the problem size grows. However, coevolution has a positive effect on optimization of single problem instances. Expensive fitness evaluations may be limited by evolving generalized heuristics applicable to unseen fitness cases of arbitrary sizes.

9:45–10:10     Evolution of Cartesian Genetic Programs Capable of Learning

Gul Muhammad Khan, Julian F Miller

Computational Development, Cartesian Genetic Programming, Co-evolution, Artificial Neural Networks, Checkers

We propose a new form of Cartesian Genetic Programming (CGP) that develops into a computational network capable of learning. The developed network architecture is inspired by the brain. When the genetically encoded programs are run, a network develops consisting of neurons, dendrites, axons, and synapses which can grow, change or die. We have tested this approach on the task of learning how to play checkers. The novelty of the research lies mainly in two aspects: firstly, chromosomes are evolved that encode programs rather than the network directly, and when these programs are executed they build networks which appear to be capable of learning and improving their performance over time solely through interaction with the environment. Secondly, we show that we can obtain learning programs much more quickly through co-evolution than through the evolution of agents against a minimax-based checkers program. Also, co-evolved agents show significantly increased learning capabilities compared to those that were evolved to play against a minimax-based opponent.

 

EDA-1: Best Paper Nominees

Room: Verriere B

Session Chair: Peter A.N. Bosman (Centre for Mathematics and Computer Science)                   

8:30–8:55     EDA-RL: Estimation of Distribution Algorithms for Reinforcement Learning Problems

Hisashi Handa

Estimation of Distribution Algorithms, Reinforcement Learning Problems, Conditional Random Fields

By making use of probabilistic models, EDAs can outperform conventional evolutionary computation. In this paper, EDAs are extended to solve reinforcement learning problems, which arise naturally in the framework of autonomous agents. In reinforcement learning problems, we have to find better policies for agents such that their future rewards are increased. In general, such a policy can be represented by conditional probabilities of the agents' actions, given the perceptual inputs. In order to estimate such a conditional probability distribution, Conditional Random Fields (CRFs), introduced by Lafferty et al., are newly incorporated into EDAs in this paper. The reason for adopting CRFs is that CRFs are able to learn conditional probability distributions from a large amount of input-output data, i.e., episodes in the case of reinforcement learning problems. In contrast, conventional reinforcement learning algorithms can only learn incrementally. Computer simulations on Probabilistic Transition Problems and Perceptual Aliasing Maze Problems show the effectiveness of EDA-RL.

8:55–9:20     Difficulty of Linkage Learning in Estimation of Distribution Algorithms

Si-Cheng Chen, Tian-Li Yu

Genetic Algorithms, Linkage Learning, Estimation of Distribution Algorithms, Parity Function

This paper investigates the difficulty of linkage learning, an essential core of EDAs. Specifically, it examines allelic-pairwise independent functions including the parity, parity-with-trap, and Walsh-code functions. While the parity function was believed to be difficult for EDAs in previous work, our experiments indicate that it can be solved by CGA within a number of function evaluations polynomial in the problem size. Consequently, the apparently difficult parity-with-trap function can be easily solved by ECGA, even though the linkage model is incorrect. A convergence model for CGA on the parity function is also derived to verify and support the empirical findings. Finally, this paper proposes a so-called Walsh-code function, which is more difficult than the parity function. Although the proposed function does deceive the linkage-learning mechanism in most EDAs, EDAs are still able to solve it to some extent.
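For reference, a k-bit parity function in the sense used here scores each non-overlapping block of k bits according to whether it contains an even or odd number of ones; a minimal sketch follows, with illustrative block payoffs rather than the paper's exact constants.

    def parity_block(bits, even_value=1.0, odd_value=0.0):
        # Payoff of one block depends only on the parity of its ones;
        # the two payoff values here are illustrative, not the paper's constants.
        return even_value if sum(bits) % 2 == 0 else odd_value

    def parity_function(chromosome, k):
        # Sum of block payoffs over consecutive, non-overlapping k-bit blocks.
        return sum(parity_block(chromosome[i:i + k])
                   for i in range(0, len(chromosome), k))

    print(parity_function([1, 1, 0, 0, 0, 0, 1, 1], k=4))   # 1.0 + 1.0 = 2.0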

9:20–9:45     Approximating the Search Distribution to the Selection Distribution in EDAs

S. Ivvan Valdez-Peña, Arturo Hernández-Aguirre, Salvador Botello-Rionda

Estimation Distribution Algorithms

In an Estimation of Distribution Algorithm (EDA) with an infinitely large population, the selection distribution equals the search distribution. For a finite population these distributions are different. In practical EDAs the goal of the search distribution learning algorithm is to approximate the selection distribution. The source data is the selected set, which is derived from the population by applying a selection operator. The new approach described here eliminates the explicit use of the selection operator and the selected set. We rewrite, for a finite population, the selection distribution equations of four selection operators. The new equation is called the empirical selection distribution. Then we show how to build the search distribution that gives the best approximation to the empirical selection distribution. Our approach gives rise to practical EDAs which can be easily and directly implemented from well-established theoretical results. This paper also shows how common EDAs with discrete and real variables are adapted to take advantage of the empirical selection distribution. A comparison and discussion of performance is presented.

9:45–10:10     Why One Must Use Reweighting in Estimation of Distribution Algorithms

Fabien Teytaud, Olivier Teytaud

Estimation of Distribution Algorithms, Reweighting, premature convergence

We study the update of the distribution in Estimation of Distribution Algorithms, and show that a simple modification leads to unbiased estimates of the optimum. The simple modification (based on a proper reweighting of estimates) leads to strongly improved behavior in the face of premature convergence.

 

 

Paper Presentations and Special Sessions      Saturday 11 July 10:40 – 12:20

 

 

GP-7: Operators and Bloat

Room: Cartier A

Session Chair: Mengjie Zhang (Victoria University of Wellington)                            

10:40–11:05     Operator Equalisation, Bloat and Overfitting

Sara Silva, Leonardo Vanneschi

Genetic Programming, Bloat, Overfitting, Operator Equalisation, Real-World Application, Human Oral Bioavailability, Regression

Operator equalisation was recently proposed as a new bloat control technique for genetic programming. By controlling the distribution of program lengths inside the population, it can bias the search towards smaller or larger programs. In this paper we propose a new implementation of operator equalisation and compare it to a previous version, using a hard real-world regression problem where bloat and overfitting are major issues. The results show that both implementations of operator equalisation are completely bloat-free, producing smaller individuals than standard genetic programming, without compromising the generalization ability. We also show that the new implementation of operator equalisation is more efficient and exhibits a more predictable and reliable behavior than the previous version. We advance some arguable ideas regarding the relationship between bloat and overfitting, and support them with our results.

11:05–11:30     Approximating Geometric Crossover in Semantic Space

Krzysztof Krawiec, Pawel Lichocki

Geometric Crossover, Global Convexity, Fitness-distance Correlation, Genetic Programming, Program Semantics

We propose a crossover operator that works with genetic programming trees and is approximately a geometric crossover in the semantic space. By defining semantics as a program's evaluation profile with respect to a set of fitness cases, and constraining ourselves to a specific class of metric-based fitness functions, we cause the fitness landscape in the semantic space to have perfect fitness-distance correlation. The proposed approximately geometric semantic crossover exploits this property of the semantic fitness landscape by an appropriate sampling. We also demonstrate how the proposed method may be conveniently combined with hill climbing. We discuss the properties of the methods, and describe an extensive computational experiment concerning logical function synthesis and symbolic regression.
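The underlying notion of semantics (a program's output vector over the fitness cases) and the distance it induces can be sketched as follows; the toy programs and the Euclidean metric are illustrative assumptions, not the paper's experimental setup.

    import math

    def semantics(program, fitness_cases):
        # A program's semantics: its outputs over the fixed set of fitness cases.
        return [program(x) for x in fitness_cases]

    def semantic_distance(p, q, fitness_cases):
        sp, sq = semantics(p, fitness_cases), semantics(q, fitness_cases)
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(sp, sq)))

    cases = [0.0, 0.5, 1.0, 2.0]
    parent1 = lambda x: x * x          # toy candidate programs
    parent2 = lambda x: 2 * x + 1

    # A geometric offspring would lie "between" the parents in this semantic space,
    # i.e. its distances to the two parents would sum to the parents' mutual distance.
    print(semantic_distance(parent1, parent2, cases))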

11:30–11:55     Using Crossover Based Similarity Measure to Improve Genetic Programming Generalization Ability

Leonardo Vanneschi, Steven Gustafson

Genetic Programming, Generalization, Crossover Based Similarity/Dissimilarity Measure

Generalization is a very important issue in Machine Learning. In this paper, we present a new idea for improving Genetic Programming generalization ability. The idea is based on a dynamic two-layered selection algorithm and it is tested on a real-life drug discovery regression application. The algorithm begins using root mean squared error as fitness and the usual tournament selection. A list of individuals called "repulsors" is also kept in memory and initialized as empty. As an individual is found to overfit the training set, it is inserted into the list of repulsors. When the list of repulsors is not empty, selection becomes a two-layer algorithm: individuals participating in the tournament are not randomly chosen from the population but are themselves selected, using the average dissimilarity to the repulsors as a criterion to be maximized. Two kinds of similarity/dissimilarity measures are tested for this aim: the well-known structural (or edit) distance and the recently defined subtree crossover based similarity measure. Although simple, this idea seems to improve Genetic Programming generalization ability, and the presented experimental results show that Genetic Programming generalizes better when the subtree crossover based similarity measure is used, at least for the test problems studied in this paper.
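As an illustration of the two-layer selection idea (not the authors' exact procedure), the sketch below pre-selects tournament candidates by maximising average dissimilarity to the stored repulsors; the dissimilarity function, pool size and toy individuals are placeholders.

    import random

    def two_layer_tournament(population, fitness, repulsors, dissimilarity,
                             pool_size=7, tournament_size=3):
        # Layer 1: prefer candidates far (on average) from known overfitting repulsors.
        # Layer 2: ordinary tournament on (error-style, minimised) fitness.
        if not repulsors:
            candidates = random.sample(population, tournament_size)
        else:
            pool = random.sample(population, min(pool_size, len(population)))
            pool.sort(key=lambda ind: sum(dissimilarity(ind, r) for r in repulsors)
                                       / len(repulsors),
                      reverse=True)
            candidates = pool[:tournament_size]
        return min(candidates, key=fitness)

    # Toy usage: real-valued "individuals", absolute difference as dissimilarity.
    pop = [random.uniform(-10, 10) for _ in range(50)]
    winner = two_layer_tournament(pop, fitness=lambda x: abs(x - 3.0),
                                  repulsors=[9.5, -8.0],
                                  dissimilarity=lambda a, b: abs(a - b))
    print(winner)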

 

 

GA-6: Coevolution

Room: Cartier B

Session Chair: Erik Goodman (Michigan State University)                              

10:40–11:05     Overlapped Community Detection in Complex Networks

Clara Pizzuti

Genetic algorithms, data mining, clustering, complex networks

Extracting and understanding community structure in complex networks is one of the most intensively investigated problems in recent years. In this paper we propose a genetic based approach to discover overlapping communities. The algorithm optimizes a fitness function able to identify densely connected groups of nodes by employing it on the line graph corresponding to the graph modelling the network. The method generates a division of the network in a number of groups in an unsupervised way. This number is automatically determined by the optimal value of the fitness function. Experiments on synthetic and real life networks show the capability of the method to successfully detect the network structure.

11:05–11:30     Bayesian Network Structure Learning using Cooperative Coevolution

Olivier Barriere, Evelyne Lutton, Pierre-Henri Wuillemin

Cooperative Coevolution, Bayesian Network Structure learning, Independence Model, Parisian Evolution

We propose a cooperative coevolution -- Parisian trend -- algorithm, IMPEA (Independence Model based Parisian EA), for the problem of Bayesian network structure estimation. It is based on an intermediate stage which consists of evaluating an independence model of the data to be modelled. The Parisian cooperative coevolution is particularly well suited to the structure of this intermediate problem, and allows an independence model to be represented with the help of a whole population, each individual being an independence statement, i.e. a component of the independence model. Once an independence model is estimated, a Bayesian network can be built. This two-level resolution of the complex problem of Bayesian network structure estimation has the major advantage of avoiding the difficult problem of directed acyclic graph representation within an evolutionary algorithm, which causes many troubles related to constraint handling and slows algorithms down. Comparative results with a deterministic algorithm, PC, on two test cases (including the Insurance BN benchmark), prove the efficiency of IMPEA, which provides better results than PC in a comparable computation time, and which is able to tackle more complex issues than PC.

11:30–11:55     Pareto Cooperative Coevolutionary Genetic Algorithm Using Reference Sharing Collaboration

Min Shi, Haifeng Wu

Genetic algorithm, Cooperative coevolution, Pareto dominance, Function optimization, Epistasis

Epistasis has been a well-known hard problem in optimization by evolution, especially by cooperative coevolution. Standard cooperative coevolution usually performs worse than standard evolution on optimization problems with epistasis. In this work, we propose a much improved version of the cooperative coevolutionary model that uses reference sharing collaboration. Pareto dominance is used for measuring the performance of individuals in our algorithm. We evaluate and compare our method with standard evolution and cooperative coevolution on a suite of test problems with and without epistatic interaction. Our experimental results show that the proposed algorithm outperforms the compared methods in most of the cases and, in particular, is superior to standard evolution in handling epistasis.

11:55–12:20     Cheating for Problem Solving: A Genetic Algorithm with Social Interactions

Rafael Lahoz-Beltra, Gabriela Ochoa, Uwe Aickelin

genetic algorithms, social interaction, game theory, knapsack problems

We propose a variation of the standard genetic algorithm that incorporates social interaction between the individuals in the population. Our goal is to understand the evolutionary role of social systems and its possible application as a non-genetic new step in evolutionary algorithms. In biological populations, i.e. animals, even human beings and microorganisms, social interactions often affect the fitness of individuals. It is conceivable that the perturbation of the fitness via social interactions is an evolutionary strategy to avoid becoming trapped in local optima, thus avoiding a fast convergence of the population. We model the social interactions according to Game Theory. The population is, therefore, composed of cooperator and defector individuals whose interactions produce payoffs according to well-known game models (the prisoner's dilemma, the chicken game, and others). Our results on knapsack problems show, for some game models, a significant performance improvement as compared to a standard genetic algorithm.

 

BIO-2: Bioinformatics and Computational Biology

Room: Vitre

Session Chair: Martin Middendorf (University of Leipzig)                                 

10:40–11:05     A Memetic Algorithm for Gene Selection and Molecular Classification of Cancer

Béatrice Duval, Jin-Kao Hao, Jose Crispin Hernandez Hernandez

memetic algorithm, specialized crossover, local search, classification, gene selection

Choosing a small subset of genes that enables a good classification of diseases on the basis of microarray data is a difficult optimization problem. This paper presents a memetic algorithm, called MAGS, to deal with gene selection for supervised classification of microarray data. MAGS is based on an embedded approach for attribute selection where a classifier tightly interacts with the selection process. The strength of MAGS relies on the synergy created by combining a problem specific crossover operator and a dedicated local search procedure, both being guided by relevant information from a SVM classifier. Computational experiments on 8 well-known microarray datasets show that our memetic algorithm is very competitive compared with some recently published studies.

11:05–11:30     Enhancing Search Space Diversity in Multi-Objective Evolutionary Drug Molecule Design using Niching

Johannes W. Kruisselbrink, Alexander Aleman, Michael T.M. Emmerich, Ad P. IJzerman, Andreas Bender, Thomas Baeck, Eelke van der Horst

Evolutionary algorithms, Molecules, Drug design

There exist several applications of multi-objective evolutionary algorithms for drug design; however, a common drawback in recent approaches is that the diversity of the resulting molecule populations is relatively low. This paper seeks to overcome this problem by introducing niching as a technique to enhance search space diversity. A single population approach with dynamic niche identification is studied in the application domain. In order to apply niching in molecular spaces, a metric for measuring the dissimilarity of molecules is introduced. The approach is validated in case studies and compared with the results of an NSGA-II algorithm without niching in the search space.

11:30–11:55     Swarming Along the Evolutionary Branches Sheds Light on Genome Rearrangement Scenarios

Nikolay Vyahhi, Adrien Goëffon, Macha Nikolski, David James Sherman

Genome rearrangement, Ant Colony Optimization

A genome rearrangement scenario describes a series of chromosome fusion, fission, and translocation operations that suffice to rewrite one genome into another. Exact algorithmic methods for this important problem focus on providing one solution, while the set of distance-wise equivalent scenarios is very large. Moreover, no criteria for filtering biologically plausible scenarios have been proposed so far. We present an original metaheuristic method that uses Ant Colony Optimization to randomly explore the space of optimal and suboptimal rearrangement scenarios. It improves on the state of the art both by permitting large-scale enumeration of optimal scenarios, and by labeling each with metrics that can be used for post-processing filtering based on biological constraints.

11:55–12:20     Evolutionary Hypernetwork Classifiers for Protein-Protein Interaction Sentence Filtering

Jakramate Bootkrajang, Sun Kim, Byoung-Tak Zhang

Hypernetwork classifier, Evolutionary learning, Protein-protein interaction sentence filtering

Protein-Protein Interaction (PPI) extraction, among ongoing biomedical text mining challenges, is becoming a topic in focus because of its crucial role in providing a starting point to understand biological processes. Machine learning (ML) techniques have been applied to extract the PPI information from biomedical literature. Although they have provided reasonable performance so far, more features are required for real use. In particular, many ML-approaches lack human understandability for learned models. Here, we propose a novel method for classifying PPI sentences. Our approach utilizes the modified hypernetwork model, a hypergraph with weighted hyperedges that are calibrated via an evolutionary learning method. The evolutionary hypernetwork memorizes fragments of training patterns while self-adjusting its own structure for detecting PPI sentences. For experiments, we show that our approach provides competitive performance compared to other ML methods. Apart from its superior classification performance, the evolving hypernetwork model comes with a highly interpretable structure. We show how significant PPI patterns can be naturally extracted from the learned model. We also analyze the discovered patterns.

 

Evolutionary Computation in Practice-4

Room: Bonsecours

Session Chair: To be announced                                                                       

10:40–12:20

 

PES-1: Best Paper Nominees

Room: Victoria

Session Chair: Enrique Alba (University of Málaga)                                         

10:40–11:05     Overcoming Partitioning in Large Ad Hoc Networks Using Genetic Algorithms

Grégoire Danoy, Bernabé Dorronsoro, Pascal Bouvry

Cooperative coevolutionary GAs, cellular GAs, topology control

We deal in this paper with the important problem of partitioning in ad hoc networks. In our approach, we assume that some devices might have communication interfaces other than Wi-Fi and/or Bluetooth that allow them to connect to remote devices (e.g., technologies such as GPRS or HSDPA). This would allow us to build hybrid networks that overcome the network partitioning. Hence, the problem considered in this work is to establish remote links between devices (called bypass links) in order to maximize the QoS of the network by optimizing its properties to make it small world. Additionally, the number of this kind of links in the network should be minimized as well, since we consider that not all the devices have these communication capabilities, or it could be a requirement to minimize the use of the long-range network (for example, in case its use incurs some cost). We face the problem with four different GAs (both parallel and sequential) and compare their behaviors on six different network instances. All the algorithms were tested with a new encoding of the problem, which is demonstrated to provide more accurate results than the previously existing one.

11:05–11:30     Strategies to Minimise the Total Run Time of Cyclic Graph Based Genetic Programming with GPUs

Tony E Lewis, George D Magoulas

Genetic Programming, Cyclic, Cartesian Genetic Programming, Graph Based, Graphics Card, Graphics Processing Unit, CUDA

In this paper, we describe our work to investigate how much cyclic graph based Genetic Programming (GP) can be accelerated on one machine using currently available mid-range Graphics Processing Units (GPUs). Cyclic graphs pose different problems for evaluation than do trees and we describe how our CUDA based, "population parallel" evaluator tackles these problems. Previous similar work has focused on the evaluation alone. Unfortunately large reductions in the evaluation time do not necessarily translate to similar reductions in the total run time because the time spent on other tasks becomes more significant. We show that this problem can be tackled by having the GPU execute in parallel with the Central Processing Unit (CPU) and with memory transfers. We also demonstrate that it is possible to use a second graphics card to further improve the acceleration of one machine. These additional techniques are able to reduce the total run time of the GPU system by up to 2.83 times. The combined architecture completes a full cyclic GP run 434.61 times faster than the single-core CPU equivalent. This involves evaluating at an average rate of 3.85 billion GP operations per second over the course of the whole run.

11:30–11:55     Distributed Hyper-Heuristics for Real Parameter Optimization

Marco Biazzini, Balazs Banhelyi, Alberto Montresor, Mark Jelasity

hyper-heuristics, differential evolution, distributed computing

Hyper-heuristics (HHs) are heuristics that work with an arbitrary set of search operators or algorithms and combine these algorithms adaptively to achieve a better performance than any of the original heuristics. While HHs lend themselves naturally for distributed deployment, relatively little attention has been paid so far on the design and evaluation of distributed HHs. To our knowledge, our work is the first to present a detailed evaluation and comparison of distributed HHs for real parameter optimization in an island model. Our set of test functions includes well-known benchmark functions and two realistic space-probe trajectory optimization problems. The set of algorithms available to the HHs include several variants of differential evolution, and uniform random search. Our main conclusion is that some of the simplest HHs are surprisingly successful in a distributed environment, and the best HHs we tested provide a robust and stable good performance over a wide range of scenarios and parameters.

 

ACO-3: Ant Colony Optimization

Room: Versailles

Session Chair: Ana Lucia C. Bazzan (Universidade Federal do Rio Grande do Sul, Porto Alegre RS - Brasil)                                                                     

10:40–11:05     The Bee Colony-inspired Algorithm (BCiA) - A Two-Stage Approach for Solving the Vehicle Routing Problem With Time Windows

Sascha Häckel, Patrick Dippold

Swarm Intelligence, Multi-objective optimization, Vehicle Routing Problem, Combinatorial Optimization, Bee Colony

The paper presents a new optimization algorithm, which adapts the behavior of honey bees during their search for nectar. In addition to the established ant algorithms, bee-inspired algorithms represent a relatively young class of solution procedures, whose applicability to the solution of complex optimization problems has already been shown. The two-stage approach presented here belongs to the class of metaheuristics that control a construction heuristic, and has been applied successfully to the NP-hard Vehicle Routing Problem with Time Windows (VRPTW). Within the paper, evaluation results are presented which compare the developed algorithm to some of the most successful procedures for the solution of benchmark problems. The pursued approach gives the best results so far for a metaheuristic controlling a construction heuristic.

11:05–11:30     Self-Synchronized Duty-Cycling in Sensor Networks with Energy Harvesting Capabilities: The Static Network Case

Hugo Hernández, Christian Blum

Sensor networks, Swarm Intelligence, Duty-cycling

Biological studies have shown that some species of ants rest quite large fractions of their time. Interestingly, not only single ants show this behaviour, but whole ant colonies exhibit synchronized activity phases resulting from self-organization. Inspired by this behaviour, we previously introduced an adaptive and self-synchronized duty-cycling mechanism for mobile sensor networks with energy harvesting capabilities. In this paper, we focus on the study of this mechanism in the context of static sensor networks, because most sensor networks deployed in practice are static. We consider various scenarios that result from the combination of different network topologies and sizes. Our results show that our mechanism also works in the case of static sensor networks with energy harvesting capabilities.

11:30–11:55     An Ant Colony Optimization Approach to the Traveling Tournament Problem

David C Uthus, Patricia J Riddle, Hans W Guesgen

Ant colony optimization, traveling tournament problem, constraint processing

The traveling tournament problem has proven to be a difficult problem for the ant colony optimization metaheuristic, with past approaches showing poor results. This is due to the unusual problem structure and feasibility constraints. We present a new ant colony optimization approach to this problem, hybridizing it with a forward checking and conflict-directed backjumping algorithm while using pattern matching and other constraint satisfaction strategies. The approach improves on the performance of past ant colony optimization approaches, finding better quality solutions in shorter time, and exhibits results comparable to other state-of-the-art approaches.

11:55–12:20     An Ant Based Algorithm for Task Allocation in Large-Scale and Dynamic Multiagent Scenarios

Fernando dos Santos, Ana L. C. Bazzan

Artificial Intelligence, Multiagent Systems, Swarm Intelligence

This paper addresses the problem of multiagent task allocation in extreme teams. An extreme team is composed of a large number of agents with overlapping functionality operating in dynamic environments with possible inter-task constraints. We present eXtreme-Ants, an approximate algorithm for task allocation in extreme teams. The algorithm is inspired by the division of labor in social insects and by the process of recruitment for cooperative transport observed in ant colonies. Division of labor offers fast and efficient decision-making, while recruitment ensures the allocation of tasks that require simultaneous execution. We compare eXtreme-Ants with two other algorithms for task allocation in extreme teams and show that it achieves balanced efficiency regarding solution quality, communication, and computational effort.

 

RWA-7: Games, Decision Strategies and the Environment

Room: St. Charles

Session Chair: Pier Luca Lanzi (Politecnico di Milano)                                     

10:40–11:05     Generative Relations for Evolutionary Equilibria Detection

D. Dumitrescu, Rodica Ioana Lung, Tudor Dan Mihoc

Games, Equilibrium, evolutionary detection

A general technique for detecting equilibria in finite non-cooperative games is proposed. The fundamental idea is that every equilibrium is characterized by a binary relation on the game strategies. This relation - called the generative relation - induces an appropriate domination concept. The game equilibrium is described as the set of non-dominated strategies with respect to the generative relation. Slight generalizations of some well-known equilibrium concepts are proposed. A population of strategies is evolved according to a domination-based ranking in order to produce better and better equilibrium approximations. Eventually the process converges towards the game equilibrium. The proposed technique opens a way for a qualitative approach to game equilibria. In order to illustrate the proposed evolutionary technique, different equilibria for different continuous games are studied. Numerical experiments indicate the potential of the proposed concepts and technique.
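The core construction (a binary generative relation whose non-dominated strategy profiles approximate the equilibrium) can be sketched generically; the Nash-ascendancy-style relation below is one standard choice, and the prisoner's dilemma payoffs are only a sanity check, not the games studied in the paper.

    def deviations(x, y, payoffs):
        # Players who would gain by unilaterally switching from profile x to y's strategy.
        gain = 0
        for i in range(len(x)):
            if x[i] != y[i]:
                deviated = list(x)
                deviated[i] = y[i]
                if payoffs(tuple(deviated))[i] > payoffs(x)[i]:
                    gain += 1
        return gain

    def dominates(x, y, payoffs):
        # Generative relation: x is "better" than y if fewer players want to leave x
        # toward y than want to leave y toward x.
        return deviations(x, y, payoffs) < deviations(y, x, payoffs)

    def non_dominated(profiles, payoffs):
        return [p for p in profiles
                if not any(dominates(q, p, payoffs) for q in profiles if q != p)]

    # Two-player prisoner's dilemma (0 = cooperate, 1 = defect) as a sanity check.
    PD = {(0, 0): (3, 3), (0, 1): (0, 5), (1, 0): (5, 0), (1, 1): (1, 1)}
    print(non_dominated(list(PD), lambda profile: PD[profile]))   # [(1, 1)], the Nash equilibrium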

11:05–11:30     A Genetic Algorithm for Analyzing Choice Behavior with Mixed Decision Strategies

Jella Pfeiffer, Dejan Duzevik, Franz Rothlauf, Koichi Yamamoto

decision making, consumer behavior, e-commerce, genetic algorithm, choice task, decision strategy

In the field of decision-making, a fundamental problem is how to uncover people's choice behavior. While choices themselves are often observable, the underlying decision strategies determining these choices are not entirely understood. Previous research defined a number of decision strategies and conjectured that people do not apply only one strategy but switch strategies during the decision process. To the best of our knowledge, empirical evidence for the latter conjecture is missing. This is why we monitored the purchase decisions of 624 consumers shopping online. We study how many of the observed choices can be explained by the existing strategies in their pure form, how many decisions can be explained if we account for switching behavior, and investigate switching behavior in detail. Since accounting for switching leads to a large search space of possible mixed decision strategies, we apply a genetic algorithm to find the set of mixed decision strategies which best explains the observed behavior. The results show that mixed strategies are used more often than pure ones and that a set of four mixed strategies is able to explain 93.9% of choices in a scenario with 4 alternatives and 75.4% of choices in a scenario with 7 alternatives.

11:30–11:55     Simulating Human Grandmasters: Evolution and Coevolution of Evaluation Functions

Omid David-Tabibi, H. Jaap van den Herik, Moshe Koppel, Nathan S. Netanyahu

Computer chess, Fitness evaluation, Games, Genetic Algorithms, Parameter tuning

This paper demonstrates the use of genetic algorithms for evolving a grandmaster-level evaluation function for a chess program. This is achieved by combining supervised and unsupervised learning. In the supervised learning phase the organisms are evolved to mimic the behavior of human grandmasters, and in the unsupervised learning phase these evolved organisms are further improved upon by means of coevolution. While past attempts succeeded in creating a grandmaster-level program by mimicking the behavior of existing computer chess programs, this paper presents the first successful attempt at evolving a state-of-the-art evaluation function by learning only from databases of games played by humans. Our results demonstrate that the evolved program outperforms a two-time World Computer Chess Champion.

11:55–12:20     Genetic Programming Methodology that Synthesizes Vegetation Indices for the Estimation of Soil Cover

Cesar Puente, Gustavo Olague, Stephen V. Smith, Stephen Bullock, Miguel A. Gonzalez, Alejandro Hinojosa

Vegetation index, Genetic programming, Remote sensing, Soil erosion, RUSLE

Remote sensing has become a powerful tool to derive biophysical properties of plants. One of the most popular methods for extracting vegetation information from remote sensing data is through vegetation indices. Models to predict soil erosion like the ``Revised Universal Soil Loss Equation'' (RUSLE) can use vegetation indices as input to measure the effects of soil cover. Several studies correlate vegetation indices with RUSLE's cover factor to get a linear mapping that describes a broad area. The results are considered as incomplete because most indices only detect healthy vegetation. The aim of this study is to devise a genetic programming approach to synthetically create vegetation indices that detect healthy, dry, and dead vegetation. In this work, the problem is posed as a search problem where the objective is to find the best indices that maximize the correlation of field data with Landsat5-TM imagery. Thus, the algorithm builds new indices by iteratively recombining primitive-operators until the best indices are found. This article outlines a GP methodology that was able to design new vegetation indices that are better correlated than traditional man-made indices. Experimental results demonstrate through a real world example using a survey at "Todos Santos" Watershed, that it is viable to design novel indices that achieve a much better performance than common indices such as NDVI, EVI, and SAVI.

 

GBML-6: Artificial Immune Systems

Room: Les Courants

Session Chair: Julie Greensmith (University of Nottingham)                                   

10:40–11:05     Gene Transposon Based Clonal Selection Algorithm for Clustering

Ruochen Liu, zhengchun sheng, Licheng Jiao

Clustering, Clone Selection Algorithm, Gene Transposon, PBM Index

Inspired by the principle of gene transposons proposed by Barbara McClintock, a new immune computing algorithm for clustering multi-class data sets, named the Gene Transposition based Clone Selection Algorithm (GTCSA), is proposed in this paper. The proposed algorithm does not require prior knowledge of the number of clusters; an improved variant of the clonal selection algorithm is used to determine the number of clusters as well as to refine the cluster centers. A novel operator called the antibody transposon is introduced into the framework of the clonal selection algorithm, which makes it possible to find the optimal number of clusters automatically. The proposed method has been extensively compared with Variable-string-length Genetic Algorithm (VGA) based clustering techniques over a test suite of several real-life and synthetic data sets. The results of the experiments indicate the superiority of GTCSA over VGA in terms of stability and convergence rate when clustering multi-class data sets.

11:05–11:30     Integrating Real-Time Analysis With The Dendritic Cell Algorithm Through Segmentation

Feng Gu, Julie Greensmith, Uwe Aickelin

Dendritic Cell Algorithm, Intrusion Detection Systems, Real-Time Analysis, Segmentation

As an immune inspired algorithm, the Dendritic Cell Algorithm (DCA) has been applied to a range of problems, particularly in the area of intrusion detection. Ideally, the intrusion detection should be performed in real-time, in order to continuously detect misuses, as soon as they occur. Consequently, the analysis process performed by an intrusion detection system must operate in real-time or near-to real-time. The analysis process of the DCA is currently performed offline, therefore to improve the algorithm's performance we suggest the development of a real-time analysis component. The initial step of the development is to apply segmentation to the DCA. This involves segmenting the current output of the DCA into slices and performing the analysis in various ways. Two segmentation approaches are introduced and tested in this paper, namely antigen based segmentation (ABS) and time based segmentation (TBS). The results of the corresponding experiments suggest that applying segmentation produces different and significantly better results in some cases, when compared to the standard DCA without segmentation. Therefore, we conclude that the segmentation is applicable to the DCA for the purpose of real-time analysis.

11:3011:55     Geometrical Insights into the Dendritic Cell Algorithm

Robert Oates, Thomas Stibor, Graham Kendall, Jonathan M Garibaldi

Artificial Immune Systems, Dendritic Cell Algorithm, Analysis, Classifiers

This work examines the dendritic cell algorithm (DCA) from a mathematical perspective. By representing the signal processing phase of the algorithm using the dot product, it is shown that the signal processing element of the DCA is actually a collection of linear classifiers. It is further shown that the decision boundaries of these classifiers have the potentially serious drawback of being parallel, severely limiting the range of applications to which the existing algorithm can be applied. These ideas are further explored using artificially generated data and a novel visualisation technique that allows an entire population of dendritic cells to be inspected as a single classifier. The paper concludes that the applicability of the DCA to more complex problems is highly limited.
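
To make the dot-product observation concrete, the following minimal sketch (in Python, with illustrative weights and a simplified per-cell decision rule that are assumptions, not the published DCA weight matrix) shows how each cell reduces to a linear classifier: every cell shares the same weight vector and differs only in its migration threshold, so the resulting decision boundaries are parallel.

    import numpy as np

    # Hypothetical signal weights over (PAMP, danger, safe); illustrative only.
    W_CSM = np.array([2.0, 1.0,  2.0])   # co-stimulatory signal
    W_K   = np.array([2.0, 1.0, -2.0])   # context ("k") signal

    def cell_context(signals, migration_threshold):
        """One dendritic cell: accumulate dot products until the CSM value
        crosses the cell's migration threshold, then report the sign of k."""
        csm = k = 0.0
        for s in signals:                  # s is a (PAMP, danger, safe) vector
            csm += W_CSM @ s
            k += W_K @ s
            if csm >= migration_threshold:
                return 1 if k >= 0 else 0  # 1 = anomalous context
        return 1 if k >= 0 else 0

    # All cells share W_K, so their boundaries W_K . s = const are parallel
    # hyperplanes; only the migration threshold varies across the population.
    signals = np.random.rand(50, 3)
    print([cell_context(signals, t) for t in np.linspace(5.0, 50.0, 10)])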

 

ES/EP-1: Best Paper Nominees

Room: Verriere A

Session Chair: Nikolaus Hansen (Evolution Strategies, Evolutionary Programming)                      

10:4011:05     On the Behaviour of Weighted Multi-Recombination Evolution Strategies Optimising Noisy Cigar Functions

Dirk V. Arnold, Hans-Georg Beyer, Alexander Melkozerov

evolution strategy, weighted recombination, cumulative step length adaptation, cigar function, noise

Cigar functions are convex quadratic functions that are characterised by the presence of only two distinct eigenvalues of their Hessian, the smaller one of which occurs with multiplicity one. Their ridge-like topology makes them a useful test case for optimisation strategies. This paper extends previous work on modelling the behaviour of evolution strategies with isotropically distributed mutations optimising cigar functions by considering weighted recombination as well as the effects of noise on optimisation performance. It is found that the same weights that have previously been seen to be optimal for the sphere and parabolic ridge functions are optimal for cigar functions as well. The influence of the presence of noise on optimisation performance depends qualitatively on the trajectory of the search point, which in turn is determined by the strategy's mutation strength as well as its population size and recombination weights. Analytical results are obtained for the case of cumulative step length adaptation.

11:0511:30     Efficient Natural Evolution Strategies

Yi Sun, Daan Wierstra, Tom Schaul, Juergen Schmidhuber

evolution strategies, natural gradient, optimization

Efficient Natural Evolution Strategies (eNES) is a novel alternative to conventional evolutionary algorithms, using the natural gradient to adapt the mutation distribution. Unlike previous methods based on natural gradients, eNES uses a fast algorithm to calculate the inverse of the exact Fisher information matrix, thus increasing both robustness and performance of its evolution gradient estimation, even in higher dimensions. Additional novel aspects of eNES include optimal fitness baselines and importance mixing (a procedure for updating the population with very few fitness evaluations). The algorithm yields competitive results on both unimodal and multimodal benchmarks.

11:3011:55     Cooperative Micro-Differential Evolution for High-Dimensional Problems

Konstantinos E. Parsopoulos

Differential Evolution, Cooperative Algorithms, Evolutionary Algorithms, Micro-Evolutionary Algorithms

High-dimensional optimization problems appear very often in demanding applications. Although evolutionary algorithms constitute a valuable tool for solving such problems, their standard variants exhibit deteriorating performance as the dimension increases. In such cases, cooperative approaches have proved to be very useful, since they divide the computational burden among a number of cooperating subpopulations. Micro-evolutionary approaches, on the other hand, are light versions of the original evolutionary algorithms that employ very small populations of just a few individuals to address optimization problems. Unfortunately, this property is usually accompanied by limited efficiency and proneness to getting stuck in local minima. In the present work, an approach that combines the basic properties of cooperation and micro-evolutionary algorithms is presented for the Differential Evolution algorithm. The proposed Cooperative Micro-Differential Evolution approach employs small cooperative subpopulations to detect subcomponents of the original problem solution concurrently. The subcomponents are combined through cooperation of the subpopulations to build complete solutions of the problem. The proposed approach is illustrated on high-dimensional instances of five widely used test problems with very promising results. Comparisons with the standard Differential Evolution algorithm are also reported and their statistical significance is analyzed.

11:5512:20     On Strategy Parameter Control by Meta-ES

Hans-Georg Beyer, Martin Dobler, Christian Hämmerle, Philip Masser

Evolution Strategies, Adaptation, Meta-ES, Performance Evaluation, Progress Rate

This paper introduces simple control rules for the mutation strength and the parental population size using the Meta-ES approach. An in-depth analysis is presented on the mutation strength control using the sphere model. A heuristic formula for the outer mutation parameter will be proposed based on the theoretical analysis. Finally, a new evolutionary control strategy for the parental population size is proposed and evaluated empirically.

 

EDA-2: Gaussian EDAs & Model Building and Mining

Room: Verriere B

Session Chair: Kumara Sastry (Intel Corp)                                                       

10:4011:05     Convergence Analysis of UMDAC with Finite Populations: A Case Study on Flat Landscapes

Bo Yuan, Marcus Gallagher

EDAs, UMDAc, Theory, Finite Population

This paper presents some new analytical results on the continuous Univariate Marginal Distribution Algorithm (UMDAc), a well-known Estimation of Distribution Algorithm based on Gaussian distributions. Extending current theoretical work built on the assumption of infinite populations, the convergence behavior of UMDAc with finite populations is formally analyzed. We show both analytically and experimentally that, on flat landscapes, the Gaussian model in UMDAc tends to collapse with high probability, an important fact that was not well understood before.
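
A minimal sketch of the collapse phenomenon described above, assuming plain truncation selection and maximum-likelihood re-estimation of a diagonal Gaussian (parameter values are illustrative, not those used in the paper): on a flat landscape selection amounts to picking a random subset, yet the re-estimated standard deviations still tend to shrink toward zero over the generations.

    import numpy as np

    rng = np.random.default_rng(0)
    n, pop, sel, gens = 5, 50, 25, 200   # dimensions, population, selected, generations
    mu, sigma = np.zeros(n), np.ones(n)

    for g in range(gens):
        X = rng.normal(mu, sigma, size=(pop, n))
        # Flat landscape: all fitnesses are equal, so truncation selection
        # reduces to choosing a random subset of the population.
        chosen = X[rng.choice(pop, sel, replace=False)]
        mu, sigma = chosen.mean(axis=0), chosen.std(axis=0)

    print(sigma)   # typically close to zero, even though no direction is preferred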

11:0511:30     On Empirical Memory Design, Faster Selection of Bayesian Factorizations and Parameter-Free Gaussian EDAs

Peter A.N. Bosman

Evolutionary Algorithms, Estimation of Distribution Algorithms, Numerical Optimization, Memory, Learning

Often, Estimation-of-Distribution Algorithms (EDAs) are praised for their ability to optimize a broad class of problems. EDA applications are, however, still limited. An often-heard criticism is that a large population size is required and that distribution estimation takes long. Here we look at possibilities for improvements in these areas. We first discuss the use of a memory to aggregate information over multiple generations and reduce the population size. The approach we take, empirical risk minimization to perform non-linear regression of memory parameters, may well be generalizable to other EDAs. We design a memory this way for a Gaussian EDA and observe smaller population size requirements and fewer evaluations used. We also speed up the selection of Bayesian factorizations for Gaussian EDAs by sorting the entries in the covariance matrix. Finally, we discuss parameter-free Gaussian EDAs for real-valued single-objective optimization. We propose to not only increase the population size in subsequent runs, but to also divide it over parallel runs across the search space. On some multimodal problems improvements are thereby obtained.

11:3011:55     Correlation Guided Model Building

David Iclănzan, D. Dumitrescu, Béat Hirsbrunner

model-building, correlation analysis, efficiency enhancement

The intrinsic feature of Estimation of Distribution Algorithms lies in their ability to learn and employ probabilistic models over the input spaces. Discovery of the appropriate model usually implies a computationally expensive comprehensive search, where many models are proposed and evaluated in order to find the best value of some model discriminative scoring metric. This paper presents basic results demonstrating how simple variable correlation data can be extended and used to efficiently guide the model search, decreasing the number of model evaluations by several orders of magnitude without significantly affecting model quality. As a case study, the $O(n^3)$ model building of the Extended Compact Genetic Algorithm is successfully replaced by a correlation guided search of linear complexity which infers the perfect problem structures on the test suites.
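
A rough sketch of the general idea (a simplification for illustration, not the authors' procedure): pairwise correlations measured in the selected population can directly propose variable groupings, so far fewer candidate models need to be scored. The threshold and the greedy grouping rule below are assumptions.

    import numpy as np

    def correlation_guided_groups(selected, threshold=0.3):
        """Greedily group variables whose pairwise |correlation| in the
        selected population exceeds a threshold; a stand-in for the guided
        model search sketched in the abstract."""
        corr = np.abs(np.corrcoef(selected.T))   # selected: (pop_size, n) array
        n = selected.shape[1]
        groups, used = [], set()
        for i in range(n):
            if i in used:
                continue
            group = [i] + [j for j in range(i + 1, n)
                           if j not in used and corr[i, j] > threshold]
            used.update(group)
            groups.append(group)
        return groups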

11:5512:20     Mining Probabilistic Models Learned by EDAs in the Optimization of Multi-objective Problems

Roberto Santana, Concha Bielza, Jose A. Lozano, Pedro Larranaga

probabilistic modeling, Estimation of distribution algorithms, problem structure, visualization

One of the uses of the probabilistic models learned by estimation of distribution algorithms is to reveal previously unknown information about the problem structure. In this paper we investigate the mapping between the problem structure and the dependencies captured in the probabilistic models learned by EDAs for a set of multi-objective satisfiability problems. We present and discuss the application of different data mining and visualization techniques for processing and visualizing relevant information from the structure of the learned probabilistic models. We show that, also in the case of multi-objective optimization problems, some features of the original problem structure can be translated to the probabilistic models and unveiled by using algorithms that mine the model structures.

 

Paper Presentations and Special Sessions      Saturday 11 July 14:00 – 15:40

 

 

GP-8: Applications

Room: Cartier A

Session Chair: Pierre Collet (Université de Strasbourg)                                    

14:0014:25     Evolutionary Learning of Local Descriptor Operators for Object Recognition

Cynthia B. Perez, Gustavo Olague

SIFT, Local descriptors, Object recognition, Matching

Nowadays, object recognition is widely studied under the paradigm of matching local features.

This work describes a genetic programming methodology that synthesizes mathematical expressions that are used to improve a well-known local descriptor algorithm. It follows the idea that object recognition in the cerebral cortex of primates makes use of features of intermediate complexity that are largely invariant to changes in scale, location, and illumination. These local features have previously been designed by human experts using traditional representations that have a clear and, preferably, mathematically well-founded definition. However, it is not clear that these same representations are implemented by the natural system with the same structure. Hence, the possibility of designing novel operators through genetic programming represents an open research avenue, where the combinatorial search of evolutionary algorithms can largely exceed the ability of human experts. This paper provides evidence that genetic programming is able to design new features that enhance the overall performance of the best available local descriptor. Experimental results confirm the validity of the proposed approach using a widely accepted testbed and an object recognition application.

14:2514:50     Genetic Programming based Image Segmentation with Applications to Biomedical Object Detection

Tarundeep Singh, Nawwaf Kharma, Mohmmad Daoud, Rabab Ward

Image Segmentation, Genetic Programming

Image segmentation is an essential process in many image analysis applications and is mainly used for automatic object recognition purposes. In this paper, we define a new genetic programming based image segmentation algorithm (GPIS). It uses a primitive image-operator based approach to produce linear sequences of MATLAB® code for image segmentation. We describe the evolutionary architecture of the approach and present results obtained after testing the algorithm on a biomedical image database for cell segmentation. We also compare our results with another EC-based image segmentation tool called GENIE Pro. We found the results obtained using GPIS to be more accurate than those of GENIE Pro. In addition, our approach is simpler to apply, and the evolved programs are available to anyone with access to MATLAB®.

14:5015:15     Animated Drawings Rendered By Genetic Programming

Perry Barile, Vic Ciesielski, Marsha Berry, Karen Trist

Evolutionary Search, Non-Photorealistic Rendering, Evolved Art, Genetic Art

We describe an approach to generating animations of drawings that start as a random collection of strokes and gradually resolve into a recognizable subject. The strokes are represented as tree-based genetic programs. An animation is generated by rendering the best individual in a generation as a frame of a movie. The resulting animations have an engaging characteristic in which the target slowly emerges from a random set of strokes. We have generated two qualitatively different kinds of animations: ones that use grey-level straight-line strokes and ones that use binary Bezier curve strokes. Around 100,000 generations are needed to generate engaging animations. Population sizes of 2 and 4 give the best convergence behaviour. Convergence can be accelerated by using information from the target in drawing a stroke. Our approach provides a large range of creative opportunities for artists. Artists have control over the choice of target and the various stroke parameters.

 

GDS-2: Generative and Developmental Systems

Room: Cartier B

Session Chair: Kenneth Owen Stanley (University of Central Florida)                    

14:0014:25     Evolution, Development and Learning Using Self-Modifying Cartesian Genetic Programming

Simon Harding, Julian F Miller, Wolfgang Banzhaf

Developmental systems, genetic programming

Self-Modifying Cartesian Genetic Programming (SMCGP) is a form of genetic programming that integrates developmental (self-modifying) features as a genotype-phenotype mapping. This paper asks: Is it possible to evolve a learning algorithm using SMCGP?

14:2514:50     The Challenge of Irrationality: Fractal Protein Recipes for PI

Jean Krohn, Peter J Bentley, Hooman Shayani

Fractal Proteins, Computational Development, evolutionary computation, PI, irrational numbers, pattern

Computational development traditionally focuses on the use of an iterative, generative mapping process from genotype to phenotype in order to obtain complex phenotypes which comprise regularity, repetition and module reuse. This work examines whether an evolutionary computational developmental algorithm is capable of producing a phenotype with no known pattern at all: the irrational number PI. The paper summarizes the fractal protein algorithm, provides a new analysis of how fractals are exploited by the developmental process, then presents experiments, results and analysis showing that evolution is capable of producing an approximate algorithm for PI that goes beyond the limits of precision of the data types used.

14:5015:15     Facilitating Evolutionary Innovation by Developmental Modularity and Variability

Rene Doursat

Artificial Embryogeny, Evolutionary Development, Bio-Inspired Engineering, Spatial Computing, Complex Systems, Systems Design, Self-Organization, Modularity, Robotics, Architectures

Natural complex adaptive systems show many examples of self-organization and decentralization, such as pattern formation or swarm intelligence. Yet, only multicellular organisms possess the genuine architectural capabilities needed in many engineering application domains, from nanotechnologies to reconfigurable and swarm robotics. Biological development thus offers an important paradigm for a new breed of "evo-devo" computational systems. This work explores the evolutionary potential of an original multi-agent model of artificial embryogeny through differently parametrized simulations. It represents a rare attempt to integrate both self-organization and regulated architectures. Its aim is to illustrate how a developmental system, based on a truly indirect mapping from a modular genotype to a modular phenotype, can facilitate the generation of variations, and thus structural innovation.

15:1515:40     Evolving Specific Network Statistical Properties using a Gene Regulatory Network Model

Miguel Nicolau, Marc Schoenauer

Regulatory Networks, Small-world Topology, Evolutionary Computation

The generation of network topologies with specific, user-specified statistical properties is addressed using an Evolutionary Algorithm that is seeded by an Artificial Gene Regulatory Network Model. The work presented here extends previous work where the proposed approach was demonstrated to be able to evolve scale-free topologies. The present results reinforce the applicability of the proposed method, showing that the evolution of small-world topologies is also possible, but requires a carefully crafted fitness function.

 

COM-5: Optimizing Algorithm Performance

Room: Vitre

Session Chair: Franz Rothlauf (University of Mainz)                                        

14:0014:25     Evolving Heuristically Difficult Instances of Combinatorial Problems

Bryant A Julstrom

Problem instances, heuristics, evolving difficult instances, quadratic knapsack problem

When evaluating a heuristic for a combinatorial problem, randomly generated instances of the problem may not provide a thorough exploration of the heuristic's performance, and it may not be obvious what kinds of instances challenge or confound the heuristic. An evolutionary algorithm can search a space of problem instances for cases that are heuristically difficult. Evaluation in such an EA requires an exact algorithm for the problem, which limits the sizes of the instances that can be explored, but the EA's (small) results can reveal misleading patterns or structures that can be replicated in larger instances. As an example, a genetic algorithm searches for instances of the quadratic knapsack problem that are difficult for a straightforward greedy heuristic. The GA identifies such instances, which in turn reveal patterns that mislead the heuristic.

14:2514:50     New Insights into the OCST Problem: Integrating Node Degrees and their Location in the Graph

Wolfgang Steitz, Franz Rothlauf

optimal communications spanning tree, heuristics, evolutionary algorithm, initialization

This paper considers the Euclidean variant of the optimal communication spanning tree (OCST) problem. Researchers have analyzed the structure of the problem and found that high-quality solutions prefer edges of low cost. Further, edges pointing to the center of the network are more likely to be included in good solutions. We add to the literature and provide additional insights into the structure of the OCST problem by investigating properties of the whole tree, such as node degrees and the Wiener index. The results reveal that optimal solutions are structured in a star-like manner: there are a few nodes with high degree, located next to the graph's center, while the majority of nodes have very low degree. In particular, nodes with degree one are very common and located far away from the center. We exploit these insights to develop a construction heuristic which builds spanning trees with similar properties. Experiments indicate a high solution quality for the OCST problem. In a next step, we seed the initial population of an evolutionary algorithm (EA) with solutions constructed with our method. An experimental study demonstrates the merits of this biased initialization: the algorithm is faster and better than the same algorithm using random starting solutions.

14:5015:15     An Experimental Investigation of Model-Based Parameter Optimisation: SPO and Beyond

Frank Hutter, Holger H Hoos, Kevin Leyton-Brown, Kevin P Murphy

Parameter Tuning, Noisy Optimization, Sequential Experimental Design, Gaussian Processes, Active Learning

This work experimentally investigates model-based approaches for optimising the performance of parameterised randomised algorithms. We restrict our attention to procedures based on Gaussian process models, the most widely-studied family of models for this problem. We evaluated two approaches from the literature, and found that sequential parameter optimisation (SPO) [4] offered the most robust performance. We then investigated key design decisions within the SPO paradigm, characterising the performance consequences of each. Based on these findings, we propose a new version of SPO, dubbed SPO+, which extends SPO with a novel intensification procedure and log-transformed response values. Finally, in a domain for which performance results for other (model-free) parameter optimisation approaches are available, we demonstrate that SPO+ achieves state-of-the-art performance.

 

Evolutionary Computation in Practice-5

Room: Bonsecours

Session Chair: To be announced                                                                       

14:0015:40    

 

PES-3: Models

Room: Victoria

Session Chair: Enrique Alba (University of Málaga)                                         

14:0014:25     Data-Intensive Computing for Competent Genetic Algorithms: A Pilot Study using Meandre

Xavier Llorà

Genetic algorithms, estimation of distribution algorithms, data-intensive computing, parallel computing

Data-intensive computing has positioned itself as a valuable programming paradigm to efficiently approach problems requiring processing very large volumes of data. This paper presents a pilot study about how to apply the data-intensive computing paradigm to evolutionary computation algorithms. Two representative cases (selectorecombinative genetic algorithms and estimation of distribution algorithms) are presented, analyzed, and discussed. This study shows that equivalent data-intensive computing evolutionary computation algorithms can be easily developed, providing robust and scalable algorithms for the multicore-computing era. Experimental results show how such algorithms scale with the number of available cores without further modification.

14:2514:50     A Memetic Algorithm and a Parallel Hyperheuristic Island-based Model for a 2D Packing Problem

Coromoto Leon, Gara Miranda, Carlos Segura

Memetic Algorithms, Island-Based Models, Hyperheuristics, Cutting and Packing Problems

This work presents several approaches used to deal with the 2D packing problem proposed in the GECCO 2008 contest session. A memetic algorithm, together with specifically designed local search and variation operators, is presented. A novel parallel model was used to parallelize the approach. The model is a hybrid algorithm which combines a parallel island-based scheme with a hyperheuristic approach. An adaptive behavior is added to the island-based model by applying the hyperheuristic procedure. The main operation of the island-based model is kept, but the configurations of the memetic algorithms executed on each island are dynamically mapped. The model grants more computational resources to those configurations that show a more promising behavior. For this purpose a specific criterion was designed in order to select the configurations with better success expectations. Computational results obtained for the contest problem demonstrate the validity of the proposed model. The best reported solutions for the contest problem instance have been achieved using the approaches presented here.

14:5015:15     An Island Model for High-Dimensional Genomes using Phylogenetic Speciation and Species Barcoding

Paul Grouchy, Jekanthan Thangavelautham, Gabriele M.T. D'Eleuterio

speciation, island model, species barcoding, phylogenetic species concept, parallel evolutionary algorithms, genetic algorithms

A new speciation method for parallel evolutionary computation is presented, designed specifically to handle high-dimensional data. Taking inspiration from the natural sciences, the Phylogenetic Relations Island Speciation Model (PRISM) uses common ancestry and a novel species barcoding system to detect new species and move them to separate islands. Simulation experiments were performed on Multidimensional Knapsack Problems with different fitness landscapes requiring 100-dimensional genomes. PRISM's performance with various parameter settings and on the various landscapes is analyzed and preliminary results show that PRISM can consistently produce optimal or near-optimal solutions, outperforming the standard Genetic Algorithm and Island Model in all the performed experiments.

15:1515:40     Genotypic Differences and Migration Policies in an Island Model

Lourdes Araujo, Juan Julian Merelo, Antonio Mora, Carlos Cotta

Genetic algorithms, Parallelism, Island model, Migration policy, Diversity

In this paper we compare different policies for selecting the individuals to migrate in an island model. Our thesis is that choosing individuals in a way that exploits genotypic differences between populations can enhance diversity and improve system performance. This has led us to propose a family of policies that we call multikulti, in which nodes exchange individuals that are different "enough" from each other. In this paper we present a policy according to which the receiver node chooses the most different individual from the sample received from the sending node. This sample is built randomly, but only from individuals with a fitness above a threshold previously established by the receiving node. We have tested our system on two problems of different degrees of difficulty, previously used in the evaluation of parallel systems. The multikulti policy presented herein proves more robust than other usual migration policies, such as sending the best or a random individual.
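
A minimal sketch of such a migration policy (binary genotypes assumed; "most different" is taken here to be the largest mean Hamming distance to the receiver's population, which is one plausible reading of the abstract rather than the authors' exact definition):

    import random

    def emigrant_sample(population, fitnesses, threshold, k):
        """Sender: random sample of k individuals whose fitness is above the
        threshold set by the receiving node (maximisation assumed)."""
        eligible = [ind for ind, f in zip(population, fitnesses) if f >= threshold]
        return random.sample(eligible, min(k, len(eligible)))

    def choose_immigrant(sample, receiver_population):
        """Receiver: accept the sampled individual most different from its
        own population (mean Hamming distance used as the difference measure)."""
        def mean_hamming(ind):
            return sum(sum(a != b for a, b in zip(ind, other))
                       for other in receiver_population) / len(receiver_population)
        return max(sample, key=mean_hamming)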

 

ACO-1: Best Paper Nominees

Room: Versailles

Session Chair: Christian Blum (Universitat Politècnica de Catalunya)                     

14:0014:25     Parallel Shared Memory Strategies for Ant-Based Optimization Algorithms

Thang N Bui, ThanhVu Nguyen, Joseph R Rizzo Jr.

Ant-Based Algorithms, Distributed Memory, Shared Memory, OpenMP, MPI, Max Clique

This paper describes a general scheme to convert sequential ant-based algorithms into parallel shared memory algorithms. The scheme is applied to an ant-based algorithm for the maximum clique problem. Extensive experimental results indicate that the parallel version provides noticeable improvements to the running time while maintaining comparable solution quality to that of the sequential version.

14:2514:50     Particle Swarm Optimization Based Multi-Prototype Ensembles

Ammar Mohemmed, Mark Johnston, Mengjie Zhang

Particle Swarm Optimization, Ensemble, Classification

This paper proposes and evaluates a Particle Swarm Optimization (PSO) based ensemble classifier. The members of the ensemble are Nearest Prototype Classifiers generated sequentially using PSO and combined by a majority voting mechanism. Two necessary requirements for good performance of an ensemble are accuracy and diversity of error. Accuracy is achieved by PSO minimizing a fitness function representing the error rate as the members are created. The diversity of error is promoted by using a different initialization of PSO each time to create a new member and by adopting decorrelated training where a penalty term is added to the fitness function to penalize particles that make the same errors as previously generated classifiers. Simulation experiments on different classification problems show that the ensemble has better performance than a single classifier and is effective in generating diverse ensemble members.
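
A small sketch of how such a decorrelated-training fitness could be composed (the penalty form and its weight are assumptions; the abstract only states that a penalty is added for repeating the errors of previously generated members):

    import numpy as np

    def decorrelated_fitness(errors_new, errors_existing, penalty_weight=0.5):
        """Fitness (lower is better) of a candidate ensemble member: its own
        error rate plus a penalty for repeating the errors already made by
        existing members. Error vectors are 0/1 indicators per training case."""
        error_rate = errors_new.mean()
        if not errors_existing:
            return error_rate
        overlap = np.mean([np.mean(errors_new * prev) for prev in errors_existing])
        return error_rate + penalty_weight * overlap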

14:5015:15     An Evaporation Mechanism for Dynamic and Noisy Multimodal Optimization

Jose Luis Fernandez-Marquez, Josep Lluis Arcos

Particle Swarm Optimization, Multimodal dynamic environments, Noisy functions

Dealing with imprecise information is a common characteristic of real-world problems. Specifically, when the sources of information are physical sensors, a level of noise in the evaluation has to be assumed. Particle Swarm Optimization is a technique that has shown good behavior when dealing with noisy fitness functions; nevertheless, the problem is still open. In this paper we propose the use of an evaporation mechanism for dealing with dynamic multi-modal optimization problems that are subject to noisy fitness functions. We show how the evaporation mechanism does not require the detection of environment changes and how it can be used to improve the performance of PSO algorithms working in noisy environments.

 

RWA-8: Privacy & Security

Room: St. Charles

Session Chair: Hoai Nguyen Xuan (SNU, Korea)                                             

14:0014:25     A Multi-Objective Approach to Data Sharing with Privacy Constraints and Preference Based Objectives

Rinku Dewri, Darrell Whitley, Indrajit Ray, Indrakshi Ray

Disclosure control, Anonymization bias, Constraint handling, Multi-objective optimization

Public data sharing is utilized in a number of businesses to facilitate the exchange of information. Privacy constraints are usually enforced to prevent unwanted inference of information, especially when the shared data contain sensitive personal attributes. This, however, has an adverse effect on the utility of the data for statistical studies. Thus, a requirement while modifying the data is to minimize the information loss. Existing methods employ the notion of "minimal distortion", where the data is modified only to the extent necessary to satisfy the privacy constraint, thereby asserting that the information loss has been minimized. However, given the subjective nature of information loss, it is often difficult to justify this assertion. In this paper, we propose an evolutionary algorithm to explicitly minimize an achievement function given constraints on the privacy level of the transformed data. Privacy constraints specified in terms of anonymity models are modeled as additional objectives and an evolutionary multi-objective approach is proposed. We highlight the requirement to minimize any bias induced by the anonymity model and present a scalarization incorporating preferences in information loss and privacy bias as the achievement function.

14:2514:50     A Hybrid GA-PSO Fuzzy System for User Identification on Smart Phones

Muhammad Shahzad, Saira Zahid, Muddassar Farooq

Hybrid GA-PSO-Fuzzy, Authentication, Chromagent

The major contribution of this paper is a hybrid GA-PSO fuzzy user identification system, UGuard, for smart phones. Our system takes three phone-usage features as input to identify a user or an impostor. We show that these phone-usage features are diffused across different users; therefore, we justify the need for a front-end fuzzy classifier for them. We further show that the fuzzy classifier must be optimized using a back-end online dynamic optimizer, which is a hybrid of a Particle Swarm Optimizer (PSO) and a Genetic Algorithm (GA). We have collected phone-usage data from 10 real users with Symbian smart phones over 8 days and evaluate UGuard on this dataset. The results of our experiments show that UGuard provides, on average, an error rate of 2% or less. We also compared our system with four classical classifiers (Naïve Bayes, Back Propagation Neural Networks, J48 Decision Tree, and a Fuzzy System) and three evolutionary schemes (the fuzzy system optimized by ACO, PSO, and GA). To the best of our knowledge, the current work is the first system to achieve such a small error rate. Moreover, the system is simple and efficient; therefore, it can be deployed on real-world smart phones.

14:5015:15     Application of Evolutionary Algorithms in Detection of SIP based Flooding Attacks

M. Ali Akbar, Muddassar Farooq

Denial of Service, Network Security, Session Initiation Protocol, IP Multimedia Subsystem

The Session Initiation Protocol (SIP) is the de facto standard for users' session control in next-generation Voice over Internet Protocol (VoIP) networks based on the IP Multimedia Subsystem (IMS) framework. In this paper, we first analyze the role of SIP-based floods in Denial of Service (DoS) attacks on the IMS. Afterwards, we present an online intrusion detection framework for the detection of such attacks. We analyze the effect of different evolutionary and non-evolutionary classifiers on the classification accuracy of the proposed framework. We have evaluated the performance of our intrusion detection framework on traffic in which SIP floods of varying intensities are injected. The results of our study show that evolutionary classifiers like the sUpervised Classifier System (UCS) and the Genetic clASSIfier sySTem (GAssist) can detect even low-intensity SIP floods in real time. Finally, we formulate a set of specific guidelines that can help VoIP service providers customize our intrusion detection framework by selecting an appropriate classifier depending on their requirements in different service scenarios.

15:1515:40     A Genetic Approach to Statistical Disclosure Control

Jim E Smith, Alistair R Clark, Andrea T Staggemeier

Statistical Disclosure Control

Statistical Disclosure Control is the collective name for a range of tools that data providers such as government departments use to protect the confidentiality of individuals or organizations. When the published tables contain magnitude data such as turnover or health statistics, the preferred method is to suppress the values of certain cells. Assigning a cost to the information lost by suppressing any given cell creates the "Cell Suppression Problem", which consists of finding the minimum-cost solution that meets the confidentiality constraints. Solving this problem simultaneously for all of the sensitive cells in a table is NP-hard and not possible for medium to large sized tables. In this paper, we describe the development of a heuristic tool for this problem which hybridizes Linear Programming (to solve a relaxed version for a single sensitive cell) with a Genetic Algorithm (to seek an order for considering the sensitive cells which minimizes the final cost). Considering a range of real-world and representative artificial datasets, we show that the method is able to provide relatively low-cost solutions for far larger tables than the optimal approach can tackle. We show that our genetic approach is able to significantly improve on the initial solutions provided by existing heuristics for cell ordering, and outperforms local search.

 

EMO-5: Preference Handling

Room: Les Courants

Session Chair: Anne Auger (INRIA)                                                               

14:0014:25     Study of Preference Relations in Many-Objective Optimization

Antonio Lopez Jaimes, Carlos Artemio Coello Coello

Multiobjective optimization, many-objective optimization, preference relations

This paper presents a quantitative analysis of different preference relations proposed to deal with problems with a high number of objectives. Since the relations stress different subsets of the Pareto front, we based the comparison on the Tchebycheff distance of the approximation set to the ``knee'' of the Pareto front. Additionally, the convergence induced by the preference relations is studied by analyzing the generational distance observed at each generation of the search. The results show that some preference relations contribute to converge quickly to the Pareto front, but they promote the generation of solutions far from the knee region. Moreover, even if a preference relation generates solutions near the knee, there exists a trade-off between convergence and the extension of the Pareto front covered.
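
For reference, the comparison measure named above can be written (in a weighted form; the weights actually used in the paper are not restated here) as the Tchebycheff distance of the approximation set $A$ to the knee point $z^{\ast}$:

    $d_{\infty}(A, z^{\ast}) = \min_{a \in A} \; \max_{i = 1, \dots, m} \; \lambda_i \, | a_i - z^{\ast}_i |$

Smaller values indicate that at least one solution of the approximation set lies close to the knee in every objective.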

14:2514:50     Investigating and Exploiting the Bias of the Weighted Hypervolume to Articulate User Preferences

Anne Auger, Johannes Bader, Dimo Brockhoff, Eckart Zitzler

Hypervolume indicator, preference articulation

Optimizing the hypervolume indicator within evolutionary multiobjective optimizers has become popular in the last few years. Recently, the indicator has been generalized to the weighted case to incorporate various user preferences into hypervolume-based search algorithms. There are two main open questions in this context: (i) how does the specified weight influence the distribution of a fixed number of points that maximize the weighted hypervolume indicator? (ii) how can the user articulate her preferences easily without specifying a certain weight distribution function? In this paper, we tackle both questions. First, we theoretically investigate optimal distributions of $\mu$ points that maximize the weighted hypervolume indicator. Second, based on the obtained theoretical results, we propose a new approach to articulate user preferences within biobjective hypervolume-based optimization in terms of specifying a desired density of points on a predefined (imaginary) Pareto front. Within this approach, a new exact algorithm based on dynamic programming is proposed which selects the set of $\mu$ points that maximizes the (weighted) hypervolume indicator. Experiments on various test functions show the usefulness of this new preference articulation approach and the agreement between theory and practice.

 

ES/EP-2: New Algorithms and Applications

Room: Verriere A

Session Chair: Hans-Georg Beyer (FH Vorarlberg)                                          

14:0014:25     A Novel Approach to Adaptive Isolation in Evolution Strategies

Dirk V. Arnold, Anthony S. Castellarin

hierarchically organised evolution strategies, reproductive isolation, step length adaptation

Hierarchically organised evolution strategies have been seen to be able to successfully adapt step lengths where mutative self-adaptation fails. However, the computational costs of such strategies are high due to the need to evolve several subpopulations in isolation, and their performance depends crucially on the length of the isolation periods. This paper proposes a novel approach to adapting the length of the isolation periods that is found to robustly generate good settings across a range of test functions.

14:2514:50     Combining Evolution Strategy and Gradient Descent Method for Discriminative Learning of Bayesian Classifiers

Xuefeng Chen, Xiabi Liu, Yunde Jia

Discriminative learning, Evolution strategy, Gradient descent, Handwritten digit recognition, Max-Min Posterior pseudo-probabilities (MMP)

The optimization method is one of the key issues in discriminative learning of pattern classifiers. This paper proposes a hybrid of the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) and the gradient descent method for optimizing Bayesian classifiers under the SOFT target based Max-Min posterior Pseudo-probabilities (Soft-MMP) learning framework. In our hybrid optimization approach, the weighted mean of the parent population in the CMA-ES is adjusted by exploiting the gradient information of the objective function, based on which the offspring are generated. As a result, the efficiency and the effectiveness of the CMA-ES are improved. We apply Soft-MMP with the proposed hybrid optimization approach to handwritten digit recognition. The experiments on the CENPARMI database show that our handwritten digit classifier outperforms other state-of-the-art techniques. Furthermore, in the experiments our hybrid optimization approach outperformed both the plain gradient descent method and the plain CMA-ES.

14:5015:15     Hybrid Differential Evolution based on Fuzzy C-means Clustering

Wenyin Gong, Zhihua Cai, Charles X. Ling, Jun Du

Differential evolution, fuzzy C-means clustering, global optimization

In this paper, we propose a hybrid Differential Evolution (DE) algorithm based on the fuzzy C-means clustering algorithm, referred to as FCDE. The fuzzy C-means clustering algorithm is incorporated into DE to utilize the information of the population efficiently, and hence it can generate good solutions and enhance the performance of the original DE. In addition, the population-based algorithm generator is adopted to efficiently update the population with the clustering offspring. In order to test the performance of our approach, 13 high-dimensional benchmark functions of diverse complexities are employed. The results show that our approach is effective and efficient. Compared with other state-of-the-art DE approaches, our approach performs better, or at least comparably, in terms of the quality of the final solutions and the reduction of the number of fitness function evaluations (NFFEs).

15:1515:40     Tone Mapping by Interactive Evolution

Stephen B. Chisholm, Dirk V. Arnold, Stephen Brooks

tone mapping, interactive evolution

Tone mapping is a computational task of significance in the context of displaying high dynamic range images on low dynamic range devices. While a number of tone mapping algorithms have been proposed and are in common use, there is no single operator that yields optimal results under all conditions. Moreover, obtaining satisfactory mappings often requires the manual tweaking of parameters. This paper proposes interactive evolution as a computational tool for tone mapping. An evolution strategy that blends the results from several tone mapping operators while at the same time adapting their parameters is found to yield promising results with little effort required of the user.

 

EDA-3: Efficiency Enhancements & EDAs for Classifier Systems

Room: Verriere B

Session Chair: Martin Pelikan (University of Missouri in St. Louis)                          

14:0014:25     Effects of a Deterministic Hill Climber on hBOA

Elizabeth Radetic, Martin Pelikan, David E. Goldberg

Hierarchical BOA, local search, spin glass, trap-5, MAXSAT, hybrid evolutionary algorithms, estimation of distribution algorithms, efficiency enhancement

Hybridization of global and local search algorithms is a well-established technique for enhancing the efficiency of search algorithms. Hybridizing estimation of distribution algorithms (EDAs) has been repeatedly shown to produce better performance than either the global or local search algorithm alone. The hierarchical Bayesian optimization algorithm (hBOA) is an advanced EDA which has previously been shown to benefit from hybridization with a local searcher. This paper examines the effects of combining hBOA with a deterministic hill climber (DHC). Experiments reveal that allowing DHC to find the local optima makes model building and decision making much easier for hBOA. This reduces the minimum population size required to find the global optimum, which substantially improves overall performance.
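
A generic sketch of a deterministic hill climber of the kind referred to above (best-improvement single-bit flips applied until a local optimum is reached; a plain illustration, not the hBOA-specific implementation):

    def deterministic_hill_climber(x, fitness):
        """Repeatedly apply the best improving single-bit flip to the binary
        solution x until no flip improves the fitness (maximisation assumed)."""
        x = list(x)
        improved = True
        while improved:
            improved = False
            best_f, best_i = fitness(x), None
            for i in range(len(x)):
                x[i] ^= 1                  # try flipping bit i
                f = fitness(x)
                if f > best_f:
                    best_f, best_i = f, i
                x[i] ^= 1                  # undo the flip
            if best_i is not None:
                x[best_i] ^= 1             # commit the best improving flip
                improved = True
        return x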

14:2514:50     Initial-Population Bias in the Univariate Estimation of Distribution Algorithm

Martin Pelikan, Kumara Sastry

estimation of distribution algorithms, EDAs, univariate marginal distribution algorithm, UMDA, population bias, time to convergence, onemax, noisy onemax, scalability, population size

This paper analyzes the effects of an initial-population bias on the performance of the univariate marginal distribution algorithm (UMDA). The analysis considers two test problems: (1)~onemax and (2)~noisy onemax. Theoretical models are provided and verified with experiments. Intuitively, biasing the initial population toward the global optimum should improve performance of UMDA, whereas biasing the initial population away from the global optimum should have the opposite effect. Both theoretical and experimental results confirm this intuition. Effects of mutation on performance of UMDA with initial-population bias are also investigated.
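
A minimal sketch of how such a bias can be injected for onemax (the bias parameterization is an assumption; the exact scheme analyzed in the paper may differ):

    import numpy as np

    def biased_initial_population(n_bits, pop_size, bias, rng=None):
        """Sample an initial binary population where each bit is 1 with
        probability `bias`: 0.5 is unbiased, values above 0.5 lean toward the
        onemax optimum (all ones), values below 0.5 lean away from it."""
        rng = rng if rng is not None else np.random.default_rng()
        return (rng.random((pop_size, n_bits)) < bias).astype(int)

    # UMDA then estimates univariate marginals from the selected individuals
    # and samples the next generation from them; with bias > 0.5 the marginals
    # start closer to 1, so faster convergence on onemax is expected.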

14:5015:15     Intelligent Bias of Network Structures in the Hierarchical BOA

Mark W Hauschild, Martin Pelikan

bayesian optimization algorithm, hierarchical BOA, efficiency enhancement, learning from experience, estimation of distribution algorithms, model complexity

One of the primary advantages of estimation of distribution algorithms (EDAs) over many other stochastic optimization techniques is that they supply us with a roadmap of how they solve a problem. This roadmap consists of a sequence of probabilistic models of candidate solutions of increasing quality. The first model in this sequence would typically encode the uniform distribution over all admissible solutions whereas the last model would encode a distribution that generates at least one global optimum with high probability. It has been argued that exploiting this knowledge should improve EDA performance when solving similar problems. This paper presents an approach to bias the building of Bayesian network models in the hierarchical Bayesian optimization algorithm (hBOA) using information gathered from models generated during previous hBOA runs on similar problems. The approach is evaluated on trap-5 and 2D spin glass problems.

15:1515:40     Evaluation of Population Partitioning Schemes in Bayesian Classifier EDAs

David Wallin, Conor Ryan

EDA, Estimation of Distribution, Evolutionary Computation, Probabilistic Model, Probabilistic Model-Building, Population Partitioning

Several algorithms within the field of Evolutionary Computation have been proposed that effectively turn optimisation problems into supervised learning tasks. Typically such hybrid algorithms partition their populations into three subsets, high performing, low performing and mediocre, where the subset containing mediocre candidates is discarded from the phase of model construction. In this paper we empirically compare this traditional partitioning scheme against two alternative schemes on a range of difficult problems from the literature. The experiments show that at small population sizes, using the whole population is often a better approach than the traditional partitioning scheme, but partitioning around the midpoint and ignoring candidates at the extremes is often even better.
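
The three schemes compared can be pictured roughly as follows (a speculative sketch; the fraction and exact boundaries are assumptions, and the paper's definitions may differ):

    import numpy as np

    def partition(fitnesses, scheme, frac=1/3):
        """Return index sets (positive, negative) used to train the classifier
        under three population partitioning schemes."""
        order = np.argsort(fitnesses)[::-1]          # best first
        n, k = len(order), int(frac * len(order))
        if scheme == "traditional":                  # top vs. bottom, middle discarded
            return order[:k], order[-k:]
        if scheme == "whole":                        # use everyone, split at the median
            return order[:n // 2], order[n // 2:]
        if scheme == "midpoint":                     # middle band only, extremes ignored
            mid = n // 2
            return order[mid - k:mid], order[mid:mid + k]
        raise ValueError(scheme)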

 

Plenary Session                                                        Saturday 11 July 16:10 – 17:50

 

 

Keynote

Room: Cartier A

Session Chair: To be announced                                                                       

16:1017:50    

 

 

 

 

Plenary Sessions                                                                                  Sunday 12 July

 

 

SIGEVO Meeting and Awards Presentations

Room: Cartier A

Session Chair: Darrell Whitley (Colorado State University)                              

8:3010:10      

 

Keynote

Room: Cartier A

Session Chair: Franz Rothlauf (University of Mainz)                                        

10:4012:20    

 

 

Paper Presentations and Special Sessions        Sunday 12 July 14:00 – 15:40

 

Late Breaking Papers 2: Operators and Representations

Room: Cartier A

Session Chair:                                                                        

14:0014:15     An Entropy Based Heuristic Model for Predicting Functional Sub-type Divisions of Protein Families

Deniz Yorukoglu, Yasin Bakis, Ugur Sezerman

Protein Function, Classification, Multiple Sequence Alignment

Multiple sequence alignments of protein families are often used for locating residues that are far apart in the sequence and are considered influential in determining the functional specificity of proteins towards various substrates, ligands, DNA and other proteins. In this paper, we propose an entropy-score based heuristic algorithm for predicting functional sub-family divisions of protein families, given the multiple sequence alignment of the protein family as input, without any functional sub-type or key-site information for any protein sequence. Two of the experimented test cases are reported in this paper. The first test case is the Nucleotidyl Cyclase protein family, consisting of guanylate and adenylate cyclases. The second test case is a dataset of proteins taken from six superfamilies in the Structure-Function Linkage Database (SFLD). Results from these test cases are reported in terms of sub-type divisions confirmed by phylogenetic relations from earlier studies in the literature.
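
As a small illustration of the kind of column-wise entropy score such a heuristic can build on (a generic Shannon entropy per alignment column, not the authors' scoring function):

    import math
    from collections import Counter

    def column_entropy(column):
        """Shannon entropy of one alignment column; low entropy inside a
        putative sub-family combined with high entropy across the whole family
        is the kind of signal an entropy-based sub-typing heuristic can use."""
        counts = Counter(column)
        total = len(column)
        return -sum((c / total) * math.log2(c / total) for c in counts.values())

    msa = ["ACDEA", "ACDKA", "GCDEA"]        # toy alignment, one row per sequence
    print([column_entropy(col) for col in zip(*msa)])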

14:1514:30     Optimization of Morphological Data in Numerical Taxonomy Analysis Using Genetic Algorithms Feature Selection Method

Yasin Bakis, Ugur O. Sezerman, Tekin M. Babaç, Cem Meydan

Optimization, Morphological Data, Phylogenetics, Biological Data Mining, Genetic algorithms

Studies in Numerical Taxonomy are carried out by measuring as many characters as possible. The workload on scientists, and the labor needed to perform the measurements, increases in proportion to the number of variables (characters) used in the study. However, some part of the data may be irrelevant or sometimes meaningless. In this study, we introduce an algorithm to obtain a subset of the data with the minimum number of characters that can still represent the original data. Morphological characters were used in the optimization of the data by a Genetic Algorithms feature selection method. The analyses were performed on an 18-character by 11-taxa data matrix with standardized continuous characters. The analyses resulted in a minimum set of 2 characters, which means the original tree based on the complete data can also be constructed from those two characters.

14:3014:45     An Evolutionary Approach to Constructive Induction for Link Discovery

Tim Weninger, William H. Hsu, Jing Xia, Waleed Aljandal

machine learning, classification, genetic programming

This paper presents a genetic programming-based symbolic regression approach to the construction of relational features in link analysis applications. Specifically, we consider the problems of predicting, classifying and annotating friends relations in friends networks, based upon features constructed from network structure and user profile data. We first document a data model for the blog service \textit{LiveJournal}, and define a set of machine learning problems such as predicting existing links and estimating inter-pair distance. Next, we explain how the problem of classifying a user pair in a social network, as directly connected or not, poses the problem of selecting and constructing relevant features. We use genetic programming to construct features, represented by multiple symbol trees with base features as their leaves. In this manner, the genetic program selects and constructs features that may not have been originally considered, but possess better predictive properties than the base features. Finally, we present classification results and compare these results with those of the control and similar approaches.

14:4515:00     A Data-Based Coding of Candidate Strings in the Closest String Problem

Bryant A Julstrom

Closest string problem, data-based coding, genetic algorithm

Given a set of strings $S$ of equal lengths over an alphabet $\Sigma$, the closest string problem seeks a string over $\Sigma$ whose maximum Hamming distance to any of the given strings is as small as possible. A data-based coding of strings for evolutionary search represents candidate closest strings as sequences of indexes of the given strings. The string such a chromosome represents consists of the symbols in the corresponding positions of the indexed strings. A genetic algorithm using this coding was compared with two GAs that encoded candidate strings directly as strings over $\Sigma$. In trials on twenty-five instances of the closest string problem with alphabets ranging in size from 2 to 30, the algorithm that used the data-based representation of candidate strings consistently returned the best results, and its advantage increased with the sizes of the test instances' alphabets.
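
A minimal sketch of the data-based coding: a chromosome is a sequence of indexes into the given strings, and the candidate string copies, at each position, the symbol of the indexed string.

    def decode(chromosome, strings):
        """Position j of the chromosome names the given string whose j-th
        symbol the candidate copies."""
        return "".join(strings[idx][j] for j, idx in enumerate(chromosome))

    def max_hamming(candidate, strings):
        """Objective to minimise: the largest Hamming distance to any given string."""
        return max(sum(a != b for a, b in zip(candidate, s)) for s in strings)

    strings = ["ACGT", "AGGT", "ACGA"]
    chromosome = [0, 1, 0, 2]        # one index per position, each in range(len(strings))
    candidate = decode(chromosome, strings)
    print(candidate, max_hamming(candidate, strings))   # e.g. AGGA 2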

15:0015:15     Tree-Structure-Aware GP Operators for Automatic Gait Generation of Quadruped Robot

Kisung Seo, Soohwan Hyun, Erik D. Goodman

Genetic Programming, Tree-structure-aware GP operators, Quadruped Robot, Automatic Gait Generation

In this paper, we suggest tree-structure-aware GP operators that heed tree distributions in structure space and their possible structural difficulties. The main idea of the proposed GP operators is to place the offspring generated by crossover and/or mutation in a specified region of tree-structure space insofar as possible, taking into account the observation that most solutions are found in that region. To enable that, the proposed operators are designed to utilize information about the region to which the parents belong and node/depth statistics of the subtree selected for modification. The proposed approach is applied to automatic gait generation for a quadruped robot to demonstrate its effectiveness. The results show that the proposed tree-structure-aware operators are superior to standard GP for the gait problem in both fitness and velocity.

15:1515:30     An Investigation Into the Structure of Genomes within an Evolution that uses Embryogenesis

Anthony M Roy, Erik K Antonsson, Andrew A Shapiro

Neural Networks, Embryogenesis, Analysis

Evolutionary algorithms that use embryogenesis in the creation of individuals have several desirable qualities. Such algorithms are able to create complex, modular designs which can scale well to large problems. However, the inner workings of developmental algorithms have not been investigated as thoroughly as those of their direct-encoding counterparts. More precisely, it would be beneficial to look at how the rules used during embryogenesis evolve alongside the phenotypes they produce. This paper reports on such an investigation into the evolution of a rule set for the growth of an artificial neural network, and identifies several aspects that are desirable for the genomes of a developmental evolutionary algorithm.

 

Late Breaking Papers 3: Learning and Classification

Room: Cartier B

Session Chair:                                                                        

14:0014:15     Robust Speech Recognition Using Evolutionary Class-Dependent LDA

Hossein Moeinzadeh, M-Mehdi Mohammadi, Ahmad Akbari, Babak Nasersharif

Speech Recognition, Linear Discriminant Analysis, MFCC, Harmony Search, Particle Swarm Optimization, Transformation matrix

Linear Discriminant Analysis (LDA) is a feature selection method in speech recognition. LDA finds transformations that maximize the between-class scatter and minimize the within-class scatter. This transformation can be obtained in a class-dependent or class-independent manner. In this paper, we propose a method to improve LDA and use it in place of the DCT in MFCC extraction. The transformation matrix is computed through three evolutionary methods (GA, HS, and PSO), which optimize a class-dependent LDA transformation matrix for robust MFCC extraction. For this purpose, we first use the logarithm of clean-speech Mel filter bank energies (LMFE) of each class to define the within-class scatter for each class and the between-class scatter over all classes. Next, the class-dependent transformation matrix is used in place of the DCT in MFCC feature extraction. The experimental results show that the proposed speech recognition and optimization methods using class-dependent LDA achieve a significant isolated-word recognition rate on the Aurora2 database.

14:1514:30     HS-ROBDD: An Efficient Variable Order Binary Decision Diagram

Mehdi Mohammadi, Hossein Pazhoumand-dar, Mohsen Soryani, Hossein Moeinzadeh

Binary Decision Diagram (BDD), Harmony Search

Reduced Ordered Binary Decision Diagrams (ROBDDs) are frequently used as the representation of choice for solving various CAD problems such as synthesis, digital-system verification and testing. The size of an ROBDD for a function can grow exponentially with the number of independent variables of the function, which is known as the memory explosion problem. Since the size of an ROBDD depends heavily on the variable order used, there is a strong need to find variable orders that minimize the number of nodes in an ROBDD. As finding the optimal variable ordering is an NP-complete problem, in this paper we use Harmony Search (HS) to find an optimal variable ordering for binary decision diagrams. Some benchmarks from LGSynth91 are used to evaluate the suggested method. The results obtained show that this method is able to find an optimal order of the input variables and reduce the size of the ROBDD considerably.

14:3014:45     Biasing Evolving Generations in Learning Classifier Systems using Information Theoretic Measures

Karthik Kuber, Chilukuri K Mohan

Genetic Algorithms, Learning Classifier Systems, XCS, Information Theory, Sufficiency, Biased Evolution

This paper presents information-theoretic ideas for modifying the course of evolution in Learning Classifier Systems. The approach exploits the possibility that individuals in each generation contain potentially useful information that is not currently utilized. In particular, we look at the Sufficiency measure of a rule as an information-theoretic indicator. We propose modifying the XCS algorithm so that, in the early formative stages of each run, it takes these additional indicators of usefulness into account: the probability of selection during that period is based on sufficiency. Preliminary simulation results show that the new approach reduces the effort needed to solve the 20-input multiplexer problem.

14:45–15:00     Learning in the Time-Dependent Minority Game

David Catteeuw, Bernard Manderick

multi-agent, reinforcement learning, minority games

We study learning in the time-dependent Minority Game (MG). The MG is a repeated conflicting-interest game involving a large number of agents. So far, the learning mechanisms studied have been rather naive, involving only exploitation of the best strategy found so far at the expense of exploring new strategies. Instead, we use a reinforcement learning method called Q-learning and show how it improves the results on MG extensions of increasing difficulty.
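
For readers unfamiliar with the setup, the following minimal sketch shows stateless epsilon-greedy Q-learning agents in a basic Minority Game. The stateless agents, the parameter values, and the reward scheme are assumptions chosen for brevity; they are simpler than the time-dependent extensions the paper studies.

import random

def minority_game_qlearning(n_agents=101, rounds=5000, alpha=0.1, gamma=0.0, epsilon=0.05):
    """Stateless Q-learning agents in a Minority Game: each round, every agent
    picks side 0 or 1; agents on the minority side receive reward 1."""
    Q = [[0.0, 0.0] for _ in range(n_agents)]
    minority_sizes = []
    for _ in range(rounds):
        actions = []
        for q in Q:
            if random.random() < epsilon:
                actions.append(random.randint(0, 1))        # explore
            else:
                actions.append(0 if q[0] >= q[1] else 1)     # exploit
        ones = sum(actions)
        minority = 1 if ones < n_agents - ones else 0
        for q, a in zip(Q, actions):
            reward = 1.0 if a == minority else 0.0
            q[a] += alpha * (reward + gamma * max(q) - q[a])  # Q-learning update
        minority_sizes.append(min(ones, n_agents - ones))
    # average minority size: closer to (n_agents - 1) / 2 means better coordination
    return sum(minority_sizes) / rounds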

15:00–15:15     Prisoner's Dilemma on Graphs with Heterogeneous Agents

Lingzhi Luo, Nilanjan Chakraborty, Katia Sycara

Social network, Conflict behavior, Multi-cultural society, Game theory, Prisoner's dilemma, Oscillation, Steady state, Social simulation

The prisoner's dilemma (PD) game has been used as a prototypical model for studying social choice situations with self-interested agents. Although in a single-shot PD game both players playing defect is a Nash equilibrium, in social settings cooperation is usually observed among self-interested agents. The emergence of cooperation has been shown in the setting of iterated PD games and PD games on graphs. In this paper, motivated by the modeling of conflict scenarios in multi-cultural societies, we study the PD game on a graph with multiple types of agents. We assume that there are two types of agents forming the nodes of the graph and that the agents play the PD game with neighbors of the other type. The strategy update neighborhood of an agent can consist of either (a) neighbors of its own type only or (b) neighbors of its own type and the other type. We show by simulation that in both cases the fraction of players playing defect in the final solution is much higher than in the conventional case where there is no distinction between the game-playing and strategy update neighborhoods (i.e., the agents are of the same type).
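
A minimal simulation of the described setting might look like the sketch below: agents of two types play PD only with neighbours of the other type and then imitate the best-scoring agent in their update neighbourhood. The imitate-best rule and the payoff values are illustrative assumptions, not necessarily the paper's exact dynamics.

import random

# Prisoner's dilemma payoffs for the row player: T > R > P > S
PAYOFF = {('D', 'C'): 5, ('C', 'C'): 3, ('D', 'D'): 1, ('C', 'D'): 0}

def step(graph, types, strategies, update_own_type_only=True):
    """One synchronous round: play PD with other-type neighbours, then copy
    the strategy of the best-scoring agent in the update neighbourhood."""
    score = {a: 0 for a in graph}
    for a in graph:
        for b in graph[a]:
            if types[b] != types[a]:                     # game-playing neighbourhood
                score[a] += PAYOFF[(strategies[a], strategies[b])]
    new = {}
    for a in graph:
        pool = [b for b in graph[a]
                if not update_own_type_only or types[b] == types[a]]
        pool.append(a)                                   # an agent may keep its own strategy
        new[a] = strategies[max(pool, key=lambda x: score[x])]
    return new

# tiny example: a complete graph on 4 agents, two of each type
graph = {0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2]}
types = {0: 'A', 1: 'A', 2: 'B', 3: 'B'}
strategies = {a: random.choice('CD') for a in graph}
for _ in range(10):
    strategies = step(graph, types, strategies)
print(strategies)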

15:15–15:30     Improving Classification Accuracy Using Evolutionary Fuzzy Transformation

Hossein Moeinzadeh, Babak Nasersharif, Abdolazim Rezaee, Hossein Pazhoumand-dar

Pre-processing, Classification, Evolutionary Algorithm, Membership degree, Genetic Algorithm, Transformation matrix, Particle Swarm Optimization

Selection of a classifier is only one aspect of the data classification problem. Equally important (if not more so) is the pre-processing strategy to be employed. In this paper, a pre-processing step is proposed in order to increase classification accuracy. The aim of this approach is to find a transformation matrix that discriminates between classes by transforming the data into a new space, which tends to increase classification accuracy. This transformation matrix is computed through two evolutionary methods (GA and PSO) using a fuzzy approach, with the aim of increasing the membership degree of data points to their classes in the transformed space. The transformation matrix is independent of the classifier, and the classifier type has no effect on its computation. The obtained results show that these pre-processing methods increase the accuracy of different classifiers.

 

Evolutionary Computation in Practice-6

Room: Bonsecours

Session Chair: To be announced                                                                       

14:00–15:40    

 

Late Breaking Papers 6: Applications C

Room: St. Charles

Session Chair:                                                                        

14:00–14:15     A Comparison of Selection, Recombination, and Mutation Parameter Importance over a Set of Fifteen Optimization Tasks

Edwin Roger Banks, Paul Agarwal, Marshall McBride, Claudette Owens

Genetic Programming, Genetic Algorithms, Survey, Recombination, Selection, Mutation, Evolutionary Computation Survey

How does one choose an initial set of parameters for an evolutionary computing algorithm? Clearly some choices are dictated by the problem itself, such as the encoding of a problem solution, or how much time is available for running the evolution. Others, however, are frequently found by trial and error. These may include population sizes, number of populations, type of selection, recombination and mutation rates, and a variety of other parameters. Sometimes these parameters are allowed to co-evolve along with the solutions rather than being set by trial and error. But in both cases, an initial setting is needed for each parameter. When there are hundreds of parameters to be adjusted, as in some evolutionary computation tools, one would like to spend time adjusting only those believed to be most important, or most sensitive, and leave the rest at an initial default value. Thus one primary goal of this paper is to establish the relative importance of each parameter; the other is to establish general guidance to assist in determining these initial default values. We propose to develop this guidance by studying the solutions resulting from variations around the default starting parameters applied across fifteen different application types.

14:15–14:30     Toward a Universal Operator Encoding for Genetic Programming

Edwin Roger Banks, Paul Agarwal, Marshall McBride, Claudette Owens

Universal Operators, Genetic Programming, Simplified Arithmetic Operators, Subdition, Diviplication, AddSub, MulDiv, Interpolated Operators

The 2002 CEC paper "Genetic Programming with Smooth Operators for Arithmetic Expressions: Diviplication and Subdition" by Ursem and Krink proposed blending certain arithmetic operators by interpolation to smooth the transition from one operator to another in the fitness landscape. Inspired by their idea, we show here how to generalize further by using combinations of more than two operators, requiring log(N) additional parameters for each N operators so combined. Comparative results are reported for the application of this methodology to a variety of optimization tasks, including symbolic regression, an aspherical lens system design, a UAV path optimization, and a remote-sensor image noise-filter design.
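
One plausible reading of the "log(N) parameters for N operators" construction is nested linear interpolation over a list of 2^k operators, as in the sketch below; the nesting scheme and the example values are assumptions for illustration only, not the paper's exact encoding.

import operator

def blend(ops, params):
    """Blend 2**len(params) binary operators into one smooth operator by
    nested linear interpolation: each parameter in [0, 1] interpolates
    between the blended results of the two halves of the operator list."""
    def blended(a, b):
        def rec(op_slice, ps):
            if not ps:
                return op_slice[0](a, b)
            mid = len(op_slice) // 2
            t = ps[0]
            return (1 - t) * rec(op_slice[:mid], ps[1:]) + t * rec(op_slice[mid:], ps[1:])
        return rec(list(ops), list(params))
    return blended

# "subdition": t = 0 gives a - b, t = 1 gives a + b, values in between are smooth mixes
subdition = blend([operator.sub, operator.add], [0.25])
print(subdition(3.0, 2.0))   # 0.75 * (3 - 2) + 0.25 * (3 + 2) = 2.0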

14:30–14:45     Using Simulated Annealing for Producing Software Architectures

Outi Räihä, Erkki Mäkinen, Timo Poranen

simulated annealing, search-based software design

Automatic design of software architecture by means of genetic algorithms has already been shown to be feasible. A natural question is whether genetic algorithms can be augmented, if not replaced, by some other search method in the search for good architectures. The present paper studies the possibilities of simulated annealing in designing software architectures. We start from functional requirements given as a graph of functional responsibilities and consider two quality attributes, modifiability and efficiency. It is concluded that simulated annealing as such does not produce natural architectures, but it is useful as a method of producing initial populations for genetic algorithms.
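
For reference, a generic simulated-annealing skeleton of the kind applied here is sketched below. The neighbour and quality functions (architectural transformations and a combined modifiability/efficiency score) are placeholders, and the cooling-schedule values are assumptions.

import math, random

def simulated_annealing(initial, neighbour, quality, t0=100.0, cooling=0.995, steps=20000):
    """Maximize quality(solution): `neighbour` applies one transformation
    (e.g. move a responsibility, add or remove an interface), `quality`
    combines modifiability and efficiency into a single score."""
    current, current_q = initial, quality(initial)
    best, best_q = current, current_q
    t = t0
    for _ in range(steps):
        cand = neighbour(current)
        cand_q = quality(cand)
        # always accept improvements; accept worse moves with Boltzmann probability
        if cand_q >= current_q or random.random() < math.exp((cand_q - current_q) / t):
            current, current_q = cand, cand_q
            if current_q > best_q:
                best, best_q = current, current_q
        t *= cooling
    return best, best_q

In line with the paper's conclusion, the architectures found this way might serve as seeds for a genetic algorithm's initial population rather than as final designs.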

14:45–15:00     The Degree of Dynamism for Workforce Scheduling Problem with Stochastic Task Duration

Yossi Borenstein, Abdullah Alsheddy, Edward Tsang, Nazaraf Shah

Dynamic scheduling, estimation, planning

Real-time dispatching strategies in dynamic environments are a growing area of interest. Most current work focuses mainly on two dynamic aspects of the problem, namely dynamic arrival of jobs and dynamic travel time. The degree of dynamism, for example, is defined with respect to dynamic arrival of jobs. This paper focuses on another dynamic aspect, namely the duration of tasks. This aspect becomes important when task durations are relatively long and, in addition, one has to respect time windows. We characterize the degree of dynamism of such problems and show that it relates to the expected cost of a static scheduler that is reapplied in light of dynamic events. Furthermore, preliminary experiments indicate that the performance of the scheduler can be improved when the expected duration of a task is overestimated.

15:00–15:15     Evaluating Evolution and Monte Carlo for Controlling Air Traffic Flow

Adrian Agogino

Air Traffic Control, Evolution, Genetic Algorithm

The automated optimization of air traffic flow is a critical component of the next-generation air traffic system, designed to facilitate the future expansion of air traffic with little increase in infrastructure. While many traditional optimization approaches have been applied to the air traffic flow problem, they have difficulty scaling to large problems and handling the nonlinearities inherent in air traffic flow patterns. As a solution, this paper shows how genetic algorithms can be successfully applied to this problem. With this approach, the airspace is broken up into separate control points, with a single gene within a chromosome controlling an individual point. A genetic algorithm can then be used to find a controller that maximizes the performance of the airspace. To validate this approach, we use FACET, an air traffic simulator developed at NASA and used extensively by the FAA and industry. On a scenario composed of one thousand aircraft and two points of congestion, our results show that the evolutionary method provides 60% higher performance than more traditional Monte Carlo methods.

15:15–15:30     Lessons Learned in Application of Evolutionary Computation to a Set of Optimization Tasks

Edwin Roger Banks, Paul Agarwal, Marshall McBride, Claudette Owens

Lessons Learned, Genetic Programming, Optimization

Many GECCO papers discuss lessons learned in a particular application, but few papers discuss lessons learned over an ensemble of problem areas. A scan of the tables of contents of the Proceedings from GECCO 2005 and 2006 showed no paper title stressing lessons learned, although the term "pitfall" appeared occasionally in abstracts, typically applied to a particular practice. We present in this paper a set of broadly applicable lessons learned in the application of evolutionary computing (EC) techniques to a variety of problem areas and present advice related to encoding, running, monitoring, and managing an evolutionary computing task.

 

Late Breaking Papers 7: Applications D

Room: Les Courants

Session Chair:                                                                        

14:00–14:15     Multiobjective Optimization of Technical Market Indicators

Diego J. Bodas-Sagi, Pablo Fernández, J. Ignacio Hidalgo, Francisco J. Soltero, José L. Risco-Martín

Finance, Optimization, Evolutionary Algorithms, Decision making, Stock market, Data mining, Technical trading rules

This paper deals with the optimization of technical indicators for stock market investment. Price prediction is a problem of great complexity, and technical indicators are usually used to predict market trends. The main difficulty in the use of technical indicators lies in deciding the parameter values. We propose the use of Evolutionary Algorithms (EAs) to obtain the best parameter values for a collection of indicators that will help in the buying and selling of shares. This paper extends previous work by including additional indicators and applying them to more complex problems. The Moving Averages Convergence-Divergence (MACD) indicator and the Relative Strength Index (RSI) oscillator have been selected to obtain the buying/selling signals. The experimental results indicate that our EAs offer a solution to the problem, obtaining results that improve on those obtained with the technical indicators under their standard parameters.
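
The two indicators named in the abstract have standard textbook definitions, sketched below; an EA would tune parameters such as the fast/slow/signal periods and the RSI period, and the default values shown are only the conventional ones, not the paper's evolved settings.

import numpy as np

def ema(prices, period):
    """Exponential moving average with smoothing factor 2 / (period + 1)."""
    prices = np.asarray(prices, dtype=float)
    alpha = 2.0 / (period + 1)
    out = np.empty_like(prices)
    out[0] = prices[0]
    for i in range(1, len(prices)):
        out[i] = alpha * prices[i] + (1 - alpha) * out[i - 1]
    return out

def macd(prices, fast=12, slow=26, signal=9):
    """MACD line and its signal line; a crossover of the two is the usual
    buy/sell trigger."""
    line = ema(prices, fast) - ema(prices, slow)
    return line, ema(line, signal)

def rsi(prices, period=14):
    """Relative Strength Index over the last `period` price changes."""
    deltas = np.diff(np.asarray(prices, dtype=float))
    gains = np.clip(deltas[-period:], 0, None).mean()
    losses = -np.clip(deltas[-period:], None, 0).mean()
    if losses == 0:
        return 100.0
    return 100.0 - 100.0 / (1.0 + gains / losses)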

14:15–14:30     Improving SMT Performance: an Application of Genetic Algorithms to Configure Resizable Caches

Josefa Díaz, J. Ignacio Hidalgo, Francisco Fernández, Oscar Garnica, Sonia López

Genetic Algorithms, Simultaneous Multithreading, Optimization, Cache Memories, Adaptive Caches, Reconfigurable Caches, GALS

Simultaneous Multithreading (SMT) is a technology aimed at improving the throughput of the processor core by exploiting Instruction Level Parallelism (ILP) and Thread Level Parallelism (TLP). Nevertheless, a good control strategy is required when resources are shared among different threads, so that throughput is optimized. We study the application of evolutionary algorithms to improve the allocation of cache-hierarchy configurations on a Simultaneous Multithreading (SMT) processor. Resizable caches have demonstrated their efficiency by adapting their configuration to the workload at runtime. Moreover, methodologies and techniques such as dynamic resource allocation have previously been developed to optimize cache hit behavior and thereby improve global SMT performance. In this paper we propose the use of a Genetic Algorithm (GA) to optimize dynamically reconfigurable cache designs. Given that different workloads feature different characteristics and needs, we apply a GA to cache design in order to obtain a better dynamic configuration that increases the number of instructions per cycle (IPC). The obtained results show the feasibility of the approach and the potential of GAs for SMT optimization.

14:30–14:45     Using GAs to Balance Technical Indicators on Stock Picking for Financial Portfolio Composition

António Gorgulho, Rui Neves, Nuno Horta

Portfolio, Optimization, Technical Analysis, Evolutionary Algorithms

The building of financial portfolios or funds is a widely known problem in financial markets which normally requires rigorous analysis in order to select the most profitable assets. The subject is becoming popular among computer scientists, who try to adapt known intelligent-computation techniques to the market's domain. This paper proposes a system, based on those techniques, which aims to generate a profitable portfolio by using technical analysis indicators. In order to validate the designed application we performed a comparison against the Buy & Hold strategy and a purely random one. The preliminary results are promising, since the developed approach easily beats the remaining methodologies during bull-market periods.

14:45–15:00     Efficient Trade Execution Using a Genetic Algorithm in an Order Book Based Artificial Stock Market

Wei Cui, Anthony Brabazon, Michael O'Neill

Algorithmic Trading, Trade Execution, Volume Weighted Average Price, Artificial Stock Market, Genetic Algorithm, Evolutionary Computation

Although there is a plentiful literature on the use of evolutionary methodologies for the trading of financial assets, little attention has been paid to the issue of efficient trade execution. Trade execution is concerned with the actual mechanics of buying or selling the desired amount of a financial instrument of interest. This paper introduces the concept of trade execution and outlines the limited prior work applying evolutionary computing methods for this task. Furthermore, we build an Agent-based Artificial Stock Market and apply a Genetic Algorithm to evolve an efficient trade execution strategy. Finally we suggest a number of opportunities for future research.

15:00–15:15     Cryptanalysis of Four-Rounded DES using Binary Particle Swarm Optimization

Waseem Shahzad, Abdul Basit Siddiqui, Farrukh Aslam Khan

Cryptanalysis, DES, PSO, Fitness Function, GA

Cryptanalysis of Feistel ciphers is difficult due to their high nonlinearity and autocorrelation. On the other hand, substitution ciphers are easily breakable due to their simpler encryption process. In this paper, a highly efficient Binary Particle Swarm Optimization (PSO) based cryptanalysis approach for four-rounded Data Encryption Standard (DES) is presented. Several optimum keys are generated in different runs of the algorithm on the basis of their fitness value and, finally, the real key is found by guessing every individual bit. The robustness of the proposed technique is also checked for eight-rounded DES. Our approach shows promising results when compared with the cryptanalysis of DES performed by other techniques such as Genetic Algorithms (GA).
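
A standard binary PSO of the kind used here is sketched below: velocities are real-valued, and a sigmoid of each velocity gives the probability that the corresponding key bit is 1. The fitness function (for instance, scoring a candidate key by how well trial decryptions match known plaintext) is left abstract, and the 56-bit key length and parameter values are assumptions.

import math, random

def binary_pso(fitness, n_bits=56, n_particles=30, iters=200,
               w=0.7, c1=1.5, c2=1.5, v_max=4.0):
    """Binary PSO: maximize fitness(bit_list) over bit strings of length n_bits."""
    pos = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(n_particles)]
    vel = [[0.0] * n_bits for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [fitness(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n_bits):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                vel[i][d] = max(-v_max, min(v_max, vel[i][d]))
                prob = 1.0 / (1.0 + math.exp(-vel[i][d]))      # sigmoid of velocity
                pos[i][d] = 1 if random.random() < prob else 0
            f = fitness(pos[i])
            if f > pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f > gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f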

15:15–15:30     Motion Detection in Complex Environments by Genetic Programming

Brian Pinto, Andy Song

Genetic Programming, Motion Detection, Video Analysis, Tracking

Detecting motion is an important aspect of machine vision. However, real-world vision tasks often contain interfering motion information which is not of interest. To tackle this difficult task, we adapt Genetic Programming to this domain. The GP-based methodology presented in this paper does not require the implementation of existing motion detection algorithms. The evolved programs can detect genuine moving objects such as cars and boats, while ignoring background movements such as waving trees, rippling water surfaces and even pedestrians. These programs provide reliable performance under different lighting conditions, both indoors and outdoors. Furthermore, no preprocessing of the video input is required, which is usually mandatory in conventional vision approaches.

 

Late Breaking Papers 4: Applications A

Room: Verriere A

Session Chair:                                                                        

14:00–14:15     Evolving Biochemical Reaction Networks with Stochastic Attributes

Thomas R Kiehl

Genetic Algorithms, Intracellular Signalling, Stochastic Simulation

Biochemical networks display a wide range of behaviors. While many of these networks tend to operate in a steady-state regime, others exhibit distinctly stochastic behaviors. Fitting models to data from these systems challenges many of the linear and steady-state assumptions of typical modeling techniques. The genetic algorithm described herein seeks to generate networks which exhibit desired average/steady-state behaviors while minimizing or maximizing the standard deviation of those behaviors.

14:15–14:30     Grisland: A Parallel Genetic Algorithm for Finding Near Optimal Solutions to the Traveling Salesman Problem

Jonatan Gomez, Roberto Poveda, Elizabeth Leon

Parallel Genetic Algorithms, Traveling Salesman Problem

This paper presents a parallel genetic algorithm for finding near-optimal solutions to the well-known traveling salesman problem in a distributed computational environment. The proposed algorithm, called Grisland, combines two well-known parallel genetic algorithms, the grid genetic algorithm (fine-grained model) and the island genetic algorithm, and is improved with a local optimization technique, called 2-opt, developed especially for the traveling salesman problem. The proposed algorithm is tested on a subset of the standard TSPLib data set. Our results show that Grisland is able to find good solutions for the traveling salesman problem in few parallel iterations of the evolutionary process.
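
The 2-opt local optimization mentioned in the abstract is a standard TSP improvement step; a minimal standalone version is sketched below, with dist taken to be a precomputed distance matrix. How Grisland interleaves this step with the grid and island models is not shown here.

def two_opt(tour, dist):
    """Repeatedly reverse the tour segment between two cities whenever doing
    so shortens the tour (classic 2-opt); dist[a][b] is the distance matrix."""
    improved = True
    n = len(tour)
    while improved:
        improved = False
        for i in range(n - 1):
            for j in range(i + 2, n):
                a, b = tour[i], tour[i + 1]
                c, d = tour[j], tour[(j + 1) % n]
                if a == d:          # skip the wrap-around edge adjacent to i
                    continue
                # gain from replacing edges (a,b) and (c,d) with (a,c) and (b,d)
                if dist[a][c] + dist[b][d] < dist[a][b] + dist[c][d]:
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour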

14:30–14:45     Exploring an Evolutionary Medical Analytic Wallet

Aaron K Baughman, Neil Katz, Christian Eggenberger, Barry Graham, Chris Dawson, Mweene Monze, Peter Malkin

Biomedical Analytic Wallet, biometrics, Application, Artificial Intelligence, Genetic Algorithm, Pattern Recognition, Health, electronic health record, Virtual World, mobile computing, Life Sciences, pattern recognition, Neural Network

The practice of medicine and biomedical research has become information based, which enhances the safety, efficiency, and effectiveness of the health enterprise. Informational sources such as entire mapped genome systems, advanced imaging techniques, health screening technologies and individualized medical plans require advanced analytical engines. Biomedical analytics provides a high-throughput system for pattern and feature analysis that delivers personalized, information-based medicine. Personal multimedia and medical data need to be shared and combined for proper patient diagnosis and prognosis. A multimodal analytic wallet provides a construct for the encapsulation of personalized information and computational algorithms. This exploratory paper discusses the idea of leveraging neuroplastic fidelity within a virtual- and real-world biomedical analytic wallet. The key components of the biomedical wallet include a pervasive repository, an analytic interface and an analytic environment that encompass a neuroplastic fidelity algorithm.

14:45–15:00     Cancer Classification Using Microarray and Layered Architecture Genetic Programming

Jung-Yi Lin

Genetic programming, cancer classification, gene expression data, layered architecture genetic programming

An important problem in cancer diagnosis and treatment is to distinguish malignant tumors from benign ones. Classifying tumors correctly allows specific therapies to be targeted properly, maximizing efficacy and reducing toxicity. Microarray technology makes it possible to monitor the expression of numerous genes in cells simultaneously. We can therefore use the information hidden in gene expression data to build a more accurate and more reliable classification model for tumor samples. In this paper we investigate a new approach to cancer classification using genetic programming and microarray gene expression profiles. Layered architecture genetic programming (LAGEP) is applied to build the classification model. Several typical cancer gene expression datasets are used to demonstrate the classification accuracy of the proposed model.

15:00–15:15     Novel Bio-Inspired Self-Repair Algorithm for Evolvable Fault Tolerant Hardware Systems

Mohammad Samie, Gabriel Dragffy, Tony Pipe

Embryonics, Self-Repair, Bio-Inspired System, Prokaryotic Bio-Inspired Model

This paper investigates and presents a novel self-repair algorithm, based on a prokaryotic bio-inspired artificial model, for implementing evolvable self-healing bio-inspired systems. The key feature of the model is that system reliability can be increased with only a minimal amount of hardware overhead. It also offers a bio-inspired compression/decompression technique that exploits the intimate relationship between different genes. Distributed DNA, highly dynamic and optimized genome redundancy and optimized self-repair characteristics (using block and cell elimination) are some of the other advantages of the proposed model.

 

Late Breaking Papers 5: Applications B

Room: Verriere B

Session Chair:                                                                        

14:00–14:15     Towards a Theory of Mind in Simulated Robots

Kyung-Joong Kim, Hod Lipson

Robotics, Evolutionary Computation, Estimation-Exploration Algorithm, Theory of Mind, Neural Networks, Simulation

The psychology term Theory of Mind (ToM) refers to the ability of an agent to recognize that an observed actor acts according to intentions and plans. In humans and some primates, ToM is fundamental to effective cooperation and competition, and is a key component of high-level cognition. In this paper, we explore the use of evolutionary robotics methods to create a robotic ToM. We use a co-evolutionary setup to evolve controllers that retrospectively explain an observed actor's behavior, and new actions that elicit new and more revealing behaviors. Evolved controllers can then be used to predict, manipulate and exploit the observed actor's behavior for cooperation or competition. Experimental results are shown in a physically realistic simulation environment, and demonstrate a significant performance improvement compared to a direct estimation baseline.

14:15–14:30     Self-Reflection in Evolutionary Robotics: Resilient Adaptation with a Minimum of Physical Exploration

Juan Cristobal Zagal, Hod Lipson

Self-modeling, self-reflection, metacognition, learning, evolutionary robotics

Metacognition is the ability of a system to observe and self-regulate its own cognitive processes. In this paper we explore the use of metacognitive processes to improve robot resiliency and learning skills. We examine a robot that contains two controllers: an innate controller that is directly connected to sensors and motors, and a meta controller that monitors and modulates the activity of the innate controller. We show how the meta controller can observe, model and control the innate controller without access to the innate controller's internal state or architecture. Quantitative comparisons of this method with traditional evolutionary robotics techniques show how this form of self-reflection is a promising alternative to traditional adaptation methods.

14:30–14:45     Evo_Indent: Interactive Evolution of GNU indent Options

W B Langdon

Evolutionary Algorithms, (1+3)-ES, mutation, chromosome reordering, user driven fitness, personalised software, customised interface, prettyprint, understandability, comprehension, refactoring, SBSE

Evo_Indent (http://www.dcs.kcl.ac.uk/staff/W.Langdon/evo_indent/) is a PHP web-server based, user-driven genetic algorithm which finds good C code layouts generated by GNU indent. Either the refactored source can be used, or the user's preferred indent command options can be saved and re-used to pretty print other program text files.

14:45–15:00     Multi-objective Evolutionary Optimization of 3D Differentiated Sensor Network Deployment

Chih-Wei Kang, Jian-Hung Chen

Wireless sensor network, multi-objective optimization, genetic algorithms

This paper describes a multi-objective evolutionary approach for solving multi-objective 3D deployment problems in differentiated wireless sensor networks (WSNs). A WSN is a wireless network consisting of spatially distributed autonomous sensors that monitor physical or environmental conditions. Deciding the locations of the sensors to be deployed on a terrain, taking different criteria into consideration, is an important issue in the design of wireless sensor networks. A multi-objective genetic algorithm is proposed to solve 3D differentiated WSN deployment problems with the objectives of sensor coverage, satisfaction of detection thresholds, and energy conservation. The preliminary experimental results demonstrate that the proposed approach is suitable for solving 3D deployment problems of WSNs with different requirements.

15:00–15:15     An Evolved Neural Controller for Bipedal Walking with Dynamic Balance

Michael E. Palmer, Daniel B. Miller

neural networks, robotics, bipedal, dynamic walking, evolution

We successfully evolved a neural network controller that produces dynamic walking in a simulated bipedal robot with compliant actuators, a difficult control problem. The evolutionary evaluation uses a detailed software simulation of a physical robot. We describe: 1) a novel theoretical method to encourage populations to evolve around local optima, which employs multiple demes and fitness functions of progressively increasing difficulty, and 2) the novel genetic representation of the neural controller.

15:15–15:30     Solving Iterated Functions Using Genetic Programming

Michael Douglas Schmidt, Hod Lipson

Iterated Functions, Symbolic Regression

An iterated function f(x) is a function that, when composed with itself, produces a given expression: f(f(x)) = g(x). Iterated functions are essential constructs in fractal theory and dynamical systems, but few analysis techniques exist for solving them analytically. Here we propose using genetic programming to find analytical solutions to iterated functions of arbitrary form. We demonstrate this technique on the notoriously hard iterated function problem of finding f(x) such that f(f(x)) = x^2 - 2. While some analytical techniques have been developed to find specific solutions to problems of this form, we show that it can be readily solved using genetic programming without recourse to deep mathematical insight. We find a previously unknown solution to this problem, suggesting that genetic programming may be an essential tool for finding solutions to arbitrary iterated functions.
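
A natural GP fitness for this problem simply measures how far a candidate's self-composition is from the target over a set of fitness cases, as in the sketch below; the sample range and the error measure are assumptions, and the candidate shown is deliberately wrong, just to exercise the machinery.

def iterated_fitness(f, g, xs=None):
    """Fitness of a candidate f for the iterated-function problem
    f(f(x)) = g(x): mean absolute error of the composition over sample
    points (lower is better)."""
    if xs is None:
        xs = [x / 10.0 for x in range(-20, 21)]   # fitness cases on [-2, 2]
    return sum(abs(f(f(x)) - g(x)) for x in xs) / len(xs)

# target: f(f(x)) = x^2 - 2; check the machinery with a (wrong) linear candidate
target = lambda x: x * x - 2
print(iterated_fitness(lambda x: x - 1, target))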

 

 

 

 

 


 