Tutorials


Tutorials Feedback

We have prepared two feedback questionnaires that should help the tutorial authors improve their tutorials, and may also influence the selection of tutorials offered at GECCO in the upcoming years. We kindly ask you, the attendees, to provide us with this feedback. Both feedback forms are anonymous, and all questions are optional.

Both forms are protected by a password, which is the same as the WIFI password at the conference venue. Please provide your feedback by the end of July!

List of Tutorials

Introductory Tutorials

Title / Organizers
NEW A Practical Guide to Benchmarking and Experimentation
  • Nikolaus Hansen INRIA Saclay, France
Evolution of Neural Networks
  • Risto Miikkulainen University of Texas at Austin
Evolutionary Computation: A Unified Approach
  • Kenneth A. De Jong George Mason University, Fairfax, VA
Evolutionary Multiobjective Optimization
  • Dimo Brockhoff Inria Saclay - Ile-de-France and CMAP, Ecole Polytechnique, France
NEW Fitness Landscape Characterisation of Optimisation Problems
  • Aldeida Aleti Monash University
  • Katherine Malan University of South Africa
  • Irene Moser Swinburne University of Technology
Genetic Programming
  • Una-May O'Reilly Massachusetts Institute of Technology, US
Hyper-heuristics
  • Daniel R. Tauritz Missouri University of Science and Technology
  • John R. Woodward University of Stirling, UK
Introducing Rule-based Machine Learning: Capturing Complexity
  • Ryan Urbanowicz University of Pennsylvania, USA
Introduction to Artificial Gene Regulatory Networks
  • Sylvain Cussat-Blanc
  • Wolfgang Banzhaf Memorial University of Newfoundland, Canada
Introduction to Randomized Continuous Optimization
  • Anne Auger Inria Saclay-Ile-de-France
  • Nikolaus Hansen INRIA Saclay, France
Introductory Statistics for EC: A Visual Approach
  • Mark Wineberg
Model-Based Evolutionary Algorithms
  • Dirk Thierens Utrecht University, The Netherlands
  • Peter A.N. Bosman Centre for Mathematics and Computer Science, The Netherlands
Representations for Evolutionary Algorithms
  • Franz Rothlauf Johannes Gutenberg-Universtität Mainz
Runtime Analysis of Population-based Evolutionary Algorithms
  • Per Kristian Lehre University of Nottingham, UK
  • Pietro S. Oliveto University of Sheffield, UK
Theory for Non-Theoreticians
  • Benjamin Doerr Ecole Polytechnique

Advanced Tutorials

Title / Organizers
CMA-ES and Advanced Adaptation Mechanisms
  • Youhei Akimoto Shinshu University, Japan
  • Nikolaus Hansen INRIA Saclay, France
Constraint-Handling Techniques used with Evolutionary Algorithms
  • Carlos Artemio Coello CINVESTAV-IPN, Mexico
NEW Evolutionary Large-Scale Global Optimization: An Introduction
  • Nabi Omidvar University of Birmingham
  • Xiaodong Li RMIT, Australia
NEW Exploratory Landscape Analysis
  • Pascal Kerschke University of Muenster
  • Mike Preuss WWU Münster
Expressive Genetic Programming: Concepts and applications
  • Lee Spector Hampshire College, USA
  • Nic McPhee University of Minnesota
Generative and Developmental Systems
  • Kenneth Stanley University of Central Florida
NEW Next Generation Genetic Algorithms
  • Darrell Whitley Colorado State University, USA
NEW Non-Static Parameter Choices in Evolutionary Computation
  • Carola Doerr CNRS & Universite Pierre et Marie Curie (Paris 6), France
NEW Recent Advances in Evolutionary Multi-Criterion Optimization
  • Kalyanmoy Deb Michigan State University
NEW Sequential Experimentation by Evolutionary Algorithms
  • Ofer Shir Tel-Hai College
  • Thomas Bäck Leiden Institute of Advanced Computer Science
  • Joshua D. Knowles University of Birmingham, UK
  • Richard Allmendinger University of Manchester, UK
Solving Complex Problems with Coevolutionary Algorithms
  • Krzysztof Krawiec Poznan University of Technology, Poland
  • Malcolm Heywood Dalhousie University
Theory of Swarm Intelligence
  • Dirk Sudholt The University of Sheffield

Specialized Tutorials

Title / Organizers
Automated Offline Design of Algorithms
  • Thomas Stützle IRIDIA laboratory, ULB, Belgium
  • Manuel López-Ibáñez University of Manchester, UK
Evolutionary Computation and Cryptology
  • Stjepan Picek KU Leuven, Belgium and Faculty of Electrical Engineering and Computing, Zagreb, Croatia
NEW Evolutionary Computation in Network Management and Security
  • Nur Zincir-Heywood Dalhousie University, Canada
  • Gunes Kayacik Qualcomm Research Silicon Valley, USA
Evolutionary Robotics
  • Nicolas Bredeche Université Pierre et Marie Curie
  • Stéphane Doncieux Université Pierre et Marie Curie
  • Jean-Baptiste Mouret Inria
Intelligent Systems for Smart Cities
  • Enrique Alba University of Málaga, Spain
NEW Multiagent Systems and Agent-based Modeling and Simulation
  • Ana Bazzan Universidade Federal do Rio Grande do Sul (UFRGS)

Introductory Tutorials

NEW A Practical Guide to Benchmarking and Experimentation

Many evolutionary algorithms, in particular those we use in practice, and many if not most relevant optimization problems are not amenable to comprehensive theoretical analysis. Therefore, assessing and predicting the performance of evolutionary algorithms in "the real world" requires obtaining and analyzing empirical data. These data arise either from "artificial" benchmarks or from "real-world" problems themselves. This tutorial is concerned with experimentation and the benchmarking of algorithms. This includes the design of experimental performance studies (small or large ones) and the presentation, analysis and assessment of the obtained data and results.

The aim of the tutorial is to enable participants to
1) design a sound benchmark study,
2) identify meaningful measurements for the performance assessment, and
3) display and interpret experimental results in a meaningful and accessible way.

The ultimate goal of experimental benchmark studies is to achieve quantitatively meaningful performance assessments, generally with predictive power with regard to "the real world". The topics considered will include experimental study design, accessible presentation in the form of tables, run-length distributions (data profiles), performance profiles, scatter plots, scale-up studies, success probabilities and restarts. Furthermore, the number of trials, the role of invariance, statistical significance, and the most common mistakes when conducting experimental studies are touched upon.

A selection of techniques is exemplified by benchmarking continuous-domain evolutionary and non-evolutionary algorithms in a black-box scenario using the BBOB COmparing Continuous Optimizers (COCO) platform, http://coco.gforge.inria.fr/doku.php.
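As a minimal illustration of what a run-length distribution (data profile) measures, the sketch below computes, from invented per-trial data, the fraction of trials that reach a target within a given evaluation budget. This is only a toy stand-in; the COCO platform computes such quantities itself from recorded experiment data.

```python
# Sketch: empirical runtime distribution (data profile) from benchmark
# trials. All data here are hypothetical, for illustration only.

def runtime_distribution(evals_to_target, budgets):
    """For each budget, return the fraction of trials that reached the
    target within that many function evaluations (None = never reached)."""
    n = len(evals_to_target)
    return [
        sum(1 for e in evals_to_target if e is not None and e <= b) / n
        for b in budgets
    ]

# Five trials: evaluations needed to hit the target (None = failure).
trials = [120, 300, None, 80, 450]
print(runtime_distribution(trials, budgets=[100, 200, 500]))
# -> [0.2, 0.4, 0.8]
```

Plotting such fractions against a range of budgets yields the run-length distribution curves discussed in the tutorial.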

Nikolaus Hansen

Nikolaus Hansen is a research scientist at INRIA, France. Educated in medicine and mathematics, he received a Ph.D. in civil engineering in 1998 from the Technical University Berlin under Ingo Rechenberg. Before joining INRIA, he worked in evolutionary computation, genomics and computational science at the Technical University Berlin, the InGene Institute of Genetic Medicine and ETH Zurich. His main research interests are learning and adaptation in evolutionary computation and the development of algorithms applicable in practice. His best-known contribution to the field of evolutionary computation is the so-called Covariance Matrix Adaptation (CMA).

Evolution of Neural Networks

Evolution of artificial neural networks has recently emerged as a powerful technique in two areas. First, while standard value-function-based reinforcement learning works well when the environment is fully observable, neuroevolution makes it possible to disambiguate hidden state through memory. Such memory makes new applications possible in many sequential decision tasks. Second, deep learning performance depends crucially on the network architecture and hyperparameters. While many such architectures are too complex to be optimized by hand, neuroevolution can be used to do so automatically. As a result, deep learning can be scaled to utilize more of the available computing power, and extended to domains combining multiple modalities, such as vision and language tasks. In this tutorial, I will review
(1) neuroevolution methods that evolve fixed-topology networks, network topologies, and network construction processes,
(2) ways of combining gradient-based training with evolutionary methods, and
(3) applications of these techniques in control, robotics, artificial life, games, image processing, and language.
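As a rough illustration of the first family of methods, fixed-topology neuroevolution can be reduced to its core: mutate the weights of a fixed network and keep improvements. Everything in this sketch (the task, the one-neuron "network", the (1+1)-style loop) is a toy stand-in for the methods surveyed, not any particular published algorithm.

```python
# Minimal sketch of fixed-topology neuroevolution: a (1+1)-style hill
# climber mutates the weights of a tiny one-neuron "network" on a toy
# task (learning logical AND). Purely illustrative.
import math
import random

DATA = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

def forward(w, x):                      # one sigmoid neuron: w = [w1, w2, bias]
    s = w[0] * x[0] + w[1] * x[1] + w[2]
    return 1.0 / (1.0 + math.exp(-s))

def fitness(w):                         # negative squared error (higher is better)
    return -sum((forward(w, x) - y) ** 2 for x, y in DATA)

random.seed(0)
parent = [random.gauss(0, 1) for _ in range(3)]
for _ in range(2000):                   # mutate all weights, keep if no worse
    child = [wi + random.gauss(0, 0.3) for wi in parent]
    if fitness(child) >= fitness(parent):
        parent = child

print([round(forward(parent, x)) for x, _ in DATA])   # ideally [0, 0, 0, 1]
```

Topology-evolving methods generalize this idea by also mutating the network structure rather than only its weights.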

Demos: The tutorial draws from over 30 videos illustrating methods and applications of neuroevolution, including rocket control, multilegged walking, predator-prey coevolution, evolving morphology and behavior for virtual creatures, the NERO machine-learning video game, a Turing Test for game bots, object and scene recognition, captioning, and language and music modeling.

Risto Miikkulainen

Risto Miikkulainen is a Professor of Computer Science at the University of Texas at Austin and a Fellow at Sentient Technologies, Inc. He received an M.S. in Engineering from the Helsinki University of Technology, Finland, in 1986, and a Ph.D. in Computer Science from UCLA in 1990. His recent research focuses on methods and applications of neuroevolution, as well as neural network models of natural language processing, and self-organization of the visual cortex; he is an author of over 370 articles in these research areas. He is an IEEE Fellow, member of the Board of Governors of the Neural Network Society, and an action editor of Cognitive Systems Research and IEEE Transactions on Computational Intelligence and AI in Games.

Evolutionary Computation: A Unified Approach

The field of Evolutionary Computation has experienced tremendous growth over the past 20 years, resulting in a wide variety of evolutionary algorithms and applications. The result poses an interesting dilemma for many practitioners in the sense that, with such a wide variety of algorithms and approaches, it is often hard to see the relationships between them, assess strengths and weaknesses, and make good choices for new application areas. This tutorial is intended to give an overview of a general EC framework that can help compare and contrast approaches, encourages crossbreeding, and facilitates intelligent design choices. The use of this framework is then illustrated by showing how traditional EAs can be compared and contrasted with it, and how new EAs can be effectively designed using it.

Finally, the framework is used to identify some important open issues that need further research.

Kenneth A. De Jong

Kenneth A. De Jong is a senior and well-known researcher in the EC community with a rich and diverse research profile. De Jong's research interests include genetic algorithms, evolutionary computation, machine learning, and adaptive systems. He is currently involved in research projects involving the development of new evolutionary algorithm (EA) theory, the use of EAs as heuristics for NP-hard problems, and the application of EAs to the problem of learning task programs in domains such as robotics, diagnostics, navigation and game playing. Support for these projects is provided by NSF, DARPA, ONR, and NRL. He is an active member of the Evolutionary Computation research community and has been involved in organizing many of the workshops and conferences in this area. He is the founding editor-in-chief of the journal Evolutionary Computation (MIT Press), and a member of the board of ACM SIGEVO.

Evolutionary Multiobjective Optimization

Many optimization problems are multiobjective in the sense that multiple, conflicting criteria need to be considered simultaneously. Due to the conflict between these objectives, usually no single optimal solution exists. Instead, the result is a set of so-called Pareto-optimal solutions, for which no other solution has better function values in all objectives.

In practice, Evolutionary Multiobjective Optimization (EMO) algorithms are widely used for solving these multiobjective optimization problems. Being stochastic blackbox optimizers, EMO approaches allow problems with nonlinear, nondifferentiable, or noisy objective functions to be tackled. By inherently working on sets of solutions, they allow the Pareto-optimal set to be computed or approximated in one algorithm run, as opposed to classical techniques from the multicriteria decision making (MCDM) field, which aim for single solutions. Defining problems in a multiobjective way has two further advantages:
1. The set of (Pareto-optimal) solutions may reveal design principles shared by different solutions (innovization).
2. Some single-objective problems become easier to solve using randomized search heuristics if additional objectives are added (multiobjectivization).
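The notion of Pareto-optimality above can be made concrete in a few lines. The sketch below, with invented objective vectors and minimization assumed in every objective, extracts the non-dominated subset of a finite set of solutions:

```python
# Sketch: extracting the Pareto-optimal (non-dominated) subset of a set
# of solutions, assuming all objectives are to be minimized.

def dominates(a, b):
    """a dominates b: no worse in every objective, strictly better in one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    return [p for p in points
            if not any(dominates(q, p) for q in points)]

# Hypothetical bi-objective values (e.g., cost vs. weight):
pts = [(1, 5), (2, 3), (4, 1), (3, 4), (5, 5)]
print(pareto_front(pts))   # -> [(1, 5), (2, 3), (4, 1)]
```

EMO algorithms maintain and refine such non-dominated sets over generations rather than converging to a single point.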

Within this tutorial, a comprehensive introduction to the EMO field will be given and selected research results will be presented in more detail. More specifically, we are going to (i) introduce the basic principles of EMO algorithms in comparison to classical solution-based approaches, (ii) show a few practical examples which motivate the use of EMO in terms of the mentioned innovization and multiobjectivization principles, and (iii) present a general overview of state-of-the-art algorithms and techniques. Moreover, we will present important recent research results in areas such as preference articulation, performance assessment, and benchmarking.

Though classified as introductory, this tutorial is intended for both novices and people familiar with EMO. Those without experience will learn about the foundations of multiobjective optimization and the basic working principles of state-of-the-art EMO algorithms. The recent results and open questions, highlighted after each chapter of the tutorial, can stimulate future research and/or discussions during the conference.

Dimo Brockhoff

Dimo Brockhoff received his diploma in computer science from University of Dortmund, Germany in 2005 and his PhD (Dr. sc. ETH) from ETH Zurich, Switzerland in 2009. Afterwards, he held two postdoctoral research positions in France, at Inria Saclay Ile-de-France (2009-2010) and at Ecole Polytechnique (2010-2011), before joining Inria in November 2011 as a permanent researcher (first in its Lille - Nord Europe research center and since October 2016 in the Saclay - Ile-de-France center). His research interests are focused on evolutionary multiobjective optimization (EMO), in particular on theoretical aspects of indicator-based search and on the benchmarking of blackbox algorithms in general.

NEW Fitness Landscape Characterisation of Optimisation Problems

Fitness landscape analysis (FLA) can be used to characterise optimisation problems by analysing the underlying search space in terms of the objective to be optimised. There have been many recent advances in the field of FLA, with the development of methods and measures that have been shown to be effective for understanding algorithm behaviour, predicting metaheuristic performance and selecting algorithms.

When optimisation problems are complex, it can be useful to characterise the search landscapes to gain insight into the nature of the problem. The ultimate aim of characterising problems is to answer questions such as: How challenging is a particular problem for an algorithm? How likely is an algorithm to produce good results given a problem with a particular search landscape? Why did my algorithm perform so well/badly on this particular problem?

The aim of this tutorial is to introduce delegates to the field of FLA in a theoretical and practical way. After the tutorial, delegates should feel confident to use some of the existing FLA techniques in practice.

The tutorial will cover the following topics:
- An overview of the theory of FLA
- FLA techniques for combinatorial optimisation problems
- FLA techniques for continuous optimisation problems
- Future research directions

The tutorial will end with an interactive challenge to delegates. Working in groups, delegates will be tasked with guessing the ranks of 2D fitness landscapes based on specified FLA metrics. The winning group will receive a prize.
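To give a concrete flavour of one classical FLA technique, the sketch below estimates landscape ruggedness via the lag-1 autocorrelation of fitness values along a random walk through the search space. The problem, neighbourhood and walk length are all invented for illustration:

```python
# Sketch: random-walk autocorrelation as a ruggedness indicator.
# High lag-1 autocorrelation suggests a smooth landscape; values near
# zero suggest a rugged one. All problem details here are illustrative.
import random

def random_walk_fitnesses(f, start, neighbour, steps):
    """Fitness values observed along a random walk of `steps` moves."""
    x, fs = start, []
    for _ in range(steps):
        fs.append(f(x))
        x = neighbour(x)
    return fs

def autocorrelation(fs, lag=1):
    n = len(fs)
    mean = sum(fs) / n
    var = sum((v - mean) ** 2 for v in fs) / n
    cov = sum((fs[i] - mean) * (fs[i + lag] - mean)
              for i in range(n - lag)) / (n - lag)
    return cov / var

def onemax(bits):               # a smooth (unrugged) toy landscape
    return sum(bits)

def flip_one(bits):             # neighbour: flip one random bit
    i = random.randrange(len(bits))
    return bits[:i] + [1 - bits[i]] + bits[i + 1:]

random.seed(1)
fs = random_walk_fitnesses(onemax, [0] * 50, flip_one, 2000)
print(autocorrelation(fs))      # close to 1: smooth rather than rugged
```

Replacing the toy landscape with a deceptive or multimodal one would drive the autocorrelation down, which is exactly how such measures discriminate problem classes.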

Aldeida Aleti

Having received her PhD in Computer Science and Software Engineering in 2012 from Swinburne University of Technology, Aldeida Aleti is currently a Senior Lecturer within the Faculty of Information Technology at Monash University in Australia. Her research interests include fitness landscape characterisation and combinatorial optimisation problems. She is particularly interested in Automated Software Engineering, which aims at automating the process of software development, starting from requirements elicitation, to software design, code generation, testing, and finally code repair. This involves the application and advancement of novel Artificial Intelligence and optimisation techniques.

Katherine Malan

Katherine Malan is a senior lecturer at the Department of Decision Sciences at the University of South Africa. She recently obtained her PhD in Computer Science from the University of Pretoria (2014), but has 20 years' lecturing experience in Computer Science at three different South African universities. Her research interests include fitness landscape analysis and the application of computational intelligence techniques to real-world problems. She is particularly interested in the link between fitness landscape features and algorithm performance and in applying FLA techniques to real-world optimisation problems, such as the training of neural networks. Katherine is an associate editor for Swarm and Evolutionary Computation and regularly reviews for a number of journals in fields related to evolutionary computation, swarm intelligence and operational research.

Irene Moser

I completed my PhD on dynamic optimisation in 2007. In 2008, I joined Swinburne's faculty of ICT as a lecturer. I have taught and developed quite a few programming and database subjects, run a Bachelor's course, 'rescued' a few PhD students and have taken on various roles in HDR coordination.
My research interests are all things optimisation - combinatorial, continuous, multiobjective, evolutionary methods and their search spaces, custom heuristics.

Genetic Programming

Genetic programming emerged in the early 1990s as one of the most exciting new evolutionary algorithm paradigms. It has rapidly grown into a thriving area of research and application. While sharing the evolutionary inspired algorithm principles of a genetic algorithm, it differs by exploiting an executable genome. Genetic programming evolves a 'program' to solve a problem rather than a single solution. This tutorial introduces the basic genetic programming framework. It explains how the powerful capability of genetic programming derives from modular algorithmic components: executable representations such as an abstract syntax tree, variation operators that preserve syntax and explore a variable-length, hierarchical solution space, appropriately chosen programming functions and fitness function specification.
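The executable-genome idea can be illustrated with a minimal tree representation. In this sketch (the nested-tuple encoding is invented for illustration), a program encoded as an abstract syntax tree is executed to obtain its outputs, which a fitness function would then score; variation operators such as subtree crossover would act on these same nodes:

```python
# Sketch: a program as an abstract syntax tree, the core representation
# of tree-based GP. Only representation and evaluation are shown here.
import operator

FUNCS = {'+': operator.add, '-': operator.sub, '*': operator.mul}

def evaluate(tree, env):
    if isinstance(tree, tuple):              # function node: (op, left, right)
        op, left, right = tree
        return FUNCS[op](evaluate(left, env), evaluate(right, env))
    return env.get(tree, tree)               # terminal: variable or constant

# The tree for x*x + 1, encoded as nested tuples:
program = ('+', ('*', 'x', 'x'), 1)
print([evaluate(program, {'x': v}) for v in (0, 1, 2, 3)])   # -> [1, 2, 5, 10]
```

Because syntax lives in the tree structure itself, swapping any subtree for another always yields a new syntactically valid, executable program.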

Una-May O'Reilly

Una-May O'Reilly is leader of the AnyScale Learning For All (ALFA) group at CSAIL. ALFA focuses on scalable machine learning, evolutionary algorithms, and frameworks for large scale knowledge mining, prediction and analytics. The group has projects in clinical medicine knowledge discovery: arterial blood pressure forecasting and pattern recognition, diuretics in the ICU; wind energy: turbine layout optimization, resource prediction, cable layout; and MOOC Technology: MoocDB, student persistence and resource usage analysis.

Her research is in the design of scalable Artificial Intelligence systems that execute on a range of hardware systems: GPUs, workstations, grids, clusters, clouds and volunteer compute networks. These systems include machine learning components such as evolutionary algorithms (e.g. genetic programming, genetic algorithms and learning classifiers), classification, non-linear regression, and forecasting algorithms. They span the interpretation and analysis of raw data, through inference on conditioned exemplar data, to the deployment and evaluation of learned “algorithmic machines” in the original application context.

Una-May received the EvoStar Award for Outstanding Achievements in Evolutionary Computation in Europe in 2013. She is a Junior Fellow (elected before age 40) of the International Society of Genetic and Evolutionary Computation, now ACM SIGEVO. She now serves as Vice-Chair of ACM SIGEVO. She served as chair of the largest international Evolutionary Computation Conference, GECCO, in 2005. She has served on the GECCO business committee, co-led the 2006 and 2009 Genetic Programming: Theory to Practice Workshops and co-chaired EuroGP, the largest conference devoted to Genetic Programming. In 2013, with Anna Esparcia, Aniko Ekart and Gabriela Ochoa she inaugurated the Women@GECCO meeting and chairs the group. She is the area editor for Data Analytics and Knowledge Discovery for Genetic Programming and Evolvable Machines (Kluwer), an editor for Evolutionary Computation (MIT Press), and an action editor for the Journal of Machine Learning Research.

Una-May has a patent for an original genetic algorithm technique applicable to internet-based name suggestions. She holds a B.Sc. from the University of Calgary, and an M.C.S. and Ph.D. (1995) from Carleton University, Ottawa, Canada. She joined the Artificial Intelligence Laboratory at MIT as a Post-Doctoral Associate in 1996.

Hyper-heuristics

The automatic design of algorithms has been an early aim of both machine learning and AI, but has proved elusive. The aim of this tutorial is to introduce hyper-heuristics as a principled approach to the automatic design of algorithms. Hyper-heuristics are metaheuristics applied to a space of algorithms; i.e., any general heuristic method of sampling a set of candidate algorithms. In particular, this tutorial will demonstrate how to mine existing algorithms to obtain algorithmic primitives for the hyper-heuristic to compose new algorithmic solutions from, and to employ various types of genetic programming to execute the composition process; i.e., the search of program space.

This tutorial will place hyper-heuristics in the context of genetic programming - which differs in that it constructs solutions from scratch using atomic primitives - as well as genetic improvement - which takes a program as starting point and improves on it (a recent direction introduced by William Langdon).

The approach proceeds from the observation that it is possible to define an invariant framework for the core of any class of algorithms (often by examining existing human-written algorithms for inspiration). The variant components of the algorithm can then be generated by genetic programming. Each instance of the framework therefore defines a family of algorithms. While this allows searches in constrained search spaces based on problem knowledge, it does not in any way limit the generality of this approach, as the template can be chosen to be any executable program and the primitive set can be selected to be Turing-complete. Typically, however, the initial algorithmic primitive set is composed of primitive components of existing high-performing algorithms for the problems being targeted; this more targeted approach very significantly reduces the initial search space, resulting in a practical approach rather than a mere theoretical curiosity. Iterative refining of the primitives allows for gradual and directed enlarging of the search space until convergence.

This leads to a technique for mass-producing algorithms that can be customised to the context of end-use. This is perhaps best illustrated as follows: typically a researcher might create a travelling salesperson problem (TSP) algorithm by hand. When executed, this algorithm returns a solution to a specific instance of the TSP. We will describe a method that generates TSP algorithms that are tuned to representative instances of interest to the end-user. This method has been applied to a growing number of domains, including data mining/machine learning, combinatorial problems including bin packing (on- and off-line), Boolean satisfiability, job shop scheduling, exam timetabling, image recognition, black-box function optimization, wind-farm layout, and the automated design of meta-heuristics themselves (from selection and mutation operators to the overall meta-heuristic architecture).

This tutorial will provide a step-by-step guide which takes the novice through the distinct stages of automatic design. Examples will illustrate and reinforce the issues of practical application. This technique has repeatedly produced results which outperform their manually designed counterparts, and a theoretical underpinning will be given to demonstrate why this is the case. Automatic design will become an increasingly attractive proposition, as the cost of human design will only increase in line with inflation while the speed of processors increases in line with Moore's law, making automatic design attractive for industrial application. Basic knowledge of genetic programming will be assumed.

Daniel R. Tauritz

Daniel R. Tauritz is an Associate Professor in the Department of Computer Science at the Missouri University of Science and Technology (S&T), a contract scientist for Sandia National Laboratories, a former Guest Scientist at Los Alamos National Laboratory (LANL), the founding director of S&T's Natural Computation Laboratory, and founding academic director of the LANL/S&T Cyber Security Sciences Institute. He received his Ph.D. in 2002 from Leiden University for Adaptive Information Filtering employing a novel type of evolutionary algorithm. He served previously as GECCO 2010 Late Breaking Papers Chair, GECCO 2012 & 2013 GA Track Co-Chair, GECCO 2015 ECADA Workshop Co-Chair, GECCO 2015 MetaDeeP Workshop Co-Chair, GECCO 2015 Hyper-heuristics Tutorial co-instructor, and GECCO 2015 CBBOC Competition co-organizer. For several years he has served on the GECCO GA track program committee, the Congress on Evolutionary Computation program committee, and a variety of other international conference program committees. His research interests include the design of hyper-heuristics and self-configuring evolutionary algorithms and the application of computational intelligence techniques in cyber security, critical infrastructure protection, and program understanding. He was granted a US patent for an artificially intelligent rule-based system to assist teams in becoming more effective by improving the communication process between team members.

John R. Woodward

John R. Woodward is a lecturer at the University of Stirling, within the CHORDS group (http://chords.cs.stir.ac.uk/), and is employed on the DAASE project (http://daase.cs.ucl.ac.uk/); for the previous four years he was a lecturer at the University of Nottingham. He holds a BSc in Theoretical Physics, an MSc in Cognitive Science and a PhD in Computer Science, all from the University of Birmingham. His research interests include Automated Software Engineering, particularly Search Based Software Engineering, Artificial Intelligence/Machine Learning and, in particular, Genetic Programming. He has over 50 publications in Computer Science, Operations Research and Engineering, which include both theoretical and empirical contributions, and has given over 100 talks at international conferences and as an invited speaker at universities. He has worked in industrial, military, educational and academic settings, and has been employed by EDS, CERN, the RAF and three UK universities.

Introducing Rule-based Machine Learning: Capturing Complexity

In the context of evolutionary machine learning, rule-based machine learning (RBML) algorithms are an often overlooked class of algorithms with flexible features employing an alternative paradigm of piece-wise modeling that sets them apart from other strategies, particularly with respect to modeling complexity. This tutorial will introduce the RBML paradigm and three of the most successful methodological RBML families, i.e. Learning Classifier Systems (LCS), Association Rule Learning (ARL) and Artificial Immune Systems (AIS). Particular focus will be placed on learning classifier systems (LCS), or more specifically Michigan-style LCSs, being the most prevalent and foundational of the RBML algorithms. Michigan-style LCSs combine the global search of evolutionary algorithms with the local optimization of reinforcement or supervised learning. Michigan-style LCS algorithms uniquely distribute learned patterns over a collaborative population of individually interpretable (IF: THEN) rules. This allows the algorithm to flexibly and effectively describe complex and diverse problem spaces found in behavior modeling, function approximation, classification, prediction, and data mining.

The overall goal of this tutorial is to increase awareness of RBML algorithms as not only a viable, but often advantageous, alternative to other machine learning approaches, as well as to promote a fundamental understanding of how they work and where they can be applied. While ExSTraCS will be the focus of the demonstration, this tutorial will review and contrast many of the other major LCS algorithms available in the literature, as well as discuss statistical analysis, interpretation, and data visualization strategies for knowledge discovery. Additionally, I will tie this tutorial to the upcoming publication of an introductory textbook on LCS, which should be published by GECCO 2017.

Ryan Urbanowicz

Dr. Urbanowicz’s research is focused on bioinformatics, machine learning, epidemiology, data mining, and the development of a new learning classifier system that is maximally functional, accessible, and easier to use and interpret. He has written one of the most cited and well-regarded reviews of the Learning Classifier System research field as well as 12 additional peer-reviewed LCS research papers, has co-chaired the International Workshop on Learning Classifier Systems for the past 4 years, and has recently published a new open source learning classifier system algorithm implemented in Python, called ExSTraCS. He has also given several invited introductory lectures on LCS algorithms in addition to co-presenting this tutorial in 2013. Dr. Urbanowicz received a Bachelors and Masters of Biological Engineering from Cornell University, as well as a PhD in Genetics from Dartmouth College. Currently he is a post-doctoral researcher at the Geisel School of Medicine, about to transition to a new research position at the University of Pennsylvania, USA.

Introduction to Artificial Gene Regulatory Networks

Gene regulatory networks are a central mechanism in the regulation of gene expression in all living organisms’ cells. Their functioning is nowadays very well understood: they are based on the production of proteins enhanced or inhibited by other proteins from the inside and/or the outside of the cells. This produces complex dynamics with which cells from the same organism, with the exact same DNA, can form very different types (through cell differentiation) and have very different behaviors according to their types and positions.

This tutorial will first introduce the biology of these networks, from the genetic aspect to the dynamics of gene regulation. Then, biologically plausible models will be presented to highlight the complexity of dynamics gene regulatory networks can produce. Using that model, we will show how computational models can be designed so that a genetic algorithm can optimize the network efficiently. We will present a set of applications in which artificial gene regulatory networks are plugged into diverse virtual agents (artificial cells in an artificial embryogenesis context, direct connection to the sensors and effectors or high-level behavior regulation in others). Demos, showing the results obtained with this system, will be presented all along the tutorial.
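The enhancement/inhibition dynamics described above can be sketched minimally as follows. The update rule, interaction matrix and rates below are invented for illustration and are not the model presented in the tutorial:

```python
# Sketch of artificial GRN-style dynamics: each protein's production is
# enhanced or inhibited by the others, and concentrations stay
# normalized. Matrix values and rates are invented.
import math

def step(conc, W, rate=0.1):
    """One update: W[i][j] > 0 means protein j enhances protein i,
    W[i][j] < 0 means it inhibits it; concentrations are renormalized."""
    new = []
    for i, c in enumerate(conc):
        signal = sum(W[i][j] * cj for j, cj in enumerate(conc))
        production = 1.0 / (1.0 + math.exp(-signal))      # sigmoid response
        new.append(c + rate * (production - c))
    total = sum(new)
    return [c / total for c in new]

W = [[0.0, -2.0, 1.0],      # protein 0: inhibited by 1, enhanced by 2
     [3.0,  0.0, 0.0],      # protein 1: enhanced by 0
     [0.0,  1.0, 0.0]]      # protein 2: enhanced by 1
conc = [1 / 3] * 3
for _ in range(200):
    conc = step(conc, W)
print([round(c, 2) for c in conc])   # a stable expression profile emerges
```

In the applications listed above, a genetic algorithm would evolve the interaction weights so that the resulting dynamics regulate an agent's behavior.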

Sylvain Cussat-Blanc

Sylvain Cussat-Blanc has a permanent research and teaching position at University of Toulouse. He is a permanent member of the Institute of Research in Computer Science of Toulouse (IRIT), a research unit of the French National Center for Research (CNRS). He is interested in developmental models, gene regulatory networks, evolutionary robotics, artificial life and evolutionary computation in general. He is also working on a serious game to teach biologists cell proliferation mechanisms.
He obtained his Ph.D. in 2009. His thesis work concerned a cell-based developmental model used to produce artificial creatures. During his postdoctoral fellowship with Jordan Pollack in 2011 at Brandeis University (USA), he applied these approaches to evolutionary robotics with the aim of automatically designing the morphologies of real modular robots. email:sylvain.cussat-blanc at irit dot fr

Wolfgang Banzhaf

Wolfgang Banzhaf is a professor of Computer Science at Memorial University of Newfoundland. He received a degree in Physics from the Ludwig-Maximilians-University in Munich and a PhD from the Department of Physics of the Technical University of Karlsruhe. He was a postdoctoral researcher at the University of Stuttgart and Visiting / Senior Researcher at Mitsubishi Electric Corporation in Japan and the US. From 1993 to 2003 he was Associate Professor for Applied Computer Science at the Technical University of Dortmund. His research interests are in the field of bio-inspired computing, notably evolutionary computation and complex adaptive systems. Studies of self-organization and the field of Artificial Life are also of great interest to him. Recently he has become more involved with network research as it applies to natural and man-made systems.

Introduction to Randomized Continuous Optimization

Continuous optimization problems arise in many domains in academia and industry. They are commonly formulated as black-box problems that return only function values at queried points and no other information, such as the gradient of the function. Typical difficulties that the optimizer then needs to face are non-separability, the curse of dimensionality, ill-conditioning, and ruggedness of the function. In this context, randomized methods like evolution strategies are often the methods of choice.

This introductory tutorial on randomized continuous optimization will start by discussing typical difficulties of optimization problems in the continuous domain. It will then introduce the basics of randomized methods, with a specific focus on evolution strategies. In particular, the sampling of solutions and adaptation mechanisms will be presented. Different algorithms will be introduced, including simple evolution strategies, EDAs, and CMA-ES. For the latter algorithm, the more advanced notions will be presented in the companion tutorial.
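As a concrete illustration of sampling and adaptation (a minimal sketch, not the tutorial material itself), here is a (1+1) evolution strategy with a rough 1/5th-success-rule step-size adaptation; all parameter values are arbitrary:

```python
import random

def one_plus_one_es(f, x, sigma=1.0, iters=2000, seed=1):
    """(1+1)-ES: sample one Gaussian offspring per iteration and adapt
    the step size with a rough 1/5th success rule."""
    rng = random.Random(seed)
    fx = f(x)
    for _ in range(iters):
        # sample an offspring from an isotropic Gaussian around the parent
        y = [xi + sigma * rng.gauss(0.0, 1.0) for xi in x]
        fy = f(y)
        if fy <= fx:
            x, fx = y, fy
            sigma *= 1.22   # success: enlarge the step size
        else:
            sigma *= 0.95   # failure: shrink it (equilibrium near 1/5 successes)
    return x, fx

sphere = lambda v: sum(vi * vi for vi in v)
best, fbest = one_plus_one_es(sphere, [5.0] * 5)
```

The two constants are chosen so that the step size is stationary at a success rate of roughly one fifth, which is the classical heuristic target on well-behaved functions.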

Theoretical concepts related to algorithm design will also be introduced, namely progress and convergence rates and invariance.

In the last part, we will discuss how to properly assess the performance of black-box methods.

Anne Auger

Anne Auger is a permanent researcher at the French National Institute for Research in Computer Science and Control (INRIA). She received her diploma (2001) and PhD (2004) in mathematics from the Paris VI University. Before joining INRIA, she worked for two years (2004-2006) at ETH Zurich. Her main research interest is stochastic continuous optimization, including theoretical aspects and algorithm design. She is a member of the ACM-SIGEVO executive committee and of the editorial board of Evolutionary Computation. She organized the biennial Dagstuhl seminar "Theory of Evolutionary Algorithms" in 2008 and 2010 and served as track chair for the theory and ES tracks in 2011, 2013 and 2014. Together with Benjamin Doerr, she is editor of the book "Theory of Randomized Search Heuristics".

Nikolaus Hansen

Nikolaus Hansen is a research scientist at INRIA, France. Educated in medicine and mathematics, he received a Ph.D. in civil engineering in 1998 from the Technical University Berlin under Ingo Rechenberg. Before joining INRIA, he worked in evolutionary computation, genomics and computational science at the Technical University Berlin, the InGene Institute of Genetic Medicine and ETH Zurich. His main research interests are learning and adaptation in evolutionary computation and the development of algorithms applicable in practice. His best-known contribution to the field of evolutionary computation is the so-called Covariance Matrix Adaptation (CMA).

Introductory Statistics for EC: A Visual Approach

In the past, it was common to see papers published in Evolutionary Computation conferences that presented experimental results without any statistical analysis. Fortunately, it is now becoming widely accepted that experimental results lacking proper statistical analysis must be considered anecdotal at best, or even wholly inaccurate. In many areas within EC, and especially at GECCO, if your paper does not include a proper statistical analysis it will be rejected. Yet many researchers in the field still exhibit an unfortunate lack of statistical rigor when performing and analyzing experiments.

This tutorial, a staple at past GECCOs, aims at providing the basic statistical framework that all experimenters in the EC field should follow. It covers introductory topics in statistics such as the t-test, confidence intervals, and non-parametric statistics, and, time permitting, will touch on multivariate and multifactorial analyses such as the Analysis of Variance (ANOVA) and regression analysis (both linear and polynomial).
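To give a flavor of the kind of analysis covered, the sketch below computes Welch's t statistic and a normal-approximation 95% confidence interval for per-run results; the function names and the data are illustrative only, not part of the tutorial:

```python
import math
import statistics

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    se = math.sqrt(statistics.variance(a) / len(a) + statistics.variance(b) / len(b))
    return (statistics.mean(a) - statistics.mean(b)) / se

def ci95(sample):
    """Approximate 95% confidence interval for the mean (normal approximation;
    for small samples a t-distribution critical value should be used instead)."""
    m = statistics.mean(sample)
    half = 1.96 * statistics.stdev(sample) / math.sqrt(len(sample))
    return (m - half, m + half)

# e.g., best-of-run fitness values from 5 runs of two (made-up) EA variants
t = welch_t([12.1, 11.8, 12.4, 12.0, 11.9], [13.0, 12.7, 13.2, 12.9, 13.1])
```

In practice the statistic is compared against a t-distribution critical value for the appropriate degrees of freedom, and many more than 5 runs per variant are needed for a trustworthy comparison.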

The tutorial is presented at an introductory level and takes a very visual approach to statistics, relying on graphics and animation to provide an intuitive understanding of the subject instead of the traditional equations, which cater only to the initiated. In other words, it is meant for any EC researcher, independent of their mathematical training, who wants to compare their newly designed EC system against established EC systems to see if there is an improvement, or who wants to determine which EC system performs better on their chosen problem; i.e., nearly everyone.

It is vital, if our community is to be taken seriously, for us to continue to educate ourselves (especially new graduate students) on the statistical techniques that are considered de rigueur for experiments performed in any field, such as ours, that is stochastic in nature.

Mark Wineberg

Prof. Wineberg has been actively researching the field of Evolutionary Computation (EC) and Genetic Algorithms (GA) since 1993 while he was still a graduate student at Carleton University, Ottawa, Canada. Over the years he has published on various topics including: the intersection of Genetic Algorithms and Genetic Programming, enhancing the GA for improved behavior in dynamic environments through specialized multiple populations, and exploring the concept of distances and diversity in GA populations. He is currently continuing his research in dynamic environments and studying the properties that allow one population to evolve more readily than another. Prof. Wineberg also teaches an undergraduate course at Guelph on computer simulation and modeling of discrete stochastic systems, which emphasizes proper statistical analysis, as well as a graduate course on experimental design and analysis for computer science, which is an outgrowth of the statistical analysis tutorial given at GECCO.

Model-Based Evolutionary Algorithms

In model-building evolutionary algorithms the variation operators are guided by the use of a model that conveys as much problem-specific information as possible so as to best combine the currently available solutions and thereby construct new, improved solutions. Such models can be constructed beforehand for a specific problem, or they can be learnt during the optimization process. Well-known examples of the latter type are Estimation-of-Distribution Algorithms (EDAs), where probabilistic models of promising solutions are built and subsequently sampled to generate new solutions.
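As a minimal illustration of the EDA principle (build a probabilistic model of promising solutions, then sample it), here is a hedged UMDA-style sketch on the OneMax problem; the parameter choices are arbitrary and the code is not from the tutorial:

```python
import random

def umda_onemax(n=30, pop=60, elite=20, gens=60, seed=3):
    """Univariate EDA (UMDA-style) maximizing OneMax.
    The model is one independent probability per bit position."""
    rng = random.Random(seed)
    p = [0.5] * n  # probabilistic model: P(bit i = 1)
    for _ in range(gens):
        # sample a population from the model
        popn = [[1 if rng.random() < p[i] else 0 for i in range(n)]
                for _ in range(pop)]
        popn.sort(key=sum, reverse=True)
        best = popn[:elite]
        # re-estimate the model from the promising (elite) solutions
        p = [sum(x[i] for x in best) / elite for i in range(n)]
        # keep probabilities away from 0/1 to preserve exploration
        p = [min(0.95, max(0.05, pi)) for pi in p]
    return max(sum(x) for x in popn)
```

This univariate model ignores dependencies between variables; the multivariate models discussed in the tutorial (and linkage models such as those in LTGA) exist precisely to capture such dependencies.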

Although EDAs are the best known example, other approaches also exist, such as the use of statistical models in the Linkage Tree Genetic Algorithm (LTGA). In general, replacing traditional crossover and mutation operators by building and using models enables the use of machine learning techniques for automatic discovery of problem regularities and exploitation of these regularities for effective exploration of the search space. Using machine learning in optimization enables the design of optimization techniques that can automatically adapt to the given problem. This is an especially useful feature when considering optimization in a black-box setting. Successful applications include Ising spin glasses in 2D and 3D, graph partitioning, MAXSAT, feature subset selection, forest management, groundwater remediation design, telecommunication network design, antenna design, and scheduling.

This tutorial will provide an introduction and an overview of major research directions in this area.

Dirk Thierens

Dirk Thierens is affiliated with the Department of Information and Computing Sciences at Utrecht University, the Netherlands, where he is teaching courses on Evolutionary Computation and on Computational Intelligence. He has been involved in genetic algorithm research since 1990. His current research interests are mainly focused on the design and application of model learning techniques to improve evolutionary search. Dirk is (has been) a member of the Editorial Board of the journals Evolutionary Computation, Evolutionary Intelligence, IEEE Transactions on Evolutionary Computation, and a member of the program committee of the major international conferences on evolutionary computation. He is an elected member of the ACM SIGEVO board and contributed to the organization of several GECCO conferences: workshop co-chair (2003, 2004), track (co-)chair (2004, 2006, 2014), and Editor-in-Chief (2007).

Peter A.N. Bosman

Peter A. N. Bosman is a senior researcher in the Life Sciences research group at the Centrum Wiskunde & Informatica (CWI) (Centre for Mathematics and Computer Science) located in Amsterdam, the Netherlands. Peter was formerly affiliated with the Department of Information and Computing Sciences at Utrecht University, where he also obtained both his MSc and PhD degrees in Computer Science, more specifically on the design and application of estimation-of-distribution algorithms (EDAs). He has (co-)authored over 90 refereed publications on both algorithmic design aspects and real-world applications of evolutionary algorithms. At the GECCO conference, Peter has previously been track (co-)chair (EDA track, 2006, 2009), late-breaking-papers chair (2007), (co-)workshop organizer (OBUPM workshop, 2006; EvoDOP workshop, 2007; GreenGEC workshop, 2012-2014), (co-)local chair (2013) and general chair (2017).

Representations for Evolutionary Algorithms

Successful and efficient use of evolutionary algorithms (EA) depends on the choice of the genotype, the problem representation (mapping from genotype to phenotype) and on the choice of search operators that are applied to the genotypes. These choices cannot be made independently of each other. The question whether a certain representation leads to better performing EAs than an alternative representation can only be answered when the operators applied are taken into consideration. The reverse is also true: deciding between alternative operators is only meaningful for a given representation.

In EA practice one can distinguish two complementary approaches. The first approach uses indirect representations, where a solution is encoded in a standard data structure, such as strings, vectors, or discrete permutations, and standard off-the-shelf search operators are applied to these genotypes. This is, for example, the case in standard genetic algorithms, evolution strategies, and some genetic programming approaches like grammatical evolution or Cartesian genetic programming. To evaluate the solution, the genotype needs to be mapped to the phenotype space. The proper choice of this genotype-phenotype mapping is important for the performance of the EA search process. The second approach, the direct representation, encodes solutions to the problem in their most 'natural' space and designs search operators to operate on this representation.
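A classic example of an indirect representation is the random-keys encoding, where a real-valued genotype decodes to a permutation phenotype. This short sketch is an illustration, not material from the tutorial:

```python
def random_keys_to_permutation(genotype):
    """Decode a real-valued genotype into a permutation phenotype by
    ranking the positions according to their key values."""
    return sorted(range(len(genotype)), key=lambda i: genotype[i])

# any real vector is a valid genotype, so standard real-valued
# variation operators can be applied without repair
perm = random_keys_to_permutation([0.7, 0.1, 0.9, 0.4])  # -> [1, 3, 0, 2]
```

The appeal of this mapping is that every genotype decodes to a valid permutation, so crossover and mutation never produce infeasible phenotypes; the price is redundancy, since infinitely many key vectors encode the same permutation.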

Research in the last few years has identified a number of key concepts to analyse the influence of representation-operator combinations on EA performance. Relevant properties of representations are locality and redundancy.

Locality is a result of the interplay between the search operator and the genotype-phenotype mapping. Representations are redundant if the number of possible genotypes exceeds the number of phenotypes, so that several genotypes encode the same phenotype. Redundant representations can lead to biased encodings if some phenotypes are on average represented by a larger number of genotypes, or if search operators favor certain kinds of phenotypes.

The tutorial gives a brief overview about existing guidelines for representation design, illustrates the different aspects of representations, gives a brief overview of models describing the different aspects, and illustrates the relevance of the aspects with practical examples.

It is expected that the participants have a basic understanding of EA principles.

Franz Rothlauf

Franz Rothlauf received a Diploma in Electrical Engineering from the University of Erlangen, Germany, a Ph.D. in Information Systems from the University of Bayreuth, Germany, and a Habilitation from the University of Mannheim, Germany, in 1997, 2001, and 2007, respectively.

Since 2007, he has been professor of Information Systems at the University of Mainz. He has published more than 90 technical papers in the context of planning and optimization, evolutionary computation, e-business, and software engineering, co-edited several conference proceedings and edited books, and is author of the books "Representations for Genetic and Evolutionary Algorithms" and "Design of Modern Heuristics". Since 2013, he has been Academic Director of the Executive MBA program at the University of Mainz.

His main research interests are the application of modern heuristics in planning and optimization systems. He is a member of the Editorial Board of the Evolutionary Computation Journal (ECJ) and Business & Information Systems Engineering (BISE). Since 2007, he has been a member of the Executive Committee of ACM SIGEVO, and he has served as treasurer for ACM SIGEVO since 2011. He has been an organizer of many workshops and tracks on heuristic optimization issues, chair of EvoWorkshops in 2005 and 2006, co-organizer of the European workshop series on "Evolutionary Computation in Communications, Networks, and Connected Systems", co-organizer of the European workshop series on "Evolutionary Computation in Transportation and Logistics", and co-chair of the program committee of the GA track at GECCO 2006. He was conference chair of GECCO 2009.

Runtime Analysis of Population-based Evolutionary Algorithms

Populations are at the heart of evolutionary algorithms (EAs). They provide the genetic variation which selection acts upon. A complete picture of EAs can only be obtained if we understand their population dynamics. A rich theory on runtime analysis (also called time-complexity analysis) of EAs has been developed over the last 20 years. The goal of this theory is to show, via rigorous mathematical means, how the performance of EAs depends on their parameter settings and the characteristics of the underlying fitness landscapes. Initially, runtime analysis of EAs was mostly restricted to simplified EAs that do not employ large populations, such as the (1+1) EA. This tutorial introduces more recent techniques that enable runtime analysis of EAs with realistic population sizes.

The tutorial begins with a brief overview of the population-based EAs that are covered by the techniques. We recall the common stochastic selection mechanisms and how to measure the selection pressure they induce. The main part of the tutorial covers in detail widely applicable techniques tailored to the analysis of populations. We discuss random family trees and branching processes, drift and concentration of measure in populations, and level-based analyses.

To illustrate how these techniques can be applied, we consider several fundamental questions: When are populations necessary for efficient optimisation with EAs? What is the appropriate balance between exploration and exploitation, and how does this depend on relationships between mutation and selection rates? What determines an EA's tolerance for uncertainty, e.g. in the form of noisy or only partially available fitness information?
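Runtime statements of this kind can also be probed empirically. For instance, the expected runtime of the simple (1+1) EA on OneMax is Theta(n log n); the sketch below (an illustration, not part of the tutorial) counts the evaluations one such run needs:

```python
import random

def one_plus_one_ea_onemax(n, seed=0):
    """Count evaluations until the (1+1) EA, flipping each bit with
    probability 1/n, finds the all-ones string (OneMax optimum)."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    evals = 0
    while sum(x) < n:
        # standard bit mutation: flip each bit independently with prob. 1/n
        y = [b ^ (1 if rng.random() < 1.0 / n else 0) for b in x]
        evals += 1
        if sum(y) >= sum(x):  # elitist acceptance
            x = y
    return evals

evals = one_plus_one_ea_onemax(20)
```

Averaging `evals` over many seeds and several problem sizes makes the n log n growth visible; the tutorial's techniques are needed to prove such statements for algorithms with non-trivial populations.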

Per Kristian Lehre

Per Kristian Lehre received MSc and PhD degrees in Computer Science from the Norwegian University of Science and Technology (NTNU) in Trondheim, Norway. He finished the PhD in 2006 under the supervision of Prof Pauline Haddow, and joined the School of Computer Science at the University of Birmingham, UK, as a Research Fellow in January 2007 with Prof Xin Yao. He was a Postdoctoral Fellow at DTU Informatics, Technical University of Denmark in Lyngby, Denmark from April 2010. Since September 2011 he has been a Lecturer in the School of Computer Science at the University of Nottingham, UK. Dr Lehre's research interests are in theoretical aspects of nature-inspired search heuristics, in particular runtime analysis of population-based evolutionary algorithms. His research has won numerous best paper awards, including at GECCO (2013, 2010, 2009, 2006) and ICSTW (2008). He is vice-chair of the IEEE Task Force on Theoretical Foundations of Bio-inspired Computation, and a member of the editorial board of Evolutionary Computation. He has guest-edited special issues of Theoretical Computer Science and IEEE Transactions on Evolutionary Computation on theoretical foundations of evolutionary computation. Dr Lehre has given many tutorials on evolutionary computation in summer schools (UK: 2007; France: 2009, 2010, and 2011; Estonia: 2010), as well as at major conferences and workshops (GECCO 2013 and 2014, CEC 2013, ThRaSH 2013). He is the main coordinator of the 2M-euro EU-funded project SAGE, which brings together the theory of evolutionary computation and population genetics.

Pietro S. Oliveto

Pietro S. Oliveto is currently a Vice-Chancellor Fellow at the University of Sheffield, UK and has recently been awarded an EPSRC Early Career Fellowship which he will start in March 2015. He received the Laurea degree and PhD degree in computer science respectively from the University of Catania, Italy in 2005 and from the University of Birmingham, UK in 2009. From October 2007 to April 2008, he was a visiting researcher at the Efficient Algorithms and Complexity Theory Institute at the Department of Computer Science of the University of Dortmund, where he collaborated with Prof. Ingo Wegener's research group.

His main research interest is the time complexity analysis of randomized search heuristics for combinatorial optimization problems. He has published several runtime analysis papers on Evolutionary Algorithms (EAs), Artificial Immune Systems (AIS) and Ant Colony Optimization (ACO) algorithms for classical NP-hard combinatorial optimization problems, a review paper of the field of time complexity analysis of EAs for combinatorial optimization problems, and a book chapter containing a tutorial on the runtime analysis of EAs. He has won best paper awards at the GECCO08, ICARIS11 and GECCO14 conferences and received best paper nominations at CEC09 and ECTA11.

Dr. Oliveto has given tutorials on the runtime complexity analysis of EAs at WCCI 2012, CEC 2013, GECCO 2013, WCCI 2014 and GECCO 2014. He is part of the Steering Committee of the annual workshop on Theory of Randomized Search Heuristics (ThRaSH), IEEE member and Chair of the IEEE CIS Task Force on Theoretical Foundations of Bio-inspired Computation.

Theory for Non-Theoreticians

This tutorial addresses GECCO attendees who seldom or never use mathematical proofs in their everyday research activities. We provide a smooth introduction to the theory of evolutionary computation (EC). Complementing other introductory theory tutorials, we do NOT aim at equipping you with tools or theorems that can be beneficial for your research. Instead, after this tutorial, you will have a clear opinion on

- what theory of evolutionary algorithms aims at,
- how research in theory of evolutionary computation is done,
- how to interpret statements from the theory literature,
- what are some important contributions of theory to our field,
- and what are the current hot topics.

Benjamin Doerr

Benjamin Doerr is a full professor at the Ecole Polytechnique (France). He also is an adjunct professor at Saarland University (Germany). His research area is the theory both of problem-specific algorithms and of randomized search heuristics like evolutionary algorithms. Major contributions to the latter include runtime analyses for evolutionary algorithms and ant colony optimizers, as well as the further development of the drift analysis method, in particular, multiplicative and adaptive drift. In the young area of black-box complexity, he proved several of the current best bounds.

Together with Frank Neumann and Ingo Wegener, Benjamin Doerr founded the theory track at GECCO, served as its co-chair 2007-2009 and served again in 2014. In 2016, he chairs the Hot-off-the-press track. He is a member of the editorial boards of "Evolutionary Computation", "Natural Computing", "Theoretical Computer Science" and "Information Processing Letters". Together with Anne Auger, he edited the book "Theory of Randomized Search Heuristics".

Advanced Tutorials

CMA-ES and Advanced Adaptation Mechanisms

The Covariance Matrix Adaptation Evolution Strategy (CMA-ES) is nowadays considered the state-of-the-art continuous stochastic search algorithm, in particular for the optimization of non-separable, ill-conditioned and rugged functions. The CMA-ES consists of different components that adapt the step-size and the covariance matrix separately.

This tutorial will focus on CMA-ES and provide the audience with an overview of the different adaptation mechanisms used within CMA-ES and the scenarios where their relative impact is important. We will in particular present the rank-one update, the rank-mu update, and active CMA for covariance matrix adaptation. Different step-size mechanisms (CSA and TPA), as well as the scenarios where they should be applied, will be discussed.

We will address important design principles as well as questions related to parameter tuning that always accompany algorithm design. The input parameters, such as the initial mean, the initial step-size, and the population size, will be discussed in relation to the ruggedness of functions. Restart strategies that automate the input parameter tuning will be presented.
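The restart idea can be sketched in a few lines: rerun the optimizer from scratch, doubling the population size at each restart (in the spirit of IPOP-style restarts). The inner optimizer below is a deliberately crude stand-in, not CMA-ES, and all names and parameters are illustrative:

```python
import random

def toy_es(f, dim, popsize, budget, rng):
    """Stand-in for an ES run (NOT CMA-ES): isotropic (mu, lambda) sampling
    with a crude fixed step-size decay instead of CSA/TPA."""
    mean = [rng.uniform(-5.0, 5.0) for _ in range(dim)]
    sigma = 2.0
    best, fbest = None, float("inf")
    for _ in range(budget // popsize):
        offs = [[m + sigma * rng.gauss(0.0, 1.0) for m in mean]
                for _ in range(popsize)]
        offs.sort(key=f)
        mu = popsize // 4 or 1
        # recombine the best mu offspring into the new mean
        mean = [sum(o[i] for o in offs[:mu]) / mu for i in range(dim)]
        sigma *= 0.9
        if f(offs[0]) < fbest:
            best, fbest = offs[0], f(offs[0])
    return best, fbest

def ipop_restarts(f, dim, restarts=4, base_pop=8, seed=5):
    """IPOP-style restart schedule: double the population at each restart."""
    rng = random.Random(seed)
    best, fbest = None, float("inf")
    pop = base_pop
    for _ in range(restarts):
        x, fx = toy_es(f, dim, pop, budget=2000, rng=rng)
        if fx < fbest:
            best, fbest = x, fx
        pop *= 2  # larger populations cope better with rugged landscapes
    return best, fbest

sphere = lambda v: sum(vi * vi for vi in v)
x, fx = ipop_restarts(sphere, 3)
```

The rationale, as discussed in the tutorial, is that larger populations help on rugged or multimodal functions, so escalating the population size across restarts removes one tuning decision from the user.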

Youhei Akimoto

Youhei Akimoto is an assistant professor at Shinshu University, Japan. He received his diploma (2007) in computer science and his master's degree (2008) and PhD (2011) in computational intelligence and systems science from the Tokyo Institute of Technology, Japan. From 2010, he was also a research fellow of the Japan Society for the Promotion of Science for one year. Afterwards, he joined the TAO group at INRIA, France, for two years as a post-doc. He started working at Shinshu University in 2013. His research interests include design principles and theoretical analysis of stochastic search heuristics in the continuous domain, in particular the Covariance Matrix Adaptation Evolution Strategy.

Nikolaus Hansen

Nikolaus Hansen is a research scientist at INRIA, France. Educated in medicine and mathematics, he received a Ph.D. in civil engineering in 1998 from the Technical University Berlin under Ingo Rechenberg. Before joining INRIA, he worked in evolutionary computation, genomics and computational science at the Technical University Berlin, the InGene Institute of Genetic Medicine and ETH Zurich. His main research interests are learning and adaptation in evolutionary computation and the development of algorithms applicable in practice. His best-known contribution to the field of evolutionary computation is the so-called Covariance Matrix Adaptation (CMA).

Constraint-Handling Techniques used with Evolutionary Algorithms

Evolutionary Algorithms (EAs), when used for global optimization, can be seen as unconstrained optimization techniques. Therefore, they require an additional mechanism to incorporate constraints of any kind (i.e., inequality, equality, linear, nonlinear) into their fitness function. Although the use of penalty functions (very popular with mathematical programming techniques) may seem an obvious choice, this sort of approach requires careful fine-tuning of the penalty factors to be used. Otherwise, an EA may be unable to reach the feasible region (if the penalty is too low), or may quickly reach the feasible region but be unable to locate solutions that lie on the boundary with the infeasible region (if the penalty is too severe). This has motivated the development of a number of approaches to incorporate constraints into the fitness function of an EA. This tutorial will cover the main proposals in current use, including novel approaches such as the use of tournament rules based on feasibility, multiobjective optimization concepts, hybrids with mathematical programming techniques (e.g., Lagrange multipliers), cultural algorithms, and artificial immune systems, among others. Other topics such as the importance of maintaining diversity, current benchmarks and the use of alternative search engines (e.g., particle swarm optimization, differential evolution, evolution strategies, etc.) will also be discussed (as time allows).
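One of the approaches mentioned above, tournament rules based on feasibility, can be sketched in a few lines (a minimal illustration in the style of Deb's feasibility rules; the example objective and constraint are made up):

```python
def total_violation(x, constraints):
    """Sum of violations of inequality constraints written as g(x) <= 0."""
    return sum(max(0.0, g(x)) for g in constraints)

def beats(x, y, f, constraints):
    """Feasibility-based tournament rule: a feasible solution beats an
    infeasible one; between two feasible solutions, lower f wins; between
    two infeasible solutions, lower total constraint violation wins."""
    vx = total_violation(x, constraints)
    vy = total_violation(y, constraints)
    if vx == 0.0 and vy == 0.0:
        return f(x) <= f(y)
    if vx == 0.0 or vy == 0.0:
        return vx == 0.0
    return vx <= vy

# made-up example: minimize the sphere subject to x[0] >= 1
f = lambda x: sum(v * v for v in x)
constraints = [lambda x: 1.0 - x[0]]  # g(x) <= 0  <=>  x[0] >= 1
```

Unlike penalty functions, this rule needs no penalty factors at all, which is exactly the kind of trade-off the tutorial examines.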

Carlos Artemio Coello

Carlos Artemio Coello received a BSc in Civil Engineering from the Universidad Autonoma de Chiapas in Mexico in 1991 (graduating Summa Cum Laude). Then, he was awarded a scholarship from the Mexican government to pursue graduate studies in Computer Science at Tulane University (in the USA). He received a MSc and a PhD in Computer Science in 1993 and 1996, respectively. Dr. Coello has been a Senior Research Fellow in the Plymouth Engineering Design Centre (in England) and a Visiting Professor at DePauw University (in the USA). He is currently full professor at CINVESTAV-IPN in Mexico City, Mexico. He has published over 390 papers in international peer-reviewed journals, book chapters, and conferences. He has also co-authored the book "Evolutionary Algorithms for Solving Multi-Objective Problems", which is now in its Second Edition (Springer, 2007) and has co-edited the book "Applications of Multi-Objective Evolutionary Algorithms" (World Scientific, 2004). His publications currently report over 23,600 citations, according to Google Scholar (his h-index is 61). He has delivered invited talks, keynote speeches and tutorials at international conferences held in Spain, USA, Canada, Switzerland, UK, Chile, Colombia, Brazil, Argentina, China, India, Uruguay and Mexico. Dr. Coello has served as a technical reviewer for over 50 international journals and for more than 100 international conferences and currently serves as associate editor of the journals "Evolutionary Computation", "IEEE Transactions on Evolutionary Computation", "Computational Optimization and Applications", "Applied Soft Computing", and "Pattern Analysis and Applications". He is also a member of the editorial board of "Engineering Optimization". He received the 2007 National Research Award (granted by the Mexican Academy of Science) in the area of "exact sciences" and, since January 2011, he is an IEEE Fellow for "contributions to multi-objective optimization and constraint-handling techniques."
He is also the recipient of the prestigious "2013 IEEE Kiyo Tomiyasu Award" and of the "2012 National Medal of Science and Arts" in the area of "Physical, Mathematical and Natural Sciences" (this is the highest award that a scientist can receive in Mexico). His current research interests are: evolutionary multiobjective optimization and constraint-handling techniques for evolutionary algorithms.

NEW Evolutionary Large-Scale Global Optimization: An Introduction

Many real-world optimization problems involve a large number of decision variables. The trend in engineering optimization shows that the number of decision variables involved in a typical optimization problem has grown exponentially over the last 50 years, and this trend continues at an ever-increasing rate. The proliferation of big-data analytic applications has also resulted in the emergence of large-scale optimization problems at the heart of many machine learning problems. Recent advances in machine learning have also witnessed very large-scale optimization problems encountered in training deep neural network architectures (so-called deep learning), some of which have over a billion decision variables. It is this “curse of dimensionality” that has made large-scale optimization an exceedingly difficult task. Current optimization methods are often ill-equipped to deal with such problems. It is this research gap in both theory and practice that has attracted much research interest, making large-scale optimization an active field in recent years. We are currently witnessing a wide range of mathematical and metaheuristic optimization algorithms being developed to overcome this scalability issue. Among these, metaheuristics have gained popularity due to their ability to deal with black-box optimization problems.

In this tutorial, we provide an overview of recent advances in the field of evolutionary large-scale global optimization with an emphasis on the divide-and-conquer approaches (a.k.a. decomposition methods). In particular, we give an overview of different approaches including the non-decomposition based approaches such as memetic algorithms and sampling methods to deal with large-scale problems. This is followed by a more detailed treatment of implicit and explicit decomposition algorithms in large-scale optimization. Considering the popularity of decomposition methods in recent years, we provide a detailed technical explanation of the state-of-the-art decomposition algorithms including the differential grouping algorithm and its latest improved derivatives (such as global DG and DG2 algorithms), which outperform other decomposition algorithms on the latest large-scale global optimization benchmarks. We also address the issue of resource allocation in cooperative coevolution and provide a detailed explanation of some recent algorithms such as the contribution-based cooperative co-evolution family of algorithms. Overall, this tutorial takes the form of a critical survey of the existing methods with an emphasis on articulating the challenges in large-scale global optimization in order to stimulate further research interest in this area.
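The divide-and-conquer idea behind cooperative coevolution can be sketched in a few lines: optimize one group of variables at a time while the remaining variables are held fixed in a shared context vector. The grouping and the crude subcomponent optimizer below are illustrative only, not the differential grouping algorithms discussed in the tutorial:

```python
import random

def cc_optimize(f, dim, groups, cycles=20, seed=7):
    """Cooperative-coevolution sketch: optimize one variable group at a
    time, keeping the rest fixed in a shared context vector.
    groups: lists of variable indices (the decomposition)."""
    rng = random.Random(seed)
    context = [rng.uniform(-5.0, 5.0) for _ in range(dim)]
    for _ in range(cycles):
        for group in groups:
            # crude subcomponent optimizer: random perturbations of the
            # group's variables only, accepted when they improve f
            for _ in range(50):
                trial = context[:]
                for i in group:
                    trial[i] = context[i] + rng.gauss(0.0, 0.5)
                if f(trial) < f(context):
                    context = trial
    return context, f(context)

sphere = lambda v: sum(vi * vi for vi in v)
# a separable problem decomposed into two independent groups
x, fx = cc_optimize(sphere, 4, groups=[[0, 1], [2, 3]])
```

The key difficulty the tutorial addresses is precisely how to find such groupings automatically when variables interact, which is what decomposition algorithms like differential grouping are designed for.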

Nabi Omidvar

Mohammad Nabi Omidvar is a research fellow in evolutionary computation and a member of the Centre of Excellence for Research in Computational Intelligence and Applications (Cercia) at the School of Computer Science, University of Birmingham. Prior to joining the University of Birmingham, Dr. Omidvar completed his Ph.D. in computer science with the Evolutionary Computing and Machine Learning (ECML) group at RMIT University in Melbourne, Australia. He holds a bachelor's degree in applied mathematics and a bachelor's degree in computer science with first-class honors from RMIT University. Dr. Omidvar won the IEEE Transactions on Evolutionary Computation Outstanding Paper Award for his work on large-scale global optimization. He also received an Australian Postgraduate Award in 2010 and the best Computer Science Honours Thesis award from the School of Computer Science and IT, RMIT University. He has been a member of the IEEE Computational Intelligence Society since 2009 and is a member of the IEEE Taskforce on Large-Scale Global Optimization. His current research interests are large-scale global optimization, decomposition methods for optimization, and multi-objective optimization.

Xiaodong Li

Xiaodong Li received his B.Sc. degree from Xidian University, Xi'an, China, and his Ph.D. degree in information science from the University of Otago, Dunedin, New Zealand. Currently, he is a full professor at the School of Science (Computer Science and Software Engineering), RMIT University, Melbourne, Australia. His research interests include evolutionary computation, neural networks, complex systems, multiobjective optimization, and swarm intelligence. He serves as an Associate Editor of the IEEE Transactions on Evolutionary Computation, Swarm Intelligence (Springer), and the International Journal of Swarm Intelligence Research. He is a founding member of the IEEE CIS Task Force on Swarm Intelligence, a Vice-Chair of the IEEE CIS Task Force on Multi-Modal Optimization, and a former Chair of the IEEE CIS Task Force on Large Scale Global Optimization. He was the General Chair of SEAL'08, a Program Co-Chair of AI'09, and a Program Co-Chair of IEEE CEC'2012. He is the recipient of the 2013 ACM SIGEVO Impact Award and the 2017 IEEE CIS "IEEE Transactions on Evolutionary Computation Outstanding Paper Award".

NEW Exploratory Landscape Analysis

The selection of the best-suited algorithm for an optimization problem is usually a complex and difficult task, especially for expensive black-box problems. Exploratory Landscape Analysis (ELA) has been conceived as an approach for guiding, or even automating, this selection process. It extracts landscape features, which need not be intuitively understandable, from a usually rather small initial sample of observations of the underlying optimization problem. In the extreme case, a well-distributed initial population of an evolutionary algorithm could already be sufficient to compute meaningful (numerical) features that improve the performance of algorithm selection models.
Until now, ELA is mostly used in the context of continuous, single-objective, global optimization problems, but it can also be transferred to other domains (e.g. discrete, multi-objective, or multimodal optimization).

We will begin this tutorial with an introduction into the general concept of (automated) algorithm selection, which is one of the main use cases of ELA, followed by a presentation of some examples from different domains, in which ELA has successfully been used to improve the algorithm selection process. Next, we will present the general idea of ELA — What is it exactly? How can it be used? What can we get out of it? What are its limitations? — and afterwards provide a detailed overview of the current state of ELA, including recently published extensions to the previously mentioned additional domains.
We will also discuss the differences between single- and multi-objective, as well as multimodal and global, optimization problems (with a focus on continuous problems), and show how ELA features can improve our understanding of the different problem landscapes and how that knowledge can in turn be used to better comprehend the behavior of the applied optimization algorithms.
The remainder of the tutorial will be used for an interactive live-demo, in which we will demonstrate how one can perform ELA on continuous optimization problems by using the R-packages "smoof" and "flacco".
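The live demo uses the R packages named above; as a language-neutral preview, the following Python sketch (our own illustration, with sampling bounds and feature names chosen for this example, not taken from flacco) computes one simple family of ELA-style features: distribution statistics of the objective values of a small random sample.

```python
import random
import statistics

def y_distribution_features(f, dim, n=200, seed=0):
    """Sample the search space uniformly and summarize the objective values."""
    rng = random.Random(seed)
    ys = [f([rng.uniform(-5, 5) for _ in range(dim)]) for _ in range(n)]
    mu, sd = statistics.mean(ys), statistics.pstdev(ys)
    skew = sum(((y - mu) / sd) ** 3 for y in ys) / len(ys)
    return {"y_mean": mu, "y_std": sd, "y_skewness": skew}

# On the sphere function, the objective values of a uniform sample are
# right-skewed: most points lie far from the optimum.
sphere = lambda x: sum(v * v for v in x)
feats = y_distribution_features(sphere, dim=2)
```

An algorithm selection model would be trained on vectors of many such features, computed across a portfolio of benchmark problems.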

Pascal Kerschke

Pascal Kerschke ( http://erc.is/p/kerschke ) is a Research Associate at the Group of Information Systems and Statistics at the University of Muenster (Germany). Prior to his PhD studies, he received an M.Sc. degree in "Data Sciences" (2013) from the Faculty of Statistics at the TU Dortmund (Germany).
His main research interests are algorithm selection (for continuous or TSP problems), as well as ELA for single- and multi-objective, continuous (Black-Box) optimization problems. Furthermore, he is one of the developers of related R-packages, such as "flacco", "smoof" and "mlr".

Mike Preuss

Mike Preuss is a Research Associate at ERCIS, University of Muenster, Germany. Previously, he was with the Chair of Algorithm Engineering at TU Dortmund, Germany, where he received his PhD in 2013. His research interests focus on evolutionary algorithms for real-valued problems, namely multimodal and multiobjective optimization. He is also active in computational intelligence methods for computer games, especially in procedural content generation (PCG) and real-time strategy (RTS) games.

Expressive Genetic Programming: Concepts and applications

The language in which evolving programs are expressed can have significant impacts on the dynamics and problem-solving capabilities of a genetic programming system. In GP these impacts are driven by far more than the absolute computational power of the languages used; just because a computation is theoretically possible in a language, it doesn’t mean it’s readily discoverable or leveraged by evolution. Highly expressive languages can facilitate the evolution of programs for any computable function using, as appropriate, multiple data types, evolved subroutines, evolved control structures, evolved data structures, and evolved modular program and data architectures. In some cases expressive languages can even support the evolution of programs that express methods for their own reproduction and variation (and hence for the evolution of their offspring).

This tutorial will present a range of approaches that have been taken for evolving programs in expressive programming languages. We will then provide a detailed introduction to the Push programming language, which was designed specifically for expressiveness in genetic programming systems. Push programs are syntactically unconstrained but can nonetheless make use of multiple data types and express arbitrary control structures, supporting the evolution of complex, modular programs in a particularly simple and flexible way.

Interleaved with our description of the Push language will be demonstrations of the use of analytical tools such as graph databases and program diff/merge tools to explore ways in which evolved Push programs are actually taking advantage of the language’s expressive features. We will illustrate, for example, the effective use of multiple types and type-appropriate functions, the evolution and modification of code blocks and looping/recursive constructs, and the ability of Push programs to handle multiple, potentially related tasks.

We’ll conclude with a brief review of over a decade of Push-based research, including the production of human-competitive results, along with recent enhancements to Push that are intended to support the evolution of complex and robust software systems.

Lee Spector

Lee Spector is a Professor of Computer Science in the School of Cognitive Science at Hampshire College in Amherst, Massachusetts, and an adjunct professor in the Department of Computer Science at the University of Massachusetts, Amherst. He received a B.A. in Philosophy from Oberlin College in 1984 and a Ph.D. from the Department of Computer Science at the University of Maryland in 1992. His areas of teaching and research include genetic and evolutionary computation, quantum computation, and a variety of intersections between computer science, cognitive science, evolutionary biology, and the arts. He is the Editor-in-Chief of the journal Genetic Programming and Evolvable Machines (published by Springer) and a member of the editorial board of Evolutionary Computation (published by MIT Press). He is also a member of the SIGEVO executive committee and he was named a Fellow of the International Society for Genetic and Evolutionary Computation.

More info: http://hampshire.edu/lspector

Nic McPhee

Generative and Developmental Systems

In evolutionary computation it is common for the genotype to map directly to the phenotype such that each gene in the genotype represents a single discrete component of the phenotype. While this convention is widespread, in nature the mapping is not direct. Instead, the genotype maps to the phenotype through a process of development, which means that each gene may be activated multiple times and in multiple places in the process of constructing the phenotype.

This tutorial will examine why such indirect mapping may be critical to the continuing success of evolutionary computation. Far from being just an artifact of nature, indirect mapping means that vast phenotypic spaces (e.g. the human brain, with its 100 trillion connections) can be searched effectively in a space of far fewer genes (e.g. the 30,000-gene human genome). The tutorial will introduce this research area, called Generative and Developmental Systems (GDS; now part of the Complex Systems track at GECCO), by surveying milestones in the field and introducing its most profound puzzles. In particular, the first half of the tutorial will review the most historically high-impact algorithms and achievements in the field while the second half will probe more deeply into cutting-edge research and its greatest current challenges.

Most importantly, what is the right abstraction of natural development to capture its essential advantages without introducing unnecessary overhead into the search?

Kenneth Stanley

Kenneth Stanley is an associate professor in the Department of Electrical Engineering and Computer Science at the University of Central Florida and director of the Evolutionary Complexity Research Group. He received a B.S.E. from the University of Pennsylvania in 1997 and received a Ph.D. in 2004 from the University of Texas at Austin. He is an inventor of the Neuroevolution of Augmenting Topologies (NEAT), HyperNEAT, and novelty search algorithms for evolving complex artificial neural networks. His main research contributions are in neuroevolution (i.e. evolving neural networks), generative and developmental systems (GDS), coevolution, machine learning for video games, and interactive evolution. He has won best paper awards for his work on NEAT, NERO, NEAT Drummer, FSMC, HyperNEAT, ES-HyperNEAT, adaptive HyperNEAT, novelty search, Galactic Arms Race, and NA-IEC. A founder of the Generative and Developmental Systems track at GECCO, he is also an associate editor of IEEE Transactions on Computational Intelligence and AI in Games, on the editorial board of Evolutionary Computation journal, on the ACM SIGEVO Executive Committee, and a co-founder and the editor-in-chief of aigameresearch.org.

NEW Next Generation Genetic Algorithms

New developments in Gray Box Optimization make it possible to construct new forms of Genetic Algorithms that do not use random mutation or random recombination. Instead, for certain classes of NP Hard problems (ranging from MAXSAT to the Traveling Salesman Problem), it is possible to exactly compute the location of improving moves (often in constant time), and to use highly efficient forms of greedy deterministic recombination. In some cases, this makes random mutation and random recombination unnecessary and obsolete.

Deterministic “Partition Crossover” can be applied to k-bounded pseudo-Boolean optimization problems (such as MAXSAT and NK Landscapes) as well as problems such as the Traveling Salesman Problem (TSP). Partition Crossover locally decomposes a recombination graph into q subgraphs in O(n) time. It can then identify the best of 2^q possible offspring. For example, for q=40, partition crossover returns the best of one trillion (2^40) possible offspring. If the parents are local optima, the offspring are guaranteed to be locally optimal in the largest hyperplane subspace containing both parents, and offspring are typically also local optima in the full space. This allows partition crossover to directly “tunnel” between local optima, moving directly from local optimum to local optimum.
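The following toy sketch (our own simplification, not the paper's implementation) illustrates the core mechanics: once the recombination graph has been split into independent components, the best of the 2^q possible offspring can be assembled greedily, one component at a time. The toy problem, component list, and scoring function below are invented for the example; the sketch assumes the components really are independent, so each can be scored in isolation.

```python
def partition_crossover(p1, p2, components, subfit):
    """Greedy deterministic recombination over independent components.

    components: list of index lists partitioning the genome.
    subfit(comp, bits): scores one component's genes in isolation
    (valid only when the components do not interact).
    """
    child = list(p1)
    for comp in components:
        g1 = [p1[i] for i in comp]
        g2 = [p2[i] for i in comp]
        best = g1 if subfit(comp, g1) >= subfit(comp, g2) else g2
        for i, v in zip(comp, best):
            child[i] = v
    return child

# Toy separable problem: maximize ones in block [0, 1], zeros in block [2, 3].
comps = [[0, 1], [2, 3]]
score = lambda comp, bits: sum(bits) if comp == [0, 1] else sum(1 - b for b in bits)
child = partition_crossover([1, 1, 1, 1], [0, 0, 0, 0], comps, score)
# child inherits block [0, 1] from the first parent and block [2, 3]
# from the second, i.e. the best of the 2^2 = 4 possible offspring.
```

With q components, this loop evaluates 2q sub-scores yet implicitly selects among 2^q offspring, which is the source of the efficiency claimed above.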

For the TSP, these results can be used to improve the best existing TSP solvers (e.g., LKH and EAX). They can also be used to improve exact branch-and-bound algorithms such as Concorde. For k-bounded pseudo-Boolean optimization problems, these new algorithms are able to solve problems with 1 million variables.

Other recent results also use a similar form of local decomposition coupled with greedy and deterministic optimization. When applied to multiply constrained scheduling problems, the genetic algorithm is able to solve industrial problems with up to 1 billion variables.

These results have the potential to revolutionize evolutionary computation. The methods are decidedly not black-box optimizers, and the new algorithms trivially solve many black-box benchmarks: ONEMAX, Leading Ones, and Trap functions are solved in O(n) time using only 1 evaluation.

Darrell Whitley

Prof. Darrell Whitley has been active in evolutionary computation since 1986 and has published more than 200 papers, which have garnered more than 16,000 citations; his h-index is 54. He has served as Editor-in-Chief of the journal Evolutionary Computation, and served as Chair of the Governing Board of ACM SIGEVO from 2007 to 2011. He currently serves as an Associate Editor of Artificial Intelligence.

NEW Non-Static Parameter Choices in Evolutionary Computation

When applying evolutionary algorithms to optimization problems, the user often has to decide upon a number of parameters: for example, the population size, the mutation strength, the crossover rate, etc. The choice of these parameters can have a crucial impact on the performance of the algorithm and thus needs to be made with care.

In the early years of evolutionary computation there was a quest to determine universally "optimal" parameter choices. At the same time, researchers soon realized that different parameter choices can be optimal in different stages of the optimization process: in the beginning, for example, one may want to allow a larger mutation rate so as to increase the chance of finding the most promising regions of the search space (the "exploration phase"), while later on a smaller mutation rate keeps the search focused (the "exploitation phase"). Quite surprisingly, this intuitive concept of dynamic parameter choices has never played an important role in discrete optimization with evolutionary computation methods, neither in empirical nor in theoretical investigations. Most recently, the topic has been revived by a series of theoretical and experimental works proving strict superiority of dynamic parameter settings for a number of different discrete optimization processes. The aim of this tutorial is to survey and discuss parameter update schemes that have been proposed in the EC literature. The tutorial addresses both experimentally and theoretically oriented researchers.
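One concrete flavor of such a scheme can be sketched as follows: a (1+1) EA on OneMax whose mutation rate is raised after a strict improvement and lowered after a failure. This is our own hedged illustration of the general idea; the update factors, bounds, and budget are arbitrary choices for the example, not values from any particular paper.

```python
import random

def one_plus_one_dynamic(n=30, budget=30000, seed=1):
    """(1+1) EA on OneMax with a success-based dynamic mutation rate."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    fit = sum(x)
    rate = 1.0 / n                        # start at the classic static choice
    for _ in range(budget):
        if fit == n:
            break                         # optimum found
        y = [b ^ (rng.random() < rate) for b in x]   # flip each bit w.p. rate
        fy = sum(y)
        if fy > fit:                      # strict success: explore more boldly
            rate = min(0.5, 2.0 * rate)
        elif fy < fit:                    # failure: retreat toward 1/n
            rate = max(1.0 / n, rate / 2.0)
        if fy >= fit:                     # elitist acceptance
            x, fit = y, fy
    return fit

assert one_plus_one_dynamic() == 30       # reaches the optimum within budget
```

In this sketch the rate drifts upward while large improvements are easy (exploration) and collapses back toward 1/n near the optimum (exploitation), mirroring the intuition described above.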

Carola Doerr

Carola Doerr (Carola.Doerr@mpi-inf.mpg.de, http://www-ia.lip6.fr/~doerr/) is a CNRS researcher at the Université Pierre et Marie Curie (Paris 6). She studied mathematics at Kiel University (Germany, Diploma in 2007) and computer science at the Max Planck Institute for Informatics and Saarland University (Germany, PhD in 2011). From Dec. 2007 to Nov. 2009, Carola Doerr has worked as a business consultant for McKinsey & Company, mainly in the area of network optimization, where she has used randomized search heuristics to compute more efficient network layouts and schedules. Before joining the CNRS she was a post-doc at the Université Diderot (Paris 7) and the Max Planck Institute for Informatics.

Carola Doerr's research interest is in the theory of randomized algorithms---both in the design of efficient algorithms as well as in finding a suitable complexity theory for randomized search heuristics. After contributing to the revival of black-box complexity, a theory-guided approach to explore the limitations of heuristic search algorithms, she recently started a series of works aimed at exploiting insights from the theory of evolutionary computation to design more efficient EAs, in particular such with a dynamic choice of parameters.

NEW Recent Advances in Evolutionary Multi-Criterion Optimization

Evolutionary multi-criterion optimization (EMO) has become an established subfield within the evolutionary computation (EC) field, simply because of its appeal and importance to practical problems. Real-world problems usually involve multiple conflicting criteria, and the outcomes in these problems are sets of trade-off solutions. EC methods are useful for solving these problems compared to their classical counterparts, since they can find and store multiple solutions in their populations and provide an implicitly parallel search. EMO research started in the early nineties (more than two decades ago) and many different methodologies have been suggested since then. Software companies have emerged with specialized products for solving multi-criterion problems in industrial set-ups.

In the past 20 years, besides developing new methodologies, the EMO field has also found new directions for research and application, which are relatively unknown to many EC researchers who are not directly working in the EMO area. Some of these new ideas have deep-rooted applications to other EC areas, and EMO researchers can borrow key EC findings to improve EMO algorithms and applications. These new ideas need to be explored and nurtured further, requiring deep theoretical, algorithmic, and application-oriented studies. In this tutorial, we shall introduce and discuss some of the recent advances in the EMO field so that EC researchers and practitioners in general, and EMO researchers in particular, can gain a comprehensive view of the scope and directions of research in the EMO area.

Kalyanmoy Deb

Kalyanmoy Deb is Koenig Endowed Chair Professor at the Department of Electrical and Computer Engineering in Michigan State University (MSU), East Lansing, USA. Prof. Deb's main research interests are in genetic and evolutionary optimization algorithms and their application in optimization, modeling, and machine learning. He is largely known for his seminal research in developing and applying evolutionary multi-criterion optimization. Prof. Deb was awarded the prestigious `Infosys Prize' in 2012, `TWAS Prize' in Engineering Sciences in 2012, `CajAstur Mamdani Prize' in 2011, `JC Bose National Fellowship' in 2011, `Distinguished Alumni Award' from IIT Kharagpur in 2011, 'Edgeworth-Pareto' award in 2008, Shanti Swarup Bhatnagar Prize in Engineering Sciences in 2005, and the `Thomson Citation Laureate Award' from Thomson Reuters. His 2002 IEEE-TEC NSGA-II paper has been judged the Most Highly Cited Paper and a Current Classic by Thomson Reuters, with more than 4,200 citations. He is a fellow of the IEEE and of the International Society of Genetic and Evolutionary Computation (ISGEC). He has written two textbooks on optimization and more than 350 international journal and conference research papers, with 55,000+ Google Scholar citations and an h-index of 77. He serves on the editorial boards of 20 major international journals.

NEW Sequential Experimentation by Evolutionary Algorithms

This tutorial addresses applications of Evolutionary Algorithms (EAs) to optimization tasks where the function evaluation cannot be done through a computer simulation, but requires the execution of an experiment in the real-world (e.g., pharmaceuticals, biocatalyst design, protein engineering, quantum processes, wind tunnel experiments, to mention a few).

The use of EAs for experimental optimization is placed in its historical context with an overview of the landmark studies in this area carried out in the 1960s at the Technical University of Berlin. Statistical Design of Experiments (DoE) methodologies from the late 50s are also reviewed, and it is shown how relatives of these approaches are converging in modern sequential DoE/EA hybrid methods.

The main characteristics of experimental optimization work, in comparison to optimization of simulated systems, are discussed, and practical guidelines for real-world experiments with EAs are given. For example, experimental problems can constrain the evolution due to overhead considerations, interruptions, changes of variables, missing objective function values, population sizes that are determined by the experimental platform, and objective functions that have differing evaluation times (in the case of multiobjective optimization problems).

A series of modern-day case studies shows the persistence of experimental optimization problems today. These cover experimental quantum systems, real DNA and RNA evolution, combinatorial drug discovery, tuning of operating conditions, coffee and chocolate processing, and others. These applications can push EA methods outside of their normal operating envelope and raise research questions in a number of different areas, ranging across constrained EAs, multiobjective EAs, robust and reliable methods for noisy and dynamic problems, and metamodeling methods for expensive evaluations.

Ofer Shir

Ofer Shir is a Senior Lecturer at the Computer Science Department in Tel-Hai College, and a Principal Investigator at Migal-Galilee Research Institute – both located in the Upper Galilee, Israel.
Ofer Shir holds a BSc in Physics and Computer Science from the Hebrew University of Jerusalem, Israel, and both an MSc and a PhD in Computer Science from Leiden University, The Netherlands (PhD advisers: Thomas Bäck and Marc Vrakking). Upon his graduation, he completed a two-year term as a Postdoctoral Research Associate at Princeton University, USA (2008-2010), hosted by Prof. Herschel Rabitz, where he specialized in computer-science aspects of experimental quantum systems. He then joined IBM Research as a Research Staff Member (2010-2013), which constituted his second postdoctoral term, and where he gained real-world experience in convex and combinatorial optimization as well as in decision analytics.
His current fields of interest encompass Statistical Learning in Theory and in Practice, Experimental Optimization, Scientific Informatics, Natural Computing, Computational Intelligence in Physical Sciences, Quantum Control and Quantum Information.

Thomas Bäck

Thomas Bäck is a full professor of computer science at the Leiden Institute of Advanced Computer Science (LIACS), Leiden University, The Netherlands, where he has headed the Natural Computing group since 2002.

He received his PhD (adviser: Hans-Paul Schwefel) in computer science from Dortmund University, Germany, in 1994, and then worked for the Informatik Centrum Dortmund (ICD) as department leader of the Center for Applied Systems Analysis. From 2000 to 2009, Thomas was Managing Director of NuTech Solutions GmbH and CTO of NuTech Solutions, Inc. He gained ample experience in solving real-life problems in optimization and data mining through working with global enterprises such as BMW, Beiersdorf, Daimler, Ford, Honda, and many others.

Thomas Bäck has more than 200 publications on natural computing, as well as two books on evolutionary algorithms: Evolutionary Algorithms in Theory and Practice (1996), Contemporary Evolution Strategies (2013). He is co-editor of the Handbook of Evolutionary Computation, and most recently, the Handbook of Natural Computing. He is also editorial board member and associate editor of a number of journals on evolutionary and natural computing. Thomas received the best dissertation award from the German Society of Computer Science (Gesellschaft für Informatik, GI) in 1995 and is an elected fellow of the International Society for Genetic and Evolutionary Computation for his contributions to the field.

Joshua D. Knowles

Joshua Knowles is Professor of Natural Computation at the University of Birmingham, UK, and Honorary Professor in the Decision Sciences Center, Alliance Manchester Business School. He is known for foundational work in evolutionary multiobjective optimization, algorithms, performance assessment methods, and applications. He has been working on sequential experimentation methods in practical (scientific) applications for at least 10 years.

Richard Allmendinger

Richard Allmendinger is a Lecturer at the Alliance Manchester Business School (AMBS), University of Manchester, UK. He obtained a PhD in Computer Science from the University of Manchester (2011) and a Diplom from the Karlsruhe Institute of Technology, Germany (2008). Before joining AMBS he spent four years as a Postdoctoral Researcher and Honorary Lecturer at the Biochemical Engineering Department, University College London, UK. Richard is the General Chair of the upcoming IEEE CIBCB (Computational Intelligence for Bioinformatics and Computational Biology) conference (http://cibcb2017.org/), Vice-Chair of the IEEE CIS BBTC (Bioinformatics and Bioengineering Technical Committee) and Chair of the IEEE CIS Task Force on Optimization Methods in Bioinformatics and Bioengineering. His industrial collaborators include several biopharma companies and software consultancies. His main research interests include experimental optimization, process economics, decisional tools for the design of cost-effective manufacturing processes, multiobjective optimization, and, most recently, big data applications in healthcare.

Solving Complex Problems with Coevolutionary Algorithms

Coevolutionary algorithms (CoEAs) go beyond the conventional paradigm of evolutionary computation in having the potential to answer questions about what to evolve against (competition) and/or establish how multi-agent behaviours can be discovered (cooperation). Competitive coevolution can be considered from the perspective of discovering tests that distinguish between the performance of candidate solutions. Cooperative coevolution implies that mechanisms are adopted for distributing fitness across more than one individual. In both variants, the evolving entities engage in interactions that affect all the engaged parties and result in search gradients that may be very different from those observed in conventional evolutionary algorithms, where fitness is defined externally. This allows CoEAs to model complex systems and to solve problems that are difficult or not naturally addressed using conventional evolution.

This tutorial will begin by first establishing basic frameworks for competitive and cooperative coevolution and noting the links to related formalisms (interactive domains and test-based problems). We will identify the pathologies that potentially appear when assuming such coevolutionary formulations (disengagement, forgetting, mediocre stable states) and present the methods that address these issues. Compositional formulations will be considered in which hierarchies of development are explicitly formed leading to the incremental complexification of solutions. The role of system dynamics will also be reviewed with regards to providing additional insight into how design decisions regarding, say, the formulation assumed for cooperation, impact on the development of effective solutions. We will also present the concepts of coordinate systems and underlying objectives and how they can make search/learning more effective. Other covered developments will include hybridization with local search and relationships to shaping.

The concepts covered in the tutorial will be illustrated with numerous examples and applications, including the cooperative development of programs (soccer, MsPacMan, Rubik’s Cube), competitive coevolution of game strategies (Othello), and machine learning (learning from data streams and object recognition).
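As a minimal, hedged illustration of the cooperative variant (our own toy, far simpler than the frameworks covered in the tutorial), the sketch below evolves two subpopulations, each owning half of a candidate solution; an individual is scored by pairing it with the best-known collaborator from the other subpopulation. The task, population sizes, and update rule are all invented for the example.

```python
import random

def cooperative_coevolution(n=10, pop_size=10, gens=1000, seed=0):
    """Two subpopulations cooperatively assemble one solution (toy OneMax)."""
    rng = random.Random(seed)
    def whole_fitness(left, right):          # fitness is defined on the pair
        return sum(left) + sum(right)
    pops = [[[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
            for _ in range(2)]
    reps = [max(p, key=sum) for p in pops]   # initial collaborators
    for _ in range(gens):
        for s in (0, 1):                     # evolve each subpopulation in turn
            def score(ind):                  # pair with the other side's rep
                pair = (ind, reps[1]) if s == 0 else (reps[0], ind)
                return whole_fitness(*pair)
            parent = max(pops[s], key=score)
            child = [b ^ (rng.random() < 1.0 / n) for b in parent]
            worst = min(range(pop_size), key=lambda i: score(pops[s][i]))
            pops[s][worst] = child           # replace the weakest collaborator
            reps[s] = max(pops[s], key=score)
    return whole_fitness(reps[0], reps[1])
```

Because fitness is only ever assigned to a pair, the search gradient each subpopulation sees depends on the other's current representative, which is exactly the source of the pathologies (e.g. mediocre stable states) discussed in the tutorial.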

Krzysztof Krawiec

Krzysztof Krawiec is an Associate Professor in the Institute of Computing Science at Poznan University of Technology, Poland. His primary research areas are genetic programming, semantic genetic programming, and coevolutionary algorithms, with applications in program synthesis, modeling, pattern recognition, and games. Dr. Krawiec co-chaired the European Conference on Genetic Programming in 2013 and 2014, GP track at GECCO'16, and is an associate editor of Genetic Programming and Evolvable Machines journal.

Malcolm Heywood

Malcolm Heywood is a Professor of Computer Science at Dalhousie University, Canada. He has conducted research in genetic programming (GP) since 2000. He has a particular interest in scaling up the tasks that GP can potentially be applied to. His current research attempts to appraise the utility of coevolutionary methods under non-stationary environments as encountered in streaming data applications, and to coevolve agents for single- and multi-agent reinforcement learning tasks. In the latter case the goal is to coevolve behaviours for playing soccer under the RoboSoccer environment (a test bed for multi-agent reinforcement learning). Dr. Heywood is a member of the editorial board for Genetic Programming and Evolvable Machines (Springer). He was a track co-chair for the GECCO GP track in 2014 and a co-chair for the European Conference on Genetic Programming in 2015.

Theory of Swarm Intelligence

Social animals as found in fish schools, bird flocks, bee hives, and ant colonies are able to solve highly complex problems in nature. This includes foraging for food, constructing astonishingly complex nests, and evading or defending against predators. Remarkably, these animals in many cases use very simple, decentralized communication mechanisms that do not require a single leader. This makes the animals perform surprisingly well, even in dynamically changing environments. The collective intelligence of such animals is known as swarm intelligence and it has inspired popular and very powerful optimization paradigms, including ant colony optimization (ACO) and particle swarm optimization (PSO).

The reasons behind their success are often elusive. We are just beginning to understand when and why swarm intelligence algorithms perform well, and how to use swarm intelligence most effectively. Understanding the fundamental working principles that determine their efficiency is a major challenge.

This tutorial will give a comprehensive overview of recent theoretical results on swarm intelligence algorithms, with an emphasis on their efficiency (runtime/computational complexity). In particular, the tutorial will show how techniques for the analysis of evolutionary algorithms can be used to analyze swarm intelligence algorithms and how the performance of swarm intelligence algorithms compares to that of evolutionary algorithms.
The results shed light on the working principles of swarm intelligence algorithms, identify the impact of parameters and other design choices on performance, and thus help to use swarm intelligence more effectively.

The tutorial will be divided into a first, larger part on ACO and a second, smaller part on PSO. For ACO we will consider simple variants of the MAX-MIN ant system. Investigations of example functions in pseudo-Boolean optimization demonstrate that the choices of the pheromone update strategy and the evaporation rate have a drastic impact on the running time. We further consider the performance of ACO on illustrative problems from combinatorial optimization: constructing minimum spanning trees, solving shortest path problems with and without noise, and finding short tours for the TSP.
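The pseudo-Boolean setting above can be made concrete with a compact sketch of a MAX-MIN-style ant system on OneMax (our own hedged illustration of the analyzed algorithm family, with illustrative parameter values): each bit is sampled from its pheromone value, and pheromones are evaporated and reinforced toward the best-so-far solution, clamped to the interval [1/n, 1 - 1/n].

```python
import random

def mmas_onemax(n=20, rho=0.2, budget=10000, seed=0):
    """MAX-MIN-style ant system on OneMax with best-so-far reinforcement."""
    rng = random.Random(seed)
    tau = [0.5] * n                      # pheromone tau_i = P(bit i sampled as 1)
    best, best_fit = None, -1
    for _ in range(budget):
        x = [1 if rng.random() < t else 0 for t in tau]
        f = sum(x)
        if f > best_fit:
            best, best_fit = x, f
        # evaporate (factor 1 - rho) and reinforce toward the best-so-far,
        # clamping to the MAX-MIN bounds [1/n, 1 - 1/n]
        tau = [min(1 - 1 / n, max(1 / n, (1 - rho) * t + rho * b))
               for t, b in zip(tau, best)]
        if best_fit == n:
            break
    return best_fit
```

The evaporation rate rho controls how fast the pheromones commit to the best-so-far solution; as the tutorial abstract notes, this choice (together with the update strategy) has a drastic impact on the running time.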

For particle swarm optimization, the tutorial will cover results on PSO for pseudo-Boolean optimization as well as a discussion of theoretical results in continuous spaces.

Dirk Sudholt

Dirk obtained his Diplom (Master's) degree in 2004 and his PhD in computer science in 2008 from the Technische Universitaet Dortmund, Germany, under the supervision of Prof. Ingo Wegener. He has held postdoc positions at the International Computer Science Institute in Berkeley, California, working in the Algorithms group led by Prof. Richard M. Karp, and at the University of Birmingham, UK, working with Prof. Xin Yao. Since January 2012 he has been a Lecturer at the University of Sheffield, UK, leading the newly established Algorithms research group.

His research focuses on the computational complexity of randomized search heuristics such as evolutionary algorithms and swarm intelligence algorithms like ant colony optimization and particle swarm optimization. He is an editorial board member of Evolutionary Computation and Natural Computing and receives funding from the EU's Future and Emerging Technologies scheme (SAGE project). He has more than 70 refereed publications in international journals and conferences, including 8 best paper awards at leading conferences, GECCO and PPSN. He has given 9 tutorials at ThRaSH, WCCI/CEC, GECCO, and PPSN.

Specialized Tutorials

Automated Offline Design of Algorithms

Most optimization algorithms, including evolutionary algorithms, metaheuristics, and general-purpose solvers for integer or constraint programming, often have many parameters that need to be properly designed and tuned to obtain the best results on a particular problem. Automatic (offline) algorithm design methods help algorithm users determine the parameter settings that optimize the performance of an algorithm before it is actually deployed. Moreover, automatic offline algorithm design methods may lead to a paradigm shift in algorithm design because they enable algorithm designers to explore much larger design spaces than traditional trial-and-error and experimental design procedures allow. Algorithm designers can thus focus on inventing new algorithmic components and combining them in flexible algorithm frameworks, leaving the final algorithm design decisions to automatic algorithm design techniques for specific application contexts.

This tutorial is structured into two main parts. In the first part, we will give an overview of the algorithm design and tuning problem, review recent methods for automatic algorithm design, and illustrate the potential of these techniques using recent, notable applications from the presenters' and other researchers' work. The second part of the tutorial will focus on a detailed discussion of more complex scenarios, including multi-objective problems, anytime algorithms, heterogeneous problem instances, and the automatic generation of algorithms from algorithm frameworks; its focus is hence on practical but challenging applications of automatic algorithm design. The second part will also demonstrate how to tackle algorithm design tasks using our irace software (http://iridia.ulb.ac.be/irace), which implements the iterated racing procedure for automatic algorithm design. We will provide a practical step-by-step guide to using irace in the typical algorithm design scenario.
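As a taste of what such a step-by-step guide involves, a minimal irace setup consists of a parameter-space file and a scenario file. The file names, solver switches, and ranges below are invented examples for illustration, not material from the tutorial:

```
# parameters.txt -- hypothetical tuning space for some local search solver
# name        switch          type  range
tenure        "--tenure "     i     (5, 100)
restartProb   "--restart "    r     (0.0, 1.0)

# scenario.txt -- points irace at the training instances and the budget
trainInstancesDir = "./instances"
parameterFile     = "./parameters.txt"
maxExperiments    = 1000

# run the tuning from the shell
$ irace --scenario scenario.txt
```

Here `i` denotes an integer parameter and `r` a real-valued one; irace then races candidate configurations against each other on the training instances until the experiment budget is exhausted.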

Thomas Stützle

Thomas Stützle is a senior research associate of the Belgian F.R.S.-FNRS working at the IRIDIA laboratory of Université libre de Bruxelles (ULB), Belgium. He received the Diplom (German equivalent of M.S. degree) in business engineering from the Universität Karlsruhe (TH), Karlsruhe, Germany in 1994, and his PhD and his habilitation in computer science both from the Computer Science Department of Technische Universität Darmstadt, Germany, in 1998 and 2004, respectively. He is the co-author of two books, "Stochastic Local Search: Foundations and Applications" and "Ant Colony Optimization", and he has published extensively in the wider area of metaheuristics, including 20 edited proceedings or books, 8 journal special issues, and more than 190 journal and conference articles and book chapters, many of which are highly cited. He is associate editor of Computational Intelligence, Swarm Intelligence, and Applied Mathematics and Computation and is on the editorial boards of seven other journals, including Evolutionary Computation and the Journal of Artificial Intelligence Research. His main research interests are in metaheuristics, swarm intelligence, methodologies for engineering stochastic local search algorithms, multi-objective optimization, and automatic algorithm configuration. For more than a decade he has been interested in automatic algorithm configuration and design methodologies, and he has contributed to effective algorithm configuration techniques such as F-race, Iterated F-race and ParamILS. His 2002 GECCO paper on "A Racing Algorithm For Configuring Metaheuristics" (joint work with M. Birattari, L. Paquete, and K. Varrentrapp) received the 2012 SIGEVO impact award.

Manuel López-Ibáñez

Dr. López-Ibáñez is a lecturer in the Decision and Cognitive Sciences Research Centre at the Alliance Manchester Business School, University of Manchester, UK. He received the M.S. degree in computer science from the University of Granada, Granada, Spain, in 2004, and the Ph.D. degree from Edinburgh Napier University, U.K., in 2009. He has published 17 journal papers, 6 book chapters and 36 papers in peer-reviewed proceedings of international conferences on diverse areas such as evolutionary algorithms, ant colony optimization, multi-objective optimization, pump scheduling and various combinatorial optimization problems. His current research interests are experimental analysis and the automatic configuration and design of stochastic optimization algorithms, for single and multi-objective problems. He is the lead developer and current maintainer of the irace software package for automatic algorithm configuration (http://iridia.ulb.ac.be/irace).

Evolutionary Computation and Cryptology

Evolutionary Computation (EC) has been used with great success on various real-world problems. One domain abundant with difficult problems is cryptology. Cryptology can be divided into cryptography, which, informally speaking, studies methods to ensure secrecy (but also authenticity, privacy, etc.), and cryptanalysis, which deals with methods to break cryptographic systems.

Although not always in an obvious way, EC can be applied to problems from both domains. This tutorial will first give a brief introduction to cryptology intended for a general audience (therefore omitting proofs and the mathematics behind many concepts). Afterwards, we concentrate on several topics from cryptology that have been successfully tackled with EC to date and discuss why those topics are suitable for EC.

However, care must be taken, since there are a number of problems that seem impossible to solve with EC, and one needs to recognize the limitations of these heuristics. We will discuss the choice of appropriate EC techniques (GA, GP, CGP, ES, multi-objective optimization, etc.) for various problems and elaborate on the importance of that choice. Furthermore, we will discuss the gap between the cryptographic community and the EC community and what it means for the results. In doing so, we place special emphasis on the perspective that cryptography offers a source of benchmark problems for the EC community.

To conclude, we will present a number of topics we consider strong research choices that can have a real-world impact. There, we pay special attention to cryptographic problems where the cryptographic community has successfully applied EC but which have remained somewhat outside the focus of the EC community.

This tutorial will also present live demos of EC in action on cryptographic problems. We will present several problems, ways of encoding solutions, and the impact of the choice of algorithm; finally, we will run some experiments to show the results and discuss how to assess them from a cryptographic perspective.
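To make one such problem concrete: a classic benchmark for EC in cryptology is evolving Boolean functions with high nonlinearity, a fitness measure computable via the fast Walsh-Hadamard transform. The sketch below uses a plain hill climber purely as a stand-in for the GA/GP/CGP approaches mentioned above; all parameter choices are ours, for illustration only.

```python
import random

def nonlinearity(truth_table):
    """Nonlinearity of a Boolean function given as a truth table
    (list of 0/1 of length 2^n), via the fast Walsh-Hadamard transform."""
    w = [1 - 2 * b for b in truth_table]   # sign vector (-1)^f(x)
    n = len(w)
    h = 1
    while h < n:                           # in-place fast WHT
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                w[j], w[j + h] = w[j] + w[j + h], w[j] - w[j + h]
        h *= 2
    # NL = 2^(n-1) - max|W_f| / 2
    return (n - max(abs(v) for v in w)) // 2

def hill_climb(nvars=4, steps=500, seed=1):
    """Toy hill climber maximizing nonlinearity by flipping truth-table bits."""
    rng = random.Random(seed)
    tt = [rng.randint(0, 1) for _ in range(2 ** nvars)]
    best = nonlinearity(tt)
    for _ in range(steps):
        i = rng.randrange(len(tt))
        tt[i] ^= 1                         # try flipping one output bit
        nl = nonlinearity(tt)
        if nl >= best:
            best = nl                      # keep equal or better moves
        else:
            tt[i] ^= 1                     # revert a worsening flip
    return tt, best
```

Linear functions have nonlinearity 0, which is what makes this a meaningful cryptographic fitness: the search is pushed away from functions that linear cryptanalysis approximates well.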

Stjepan Picek

Stjepan Picek finished his PhD in 2015 as a double doctorate under the supervision of Lejla Batina, Elena Marchiori (Radboud University Nijmegen, The Netherlands) and Domagoj Jakobovic (Faculty of Electrical Engineering and Computing, Croatia). The topic of his research was cryptology and evolutionary computation (EC) techniques, which resulted in the thesis "Evolutionary Computation in Cryptology".
Currently, Stjepan is working as a postdoc researcher at the KU Leuven, Belgium as a part of the COSIC group where he continues his research on the applications of EC in the field of cryptology. His research topics include evolutionary computation, cryptology, and machine learning.
Prior to that, Stjepan worked in industry and government.
He regularly publishes papers in both evolutionary computation and cryptographic conferences and journals.
Besides that, he is a member of several professional societies (ACM, IEEE, IACR).

NEW Evolutionary Computation in Network Management and Security

Cyberspace is becoming more and more challenging to secure due to increasing vulnerabilities and the complex interactions between cyber systems, physical infrastructures and users/attackers that can operate anywhere on the Internet. Cyber security is the body of technologies, processes and practices designed and developed to protect the systems, networks and data in cyberspace. In this tutorial, we will first introduce the topics of network management and cyber security for a general audience. Then, we will discuss how evolutionary computing techniques can help in these tasks to improve best practices.

Network management includes tasks such as network and system discovery, monitoring, performance analysis, fault prediction and diagnosis. Cyber security includes tasks such as network security, application security and information security. In the computing context, the term security implies cyber security. Ensuring cyber security requires coordinated efforts of network management and information systems.

Currently, one of the most problematic aspects of network management and security is the quickly and constantly evolving nature of the systems and attacks. To address this challenge, advisory organizations such as NIST promote more proactive and adaptive approaches. This results in a shift toward continuous monitoring and real-time assessment of the current situation on our networks and systems. Such a shift to adaptive and proactive approaches may provide many opportunities for the evolutionary computing community. We will give an overview of the research done to date in the network management and security field, and discuss new and open problems that are suitable for evolutionary computing techniques going forward. Finally, we will present numerous examples, applications and live demos to illustrate the concepts in this tutorial.

Nur Zincir-Heywood

Nur Zincir-Heywood is a Professor of Computer Science at Dalhousie University, Halifax, Canada. Her research interests include computational intelligence and data analytics for network operations and cyber security. She currently works on traffic and behavior analysis for network / service management and cyber-security. She has substantial experience of industrial research in systems security and computer networking. Dr. Zincir-Heywood is a member of the IEEE and the ACM.

Gunes Kayacik

Gunes Kayacik is a Research Scientist at Qualcomm Research Silicon Valley, USA. His research interests have always been found in the middle ground between computer security and machine learning. Following the completion of a Ph.D. in computer science from Dalhousie University, he was awarded a Marie Curie Postdoctoral Fellowship. His postdoctoral research focused on mobile device security, specifically getting devices to recognize their users from sensor data. Prior to joining Qualcomm, Dr. Kayacik worked at Silicon Valley start-ups, developing machine learning methods for botnet detection and data leak prevention, which protected several thousand end users and hosts.

Evolutionary Robotics

In the same way as evolution shapes animals, is it possible to use artificial evolution to automatically design robots? This attractive vision gave birth to Evolutionary Robotics (ER), an interdisciplinary field that incorporates ideas from Biology, Robotics and Artificial Intelligence. Within the last 20 years, Evolutionary Robotics has explored many research avenues, from understanding natural evolution through ER models to co-designing robots’ bodies and brains.

This tutorial will give an overview of the various questions addressed in ER, relying on examples from the literature. Past achievements and major contributions, as well as specific challenges in ER will be described. The tutorial will in particular focus on:

  • what is evolutionary robotics?
  • fitness function design and the influence of selection pressure;
  • encodings of controllers and morphologies;
  • evolution for physical robots and the reality gap;
  • embodied evolution and collective robotics systems;
  • open questions and future challenges.

Nicolas Bredeche

Nicolas Bredeche is Professeur des Universités (Professor) at Université Pierre et Marie Curie (UPMC, Paris, France). His research is done in the team Architectures and Models for Adaptation and Cognition, within the Institute of Intelligent Systems and Robotics (ISIR, CNRS). His field of research is mostly concerned with Evolutionary Computation and Complex Systems (self-adaptive collective robotic systems, generative and developmental systems, evolutionary robotics). In particular, part of his work is about (a) evolutionary design and/or adaptation of groups of embodied agents and (b) artificial ontogeny such as multi-cellular developmental systems and self-regulated networks.

Nicolas Bredeche is the author of more than 30 papers in journals and major conferences in the field. He has advised or co-advised six PhD students and has served on program committees and/or as track chair of the major conferences in the field of Evolutionary Computation (incl. GECCO track chair for the Artificial Life and Robotics track 2012/2013). He is also a member of the French Evolution Artificielle association and has regularly co-organized the French Artificial Evolution one-day seminar (JET) since 2009. He has also organized several international workshops on Evolution, Robotics, and the Development of Artificial Neural Networks.

Stéphane Doncieux

Stéphane Doncieux is Professeur des Universités (Professor) in Computer Science at Université Pierre et Marie Curie (UPMC, Paris, France). His research is mainly concerned with the use of evolutionary algorithms in the context of optimization or synthesis of robot controllers. He has worked in a robotics context to design, for instance, controllers for flying robots, but also in the context of modeling, where he worked on the use of multi-objective evolutionary algorithms to optimize and study computational models. More recently, he has focused on the use of multi-objective approaches to tackle learning problems like premature convergence or generalization.

He is an engineer from ENSEA, a French electronics engineering school. He obtained a Master’s degree in Artificial Intelligence and Pattern Recognition in 1999 and defended his PhD in Computer Science in 2003. He was responsible, with Bruno Gas, for the SIMA research team from its creation in 2007 until 2011. Since then, he has headed the AMAC (Architecture and Models of Adaptation and Cognition) research team, with 11 permanent researchers, 3 postdocs and 9 PhD students. Researchers in the team work on different aspects of learning in the context of motion control and cognition, from both a computational neuroscience perspective and a robotics perspective. He has published 10 journal papers and more than 30 articles in international conferences. He has organized several workshops on ER at conferences like GECCO and IEEE-IROS and has edited 2 books.

Jean-Baptiste Mouret

Jean-Baptiste Mouret is a researcher at Inria, a French research institute dedicated to computer science and mathematics. He was previously an assistant professor (maître de conférences) at ISIR (Institute for Intelligent Systems and Robotics), which is part of Université Pierre et Marie Curie–Paris 6 (UPMC). He is the principal investigator of an ERC grant (ResiBots – Robots with animal-like resilience, 2015-2020) and was the recipient of a French “ANR young researcher” grant (Creadapt – Creative adaptation by Evolution, 2012-2015).

Overall, J.-B. Mouret conducts research that intertwines evolutionary algorithms, neuro-evolution, and machine learning to make robots more adaptive. His work was recently featured on the cover of Nature (Cully et al., 2015) and received the "Outstanding Paper of 2015" award from the Society for Artificial Life (2016), the French "La Recherche" award (2016), the GECCO best paper award (2011, GDS track), and the IEEE CEC best student paper award (2009).

J.-B. Mouret obtained a M.S. in computer science from EPITA in 2004, a M.S. in artificial intelligence from the Pierre and Marie Curie University (Paris, France) in 2005 and a Ph.D. in computer science from the same university in 2008. He co-edited two books, authored more than 50 papers in major journals and conferences, and was co-organizer of two conferences and three workshops about bio-inspired robotics and evolutionary robotics.

Intelligent Systems for Smart Cities

The concept of Smart Cities can be understood as a holistic approach to improve the level of development and management of the city in a broad range of services by using information and communication technologies.

It is common to recognize six axes of work: i) Smart Economy, ii) Smart People, iii) Smart Governance, iv) Smart Mobility, v) Smart Environment, and vi) Smart Living. In this tutorial we first focus on a capital issue: smart mobility. European citizens and economic actors need a transport system which provides them with seamless, high-quality door-to-door mobility. At the same time, the adverse effects of transport on the climate, the environment and human health need to be reduced. We will show many new systems based on bio-inspired techniques that ease road traffic flow in the city and allow a customized, smooth experience for travellers (private and public transport).

This tutorial will then discuss potential applications of intelligent systems for energy (like adaptive street lighting), environmental applications (like mobile sensors for air pollution), smart buildings (intelligent design), and several other applications linked to smart living, tourism, and smart municipal governance.

Enrique Alba

Prof. Enrique Alba received his engineering degree and his PhD in Computer Science from the University of Málaga (Spain) in 1992 and 1999, respectively. He works as a Full Professor at this university with various teaching duties: data communications, distributed programming, software quality, and also evolutionary algorithms, bases for R+D+i, and smart cities in graduate and master/doctoral programs. Prof. Alba leads an international team of researchers in the field of complex optimization/learning with applications in smart cities, bioinformatics, software engineering, telecoms, and others. In addition to organizing international events (ACM GECCO, IEEE IPDPS-NIDISC, IEEE MSWiM, IEEE DS-RT, …), Prof. Alba has offered dozens of postgraduate courses and multiple seminars at more than 30 international institutions, and has directed several research projects (7 with national funds, 5 in Europe, and numerous bilateral actions). He has also directed 7 projects for innovation and transfer to industry (OPTIMI, Tartessos, ACERINOX, ARELANCE, TUO, INDRA, AOP) and presently works as an invited professor at INRIA, the University of Luxembourg, and the University of Ostrava. He is an editor of several international journals and book series of Springer-Verlag and Wiley, and often reviews articles for more than 30 impact journals. He has published 87 articles in journals indexed by Thomson ISI, 17 articles in other journals, 40 papers in LNCS, and more than 250 refereed conference papers. Besides that, Prof. Alba has published 11 books and 39 book chapters, and has received 6 awards for his professional activities. Prof. Alba’s H index is 50, with more than 11000 citations to his work.

NEW Multiagent Systems and Agent-based Modeling and Simulation

Based on concepts that are well grounded in artificial and computational intelligence, and with the recent advances in information and communication technologies, a new paradigm that considers social interactions among intelligent actors was introduced: the area of multiagent systems (MAS) and agent-based modeling and simulation (ABMS). This area studies neither physical systems nor agents in isolation, but rather the agent as part of a social space. This is important in our modern world, where embedded processing is ubiquitous and thus part of our social fabric.

The term agent has been used within the GECCO community in various contexts (e.g., swarm intelligence), but it seems that neither the GECCO nor the AAMAS (the premier conference on MAS) community really takes advantage of the potential of working together. Few researchers publish in both communities, despite the fact that EcoMAS is a traditional workshop held together with GECCO.

The goal of this tutorial is twofold: (a) about one-third of the time will be devoted to basic material about MAS (because I deem the material on single-agent architectures to be of less interest to this community) and ABMS (since this simulation paradigm is widely used in the computational intelligence community), giving the audience a sense of the basic principles; and (b) about two-thirds of the time will cover the most recent advances in MAS, including the highly relevant topic of multiagent learning, one of the obvious interfaces between these two communities, as well as evolutionary game theory as a proxy for MAS.

The scope will include: an introduction to autonomous agents and multiagent systems; agent coordination; agent-based modeling and simulation; distributed constraint optimization (DCOP); stochastic planning with Markov decision processes (MDPs); multiagent learning; game theory and stochastic games as models of multiagent interactions; and consensus reaching via social choice. If time and infrastructure permit, exercises using NetLogo can be covered as well.
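As a small pointer to what the stochastic planning topic involves, value iteration for an MDP can be written in a few lines of Python. The two-state example below, with its transition probabilities and rewards, is invented purely for illustration:

```python
def value_iteration(states, actions, P, R, gamma=0.9, eps=1e-6):
    """Generic value iteration. P[(s, a)] is a list of (prob, next_state)
    pairs, R[(s, a)] is the immediate reward, gamma the discount factor."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            # Bellman optimality update: best action value from state s
            q = [R[(s, a)] + gamma * sum(p * V[s2] for p, s2 in P[(s, a)])
                 for a in actions]
            new_v = max(q)
            delta = max(delta, abs(new_v - V[s]))
            V[s] = new_v
        if delta < eps:
            return V

# Toy two-state MDP (invented): 'stay' is safe, 'go' is risky but
# leads to the rewarding state B.
states, actions = ["A", "B"], ["stay", "go"]
P = {("A", "stay"): [(1.0, "A")], ("A", "go"): [(0.8, "B"), (0.2, "A")],
     ("B", "stay"): [(1.0, "B")], ("B", "go"): [(1.0, "A")]}
R = {("A", "stay"): 0.0, ("A", "go"): -1.0,
     ("B", "stay"): 1.0, ("B", "go"): 0.0}
V = value_iteration(states, actions, P, R)
```

The multiagent settings covered in the tutorial generalize exactly this single-agent picture to stochastic games, where each agent's transition and reward models depend on the others' actions.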

Ana Bazzan

Ana L. C. Bazzan is a full professor at the Federal University of Rio Grande do Sul (UFRGS) in Brazil. She is an associate editor of the Journal of Autonomous Agents and Multiagent Systems, Advances in Complex Systems, and Multiagent and Grid Systems. She served as general co-chair of AAMAS 2014 (the premier conference on autonomous agents and multiagent systems), among others. She serves as area chair for MAS for the IJCAI 2017 conference, and has served several times on the AAMAS (and other conferences') program committees (as PC member or senior PC member). She is also a member of the IFAAMAS board (2004-2008 and 2014-). She co-organized the Workshop on Synergies between Multiagent Systems, Machine Learning, and Complex Systems (TRI 2015), held together with IJCAI 2015 in Buenos Aires. Her research interests include MAS, ABMS, machine learning, multiagent reinforcement learning, evolutionary game theory, swarm intelligence, and complex systems. Her work is mainly applied in domains related to traffic and transportation.

Ana Bazzan has a record of publications in the aforementioned areas, including at GECCO, and is one of the few researchers who regularly attend meetings in both the computational intelligence area (organized by ACM and IEEE) and the MAS area. In 2012, her team won the Demolition Derby competition at GECCO'12.