Computational Intelligence and Modern Heuristics


Chapter authors: Kwang Y. Lee, Malihe M. Farsangi, and John G. Vlachogiannis. Book editor: Mircea Eremia.

Abstract: This chapter provides basic knowledge of recent intelligent optimization and control techniques and of how they are combined with knowledge elements in computational intelligence systems.

Therefore, in his work he investigates the utilization of interpolation and active learning methods to change how classifiers are initialized, how insufficiently covered niches of the problem space are filled, and how adequate actions are selected. A further aspect he investigates is the question of how Learning Classifier Systems can be enhanced toward a behavior of proactive knowledge construction.

In model-building evolutionary algorithms the variation operators are guided by a model that conveys as much problem-specific information as possible, so as to best combine the currently available solutions and thereby construct new, improved solutions.



Such models can be constructed beforehand for a specific problem, or they can be learnt during the optimization process. Well-known algorithms of the latter type are Estimation-of-Distribution Algorithms (EDAs), in which probabilistic models of promising solutions are built and subsequently sampled to generate new solutions. In general, replacing traditional crossover and mutation operators by building and using models enables the use of machine learning techniques for automatic discovery of problem regularities and exploitation of these regularities for effective exploration of the search space.
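
As a rough illustration of the build-model/sample loop described above, here is a minimal univariate EDA sketch (UMDA-style, on bitstrings); the function names and parameter values are illustrative and not taken from the tutorial.

```python
import numpy as np

def umda(fitness, n_bits, pop_size=100, n_select=50, n_gens=100, rng=None):
    """Minimal univariate EDA: estimate per-bit probabilities from the selected
    individuals, then sample a new population from that model (illustrative sketch)."""
    rng = np.random.default_rng() if rng is None else rng
    probs = np.full(n_bits, 0.5)                                   # initial model: uniform
    best, best_fit = None, -np.inf
    for _ in range(n_gens):
        pop = (rng.random((pop_size, n_bits)) < probs).astype(int)  # sample the model
        fits = np.array([fitness(ind) for ind in pop])
        elite = pop[np.argsort(fits)[-n_select:]]                   # truncation selection
        probs = elite.mean(axis=0)                                  # re-estimate the model
        probs = np.clip(probs, 1.0 / n_bits, 1.0 - 1.0 / n_bits)    # keep probabilities off 0/1
        if fits.max() > best_fit:
            best_fit, best = fits.max(), pop[fits.argmax()].copy()
    return best, best_fit

# Example: maximizing OneMax (number of ones).
best, fit = umda(lambda x: x.sum(), n_bits=50)
```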

Using machine learning in optimization enables the design of optimization techniques that can automatically adapt to the given problem. This is an especially useful feature when considering optimization in a black-box setting. Successful applications include Ising spin glasses in 2D and 3D, graph partitioning, MAXSAT, feature subset selection, forest management, groundwater remediation design, telecommunication network design, antenna design, and scheduling.

This tutorial will provide an introduction and an overview of major research directions in this area. He has been involved in genetic algorithm research for many years; his current research interests are mainly focused on the design and application of model learning techniques to improve evolutionary search. Dirk has been a member of the Editorial Board of the journals Evolutionary Computation, Evolutionary Intelligence, and IEEE Transactions on Evolutionary Computation, and a member of the program committee of the major international conferences on evolutionary computation.

Peter A. Peter was formerly affiliated with the Department of Information and Computing Sciences at Utrecht University, where he also obtained both his MSc and PhD degrees in Computer Science, more specifically on the design and application of estimation-of-distribution algorithms (EDAs).

He has co-authored over 90 refereed publications on both algorithmic design aspects and real-world applications of evolutionary algorithms.

In recent years, there has been a resurgence of interest in reinforcement learning (RL), particularly in the deep learning community. While much of the attention has been focused on using value-function learning approaches (i.e., Q-Learning) or estimated policy-gradient-based approaches to train neural-network policies, little attention has been paid to Neuroevolution (NE) for policy search.

The larger research community may have forgotten about previous successes of Neuroevolution.

Some of the most challenging reinforcement learning problems are those where reward signals are sparse and noisy. For many of these problems, we only know the outcome at the end of the task, such as whether the agent wins or loses, whether the robot arm picks up the object or not, or whether the agent has survived. Since NE only requires the final cumulative reward that an agent gets at the end of its rollout in an environment, these are the types of problems where NE may have an advantage over traditional RL methods.

In this tutorial, I show how Neuroevolution can be successfully applied to Deep RL problems to help find a suitable set of model parameters for a neural network agent. Using popular modern software frameworks for RL (TensorFlow, OpenAI Gym, pybullet, roboschool), I will apply NE to continuous control robotic tasks, and show that we can obtain very good results controlling bipedal robot walkers, a Kuka robot arm for grasping tasks, the Minitaur robot, and various existing baseline locomotion tasks common in the Deep RL literature.
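
A minimal sketch of such a neuroevolution loop, in the spirit of simple evolution-strategy-based policy search: parameters are perturbed, each candidate is scored only by its episode return, and the mean moves toward the better perturbations. The episode_return function below is a toy stand-in for an actual Gym/pybullet rollout, and all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
target = rng.standard_normal(32)              # stand-in for "good" policy parameters

def episode_return(params):
    """Placeholder for a full environment rollout: in a real setup this would run
    one episode and report only the cumulative reward; here the 'return' is just
    how close the parameters are to an arbitrary target."""
    return -np.sum((params - target) ** 2)

def evolve_policy(n_params=32, pop_size=64, sigma=0.1, lr=0.03, n_gens=300):
    """Minimal NE/ES loop: perturb, evaluate by episode return only, update the mean."""
    theta = np.zeros(n_params)
    for _ in range(n_gens):
        noise = rng.standard_normal((pop_size, n_params))
        returns = np.array([episode_return(theta + sigma * eps) for eps in noise])
        scores = (returns - returns.mean()) / (returns.std() + 1e-8)  # normalize returns
        theta += lr / (pop_size * sigma) * noise.T @ scores           # ES-style update
    return theta

theta = evolve_policy()
print(episode_return(theta))   # improves (moves toward 0) over the generations
```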


I will show that NE can even obtain state-of-the-art results compared with Deep RL methods, and highlight ways to use NE that can lead to more stable and robust policies compared to traditional RL methods. I will also describe how to incorporate NE techniques into existing RL research pipelines, taking advantage of distributed processing on Cloud Compute. I will also discuss how to combine techniques from deep learning, such as the use of deep generative models, with Neuroevolution to solve more challenging Deep Reinforcement Learning problems that rely on high-dimensional video inputs for continuous robotics control, or for video game simulation tasks.

We will look at combining model-based reinforcement learning approaches with Neuroevolution to tackle these problems, using TensorFlow, OpenAI Gym, and pybullet environments. A case study will be presented for researchers preparing to tackle both areas, and we end with a group discussion with the audience about issues in cross-community collaboration.

Prior to joining Google, he worked at Goldman Sachs as a Managing Director, where he co-ran the fixed-income trading business in Japan. He obtained undergraduate and graduate degrees in Engineering Science and Applied Math from the University of Toronto.

Successful and efficient use of evolutionary algorithms (EAs) depends on the choice of the genotype, the problem representation (the mapping from genotype to phenotype), and on the choice of search operators that are applied to the genotypes.

These choices cannot be made independently of each other. The question whether a certain representation leads to better performing EAs than an alternative representation can only be answered when the operators applied are taken into consideration. The reverse is also true: deciding between alternative operators is only meaningful for a given representation. Research in the last few years has identified a number of key concepts to analyse the influence of representation-operator combinations on EA performance.

Relevant concepts are the locality and redundancy of representations. Locality is a result of the interplay between the search operator and the genotype-phenotype mapping. Representations have high locality if the application of variation operators results in new solutions similar to the original ones. Representations are redundant if the number of possible genotypes exceeds the number of phenotypes.

Redundant representations can lead to biased encodings if some phenotypes are on average represented by a larger number of genotypes, or if the search operators favor certain kinds of phenotypes. The tutorial gives a brief overview of existing guidelines for representation design, illustrates the different aspects of representations, gives a brief overview of models describing these aspects, and illustrates their relevance with practical examples.
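
A small sketch of how the locality of a representation-operator pair can be probed empirically: apply the variation operator to random genotypes and measure the resulting phenotypic distance. The mapping and operator below (standard binary encoding of integers, single bit-flip) are only illustrative choices, not the tutorial's examples.

```python
import numpy as np

def binary_to_int(genotype):
    """Illustrative genotype-phenotype mapping: standard binary encoding."""
    return int("".join(map(str, genotype)), 2)

def flip_one_bit(genotype, rng):
    child = genotype.copy()
    child[rng.integers(len(child))] ^= 1
    return child

def estimate_locality(n_bits=8, n_samples=10_000, seed=0):
    """Average phenotypic distance caused by one mutation; smaller values
    indicate higher locality of the encoding-operator pair."""
    rng = np.random.default_rng(seed)
    dists = []
    for _ in range(n_samples):
        g = rng.integers(0, 2, n_bits)
        dists.append(abs(binary_to_int(g) - binary_to_int(flip_one_bit(g, rng))))
    return np.mean(dists)

print(estimate_locality())  # for standard binary code, one bit flip can change the integer a lot
```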

He is professor of Information Systems at the University of Mainz. He has published more than 90 technical papers in the context of planning and optimization, evolutionary computation, e-business, and software engineering, co-edited several conference proceedings and edited books, and is author of the books "Representations for Genetic and Evolutionary Algorithms" and "Design of Modern Heuristics". His main research interests are the application of modern heuristics in planning and optimization systems.

He has been organizer of many workshops and tracks on heuristic optimization issues, chair of EvoWorkshops, co-organizer of the European workshop series on "Evolutionary Computation in Communications, Networks, and Connected Systems", co-organizer of the European workshop series on "Evolutionary Computation in Transportation and Logistics", and co-chair of the program committee of the GA track at GECCO.

Evolutionary algorithm theory has studied the time complexity of evolutionary algorithms for more than 20 years.


Different aspects of this rich and diverse research field were presented in three different advanced or specialized tutorials at last year's GECCO. This tutorial presents the foundations of this field. We introduce the most important notions and definitions used in the field and consider different evolutionary algorithms on a number of well-known and important example problems.

Through a careful and thorough introduction of important analytical tools and methods, including fitness-based partitions, typical events and runs, and drift analysis, by the end of the tutorial the attendees will be able to apply these techniques to derive relevant runtime results for non-trivial evolutionary algorithms.
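
As a hedged illustration of the fitness-based partition (fitness-level) method mentioned above, here is the classic textbook argument for the (1+1) EA on OneMax, written out in the usual notation.

```latex
% Partition the search space by the number of ones. From a solution with i ones,
% the probability s_i of reaching a better level is at least the probability of
% flipping one of the n-i zero-bits and nothing else:
\[
  s_i \;\ge\; (n-i)\,\frac{1}{n}\Bigl(1-\frac{1}{n}\Bigr)^{n-1} \;\ge\; \frac{n-i}{e\,n},
\]
% so the expected optimization time is bounded by the sum of the expected waiting
% times for leaving each level:
\[
  E[T] \;\le\; \sum_{i=0}^{n-1} \frac{1}{s_i}
       \;\le\; \sum_{i=0}^{n-1} \frac{e\,n}{n-i}
       \;=\; e\,n\,H_n \;=\; O(n \log n).
\]
```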

Moreover, the attendees will be fully prepared to follow the more advanced tutorials that cover more specialized aspects of the field, including the new advanced runtime analysis tutorial on realistic population-based EAs.


To assure the coverage of the topics required in the specialised tutorials, this introductory tutorial will be coordinated with the presenters of the more advanced ones. In addition to custom-tailored methods for the analysis of evolutionary algorithms we also introduce the relevant tools and notions from probability theory in an accessible form. This makes the tutorial appropriate for everyone with an interest in the theory of evolutionary algorithms without the need to have prior knowledge of probability theory and analysis of randomized algorithms.



He was a Lecturer in the School of Computer Science at the University of Nottingham before returning to Birmingham. Dr Lehre's research interests are in theoretical aspects of nature-inspired search heuristics, in particular runtime analysis of population-based evolutionary algorithms. He was the coordinator of the successful 2M euro EU-funded project SAGE, which brought together the theory of evolutionary computation and population genetics. Pietro S. Ingo Wegener's research group. His main research interest is the time complexity analysis of randomized search heuristics for combinatorial optimization problems.

This tutorial addresses GECCO attendees who do not regularly use theoretical methods in their research. For these, we give a smooth introduction to the theory of evolutionary computation (EC). Complementing other introductory theory tutorials, we do not discuss mathematical methods or particular results, but explain. Benjamin Doerr is a full professor at the Ecole Polytechnique (France). He also is an adjunct professor at Saarland University (Germany). His research area is the theory both of problem-specific algorithms and of randomized search heuristics like evolutionary algorithms.

Major contributions to the latter include runtime analyses for evolutionary algorithms and ant colony optimizers, as well as the further development of the drift analysis method, in particular, multiplicative and adaptive drift. In the young area of black-box complexity, he proved several of the current best bounds.

He chairs the Hot-off-the-press track.

The Covariance Matrix Adaptation Evolution Strategy (CMA-ES) is nowadays considered the state-of-the-art continuous stochastic search algorithm, in particular for optimization of non-separable, ill-conditioned and rugged functions. The CMA-ES consists of different components that adapt the step-size and the covariance matrix separately. This tutorial will focus on CMA-ES and provide the audience with an overview of the different adaptation mechanisms used within CMA-ES and the scenarios where their relative impact is important.

We will in particular present the rank-one update, the rank-mu update, and active CMA for the covariance matrix adaptation. We will address important design principles as well as questions related to parameter tuning that always accompany algorithm design. The input parameters, such as the initial mean, the initial step-size, and the population size, will be discussed in relation to the ruggedness of functions.
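
For reference, the mean and covariance updates mentioned above in their commonly published form; the weights w_i, the learning rates c_1 and c_mu, and the evolution path p_c are the usual CMA-ES quantities (notation follows standard CMA-ES tutorials, not material specific to this one).

```latex
% Mean update from the mu best of the lambda sampled offspring,
% with y_{i:lambda} = (x_{i:lambda} - m_old)/sigma the selected normalized steps:
\[
  m \;\leftarrow\; m + \sigma \sum_{i=1}^{\mu} w_i\, y_{i:\lambda}.
\]
% Covariance matrix adaptation, combining the rank-one update (evolution path p_c)
% and the rank-mu update (weighted sum of selected steps):
\[
  C \;\leftarrow\; (1 - c_1 - c_\mu)\, C
      \;+\; c_1\, p_c p_c^{\top}
      \;+\; c_\mu \sum_{i=1}^{\mu} w_i\, y_{i:\lambda}\, y_{i:\lambda}^{\top}.
\]
```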

Restart strategies that automate the input parameter tuning will be presented. Youhei Akimoto is an associate professor at the University of Tsukuba, Japan. He received his diploma in computer science and his master's degree and PhD in computational intelligence and systems science from the Tokyo Institute of Technology, Japan. He was also a research fellow of the Japan Society for the Promotion of Science for one year.

He was an assistant professor at Shinshu University before taking up his current position. His research interests include design principles and theoretical analysis of stochastic search heuristics in continuous domains, in particular the Covariance Matrix Adaptation Evolution Strategy.

Evolutionary multi-objective optimization (EMO) has been a major research topic in the field of evolutionary computation for many years.

It has been generally accepted that the combination of evolutionary algorithms and traditional optimization methods should form a next-generation multi-objective optimization solver. As the name suggests, the basic idea of the decomposition-based technique is to transform the original complex problem into simplified subproblems so as to facilitate the optimization.
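
As a concrete example of such a decomposition, below is the weighted Tchebycheff scalarization commonly used in decomposition-based EMO (e.g., in MOEA/D): each weight vector defines one single-objective subproblem. The notation is the standard one and is not taken from this tutorial.

```latex
% The j-th subproblem minimizes the Tchebycheff scalarizing function for the
% weight vector lambda^j, given a reference point z* (componentwise ideal values):
\[
  \min_{x \in \Omega} \; g^{te}\bigl(x \mid \lambda^{j}, z^{*}\bigr)
    \;=\; \min_{x \in \Omega} \;
          \max_{1 \le i \le m} \; \lambda_i^{j}\,\bigl| f_i(x) - z_i^{*} \bigr|.
\]
% A set of well-spread weight vectors thus decomposes the multi-objective problem
% into a family of simpler single-objective subproblems that are optimized jointly.
```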

Decomposition methods have been well used and studied in traditional multi-objective optimization, and this has become a commonly used evolutionary algorithmic framework in recent years. The tutorial is self-contained: the foundations of multi-objective optimization and the basic working principles of EMO algorithms will be included so that those without prior experience in EMO can follow.

Open questions will be posed and highlighted for discussion in the latter part of this tutorial.

Afterwards, he spent a year as a postdoctoral research associate at Michigan State University.


Then, he moved to the UK and took the post of research fellow at the University of Birmingham. His current research interests include evolutionary multi-objective optimization, automatic problem solving, machine learning, and applications in water engineering and software engineering. His main research interests include evolutionary computation, optimization, neural networks, data analysis, and their applications. He is on the Thomson Reuters list of highly cited researchers in computer science.

He is an IEEE fellow.

One of the most challenging problems in solving optimization problems with evolutionary algorithms (EAs) is the choice of the parameters, which allow one to adjust the behavior of the algorithm to the problem at hand. Suitable parameter values need to be found, for example, for the population size, the mutation strength, the crossover rate, the selective pressure, etc. The choice of these parameters can have a crucial impact on the performance of the algorithm and thus needs to be made with care. In the early years of evolutionary computation there had been a quest to determine universally "optimal" parameter choices.

At the same time, researchers soon realized that different parameter choices can be optimal in different stages of the optimization process: in the beginning of an optimization run, for example, one may want to allow a larger mutation rate to increase the chance of finding the most promising regions of the search space (the "exploration" phase), while later on a smaller mutation rate keeps the search focused (the "exploitation" phase).

Such dynamic parameter choices are standard today in continuous optimization. Quite surprisingly, however, the situation is much different in discrete optimization, where non-static parameter choices have not yet lived up to their potential. The ambition of this tutorial is to contribute to a paradigm change towards a more systematic use of dynamic parameter choices.
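
A minimal sketch of one simple non-static scheme of the kind surveyed here: a success-based (one-fifth-rule style) adjustment of the mutation rate in a (1+1) EA on bitstrings. The adaptation constants are illustrative, not recommended values from the tutorial.

```python
import numpy as np

def one_plus_one_ea_adaptive(fitness, n_bits, n_evals=10_000, seed=0):
    """(1+1) EA with success-based mutation-rate adaptation: increase the rate
    after an improving step, decrease it otherwise (one-fifth-rule style sketch)."""
    rng = np.random.default_rng(seed)
    x = rng.integers(0, 2, n_bits)
    fx = fitness(x)
    p = 1.0 / n_bits                                  # mutation rate, adapted online
    F = 1.2                                           # adaptation factor (illustrative)
    for _ in range(n_evals):
        mask = rng.random(n_bits) < p
        y = np.where(mask, 1 - x, x)                  # standard bit mutation with rate p
        fy = fitness(y)
        if fy >= fx:
            x, fx = y, fy
            p = min(0.5, p * F)                       # success: explore more aggressively
        else:
            p = max(1.0 / n_bits, p * F ** (-1 / 4))  # failure: shrink the rate
    return x, fx

best, fit = one_plus_one_ea_adaptive(lambda z: z.sum(), n_bits=100)
```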


To this end, we survey existing techniques to automatically select parameter values on the fly. We will discuss both experimental and theoretical results that demonstrate the unexploited potential of non-static parameter choices. Our tutorial thereby addresses experimentally oriented as well as theory-oriented researchers alike. No specific background is required to follow this tutorial. Carola Doerr's main research activities are in the mathematical analysis of randomized algorithms, with a strong focus on evolutionary algorithms and other black-box optimizers.

She has been very active in the design and analysis of black-box complexity models, a theory-guided approach to explore the limitations of heuristic search algorithms. Most recently, she has used knowledge from these studies to prove superiority of dynamic parameter choices in evolutionary computation, a topic that she believes carries huge unexplored potential for the community. Carola is an editor of two special issues of Algorithmica.

Evolutionary algorithms have been used in various ways to create or guide the creation of digital art.

In this tutorial we present techniques from the thriving field of biologically inspired art. We show how evolutionary computation methods can be used to enhance artistic creativity and lead to software systems that help users to create artistic work. We start by providing a general introduction into the use of evolutionary computation methods for digital art and highlight different application areas. This covers different evolutionary algorithms including genetic programming for the creation of artistic images.

Afterwards, we discuss evolutionary algorithms to create artistic artwork in the context of image transition and animation. We show how the connection between evolutionary computation methods and a professional artistic approach finds application in digital animation and new media art, and discuss the different steps of involving evolutionary algorithms for image transition into the creation of paintings.

Afterwards, we give an overview of the use of aesthetic features to evaluate digital art. Finally, we outline directions for future research and discuss some open problems. Frank Neumann received his diploma and Ph.D. In his work, he considers algorithmic approaches, in particular for combinatorial and multi-objective optimization problems, and focuses on theoretical aspects of evolutionary computation as well as high-impact applications in the areas of renewable energy, logistics, and mining. Aneta Neumann graduated from the Christian-Albrechts-University of Kiel, Germany, in computer science and is currently undertaking her postgraduate research at the School of Computer Science, the University of Adelaide, Australia.

Her main research interest is understanding the fundamental link between bio-inspired computation and digital art. New developments in Gray Box Optimization makes it possible to construct new forms of Genetic Algorithms that do not use random mutation or random recombination. Instead, for certain classes of NP Hard problems ranging from MAXSAT to the Traveling Salesman Problem , it is possible to exactly compute the location of improving moves often in constant time , and to use highly efficient forms of greedy deterministic recombination.

In some domains, this makes random mutation and random recombination obsolete. Partition Crossover locally decomposes a recombination graph into q subgraphs in O(n) time. It can then identify the best of the 2^q reachable offspring. If the parents are local optima, the offspring are guaranteed to be locally optimal in the largest hyperplane subspace containing both parents, and the offspring are typically also local optima in the full space. Other recent results also use a similar form of local decomposition coupled with greedy and deterministic optimization. When applied to multiply constrained scheduling problems, the genetic algorithm is able to solve industrial problems with up to 1 billion variables.
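
A heavily simplified sketch of the idea behind such a decomposing crossover, for a pseudo-Boolean maximization problem given as a sum of subfunctions over small variable subsets: differing variables are grouped into components that never share a subfunction with another component, and each component is taken greedily from whichever parent scores better on the subfunctions it touches. The real operator works on the variable interaction graph in O(n) time; this illustration favors clarity over efficiency and is not the authors' implementation.

```python
def partition_crossover(parent_a, parent_b, subfunctions):
    """Sketch for f(x) = sum of subfunctions, each given as (variables, func),
    where func takes the values of `variables` in order.  Maximization assumed."""
    n = len(parent_a)
    diff = [i for i in range(n) if parent_a[i] != parent_b[i]]

    # Union-find over differing variables, linked when they share a subfunction.
    uf = {i: i for i in diff}
    def find(i):
        while uf[i] != i:
            uf[i] = uf[uf[i]]
            i = uf[i]
        return i
    for variables, _ in subfunctions:
        d = [v for v in variables if v in uf]
        for u, v in zip(d, d[1:]):
            uf[find(u)] = find(v)

    components = {}
    for i in diff:
        components.setdefault(find(i), []).append(i)

    # Decide each component greedily, evaluating only the subfunctions it touches.
    child = list(parent_a)
    for comp in components.values():
        comp_set = set(comp)
        touched = [(vs, f) for vs, f in subfunctions if comp_set & set(vs)]
        def partial_score(source):
            assign = list(parent_a)
            for i in comp:
                assign[i] = source[i]
            return sum(f(tuple(assign[v] for v in vs)) for vs, f in touched)
        if partial_score(parent_b) > partial_score(parent_a):
            for i in comp:
                child[i] = parent_b[i]
    return child

# Toy example: f(x) = x0*x1 + x2*x3, parents differing on all bits.
subs = [((0, 1), lambda v: v[0] * v[1]), ((2, 3), lambda v: v[0] * v[1])]
print(partition_crossover([1, 1, 0, 0], [0, 0, 1, 1], subs))   # -> [1, 1, 1, 1]
```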

Darrell Whitley has been active in Evolutionary Computation for decades and has published numerous papers, which have garnered more than 16,000 citations. He currently serves as an Associate Editor of Artificial Intelligence.

Fitness landscape analysis (FLA) can be used to characterise optimisation problems by analysing the underlying search space in terms of the objective to be optimised. There have been many recent advances in the field of FLA on the development of methods and measures that have been shown to be effective in the understanding of complex problems, algorithm behaviour, and the selection of algorithms.
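
One widely used FLA feature, fitness-distance correlation (FDC), can serve as a small illustration of feature-based problem characterization from a sample of solutions. The problem, distance, and numbers below are illustrative placeholders, not material from this tutorial.

```python
import numpy as np

def fitness_distance_correlation(sample, fitnesses, optimum, distance):
    """FDC: Pearson correlation between sampled fitness values and distance to a
    known optimum.  For maximization, values near -1 suggest a landscape that
    guides search toward the optimum; values near 0 suggest misleading structure."""
    d = np.array([distance(x, optimum) for x in sample])
    f = np.asarray(fitnesses, dtype=float)
    return np.corrcoef(f, d)[0, 1]

# Illustrative use on OneMax with Hamming distance (optimum = all-ones string).
rng = np.random.default_rng(0)
sample = rng.integers(0, 2, size=(1000, 30))
fdc = fitness_distance_correlation(
    sample,
    fitnesses=sample.sum(axis=1),
    optimum=np.ones(30, dtype=int),
    distance=lambda x, y: int(np.sum(x != y)),
)
print(fdc)   # close to -1: fitness increases as the distance to the optimum shrinks
```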

This tutorial is aimed at delegates interested in developing a deeper understanding of the complexities of search spaces and how these impact the performance of algorithms. The tutorial will cover both discrete and continuous domains.

She worked in industry for five years before joining academia and has held faculty and research positions at the University Simon Bolivar, Venezuela, and the University of Nottingham, UK. Her research interests lie in the foundations and application of evolutionary algorithms and heuristic search methods, with emphasis on autonomous search, hyper-heuristics, fitness landscape analysis and visualisation.

She recently obtained her PhD in Computer Science from the University of Pretoria, but has 20 years' lecturing experience in Computer Science at three different South African universities. Her research interests include fitness landscape analysis and the application of computational intelligence techniques to real-world problems. She is particularly interested in the link between fitness landscape features and algorithm performance, and in applying FLA techniques to real-world optimisation problems, such as the training of neural networks.

Katherine is an associate editor for Swarm and Evolutionary Computation and regularly reviews for a number of journals in fields related to evolutionary computation, swarm intelligence and operational research.

This tutorial will provide succinct coverage of common particle swarm optimization (PSO) misconceptions, with a detailed explanation of why the misconceptions are in fact false, and how they are negatively impacting results. The tutorial will also provide recent theoretical results about PSO particle behaviour, from which the PSO practitioner can make better and more informed decisions about PSO, and in particular make better PSO parameter selections.
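
For reference, the canonical (inertia-weight) PSO velocity and position updates that such theoretical results are about; w, c_1 and c_2 are the usual control parameters whose choice the tutorial addresses. The equations are the standard ones, not results specific to this tutorial.

```latex
% Velocity and position update for particle i in dimension j
% (r_1, r_2 uniform in [0,1], p_i the personal best, g the neighbourhood/global best):
\[
  v_{ij}(t+1) \;=\; w\, v_{ij}(t)
      \;+\; c_1 r_{1j}\,\bigl(p_{ij}(t) - x_{ij}(t)\bigr)
      \;+\; c_2 r_{2j}\,\bigl(g_{j}(t) - x_{ij}(t)\bigr),
\]
\[
  x_{ij}(t+1) \;=\; x_{ij}(t) + v_{ij}(t+1).
\]
% The theory characterizes for which regions of (w, c_1, c_2) particle positions
% remain stable rather than diverging, which is what enables informed parameter choices.
```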

The tutorial will focus on important aspects of PSO behaviour. With the knowledge presented in this tutorial, a PSO practitioner will gain up-to-date theoretical insights into PSO behaviour and, as a result, be able to make informed choices when utilizing PSO. His research interests include swarm intelligence, evolutionary computation, neural networks, artificial immune systems, and the application of these paradigms to data mining, games, bioinformatics, finance, and difficult optimization problems.

He has published extensively in these fields and is the author of two books, Computational Intelligence: An Introduction and Fundamentals of Computational Swarm Intelligence. His research interests include swarm intelligence, evolutionary computation, and machine learning, with a strong focus on theoretical research. Dr Cleghorn annually serves as a reviewer for numerous international journals and conferences in domains ranging from swarm intelligence and neural networks to mathematical optimization.

In the most common scenario of evaluating a GP program on a set of input-output examples (fitness cases), the semantic approach characterizes a program with a vector of outputs rather than a single scalar fitness value. Past research on semantic GP has demonstrated that the additional information obtained in this way facilitates designing more effective search operators. In particular, exploiting the geometric properties of the resulting semantic space leads to search operators with attractive properties, which have provably better theoretical characteristics than conventional GP operators.

This in turn leads to dramatic improvements in experimental comparisons. The aim of the tutorial is to give a comprehensive overview of semantic methods in genetic programming, to illustrate in an accessible way the formal geometric framework for program semantics used to design provably good mutation and crossover operators for traditional GP problem domains, and to rigorously analyze their performance (runtime analysis). A recent extension of this framework to Grammatical Evolution will also be presented.

Other promising emerging approaches to semantics in GP will be reviewed. In particular, recent developments in behavioural programming and approaches that automatically acquire a multi-objective characterization of programs will be covered as well. Current challenges and future trends in semantic GP will be identified and discussed. Efficient implementation of semantic search operators may be challenging. We will illustrate very efficient, concise and elegant implementations of these operators, which are available for download from the web.
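
A minimal sketch of the semantics-vector view and of geometric semantic operators for symbolic regression: individuals are manipulated directly through their output vectors on the fitness cases. The convex-combination form follows the standard geometric semantic GP definitions, but random constants replace the random trees used in the full operators; none of this is taken from the tutorial's downloadable implementations.

```python
import numpy as np

rng = np.random.default_rng(0)

def geometric_semantic_crossover(s1, s2, r=None):
    """Offspring semantics as a convex combination of the parents' semantics;
    its output lies 'between' the parents on every fitness case."""
    r = rng.random() if r is None else r      # in full GSGP, r is a random tree in [0, 1]
    return r * s1 + (1 - r) * s2

def geometric_semantic_mutation(s, mutation_step=0.1):
    """Perturb the semantics by a small random vector; in full GSGP the
    perturbation is the scaled difference of two random trees."""
    return s + mutation_step * rng.standard_normal(s.shape)

def fitness(semantics, target):
    return -np.sqrt(np.mean((semantics - target) ** 2))   # negated RMSE on the fitness cases

# Semantics of two parents on 5 fitness cases, and a target output vector.
target = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
p1 = np.array([0.5, 2.5, 2.0, 4.5, 6.0])
p2 = np.array([1.5, 1.5, 4.0, 3.5, 4.0])
child = geometric_semantic_crossover(p1, p2)
print(fitness(child, target) >= min(fitness(p1, target), fitness(p2, target)))  # always True
```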

He has been active in evolutionary computation research for the last 10 years with a substantial publication record in the area. He is the founder of the Geometric Theory of Evolutionary Algorithms, which unifies Evolutionary Algorithms across representations and has been used for the principled design of new successful search algorithms and for their rigorous theoretical analysis. His primary research areas are genetic programming, semantic genetic programming, and coevolutionary algorithms, with applications in program synthesis, modeling, pattern recognition, and games.

The use of EAs for experimental optimization is placed in its historical context with an overview of the landmark studies in this area carried out at the Technical University of Berlin. At the same time, statistics-based Design of Experiments (DoE) methodologies constitute a gold standard in existing laboratory practice, and are therefore reviewed as well at an introductory level for an EC audience. The main characteristics of experimental optimization work, in comparison to optimization of simulated systems, are discussed, and practical guidelines for real-world experiments with EAs are given.

For example, experimental problems can constrain the evolution due to overhead considerations, interruptions, changes of variables, missing assays, imposed population-sizes, and target assays that have different evaluation times in the case of multiple objective optimization problems. Selected modern-day case studies show the persistence of experimental optimization problems today. These cover experimental quantum systems, combinatorial drug discovery, protein expression, and others.

These applications can throw EAs out of their normal operating envelope, and raise research questions in a number of different areas ranging across constrained EAs, multiple objective EAs, robust and reliable methods for noisy and dynamic problems, and metamodeling methods for expensive evaluations. Herschel Rabitz — where he specialized in computer science aspects of experimental quantum systems. He then joined IBM-Research as a Research Staff Member , which constituted his second postdoctoral term, and where he gained real-world experience in convex and combinatorial optimization as well as in decision analytics.

He gained ample experience in solving real-life problems in optimization and data mining through working with global enterprises such as BMW, Beiersdorf, Daimler, Ford, Honda, and many others. He is also an editorial board member and associate editor of a number of journals on evolutionary and natural computing.

The optimization of the parameters of a simulation model is usually denoted as Simulation Optimization.

This is a typical problem in the design and optimization of complex systems, where a solution can only be evaluated by means of running a simulation. On the one hand, simulation optimization is black box optimization, which suits evolutionary algorithms well. On the other hand, simulation models are often stochastic, which affects the selection step in evolutionary algorithms. Furthermore, running a simulation is usually computationally expensive, so the number of solutions that can be evaluated is limited.

This tutorial will survey various simulation optimization techniques and then explain in more detail what it takes to successfully apply evolutionary algorithms for simulation optimization. It will briefly cover parallelization and surrogate models to reduce the runtime, but then focus in particular on the handling of noise. The tutorial assumes that the audience is familiar with evolutionary computation.
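
A minimal sketch of the simplest noise-handling technique relevant in this setting, explicit resampling: evaluating each solution k times and averaging reduces the noise standard deviation by a factor of sqrt(k), at the cost of k simulation runs. The simulation below is a toy placeholder, not an example from the tutorial.

```python
import numpy as np

def noisy_simulation(x, rng):
    """Placeholder for an expensive stochastic simulation of design x."""
    return -np.sum(np.square(x)) + rng.normal(0.0, 1.0)   # true value plus Gaussian noise

def resampled_fitness(x, k, rng):
    """Explicit averaging over k independent replications per evaluation."""
    return np.mean([noisy_simulation(x, rng) for _ in range(k)])

rng = np.random.default_rng(0)
x = np.array([0.1, -0.2, 0.3])
single = [noisy_simulation(x, rng) for _ in range(1000)]
averaged = [resampled_fitness(x, 16, rng) for _ in range(1000)]
print(np.std(single), np.std(averaged))   # the second is roughly 4x smaller (sqrt(16))
```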

He has been an active researcher in the field of Evolutionary Computation for over 20 years and has published extensively in peer-reviewed journals and conferences, resulting in an H-index of 52 (Google Scholar). His main research interests include optimization under uncertainty, simulation-based optimization and multi-objective optimization.

Competitive coevolution can be considered from the perspective of discovering tests that distinguish between the performance of candidate solutions. Cooperative coevolution implies that mechanisms are adopted for distributing fitness across more than one individual.

In both of these variants, the evolving entities engage in interactions that affect all the engaged parties, and result in search gradients that may be very different from those observed in a conventional evolutionary algorithm, where fitness is defined externally. This allows CoEAs to model complex systems and solve problems that are difficult or not naturally addressed using conventional evolution. This tutorial will begin by establishing basic frameworks for competitive and cooperative coevolution and noting the links to related formalisms (interactive domains and test-based problems).

We will identify the pathologies that potentially appear when assuming such coevolutionary formulations (disengagement, forgetting, mediocre stable states) and present methods that address these issues. Compositional formulations will be considered in which hierarchies of development are explicitly formed, leading to the incremental complexification of solutions.

The role of system dynamics will also be reviewed with regard to providing additional insight into how design decisions, such as the formulation assumed for cooperation, impact the development of effective solutions. Other covered developments will include hybridization with local search and relationships to shaping.

He has conducted research in genetic programming (GP) for many years. He has a particular interest in scaling up the tasks that GP can potentially be applied to. His current research attempts to appraise the utility of coevolutionary methods under non-stationary environments as encountered in streaming data applications, and to coevolve agents for single- and multi-agent reinforcement learning tasks.

In the latter case the goal is to coevolve behaviours for playing soccer under the RoboSoccer environment, a test bed for multi-agent reinforcement learning.

Multiobjective optimization algorithms usually produce a set of trade-off solutions approximating the Pareto front, where no solution from the set is better than any other in all objectives (this is called an approximation set).

While there exist many measures to assess the quality of approximation sets, no measure is as effective as visualization, especially if the Pareto front is known and can be visualized as well. Visualization in evolutionary multiobjective optimization is relevant in many aspects, such as estimating the location, range, and shape of the Pareto front, assessing conflicts and trade-offs between objectives, selecting preferred solutions, monitoring the progress or convergence of an optimization run, and assessing the relative performance of different algorithms.

This tutorial will provide a comprehensive overview of methods used in multiobjective optimization for visualizing either individual approximation sets resulting from a single algorithm run or multiple approximation sets stemming from repeated runs. The methods will be organized in a recently proposed taxonomy of visualization methods and analyzed according to a methodology for assessing and comparing visualization methods. The methodology uses a list of requirements for visualization methods and benchmark approximation sets in a similar way as performance metrics and benchmark test problems are used for comparing optimization algorithms.

His research interests are in stochastic optimization, evolutionary computation and intelligent data analysis. He focuses on evolutionary multiobjective optimization, including result visualization, constraint handling and the use of surrogate models. He is also active in promoting evolutionary computation in practice and has led optimization projects for the steel industry, car manufacturing and energy management. She was awarded the PhD degree in Information and Communication Technologies by the Jozef Stefan International Postgraduate School for her work on visualizing solution sets in multiobjective optimization.

She has completed a one-year postdoctoral fellowship at Inria Lille in France, where she worked on benchmarking multiobjective optimizers. Her research interests include evolutionary algorithms for singleobjective and multiobjective optimization, with emphasis on visualizing and benchmarking their results and applying them to real-world problems.

This tutorial will first describe the current paradigms of concurrent programming, including mainly channel-based concurrency, and which languages work with it.

It will then proceed to examine the changes evolutionary algorithms have to undergo in order to properly leverage these kinds of software architectures. It will be addressed mainly to students who have some knowledge of programming techniques, as well as those who understand the basic concepts of evolutionary algorithms and want to advance beyond the basic architecture and its usual implementations.

Eventually, the attendee will be able to reformulate existing evolutionary algorithms in a concurrent setting and will know the basic concepts needed to implement them using concurrent languages such as Go, Scala or Perl 6.

Nowadays, concurrency holds the promise of bringing high-performance computing to the most plain vanilla desktop, since the increase in power of CPUs has been due mainly to architectural, and not physical, changes.

However, the existence of languages that can use this concurrency properly is not so mainstream, with just a few languages implementing core functionalities to work with concurrent primitives, data structures and functions. Most languages are based on a relatively old concept, communicating sequential processes, which makes different threads share state only through communication; this has led to the rise of functional languages, which do not have side effects and do not change state in pure functions, and reactive architectures, which instead of working sequentially impose a certain sequence on data flow.
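
As a small preview of the reformulation discussed next, here is a sketch of what channel-style communication can look like when used for fitness evaluation in an EA. Python threads and queue.Queue stand in for the channels of Go, Scala or Perl 6 mentioned in the text; the toy fitness and all sizes are illustrative, not the tutorial's material.

```python
import queue
import threading

def evaluation_worker(tasks, results):
    """Stateless worker: reads genomes from one channel, writes (genome, fitness)
    to another.  State is shared only through the channels, CSP-style."""
    while True:
        genome = tasks.get()
        if genome is None:                      # sentinel: shut the worker down
            tasks.task_done()
            break
        results.put((genome, sum(genome)))      # toy fitness: OneMax
        tasks.task_done()

tasks, results = queue.Queue(), queue.Queue()
workers = [threading.Thread(target=evaluation_worker, args=(tasks, results)) for _ in range(4)]
for w in workers:
    w.start()

population = [[i % 2 for i in range(20)] for _ in range(100)]
for genome in population:
    tasks.put(genome)
for _ in workers:
    tasks.put(None)                             # one sentinel per worker
for w in workers:
    w.join()

evaluated = [results.get() for _ in range(len(population))]
print(max(fit for _, fit in evaluated))
```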

This tutorial will explain these basic concepts, and then apply them to evolutionary algorithms, showing how the canonical evolutionary algorithm can be turned into a stateless algorithm while keeping the bioinspired spirit and leveraging concurrency. JJ Merelo is professor at the University of Granada.

Creativity is a core ability that was fostered throughout human evolution: it has enabled new ways of solving problems and doing art, but it has also led to discoveries through unconventional ways.

Can computational processes be creative? Who should judge and what should be critiqued? How can artificial evolution help such computational processes? In turn, how can evolutionary search benefit from computational creativity? To address the above questions, in this tutorial we use principles such as unconventional search, deception, value, novelty, surprise, and quality diversity as overarching elements for connecting evolutionary computation (EC) and computational creativity (CC).

In particular, we will explore the research area in which bio-inspired algorithmic design meets creativity and search and, hence, we attempt to incorporate foundational ideas and concepts from evolutionary computation in the computational creativity domain. EC areas such as divergent search and quality diversity could benefit from theoretical approaches of computational creativity and empirical implementations of CC-inspired algorithms.

Similarly, we argue that CC research and practice can only help in advancing work on EA theory and algorithm design.

Antonios Liapis is a Lecturer at the Institute of Digital Games, University of Malta, where he bridges the gap between game technology and game design in courses focusing on human-computer creativity, digital prototyping and game development.

He received his Ph.D. His research focuses on Artificial Intelligence as an autonomous creator or as a facilitator of human creativity. His work includes computationally intelligent tools for game design, as well as computational creators that blend semantics, visuals, sound, plot and level structure to create horror games, adventure games and more. He is the general chair of the EvoMusArt conference and has served as local chair and demonstrations chair at the Computational Intelligence and Games conference. He has received several awards for his research contributions and reviewing effort.

The intertwining disciplines of image analysis, signal processing and pattern recognition are major fields of computer science, computer engineering and electrical and electronic engineering, with past and on-going research covering a full range of topics and tasks, from basic research to a huge number of real-world industrial applications. Among the techniques studied and applied within these research fields, evolutionary computation (EC), including evolutionary algorithms, swarm intelligence and other paradigms, is playing an increasingly relevant role.

Recently, evolutionary deep learning has also attracted considerable attention in these fields.

The terms Evolutionary Image Analysis and Signal Processing and Evolutionary Computer Vision are more and more commonly accepted as descriptors of a clearly defined research area and family of techniques and applications. The tutorial will introduce the general framework within which Evolutionary Image Analysis, Signal Processing and Pattern Recognition can be studied and applied, sketching a schematic taxonomy of the field and providing examples of successful real-world applications.

The application areas to be covered will include edge detection, segmentation, object tracking, object recognition, motion detection, image classification and recognition. In particular, we will discuss the detection of relevant sets of features for classification based on an information-theoretical approach derived from complex system analysis. We also focus on the use of evolutionary deep learning ideas for image analysis; this includes automatically learning the architectures, parameters and transfer functions of convolutional neural networks and autoencoders, and genetic programming, if time allows. We will show how such EC techniques can be effectively applied to image analysis and signal processing problems and provide promising results.

We will show how such EC techniques can be effectively applied to image analysis and signal processing problems and provide promising results. He is also interested in data mining, machine learning, and web information extraction. Prof Zhang has published over research papers in refereed international journals and conferences in these areas. Since he has been with the University of Parma, where he has been Associate Professor since Recent research grants include: co-management of a project funded by Italian Railway Network Society RFI aimed at developing an automatic inspection system for train pantographs; a "Marie Curie Initial Training Network" grant, for a four-year research training project in Medical Imaging using Bio-Inspired and Soft Computing; a grant from "Compagnia diS.

Paolo" on "Bioinformatic and experimental dissection of the signalling pathways underlying dendritic spine function". He has been Editor-in-chief of the "Journal of Artificial Evolution and Applications" from to Since , he has been chair of EvoIASP, an event dedicated to evolutionary computation for image analysis and signal processing, now a track of the EvoApplications conference.

He has been co-editor of special issues of journals dedicated to Evolutionary Computation for Image Analysis and Signal Processing. He has been awarded the "Evostar Award" in recognition of the most outstanding contribution to Evolutionary Computation.

Over the last decade an entire research area, denoted Exploratory Landscape Analysis (ELA), has developed around the topic of automated and feature-based problem characterization for continuous optimization problems.


