The Univariate Marginal Distribution Algorithm Copes Well With Deception and Epistasis

2021 ◽  
pp. 1-22
Author(s):  
Benjamin Doerr ◽  
Martin S. Krejca

Abstract In their recent work, Lehre and Nguyen (FOGA 2019) show that the univariate marginal distribution algorithm (UMDA) needs time exponential in the parent population size to optimize the DeceptiveLeadingBlocks (DLB) problem. They conclude from this result that univariate EDAs have difficulties with deception and epistasis. In this work, we show that this negative finding is caused by the choice of the parameters of the UMDA. When the population sizes are chosen large enough to prevent genetic drift, then the UMDA optimizes the DLB problem with high probability with at most λ(n/2 + 2e ln n) fitness evaluations. Since an offspring population size λ of order n log n can prevent genetic drift, the UMDA can solve the DLB problem with O(n^2 log n) fitness evaluations. In contrast, for classic evolutionary algorithms no better runtime guarantee than O(n^3) is known (which we prove to be tight for the (1+1) EA), so our result rather suggests that the UMDA can cope well with deception and epistasis. From a broader perspective, our result shows that the UMDA can cope better with local optima than many classic evolutionary algorithms; such a result was previously known only for the compact genetic algorithm. Together with the lower bound of Lehre and Nguyen, our result for the first time rigorously proves that running EDAs in the regime with genetic drift can lead to drastic performance losses.
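
The univariate model the UMDA maintains can be sketched in a few lines. This is a minimal illustration only: the function names, the OneMax stand-in fitness, and all parameter values are our own choices rather than the exact setup analysed in the paper, and the frequency borders 1/n and 1 − 1/n follow the usual convention.

```python
import random

def umda(fitness, n, lam, mu, steps):
    """Minimal UMDA sketch: sample lam bit strings from the univariate
    marginals p, keep the mu fittest, and refit the marginals."""
    p = [0.5] * n  # one frequency per bit position
    for _ in range(steps):
        pop = [[1 if random.random() < p[i] else 0 for i in range(n)]
               for _ in range(lam)]
        pop.sort(key=fitness, reverse=True)
        best = pop[:mu]
        # New marginal = fraction of ones among the mu fittest,
        # clamped to [1/n, 1 - 1/n] so every value stays reachable.
        p = [min(1 - 1 / n, max(1 / n, sum(x[i] for x in best) / mu))
             for i in range(n)]
    return max(pop, key=fitness)

random.seed(2)  # fixed seed so the toy run is reproducible
sol = umda(sum, n=20, lam=60, mu=20, steps=100)  # OneMax as a toy stand-in for DLB
```

With a population large enough to suppress genetic drift, the marginals move towards the optimum instead of fixating on wrong values, which is exactly the regime the abstract argues for.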

2021 ◽  
Vol 1 (1) ◽  
pp. 1-38
Author(s):  
Dogan Corus ◽  
Andrei Lissovoi ◽  
Pietro S. Oliveto ◽  
Carsten Witt

We analyse the impact of selective pressure on the global optimisation capabilities of steady-state evolutionary algorithms (EAs). For the standard bimodal benchmark function TwoMax, we rigorously prove that using uniform parent selection leads with high probability to exponential runtimes for locating both optima for the standard (μ+1) EA and (μ+1) RLS with any polynomial population sizes. However, we prove that selecting the worst individual as parent leads to efficient global optimisation with overwhelming probability for reasonable population sizes. Since always selecting the worst individual may have detrimental effects for escaping from local optima, we consider the performance of stochastic parent selection operators with low selective pressure for a function class called TruncatedTwoMax, where one slope is shorter than the other. An experimental analysis shows that EAs equipped with inverse tournament selection, where the loser is selected for reproduction and tournament sizes are small, globally optimise TwoMax efficiently and effectively escape from the local optima of TruncatedTwoMax with high probability. Thus, they identify both optima efficiently while uniform (or stronger) selection fails in theory and in practice. We then show the power of inverse selection on function classes from the literature where populations are essential by providing rigorous proofs or experimental evidence that it outperforms uniform selection with or without a restart strategy. We conclude the article by confirming our theoretical insights with an empirical analysis of the different selective pressures on standard benchmarks of the classical MaxSat and multidimensional knapsack problems.
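
The low-pressure operator studied here, inverse tournament selection, is easy to state in code. The sketch below is our illustration, not the authors' implementation; the single-bit-flip mutation mirrors RLS, and the benchmark and parameter values are placeholders.

```python
import random

def inverse_tournament(pop, fitness, k=2):
    """Sample k individuals uniformly at random and return the LOSER
    (lowest fitness) as the parent -- deliberately low selective pressure."""
    contenders = random.sample(pop, k)
    return min(contenders, key=fitness)

def steady_state_step(pop, fitness, k=2):
    """One (mu+1)-style step: mutate the tournament loser, then remove
    the worst individual to keep the population size fixed."""
    parent = inverse_tournament(pop, fitness, k)
    child = parent[:]
    i = random.randrange(len(child))
    child[i] = 1 - child[i]            # single bit flip, as in RLS
    pop.append(child)
    pop.remove(min(pop, key=fitness))  # elitist replacement
    return pop

random.seed(0)
n = 10
pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(5)]
twomax = lambda x: max(sum(x), len(x) - sum(x))  # bimodal toy benchmark
for _ in range(50):
    steady_state_step(pop, twomax)
```

Selecting the loser keeps individuals on the "wrong" slope reproducing, which is what lets the population track both branches of a bimodal landscape.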


Algorithmica ◽  
2020 ◽  
Author(s):  
Johannes Lengler ◽  
Dirk Sudholt ◽  
Carsten Witt

Abstract The compact Genetic Algorithm (cGA) evolves a probability distribution favoring optimal solutions in the underlying search space by repeatedly sampling from the distribution and updating it according to promising samples. We study the intricate dynamics of the cGA on the test function OneMax, and how its performance depends on the hypothetical population size K, which determines how quickly decisions about promising bit values are fixated in the probabilistic model. It is known that the cGA and the Univariate Marginal Distribution Algorithm (UMDA), a related algorithm whose population size is called λ, run in expected time O(n log n) when the population size is just large enough (K = Θ(√n log n) and λ = Θ(√n log n), respectively) to avoid wrong decisions being fixated. The UMDA also shows the same performance in a very different regime (λ = Θ(log n), equivalent to K = Θ(log n) in the cGA) with much smaller population size, but for very different reasons: many wrong decisions are fixated initially, but then reverted efficiently. If the population size is even smaller (o(log n)), the time is exponential. We show that population sizes in between the two optimal regimes are worse as they yield larger runtimes: we prove a lower bound of Ω(K^(1/3) n + n log n) for the cGA on OneMax for K = O(√n / log^2 n). For K = Ω(log^3 n) the runtime increases with growing K before dropping again to O(K√n + n log n) for K = Ω(√n log n). This suggests that the expected runtime for the cGA is a bimodal function in K with two very different optimal regions and worse performance in between.
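
The cGA update rule that K parameterises fits in a few lines. The sketch below is ours: the borders 1/n and 1 − 1/n follow the usual convention, the parameter values are illustrative, and rounding the final frequency vector to a solution is a sketch convenience, not part of the original algorithm.

```python
import random

def cga_onemax(n, K, max_iters=100_000):
    """Minimal cGA sketch on OneMax: sample two solutions from the
    frequency vector p and, wherever they disagree, shift the
    frequency by 1/K towards the winner's bit value."""
    p = [0.5] * n
    for _ in range(max_iters):
        x = [1 if random.random() < p[i] else 0 for i in range(n)]
        y = [1 if random.random() < p[i] else 0 for i in range(n)]
        if sum(y) > sum(x):
            x, y = y, x  # make x the winner under OneMax
        for i in range(n):
            if x[i] != y[i]:
                p[i] += 1 / K if x[i] == 1 else -1 / K
                p[i] = min(1 - 1 / n, max(1 / n, p[i]))  # respect borders
        if all(q >= 1 - 1 / n for q in p):  # model fully converged
            break
    return [1 if q > 0.5 else 0 for q in p]  # round the model to a solution

random.seed(1)
solution = cga_onemax(n=10, K=100)  # K chosen well above the drift threshold
```

Each disagreement moves a frequency by only 1/K, so larger K fixates decisions more slowly; this is the single knob whose bimodal effect on the runtime the abstract describes.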



Author(s):  
Anderson Sergio ◽  
Sidartha Carvalho ◽  
Marco Rego

Compact evolutionary algorithms have proven to be an efficient alternative for solving optimization problems in computing environments with low processing power. In this kind of solution, a probability distribution simulates the behavior of a population, thereby saving memory. Several compact algorithms have been proposed, including the compact genetic algorithm and compact differential evolution. This work aims to investigate the use of compact approaches in another important class of evolutionary algorithms: evolution strategies. This paper proposes two different approaches for compact versions of evolution strategies. Experiments were performed and the results analyzed. The results showed that, depending on the nature of the problem, the use of the compact version of evolution strategies can be rewarding.


2006 ◽  
Vol 14 (3) ◽  
pp. 277-289 ◽  
Author(s):  
Reza Rastegar ◽  
Arash Hariri

The compact Genetic Algorithm (cGA) is an Estimation of Distribution Algorithm that generates the offspring population according to an estimated probabilistic model of the parent population instead of using traditional recombination and mutation operators. The cGA only needs a small amount of memory; therefore, it may be quite useful in memory-constrained applications. This paper introduces a theoretical framework for studying the cGA from the convergence point of view, in which we model the cGA by a Markov process and approximate its behavior using an Ordinary Differential Equation (ODE). Then, we prove that the corresponding ODE converges to local optima and stays there. Consequently, we conclude that the cGA will converge to the local optima of the function to be optimized.


2018 ◽  
Vol 26 (1) ◽  
pp. 89-116 ◽  
Author(s):  
Patrik Gustavsson ◽  
Anna Syberfeldt

Non-dominated sorting is a technique often used in evolutionary algorithms to determine the quality of solutions in a population. The most common algorithm is the Fast Non-dominated Sort (FNS). This algorithm, however, has the drawback that its performance deteriorates when the population size grows. The same drawback also applies to other non-dominated sorting algorithms such as the Efficient Non-dominated Sort with Binary Strategy (ENS-BS). An algorithm suggested to overcome this drawback is the Divide-and-Conquer Non-dominated Sort (DCNS), which works well on a limited number of objectives but deteriorates when the number of objectives grows. This article presents a new, more efficient algorithm called the Efficient Non-dominated Sort with Non-Dominated Tree (ENS-NDT). ENS-NDT is an extension of the ENS-BS algorithm and uses a novel Non-Dominated Tree (NDTree) to speed up the non-dominated sorting. ENS-NDT is able to handle large population sizes and a large number of objectives more efficiently than existing algorithms for non-dominated sorting. In the article, it is shown that with ENS-NDT the runtime of multi-objective optimization algorithms such as the Non-Dominated Sorting Genetic Algorithm II (NSGA-II) can be substantially reduced.
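
For reference, the baseline FNS algorithm that the article benchmarks against can be sketched as follows (our illustration, assuming minimisation of all objectives):

```python
def dominates(a, b):
    """True if a Pareto-dominates b under minimisation: a is no worse in
    every objective and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def fast_non_dominated_sort(points):
    """Classic FNS: O(M * N^2) dominance comparisons for N points with
    M objectives -- the quadratic cost faster variants aim to reduce.
    Returns fronts as lists of indices; front 0 is the Pareto set."""
    n = len(points)
    dominated_by = [[] for _ in range(n)]  # S_p: indices that p dominates
    counts = [0] * n                       # n_p: how many points dominate p
    fronts = [[]]
    for p in range(n):
        for q in range(n):
            if dominates(points[p], points[q]):
                dominated_by[p].append(q)
            elif dominates(points[q], points[p]):
                counts[p] += 1
        if counts[p] == 0:
            fronts[0].append(p)
    i = 0
    while fronts[i]:
        nxt = []
        for p in fronts[i]:
            for q in dominated_by[p]:
                counts[q] -= 1
                if counts[q] == 0:  # q is dominated only by earlier fronts
                    nxt.append(q)
        i += 1
        fronts.append(nxt)
    return fronts[:-1]  # drop the trailing empty front

fronts = fast_non_dominated_sort([(1, 5), (2, 2), (5, 1), (3, 3), (4, 4)])
```

Every pair of points is compared once per direction, which is exactly why the runtime degrades as the population grows.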


2012 ◽  
pp. 352-376 ◽  
Author(s):  
M. Kanthababu

Recently, evolutionary algorithms have attracted growing interest among researchers and manufacturing engineers for solving multiple-objective problems. The objective of this chapter is to give readers a comprehensive understanding of, and better insight into, the application of evolutionary algorithms to multi-objective problems in manufacturing processes. The most important feature of evolutionary algorithms is that they can successfully find globally optimal solutions without getting restricted to local optima. This chapter introduces the reader to the basic concepts of single-objective optimization, multi-objective optimization, and evolutionary algorithms, and also gives an overview of their salient features. Some of the evolutionary algorithms widely used by researchers for solving multiple objectives are presented and compared. Among them, the Non-dominated Sorting Genetic Algorithm (NSGA) and the Non-dominated Sorting Genetic Algorithm-II (NSGA-II) have emerged as the most efficient algorithms for solving multi-objective problems in manufacturing processes. The NSGA method applied to a complex manufacturing process, namely the plateau honing process, considering multiple objectives, is detailed with a case study. The chapter concludes by suggesting the implementation of evolutionary algorithms in different research areas that hold promise for future applications.


2021 ◽  
Vol 12 (2) ◽  
pp. 1-17
Author(s):  
Xingsi Xue ◽  
Xiaojing Wu ◽  
Junfeng Chen

An ontology, the state-of-the-art knowledge modeling technique, provides a shared vocabulary of a domain by formally representing the meaning of its concepts, the properties they possess, and the relations among them. However, ontologies in the same domain can differ in conceptual modeling and granularity level, which yields the ontology heterogeneity problem. To enable data and knowledge transfer, sharing, and reuse between two intelligent systems, it is important to bridge the semantic gap between their ontologies through ontology matching. To optimize the quality of the ontology alignment, this article proposes an Interactive Compact Genetic Algorithm (ICGA)-based ontology matching technique, which consists of an automatic ontology matching process based on a Compact Genetic Algorithm (CGA) and a collaborative user validating process based on an argumentation framework. First, the CGA is used to automatically match the ontologies, and when it gets stuck in a local optimum, the collaborative validation based on the multi-relationship argumentation framework is activated to help the CGA jump out of the local optimum. In addition, we construct a discrete optimization model to define the ontology matching problem and propose a hybrid similarity measure to calculate the similarity value of two concepts. In the experiment, we test the performance of ICGA on the Ontology Alignment Evaluation Initiative's interactive track, and the experimental results show that ICGA can effectively determine ontology alignments with high quality.

