# Publications

Publications by category in reverse chronological order.

## Journals

- Self-adjusting Offspring Population Sizes Outperform Fixed Parameters on the Cliff Function
  *Mario Alejandro Hevia Fajardo and Dirk Sudholt*. *Artificial Intelligence*, 2024.

  In the discrete domain, self-adjusting parameters of evolutionary algorithms (EAs) have emerged as a fruitful research area, with many runtime analyses showing that self-adjusting parameters can outperform the best fixed parameters. Most existing runtime analyses focus on elitist EAs on simple problems, for which moderate performance gains were shown. Here we consider a much more challenging scenario: the multimodal function Cliff, defined as an example where a (1,λ) EA is effective, and for which the best known upper runtime bound for standard EAs is O(n^{25}).

  We prove that a (1,λ) EA self-adjusting the offspring population size λ using success-based rules optimises Cliff in O(n) expected generations and O(n log n) expected evaluations. Along the way, we prove tight upper and lower bounds on the runtime for fixed λ (up to a logarithmic factor) and identify the runtime for the best fixed λ as n^{η} for η ≈ 3.9767 (up to sub-polynomial factors). Hence, the self-adjusting (1,λ) EA outperforms the best fixed parameter by a factor of at least n^{2.9767}.

- Self-adjusting Population Sizes for Non-elitist Evolutionary Algorithms: Why Success Rates Matter
  *Mario Alejandro Hevia Fajardo and Dirk Sudholt*. *Algorithmica*, 2024.

  Evolutionary algorithms (EAs) are general-purpose optimisers that come with several parameters, such as the sizes of parent and offspring populations or the mutation rate. It is well known that the performance of EAs may depend drastically on these parameters. Recent theoretical studies have shown that self-adjusting parameter control mechanisms that tune parameters during the algorithm run can provably outperform the best static parameters in EAs on discrete problems. However, the majority of these studies concerned elitist EAs, and we do not have a clear answer on whether the same mechanisms can be applied to non-elitist EAs.

  We study one of the best-known parameter control mechanisms, the one-fifth success rule, to control the offspring population size λ in the non-elitist (1,λ) EA. It is known that the (1,λ) EA has a sharp threshold with respect to the choice of λ where the expected runtime on the benchmark function OneMax changes from polynomial to exponential time. Hence, it is not clear whether parameter control mechanisms are able to find and maintain suitable values of λ. For OneMax we show that the answer crucially depends on the success rate s (i.e. a one-(s+1)-th success rule). We prove that, if the success rate is appropriately small, the self-adjusting (1,λ) EA optimises OneMax in O(n) expected generations and O(n log n) expected evaluations, the best possible runtime for any unary unbiased black-box algorithm. A small success rate is crucial: we also show that if the success rate is too large, the algorithm has an exponential runtime on OneMax and other functions with similar characteristics.

- Theoretical and Empirical Analysis of Parameter Control Mechanisms in the (1 + (λ,λ)) Genetic Algorithm
  *Mario Alejandro Hevia Fajardo and Dirk Sudholt*. *ACM Trans. Evol. Learn. Optim.*, Jan 2023.

  The self-adjusting (1 + (λ,λ)) GA is the best known genetic algorithm for problems with a good fitness-distance correlation, such as OneMax. It uses a parameter control mechanism for the parameter λ that governs the mutation strength and the number of offspring. However, on multimodal problems, the parameter control mechanism tends to increase λ uncontrollably.

  We study this problem for the standard Jump_{k} benchmark problem class using runtime analysis. The self-adjusting (1 + (λ,λ)) GA behaves like a (1+n) EA whenever the maximum value for λ is reached. This is ineffective for problems where large jumps are required. Capping λ at smaller values is beneficial for such problems. Finally, resetting λ to 1 allows the parameter to cycle through the parameter space. We show that resets are effective for all Jump_{k} problems: the self-adjusting (1 + (λ,λ)) GA performs as well as the (1+1) EA with the optimal mutation rate and evolutionary algorithms with heavy-tailed mutation, apart from a small polynomial overhead.

  Along the way, we present new general methods for translating existing runtime bounds from the (1+1) EA to the self-adjusting (1 + (λ,λ)) GA. We also show that the algorithm presents a bimodal parameter landscape with respect to λ on Jump_{k}. For appropriate n and k, the landscape features a local optimum in a wide basin of attraction and a global optimum in a narrow basin of attraction. To our knowledge, this is the first proof of a bimodal parameter landscape for the runtime of an evolutionary algorithm on a multimodal problem.
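The success-based rule analysed in the journal entries above can be sketched in a few lines of Python. This is an illustrative sketch, not code from any of the papers: the update factor F, the default success rate s, and the cap of λ at n are choices made here to keep the toy bounded, and the function names are invented for this example.

```python
import random

def onemax(x):
    """The OneMax benchmark: number of 1-bits (maximum n)."""
    return sum(x)

def self_adjusting_opl_ea(n, s=0.5, F=1.5, max_gens=100_000, seed=1):
    """(1,λ) EA self-adjusting λ with a one-(s+1)-th success rule:
    divide λ by F after an improving generation, otherwise multiply
    it by F**(1/s). Sketch only; parameter defaults are illustrative."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    lam = 1.0
    for gen in range(1, max_gens + 1):
        # Create round(λ) offspring by standard bit mutation (rate 1/n).
        offspring = [[bit ^ (rng.random() < 1 / n) for bit in x]
                     for _ in range(max(1, round(lam)))]
        best = max(offspring, key=onemax)
        success = onemax(best) > onemax(x)
        x = best  # comma selection: the parent is always discarded
        lam = max(1.0, lam / F) if success else min(float(n), lam * F ** (1 / s))
        if onemax(x) == n:
            return gen  # generations until the optimum was sampled
    return None  # optimum not found within the budget
```

With a small success rate (s < 1 in the analyses above), λ grows quickly after unsuccessful generations, which is what keeps the non-elitist algorithm from losing progress near the optimum.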

## Conferences (peer reviewed)

- Ranking Diversity Benefits Coevolutionary Algorithms on an Intransitive Game
  *Mario Hevia Fajardo and Per Kristian Lehre*. In *Parallel Problem Solving from Nature – PPSN XVIII*, 2024.

  Competitive coevolutionary algorithms (CoEAs) often encounter so-called coevolutionary pathologies, particularly cycling behaviour, which becomes more pronounced in games where there is no clear hierarchy of superiority among the possible strategies (intransitive games). In order to avoid these pathologies and ensure efficient optimisation, it has been suggested that it is critical to choose a _good_ evaluation environment (the set of solutions used for evaluation).

  In this paper, we use runtime analysis to increase our understanding of the essential characteristics that evaluation environments should possess to ensure efficient runtime on the intransitive problem class Bilinear. For this problem class, we observe that it is beneficial to maintain a high _diversity of rankings_ in the evaluation environment, that is, a set of individuals used for evaluation which are diverse in how they rank opponents.

  We propose and analyse two mechanisms that implement this idea. In the first approach, we ensure diversity of rankings through an archive. In the second approach, we introduce a CoEA without an archive, but with a ranking diversity mechanism. Both approaches optimise Bilinear in expected polynomial time.

- A Self-adaptive Coevolutionary Algorithm
  In *Proceedings of the Genetic and Evolutionary Computation Conference*, 2024.

  Coevolutionary algorithms are helpful computational abstractions of adversarial behaviour, and they demonstrate multiple ways that populations of competing adversaries influence one another. We introduce the ability for each competitor’s mutation rate to evolve through self-adaptation. Because dynamic environments are frequently addressed with self-adaptation, we set up dynamic problem environments to investigate the impact of this ability. For a simple bilinear problem, a sensitivity analysis of the adaptive method’s parameters reveals that it is robust over a range of multiplicative rate factors when the rate is changed up or down with equal probability. An empirical study determines that each population’s mutation rates converge to values close to the error threshold. Mutation rate dynamics are complex when both populations adapt their rates. Large-scale empirical self-adaptation results reveal that both reasonable solutions and rates can be found. This addresses the challenge of selecting ideal static mutation rates in coevolutionary algorithms. The algorithm’s payoffs are also robust: they are rarely poor, frequently as high as the payoff of the static rate to which they converge, and on rare runs higher.

- Analysis of a Pairwise Dominance Coevolutionary Algorithm with Spatial Topology
  2024.

  Competitive coevolutionary algorithms are used to model adversarial dynamics. The diversity of the adversarial populations can be changed with a spatial topology. To achieve more clarity in how a spatial topology impacts performance and complexity, we introduce a spatial topology to a pairwise dominance coevolutionary algorithm named PDCoEA. The new algorithm is called STPDCoEA. We use a methodology for consistent algorithm comparison to empirically study the impact of topology, problem, and mutation rates on the dynamics and payoffs in STPDCoEA. We compare records of multi-run dynamics on three problems and observe that the spatial topology impacts both performance and diversity.

- Runtime Analysis of a Co-Evolutionary Algorithm: Overcoming Negative Drift in Maximin-Optimisation
  *Mario Alejandro Hevia Fajardo, Per Kristian Lehre, and Shishen Lin*. In *Proceedings of the 17th ACM/SIGEVO Conference on Foundations of Genetic Algorithms*, 2023.

  Co-evolutionary algorithms have found several applications in game-theoretic settings and optimisation problems with an adversary, particularly where the strategy space is discrete and exponentially large, and where classical game-theoretic methods fail. However, the application of co-evolutionary algorithms is difficult because they often display pathological behaviour, such as cyclic behaviour and evolutionary forgetting. These challenges have prevented the broad application of co-evolutionary algorithms.

  We derive, via rigorous mathematical methods, bounds on the expected time of a simple co-evolutionary algorithm until it discovers a Maximin-solution on the pseudo-Boolean Bilinear problem. Despite the intransitive nature of the problem leading to cyclic behaviour of the algorithm, we prove that the algorithm obtains the Maximin-solution in expected O(n^{1.5}) time.

  However, we also show that the algorithm quickly forgets the Maximin-solution and moves away from it. This results in a large total regret of Θ(Tn^{1.5}) after T iterations. Finally, we show that using a simple archive solves this problem, reducing the total regret significantly. Along the way, we present new mathematical tools to compute the expected time for co-evolutionary algorithms to obtain a Maximin-solution. We are confident that these tools can help further advance runtime analysis in both co-evolutionary and evolutionary algorithms.

- How Fitness Aggregation Methods Affect the Performance of Competitive CoEAs on Bilinear Problems
  *Mario Alejandro Hevia Fajardo and Per Kristian Lehre*. In *Proceedings of the Genetic and Evolutionary Computation Conference*, 2023.

  Competitive co-evolutionary algorithms (CoEAs) do not rely solely on an external function to assign fitness values to sampled solutions. Instead, they use the aggregation of outcomes from interactions between competing solutions to rank solutions and make selection decisions. This makes CoEAs a useful tool for optimisation problems that have intrinsically interactive domains. Over the past decades, many ways to aggregate the outcomes of interactions have been considered. At the moment, it is unclear which of these is the best choice. Previous research is fragmented, and most of the fitness aggregation methods (fitness measures) proposed have only been studied empirically.

  We argue that a proper understanding of the dynamics of CoEAs and their fitness measures can only be achieved through rigorous analysis of their behaviour. In this work we make a step towards this goal by using runtime analysis to study two commonly used fitness measures. We show a dichotomy in the behaviour of a (1,λ) CoEA when optimising a Bilinear problem. The algorithm finds a Nash equilibrium efficiently if the worst interaction is used as the fitness measure, but it takes exponential time with overwhelming probability if the average of all interactions is used instead.

- Analysis of a Pairwise Dominance Coevolutionary Algorithm And DefendIt
  In *Proceedings of the Genetic and Evolutionary Computation Conference*, 2023.

  While competitive coevolutionary algorithms are ideally suited to model adversarial dynamics, their complexity makes it difficult to understand what is happening when they execute. To achieve better clarity, we introduce a game named DefendIt and explore a previously developed pairwise dominance coevolutionary algorithm named PDCoEA. We devise a methodology for consistent algorithm comparison, then use it to empirically study the impact of population size, relative budget limits between the defender and attacker, and mutation rates on the dynamics and payoffs. Our methodology provides reliable comparisons and records of run and multi-run dynamics. Our supplementary material also offers enticing and detailed animations of a pair of players’ game moves over the course of a game of millions of moves, matched to the same run’s population payoffs.

- Hard Problems Are Easier for Success-Based Parameter Control
  *Mario Alejandro Hevia Fajardo and Dirk Sudholt*. In *Proceedings of the Genetic and Evolutionary Computation Conference*, 2022.

  Recent works showed that simple success-based rules for self-adjusting parameters in evolutionary algorithms (EAs) can match or outperform the best fixed parameters on discrete problems. Non-elitism in a (1,λ) EA combined with a self-adjusted offspring population size λ outperforms common EAs on the multimodal Cliff problem. However, it was shown that this only holds if the success rate s that governs self-adjustment is small enough. Otherwise, even on OneMax, the self-adjusting (1,λ) EA stagnates on an easy slope, where frequent successes drive down the offspring population size.

  We show that self-adjustment works as intended in the absence of easy slopes. We define everywhere hard functions, for which successes are never easy to find, and show that the self-adjusting (1,λ) EA is robust with respect to the choice of the success rate s. We give a general fitness-level upper bound on the number of evaluations and show that the expected number of generations is at most O(d + log(1/p^{+}_{min})), where d is the number of non-optimal fitness values and p^{+}_{min} is the smallest probability of finding an improvement from a non-optimal search point. We discuss implications for the everywhere hard function LeadingOnes and a new class OneMaxBlocks of everywhere hard functions with tunable difficulty.

- Self-Adjusting Offspring Population Sizes Outperform Fixed Parameters on the Cliff Function
  *Mario Alejandro Hevia Fajardo and Dirk Sudholt*. In *Proceedings of the 16th ACM/SIGEVO Conference on Foundations of Genetic Algorithms*, 2021.

  In the discrete domain, self-adjusting parameters of evolutionary algorithms (EAs) have emerged as a fruitful research area, with many runtime analyses showing that self-adjusting parameters can outperform the best fixed parameters. Most existing runtime analyses focus on elitist EAs on simple problems, for which moderate performance gains were shown. Here we consider a much more challenging scenario: the multimodal function Cliff, defined as an example where a (1,λ) EA is effective, and for which the best known upper runtime bound for standard EAs is O(n^{25}).

  We prove that a (1,λ) EA self-adjusting the offspring population size λ using success-based rules optimises Cliff in O(n) expected generations and O(n log n) expected evaluations. Along the way, we prove tight upper and lower bounds on the runtime for fixed λ (up to a logarithmic factor) and identify the runtime for the best fixed λ as n^{η} for η ≈ 3.9767 (up to sub-polynomial factors). Hence, the self-adjusting (1,λ) EA outperforms the best fixed parameter by a factor of at least n^{2.9767} (up to sub-polynomial factors).

- Self-Adjusting Population Sizes for Non-Elitist Evolutionary Algorithms: Why Success Rates Matter
  *Mario Alejandro Hevia Fajardo and Dirk Sudholt*. In *Proceedings of the Genetic and Evolutionary Computation Conference*, 2021.

  Recent theoretical studies have shown that self-adjusting mechanisms can provably outperform the best static parameters in evolutionary algorithms on discrete problems. However, the majority of these studies concerned elitist algorithms, and we do not have a clear answer on whether the same mechanisms can be applied to non-elitist algorithms.

  We study one of the best-known parameter control mechanisms, the one-fifth success rule, to control the offspring population size λ in the non-elitist (1,λ) EA. It is known that the (1,λ) EA has a sharp threshold with respect to the choice of λ where the runtime on OneMax changes from polynomial to exponential time. Hence, it is not clear whether parameter control mechanisms are able to find and maintain suitable values of λ.

  We show that the answer crucially depends on the success rate s (i.e. a one-(s+1)-th success rule). We prove that, if the success rate is appropriately small, the self-adjusting (1,λ) EA optimises OneMax in O(n) expected generations and O(n log n) expected evaluations. A small success rate is crucial: we also show that if the success rate is too large, the algorithm has an exponential runtime on OneMax.

- On the Choice of the Parameter Control Mechanism in the (1 + (λ,λ)) Genetic Algorithm
  *Mario Alejandro Hevia Fajardo and Dirk Sudholt*. In *Proceedings of the Genetic and Evolutionary Computation Conference*, 2020.

  The self-adjusting (1 + (λ,λ)) GA is the best known genetic algorithm for problems with a good fitness-distance correlation, such as OneMax. It uses a parameter control mechanism for the parameter λ that governs the mutation strength and the number of offspring. However, on multimodal problems, the parameter control mechanism tends to increase λ uncontrollably.

  We study this problem and possible solutions to it using rigorous runtime analysis for the standard Jump_{k} benchmark problem class. The original algorithm behaves like a (1+n) EA whenever the maximum value λ = n is reached. This is ineffective for problems where large jumps are required. Capping λ at smaller values is beneficial for such problems. Finally, resetting λ to 1 allows the parameter to cycle through the parameter space. We show that this strategy is effective for all Jump_{k} problems: the (1 + (λ,λ)) GA performs as well as the (1+1) EA with the optimal mutation rate and fast evolutionary algorithms, apart from a small polynomial overhead. Along the way, we present new general methods for bounding the runtime of the (1 + (λ,λ)) GA that allow translating existing runtime bounds from the (1+1) EA to the self-adjusting (1 + (λ,λ)) GA. Our methods are easy to use and give upper bounds for novel classes of functions.

- An Empirical Evaluation of Success-Based Parameter Control Mechanisms for Evolutionary Algorithms
  *Mario Alejandro Hevia Fajardo*. In *Proceedings of the Genetic and Evolutionary Computation Conference*, 2019.

  Success-based parameter control mechanisms for evolutionary algorithms (EAs) change the parameters every generation based on the success of the previous generation and the current parameter value. In recent years, several success-based parameter control mechanisms have been proposed in the literature. The purpose of this paper is to evaluate and compare their sequential optimisation time and parallelisation on different types of problems. The geometric mean of the sequential and parallel optimisation times is used as a new metric to evaluate the parallelisation of the EAs, capturing the trade-off between both optimisation times. We perform an empirical study comprising nine algorithms on four benchmark functions: eight algorithms taken from the literature and one modification proposed here. We show that the modified algorithm has a 20% faster sequential optimisation time than the fastest known GA on OneMax. Additionally, we show the benefits of success-based parameter control mechanisms for NP-hard problems, and using the proposed metric we also show that success-based offspring population size mechanisms are outperformed by static choices in parallel EAs.
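The dichotomy between fitness aggregation methods discussed in the GECCO 2023 entry above can be illustrated with a toy payoff. The payoff function below is a hypothetical stand-in, not the exact Bilinear class from the papers; only the two aggregation rules (worst interaction vs. average of all interactions) follow the text.

```python
def worst_interaction_fitness(x, opponents, payoff):
    """Fitness of x as its payoff against the worst-case opponent."""
    return min(payoff(x, y) for y in opponents)

def average_interaction_fitness(x, opponents, payoff):
    """Fitness of x as its average payoff over all opponents."""
    return sum(payoff(x, y) for y in opponents) / len(opponents)

# Hypothetical bilinear-style payoff on bit strings (illustration only):
# the maximiser's payoff grows with |x| and with how far the opponent's
# number of 1-bits sits below half the string length.
def toy_payoff(x, y):
    return sum(x) * (len(y) / 2 - sum(y))

x = [1, 0, 1, 0]                          # candidate with two 1-bits
opponents = [[0, 0, 0, 1], [1, 1, 1, 1]]  # opponents with one and four 1-bits
print(worst_interaction_fitness(x, opponents, toy_payoff))    # -4.0
print(average_interaction_fitness(x, opponents, toy_payoff))  # -1.0
```

The same candidate scores very differently under the two measures, which is the kind of behavioural gap the runtime analysis in that paper makes rigorous.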

## Thesis

- Runtime Analysis of Success-Based Parameter Control Mechanisms for Evolutionary Algorithms on Multimodal Problems
  *Mario Alejandro Hevia Fajardo*. Apr 2023.
  Evolutionary algorithms are simple general-purpose optimisers often used to solve complex engineering and design problems. They mimic the process of natural evolution: they use a population of possible solutions to a problem that evolves by mutating and recombining solutions, identifying increasingly better solutions over time. Evolutionary algorithms have been applied to a broad range of problems in various disciplines with remarkable success. However, the reasons behind their success are often elusive: their performance often depends crucially, and unpredictably, on their parameter settings. It is, furthermore, well known that there are no globally good parameters, that is, the correct parameters for one problem may differ substantially from the parameters needed for another, making it harder to translate previously successful parameter settings to new problems. Therefore, understanding how to properly select the parameters is an important but challenging task. This is commonly known as the parameter selection problem.

A promising solution to this problem is the use of automated dynamic parameter selection schemes (parameter control) that allow evolutionary algorithms to identify and continuously track optimal parameters throughout the course of evolution without human intervention. In recent years the study of parameter control mechanisms in evolutionary algorithms has emerged as a very fruitful research area. However, most existing runtime analyses focus on simple problems with benign characteristics, for which fixed parameter settings already run efficiently and only moderate performance gains were shown. The aim of this thesis is to understand how parameter control mechanisms can be used on more complex and challenging problems with many local optima (multimodal problems) to speed up optimisation.

We use advanced methods from the analysis of algorithms and probability theory to evaluate the performance of evolutionary algorithms, estimating the expected time until an algorithm finds satisfactory solutions for illustrative and relevant optimisation problems as a vital stepping stone towards designing more efficient evolutionary algorithms. We first analyse current parameter control mechanisms on multimodal problems to understand their strengths and weaknesses. Subsequently we use this knowledge to design parameter control mechanisms that mitigate the weaknesses of current mechanisms while maintaining their strengths. Finally, we show with theoretical and empirical analyses that these enhanced parameter control mechanisms are able to outperform the best fixed parameter settings on multimodal optimisation.