Jun 29, 2018
by Nikolaus Hansen; Anne Auger; Olaf Mersmann; Tea Tusar; Dimo Brockhoff

COCO is a platform for Comparing Continuous Optimizers in a black-box setting. It aims at automating the tedious and repetitive task of benchmarking numerical optimization algorithms to the greatest possible extent. We present the rationale behind the development of the platform as a general proposition for a guideline towards better benchmarking. We detail underlying fundamental concepts of COCO such as its definition of a problem, the idea of instances, the relevance of target values, and...

Topics: Machine Learning, Artificial Intelligence, Numerical Analysis, Computing Research Repository,...

Source: http://arxiv.org/abs/1603.08785

Jun 30, 2018
by Alexandre Chotard; Anne Auger; Nikolaus Hansen

This paper analyses a $(1,\lambda)$-Evolution Strategy, a randomised comparison-based adaptive search algorithm, on a simple constraint optimisation problem. The algorithm uses resampling to handle the constraint and optimizes a linear function with a linear constraint. Two cases are investigated: first the case where the step-size is constant, and second the case where the step-size is adapted using path length control. We exhibit for each case a Markov chain whose stability analysis would...

Topics: Neural and Evolutionary Computing, Mathematics, Computing Research Repository, Optimization and...

Source: http://arxiv.org/abs/1404.3023
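
The setting described above can be illustrated with a small sketch: a (1,λ)-ES with constant step-size minimizing a linear function, where offspring violating a linear constraint are resampled. The objective, constraint, and constants below are illustrative stand-ins, not the paper's exact setting.

```python
import numpy as np

def one_lambda_es_resampling(n=2, lam=10, sigma=0.5, iterations=50, seed=1):
    """Sketch of a (1,lambda)-ES with constant step-size that handles a
    linear constraint by resampling infeasible candidates."""
    rng = np.random.default_rng(seed)
    f = lambda x: x[0]                 # linear objective to minimize
    feasible = lambda x: x[1] <= 1.0   # linear constraint (illustrative)
    x = np.zeros(n)                    # feasible starting point
    for _ in range(iterations):
        offspring = []
        for _ in range(lam):
            # resample until the candidate satisfies the constraint
            while True:
                y = x + sigma * rng.standard_normal(n)
                if feasible(y):
                    break
            offspring.append(y)
        # comparison-based comma selection: best of the lambda offspring
        x = min(offspring, key=f)
    return x

x = one_lambda_es_resampling()  # ends feasible, with x[0] driven well below 0
```

Because selection acts only on the objective while resampling biases the sampling distribution near the constraint boundary, the pair (normalized distance to the constraint, step) forms the Markov chain that the paper analyses.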

Jun 30, 2018
by Ilya Loshchilov; Marc Schoenauer; Michèle Sebag; Nikolaus Hansen

The Covariance Matrix Adaptation Evolution Strategy (CMA-ES) is widely accepted as a robust derivative-free continuous optimization algorithm for non-linear and non-convex optimization problems. CMA-ES is well known to be almost parameterless, meaning that only one hyper-parameter, the population size, is proposed to be tuned by the user. In this paper, we propose a principled approach called self-CMA-ES to achieve the online adaptation of CMA-ES hyper-parameters in order to improve its overall...

Topics: Neural and Evolutionary Computing, Computing Research Repository, Artificial Intelligence

Source: http://arxiv.org/abs/1406.2623

Jun 29, 2018
by Tea Tusar; Dimo Brockhoff; Nikolaus Hansen; Anne Auger

The bbob-biobj test suite contains 55 bi-objective functions in the continuous domain, which are derived from combining functions of the well-known single-objective noiseless bbob test suite. Besides giving the actual function definitions and presenting their (known) properties, this documentation also aims at giving the rationale behind our approach in terms of function groups, instances, and potential objective space normalization.

Topics: Artificial Intelligence, Neural and Evolutionary Computing, Computing Research Repository

Source: http://arxiv.org/abs/1604.00359
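
The construction principle can be sketched in a few lines: pair two single-objective functions with distinct optima and evaluate both on the same search point. The sphere and ellipsoid below are stand-ins; the actual suite combines the bbob functions with their instance transformations.

```python
import numpy as np

def sphere(x, x_opt):
    """Sphere function shifted to optimum x_opt."""
    return float(np.sum((x - x_opt) ** 2))

def ellipsoid(x, x_opt, cond=1e6):
    """Separable ellipsoid with condition number `cond`, shifted to x_opt."""
    z = x - x_opt
    weights = cond ** (np.arange(len(z)) / max(len(z) - 1, 1))
    return float(np.sum(weights * z ** 2))

def biobjective(x, opt1, opt2):
    """Objective vector (f1(x), f2(x)); since the two optima differ,
    no single point minimizes both objectives at once."""
    return sphere(x, opt1), ellipsoid(x, opt2)

# usage: evaluate the origin against two distinct optima
f1, f2 = biobjective(np.zeros(3), opt1=np.ones(3), opt2=-np.ones(3))
```

The set of Pareto-optimal trade-offs between the two single-objective optima is what the suite's instances and normalization are designed around.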

Jun 29, 2018
by Youhei Akimoto; Anne Auger; Nikolaus Hansen

We investigate evolution strategies with weighted recombination on general convex quadratic functions. We derive the asymptotic quality gain in the limit of the dimension to infinity, and derive the optimal recombination weights and the optimal step-size. This work is an extension of previous works where the asymptotic quality gain of evolution strategies with weighted recombination was derived on the infinite dimensional sphere function. Moreover, for a finite dimensional search space, we...

Topics: Optimization and Control, Mathematics

Source: http://arxiv.org/abs/1608.04813

Jun 29, 2018
by Nikolaus Hansen; Tea Tusar; Olaf Mersmann; Anne Auger; Dimo Brockhoff

We present a budget-free experimental setup and procedure for benchmarking numerical optimization algorithms in a black-box scenario. This procedure can be applied with the COCO benchmarking platform. We describe initialization of and input to the algorithm and touch upon the relevance of termination and restarts.

Topics: Artificial Intelligence, Neural and Evolutionary Computing, Computing Research Repository

Source: http://arxiv.org/abs/1603.08776

Sep 22, 2013
by Nikolaus Hansen

We combine a refined version of two-point step-size adaptation with the covariance matrix adaptation evolution strategy (CMA-ES). Additionally, we suggest polished formulae for the learning rate of the covariance matrix and the recombination weights. In contrast to cumulative step-size adaptation or to the 1/5-th success rule, the refined two-point adaptation (TPA) does not rely on any internal model of optimality. In contrast to conventional self-adaptation, the TPA will achieve a better...

Source: http://arxiv.org/abs/0805.0231v4

Jun 29, 2018
by Dimo Brockhoff; Tea Tušar; Dejan Tušar; Tobias Wagner; Nikolaus Hansen; Anne Auger

This document details the rationale behind assessing the performance of numerical black-box optimizers on multi-objective problems within the COCO platform, and in particular on the bi-objective test suite bbob-biobj. The evaluation is based on the hypervolume of all non-dominated solutions in the archive of candidate solutions and measures the runtime until the hypervolume value exceeds prescribed target values.

Topics: Neural and Evolutionary Computing, Computing Research Repository

Source: http://arxiv.org/abs/1605.01746
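
For two objectives, the hypervolume of a non-dominated archive can be computed with a simple sweep. This is a textbook sketch for minimization, not COCO's implementation:

```python
def hypervolume_2d(points, ref):
    """Area dominated by a set of bi-objective minimization points,
    measured relative to a reference point `ref`."""
    # keep only points that strictly dominate the reference point
    pts = sorted(p for p in points if p[0] < ref[0] and p[1] < ref[1])
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:        # sweep by increasing f1
        if f2 < prev_f2:      # non-dominated w.r.t. points already swept
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

# usage: two mutually non-dominated points; a dominated point adds nothing
hv = hypervolume_2d([(0.0, 1.0), (1.0, 0.0)], ref=(2.0, 2.0))  # 3.0
```

Runtime is then measured as the number of function evaluations until this indicator value first exceeds a prescribed target.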

Sep 23, 2013
by Alexandre Chotard; Anne Auger; Nikolaus Hansen

The CSA-ES is an Evolution Strategy with Cumulative Step size Adaptation, where the step size is adapted measuring the length of a so-called cumulative path. The cumulative path is a combination of the previous steps realized by the algorithm, where the importance of each step decreases with time. This article studies the CSA-ES on composites of strictly increasing functions with affine linear functions through the investigation of its underlying Markov chains. Rigorous results on the change...

Source: http://arxiv.org/abs/1212.0139v1
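
The cumulative path and step-size update at the heart of CSA can be written in a few lines. The update below is the form commonly stated for evolution strategies; the constants c and d are illustrative defaults, not the values analysed in the paper.

```python
import numpy as np

def csa_sigma_update(p, sigma, mean_step, c=0.3, d=1.0):
    """One cumulative step-size adaptation (CSA) update.

    p         : cumulative path (decaying sum of previous normalized steps)
    mean_step : realized step of the mean divided by sigma, approximately
                standard normal under random selection
    """
    n = len(p)
    # exponentially fading combination of previous steps
    p = (1 - c) * p + np.sqrt(c * (2 - c)) * mean_step
    # expected length of an n-dimensional standard normal vector
    chi_n = np.sqrt(n) * (1 - 1 / (4 * n) + 1 / (21 * n ** 2))
    # lengthen sigma if the path is longer than expected under random
    # selection, shorten it if the path is shorter
    sigma *= np.exp((c / d) * (np.linalg.norm(p) / chi_n - 1))
    return p, sigma
```

The paper's analysis studies the Markov chain formed by this path on composites of strictly increasing functions with affine linear functions.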

Sep 21, 2013
by Yann Ollivier; Ludovic Arnold; Anne Auger; Nikolaus Hansen

We present a canonical way to turn any smooth parametric family of probability distributions on an arbitrary search space $X$ into a continuous-time black-box optimization method on $X$, the \emph{information-geometric optimization} (IGO) method. Invariance as a major design principle keeps the number of arbitrary choices to a minimum. The resulting \emph{IGO flow} is the flow of an ordinary differential equation conducting the natural gradient ascent of an adaptive, time-dependent...

Source: http://arxiv.org/abs/1106.3708v2
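
As a rough sketch of the construction (a paraphrase of the abstract, not the paper's exact notation), the IGO flow conducts a natural-gradient ascent of an adaptively rewritten objective:

```latex
% IGO flow: natural-gradient ascent of a quantile-rewritten objective.
% W_f^{\theta_t} denotes the time-dependent, quantile-based rewriting of f,
% and \widetilde{\nabla}_{\theta} the natural gradient on the parameter space.
\frac{\mathrm{d}\theta_t}{\mathrm{d}t}
  = \widetilde{\nabla}_{\theta} \int_X W_f^{\theta_t}(x)\, P_{\theta}(\mathrm{d}x)
```

The quantile-based rewriting makes the flow invariant under monotone transformations of f, and the natural gradient makes it invariant under reparametrization of θ.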

Sep 23, 2013
by Nikolaus Hansen

This report considers how to inject external candidate solutions into the CMA-ES algorithm. The injected solutions might stem from a gradient or a Newton step, a surrogate model optimizer or any other oracle or search mechanism. They can also be the result of a repair mechanism, for example to render infeasible solutions feasible. Only small modifications to the CMA-ES are necessary to turn injection into a reliable and effective method: too long steps need to be tightly renormalized. The main...

Source: http://arxiv.org/abs/1110.4181v1
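
The renormalization step mentioned above can be sketched as follows: the injected candidate is recast as a step from the distribution mean, and overly long steps are scaled down to a bounded Mahalanobis length. The clipping threshold here is a simple heuristic, not the exact rule from the report.

```python
import numpy as np

def clip_injected_step(x_inj, mean, sigma, C, max_norm=None):
    """Rescale an externally injected candidate so that its step from the
    mean has bounded Mahalanobis length under the sampling covariance C."""
    n = len(mean)
    if max_norm is None:
        max_norm = 2.0 * np.sqrt(n)   # a few "typical" standard deviations
    step = (x_inj - mean) / sigma
    # Mahalanobis length of the step: sqrt(step^T C^{-1} step)
    m = float(np.sqrt(step @ np.linalg.solve(C, step)))
    if m > max_norm:
        step *= max_norm / m          # tightly renormalize too-long steps
    return mean + sigma * step

# usage: with C = I, a step of length 10 in 4-D is clipped to length 4
out = clip_injected_step(np.array([10.0, 0, 0, 0]), np.zeros(4), 1.0, np.eye(4))
```

After clipping, the solution can enter the usual ranking and update equations like any sampled offspring.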

Jun 28, 2018
by Alexandre Chotard; Anne Auger; Nikolaus Hansen

This paper analyzes a (1, $\lambda$)-Evolution Strategy, a randomized comparison-based adaptive search algorithm, optimizing a linear function with a linear constraint. The algorithm uses resampling to handle the constraint. Two cases are investigated: first the case where the step-size is constant, and second the case where the step-size is adapted using cumulative step-size adaptation. We exhibit for each case a Markov chain describing the behaviour of the algorithm. Stability of the chain...

Topics: Optimization and Control, Mathematics

Source: http://arxiv.org/abs/1510.04409

Jun 29, 2018
by Nikolaus Hansen

This tutorial introduces the CMA Evolution Strategy (ES), where CMA stands for Covariance Matrix Adaptation. The CMA-ES is a stochastic, or randomized, method for real-parameter (continuous domain) optimization of non-linear, non-convex functions. We try to motivate and derive the algorithm from intuitive concepts and from requirements of non-linear, non-convex search in continuous domain.

Topics: Machine Learning, Statistics, Computing Research Repository, Learning

Source: http://arxiv.org/abs/1604.00772
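
The core loop the tutorial derives can be condensed into a short sketch: weighted recombination, cumulative step-size adaptation, and a rank-mu covariance update. The rank-one update and several refinements from the tutorial are omitted, and the constants are simplified defaults.

```python
import numpy as np

def cmaes_sketch(f, x0, sigma=0.5, iters=300, seed=3):
    """Heavily condensed CMA-ES-style optimizer (sketch, not the full
    algorithm): sample, select by rank, recombine, adapt sigma and C."""
    rng = np.random.default_rng(seed)
    n = len(x0)
    lam = 4 + int(3 * np.log(n))                  # population size
    mu = lam // 2
    w = np.log(mu + 0.5) - np.log(np.arange(1, mu + 1))
    w /= w.sum()                                  # recombination weights
    mu_eff = 1.0 / np.sum(w ** 2)
    c_s = (mu_eff + 2) / (n + mu_eff + 5)         # path learning rate
    d_s = 1.0 + c_s                               # step-size damping
    c_mu = min(1.0, 2 * (mu_eff - 2 + 1 / mu_eff) / ((n + 2) ** 2 + mu_eff))
    chi_n = np.sqrt(n) * (1 - 1 / (4 * n) + 1 / (21 * n ** 2))
    mean, C, p_s = np.asarray(x0, float), np.eye(n), np.zeros(n)
    for _ in range(iters):
        A = np.linalg.cholesky(C)                 # sample with C = A A^T
        Z = rng.standard_normal((lam, n))
        X = mean + sigma * Z @ A.T
        idx = np.argsort([f(x) for x in X])[:mu]  # comparison-based selection
        z_mean = w @ Z[idx]
        mean = mean + sigma * A @ z_mean          # weighted recombination
        # cumulative step-size adaptation
        p_s = (1 - c_s) * p_s + np.sqrt(c_s * (2 - c_s) * mu_eff) * z_mean
        sigma *= np.exp((c_s / d_s) * (np.linalg.norm(p_s) / chi_n - 1))
        Y = Z[idx] @ A.T                          # selected steps / sigma
        C = (1 - c_mu) * C + c_mu * (Y.T * w) @ Y # rank-mu update of C
    return mean

# usage: minimize the sphere function in 3-D
result = cmaes_sketch(lambda x: float(np.sum(x ** 2)), np.ones(3))
```

Note that only the ranking of f-values enters the updates, which gives the invariance to monotone transformations of f that the tutorial emphasizes.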

Jun 29, 2018
by Nikolaus Hansen; Anne Auger; Dimo Brockhoff; Dejan Tušar; Tea Tušar

We present an any-time performance assessment for benchmarking numerical optimization algorithms in a black-box scenario, applied within the COCO benchmarking platform. The performance assessment is based on runtimes measured in number of objective function evaluations to reach one or several quality indicator target values. We argue that runtime is the only available measure with a generic, meaningful, and quantitative interpretation. We discuss the choice of the target values, runlength-based...

Topics: Neural and Evolutionary Computing, Computing Research Repository

Source: http://arxiv.org/abs/1605.03560
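
The measurement idea above can be sketched as a counting wrapper: the objective function records the number of evaluations ("runtime" in this sense) at which each quality target was first reached. This is a minimal illustration, not COCO's bookkeeping.

```python
import numpy as np

class TargetRecorder:
    """Wrap an objective function and record, per target, the number of
    evaluations until the best-so-far value first reached that target."""
    def __init__(self, f, targets):
        self.f = f
        self.targets = sorted(targets, reverse=True)  # easiest first
        self.evals = 0
        self.best = float("inf")
        self.runtimes = {}            # target -> evaluations to reach it
    def __call__(self, x):
        self.evals += 1
        v = self.f(x)
        if v < self.best:
            self.best = v
            for t in self.targets:
                if t not in self.runtimes and v <= t:
                    self.runtimes[t] = self.evals
        return v

# usage: pure random search on the 2-D sphere function
rng = np.random.default_rng(0)
rec = TargetRecorder(lambda x: float(np.sum(x ** 2)), targets=[10.0, 1.0, 0.1])
for _ in range(2000):
    rec(rng.uniform(-5, 5, 2))
```

Comparing such runtimes across algorithms, targets, and problem instances is exactly the quantity the assessment aggregates.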

Jul 19, 2013
by Anne Auger; Nikolaus Hansen; Jorge M. Perez Zerpa; Raymond Ros; Marc Schoenauer

In this paper, the performances of the quasi-Newton BFGS algorithm, the NEWUOA derivative-free optimizer, the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), the Differential Evolution (DE) algorithm and Particle Swarm Optimizers (PSO) are compared experimentally on benchmark functions reflecting important challenges encountered in real-world optimization problems. Dependence of the performance on the conditioning of the problem and the rotational invariance of the algorithms are in...

Source: http://arxiv.org/abs/1005.5631v1

Jan 31, 2020
by Nikolaus Hansen
Thougts in games. Archived from iTunes at https://podcasts.apple.com/us/podcast/fav-games/id1466645283. Items in this collection are restricted.

Topics: podcast, itunes, apple