Jun 29, 2018
by Nikolaus Hansen; Anne Auger; Olaf Mersmann; Tea Tusar; Dimo Brockhoff

COCO is a platform for Comparing Continuous Optimizers in a black-box setting. It aims at automating the tedious and repetitive task of benchmarking numerical optimization algorithms to the greatest possible extent. We present the rationale behind the development of the platform as a general guideline towards better benchmarking. We detail the fundamental concepts underlying COCO, such as its definition of a problem, the idea of instances, the relevance of target values, and...

Topics: Machine Learning, Artificial Intelligence, Numerical Analysis, Computing Research Repository,...

Source: http://arxiv.org/abs/1603.08785

Jun 29, 2018
by Nikolaus Hansen; Tea Tusar; Olaf Mersmann; Anne Auger; Dimo Brockhoff

We present a budget-free experimental setup and procedure for benchmarking numerical optimization algorithms in a black-box scenario. This procedure can be applied with the COCO benchmarking platform. We describe the initialization of and input to the algorithm, and touch upon the relevance of termination and restarts.

Topics: Artificial Intelligence, Neural and Evolutionary Computing, Computing Research Repository

Source: http://arxiv.org/abs/1603.08776
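The restart-based, budget-free procedure sketched in the abstract above can be illustrated as follows. This is a minimal sketch under stated assumptions: the optimizer (pure random search), the search domain, and all budget parameters are illustrative stand-ins, not part of the COCO platform's API.

```python
import random

def random_search(f, dim, max_evals, target):
    """One run of pure random search on [-5, 5]^dim; a stand-in for any
    black-box optimizer under test. Returns (evaluations used, success)."""
    for evals in range(1, max_evals + 1):
        x = [random.uniform(-5, 5) for _ in range(dim)]
        if f(x) <= target:
            return evals, True
    return max_evals, False

def run_with_restarts(f, dim, target, evals_per_run=1000, max_restarts=50):
    """Budget-free setup: restart independently until the target is reached.
    The total number of evaluations across restarts is the measured runtime."""
    total_evals = 0
    for _ in range(max_restarts):
        used, hit = random_search(f, dim, evals_per_run, target)
        total_evals += used
        if hit:
            return total_evals, True
    return total_evals, False

random.seed(1)  # for reproducibility of this sketch
sphere = lambda x: sum(xi * xi for xi in x)
runtime, success = run_with_restarts(sphere, dim=2, target=0.1)
```

Because runs restart independently, the total evaluation count remains a meaningful runtime measure even without fixing a budget in advance.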

Jun 29, 2018
by Dimo Brockhoff; Tea Tušar; Dejan Tušar; Tobias Wagner; Nikolaus Hansen; Anne Auger

This document details the rationale behind assessing the performance of numerical black-box optimizers on multi-objective problems within the COCO platform, and in particular on the bi-objective test suite bbob-biobj. The evaluation is based on the hypervolume of all non-dominated solutions in the archive of candidate solutions and measures the runtime until the hypervolume value exceeds prescribed target values.

Topics: Neural and Evolutionary Computing, Computing Research Repository

Source: http://arxiv.org/abs/1605.01746
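In two dimensions, the hypervolume of a set of non-dominated solutions, as used in the abstract above, can be computed with a simple sweep. This is a minimal sketch (minimization convention, illustrative function name), not the COCO implementation.

```python
def hypervolume_2d(points, ref):
    """Hypervolume dominated by a set of 2-D objective vectors (minimization)
    with respect to the reference point ref, computed by a sweep."""
    # keep points that strictly dominate the reference point, sorted by f1
    pts = sorted(p for p in points if p[0] < ref[0] and p[1] < ref[1])
    # filter to the non-dominated subset: f2 must strictly decrease as f1 grows
    front, best_f2 = [], float('inf')
    for p in pts:
        if p[1] < best_f2:
            front.append(p)
            best_f2 = p[1]
    # sum the disjoint rectangular slabs between consecutive front points
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in front:
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv
```

For example, the two points (0.2, 0.8) and (0.8, 0.2) with reference point (1, 1) dominate two overlapping rectangles whose union has area 0.28; a dominated third point does not change the value.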

Jun 29, 2018
by Tea Tusar; Dimo Brockhoff; Nikolaus Hansen; Anne Auger

The bbob-biobj test suite contains 55 bi-objective functions in the continuous domain, derived from combining functions of the well-known single-objective noiseless bbob test suite. Besides giving the actual function definitions and presenting their (known) properties, this documentation also aims at giving the rationale behind our approach in terms of function groups, instances, and potential objective space normalization.

Topics: Artificial Intelligence, Neural and Evolutionary Computing, Computing Research Repository

Source: http://arxiv.org/abs/1604.00359
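The construction principle, pairing two single-objective functions so that each objective has its own optimum location, can be illustrated as below. The shift-based combination and the particular function choices are illustrative assumptions, not the exact bbob-biobj definitions.

```python
def make_biobjective(f1, f2, opt1, opt2):
    """Combine two single-objective functions into one bi-objective problem by
    shifting each so that its optimum sits at a distinct location."""
    def f(x):
        return (f1([xi - oi for xi, oi in zip(x, opt1)]),
                f2([xi - oi for xi, oi in zip(x, opt2)]))
    return f

sphere = lambda x: sum(v * v for v in x)
rosenbrock = lambda x: sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (1.0 - x[i]) ** 2
                           for i in range(len(x) - 1))
# sphere optimum moved to (1, 0); rosenbrock keeps its natural optimum at (1, 1)
f = make_biobjective(sphere, rosenbrock, [1.0, 0.0], [0.0, 0.0])
```

Because the two optima differ, no single point minimizes both objectives, which is what makes the combined problem genuinely multi-objective.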

Jun 30, 2018
by Alexandre Chotard; Anne Auger; Nikolaus Hansen

This paper analyses a $(1,\lambda)$-Evolution Strategy, a randomised comparison-based adaptive search algorithm, on a simple constrained optimisation problem. The algorithm uses resampling to handle the constraint and optimises a linear function with a linear constraint. Two cases are investigated: first the case where the step-size is constant, and second the case where the step-size is adapted using path length control. We exhibit for each case a Markov chain whose stability analysis would...

Topics: Neural and Evolutionary Computing, Mathematics, Computing Research Repository, Optimization and...

Source: http://arxiv.org/abs/1404.3023

Jun 29, 2018
by Youhei Akimoto; Anne Auger; Nikolaus Hansen

We investigate evolution strategies with weighted recombination on general convex quadratic functions. We derive the asymptotic quality gain in the limit as the dimension goes to infinity, along with the optimal recombination weights and the optimal step-size. This work extends previous work in which the asymptotic quality gain of evolution strategies with weighted recombination was derived on the infinite-dimensional sphere function. Moreover, for a finite-dimensional search space, we...

Topics: Optimization and Control, Mathematics

Source: http://arxiv.org/abs/1608.04813
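One iteration of the weighted-recombination scheme studied above can be sketched as a (μ/μ_w, λ)-ES step. The specific weights and parameter values below are illustrative assumptions, not the optimal values derived in the paper.

```python
import random

def weighted_recombination_step(mean, sigma, f, lam, weights):
    """One (mu/mu_w, lambda)-ES iteration: sample lam offspring around mean,
    rank them by f, and recombine the best len(weights) of them with positive
    weights (assumed sorted decreasing and summing to 1)."""
    dim = len(mean)
    offspring = [[m + sigma * random.gauss(0.0, 1.0) for m in mean]
                 for _ in range(lam)]
    offspring.sort(key=f)  # best (smallest f value) first
    # zip pairs the weights with the best len(weights) offspring only
    return [sum(w * x[i] for w, x in zip(weights, offspring))
            for i in range(dim)]

random.seed(3)  # reproducibility of this sketch
sphere = lambda x: sum(v * v for v in x)
mean, sigma, weights = [3.0, 3.0], 0.3, [0.5, 0.3, 0.2]
for _ in range(100):
    mean = weighted_recombination_step(mean, sigma, sphere, lam=10, weights=weights)
```

With a constant step-size, the mean drifts toward the optimum of the sphere and then fluctuates at a scale set by sigma; adapting sigma (as analysed in the paper) would allow continued convergence.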

Sep 21, 2013
by Yann Ollivier; Ludovic Arnold; Anne Auger; Nikolaus Hansen

We present a canonical way to turn any smooth parametric family of probability distributions on an arbitrary search space $X$ into a continuous-time black-box optimization method on $X$, the \emph{information-geometric optimization} (IGO) method. Invariance as a major design principle keeps the number of arbitrary choices to a minimum. The resulting \emph{IGO flow} is the flow of an ordinary differential equation conducting the natural gradient ascent of an adaptive, time-dependent...

Source: http://arxiv.org/abs/1106.3708v2
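As one concrete instantiation, applying the IGO idea to independent Bernoulli distributions on {0,1}^d yields a PBIL-style update in the expectation parameters. The sketch below uses simple truncation-based weights and a boundary clamp; these are illustrative choices, not the paper's exact construction.

```python
import random

def igo_bernoulli_step(theta, f, lam, dt=0.1):
    """One discretized IGO step for independent Bernoulli(theta_i) on {0,1}^d,
    minimizing f: rank-based (truncation) weights, with the natural-gradient
    update expressed in the expectation parameters theta."""
    d = len(theta)
    pop = [[1 if random.random() < t else 0 for t in theta] for _ in range(lam)]
    order = sorted(range(lam), key=lambda j: f(pop[j]))  # best first
    mu = lam // 2
    w = [2.0 / lam] * mu + [0.0] * (lam - mu)  # top half weighted, sums to 1
    new_theta = []
    for i in range(d):
        grad = sum(w[r] * (pop[order[r]][i] - theta[i]) for r in range(lam))
        t = theta[i] + dt * grad
        new_theta.append(min(max(t, 0.05), 0.95))  # clamp away from boundary
    return new_theta

random.seed(5)  # reproducibility of this sketch
onemax = lambda x: -sum(x)  # minimizing -sum(x) maximizes the number of ones
theta = [0.5] * 5
for _ in range(300):
    theta = igo_bernoulli_step(theta, onemax, lam=20)
```

The update moves each theta_i toward the empirical mean of the selected (best-ranked) samples, which is the rank-based, invariance-preserving behaviour the IGO framework prescribes.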

Sep 23, 2013
by Alexandre Chotard; Anne Auger; Nikolaus Hansen

The CSA-ES is an Evolution Strategy with Cumulative Step-size Adaptation, where the step size is adapted by measuring the length of a so-called cumulative path. The cumulative path is a combination of the previous steps realized by the algorithm, where the importance of each step decreases with time. This article studies the CSA-ES on composites of strictly increasing functions with affine linear functions through the investigation of its underlying Markov chains. Rigorous results on the change...

Source: http://arxiv.org/abs/1212.0139v1
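The cumulative path and step-size update described above can be sketched as follows. The parameter values and the expected-norm approximation are common choices used here as assumptions, not the paper's exact setting.

```python
import math

def csa_update(sigma, path, z, c_sigma=0.3, d_sigma=1.0):
    """One CSA update: fade the cumulative path by (1 - c_sigma), accumulate
    the selected standardized step z, then scale sigma up or down depending on
    whether the path is longer or shorter than expected under random selection."""
    dim = len(z)
    path = [(1.0 - c_sigma) * p + math.sqrt(c_sigma * (2.0 - c_sigma)) * zi
            for p, zi in zip(path, z)]
    # E||N(0, I)|| approximated as sqrt(dim) * (1 - 1/(4 dim) + 1/(21 dim^2))
    chi_n = math.sqrt(dim) * (1.0 - 1.0 / (4 * dim) + 1.0 / (21 * dim ** 2))
    norm_p = math.sqrt(sum(p * p for p in path))
    sigma *= math.exp((c_sigma / d_sigma) * (norm_p / chi_n - 1.0))
    return sigma, path

# a zero step shortens the path, so sigma shrinks; a long step grows sigma
sigma_small, _ = csa_update(1.0, [0.0] * 10, [0.0] * 10)
sigma_large, _ = csa_update(1.0, [0.0] * 10, [3.0] * 10)
```

The decay factor (1 - c_sigma) is what makes the importance of each past step decrease with time, as the abstract describes.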

Sep 23, 2013
by Mohamed Jebalia; Anne Auger; Marc Schoenauer; Francois James; Marie Postel

This paper deals with the identification of the flux for a system of conservation laws in the specific example of analytic chromatography. The fundamental equations of the chromatographic process are highly nonlinear. The state-of-the-art Evolution Strategy, CMA-ES (the Covariance Matrix Adaptation Evolution Strategy), is used to identify the parameters of the so-called isotherm function. The approach was validated on different configurations of simulated data using either one, two or three...

Source: http://arxiv.org/abs/0710.0322v1

Jul 19, 2013
by Anne Auger; Nikolaus Hansen; Jorge M. Perez Zerpa; Raymond Ros; Marc Schoenauer

In this paper, the performances of the quasi-Newton BFGS algorithm, the NEWUOA derivative-free optimizer, the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), the Differential Evolution (DE) algorithm and Particle Swarm Optimizers (PSO) are compared experimentally on benchmark functions reflecting important challenges encountered in real-world optimization problems. The dependence of the performances on the conditioning of the problem and on the rotational invariance of the algorithms is in...

Source: http://arxiv.org/abs/1005.5631v1

Sep 23, 2013
by Cyril Furtlehner; Jean-Marc Lasgouttes; Anne Auger

In the context of inference with expectation constraints, we propose an approach based on the "loopy belief propagation" algorithm (LBP) as a surrogate for exact Markov Random Field (MRF) modelling. Prior information, composed of correlations among a large set of N variables, is encoded into a graphical model; this encoding is optimized with respect to an approximate decoding procedure (LBP), which is used to infer hidden variables from an observed subset. We focus on the situation...

Source: http://arxiv.org/abs/0903.4860v1

Jun 29, 2018
by Nikolaus Hansen; Anne Auger; Dimo Brockhoff; Dejan Tušar; Tea Tušar

We present an any-time performance assessment for benchmarking numerical optimization algorithms in a black-box scenario, applied within the COCO benchmarking platform. The performance assessment is based on runtimes measured in number of objective function evaluations to reach one or several quality indicator target values. We argue that runtime is the only available measure with a generic, meaningful, and quantitative interpretation. We discuss the choice of the target values, runlength-based...

Topics: Neural and Evolutionary Computing, Computing Research Repository

Source: http://arxiv.org/abs/1605.03560
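The anytime runtime measurement described above, counting function evaluations until each quality target is first reached, can be sketched as a replay over a recorded trajectory. This is an illustrative sketch, not COCO's actual bookkeeping.

```python
def runtimes_to_targets(f, xs, targets):
    """Replay a sequence of evaluated candidate solutions and record, for each
    target value, the number of evaluations until the best-so-far f value
    first reaches it (None if the target is never reached)."""
    runtimes = {t: None for t in targets}
    best = float('inf')
    for evals, x in enumerate(xs, start=1):
        best = min(best, f(x))
        for t in targets:
            if runtimes[t] is None and best <= t:
                runtimes[t] = evals
    return runtimes

# a trajectory of four evaluations on f(x) = |x|, measured against three targets
rt = runtimes_to_targets(abs, [3.0, 1.5, 0.5, 0.05], [2.0, 1.0, 0.1])
```

Recording one runtime per target, rather than a single final value, is what makes the assessment "anytime": easier targets yield shorter measured runtimes along the same run.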

Jun 28, 2018
by Alexandre Chotard; Anne Auger; Nikolaus Hansen

This paper analyzes a (1, $\lambda$)-Evolution Strategy, a randomized comparison-based adaptive search algorithm, optimizing a linear function with a linear constraint. The algorithm uses resampling to handle the constraint. Two cases are investigated: first the case where the step-size is constant, and second the case where the step-size is adapted using cumulative step-size adaptation. We exhibit for each case a Markov chain describing the behaviour of the algorithm. Stability of the chain...

Topics: Optimization and Control, Mathematics

Source: http://arxiv.org/abs/1510.04409
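The constant step-size case of the algorithm above can be sketched directly. The concrete objective f(x) = x[0], the constraint x[1] <= 0, and the parameter values are illustrative choices consistent with the linear setting described, not the paper's exact formulation.

```python
import random

def one_lambda_es_resampling(n_iter=100, lam=10, sigma=1.0):
    """(1, lambda)-ES with constant step-size minimizing the linear function
    f(x) = x[0] under the linear constraint x[1] <= 0; infeasible offspring
    are resampled until they satisfy the constraint."""
    x = [0.0, -1.0]  # feasible starting point
    for _ in range(n_iter):
        offspring = []
        for _ in range(lam):
            while True:  # resampling: redraw until the constraint holds
                y = [x[0] + sigma * random.gauss(0.0, 1.0),
                     x[1] + sigma * random.gauss(0.0, 1.0)]
                if y[1] <= 0.0:
                    break
            offspring.append(y)
        x = min(offspring, key=lambda y: y[0])  # select the best on f(x) = x[0]
    return x

random.seed(7)  # reproducibility of this sketch
x_final = one_lambda_es_resampling()
```

With constant step-size the algorithm makes steady progress on the linear objective while resampling keeps every accepted point feasible; the paper's Markov-chain analysis studies precisely the stability of this kind of dynamics.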