We present a budget-free experimental setup and procedure for benchmarking numerical optimization algorithms in a black-box scenario. This procedure can be applied with the COCO benchmarking platform. We describe initialization of and input to the algorithm and touch upon the relevance of termination and restarts.
Topics: Artificial Intelligence, Neural and Evolutionary Computing, Computing Research Repository
Source: http://arxiv.org/abs/1603.08776
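As a rough illustration of such a procedure, the sketch below runs an off-the-shelf optimizer on a COCO test suite with independent restarts, using COCO's cocoex Python module. The budget multiplier, the choice of scipy's Nelder-Mead fmin as the benchmarked algorithm, and the uniform-restart rule are illustrative assumptions, not prescriptions from the paper.

```python
import cocoex            # COCO experimentation module
import numpy as np
import scipy.optimize

suite = cocoex.Suite("bbob", "", "")                             # standard bbob test suite
observer = cocoex.Observer("bbob", "result_folder: example-restarts")
budget_multiplier = 100  # assumed: stop the restart loop after budget_multiplier * dimension evaluations

for problem in suite:
    problem.observe_with(observer)       # log every evaluation for post-processing
    x0 = problem.initial_solution        # first run starts from the suggested initial solution
    while (problem.evaluations < budget_multiplier * problem.dimension
           and not problem.final_target_hit):
        scipy.optimize.fmin(problem, x0, disp=False)
        # independent restart from a uniformly sampled point in the domain
        x0 = problem.lower_bounds + (problem.upper_bounds - problem.lower_bounds) \
             * np.random.rand(problem.dimension)
```

Because the observer records every evaluation, the logged data remain usable wherever the loop is cut off, which is, roughly, the sense in which the setup is budget-free.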
COCO is a platform for Comparing Continuous Optimizers in a black-box setting. It aims at automating the tedious and repetitive task of benchmarking numerical optimization algorithms to the greatest possible extent. We present the rationale behind the development of the platform as a general proposition for guidelines towards better benchmarking. We detail underlying fundamental concepts of COCO such as its definition of a problem, the idea of instances, the relevance of target values, and...
Topics: Machine Learning, Artificial Intelligence, Numerical Analysis, Computing Research Repository,...
Source: http://arxiv.org/abs/1603.08785
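To make the notions of problem, instance, and target more concrete, here is a minimal sketch using the same cocoex module; the dimension restriction and the loop body are illustrative choices, not taken from the abstract.

```python
import cocoex

# Each element of a suite is one problem: a specific instance of a benchmark
# function in a specific dimension. Instances are variants of the same
# underlying function; target values are the precision levels against which
# runtime is measured.
suite = cocoex.Suite("bbob", "", "dimensions: 2,5")   # restrict dimensions (illustrative)
for problem in suite:
    y = problem(problem.initial_solution)             # a problem is simply a callable
    print(problem.id, problem.dimension, problem.final_target_hit)
```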