Short-Term Scientific Mission
Main theme: Benchmarking
Grantee: Vanessa Volz, Stichting Centrum Wiskunde & Informatica, Amsterdam, The Netherlands
Host: Carola Doerr, Sorbonne University, Paris, France
Start date: 2024-09-02
End date: 2024-09-27
Awarded: 2024-07-05
Report approved: 2024-11-06
While several benchmarking frameworks exist to evaluate and compare randomised optimisation algorithms, and evolutionary algorithms in particular, none of them is well-suited for algorithms that learn, whether in a separate training phase or during runtime. Algorithm selection and configuration methods, as well as many surrogate-assisted approaches, fall into this gap. The gap arises because popular benchmarking interfaces are explicitly tailored to the black-box use case, which bakes in implicit assumptions, e.g. that each problem instance is solved independently and no information is carried over between runs.
The STSM aimed to develop a method for integrating learning algorithms into existing benchmarking interfaces for continuous optimisation. This included addressing the common assumptions mentioned above and structuring future collaboration.
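To make the black-box assumption concrete, the following sketch shows one way a benchmarking interface could be extended for learning algorithms. All names (`Problem`, `LearningAlgorithm`, the sphere-like problem family) are hypothetical illustrations, not part of any existing framework: the interface exposes instance metadata and supports a separate training phase, both of which purely black-box interfaces typically omit.

```python
import random


class Problem:
    """Hypothetical benchmark problem: a shifted sphere function.

    Unlike a pure black-box interface, it exposes instance metadata
    (dimension, instance id), so a learning algorithm can associate
    evaluations with the instance they came from. Instances of the same
    family share structure: their optima cluster around a common point.
    """

    def __init__(self, dim, instance_id):
        rng = random.Random(instance_id)
        self.dim = dim
        self.instance_id = instance_id
        # Optima cluster around 0.5 in every coordinate (recurring problems).
        self._shift = [0.5 + rng.uniform(-0.1, 0.1) for _ in range(dim)]
        self.evaluations = 0

    def __call__(self, x):
        self.evaluations += 1
        return sum((xi - si) ** 2 for xi, si in zip(x, self._shift))


class LearningAlgorithm:
    """Hypothetical algorithm with a separate training phase: it averages
    the best solutions found on training instances and uses that average
    to warm-start search on unseen instances of the same family."""

    def __init__(self, dim):
        self.dim = dim
        self.warm_start = [0.0] * dim

    def train(self, problems, budget=300):
        bests = [self._search(p, budget, [0.0] * self.dim)[0] for p in problems]
        self.warm_start = [sum(c) / len(c) for c in zip(*bests)]

    def optimise(self, problem, budget=100):
        return self._search(problem, budget, self.warm_start)

    def _search(self, problem, budget, start, seed=1):
        # Simple (1+1)-style hill climber; stands in for any optimiser.
        rng = random.Random(seed)
        best_x, best_f = list(start), problem(start)
        for _ in range(budget - 1):
            cand = [xi + rng.gauss(0, 0.1) for xi in best_x]
            f = problem(cand)
            if f < best_f:
                best_x, best_f = cand, f
        return best_x, best_f
```

The key design point is that `train` and `optimise` are distinct entry points, so a benchmarking framework can budget and log the training phase separately from the evaluated runs.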
During the STSM, the focus was on learning across recurring problems, where similarity between instances can be assumed. Important design decisions were refined by considering real-world(-like) applications such as route planning and hyperparameter optimisation. Baseline experiments were defined and run; they demonstrated that an algorithm can be iteratively tailored to improve its performance over multiple problem instances.
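The shape of such a baseline experiment can be sketched as follows. This is a minimal illustration under assumed details (a sphere-like family of recurring instances, a (1+1)-style hill climber, and solution transfer as the "learning"), not the actual experiments run during the STSM: the learned state carried between instances is simply the best solution found so far, and a `transfer` flag switches the carry-over on and off so the two conditions can be compared.

```python
import random


def make_instance(instance_id, dim=2, base=0.5, spread=0.1):
    """Hypothetical recurring problem family: sphere functions whose
    optima cluster around a shared (unknown to the algorithm) point."""
    rng = random.Random(instance_id)
    shift = [base + rng.uniform(-spread, spread) for _ in range(dim)]
    return lambda x: sum((xi - si) ** 2 for xi, si in zip(x, shift))


def solve_sequence(n_instances, budget=100, dim=2, transfer=True, seed=7):
    """Run a (1+1)-style hill climber over a sequence of recurring instances.

    With transfer=True, each run starts from the best solution of the
    previous instance (learning across instances); with transfer=False,
    every run restarts from the origin (the black-box baseline).
    Returns the best objective value reached on each instance."""
    rng = random.Random(seed)
    start = [0.0] * dim
    results = []
    for i in range(n_instances):
        f = make_instance(i, dim)
        best_x, best_f = list(start), f(start)
        for _ in range(budget):
            cand = [xi + rng.gauss(0, 0.1) for xi in best_x]
            fc = f(cand)
            if fc < best_f:
                best_x, best_f = cand, fc
        results.append(best_f)
        if transfer:
            start = best_x  # carry learned state to the next instance
    return results
```

Comparing the per-instance results of the two conditions is exactly the kind of signal such a baseline experiment measures; how strong and how reliable that signal is under algorithm stochasticity is one of the open issues noted below.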
Nevertheless, the uncertainty of the algorithm behaviour and the strength of the learning signal remain open issues. Additionally, extending the approach beyond recurring problems requires significant work on measuring problem similarity. The STSM laid the groundwork for future collaborations by establishing stronger ties, sharing data, and planning upcoming publications to address these challenges.