Working Group 6

Benchmarking

Leader: Boris Naujoks, DE
Vice-Leader: Vanessa Volz, NL

WG6 is concerned with experimentally comparing algorithms so that the most information can be gained from the effort spent. The group focuses on evaluating the performance of Randomised Optimisation Algorithms (ROAs) themselves, but also on comparing ROAs to more traditional solvers and on studying how ROAs perform in actual practice, when the humans, processes, and technologies involved in their deployment are included in the experiments.

Tasks

  • Improving upon current performance benchmarking methodology and tools, and supporting all other WGs in their use of benchmarking for purposes including, but not limited to, theoretical development, algorithm design, problem instance characterisation, code validation, and algorithm selection and configuration.
  • Devising appropriate methodologies for the comparison of ROAs to more traditional solvers in scenarios where both can be applied, and conducting initial studies in that direction.
  • Devising appropriate methodologies for benchmarking ROAs in practical deployments, and contributing to the assessment of the practical impact of the Action’s modelling and ROA development advances.

Latest news

2025-08-14

Hub organiser and team applications invited

The ROAR-NET Problem Modelling Code Fest will take place online from 9 to 11 September 2025. Prospective hub organisers may apply until 26 August 2025. The team application deadline is 2 September 2025.

2025-06-09

Save the date: ROAR-NET Problem Modelling Code Fest

The ROAR-NET Problem Modelling Code Fest will take place online from 9 to 11 September 2025.

2025-03-27

ROAR-NET API Specification unveiled

The development repository for the ROAR-NET API Specification is now publicly available on GitHub.