TY - JOUR
T1 - Benchmarking optimization algorithms: An open source framework for the traveling salesman problem
AU - WEISE, Thomas
AU - CHIONG, Raymond
AU - LÄSSIG, Jörg
AU - TANG, Ke
AU - TSUTSUI, Shigeyoshi
AU - CHEN, Wenxiang
AU - MICHALEWICZ, Zbigniew
AU - YAO, Xin
PY - 2014/8
Y1 - 2014/8
AB - We introduce an experimentation procedure for evaluating and comparing optimization algorithms based on the Traveling Salesman Problem (TSP). We argue that end-of-run results alone do not give sufficient information about an algorithm's performance, so our approach analyzes the algorithm's progress over time. Comparisons of performance curves in diagrams can be formalized by comparing the areas under them. Algorithms can be ranked according to a performance metric. Rankings based on different metrics can then be aggregated into a global ranking, which provides a quick overview of the quality of algorithms in comparison. An open source software framework, the TSP Suite, applies this experimental procedure to the TSP. The framework can support researchers in implementing TSP solvers, unit testing them, and running experiments in a parallel and distributed fashion. It also has an evaluator component, which implements the proposed evaluation process and produces detailed reports. We test the approach by using the TSP Suite to benchmark several local search and evolutionary computation methods. This results in a large set of baseline data, which will be made available to the research community. Our experiments show that the tested pure global optimization algorithms are outperformed by local search, but the best results come from hybrid algorithms. © 2014 IEEE.
UR - http://www.scopus.com/inward/record.url?scp=84904612263&partnerID=8YFLogxK
DO - 10.1109/MCI.2014.2326101
M3 - Journal Article (refereed)
SN - 1556-603X
VL - 9
SP - 40
EP - 52
JO - IEEE Computational Intelligence Magazine
JF - IEEE Computational Intelligence Magazine
IS - 3
M1 - 6853446
ER -