Abstract
Measuring the performance of an algorithm for solving a multiobjective optimization problem has always been challenging due to two conflicting goals: the convergence and the diversity of the obtained tradeoff solutions. There are a number of metrics for evaluating the performance of a multiobjective optimizer that approximates the whole Pareto-optimal front. However, the existing metrics are inadequate for evaluating the quality of a preferred subset of the whole front. In this paper, we suggest a systematic way to adapt existing metrics to quantitatively evaluate the performance of a preference-based evolutionary multiobjective optimization algorithm that uses reference points. The basic idea is to preprocess the preferred solution set according to a multicriterion decision making approach before applying a regular metric for performance assessment. Extensive experiments on several artificial scenarios and benchmark problems demonstrate its effectiveness in evaluating the quality of different preferred solution sets with regard to various reference points supplied by a decision maker.
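The preprocessing idea can be made concrete with a short sketch. The following Python code is a minimal, illustrative instantiation under stated assumptions: it uses an achievement scalarizing function (ASF) to pick the solution closest to the decision maker's reference point, trims solutions outside a box-shaped region of interest around that representative, translates the trimmed set onto the line connecting the reference point and a worst point, and then applies plain IGD. The function names (`preference_igd`, `asf`), the box half-width `delta`, and the choice of IGD are assumptions for illustration, not necessarily the paper's exact procedure.

```python
import numpy as np

def asf(points, ref_point, weights):
    """Achievement scalarizing function: smaller values mean a point is
    closer to the decision maker's reference (aspiration) point."""
    return np.max((points - ref_point) / weights, axis=1)

def igd(approx_set, front_sample):
    """Plain inverted generational distance of approx_set w.r.t. a
    discretized Pareto-front sample."""
    d = np.linalg.norm(front_sample[:, None, :] - approx_set[None, :, :], axis=2)
    return d.min(axis=1).mean()

def preference_igd(solutions, ref_point, worst_point, front_sample, delta=0.2):
    """Illustrative preference-aware IGD (an assumed procedure sketching
    the abstract's idea, not the paper's exact method):
      1) pick the representative solution with the smallest ASF value;
      2) trim solutions outside a box of half-width `delta` around it
         (a crude region of interest);
      3) translate the trimmed set so the representative lands on the
         line from ref_point to worst_point at its iso-ASF position;
      4) score the shifted set with plain IGD."""
    weights = worst_point - ref_point          # assumes worst_point > ref_point
    pivot = solutions[np.argmin(asf(solutions, ref_point, weights))]
    roi = solutions[np.all(np.abs(solutions - pivot) <= delta, axis=1)]
    # Along the segment ref_point -> worst_point the ASF grows linearly,
    # so the pivot's iso-ASF point on that line is:
    z_p = ref_point + asf(pivot[None, :], ref_point, weights)[0] * weights
    shifted = roi + (z_p - pivot)
    return igd(shifted, front_sample)

if __name__ == "__main__":
    # Toy check on a convex biobjective front f2 = 1 - sqrt(f1).
    rng = np.random.default_rng(0)
    f1 = rng.uniform(0.0, 1.0, 50)
    solutions = np.column_stack([f1, 1.0 - np.sqrt(f1)])
    grid = np.linspace(0.0, 1.0, 500)
    front = np.column_stack([grid, 1.0 - np.sqrt(grid)])
    score = preference_igd(solutions,
                           ref_point=np.array([0.3, 0.4]),    # DM aspiration
                           worst_point=np.array([1.1, 1.1]),  # pessimistic bound
                           front_sample=front)
    print(f"preference-aware IGD: {score:.4f}")
```

In practice the Pareto-front sample would likewise be restricted to the region of interest before computing IGD; this sketch keeps the full sample for brevity.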
| Field | Value |
| --- | --- |
| Original language | English |
| Article number | 8049301 |
| Pages (from-to) | 821-835 |
| Number of pages | 15 |
| Journal | IEEE Transactions on Evolutionary Computation |
| Volume | 22 |
| Issue number | 6 |
| Early online date | 25 Sept 2017 |
| DOIs | |
| Publication status | Published - Dec 2018 |
| Externally published | Yes |
Funding
This work was supported in part by EPSRC under Grant EP/K001523/1 and in part by NSFC under Grant 61329302. The work of X. Yao was supported by a Royal Society Wolfson Research Merit Award.
Keywords
- Evolutionary multiobjective optimization (EMO)
- multicriterion decision making (MCDM)
- performance assessment
- reference point
- user-preference