Abstract
Adaptive operator selection (AOS) determines the application rates of different operators in an online manner, based on their recent performance within an optimization process. This paper proposes a bandit-based AOS method, fitness-rate-rank-based multiarmed bandit (FRRMAB). To track the dynamics of the search process, it uses a sliding window to record the recent fitness improvement rates achieved by the operators, while employing a decaying mechanism to increase the selection probability of the best operator. Little work has been done on AOS in multiobjective evolutionary computation because it is difficult to measure fitness improvements quantitatively in most Pareto-dominance-based multiobjective evolutionary algorithms. Multiobjective evolutionary algorithm based on decomposition (MOEA/D) decomposes a multiobjective optimization problem into a number of scalar optimization subproblems and optimizes them simultaneously; it is therefore natural and feasible to use AOS in MOEA/D. We investigate several important issues in using FRRMAB in MOEA/D. Our experimental results demonstrate that FRRMAB is robust and its operator selection is reasonable. Comparison experiments also indicate that FRRMAB can significantly improve the performance of MOEA/D. © 2013 IEEE.
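The mechanism the abstract describes can be outlined in code. Below is a minimal, illustrative sketch of a FRRMAB-style selector: a sliding window stores (operator, fitness-improvement-rate) pairs, per-operator rewards are rank-decayed and normalized, and an operator is chosen by a UCB-style rule. The class name, window size, and the constants `scale_c` (exploration strength) and `decay_d` (rank decay) are assumptions for illustration, not the paper's tuned settings.

```python
import math
import random
from collections import deque

class FRRMAB:
    """Sketch of fitness-rate-rank-based multiarmed bandit operator selection."""

    def __init__(self, num_ops, window_size=50, scale_c=5.0, decay_d=1.0):
        self.num_ops = num_ops
        self.window = deque(maxlen=window_size)  # sliding window of recent results
        self.scale_c = scale_c                   # exploration strength (illustrative)
        self.decay_d = decay_d                   # rank decay factor in (0, 1]

    def update(self, op, fir):
        """Record the fitness improvement rate achieved by operator `op`."""
        self.window.append((op, fir))

    def _frr(self):
        # Sum rewards per operator over the window, then decay them by rank
        # and normalize to fitness-rate-rank (FRR) values.
        reward = [0.0] * self.num_ops
        count = [0] * self.num_ops
        for op, fir in self.window:
            reward[op] += fir
            count[op] += 1
        order = sorted(range(self.num_ops), key=lambda i: reward[i], reverse=True)
        decayed = [0.0] * self.num_ops
        for rank, op in enumerate(order):
            decayed[op] = (self.decay_d ** rank) * reward[op]
        total = sum(decayed)
        frr = [d / total if total > 0 else 0.0 for d in decayed]
        return frr, count

    def select(self):
        """Pick an operator: untried ones first, then a UCB-style rule."""
        frr, count = self._frr()
        untried = [i for i in range(self.num_ops) if count[i] == 0]
        if untried:
            return random.choice(untried)
        n = sum(count)
        return max(
            range(self.num_ops),
            key=lambda i: frr[i]
            + self.scale_c * math.sqrt(2.0 * math.log(n) / count[i]),
        )
```

In use, the optimizer calls `select()` before generating each offspring and `update()` afterward with the observed improvement rate; the exploration term keeps rarely used operators from being starved while the FRR term favors recent winners.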
| Field | Value |
|---|---|
| Original language | English |
| Pages (from-to) | 114-130 |
| Journal | IEEE Transactions on Evolutionary Computation |
| Volume | 18 |
| Issue number | 1 |
| Early online date | 11 Jan 2013 |
| DOIs | |
| Publication status | Published - Feb 2014 |
| Externally published | Yes |
Funding
This work was supported in part by the Natural Science Foundation of China under Grant 61272289, and by the City University of Hong Kong under Strategic Grant 7002826.
Keywords
- Adaptive operator selection (AOS)
- decomposition
- multiarmed bandit
- multiobjective evolutionary algorithm based on decomposition (MOEA/D)
- multiobjective optimization