TY - JOUR
T1 - Neural Network-Based Information Transfer for Dynamic Optimization
AU - LIU, Xiao-Fang
AU - ZHAN, Zhi-Hui
AU - GU, Tian-Long
AU - KWONG, Sam
AU - LU, Zhenyu
AU - DUH, Henry Been-Lirn
AU - ZHANG, Jun
PY - 2020/5
Y1 - 2020/5
N2 - In dynamic optimization problems (DOPs), as the environment changes over time, the optima also change dynamically. Adapting to the dynamic environment and quickly finding the optima in each environment is a challenging issue in solving DOPs. Usually, a new environment is strongly related to its previous one. If we know how the previous environment changes into the new one, we can transfer information from the previous environment, e.g., past solutions, to obtain promising information about the new environment, e.g., new high-quality solutions. Thus, in this paper, we propose a neural network (NN)-based information transfer method, named NNIT, which learns a transfer model of the environment change with an NN and then uses the learned model to reuse past solutions. When the environment changes, NNIT first collects solutions from both the previous and the new environment and then uses an NN to learn the transfer model from these solutions. The trained NN is then used to transfer past solutions into promising new solutions that assist optimization in the new environment. The proposed NNIT can be incorporated into population-based evolutionary algorithms (EAs) to solve DOPs. Several typical state-of-the-art EAs for DOPs are selected for a comprehensive study and evaluated on the widely used moving peaks benchmark. The experimental results show that the proposed NNIT is promising and can accelerate algorithm convergence.
AB - In dynamic optimization problems (DOPs), as the environment changes over time, the optima also change dynamically. Adapting to the dynamic environment and quickly finding the optima in each environment is a challenging issue in solving DOPs. Usually, a new environment is strongly related to its previous one. If we know how the previous environment changes into the new one, we can transfer information from the previous environment, e.g., past solutions, to obtain promising information about the new environment, e.g., new high-quality solutions. Thus, in this paper, we propose a neural network (NN)-based information transfer method, named NNIT, which learns a transfer model of the environment change with an NN and then uses the learned model to reuse past solutions. When the environment changes, NNIT first collects solutions from both the previous and the new environment and then uses an NN to learn the transfer model from these solutions. The trained NN is then used to transfer past solutions into promising new solutions that assist optimization in the new environment. The proposed NNIT can be incorporated into population-based evolutionary algorithms (EAs) to solve DOPs. Several typical state-of-the-art EAs for DOPs are selected for a comprehensive study and evaluated on the widely used moving peaks benchmark. The experimental results show that the proposed NNIT is promising and can accelerate algorithm convergence.
KW - Dynamic optimization problem (DOP)
KW - information transfer
KW - neural network (NN)
UR - http://www.scopus.com/inward/record.url?scp=85075256695&partnerID=8YFLogxK
U2 - 10.1109/TNNLS.2019.2920887
DO - 10.1109/TNNLS.2019.2920887
M3 - Journal Article (refereed)
SN - 2162-237X
VL - 31
SP - 1557
EP - 1570
JO - IEEE Transactions on Neural Networks and Learning Systems
JF - IEEE Transactions on Neural Networks and Learning Systems
IS - 5
ER -