A Distributed Swarm Optimizer With Adaptive Communication for Large-Scale Optimization

Qiang YANG, Wei-Neng CHEN, Tianlong GU, Huaxiang ZHANG, Huaqiang YUAN, Sam KWONG, Jun ZHANG

Research output: Journal Publications › Journal Article (refereed) › peer-review

73 Citations (Scopus)

Abstract

Large-scale optimization problems with high dimensionality and high computational cost have become ubiquitous. To tackle such challenging problems efficiently, devising distributed evolutionary computation algorithms is imperative. To this end, this paper proposes a distributed swarm optimizer based on a special master-slave model. Specifically, in this distributed optimizer, the master is mainly responsible for communication with the slaves, while each slave iterates a swarm to traverse the solution space. An asynchronous and adaptive communication strategy based on a request-response mechanism is devised to let the slaves communicate with the master efficiently; in particular, the communication between the master and each slave is adaptively triggered during the iteration. To help the slaves search the space efficiently, an elite-guided learning strategy is designed that utilizes elite particles in the current swarm and historically best solutions found by different slaves to guide the update of particles. Together, this distributed optimizer asynchronously iterates multiple swarms to collaboratively seek the optimum in parallel. Extensive experiments on a widely used large-scale benchmark set substantiate that the distributed optimizer: 1) achieves competitive effectiveness in terms of solution quality compared with state-of-the-art large-scale methods; 2) accelerates execution compared with the sequential algorithm, obtaining almost linear speedup as the number of cores increases; and 3) preserves good scalability for solving higher-dimensional problems.
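The abstract describes an elite-guided learning strategy in which each particle's update is guided by elite particles of the current swarm and historically best solutions found by other slaves. The paper's exact update equations are not reproduced in this record, so the following is only a hypothetical sketch of such an update step; the function name, coefficients (`w`, `c1`, `c2`), and the way guides are sampled are illustrative assumptions, not the authors' method.

```python
import random

def elite_guided_update(particle, velocity, elites, slave_bests,
                        w=0.7, c1=1.5, c2=1.5):
    """Illustrative elite-guided learning step (assumed form, not the
    paper's actual equations): each dimension of a particle is pulled
    toward a randomly chosen elite of the current swarm and toward a
    historically best solution found by some slave."""
    elite = random.choice(elites)           # elite particle in the current swarm
    hist_best = random.choice(slave_bests)  # historical best from another slave
    new_v, new_x = [], []
    for d in range(len(particle)):
        v = (w * velocity[d]
             + c1 * random.random() * (elite[d] - particle[d])
             + c2 * random.random() * (hist_best[d] - particle[d]))
        new_v.append(v)
        new_x.append(particle[d] + v)       # move the particle along the new velocity
    return new_x, new_v
```

In the distributed setting described in the abstract, `slave_bests` would be refreshed asynchronously via the request-response exchange with the master rather than being locally available, so each swarm keeps iterating without blocking on communication.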
Original language: English
Pages (from-to): 3393-3408
Journal: IEEE Transactions on Cybernetics
Volume: 50
Issue number: 7
Early online date: 9 Apr 2019
DOIs
Publication status: Published - Jul 2020
Externally published: Yes

Bibliographical note

This work was supported in part by the National Natural Science Foundation of China under Grant 61622206 and Grant 61876111, in part by the Natural Science Foundation of Guangdong under Grant 2015A030306024, in part by the Science and Technology Plan Project of Guangdong Province under Grant 2018B050502006, and in part by the Open Project Program of the State Key Laboratory of Mathematical Engineering and Advanced Computing.

Keywords

  • Distributed evolutionary algorithms
  • elite-guided learning (EGL)
  • high-dimensional problems
  • large-scale optimization
  • particle swarm optimization (PSO)
