Abstract
Large-scale optimization problems (LSOPs) are an essential research topic in the evolutionary computation community. Many large-scale optimization algorithms maintain a large population to enhance diversity. However, updating such a large population consumes a significant number of fitness evaluations (FEs), which can leave the population insufficiently evolved within a limited budget. This article therefore proposes a small-scale learning particle swarm optimization (SSLPSO) for solving LSOPs. In the small-scale learning mechanism, at most two representative individuals are updated in each generation, which effectively saves FEs and prolongs the evolutionary process, thereby refining solution accuracy. Specifically, we first design a representative individual selection (RIS) strategy to select the convergence representative individual and the diversity representative individual for updating. Then, we develop a representative individual learning (RIL) strategy, which includes a convergence learning method and a diversity learning method for the convergence representative individual and the diversity representative individual, respectively. In addition, we propose an adaptive strategy adjustment (ASA) method based on evolutionary state assessment to determine whether the representative individuals should be updated, further achieving adaptive adjustment of the population's evolutionary behavior. Experimental results on the commonly used large-scale test suites, IEEE CEC2010 and IEEE CEC2013, show that SSLPSO performs significantly better than, or at least comparably to, other state-of-the-art large-scale optimization algorithms, including the winners of large-scale optimization competitions. Finally, the application of SSLPSO to a large-scale constrained water distribution network optimization problem further demonstrates its real-world applicability.
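The core idea in the abstract, re-evaluating at most two representative individuals per generation so the FE budget stretches over many more generations, can be sketched in a toy loop. This is an illustrative sketch only, not the paper's actual SSLPSO: the selection rules used here (the worst particle as the convergence representative, the particle nearest the population mean as the diversity representative), the update formulas, and the greedy replacement are all assumptions made for demonstration.

```python
import numpy as np

def sphere(x):
    # Simple separable test function; minimum 0 at the origin.
    return float(np.sum(x ** 2))

def small_scale_search(f, dim=10, pop_size=20, max_fes=2000, seed=1):
    """Toy small-scale learning loop: only two individuals are
    re-evaluated per generation, so the FE budget spans many more
    generations than a full-population update would allow."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-5.0, 5.0, (pop_size, dim))
    fit = np.array([f(x) for x in pop])
    fes = pop_size                      # initial evaluations
    while fes + 2 <= max_fes:
        best = pop[np.argmin(fit)]
        # Convergence representative (hypothetical rule): the worst
        # particle, pulled toward the current best.
        ci = int(np.argmax(fit))
        cand_c = pop[ci] + rng.random(dim) * (best - pop[ci])
        # Diversity representative (hypothetical rule): the particle
        # closest to the population mean, perturbed away from it.
        mean = pop.mean(axis=0)
        di = int(np.argmin(np.linalg.norm(pop - mean, axis=1)))
        cand_d = pop[di] + rng.normal(0.0, 1.0, dim) * (pop[di] - mean)
        # Only two fitness evaluations are spent per generation.
        for idx, cand in ((ci, cand_c), (di, cand_d)):
            fc = f(cand)
            fes += 1
            if fc < fit[idx]:           # greedy replacement
                pop[idx], fit[idx] = cand, fc
    return float(fit.min()), fes
```

Because replacement is greedy, the best fitness in the population is non-increasing, so even this crude two-individual update steadily consumes its FE budget without ever losing the incumbent solution.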
| Original language | English |
|---|---|
| Pages (from-to) | 523-536 |
| Number of pages | 14 |
| Journal | IEEE Transactions on Cybernetics |
| Volume | 56 |
| Issue number | 1 |
| Early online date | 17 Sept 2025 |
| DOIs | |
| Publication status | Published - Jan 2026 |
Bibliographical note
Publisher Copyright: © 2013 IEEE.
Funding
This work was supported in part by the National Natural Science Foundation of China (NSFC) under Grant 62106055; in part by the Guangdong Natural Science Foundation under Grant 2025A1515010256; and in part by the Guangzhou Science and Technology Planning Project under Grant 2023A04J0388 and Grant 2023A03J0662.
Keywords
- Evolutionary computation
- large-scale optimization
- particle swarm optimization (PSO)
- small-scale learning particle swarm optimization (SSLPSO)
Title
Less Is More: A Small-Scale Learning Particle Swarm Optimization for Large-Scale Optimization