Cooperative co-evolution for large scale optimization through more frequent random grouping

Mohammad Nabi OMIDVAR, Xiaodong LI, Zhenyu YANG, Xin YAO

Research output: Book Chapters | Papers in Conference Proceedings › Conference paper (refereed) › peer-review

203 Citations (Scopus)

Abstract

In this paper, we propose three techniques to improve the performance of one of the major algorithms for large-scale continuous global function optimization. Multilevel Cooperative Co-evolution (MLCC) is based on a cooperative co-evolutionary framework and employs a technique called random grouping to place interacting variables into the same subcomponent. It also uses another technique, adaptive weighting, for the co-adaptation of subcomponents. We prove that the probability of grouping interacting variables into one subcomponent using random grouping drops significantly as the number of interacting variables increases. This calls for more frequent random grouping of variables. We show how to increase the frequency of random grouping without increasing the number of fitness evaluations. We also show that adaptive weighting is ineffective and in most cases fails to improve the quality of the found solutions, wasting a considerable amount of CPU time on extra evaluations of the objective function. Finally, we propose a new technique for the self-adaptation of subcomponent sizes in CC. We demonstrate how a substantial improvement can be gained by applying these three techniques. © 2010 IEEE.
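To illustrate the probability argument behind more frequent random grouping, the following Python sketch (not taken from the paper) estimates, by simulation, the probability that a fixed set of v interacting variables lands in the same subcomponent after one round of random grouping, assuming n decision variables are uniformly permuted and split into m equal-sized subcomponents. It also prints the rough independent-assignment approximation m^(1-v) for comparison. All names and parameter values here are illustrative assumptions, not values from the paper.

```python
import random

def prob_same_group(n, m, v, trials=200_000):
    """Monte Carlo estimate of the probability that a fixed set of v
    interacting variables ends up in the same subcomponent when n variables
    are randomly permuted and split into m equal-sized groups (one round of
    random grouping). Assumes equal group sizes n/m; illustrative only."""
    group_size = n // m
    hits = 0
    for _ in range(trials):
        # Positions of the v interacting variables after a uniform shuffle;
        # the group of a variable is its position divided by the group size.
        positions = random.sample(range(n), v)
        groups = {p // group_size for p in positions}
        if len(groups) == 1:
            hits += 1
    return hits / trials

if __name__ == "__main__":
    n, m = 1000, 10  # e.g. 1000 decision variables split into 10 subcomponents
    for v in (2, 3, 4, 5):
        est = prob_same_group(n, m, v)
        approx = m ** (1 - v)  # rough independent-assignment approximation
        print(f"v={v}: simulated ~ {est:.5f}, m^(1-v) ~ {approx:.5f}")
```

Under these assumptions, the estimated probability falls roughly geometrically as v grows, which is the qualitative behaviour the abstract points to when arguing for more frequent random grouping.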
Original language: English
Title of host publication: 2010 IEEE World Congress on Computational Intelligence, WCCI 2010 - 2010 IEEE Congress on Evolutionary Computation, CEC 2010
DOIs
Publication status: Published - Jul 2010
Externally published: Yes
