Evolving Benchmark Functions for Optimization Algorithms

Yang LOU, Shiu Yin YUEN, Guanrong CHEN

Research output: Book Chapter (peer-reviewed)

Abstract

Optimization aims at finding optimal solution(s) among all feasible solutions, where an optimal solution represents the extremum with respect to a certain objective. This chapter introduces an evolving approach to generating benchmark testing problems, together with a systematic method for constructing performance-comparison-based benchmark problems, namely the hierarchical-fitness-based evolving benchmark generator (HFEBG). The chapter describes the HFEBG framework and two variants, HFEBG-U and HFEBG-H, which use the U-test and the H-test, respectively, in different hierarchical-fitness assignment methods. Testing optimization algorithms on both real-world problems and benchmark problems gives a performance measure. Performance comparison of optimization algorithms is studied in terms of unique difficulty, specifically, uniquely easy and uniquely difficult problems. A criticism has been expressed in the field that many proposed novel optimization algorithms actually contribute little, since they are not compared with the winners of competitions. The sequential learnable evolutionary algorithm provides an algorithm-selection framework for solving black-box continuous design-optimization problems.
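As a rough illustration of the comparison step the abstract alludes to, the sketch below computes a Mann-Whitney U statistic over two samples of best-of-run objective values, the kind of rank-based "U-test" comparison that a performance-comparison-based benchmark generator could use to decide whether one algorithm dominates another on a candidate problem. This is a minimal, self-contained sketch; the function name, the data, and the dominance interpretation are illustrative assumptions, not the chapter's actual HFEBG fitness-assignment procedure.

```python
def mann_whitney_u(a, b):
    """U statistic counting how often a value in `a` beats one in `b`.

    Assumes minimization: a smaller objective value is better.
    Ties contribute 0.5, as in the standard U-statistic definition.
    """
    u = 0.0
    for x in a:
        for y in b:
            if x < y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

# Illustrative (made-up) best-of-run objective values for two optimizers
# on one candidate benchmark problem.
alg1 = [0.10, 0.12, 0.09, 0.11, 0.13]
alg2 = [0.20, 0.18, 0.22, 0.19, 0.21]

u = mann_whitney_u(alg1, alg2)
n1, n2 = len(alg1), len(alg2)
# U near n1*n2 indicates alg1 dominates alg2 on this problem;
# U near 0 indicates the reverse; U near n1*n2/2 indicates no clear winner.
print(u, n1 * n2)
```

In practice one would convert U to a p-value (or use a library routine such as SciPy's `mannwhitneyu`) and feed the significance outcome into the benchmark generator's fitness assignment; the pure-Python statistic above is only meant to make the comparison idea concrete.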
Original language: English
Title of host publication: From Parallel to Emergent Computing
Editors: Andrew ADAMATZKY, Selim G. AKL, Georgios Ch. SIRAKOULIS
Place of publication: Boca Raton
Publisher: CRC Press, Taylor & Francis Group
Chapter: 11
Pages: 239–260
Number of pages: 22
Edition: 1
ISBN (Electronic): 9781315167084
Publication status: Published - 2019
Externally published: Yes
