A dilemma for fitness sharing with a scaling function

Abstract
Fitness sharing has been widely used in genetic algorithms for multi-objective function optimization and machine learning. It is often implemented with a scaling function, which adjusts an individual's raw fitness to improve the performance of the genetic algorithm. However, choosing a scaling function is an ad hoc affair that lacks sufficient theoretical foundation. Although this shortcoming is known, an explanation of why scaling works has been lacking. This paper explains why a scaling function is often needed for fitness sharing. We investigate fitness sharing's performance at multi-objective optimization, demonstrate the need for a scaling function of some kind, and discuss what form of scaling function would be best. We provide both theoretical and empirical evidence that fitness sharing with a scaling function suffers from a dilemma which can easily be mistaken for deception. Our theoretical analyses and empirical studies explain why a larger-than-necessary population is needed for fitness sharing with a scaling function to work, and account for common fixes such as post-processing with a hill-climbing algorithm. Our explanation predicts that annealing the scaling power during a run will improve results, and we verify that it does.
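To make the mechanism concrete, here is a minimal sketch of fitness sharing with a power-scaling function. It is not the paper's implementation; the triangular sharing kernel, the parameter names (`sigma`, `alpha`, `beta`), and the one-dimensional genotype distance are standard textbook assumptions chosen for illustration. Raw fitness is raised to the scaling power `beta` and then divided by the niche count, so a larger `beta` sharpens selection pressure within each niche:

```python
def sharing(d, sigma=1.0, alpha=1.0):
    # Standard triangular sharing kernel: 1 at d = 0, falling to 0 at d >= sigma.
    return 1.0 - (d / sigma) ** alpha if d < sigma else 0.0

def shared_fitness(raw, positions, beta, sigma=1.0):
    # Shared fitness: raise raw fitness to the scaling power beta,
    # then divide by the niche count (sum of sharing values over the population).
    out = []
    for f, x in zip(raw, positions):
        niche = sum(sharing(abs(x - y), sigma) for y in positions)
        out.append((f ** beta) / niche)
    return out
```

Annealing the scaling power, as the abstract proposes, would amount to scheduling `beta` over the run, e.g. starting near 1 (weak within-niche pressure, good niche formation) and increasing it toward a maximum in later generations.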
| Original language | English |
|---|---|
| Title of host publication | Proceedings of 1995 IEEE International Conference on Evolutionary Computation |
| Pages | 166-171 |
| Number of pages | 6 |
| Volume | 1 |
| Publication status | Published - 1995 |
| Externally published | Yes |