In noisy evolutionary optimization, sampling is a common strategy for dealing with noise: it evaluates the fitness of a solution multiple times independently (the number of evaluations is called the sample size) and uses the average to approximate the true fitness. Previous studies mainly focused on the empirical design of efficient sampling strategies, and the few theoretical analyses mainly proved the effectiveness of sampling with a fixed sample size in some situations. Many fundamental theoretical issues remain to be addressed. In this paper, we first investigate the effect of the sample size. By analyzing the (1+1)-EA on noisy LeadingOnes, we show that as the sample size increases, the running time can first drop from exponential to polynomial, but then return to exponential. This discloses that a proper sample size is crucial in practice. We then investigate what other strategies can work when sampling with any fixed sample size fails. Through two illustrative examples, we prove that using parent populations can be better, and that when parent populations are also ineffective, adaptive sampling (i.e., sampling with an adaptive sample size) can work. © 2018 Association for Computing Machinery.
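The sampling strategy described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact setting: the one-bit prior-noise model, the noise probability, and all parameter names are assumptions chosen for concreteness.

```python
import random

def leading_ones(x):
    """True LeadingOnes fitness: the number of leading 1-bits."""
    count = 0
    for bit in x:
        if bit != 1:
            break
        count += 1
    return count

def noisy_fitness(x, noise_prob=0.3):
    """Noisy evaluation under an assumed one-bit noise model: with
    probability noise_prob, a copy of x with one uniformly chosen bit
    flipped is evaluated instead of x itself."""
    if random.random() < noise_prob:
        y = list(x)
        i = random.randrange(len(y))
        y[i] = 1 - y[i]
        return leading_ones(y)
    return leading_ones(x)

def sampled_fitness(x, k, noise_prob=0.3):
    """Sampling with sample size k: average of k independent
    noisy evaluations, approximating the true fitness."""
    return sum(noisy_fitness(x, noise_prob) for _ in range(k)) / k

def one_plus_one_ea(n, k, max_evals=100_000, noise_prob=0.3):
    """(1+1)-EA with sampling: standard bit mutation with rate 1/n;
    the offspring replaces the parent if its sampled fitness is at
    least as large. Returns the number of noisy evaluations used."""
    x = [random.randrange(2) for _ in range(n)]
    evals = 0
    while evals < max_evals:
        if leading_ones(x) == n:  # true-optimum check, for the experiment only
            return evals
        y = [1 - b if random.random() < 1.0 / n else b for b in x]
        if sampled_fitness(y, k, noise_prob) >= sampled_fitness(x, k, noise_prob):
            x = y
        evals += 2 * k
    return evals
```

Running `one_plus_one_ea(n, k)` for a range of sample sizes `k` would let one observe empirically the effect the paper analyzes theoretically: too small a `k` leaves too much noise in the comparison, while too large a `k` wastes evaluations on every generation.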
|Title of host publication
|GECCO 2018 - Proceedings of the 2018 Genetic and Evolutionary Computation Conference
|Association for Computing Machinery, Inc
|Published - 2 Jul 2018
Bibliographical note: This work was supported by the Ministry of Science and Technology of China (2017YFC0804003), the NSFC (61603367, 61672478), the YESS (2016QNRC001), the Science and Technology Innovation Committee Foundation of Shenzhen (ZDSYS201703031748284), and the Royal Society Newton Advanced Fellowship (NA150123).
- Computational complexity
- Evolutionary algorithms
- Noisy optimization
- Running time analysis