Abstract
Over the last decade, research on automated parameter tuning, often referred to as automatic algorithm configuration (AAC), has made significant progress. Although the usefulness of such tools has been widely recognized in real-world applications, the theoretical foundations of AAC are still very weak. This paper addresses this gap by studying the performance estimation problem in AAC. More specifically, this paper first establishes the universally best performance estimator in a practical setting, and then derives theoretical bounds on the estimation error, i.e., the difference between the training performance and the true performance of a parameter configuration, considering finite and infinite configuration spaces respectively. These findings were verified in extensive experiments conducted on four algorithm configuration scenarios involving different problem domains. Moreover, insights for enhancing existing AAC methods are also identified. Copyright © 2020, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
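To make the core quantities concrete, the sketch below illustrates the gap between training performance and true performance for a single configuration. It is a minimal toy model, not the paper's estimator: the per-instance mean runtimes, the exponential runtime model, and the function names are all assumptions introduced here for illustration.

```python
import random
import statistics

# Hypothetical setup: the runtime of one parameter configuration on each
# training instance is a random variable; its "true" performance is the
# expected runtime over the instance distribution.
random.seed(0)

instance_means = [2.0, 5.0, 3.5, 8.0]  # assumed per-instance mean runtimes
true_performance = statistics.mean(instance_means)

def sample_runtime(mean):
    # Exponential runtime model (an assumption for illustration only).
    return random.expovariate(1.0 / mean)

def training_performance(runs_per_instance):
    # Estimate performance by averaging sampled runtimes over all
    # (instance, run) pairs in the training set.
    samples = [sample_runtime(m)
               for m in instance_means
               for _ in range(runs_per_instance)]
    return statistics.mean(samples)

for k in (1, 10, 100):
    est = training_performance(k)
    print(f"runs/instance={k:3d}  estimate={est:.2f}  "
          f"error={abs(est - true_performance):.2f}")
```

As the number of runs per instance grows, the training performance concentrates around the true performance, which is the kind of estimation-error behaviour the paper's bounds quantify.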
| Original language | English |
| --- | --- |
| Title of host publication | AAAI 2020 - 34th AAAI Conference on Artificial Intelligence |
| Publisher | AAAI Press |
| Pages | 2384-2391 |
| Number of pages | 8 |
| ISBN (Print) | 9781577358350 |
| Publication status | Published - 2020 |
| Externally published | Yes |
Funding
This work was supported in part by the National Key Research and Development Program of China (Grant No. 2017YFC0804003), the National Natural Science Foundation of China (Grant Nos. 61672478, 61806090), the Program for University Key Laboratory of Guangdong Province (Grant No. 2017KSYS008), the Program for Guangdong Introducing Innovative and Entrepreneurial Teams (Grant No. 2017ZT07X386), and the Shenzhen Peacock Plan (Grant No. KQTD2016112514355531).