Measuring generalization performance in coevolutionary learning

Siang Yew CHONG, Peter TIÑO, Xin YAO

Research output: Journal Publications › Journal Article (refereed) › peer-review

47 Citations (Scopus)


Coevolutionary learning involves a training process where training samples are instances of solutions that interact strategically to guide the evolutionary (learning) process. One main research issue is the generalization performance, i.e., the search for solutions (e.g., input-output mappings) that best predict the required output for any new input that has not been seen during the evolutionary process. However, there is currently no such framework for determining the generalization performance in coevolutionary learning, even though the notion of generalization is well understood in machine learning. In this paper, we introduce a theoretical framework to address this research issue. We present the framework in terms of game-playing, although our results are more general. Here, a strategy's generalization performance is its average performance against all test strategies. Since the true value generally cannot be obtained analytically in closed form and computing it exhaustively is prohibitive, we propose an estimation procedure that instead computes the average performance against a small sample of random test strategies. We perform a mathematical analysis to provide a statistical claim on the accuracy of our estimation procedure, which can be further improved by performing a second estimation on the variance of the random variable. For game-playing, it is well known that one is more interested in the generalization performance against a biased and diverse sample of "good" test strategies. We introduce a simple approach to obtain such a test sample through the multiple partial enumerative search of the strategy space that does not require human expertise and is generally applicable to a wide range of domains. We illustrate the generalization framework on the coevolutionary learning of the iterated prisoner's dilemma (IPD) games.
We investigate two definitions of generalization performance for the IPD game based on different performance criteria, e.g., in terms of the number of wins based on individual outcomes and in terms of average payoff. We show that a small sample of test strategies can be used to estimate the generalization performance. We also show that the generalization performance using a biased and diverse set of "good" test strategies is lower compared to the unbiased case for the IPD game. This is the first time that generalization is defined and analyzed rigorously in coevolutionary learning. The framework allows the generalization performance of any coevolutionary learning system to be evaluated quantitatively. © 2008 IEEE.
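The estimation procedure the abstract describes — averaging a strategy's payoff over a small random sample of test strategies instead of the full strategy space — can be sketched as follows. This is an illustrative sketch, not the paper's implementation: the memory-one strategy encoding, the payoff values (the classic 3/0/5/1 IPD matrix), and the function names are all assumptions made for the example. By Chebyshev's inequality (one of the paper's keywords), the probability that such a sample mean deviates from the true average by more than ε shrinks as σ²/(Nε²), which is the kind of statistical accuracy claim the paper analyzes.

```python
import random

# Classic one-shot prisoner's dilemma payoffs for (my_move, opp_move).
# C = cooperate, D = defect.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def play_ipd(s1, s2, rounds=150):
    """Average per-round payoff of s1 against s2 in an iterated game.
    A strategy is a triple: (first move, reply to C, reply to D)."""
    m1, m2 = s1[0], s2[0]
    total = 0
    for _ in range(rounds):
        total += PAYOFF[(m1, m2)]
        # Both players respond simultaneously to the other's last move.
        m1, m2 = (s1[1] if m2 == "C" else s1[2],
                  s2[1] if m1 == "C" else s2[2])
    return total / rounds

def random_strategy(rng):
    """Unbiased random test strategy (uniform over the strategy space)."""
    return tuple(rng.choice("CD") for _ in range(3))

def estimate_generalization(strategy, n_tests=1000, seed=0):
    """Monte Carlo estimate of average payoff against random test strategies.
    Payoffs lie in [0, 5], so Chebyshev's inequality bounds the estimation
    error: P(|mean - true| >= eps) <= sigma^2 / (n_tests * eps^2)."""
    rng = random.Random(seed)
    scores = [play_ipd(strategy, random_strategy(rng)) for _ in range(n_tests)]
    return sum(scores) / n_tests

tit_for_tat = ("C", "C", "D")   # cooperate first, then mirror the opponent
always_defect = ("D", "D", "D")
print(estimate_generalization(tit_for_tat))
print(estimate_generalization(always_defect))
```

Obtaining a biased sample of "good" test strategies, as the paper proposes via multiple partial enumerative search, would replace `random_strategy` with a selection step that keeps only high-performing strategies from enumerated subsets of the space.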
Original language: English
Pages (from-to): 479-505
Number of pages: 27
Journal: IEEE Transactions on Evolutionary Computation
Issue number: 4
Publication status: Published - Aug 2008
Externally published: Yes

Bibliographical note

The work of X. Yao was supported in part by the Engineering and Physical Sciences Research Council (EPSRC) under Grant GR/T10671/01.


Keywords

  • Chebyshev's inequality
  • Coevolutionary learning
  • Evolutionary computation
  • Generalization
  • Iterated prisoner's dilemma (IPD)


