Co-evolutionary learning of contextual asymmetric actors

Siang Yew CHONG, Christopher HILL, Xin YAO

Research output: Book Chapters | Papers in Conference Proceedings › Conference paper (refereed) › Research › peer-review


Co-evolutionary learning of the iterated prisoner's dilemma (IPD) has been used to model and simulate interactions, but such models may not be realistic because they assume a fixed, symmetric payoff matrix for all players. Recently, we proposed extending the co-evolutionary learning framework for any two-player repeated encounter game to model more realistic behavioral interactions. One issue we studied was endowing players with individual, self-adaptive payoff matrices to model individual variations in their utility expectations of rewards for making certain decisions. Here, we study a different issue involving contextual asymmetric actors: the differences in utility expectations (payoff matrices) arise from contextual (external) circumstances such as political roles, rather than from variations in individual (internal) preferences. We model interactions among contextually asymmetric actors through a multi-population structure in the co-evolutionary learning framework, where different populations representing different actor roles interact. We study how different actor roles, modelled by fixed and asymmetric payoff matrices, can affect the outcome of co-evolutionary learning. As an illustration, we apply co-evolutionary learning to two contextually asymmetric actors from the Spanish democratic transition. © ECMS.
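The multi-population setup described in the abstract can be illustrated with a minimal sketch: two populations, each assigned a fixed, role-specific (asymmetric) payoff matrix, whose members earn fitness by playing a repeated game against members of the other population and then reproduce within their own population. The role names, payoff values, strategy representation (a single cooperation probability), and selection scheme below are illustrative assumptions, not the authors' actual implementation.

```python
import random

# Illustrative role-specific payoff matrices (placeholder values, not from
# the paper). PAYOFFS[role][(my_move, opponent_move)] -> payoff to `role`.
PAYOFFS = {
    "role_A": {("C", "C"): 4, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1},
    "role_B": {("C", "C"): 3, ("C", "D"): 1, ("D", "C"): 6, ("D", "D"): 2},
}

def play(p, q, rounds=10):
    """Repeated encounter between a role_A player (cooperation probability p)
    and a role_B player (cooperation probability q); returns both payoffs."""
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = "C" if random.random() < p else "D"
        move_b = "C" if random.random() < q else "D"
        score_a += PAYOFFS["role_A"][(move_a, move_b)]
        score_b += PAYOFFS["role_B"][(move_b, move_a)]
    return score_a, score_b

def reproduce(pop, fitness):
    # Truncation selection: keep the fitter half, refill with mutated copies.
    ranked = [x for _, x in sorted(zip(fitness, pop), reverse=True)]
    elite = ranked[: len(pop) // 2]
    children = [min(1.0, max(0.0, x + random.gauss(0, 0.1))) for x in elite]
    return elite + children

def evolve(pop_size=20, gens=50, rounds=10, seed=0):
    """Co-evolve two populations, one per actor role, against each other."""
    random.seed(seed)
    pop_a = [random.random() for _ in range(pop_size)]
    pop_b = [random.random() for _ in range(pop_size)]
    for _ in range(gens):
        fit_a = [0.0] * pop_size
        fit_b = [0.0] * pop_size
        # Fitness comes only from cross-population interactions.
        for i, p in enumerate(pop_a):
            for j, q in enumerate(pop_b):
                a, b = play(p, q, rounds)
                fit_a[i] += a
                fit_b[j] += b
        pop_a = reproduce(pop_a, fit_a)
        pop_b = reproduce(pop_b, fit_b)
    return pop_a, pop_b
```

The key structural point matching the paper's framing is that the asymmetry is contextual: the payoff matrix is attached to the population (the role), not to the individual, and stays fixed throughout the run.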
Original language: English
Title of host publication: Proceedings - 23rd European Conference on Modelling and Simulation, ECMS 2009
Number of pages: 7
Publication status: Published - 9 Jun 2009
Externally published: Yes


  • Asymmetric payoff
  • Coevolutionary learning
  • Multi-population
  • Repeated encounter games
  • Spanish democratic transition


