Abstract
Coevolutionary learning provides a framework for modeling more realistic iterated prisoner's dilemma (IPD) interactions and for studying the conditions under which certain behaviors (e.g., cooperation) in a complex environment can be learned through an adaptation process guided by strategic interactions. The coevolutionary learning of cooperative behaviors can be attributed to the mechanism of direct reciprocity (e.g., repeated encounters). However, for the more complex IPD game with more choices, it remains unclear precisely why the mechanism of direct reciprocity is less effective in promoting the learning of cooperative behaviors. Here, our study suggests that the evolution of defection may result from strategies effectively having more opportunities to exploit others when there are more choices. We note that strategies are less able to resolve the intention behind an intermediate choice, e.g., whether it is a signal to engender further cooperation or an attempt at subtle exploitation. A likely consequence is that strategies adapt toward lower cooperation plays, which offer higher payoffs in the short term, when they cannot resolve the intentions of opponents. However, cooperation in complex human interactions may also involve indirect interactions rather than direct interactions only. Following this, we study the coevolutionary learning of the IPD with more choices and reputation. Here, current behavioral interactions depend not only on choices made in previous moves (direct interactions), but also on choices made in past interactions, which are reflected in reputation scores (indirect interactions). The coevolutionary learning of cooperative behaviors is possible in the IPD with more choices when strategies use reputation as a mechanism to estimate the behaviors of future partners and to elicit mutual cooperation right from the start of interactions. In addition, we study how accurately different reputation implementations reflect strategy behaviors and why this accuracy is important for the evolution of cooperation. We show that the accuracy depends on how the memory of games from previous generations is incorporated into the calculation of reputation scores and on how frequently those scores are updated. © 2007 IEEE.
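As a rough illustration only (not taken from the paper), the sketch below shows one common way to generalize the IPD to more than two choices: discrete cooperation levels in [-1, +1] and a payoff obtained by linear interpolation of the classical payoff matrix, together with a hypothetical running-average reputation score. The payoff constants, the `n_choices` spacing, and the `update_reputation` rule are assumptions for illustration; the paper's actual formulation may differ.

```python
# Minimal sketch of an n-choice IPD payoff and a reputation score.
# ASSUMPTION: linear interpolation of the 2-choice payoff matrix over
# cooperation levels in [-1, +1] (-1 = full defection, +1 = full cooperation);
# the reputation update rule is hypothetical, not the paper's implementation.

def ipd_payoff(c_self: float, c_other: float) -> float:
    """Payoff for playing cooperation level c_self against c_other.

    With levels in [-1, +1], this interpolation gives the familiar corner
    payoffs T=5, R=4, P=1, S=0 (so T > R > P > S and 2R > T + S hold).
    """
    return 2.5 - 0.5 * c_self + 2.0 * c_other

def n_choices(n: int) -> list[float]:
    """Evenly spaced cooperation levels for an n-choice IPD (n >= 2)."""
    return [-1.0 + 2.0 * i / (n - 1) for i in range(n)]

def update_reputation(old_score: float, c_played: float, weight: float = 0.1) -> float:
    """Hypothetical reputation update: an exponentially weighted average of
    past cooperation levels; `weight` controls how much memory of earlier
    games (e.g., from previous generations) is retained."""
    return (1.0 - weight) * old_score + weight * c_played

if __name__ == "__main__":
    print(n_choices(4))              # e.g., a 4-choice IPD: [-1.0, -0.33..., 0.33..., 1.0]
    print(ipd_payoff(1.0, 1.0))      # mutual full cooperation -> 4.0
    print(ipd_payoff(-1.0, 1.0))     # full defection against a cooperator -> 5.0
    rep = 0.0
    for c in (1.0, 1.0, -1.0):       # reputation drifts with each observed move
        rep = update_reputation(rep, c)
    print(round(rep, 3))
```

In such a setup, a higher `weight` makes reputation track only the most recent games, while a lower `weight` preserves more memory across generations, which is one way to think about the accuracy and update-frequency effects the abstract refers to.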
Original language | English |
---|---|
Pages (from-to) | 689-711 |
Number of pages | 23 |
Journal | IEEE Transactions on Evolutionary Computation |
Volume | 11 |
Issue number | 6 |
Early online date | 29 Nov 2007 |
DOIs | |
Publication status | Published - Dec 2007 |
Externally published | Yes |
Bibliographical note
The work of X. Yao was supported in part by the Engineering and Physical Sciences Research Council (EPSRC) under Grant GR/T10671/01.
Keywords
- Coevolutionary learning
- Evolutionary computation
- Evolutionary games
- Iterated prisoner's dilemma (IPD)
- Reputation