2007 | Original Paper | Book Chapter
Cooperation is not always so simple to learn
Authors: M. Mailliard, F. Amblard, C. Sibertin-Blanc, P. Roggero
Published in: Agent-Based Approaches in Economic and Social Complex Systems IV
Publisher: Springer Japan
In this paper, we study the influence of different learning mechanisms for social behaviours on a given multi-agent model (Sibertin-Blanc et al. 2005). The model was elaborated from a formalization of the theory of organized action (Crozier and Friedberg 1977) and is based on modelling control and dependency relationships between resources and actors. The proposed learning mechanisms cover different possible implementations of classifier systems on this model. To compare our results with existing ones in a classical framework, we restrict the study here to cases corresponding to the prisoner's dilemma. The results exhibit variability in convergence times as well as in emergent social behaviours, depending on the implementation choices for the learning classifier systems (LCS) and on the LCS parameters. We conclude by analysing the sources of this variability and by giving perspectives on the use of such a model in broader cases.