Can LLMs foster cooperation in multi-agent apps?
Evolution of Cooperation in LLM-Agent Societies: A Preliminary Study Using Different Punishment Strategies
This paper explores how cooperation evolves in groups of LLM-powered agents interacting in a simulated "diner's dilemma" scenario. Each agent chooses between a cheap and an expensive meal while the bill is split across the group, so ordering the expensive meal benefits the individual at the group's expense; agents can also punish selfish behavior at a cost to themselves. The research investigates how different agent strategies (e.g., always cooperate and punish, cooperate only after being punished) spread through the group via a simulated evolutionary process.
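To make the incentive structure concrete, here is a minimal Python sketch of a diner's-dilemma payoff with costly punishment. All numeric parameters and function names are illustrative assumptions, not values or code from the paper.

```python
# Minimal diner's dilemma payoff sketch. All numeric parameters are
# illustrative assumptions; the paper's exact values may differ.

CHEAP_COST, CHEAP_VALUE = 4.0, 5.0   # cooperative choice: low cost, modest enjoyment
EXP_COST, EXP_VALUE = 10.0, 8.0      # selfish choice: high cost, more enjoyment
PUNISH_COST, PUNISH_FINE = 1.0, 3.0  # punisher pays the cost; target pays the fine

def diner_payoffs(choices: list[str]) -> list[float]:
    """Per-agent utility when the total bill is split equally.

    Ordering "expensive" raises an agent's own enjoyment more than its
    share of the bill, so defection pays individually, yet everyone is
    worse off if all defect -- the social dilemma.
    """
    bill = sum(EXP_COST if c == "expensive" else CHEAP_COST for c in choices)
    share = bill / len(choices)
    return [(EXP_VALUE if c == "expensive" else CHEAP_VALUE) - share
            for c in choices]

def apply_punishment(payoffs: list[float], punisher: int, target: int) -> None:
    """Costly punishment: both the punisher and the punished lose utility."""
    payoffs[punisher] -= PUNISH_COST
    payoffs[target] -= PUNISH_FINE

# A lone defector among three cooperators out-earns them...
print(diner_payoffs(["expensive", "cheap", "cheap", "cheap"]))  # [2.5, -0.5, -0.5, -0.5]
# ...but universal defection is worse than universal cooperation.
print(diner_payoffs(["expensive"] * 4))  # [-2.0, -2.0, -2.0, -2.0]
print(diner_payoffs(["cheap"] * 4))      # [1.0, 1.0, 1.0, 1.0]
```

With these assumed parameters, defecting is individually dominant while mutual cooperation is collectively best, which is exactly the tension that punishment is meant to resolve.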
Key findings indicate that LLMs can implement these strategies faithfully, and that punishment is crucial for driving cooperation. Explicitly specifying the punishment cost leads to faster and more consistent cooperation than leaving it for the LLM to decide. Because strategy adoption is stochastic, however, outcomes vary from run to run. This suggests LLMs are a promising tool for simulating realistic multi-agent interactions, offering a more nuanced approach than abstract mathematical models.
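The run-to-run variance comes from the update rule itself: which strategy an agent adopts next is probabilistic, so identical starting populations can evolve differently. As a sketch of one plausible rule (the Fermi pairwise-comparison update, a standard choice in evolutionary game theory, though not necessarily the one the paper uses):

```python
import math
import random

def adopts_other_strategy(own_payoff: float, other_payoff: float,
                          beta: float = 1.0) -> bool:
    """Fermi pairwise-comparison rule (an assumed update rule, not
    necessarily the paper's): the chance of imitating another agent's
    strategy grows with its payoff advantage but never reaches certainty,
    so repeated runs can settle at different cooperation levels.
    """
    p = 1.0 / (1.0 + math.exp(-beta * (other_payoff - own_payoff)))
    return random.random() < p
```

Here `beta` sets the selection strength: near zero, adoption is almost random drift; as it grows, agents almost deterministically copy whichever strategy earned more.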