Can LLMs learn social norms through dialogue?
Evolution of Social Norms in LLM Agents using Natural Language
This research investigates whether large language models (LLMs) acting as autonomous agents can establish social norms within a game environment solely through natural language interactions. Specifically, it examines the emergence of "metanorms": second-order norms under which agents punish those who fail to punish violators of an established norm.
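To make the metanorm mechanic concrete, here is a minimal sketch of one round of an Axelrod-style metanorms game, the classic setting this kind of study builds on. The payoff constants (T, H, P, E) and the boldness/vengefulness parameters are illustrative assumptions, not values taken from the paper.

```python
import random

# One round of an Axelrod-style metanorms game (illustrative payoffs).
T, H = 3, -1     # temptation gained by a defector; harm inflicted on each other agent
P, E = -9, -2    # penalty received by the punished; enforcement cost paid by the punisher

def play_round(agents):
    """Each agent is a dict with 'boldness', 'vengefulness' (both in [0, 1]) and 'score'."""
    see = random.random()                      # probability that any given act is observed
    for i, actor in enumerate(agents):
        if actor["boldness"] <= see:           # not bold enough to risk being seen defecting
            continue
        actor["score"] += T
        for j, witness in enumerate(agents):
            if j == i:
                continue
            witness["score"] += H
            saw_defection = random.random() < see
            if saw_defection and random.random() < witness["vengefulness"]:
                actor["score"] += P            # norm: punish the defector
                witness["score"] += E
            elif saw_defection:
                # metanorm: others may punish the witness for failing to punish
                for k, meta in enumerate(agents):
                    if k in (i, j):
                        continue
                    if random.random() < see and random.random() < meta["vengefulness"]:
                        witness["score"] += P
                        meta["score"] += E
```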
The key finding is that LLM agents can indeed develop and enforce complex social norms, including metanorms, through natural language discussion alone. The agents exhibited diverse strategies and behaviors shaped by their individual "personality" parameters and adapted those strategies over generations through a simulated evolutionary process driven by in-game rewards. This has important implications for understanding and controlling the emergence of norms and behaviors in LLM-based multi-agent systems.
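The reward-driven evolutionary process can be sketched as a simple generational loop, reusing play_round from the sketch above. The numeric mutation step here is only a stand-in for the natural-language strategy revision the LLM agents perform in the paper; the population size, survivor fraction, and mutation rate are assumptions for illustration.

```python
import random

def new_agent():
    # Random initial "personality" parameters
    return {"boldness": random.random(), "vengefulness": random.random(), "score": 0.0}

def mutate(parent, rate=0.1):
    # Copy a high-scoring agent and jitter its parameters slightly
    child = dict(parent)
    for key in ("boldness", "vengefulness"):
        child[key] = min(1.0, max(0.0, parent[key] + random.uniform(-rate, rate)))
    child["score"] = 0.0
    return child

def evolve(pop_size=20, generations=50, rounds_per_gen=4):
    agents = [new_agent() for _ in range(pop_size)]
    for _ in range(generations):
        for agent in agents:
            agent["score"] = 0.0
        for _ in range(rounds_per_gen):
            play_round(agents)                 # in-game rewards drive selection
        agents.sort(key=lambda a: a["score"], reverse=True)
        survivors = agents[: pop_size // 2]    # keep the top half
        agents = survivors + [mutate(random.choice(survivors))
                              for _ in range(pop_size - len(survivors))]
    return agents
```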