How can LLMs detect shifting alliances in natural language games?
Dynamic Coalition Structure Detection in Natural Language-based Interactions
This paper tackles the challenge of predicting how and why alliances form in multi-agent systems where agents communicate in natural language, specifically within the game of Diplomacy. It proposes a two-stage method: (1) detect potential agreements in the dialogue using a combination of large language models (LLMs) and game-specific intent models, and (2) evaluate the likelihood of those agreements being honored with a game-theoretic criterion called "subjective rationalizability," which accounts for each agent's individual perspective and uncertainty. For LLM-based multi-agent systems, the key takeaway is that pairing LLMs for dialogue parsing with game-theoretic reasoning enables effective strategic prediction in a complex, partially observable environment; the paper also highlights how intent recognition from specialized models complements the broader context parsing that LLMs provide.
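The two-stage pipeline described above can be sketched in miniature. This is purely illustrative, not the paper's implementation: stage 1 stands in for the LLM/intent-model agreement detector with a toy keyword matcher, and stage 2 stands in for subjective rationalizability with a simple check that, under each agent's own estimated payoffs, honoring the agreement is at least as good as defecting. All names (`Agreement`, `detect_agreements`, the sample messages and payoffs) are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Agreement:
    proposer: str
    partner: str
    action: str  # e.g. "support_hold"

# Stage 1 (stand-in): the paper uses LLMs plus game-specific intent
# models to parse dialogue; here a toy keyword matcher suffices.
def detect_agreements(messages):
    found = []
    for sender, receiver, text in messages:
        if "support" in text.lower():
            found.append(Agreement(sender, receiver, "support_hold"))
    return found

# Stage 2 (stand-in for subjective rationalizability): predict that an
# agreement holds only if, from each agent's subjective payoff estimate,
# honoring it is at least as good as defecting.
def is_subjectively_rationalizable(agreement, payoffs):
    for agent in (agreement.proposer, agreement.partner):
        honor_value, defect_value = payoffs[agent]
        if defect_value > honor_value:
            return False
    return True

# Hypothetical dialogue and per-agent (honor, defect) payoff estimates.
messages = [
    ("FRANCE", "GERMANY", "I will support your hold in Munich."),
    ("RUSSIA", "TURKEY", "Lovely weather over the Black Sea."),
]
payoffs = {"FRANCE": (3, 1), "GERMANY": (2, 2)}

agreements = detect_agreements(messages)
credible = [a for a in agreements if is_subjectively_rationalizable(a, payoffs)]
```

The separation mirrors the paper's design choice: dialogue understanding and strategic evaluation are decoupled, so either stage can be swapped out (e.g. a stronger parser, or a richer belief model) independently.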