How can LLMs strategize in changing games?
Factorised Active Inference for Strategic Multi-Agent Interactions
November 13, 2024
https://arxiv.org/pdf/2411.07362

This paper proposes a new way to model how multiple AI agents interact strategically, using a framework called Active Inference (AIF). Instead of assuming agents have perfect knowledge of each other, each agent maintains its own beliefs about the other agents' hidden "mental states" and preferences, updating these beliefs as the game unfolds. This factorised approach brings the model closer to game-theoretic principles. The researchers apply the model to iterated games and show how it tracks shifts in agent behavior during game transitions, where payoffs change, highlighting how equilibrium states are reached.
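To make the "factorised beliefs" idea concrete, here is a minimal sketch (our own illustration, not the paper's actual AIF model): each agent keeps a separate Dirichlet belief over every other agent's action tendencies and updates it with a conjugate Bayesian step each round, rather than assuming a shared, fully known joint model.

```python
import numpy as np

class FactorisedBeliefs:
    """Independent belief over each other agent's action distribution."""

    def __init__(self, n_agents, n_actions):
        # One Dirichlet count vector per other agent (uniform prior).
        self.counts = {j: np.ones(n_actions) for j in range(n_agents)}

    def update(self, agent_id, observed_action):
        # Conjugate Bayesian update: bump the count for the action seen.
        self.counts[agent_id][observed_action] += 1

    def predict(self, agent_id):
        # Posterior predictive distribution over that agent's next action.
        c = self.counts[agent_id]
        return c / c.sum()

beliefs = FactorisedBeliefs(n_agents=2, n_actions=2)
for _ in range(8):            # opponent (agent 1) repeatedly plays action 0
    beliefs.update(1, 0)
print(beliefs.predict(1))     # belief mass concentrates on action 0
```

The factorisation is what keeps this tractable: beliefs about each opponent are updated independently, so the state space grows linearly rather than exponentially in the number of agents.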
Key points relevant to LLM-based multi-agent systems:
- Factorised beliefs: Agents maintain individual beliefs about others' hidden states, mirroring how LLMs could model other agents' intentions and strategies.
- Adaptive behavior: Agents adapt their actions based on observed behavior and updated beliefs, relevant for building responsive and adaptive LLM agents.
- Strategic uncertainty: Agents don't know other agents' payoff functions, reflecting realistic multi-agent scenarios where LLMs might need to infer goals and preferences.
- Equilibrium selection: The model shows how agents converge to different equilibrium states under varying conditions, useful for analyzing and designing stable multi-agent LLM systems.
- Game transitions: The model handles dynamic environments where game rules change, offering insights for developing robust LLM agents in non-stationary scenarios.
- Joint context: While maintaining factorised beliefs, the model integrates them into a joint interaction context defined by the overall game payoff function, which is important for coordinating the actions of LLM-based multi-agent systems working toward a common objective.
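The game-transition and equilibrium-selection points above can be sketched with a toy best-response rule (our own illustration with invented payoffs, not the paper's expected-free-energy machinery): the agent picks the action that maximises expected payoff under its current belief about the opponent, and the same machinery keeps working when the payoff matrix changes mid-interaction.

```python
import numpy as np

def best_response(payoff, opponent_belief):
    """Action maximising expected payoff; payoff[a_self, a_other]."""
    expected = payoff @ opponent_belief
    return int(np.argmax(expected))

belief = np.array([0.9, 0.1])       # opponent very likely plays action 0

# Coordination-style payoffs: matching the opponent on action 0 pays best.
game_a = np.array([[4.0, 0.0],
                   [3.0, 3.0]])
print(best_response(game_a, belief))   # coordinate on action 0

# Game transition: payoffs shift so action 1 now dominates.
game_b = np.array([[1.0, 0.0],
                   [2.0, 2.0]])
print(best_response(game_b, belief))   # switch to action 1
```

Under this sketch, which equilibrium the agents settle into depends jointly on the payoff matrix and on the beliefs accumulated so far, mirroring the paper's point that belief trajectories drive equilibrium selection during transitions.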