How can LLMs play games rationally?
Game-theoretic LLM: Agent Workflow for Negotiation Games
This paper examines how well Large Language Models (LLMs) make rational decisions in strategic settings such as negotiations, using game theory as the evaluation framework. It finds that LLMs often struggle in complex games and deviate from optimal strategies, especially under uncertainty. To address this, the researchers designed game-theory-inspired workflows that guide the LLM's decision-making, and these workflows substantially improved the agents' ability to reach agreements and achieve near-optimal outcomes.

However, the study also shows that LLM agents following these workflows can be exploited by agents that do not, raising a meta-strategic question of when adopting such a workflow is actually beneficial. In addition, LLM rationality turned out to be surprisingly sensitive to seemingly minor changes in a game's parameters, such as the exact numerical payoffs, and even to the "personality" assigned to the LLM. Finally, the study compared LLM irrationality with human irrationality, finding that the two deviate from optimal play in different ways, even though both lead to sub-optimal outcomes overall.
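To make the idea of a game-theory-guided workflow concrete, here is a minimal sketch for an alternating-offers bargaining game: a backward-induction solution is computed first and then injected into the agent's prompt as strategic guidance. This is an illustration of the general approach, not the paper's actual implementation; the function names, parameters, and prompt wording are assumptions made for this example.

```python
# Minimal illustrative sketch (not the paper's implementation): pre-compute the
# backward-induction solution of a finite-horizon alternating-offers bargaining
# game and turn it into strategic guidance for an LLM negotiation agent.

def equilibrium_proposer_share(total: float, rounds: int,
                               delta_a: float, delta_b: float) -> float:
    """Round-1 proposer's (player A's) share under backward induction.

    A and B alternate proposals over `rounds` rounds (A proposes in round 1,
    so odd rounds are A's). If no agreement is reached, both get 0. delta_a
    and delta_b are the players' per-round discount factors.
    """
    # In the final round the proposer can keep (essentially) the whole pie.
    proposer_share = total
    # Walk backwards: the responder in round r is the proposer in round r+1,
    # so rejecting is worth their discounted share from the next round.
    for r in range(rounds - 1, 0, -1):
        proposer_is_a = (r % 2 == 1)
        responder_delta = delta_b if proposer_is_a else delta_a
        responder_continuation = responder_delta * proposer_share
        proposer_share = total - responder_continuation
    return proposer_share


def build_guidance_prompt(total: float, rounds: int,
                          delta_a: float, delta_b: float) -> str:
    """Compose guidance text that a workflow could prepend to the LLM's
    negotiation prompt (wording is illustrative)."""
    my_share = equilibrium_proposer_share(total, rounds, delta_a, delta_b)
    return (
        f"You are splitting ${total:.0f} over at most {rounds} rounds of "
        f"alternating offers. Backward induction suggests proposing to keep "
        f"about ${my_share:.2f} and offering ${total - my_share:.2f} in round 1. "
        f"Reject any offer worth less than your discounted continuation value."
    )


if __name__ == "__main__":
    print(build_guidance_prompt(total=100, rounds=3, delta_a=0.9, delta_b=0.9))
```

With a $100 pie, three rounds, and discount factors of 0.9 for both players, the sketch advises the first proposer to keep about $91, which is the subgame-perfect split for that setup; the intuition is that this kind of pre-computed equilibrium reasoning is what the workflows supply to the LLM instead of leaving the strategy entirely to free-form generation.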