Can LLMs verify human-like behavior in games?
Expectation vs. Reality: Towards Verification of Psychological Games
This paper explores "psychological games," in which an agent's payoff depends not only on its actions but also on its beliefs about other agents' actions (and on higher-order beliefs about those beliefs). It adapts existing game-theoretic algorithms to find optimal solutions in these scenarios and develops a framework for analyzing such games in multi-stage, probabilistic settings. This is relevant to LLM-based multi-agent systems because it allows modeling agents whose behavior is shaped by emotions, trust, or social norms. The research provides a way to analyze and design such systems, accounting for how an LLM's "beliefs" can influence its choices and the overall system's behavior. The proposed methods could be used to build LLM agents that learn to behave "fairly" or in accordance with social norms within a multi-agent environment.
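To make the idea of belief-dependent payoffs concrete, here is a minimal sketch (not from the paper; all numbers and names are illustrative) in the style of a guilt-averse trust game from psychological game theory: the second player's utility subtracts a guilt term when the outcome falls short of what they believe the first player expects.

```python
# Illustrative sketch of a belief-dependent (psychological) payoff.
# Hypothetical numbers: a trust game where player 2 "shares" or "keeps".

def material_payoffs(action):
    """Material payoffs (player 1, player 2) for player 2's action."""
    return {"share": (2, 2), "keep": (0, 4)}[action]

def psychological_payoff(action, belief_p1_expects, guilt=1.5):
    """Player 2's utility: material payoff minus guilt for letting
    player 1 down relative to what player 2 believes player 1 expects.
    The 'guilt' sensitivity parameter is an assumption for illustration."""
    p1_payoff, p2_payoff = material_payoffs(action)
    let_down = max(0.0, belief_p1_expects - p1_payoff)
    return p2_payoff - guilt * let_down

# If player 2 believes player 1 expects a payoff of 2 (full trust),
# guilt makes sharing optimal: share -> 2.0, keep -> 4 - 1.5*2 = 1.0.
best = max(["share", "keep"], key=lambda a: psychological_payoff(a, 2.0))
```

The key point the example shows: the same material game yields different optimal actions depending on the agent's belief about the other's expectation, which is exactly what standard (non-psychological) payoff functions cannot express.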