Can I verify human-like strategic reasoning in MAS?
A Model Checker for Natural Strategic Ability
October 21, 2024
https://arxiv.org/pdf/2410.14374

This paper introduces the first model checker tool for NatATL, a logic designed to model bounded rationality in multi-agent systems and thereby capture more human-like decision-making. The tool excels at:
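For intuition, a natural strategy in NatATL is roughly an ordered list of condition-action rules, and its complexity is the size of that representation; a NatATL query asks whether a coalition can enforce a goal using a strategy whose complexity stays under a bound k. The sketch below is illustrative Python, not the tool's actual API: the `Rule`/`NaturalStrategy` classes and the symbol-count complexity measure are simplifying assumptions.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Rule:
    """A guarded command: if `condition` holds in the current state, play `action`."""
    condition: str  # boolean condition over state propositions, e.g. "door_open and not alarm"
    action: str     # an action label available to the agent

@dataclass
class NaturalStrategy:
    """A natural (memoryless) strategy: an ordered list of condition-action rules.
    The first rule whose condition matches the current state fires."""
    rules: List[Rule]

    def complexity(self) -> int:
        # Simplified complexity measure for illustration: total number of
        # symbols across all rule conditions.
        return sum(len(r.condition.split()) for r in self.rules)

# A tiny example strategy; its complexity is what gets compared against the
# bound k in a NatATL query such as <<A>>^{<=k} F goal.
s = NaturalStrategy([
    Rule("alarm", "call_security"),
    Rule("door_open", "close_door"),
    Rule("True", "wait"),  # default rule
])
print(s.complexity())
```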
- Synthesizing optimal strategies: It efficiently generates winning strategies for coalitions of agents, prioritizing those with the lowest complexity, which is valuable for LLM-based agents with limited computational resources (see the synthesis sketch after this list).
- Handling strategies with and without memory: The tool can verify both memoryless and history-dependent strategies, catering to diverse LLM agent designs that may or may not rely on past interactions.
- Practicality for LLM development: By combining ATL and NatATL checks, the tool significantly reduces verification time on real-world scenarios, especially when a formula turns out to be unsatisfiable, a common situation when iterating on LLM-based designs.
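To make the workflow concrete, here is a hedged sketch of how such bounded synthesis can be organized: a standard ATL check acts as a fast pre-filter (if the coalition cannot win even with unrestricted strategies, no complexity-bounded natural strategy exists either), and candidate natural strategies are then enumerated in order of increasing complexity up to the bound k. The helpers `atl_holds`, `candidates_by_complexity`, and `wins` are hypothetical stand-ins for the tool's internals, not its real API.

```python
from typing import Any, Callable, Iterable, Optional

Strategy = Any  # stands in for the natural-strategy representation sketched above

def synthesize(
    atl_holds: Callable[[], bool],
    candidates_by_complexity: Callable[[int, bool], Iterable[Strategy]],
    wins: Callable[[Strategy], bool],
    k: int,
    memoryless: bool = True,
) -> Optional[Strategy]:
    """Sketch of bounded synthesis for a NatATL query <<A>>^{<=k} goal.

    atl_holds                -- standard ATL model check, used as a quick pre-filter
    candidates_by_complexity -- yields candidate natural strategies, cheapest first,
                                either memoryless or history-dependent
    wins                     -- verifies one candidate strategy against the goal
    """
    # 1. ATL pre-check: if even unrestricted strategies cannot enforce the goal,
    #    the bounded NatATL query is unsatisfiable and we stop immediately.
    if not atl_holds():
        return None

    # 2. Enumerate candidates cheapest-first, so the first winner returned is
    #    also the lowest-complexity strategy found within the bound k.
    for strategy in candidates_by_complexity(k, memoryless):
        if wins(strategy):
            return strategy

    # 3. No strategy within the complexity bound k enforces the goal.
    return None
```

The cheapest-first enumeration and the classical pre-check are consistent with the summary above: unsatisfiable goals can be rejected by a single ATL check instead of an exhaustive walk through the space of bounded strategies.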
This makes it a promising tool for building and verifying LLM-based multi-agent systems that exhibit more realistic and human-like behavior.