Can GNNs better explain multi-agent communication?
Evaluating and Improving Graph-based Explanation Methods for Multi-Agent Coordination
This paper investigates how to explain the decisions of AI agents that coordinate as a team, particularly when that coordination is mediated by Graph Neural Networks (GNNs). It evaluates existing GNN explanation methods on multi-agent coordination tasks and finds them useful but imperfect, then proposes a training technique that encourages clearer agent communication patterns, which in turn yields higher-quality explanations. This is relevant to LLM-based multi-agent systems because it offers a way to understand and debug the interactions among LLMs working as a team: graph-based explanations directly target the communication flow and influence between agents, and the improved explanation quality promises greater transparency and control in multi-agent LLM applications.
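To make the "clearer communication patterns" idea concrete, below is a minimal sketch of one plausible form such a training regularizer could take: an entropy penalty on graph-attention weights, added to the usual policy loss. The function names, the `attn` tensor layout, and the `lam` coefficient are illustrative assumptions, not details taken from the paper.

```python
import torch

def attention_entropy(attn: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Mean Shannon entropy of each agent's attention over its neighbors.

    attn: [num_agents, num_agents] row-stochastic matrix, where attn[i, j]
    is how much agent i attends to agent j's message.
    """
    entropy = -(attn * (attn + eps).log()).sum(dim=-1)  # [num_agents]
    return entropy.mean()

def total_loss(policy_loss: torch.Tensor,
               attn: torch.Tensor,
               lam: float = 0.1) -> torch.Tensor:
    # Penalizing attention entropy pushes each agent to concentrate on a few
    # communication links, so post-hoc edge-importance explanations (e.g.,
    # GNNExplainer-style masks) have crisper structure to recover.
    return policy_loss + lam * attention_entropy(attn)
```

The intuition is that low-entropy (peaky) attention concentrates each agent's influence on a few edges of the communication graph, which is exactly the sparse structure that graph-based explainers try to surface.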