How can LLMs improve multi-agent communication efficiency?
PAGNet: Pluggable Adaptive Generative Networks for Information Completion in Multi-Agent Communication
This paper introduces PAGNet, a new framework for improving communication and coordination in multi-agent reinforcement learning (MARL) systems. It uses a generative model to reconstruct a shared estimate of the global state from each agent's limited, local view, addressing the core MARL challenge of partial observability. A novel "information-level weight network" learns to prioritize important information in agent communications, making the process more efficient. The pluggable design allows easy integration with existing MARL algorithms, and the use of generative models decouples communication learning from the main reward-driven learning process, further improving training efficiency.
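The two mechanisms above can be sketched together: a weight network that scores each feature of each incoming message (rather than weighting whole agents), and a generative decoder that maps the fused messages into an estimate of the unobserved part of the global state. This is a minimal illustrative sketch, not the paper's implementation; all names, shapes, and the linear weight/decoder matrices are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not from the paper).
N_AGENTS, MSG_DIM, OBS_DIM, STATE_DIM = 3, 4, 2, 6

def weight_net(messages, W):
    """Score each feature of each message; softmax over agents per feature,
    so weights for one information channel sum to 1 across senders."""
    logits = messages @ W                          # (N_AGENTS, MSG_DIM)
    exp = np.exp(logits - logits.max(axis=0))
    return exp / exp.sum(axis=0)                   # column-wise normalization

def complete_state(local_obs, messages, W, G):
    """Fuse feature-weighted messages, then generatively decode the
    missing portion of the global state and append it to the local view."""
    weights = weight_net(messages, W)
    fused = (weights * messages).sum(axis=0)       # (MSG_DIM,)
    generated = fused @ G                          # (STATE_DIM - OBS_DIM,)
    return np.concatenate([local_obs, generated])  # estimated global state

# Toy rollout: one agent fuses messages from its peers.
local_obs = rng.normal(size=OBS_DIM)
messages = rng.normal(size=(N_AGENTS, MSG_DIM))
W = rng.normal(size=(MSG_DIM, MSG_DIM))            # weight-network params
G = rng.normal(size=(MSG_DIM, STATE_DIM - OBS_DIM))  # generative decoder

est = complete_state(local_obs, messages, W, G)
print(est.shape)  # (6,)
```

In a trained system, `W` and `G` would be learned networks; the point of the sketch is the shape of the computation: per-feature weighting lets the model keep the useful channels of one agent's message while discounting others, rather than accepting or rejecting messages wholesale.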
For LLM-based multi-agent systems, PAGNet offers a mechanism for agents with limited perspectives to build a coherent shared understanding of their environment through communication. The information-level weighting could be adapted to prioritize the most relevant information LLMs extract from complex data, and the pluggable design makes the framework readily adaptable to various LLM-based agent architectures. The ability to pre-train the communication and generative modules separately from reward-driven learning could significantly reduce the cost of training multi-agent LLM systems.