Can shared memory improve multi-agent pathfinding?
SRMT: SHARED MEMORY FOR MULTI-AGENT LIFE-LONG PATHFINDING
This paper introduces the Shared Recurrent Memory Transformer (SRMT), a novel architecture for improving coordination in multi-agent reinforcement learning (MARL). SRMT lets agents communicate indirectly: each agent writes its individual recurrent memory into a globally shared space and reads the memories of the other agents from it, which enables cooperation without explicit communication protocols. For LLM-based multi-agent systems, the key property is that this shared memory can store and retrieve complex, decision-relevant information, supporting more effective collaboration and problem-solving in decentralized environments. The paper demonstrates SRMT's effectiveness on multi-agent pathfinding tasks, showing improved performance and generalization over existing MARL baselines, especially in scenarios with sparse reward feedback. The shared recurrent memory is particularly relevant to LLM agents, which can use the same mechanism to manage long sequences of information and coordinate actions in multi-agent settings.
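The read/write mechanism can be illustrated with a minimal sketch: each agent keeps a personal recurrent memory vector, the stacked memories form the shared space, and at every decision step each agent attends over that pool before updating its own memory and producing action logits. The PyTorch code below is an illustrative sketch under these assumptions; the class name SharedMemoryCoordinator, the dimensions, and the GRU-based memory update are hypothetical and not the authors' exact implementation.

```python
# Minimal sketch of a shared-memory read/write step in the spirit of SRMT.
# Module names, dimensions, and the update rule are illustrative assumptions.
import torch
import torch.nn as nn

class SharedMemoryCoordinator(nn.Module):
    def __init__(self, n_agents: int, obs_dim: int = 16, d_model: int = 64, n_heads: int = 4):
        super().__init__()
        self.obs_encoder = nn.Linear(obs_dim, d_model)       # toy observation encoder
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.memory_update = nn.GRUCell(d_model, d_model)    # recurrent memory update
        self.policy_head = nn.Linear(d_model, 5)             # e.g. 5 discrete moves
        # Each agent keeps one personal memory vector; stacking them forms the shared pool.
        self.register_buffer("memory", torch.zeros(n_agents, d_model))

    def forward(self, observations: torch.Tensor) -> torch.Tensor:
        # observations: (n_agents, obs_dim)
        obs = self.obs_encoder(observations)                          # (n_agents, d_model)
        # "Write": the current personal memories together act as the shared memory pool.
        shared_pool = self.memory.unsqueeze(0)                        # (1, n_agents, d_model)
        # "Read": each agent cross-attends from its observation to all agents' memories.
        query = obs.unsqueeze(0)                                      # (1, n_agents, d_model)
        read, _ = self.cross_attn(query, shared_pool, shared_pool)    # (1, n_agents, d_model)
        # Recurrently update each agent's personal memory with what it just read.
        new_memory = self.memory_update(read.squeeze(0), self.memory) # (n_agents, d_model)
        self.memory = new_memory.detach()   # carry memory across steps (no BPTT in this sketch)
        return self.policy_head(new_memory)                           # per-agent action logits


coordinator = SharedMemoryCoordinator(n_agents=3)
logits = coordinator(torch.randn(3, 16))   # one decision step for 3 agents
print(logits.shape)                        # torch.Size([3, 5])
```

Because coordination happens only through reads and writes to the shared pool, no agent needs a hand-designed message protocol; each agent's policy simply conditions on what the others have chosen to remember.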