Can suggestion sharing improve MARL collective welfare?
Achieving Collective Welfare in Multi-Agent Reinforcement Learning via Suggestion Sharing
December 18, 2024
https://arxiv.org/pdf/2412.12326

This paper introduces Suggestion Sharing (SS), a novel multi-agent reinforcement learning (MARL) method for achieving collective welfare even when individual agents' goals conflict. Instead of sharing sensitive information such as rewards, values, or policies, agents share action suggestions with each other, which lets them learn cooperative behavior while preserving privacy.
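To make the information flow concrete, here is a minimal toy sketch of the idea, not the paper's actual algorithm: two agents with individual rewards in an assumed prisoner's-dilemma payoff matrix exchange only action suggestions, and each agent folds the suggestion it receives into its own policy update via an imitation term. The payoffs, the `SUGGEST_WEIGHT` coefficient, and the update rules are illustrative assumptions; the point is that only suggestions cross agent boundaries.

```python
# Toy sketch of suggestion sharing (illustrative, not the paper's algorithm):
# agents never exchange rewards, values, or policies -- only action suggestions.
import numpy as np

rng = np.random.default_rng(0)
N_ACTIONS = 2                     # 0 = cooperate, 1 = defect
LR, SUGGEST_WEIGHT = 0.1, 0.5     # learning rate and imitation strength (assumed)

def softmax(logits):
    z = np.exp(logits - logits.max())
    return z / z.sum()

# Individual prisoner's-dilemma payoffs (rows = own action, cols = partner's action).
PAYOFF = np.array([[3.0, 0.0],
                   [5.0, 1.0]])

# Each agent keeps logits for its own policy and for the action it would like
# its partner to take (its "suggestion" distribution).
policy_logits = {i: np.zeros(N_ACTIONS) for i in (0, 1)}
suggest_logits = {i: np.zeros(N_ACTIONS) for i in (0, 1)}

for step in range(5000):
    probs = {i: softmax(policy_logits[i]) for i in (0, 1)}
    acts = {i: rng.choice(N_ACTIONS, p=probs[i]) for i in (0, 1)}
    # Only suggestions are shared: each agent samples what it wants its partner to do.
    sugg = {i: rng.choice(N_ACTIONS, p=softmax(suggest_logits[i])) for i in (0, 1)}

    for i, j in ((0, 1), (1, 0)):
        r = PAYOFF[acts[i], acts[j]]      # agent i's individual reward
        # REINFORCE-style update on the agent's own reward (no reward sharing).
        grad = -probs[i]
        grad[acts[i]] += 1.0
        policy_logits[i] += LR * r * grad
        # Imitation term: nudge the policy toward the suggestion received from j.
        imit = -probs[i]
        imit[sugg[j]] += 1.0
        policy_logits[i] += LR * SUGGEST_WEIGHT * imit
        # Reinforce i's own suggestion when the partner followed it and it paid off.
        if acts[j] == sugg[i]:
            sgrad = -softmax(suggest_logits[i])
            sgrad[sugg[i]] += 1.0
            suggest_logits[i] += LR * r * sgrad

print({i: softmax(policy_logits[i]).round(3) for i in (0, 1)})
```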
Key points for LLM-based multi-agent systems:
- Reduced information sharing: SS addresses privacy concerns by limiting communication to action suggestions, which could be crucial when integrating LLMs that might generate sensitive outputs.
- Cooperative behavior with individual rewards: SS enables agents with individual reward functions to learn collaborative strategies, aligning with the decentralized nature of many LLM-based multi-agent applications.
- Potential for LLM integration: The suggestion-sharing mechanism could itself be implemented with LLMs, where agents generate and interpret natural-language suggestions (a rough interface sketch follows this list).
- Scalability challenges and solutions: The paper acknowledges scalability limitations and suggests using techniques like sparse network topologies and reduced communication frequencies, which are relevant for LLM-based systems that can be computationally expensive.
- Trust and deception: The authors highlight the need for future work to address potential issues of trust and deception in suggestion sharing, which is a significant concern when deploying LLMs in multi-agent settings.
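As a hypothetical illustration of the LLM-integration point above (not something described in the paper), the sketch below shows a minimal message-passing interface in which agents exchange only natural-language suggestions. The `Suggestion` and `SuggestionAgent` names are invented for this example, and the LLM calls are stubbed out with trivial lambdas so the protocol is runnable.

```python
# Hypothetical interface sketch: LLM-backed agents that exchange only
# natural-language action suggestions, never rewards, values, or policies.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Suggestion:
    sender: str      # agent proposing the action
    receiver: str    # agent the suggestion is addressed to
    action: str      # suggested action, expressed in natural language

class SuggestionAgent:
    def __init__(self, name: str,
                 propose: Callable[[str], str],
                 decide: Callable[[str, list], str]):
        self.name = name
        self.propose = propose   # e.g., an LLM prompt: "what should <other> do?"
        self.decide = decide     # e.g., an LLM prompt combining own goal + inbox
        self.inbox: list[Suggestion] = []

    def suggest_to(self, other: "SuggestionAgent") -> None:
        # Only the suggestion text leaves the agent; its reward model stays private.
        other.inbox.append(Suggestion(self.name, other.name, self.propose(other.name)))

    def act(self) -> str:
        action = self.decide(self.name, self.inbox)
        self.inbox.clear()
        return action

# Toy stand-ins for the LLM calls, just to make the exchange runnable.
alice = SuggestionAgent("alice",
                        lambda other: f"{other}, please yield the shared resource",
                        lambda me, inbox: inbox[0].action if inbox else "keep the resource")
bob = SuggestionAgent("bob",
                      lambda other: f"{other}, please yield the shared resource",
                      lambda me, inbox: inbox[0].action if inbox else "keep the resource")

alice.suggest_to(bob)
bob.suggest_to(alice)
print(alice.act(), "|", bob.act())
```

In a real deployment the `decide` step would also need the trust and deception safeguards noted above, since a receiving agent has no guarantee that a suggestion serves its own interests.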