How can LLMs implicitly repair noisy communication?
Implicit Repair with Reinforcement Learning in Emergent Communication
February 19, 2025
https://arxiv.org/pdf/2502.12624

This paper investigates implicit repair mechanisms in emergent communication (EC) within multi-agent reinforcement learning (MARL). It focuses on how agents learn to communicate effectively in noisy environments, specifically within a modified Lewis Game where a Speaker describes an image and a Listener must identify it from a set of candidates. The agents learn robust communication protocols by incorporating redundancy to counteract the noise, mimicking implicit conversational repair in human language.
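The core mechanism can be illustrated with a minimal sketch, assuming a discrete symbol vocabulary: a channel that randomly corrupts symbols, and a repetition-based redundancy scheme the Listener can decode by majority vote. All names and parameters here (`VOCAB_SIZE`, `MSG_LEN`, `NOISE_P`, the repeat factor `k`) are illustrative assumptions, not the paper's learned protocol — in the paper, redundancy emerges from training rather than being hand-coded.

```python
import random

VOCAB_SIZE = 10   # hypothetical symbol vocabulary size
NOISE_P = 0.3     # per-symbol corruption probability (assumed)

def noisy_channel(message, p=NOISE_P, vocab=VOCAB_SIZE, rng=random):
    """Replace each symbol with a uniformly random one with probability p."""
    return [rng.randrange(vocab) if rng.random() < p else s for s in message]

def redundant_encode(symbols, k=3):
    """Hand-coded redundancy: repeat each informative symbol k times."""
    return [s for s in symbols for _ in range(k)]

def majority_decode(message, k=3):
    """Recover each intended symbol by majority vote over its k copies."""
    decoded = []
    for i in range(0, len(message), k):
        chunk = message[i:i + k]
        decoded.append(max(set(chunk), key=chunk.count))
    return decoded
```

With enough repetition relative to the noise rate, the Listener recovers the intended message without any explicit "please repeat" signal — the repair is implicit in the protocol itself, which is the behavior the paper observes emerging from noise-aware training.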
Key points for LLM-based multi-agent systems:
- Redundancy as Robustness: Training with noisy communication channels encourages agents to embed redundancy in their messages, improving robustness at inference time, including against noise types not seen during training.
- RL for Emergent Communication: Reinforcement learning agents outperform supervised learning approaches, especially in complex environments with noisy communication.
- Scaling Difficulty Improves Generalization: Increasing the number of candidate images, and hence the task's difficulty, boosts the generalization capacity of the learned communication protocol.
- Implicit Repair without Explicit Feedback: Implicit conversational repair emerges naturally in a reinforcement learning setting even without explicit feedback mechanisms about message corruption.
- Transfer Learning Potential: The emergent languages, while not optimized for secondary tasks like classification, demonstrate a degree of transferability and adaptation to new scenarios.
- Limitations of Discrete Communication for Reconstruction: Reconstructing images from messages derived from discrete communication remains a challenging task, highlighting limitations in encoding fine-grained visual details.
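The RL setup behind these points can be sketched with a toy tabular version of the noisy Lewis Game: a Speaker policy maps a target to a symbol, the channel may flip the symbol, a Listener policy maps the received symbol to a guess, and both agents are updated with REINFORCE on the shared success reward. Everything here (3 targets, vocabulary of 3, learning rate, noise level, tabular logits) is an illustrative assumption — the paper uses image inputs and neural agents, not lookup tables.

```python
import math
import random

N_TARGETS, VOCAB = 3, 3  # toy sizes (assumed)
LR = 0.5                 # learning rate (assumed)

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

def sample(probs, rng):
    r, acc = rng.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

def train(steps=3000, noise_p=0.1, seed=0):
    rng = random.Random(seed)
    spk = [[0.0] * VOCAB for _ in range(N_TARGETS)]   # Speaker logits
    lst = [[0.0] * N_TARGETS for _ in range(VOCAB)]   # Listener logits
    for _ in range(steps):
        t = rng.randrange(N_TARGETS)                  # sample a target
        sp = softmax(spk[t]); s = sample(sp, rng)     # Speaker emits symbol
        # Noisy channel: symbol replaced uniformly with probability noise_p.
        s_recv = rng.randrange(VOCAB) if rng.random() < noise_p else s
        lp = softmax(lst[s_recv]); g = sample(lp, rng)  # Listener guesses
        r = 1.0 if g == t else 0.0                      # shared reward
        # REINFORCE: each agent ascends r * grad log pi(own action).
        for k in range(VOCAB):
            spk[t][k] += LR * r * ((1.0 if k == s else 0.0) - sp[k])
        for k in range(N_TARGETS):
            lst[s_recv][k] += LR * r * ((1.0 if k == g else 0.0) - lp[k])
    return spk, lst

def accuracy(spk, lst):
    """Greedy round-trip accuracy over all targets, with a clean channel."""
    correct = 0
    for t in range(N_TARGETS):
        s = max(range(VOCAB), key=lambda k: spk[t][k])
        g = max(range(N_TARGETS), key=lambda k: lst[s][k])
        correct += int(g == t)
    return correct / N_TARGETS
```

Because no update occurs on failed episodes and only the agents' own sampled actions are reinforced, successful target-symbol-guess chains are gradually locked in; training under channel noise is what pushes the protocol toward redundancy in the paper's richer, multi-symbol setting.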