Can shared models create roles in limited-communication robot teams?
Emergence of Roles in Robotic Teams with Model Sharing and Limited Communication
This paper proposes an approach to multi-agent resource foraging in which a single "leader" agent learns via Deep Q-Learning (DQL) and periodically shares its learned model with "ally" agents. Each ally adopts the shared model with slight variations, yielding a behaviorally diverse team without continuous per-agent learning or explicit communication. A shaped reward encourages role differentiation (e.g., explorer, disruptor) based on each agent's proximity to adversaries and allies. Experiments against traditional Multi-Agent Reinforcement Learning (MARL) and centralized DQL baselines show competitive performance at lower computational cost. The combination of model sharing with evolutionary-style variation and implicit role emergence through the reward function makes the approach attractive for resource-constrained LLM-based multi-agent systems, where communication and computation are often bottlenecks.
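
A minimal sketch of the share-with-variation mechanism, assuming a DQN-style setup: the leader's network weights are cloned for each ally and perturbed with small Gaussian noise. The `QNetwork` architecture, the `share_with_variation` name, and the `noise_std` hyperparameter are illustrative assumptions, not the paper's exact design.

```python
import copy

import torch
import torch.nn as nn


class QNetwork(nn.Module):
    """Small MLP Q-network (assumed architecture, not the paper's)."""

    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)


def share_with_variation(leader: QNetwork, n_allies: int,
                         noise_std: float = 0.01) -> list[QNetwork]:
    """Clone the leader's learned model for each ally, perturbing the copied
    weights with Gaussian noise so allies behave slightly differently.
    The noise scale is a guessed hyperparameter; the paper's variation
    scheme may differ."""
    allies = []
    for _ in range(n_allies):
        ally = copy.deepcopy(leader)
        with torch.no_grad():
            for p in ally.parameters():
                p.add_(torch.randn_like(p) * noise_std)
        allies.append(ally)
    return allies
```

In this reading, only the leader continues to train; calling `share_with_variation` periodically (e.g., every few episodes) refreshes the allies, which matches the paper's claim of avoiding continuous individual learning.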
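
The proximity-based reward shaping could be encoded along the following lines: per-agent weights pull an agent toward adversaries (disruptor-like) or away from allies (explorer-like), letting roles emerge without explicit assignment. The function name, weights `w_adv`/`w_ally`, and linear form are hypothetical; the paper's exact reward is not specified here.

```python
import numpy as np


def shaped_reward(base_reward: float,
                  agent_pos: np.ndarray,
                  adversary_pos: np.ndarray,
                  ally_pos: np.ndarray,
                  w_adv: float,
                  w_ally: float) -> float:
    """Add proximity terms to the task reward. A negative w_adv rewards
    closing distance to the nearest adversary (disruptor-like behavior);
    a positive w_ally rewards spreading out from the nearest ally
    (explorer-like behavior). Illustrative assumption, not the paper's
    exact formulation."""
    d_adv = np.linalg.norm(adversary_pos - agent_pos, axis=-1).min()
    d_ally = np.linalg.norm(ally_pos - agent_pos, axis=-1).min()
    return base_reward + w_adv * d_adv + w_ally * d_ally
```

Sampling slightly different `(w_adv, w_ally)` per ally at share time would be one way to couple the reward shaping with the model-variation mechanism described above.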