How can malicious agents be detected in a multi-robot network?
Distributed Detection of Adversarial Attacks for Resilient Cooperation of Multi-Robot Systems with Intermittent Communication
This paper investigates how to make a group of robots cooperate reliably even when some robots are compromised (adversarial) and communication is unreliable (intermittent). It focuses on detecting these compromised robots so that the remaining, benign robots can still achieve their objective, such as maintaining a formation, despite the disruptions.
Its relevance to LLM-based multi-agent systems lies in its emphasis on using only the local information available to each agent (whether a robot or an LLM agent) for attack detection. The paper proposes a method for detecting malicious agents even when communication is intermittent, a condition common in real-world deployments. This localized approach could inform the design of more robust and secure LLM-based multi-agent systems.
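To make the localized-detection idea concrete, below is a minimal illustrative sketch, not the paper's actual algorithm: each robot keeps the last report it received from every neighbor and flags neighbors whose reported motion is physically implausible given the elapsed time, so long communication gaps by themselves never raise suspicion. All names and thresholds here (MAX_SPEED, SUSPICION_LIMIT, LocalDetector) are assumptions chosen for illustration.

```python
import math
from dataclasses import dataclass

MAX_SPEED = 1.0        # assumed physical speed bound (m/s); hypothetical value
SUSPICION_LIMIT = 3    # violations tolerated before a neighbor is flagged

@dataclass
class NeighborRecord:
    position: tuple        # last reported (x, y)
    timestamp: float       # time of that report
    suspicion: int = 0     # count of physically implausible updates

class LocalDetector:
    """Per-robot detector: flags neighbors whose reports violate a motion bound."""

    def __init__(self):
        self.records = {}          # neighbor id -> NeighborRecord
        self.blacklist = set()     # neighbors excluded from cooperation

    def receive(self, nbr_id, position, timestamp):
        rec = self.records.get(nbr_id)
        if rec is None:
            self.records[nbr_id] = NeighborRecord(position, timestamp)
            return
        dt = timestamp - rec.timestamp
        dist = math.dist(position, rec.position)
        # Intermittent links mean dt can be large; the bound scales with dt,
        # so a long silence alone never triggers suspicion, only an
        # implausible jump in the reported state does.
        if dt > 0 and dist > MAX_SPEED * dt:
            rec.suspicion += 1
            if rec.suspicion >= SUSPICION_LIMIT:
                self.blacklist.add(nbr_id)
        else:
            rec.suspicion = max(0, rec.suspicion - 1)
        rec.position, rec.timestamp = position, timestamp

    def trusted_neighbors(self):
        """Neighbors still admitted into the cooperative (e.g. formation) update."""
        return [n for n in self.records if n not in self.blacklist]
```

In use, each robot would run its own LocalDetector, feed it every message it receives, and compute its formation or consensus update only over trusted_neighbors(), so a flagged agent stops influencing the group without any global coordination.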