How resilient are multi-agent optimizations to attacks?
Optimization under Attack: Resilience, Vulnerability, and the Path to Collapse
This paper investigates how resilient multi-agent optimization systems remain when some agents act adversarially, prioritizing their own goals over the system's. Using simulations on energy, privacy, and voting datasets, the research explores how factors such as the number of adversarial agents, their degree of selfishness, and their position in the communication network affect overall system performance.
For LLM-based multi-agent systems, this research highlights the importance of accounting for adversarial agents. Even a small number of selfish agents can significantly degrade system efficiency, especially if they are highly selfish or strategically located within the communication network. This underscores the need for robust mechanisms to detect and mitigate adversarial agents in real-world deployments. The open-source release of the I-EPOS collective learning framework gives researchers and developers a valuable tool for experimenting with these concepts and building more resilient multi-agent systems.
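A toy sketch of the kind of experiment described above: agents cooperatively pick plans to flatten an aggregate signal (as in energy load balancing), while a few "selfish" agents weight their private cost instead. This is illustrative code only, not the I-EPOS implementation; the greedy sweep, the selfishness weight, and all parameters are assumptions for demonstration.

```python
import random

random.seed(0)

N_AGENTS, N_PLANS, PLAN_LEN = 20, 4, 8

def variance(vec):
    # Collective cost: variance of the aggregate plan (lower = flatter load).
    m = sum(vec) / len(vec)
    return sum((x - m) ** 2 for x in vec) / len(vec)

# Each agent holds several candidate plans (e.g. consumption schedules)
# plus a private "local cost" expressing its preference for each plan.
agents = []
for _ in range(N_AGENTS):
    plans = [[random.uniform(0, 1) for _ in range(PLAN_LEN)]
             for _ in range(N_PLANS)]
    local_cost = [random.uniform(0, 1) for _ in range(N_PLANS)]
    agents.append((plans, local_cost))

def run(selfish_ids, selfishness):
    """Greedy sweeps: each agent in turn picks the plan minimizing a mix
    of the collective cost and its own local cost, weighted by how
    selfish it is (0 = fully cooperative, 1 = fully selfish)."""
    choice = [0] * N_AGENTS
    total = [sum(agents[i][0][choice[i]][t] for i in range(N_AGENTS))
             for t in range(PLAN_LEN)]
    for _ in range(5):  # a few coordination sweeps
        for i, (plans, local) in enumerate(agents):
            for t in range(PLAN_LEN):            # remove own contribution
                total[t] -= plans[choice[i]][t]
            lam = selfishness if i in selfish_ids else 0.0
            choice[i] = min(
                range(N_PLANS),
                key=lambda p: (1 - lam) * variance(
                    [total[t] + plans[p][t] for t in range(PLAN_LEN)]
                ) + lam * local[p],
            )
            for t in range(PLAN_LEN):            # add new contribution back
                total[t] += plans[choice[i]][t]
    return variance(total)

honest = run(set(), 0.0)            # everyone cooperates
attacked = run(set(range(3)), 1.0)  # 3 fully selfish agents
print("collective cost, honest:", honest)
print("collective cost, attacked:", attacked)
```

Varying the size of `selfish_ids` and the `selfishness` weight mirrors the paper's experimental axes; in a networked version, which agents are selfish (their position) would matter as well.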