How can uncoordinated AI agents coexist harmoniously?
Position: Emergent Machina Sapiens Urge Rethinking Multi-Agent Paradigms
February 10, 2025
https://arxiv.org/pdf/2502.04388

This paper argues that current multi-agent AI frameworks (such as game theory and multi-agent reinforcement learning) are insufficient for the coming wave of independently designed and deployed AI agents ("machina sapiens"). It proposes a new framework focusing on dynamic "norms" (rules governing interactions) and "protocols" that allow agents to adapt their goals and cooperate based on feedback, relationships, and societal impacts.
Key points for LLM-based multi-agent systems:
- Dynamic Goals and Norms: LLMs can be leveraged to create agents capable of adapting their goals and learning norms in real-time, addressing the limitations of static objectives in current frameworks.
- Emergent Cooperation: The paper emphasizes emergent cooperation, which aligns with the potential of LLMs to negotiate, compromise, and form coalitions without explicit pre-programming.
- Context Awareness: The proposed framework highlights context awareness, which is a strength of LLMs, allowing agents to adapt to changing environments and societal expectations.
- Social Feedback and Relationships: LLMs can be used to model and manage complex social interactions and relationships between agents, including trust, reputation, and influence.
- Ethical Considerations: The paper raises ethical questions related to free will, accountability, and human-agent collaboration, all relevant to the development of LLM-based multi-agent systems.
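The norm-and-feedback loop described above can be sketched in miniature. The code below is a toy illustration, not the paper's method: all names (`Agent`, `run_rounds`, `cooperativeness`) are hypothetical, and an LLM's negotiation behavior is stood in for by a simple cooperation probability. It shows the core dynamic the bullets describe: agents choose actions partly from their own disposition and partly from trust in their partner, and pairwise trust (social feedback) adapts with each interaction.

```python
import random
from dataclasses import dataclass, field

# Toy sketch (hypothetical names) of the paper's norm/feedback dynamic:
# agents interact in pairs, and trust scores adapt to observed behavior.

@dataclass
class Agent:
    name: str
    cooperativeness: float  # the agent's own propensity to follow the norm
    trust: dict = field(default_factory=dict)  # trust in other agents, in [0, 1]

    def act(self, partner: "Agent") -> str:
        # Trust in the partner nudges the agent toward cooperation:
        # a stand-in for how an LLM agent might weigh relationship history.
        bias = self.trust.get(partner.name, 0.5)
        p_coop = 0.5 * self.cooperativeness + 0.5 * bias
        return "cooperate" if random.random() < p_coop else "defect"

    def observe(self, partner: "Agent", action: str) -> None:
        # Social feedback: trust rises after cooperation, falls after defection.
        prev = self.trust.get(partner.name, 0.5)
        delta = 0.1 if action == "cooperate" else -0.2
        self.trust[partner.name] = min(1.0, max(0.0, prev + delta))


def run_rounds(agents: list, rounds: int = 200, seed: int = 0) -> dict:
    """Pair agents at random for repeated interactions; return final trust maps."""
    random.seed(seed)
    for _ in range(rounds):
        a, b = random.sample(agents, 2)
        act_a, act_b = a.act(b), b.act(a)
        a.observe(b, act_b)
        b.observe(a, act_a)
    return {a.name: dict(a.trust) for a in agents}
```

In this sketch, cooperation becomes self-reinforcing: agents who follow the norm accumulate trust, which raises their partners' cooperation probability in turn, a (very rough) analogue of the emergent, uncoordinated cooperation the paper argues for.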