How can agents reach partial agreements reliably?
Disagree and Commit: Degrees of Argumentation-based Agreements
January 7, 2025
https://arxiv.org/pdf/2501.01992
This paper introduces the concept of "Disagree and Commit" within multi-agent AI systems using formal argumentation. It proposes a framework for measuring degrees of agreement among agents that hold differing preferences or use different reasoning methods (argumentation semantics), even when full consensus is unachievable. It also explores how these degrees of agreement change as new information is introduced, particularly in value-based argumentation, where agents' subjective values influence their reasoning.
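To make the core idea concrete, here is a minimal, self-contained sketch; it is not the paper's implementation. The toy arguments and values, the choice of grounded semantics, and the use of Jaccard overlap as the agreement measure are all illustrative assumptions rather than details taken from the paper.

```python
# Sketch: two agents share an argumentation framework but rank values
# differently, so different attacks succeed for each of them, and their
# accepted argument sets only partially overlap.

ARGS = {"a", "b", "c"}
ATTACKS = {("a", "b"), ("b", "c")}          # a attacks b, b attacks c
VALUE_OF = {"a": "safety", "b": "profit", "c": "safety"}

def successful_attacks(attacks, value_of, rank):
    """Value-based argumentation: an attack x -> y fails when y's value
    strictly outranks x's (lower rank number = more preferred value)."""
    return {(x, y) for (x, y) in attacks
            if rank[value_of[x]] <= rank[value_of[y]]}

def grounded_extension(args, attacks):
    """Grounded semantics: iterate the characteristic function (accept an
    argument iff every attacker is counter-attacked) to a fixpoint."""
    attackers = {a: {x for (x, y) in attacks if y == a} for a in args}
    ext = set()
    while True:
        defended = {a for a in args
                    if all(any((d, b) in attacks for d in ext)
                           for b in attackers[a])}
        if defended == ext:
            return ext
        ext = defended

def agreement_degree(ext1, ext2):
    """One simple degree of agreement: Jaccard overlap of accepted sets."""
    return 1.0 if not (ext1 | ext2) else len(ext1 & ext2) / len(ext1 | ext2)

# Agent 1 prefers safety over profit; agent 2 prefers the reverse.
RANK1 = {"safety": 0, "profit": 1}
RANK2 = {"profit": 0, "safety": 1}

ext1 = grounded_extension(ARGS, successful_attacks(ATTACKS, VALUE_OF, RANK1))
ext2 = grounded_extension(ARGS, successful_attacks(ATTACKS, VALUE_OF, RANK2))
print(sorted(ext1), sorted(ext2), round(agreement_degree(ext1, ext2), 2))
# -> ['a', 'c'] ['a', 'b'] 0.33: both accept a, so there is a
#    quantifiable partial agreement despite the value conflict.
```

A graded score like this is what lets a system act on the points agents do share ("disagree and commit") instead of treating anything short of full consensus as failure.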
Key points for LLM-based multi-agent systems:
- Partial Agreements: The framework acknowledges that LLM agents, like humans, may not always reach full consensus, and it offers a way to quantify and work with partial agreements.
- Value Alignment: The emphasis on value-based argumentation is crucial for LLM agents, since aligning their values with those of human stakeholders is paramount. This research provides tools to model and measure that alignment.
- Dynamic Knowledge: The research investigates how agreements change with new information, a key concern for LLM-based systems that continually learn and update their knowledge. The framework offers ways to assess the reliability and stability of agreements in these dynamic settings (see the sketch after this list).
- Simulating Disagreements: The paper provides a software implementation that can be used to simulate disagreements between LLM agents, facilitating the study of value alignment, negotiation strategies, and the impact of new information on established agreements.
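As a rough illustration of the dynamic-knowledge point, the sketch below continues the example above (same illustrative assumptions: the new argument d, its value, and the attack it introduces are invented for this toy scenario). A single new piece of information changes what each agent accepts, and the recomputed score shows how stable the earlier partial agreement was.

```python
# New information arrives: argument d (valued "safety") attacks a.
# Recompute each agent's grounded extension and the agreement degree.

args2 = ARGS | {"d"}
attacks2 = ATTACKS | {("d", "a")}
value_of2 = {**VALUE_OF, "d": "safety"}

new1 = grounded_extension(args2, successful_attacks(attacks2, value_of2, RANK1))
new2 = grounded_extension(args2, successful_attacks(attacks2, value_of2, RANK2))
print(sorted(new1), sorted(new2), round(agreement_degree(new1, new2), 2))
# -> ['b', 'c', 'd'] ['b', 'd'] 0.67: new information can raise or lower
#    the degree of agreement; in this toy case it happens to raise it.
```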