Can LLMs debate for efficient legal prediction?
Debate-Feedback: A Multi-Agent Framework for Efficient Legal Judgment Prediction
This paper proposes "Debate-Feedback," a multi-agent framework for more efficient legal judgment prediction. Inspired by courtroom debate, the framework has multiple LLM agents argue opposing sides of a legal case, while a "judge" LLM synthesizes their arguments to predict the outcome. A reliability model scores the agents' arguments, and those scores refine the judge's decision. Key points for LLM-based multi-agent systems: a novel multi-agent debate structure for LLM-based judgment prediction that minimizes the need for large training datasets; a reliability evaluation component that improves the robustness of debate outcomes; a smoothing mechanism that stabilizes predictions across multiple rounds of debate; and the potential to extend the approach to fields beyond legal judgment prediction. A sketch of how these pieces might fit together is shown below.
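To make the debate-judge-feedback loop concrete, here is a minimal Python sketch under stated assumptions: `call_llm`, `score_reliability`, the binary label set, and the exponential-smoothing factor are illustrative stand-ins, not the paper's actual prompts, reliability model, or update rule.

```python
# Minimal sketch of a Debate-Feedback style loop (hypothetical names throughout).
# call_llm and score_reliability are stand-ins, not the paper's implementation.

LABELS = ["guilty", "not_guilty"]    # assumed binary outcome for the sketch
SIDES = ["prosecution", "defense"]   # debater roles, one LLM agent per side
ALPHA = 0.5                          # assumed exponential-smoothing factor

def call_llm(prompt: str) -> str:
    """Stand-in for an LLM call; replace with a real client."""
    return f"[model response to: {prompt[:40]}...]"

def score_reliability(argument: str) -> float:
    """Stand-in for the reliability model; here a trivial length heuristic."""
    return min(1.0, len(argument) / 200.0)

def judge(case: str, weighted_args: list[tuple[str, float]]) -> dict[str, float]:
    """Judge LLM turns (argument, reliability) pairs into label probabilities.
    A real system would parse the judge's response; we return a placeholder."""
    _ = call_llm(f"Case: {case}\nArguments: {weighted_args}\nReturn probabilities.")
    return {label: 1.0 / len(LABELS) for label in LABELS}  # placeholder distribution

def debate_feedback(case: str, rounds: int = 3) -> dict[str, float]:
    smoothed = {label: 1.0 / len(LABELS) for label in LABELS}
    for _ in range(rounds):
        # Each debater agent argues its side of the case.
        args = [call_llm(f"As the {side}, argue the case: {case}") for side in SIDES]
        # Reliability scores weight how much the judge should trust each argument.
        weighted = [(a, score_reliability(a)) for a in args]
        current = judge(case, weighted)
        # Smooth across rounds so one noisy round cannot flip the prediction.
        smoothed = {l: ALPHA * current[l] + (1 - ALPHA) * smoothed[l] for l in LABELS}
    return smoothed

if __name__ == "__main__":
    print(debate_feedback("Defendant accused of breach of contract."))
```

The exponential moving average over per-label probabilities is one plausible reading of the "smoothing mechanism"; the summary does not specify the actual update rule.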