How can LLM agents be made fair using utilitarian optimization?
Reducing Leximin Fairness to Utilitarian Optimization
September 17, 2024
https://arxiv.org/pdf/2409.10395

This research paper introduces a method for achieving fairness in the allocation of resources among AI agents. It focuses on a specific type of fairness called "leximin fairness," which prioritizes maximizing the well-being of the worst-off agent, then the second worst-off, and so on.
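To make the leximin ordering concrete, here is a minimal illustrative helper (not code from the paper): leximin compares two utility vectors by sorting each in ascending order and comparing them lexicographically, so the outcome with the better-off worst agent wins, with ties broken by the second worst-off, and so on.

```python
def leximin_prefers(u, v):
    """Return True if utility vector u is strictly leximin-preferred to v.

    Sort ascending and compare lexicographically: a higher worst-off
    utility wins; ties move to the second worst-off, and so on.
    """
    return sorted(u) > sorted(v)

# Example: (2, 5) is leximin-preferred to (1, 9) because its minimum is higher,
# even though its total (utilitarian) welfare is lower.
assert leximin_prefers([2, 5], [1, 9])
```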
The key finding is that leximin fairness can be achieved by repeatedly calling a solver for "utilitarian welfare," the objective that simply maximizes the sum of well-being across all agents, as a black box. This is particularly relevant to LLM-based multi-agent systems because it suggests that existing techniques for optimizing utilitarian welfare in these systems can be readily adapted to achieve leximin fairness, promoting more equitable outcomes.
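To show how a utilitarian optimizer can be pressed into service for leximin, here is a small self-contained sketch. It does not reproduce the paper's reduction; instead it uses a cruder, textbook-style trick that illustrates the same connection: for nonnegative integer utilities, exponentially reweighting each agent's utility (with a base larger than the number of agents) makes a single utilitarian maximization return a leximin-optimal outcome. The toy instance, the brute-force "oracle," and all names below are illustrative assumptions.

```python
from fractions import Fraction
from itertools import product

def utilitarian_oracle(candidates, utility_fns):
    """Black-box utilitarian solver: pick the candidate maximizing total utility.
    (Brute force here; in practice this is whatever utilitarian optimizer
    the system already has.)"""
    return max(candidates, key=lambda c: sum(u(c) for u in utility_fns))

def leximin_via_utilitarian(candidates, utility_fns):
    """Illustrative reduction (NOT the paper's algorithm): for nonnegative
    integer utilities, replace each utility u with -(base)**(-u), where
    base exceeds the number of agents. Maximizing the sum of the transformed
    utilities is then equivalent to leximin-optimizing the originals."""
    base = len(utility_fns) + 1
    transformed = [
        (lambda c, u=u: -Fraction(1, base ** u(c)))  # exact rational arithmetic
        for u in utility_fns
    ]
    return utilitarian_oracle(candidates, transformed)

# Toy instance (hypothetical numbers): 3 indivisible items, 2 agents,
# integer item values; an allocation maps each item to its owner.
values = [
    {0: 3, 1: 1, 2: 1},  # agent 0's value for items 0, 1, 2
    {0: 1, 1: 2, 2: 2},  # agent 1's
]
allocations = list(product(range(2), repeat=3))
utility_fns = [
    (lambda alloc, i=i: sum(values[i][j] for j, owner in enumerate(alloc) if owner == i))
    for i in range(2)
]

best = leximin_via_utilitarian(allocations, utility_fns)
print(best, [u(best) for u in utility_fns])   # (0, 1, 1) with utilities [3, 4]
```

The catch with this single-call trick is that the reweighted values grow exponentially with the utility range, which is exactly why a reduction that only needs a bounded number of ordinary utilitarian calls, as the paper describes, is attractive in practice.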