How to ensure fair rewards in multi-agent systems?
Using Protected Attributes to Consider Fairness in Multi-Agent Systems
October 18, 2024
https://arxiv.org/pdf/2410.12889

This research paper proposes a way to evaluate fairness in multi-agent AI systems, especially when some agents may be disadvantaged because of specific attributes.
The key takeaway for LLM-based systems is the concept of "protected attributes": characteristics that should not affect the rewards an agent receives (for example, whether a vehicle in a shared traffic environment is driven by a human or by an AI). The paper adapts existing fairness metrics from the machine-learning fairness literature to measure, and potentially mitigate, bias against agents that carry these attributes. This is crucial for developing LLM-based agents that interact fairly in shared environments, since it gives a concrete way to check that certain agents aren't systematically penalized for inherent characteristics.
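As a rough illustration of the idea (not the paper's exact formulation), a demographic-parity-style check can be adapted to a multi-agent reward setting by comparing the mean cumulative reward of agents with and without the protected attribute. The function name `reward_parity_gap` and the traffic-scenario numbers below are assumptions for the sketch, not taken from the paper.

```python
import numpy as np

def reward_parity_gap(rewards, protected):
    """Demographic-parity-style gap between the mean episode rewards
    of agents with and without a protected attribute.

    rewards   : per-agent cumulative rewards, shape (n_agents,)
    protected : boolean mask, True where the agent carries the
                protected attribute, shape (n_agents,)

    Returns the absolute difference in group means; 0.0 means the
    attribute has no effect on average reward.
    """
    rewards = np.asarray(rewards, dtype=float)
    protected = np.asarray(protected, dtype=bool)
    return abs(rewards[protected].mean() - rewards[~protected].mean())

# Hypothetical mixed-traffic episode: agents 0-2 are human-driven
# (protected), agents 3-5 are AI-driven.
rewards = [8.0, 7.5, 9.0, 12.0, 11.5, 13.0]
protected = [True, True, True, False, False, False]
print(f"parity gap: {reward_parity_gap(rewards, protected):.2f}")  # 4.00
```

A gap near zero suggests the protected attribute has little influence on outcomes; a large gap flags potential bias worth investigating or mitigating, for instance by reshaping rewards or adjusting the shared environment.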