How can I assess ADM fairness as an individual?
Am I Being Treated Fairly? A Conceptual Framework for Individuals to Ascertain Fairness
April 4, 2025
https://arxiv.org/pdf/2504.02461

This paper proposes a framework called "ascertainable fairness" to help users understand and challenge decisions made by AI systems (specifically Algorithmic Decision-Making, or ADM, systems). It aims to give individuals more power in situations where AI impacts their lives, moving beyond technical fairness metrics to a more user-centric approach.
Key points for LLM-based multi-agent systems:
- Explainability and Contestability: The framework emphasizes the need for clear explanations of AI decisions and mechanisms for users to challenge those decisions (contestation). This directly relates to designing LLMs that can explain their reasoning and engage in dialogue to justify their actions. Multi-agent contestation dialogues could involve negotiation and argumentation between user agents and system agents (see the sketch after this list).
- Fairness of Recourse: The framework considers the fairness of the steps a user must take to change an AI's decision. This is relevant to LLM agents that provide recommendations or suggest actions: the recourse such agents offer should not impose an unreasonable or unequal burden on different users.
- Auditing: The framework includes mechanisms for external audits of AI systems, which can ensure accountability and transparency. This relates to developing methods for auditing the behavior of LLM agents, including their internal decision-making processes and their interactions with other agents.
- User-Centric Fairness: The framework emphasizes the importance of individual perceptions of fairness, recognizing that users may have different values and priorities than the developers of AI systems. This highlights the need for LLM agents to be adaptable and personalize their interactions to align with individual user preferences and values.
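To make the contestation, recourse, and auditing points concrete, here is a minimal sketch of how a user agent might challenge a system agent's decision while an audit trail is kept. The classes (`SystemAgent`, `UserAgent`, `Decision`, `AuditLog`), the loan example, and the message format are illustrative assumptions of mine, not constructs from the paper.

```python
# Hypothetical sketch: a user agent contests a decision made by a system agent,
# the system agent must return an explanation plus recourse options, and every
# exchange is appended to an audit log that an external auditor could inspect.

from dataclasses import dataclass, field
from typing import List


@dataclass
class Decision:
    outcome: str          # e.g. "loan_denied"
    explanation: str      # plain-language justification the system must provide
    recourse: List[str]   # concrete steps the user could take to change the outcome


@dataclass
class AuditRecord:
    actor: str            # "user_agent" or "system_agent"
    message: str


@dataclass
class AuditLog:
    records: List[AuditRecord] = field(default_factory=list)

    def append(self, actor: str, message: str) -> None:
        self.records.append(AuditRecord(actor, message))


class SystemAgent:
    """Stands in for the ADM side; in practice this would wrap an LLM."""

    def decide(self, applicant_income: float) -> Decision:
        if applicant_income >= 50_000:
            return Decision("loan_approved", "Income meets the threshold.", [])
        return Decision(
            "loan_denied",
            "Declared income is below the qualifying threshold of 50,000.",
            ["Provide proof of additional income", "Apply with a co-signer"],
        )

    def respond_to_contest(self, decision: Decision, objection: str) -> str:
        # A real system agent would re-evaluate with the objection as new
        # evidence; here we simply restate the basis and the available recourse.
        return (
            f"Objection noted ('{objection}'). Current basis: {decision.explanation} "
            f"Available recourse: {', '.join(decision.recourse) or 'none'}."
        )


class UserAgent:
    """Acts on behalf of the affected individual."""

    def contest(self, decision: Decision) -> str:
        return (
            f"I contest the outcome '{decision.outcome}'; "
            "please justify it and state my recourse options."
        )


def contestation_dialogue(income: float) -> AuditLog:
    log = AuditLog()
    system, user = SystemAgent(), UserAgent()

    decision = system.decide(income)
    log.append("system_agent", f"Decision: {decision.outcome} ({decision.explanation})")

    # The user agent only contests unfavorable outcomes in this toy example.
    if decision.outcome == "loan_denied":
        objection = user.contest(decision)
        log.append("user_agent", objection)
        log.append("system_agent", system.respond_to_contest(decision, objection))

    return log


if __name__ == "__main__":
    for record in contestation_dialogue(income=32_000).records:
        print(f"[{record.actor}] {record.message}")
```

The structured `AuditLog` is what would make the auditing point actionable: because every decision, objection, and justification is recorded in one place, an external auditor (or another agent) could check whether explanations were given and whether the offered recourse was comparable across users.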