Can LLMs fairly distribute resources?
Distributive Fairness in Large Language Models: Evaluating Alignment with Human Values
This paper investigates whether large language models (LLMs) can make fair decisions when allocating resources among multiple individuals, and compares their behavior with human choices. It focuses on fairness concepts such as equitability (each person derives equal value from their own allocation), envy-freeness (no one prefers another's allocation to their own), and maximin (maximizing the minimum payoff, i.e., prioritizing the worst-off individual).
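To make these fairness notions concrete, here is a minimal Python sketch that checks them for a candidate allocation, assuming additive valuations; the agents, goods, and values are hypothetical and invented purely for illustration, not taken from the paper:

```python
# Hypothetical additive valuations: values[i][g] is agent i's value for good g.
values = [
    {"book": 4, "laptop": 9, "mug": 1},   # agent 0
    {"book": 7, "laptop": 5, "mug": 2},   # agent 1
]

# A candidate allocation: bundles[i] is the set of goods given to agent i.
bundles = [{"laptop"}, {"book", "mug"}]

def bundle_value(agent, bundle):
    """Agent's (additive) value for a bundle of goods."""
    return sum(values[agent][g] for g in bundle)

def is_equitable(bundles):
    """Equitability: every agent derives the same value from their own bundle."""
    own_values = [bundle_value(i, b) for i, b in enumerate(bundles)]
    return len(set(own_values)) == 1

def is_envy_free(bundles):
    """Envy-freeness: no agent values another agent's bundle above their own."""
    n = len(bundles)
    return all(
        bundle_value(i, bundles[i]) >= bundle_value(i, bundles[j])
        for i in range(n) for j in range(n) if i != j
    )

def maximin_value(bundles):
    """Minimum payoff across agents; a maximin allocation maximizes this."""
    return min(bundle_value(i, b) for i, b in enumerate(bundles))

def utilitarian_welfare(bundles):
    """Total value across agents; an 'efficient' allocation maximizes this."""
    return sum(bundle_value(i, b) for i, b in enumerate(bundles))

print(is_equitable(bundles))        # True: both agents value their bundle at 9
print(is_envy_free(bundles))        # True: neither prefers the other's bundle
print(maximin_value(bundles))       # 9
print(utilitarian_welfare(bundles)) # 18
```

The `utilitarian_welfare` function corresponds to the efficiency objective discussed in the findings below: maximizing total value can conflict with equitability, envy-freeness, or maximin, and the paper examines which of these objectives LLMs actually pursue.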
Key findings relevant to LLM-based multi-agent systems:
- LLMs struggle to prioritize fairness, often favoring efficiency (maximizing total value) even when this leads to unequal outcomes.
- They are better at selecting fair allocations from a given set of options than at generating them independently.
- Unlike humans, LLMs do not effectively use transferable resources (such as money) to improve fairness.
- Their behavior can be steered by instructions, assigned "personas" (e.g., a focus on a specific fairness notion), and biases, but assigning personas does not consistently improve fairness.
- LLMs are also sensitive to how the problem is presented, such as the order of the goods or the wording of the prompt.