How can AI agents balance performance and user preference?
HUMAN-AI COLLABORATION: TRADE-OFFS BETWEEN PERFORMANCE AND PREFERENCES
March 4, 2025
https://arxiv.org/pdf/2503.00248
This paper explores how different AI agent designs impact human-AI collaboration in a target interception game. It investigates the trade-off between an AI's individual performance and how much humans like working with it.
Key points for LLM-based multi-agent systems:
- Human-centric design matters: People prefer AI collaborators that are considerate of human actions and intentions, even if it slightly reduces the AI's individual performance. Features like predictability, transparency, and allowing for meaningful human contribution are crucial.
- Adaptability to context is key: Simpler AI agents were preferred in resource-constrained environments, while more complex agents excelled in resource-rich settings. LLM agents should be designed to dynamically adapt their strategies based on the task's demands.
- Subjective metrics are important: How much people like working with an AI is a strong predictor of successful collaboration and can matter even more than objective performance metrics. LLM agents should therefore be evaluated on both their task performance and their perceived collaborative abilities.
- Small changes can have big impacts: Simple modifications to an LLM's inputs or training process (e.g., adding constraints based on human intent) can significantly improve its collaborative behavior without needing a complete overhaul. This allows for iterative design and improvement of LLM-based multi-agent systems.
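The context-adaptation point above can be illustrated with a minimal sketch. This is not from the paper: the function name, the strategy labels, and the threshold are all illustrative assumptions about how an agent might switch between a simple and a complex collaboration strategy based on resource availability.

```python
# Hedged sketch: pick an agent strategy from a crude resource ratio,
# following the finding that simpler agents suit resource-constrained
# settings. Names and the threshold are illustrative, not from the paper.

def choose_strategy(targets_remaining: int, num_agents: int) -> str:
    """Select a collaboration strategy based on resources per agent."""
    resources_per_agent = targets_remaining / max(num_agents, 1)
    if resources_per_agent < 2:        # scarce: stay simple and predictable
        return "greedy_nearest"
    return "predictive_intercept"      # rich: plan around human intent

print(choose_strategy(3, 2))   # scarce setting -> "greedy_nearest"
print(choose_strategy(10, 2))  # rich setting -> "predictive_intercept"
```

In a real system the "resource ratio" would be replaced by whatever measure of task richness the environment exposes; the point is only that the switch is cheap to implement.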
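The evaluation point can likewise be sketched as a blended score. The weighting scheme below is a hypothetical example, not a metric proposed by the paper; both inputs are assumed to be normalized to [0, 1].

```python
# Hedged sketch: combine an objective task reward with a normalized human
# preference rating. The 50/50 default weight is an illustrative assumption.

def collaboration_score(task_reward: float, preference_rating: float,
                        w_subjective: float = 0.5) -> float:
    """Blend objective reward (0-1) with a human preference rating (0-1)."""
    return (1 - w_subjective) * task_reward + w_subjective * preference_rating

# An agent with slightly lower reward but much higher preference can rank first.
a = collaboration_score(0.9, 0.4)  # strong performer, disliked  -> 0.65
b = collaboration_score(0.8, 0.9)  # considerate collaborator    -> 0.85
```

Under this kind of scoring, the considerate agent wins despite the lower raw reward, which mirrors the paper's claim that subjective preference can outweigh objective performance.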