Can AI assistants prevent pilot spatial disorientation?
Combating Spatial Disorientation in a Dynamic Self-Stabilization Task Using AI Assistants
September 24, 2024
https://arxiv.org/pdf/2409.14565

The paper explores using AI agents to help pilots maintain balance under disorienting spatial conditions (such as spaceflight), where sensory input can be misleading. The authors use data from human pilots performing a simulated disorientation task to train both "digital twin" pilots and AI "assistant" agents.
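As a rough illustration of what training a "digital twin" from pilot data can look like, here is a minimal behavioral-cloning sketch. It assumes PyTorch; the network shape, state/action dimensions, and MSE objective are all my assumptions for illustration, not details taken from the paper.

```python
import torch
import torch.nn as nn

# Hypothetical "digital twin" pilot: a small network trained by behavioral
# cloning to map the craft's state (e.g., tilt angle, angular velocity) to
# the control input a human pilot would issue. Names and shapes are
# illustrative, not from the paper.
class PilotTwin(nn.Module):
    def __init__(self, state_dim: int = 4, action_dim: int = 1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, action_dim),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

def train_twin(model, states, actions, epochs=100, lr=1e-3):
    """Supervised regression of recorded pilot actions from recorded states."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(states), actions)
        loss.backward()
        opt.step()
    return model
```

The contrast with the RL-trained assistants in the findings below is that this model never sees the task's physics directly; it only learns to reproduce human behavior from logged trials.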
Key findings relevant to LLM-based multi-agent systems:
- AI embodiment matters for trust: AI agents trained directly on the task's physics (reinforcement learning) performed well but were less trusted by humans than agents trained to mimic human behavior (deep learning from pilot data).
- Human-like strategies are preferred: Even if suboptimal, assistants with human-like actions were better received and led to less "disagreement" during collaboration. This highlights the importance of training LLMs on human data to better align with human expectations and preferences.
- Fine-tuning with human feedback improves performance: AI assistants improved when fine-tuned on data from actual human-AI interactions, demonstrating the value of human-in-the-loop learning for multi-agent systems (a minimal sketch of this loop follows the list).
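To make the human-in-the-loop step concrete, the sketch below continues training an assistant policy on logged human-AI interaction data. The logging format, the reuse of a `PilotTwin`-style network as the assistant, and the choice to regress toward the human's actions are assumptions for illustration; the paper's actual fine-tuning procedure may differ.

```python
import torch
import torch.nn as nn

# Hypothetical human-in-the-loop fine-tuning: after deployment, log states
# where the human corrected or overrode the assistant, then run a few more
# gradient steps on those (state, human_action) pairs.
def finetune_on_interactions(assistant, interaction_log, epochs=10, lr=1e-4):
    """interaction_log: list of (state, human_action) tensor pairs collected
    while the human and assistant shared control of the stabilization task."""
    states = torch.stack([s for s, _ in interaction_log])
    actions = torch.stack([a for _, a in interaction_log])
    opt = torch.optim.Adam(assistant.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(assistant(states), actions)
        loss.backward()
        opt.step()
    return assistant
```

The design intuition matches the findings above: pulling the assistant toward what humans actually did during collaboration should reduce "disagreement" with the human partner, even if the resulting policy is not optimal for the raw physics.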