Who's liable when LLMs misbehave?
Inherent and emergent liability issues in LLM-based agentic systems: a principal-agent perspective
This paper explores the legal and ethical implications of deploying Large Language Model (LLM)-based AI agents, particularly in multi-agent systems (MAS). It examines how existing legal frameworks, chiefly Principal-Agent Theory (PAT), apply to the relationships among humans, AI agents, and AI agent platforms. Key points for LLM-based MAS include: LLMs' inherent limitations (instability, inconsistency, short "memory", and limited planning ability) create "agency gaps" that complicate the assignment of liability; task delegation and oversight are essential yet difficult because of information asymmetry and the potential for AI manipulation or deception; in MAS, responsibility becomes diffused, raising questions about how liability should be allocated among agents, orchestrators, and platforms; and this diffusion calls for new technical approaches to interpretability, behavior evaluation, reward and conflict management, and misalignment mitigation in order to support transparency and accountability.