How does task structure impact AI-human collaboration?
Modeling AI-Human Collaboration as a Multi-Agent Adaptation
This paper explores how AI and humans can best collaborate in organizations, depending on how tasks are structured (modular vs. sequential). Using simulations, it models AI as a rule-based agent that searches a broad solution space, and humans as heuristic-based agents with a narrower search.
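This search asymmetry can be sketched with a toy model. Everything below is illustrative, not the paper's actual simulation: the landscape is just random scores over bit-string "task configurations", and the names `ai_step` / `human_step` are mine. The AI agent samples widely across the whole space each step, while the human agent only considers one-bit changes to its current configuration.

```python
import random

N = 12                                   # bits in a task configuration
rng = random.Random(42)
# Toy rugged landscape: a random payoff per configuration (a stand-in
# for the structured task landscapes the paper simulates)
scores = {i: rng.random() for i in range(2 ** N)}

def ai_step(x, width=32):
    # Broad rule-based search: evaluate many random configurations
    # anywhere in the space, keep the best seen (including the incumbent)
    candidates = [rng.randrange(2 ** N) for _ in range(width)] + [x]
    return max(candidates, key=scores.get)

def human_step(x):
    # Narrow heuristic search: only consider one-bit tweaks to the
    # current configuration (local, experience-like adaptation)
    neighbors = [x ^ (1 << b) for b in range(N)] + [x]
    return max(neighbors, key=scores.get)

ai_x = human_x = 0
for _ in range(20):
    ai_x = ai_step(ai_x)
    human_x = human_step(human_x)

print(scores[ai_x], scores[human_x])     # both improve on the start
```

The key design contrast is scope, not intelligence: both agents are greedy, but one searches globally and the other locally, which is what makes task structure (how far good solutions are from each other) decisive.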
For LLM-based multi-agent systems, the key findings are:

1. Task structure matters more than industry in determining whether AI augments or replaces humans.
2. For modular tasks, specialized AI often outperforms humans unless human expertise is very high.
3. For sequential tasks, a human expert initiating and AI refining is the most effective configuration, outperforming AI initiating and a human refining.
4. "Hallucinatory" AI (memoryless, random search) can surprisingly outperform rule-based AI when assisting less-skilled humans, because it helps them escape local optima. This raises ethical questions about AI reliability and trustworthiness.
5. The choice between a human-first and an AI-first sequence depends on the level of human expertise: high human expertise favors human-first sequences.
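Finding 4 rests on a classic search intuition: a greedy local searcher gets trapped on the nearest peak, while a searcher that jumps randomly (ignoring neighborhood structure entirely) can stumble onto better regions. A minimal sketch of that mechanism, on an invented 1-D landscape with one local and one global peak (none of these functions or values come from the paper):

```python
import random

def fitness(x):
    # Toy rugged landscape: local peak at x=2 (height 5), global at x=8 (height 10)
    peaks = {2: 5.0, 8: 10.0}
    return max(height - abs(x - p) for p, height in peaks.items())

def greedy_hill_climb(start, steps=50):
    # "Skilled but local" search: move to the best adjacent point, stop when stuck
    x = start
    for _ in range(steps):
        best = max([x - 1, x, x + 1], key=fitness)
        if fitness(best) <= fitness(x):
            break
        x = best
    return x

def memoryless_search(start, steps=200, seed=0):
    # "Hallucinatory" search: jump to a random point regardless of where we are,
    # keeping only the best configuration found so far
    rng = random.Random(seed)
    x = start
    for _ in range(steps):
        candidate = rng.randint(0, 10)
        if fitness(candidate) > fitness(x):
            x = candidate
    return x

print(greedy_hill_climb(0))   # trapped on the local peak at 2
print(memoryless_search(0))   # finds the global peak at 8
```

The hill climber starting at 0 stops at the local peak, while the random jumper, having no attachment to the current neighborhood, eventually lands near the global one. That is the sense in which an unreliable, structure-blind assistant can lift a less-skilled (locally trapped) searcher.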