How can LLMs learn division of labor for collective intelligence?
How Collective Intelligence Emerges in a Crowd of People Through Learned Division of Labor: A Case Study
This paper studies how collective intelligence (CI) emerges in groups controlling a shared resource, using a case study of a massively multiplayer online game in which 2000 players controlled a single car. It identifies spontaneous division of labor (DOL) as the key mechanism behind CI, and finds that both the total number of participants and the proportion of highly skilled "elite" players are crucial for DOL, and hence CI, to emerge. The authors develop a distributed learning method in which each agent estimates the group's aggregate action from local observations alone, enabling decentralized role learning without global information. This is particularly relevant to LLM-based multi-agent systems: it offers a model for decentralized, efficient collaboration among LLMs of varying skill levels, with no central coordinator managing each agent's actions.
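The core decentralized mechanism can be sketched as follows: each agent infers the group's mean action purely from the publicly visible outcome (the car's movement) and never sees other agents' choices. The class name, the exponential-moving-average update, and the skill model below are illustrative assumptions, not the paper's actual algorithm.

```python
import random

class Agent:
    """One of many agents sharing control of a single resource.
    The update rules here are a hypothetical sketch, not the
    paper's exact method."""

    def __init__(self, skill):
        self.skill = skill         # probability of acting toward the target
        self.group_estimate = 0.0  # local estimate of the group's mean action

    def observe(self, outcome, alpha=0.1):
        # Track the group's aggregate action via an EMA of the shared
        # outcome alone -- no global information or coordinator needed.
        self.group_estimate += alpha * (outcome - self.group_estimate)

    def act(self, target):
        # A skilled agent nudges the group toward the target relative to
        # its own estimate; an unskilled agent acts randomly.
        if random.random() < self.skill:
            return 1.0 if target > self.group_estimate else -1.0
        return random.choice([-1.0, 1.0])

def step(agents, target):
    # The shared car moves by the mean of all individual actions,
    # and every agent observes only that mean.
    actions = [a.act(target) for a in agents]
    outcome = sum(actions) / len(actions)
    for a in agents:
        a.observe(outcome)
    return outcome

random.seed(0)
# A mixed-skill crowd: 10% "elite" agents, mirroring the paper's finding
# that the elite proportion matters (the 0.9/0.5 skill values are assumed).
crowd = [Agent(skill=0.9 if i < 20 else 0.5) for i in range(200)]
trace = [step(crowd, target=0.5) for _ in range(100)]
```

Because each agent conditions only on the shared outcome, this scales to any crowd size without a coordinator, which is the property that makes the setup attractive for LLM multi-agent systems.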