How can LLM agents learn to leverage social structures in adaptive environments?
AdaSociety: An Adaptive Environment with Social Structures for Multi-Agent Decision-Making
-
AdaSociety is a new customizable multi-agent environment that creates complex, evolving tasks by expanding the game world and changing the social connections between agents during gameplay. This contrasts with traditional environments, whose fixed tasks and world settings limit what agents can learn. The researchers benchmarked existing Reinforcement Learning (RL) and Large Language Model (LLM) agents and found that current methods struggle to exploit these dynamic social structures and changing environments.
-
AdaSociety introduces explicit social structures, represented as graphs, that govern agents' access to rewards and information. Connections are dynamic: agents can join and leave groups, mimicking realistic social dynamics. This social layer makes the environment valuable for research on LLM-based multi-agent systems because it demands cooperation, negotiation, and adaptation to other agents' actions, all crucial aspects of social intelligence. Initial tests with an LLM-based agent (LLM-C) built on GPT-4 were promising: it outperformed most of the tested RL methods. LLM-C combines high-level planning from the LLM with a rule-based controller for execution, suggesting LLMs may be well suited to navigating complex social dynamics and adapting to an evolving game world. However, familiar LLM limitations such as hallucination and bounded context length were also observed, pointing to areas for further research. A curriculum learning approach was also explored to improve the adaptability of RL agents.
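To make the graph idea concrete, here is a minimal sketch of a social structure whose edges gate reward sharing and information access. The class name, edge semantics, and the fixed sharing weight are illustrative assumptions, not AdaSociety's actual API.

```python
from collections import defaultdict

class SocialGraph:
    """Illustrative sketch (not AdaSociety's API): a directed graph whose
    edges determine which agents share rewards and observations."""

    def __init__(self):
        self.edges = defaultdict(set)  # agent -> set of connected agents

    def join(self, a, b):
        """Agent a forms a connection to b (e.g., joins b's group)."""
        self.edges[a].add(b)

    def leave(self, a, b):
        """Agent a drops its connection to b, mimicking dynamic membership."""
        self.edges[a].discard(b)

    def shared_reward(self, rewards, agent, weight=0.5):
        """Agent's own reward plus a weighted average of its neighbors' rewards
        (assumed sharing rule, purely for illustration)."""
        own = rewards.get(agent, 0.0)
        neighbors = self.edges[agent]
        if not neighbors:
            return own
        return own + weight * sum(rewards.get(n, 0.0) for n in neighbors) / len(neighbors)

    def visible_info(self, observations, agent):
        """Information access: an agent observes itself and its neighbors only."""
        return {n: observations[n] for n in {agent, *self.edges[agent]} if n in observations}
```

Joining a group here immediately changes both the agent's effective reward and what it can observe, which is why an agent must reason about the social structure, not just the physical world, when deciding how to act.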
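The LLM-plus-controller split described above can be sketched as a simple decision loop: the LLM proposes a high-level goal from the observation, and a rule-based controller maps that goal to primitive actions. The `llm_plan` stub, the goal names, and the rule table below are all hypothetical stand-ins (the paper's agent queries GPT-4).

```python
def llm_plan(observation):
    """Stand-in for the LLM planner (LLM-C uses GPT-4; this stub is purely
    illustrative). Returns a high-level goal given the current observation."""
    if observation.get("wood", 0) < 3:
        return "gather_wood"
    return "build_shelter"

# Hypothetical rule-based controller: maps each goal to primitive actions.
RULES = {
    "gather_wood": ["move_to_tree", "chop"],
    "build_shelter": ["move_to_site", "place_wood"],
}

def step(observation):
    """One decision cycle: the planner proposes a goal, the controller
    translates it into executable low-level actions."""
    goal = llm_plan(observation)
    return goal, RULES[goal]
```

The design choice this illustrates: keeping execution in a deterministic controller limits the damage from LLM hallucination, since the model only ever selects among valid goals rather than emitting raw actions.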