How can LLMs learn reusable skills for multi-agent cooperation?
LEARNING GENERALIZABLE SKILLS FROM OFFLINE MULTI-TASK DATA FOR MULTI-AGENT COOPERATION
This paper introduces HiSSD (Hierarchical and Separate Skill Discovery), a method for training AI agents to cooperate on complex tasks across varying scenarios (different numbers of agents and targets) using previously collected data (offline multi-task learning). HiSSD improves on existing methods by teaching agents both general cooperative strategies (common skills) and task-specific adaptations (task-specific skills). This hierarchical separation enables more efficient transfer of learned knowledge to new, unseen tasks.
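To make the hierarchical idea concrete, here is a minimal PyTorch sketch of one plausible realization: a common-skill encoder shared across all tasks plus a task-specific head per task, trained by behavior cloning on offline multi-task trajectories. The module names, network sizes, task names, and imitation-style loss are illustrative assumptions, not the paper's actual HiSSD architecture or objective.

```python
# Sketch: shared "common skill" encoder + per-task "specific skill" heads,
# trained offline from logged (task_id, observation, action) data.
import torch
import torch.nn as nn


class CommonSkillEncoder(nn.Module):
    """Maps a local observation to a task-agnostic skill embedding (shared across tasks)."""
    def __init__(self, obs_dim: int, skill_dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(), nn.Linear(128, skill_dim))

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)


class TaskSpecificHead(nn.Module):
    """Adapts the common skill embedding to one concrete task and outputs action logits."""
    def __init__(self, skill_dim: int, act_dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(skill_dim, 128), nn.ReLU(), nn.Linear(128, act_dim))

    def forward(self, skill: torch.Tensor) -> torch.Tensor:
        return self.net(skill)


class HierarchicalPolicy(nn.Module):
    """One shared encoder plus one head per training task."""
    def __init__(self, obs_dim: int, act_dim: int, skill_dim: int, task_ids: list[str]):
        super().__init__()
        self.encoder = CommonSkillEncoder(obs_dim, skill_dim)
        self.heads = nn.ModuleDict({t: TaskSpecificHead(skill_dim, act_dim) for t in task_ids})

    def forward(self, obs: torch.Tensor, task_id: str) -> torch.Tensor:
        return self.heads[task_id](self.encoder(obs))


def offline_bc_step(policy, optimizer, batch):
    """One behavior-cloning update on an offline multi-task batch of (task_id, obs, actions)."""
    loss = torch.tensor(0.0)
    for task_id, obs, actions in batch:          # obs: [B, obs_dim], actions: [B]
        logits = policy(obs, task_id)
        loss = loss + nn.functional.cross_entropy(logits, actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    tasks = ["3m", "5m_vs_6m"]                   # example SMAC-style task names (assumed)
    policy = HierarchicalPolicy(obs_dim=32, act_dim=9, skill_dim=16, task_ids=tasks)
    opt = torch.optim.Adam(policy.parameters(), lr=3e-4)
    batch = [(t, torch.randn(8, 32), torch.randint(0, 9, (8,))) for t in tasks]
    print(offline_bc_step(policy, opt, batch))
```

Because only the heads depend on the task, transferring to a new task amounts to attaching and fitting a fresh head while reusing the shared encoder.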
For LLM-based multi-agent systems, HiSSD suggests a potential pathway: train LLMs to collaborate effectively across applications by learning both general communication and cooperation skills and how to adapt those skills to specific task requirements. This hierarchical skill learning could improve the efficiency and adaptability of multi-agent LLM systems in complex environments. The separation of common and task-specific knowledge is particularly relevant to prompting, suggesting a framework that establishes baseline collaborative abilities and then injects specialized knowledge for each task.
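As a hedged illustration of how that same common/task-specific split might be mirrored in prompting, the sketch below composes an agent's system prompt from a shared cooperation prompt and a task-specific addendum. All strings and the build_agent_prompt helper are hypothetical examples, not part of HiSSD.

```python
# Shared cooperation instructions reused across every task (analogue of common skills).
COMMON_COOPERATION_PROMPT = (
    "You are one agent in a team. Share observations concisely, avoid duplicating "
    "teammates' work, and state your intended next action so others can coordinate."
)

# Per-task instructions (analogue of task-specific skills); hypothetical task names.
TASK_SPECIFIC_PROMPTS = {
    "warehouse_pickup": "Task: collect all packages; prioritize the nearest unclaimed package.",
    "search_and_rescue": "Task: sweep the grid for survivors; cover unexplored cells first.",
}


def build_agent_prompt(task_id: str, agent_role: str) -> str:
    """Compose a system prompt from shared cooperation skills plus the task-specific part."""
    return "\n\n".join([
        COMMON_COOPERATION_PROMPT,
        TASK_SPECIFIC_PROMPTS[task_id],
        f"Your role: {agent_role}.",
    ])


if __name__ == "__main__":
    print(build_agent_prompt("warehouse_pickup", "scout"))
```

Keeping the shared portion fixed while swapping only the task-specific portion mirrors the paper's intent: the baseline collaborative behavior is reused, and only the task adaptation changes per deployment.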