How can robots work together using LLMs?
EMOS: Embodiment-Aware Heterogeneous Multi-Robot Operating System with LLM Agents
This paper introduces EMOS, a framework for controlling teams of heterogeneous robots (e.g., drones, wheeled robots, and legged robots) with a team of large language model (LLM) agents. Instead of pre-assigning roles to each robot, EMOS has each LLM agent read its robot's description file (URDF) to understand the robot's physical capabilities, producing a "robot resume." The agents then discuss and plan how to complete a task, assigning subtasks according to each robot's strengths. The approach is evaluated in Habitat-MAS, a new simulated environment that includes multi-floor homes and challenges spanning navigation, object manipulation, and perception. Results show that embodiment awareness, provided through the robot resume, significantly improves the system's task performance.
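To make the "robot resume" idea concrete, here is a minimal sketch of extracting capability hints from a URDF using Python's standard library. This is an illustration, not the paper's implementation: EMOS has LLM agents interpret the URDF directly, and the function name `build_robot_resume`, the sample URDF, and the joint-type heuristics are all assumptions for this example.

```python
import xml.etree.ElementTree as ET

# Hypothetical minimal URDF for a wheeled robot with a simple arm.
URDF = """<robot name="wheeled_manipulator">
  <link name="base_link"/>
  <link name="wheel_left"/>
  <link name="arm_link"/>
  <joint name="wheel_left_joint" type="continuous">
    <parent link="base_link"/>
    <child link="wheel_left"/>
  </joint>
  <joint name="arm_joint" type="revolute">
    <parent link="base_link"/>
    <child link="arm_link"/>
    <limit lower="-1.57" upper="1.57" effort="10" velocity="1.0"/>
  </joint>
</robot>"""

def build_robot_resume(urdf_text: str) -> dict:
    """Summarize a URDF into a capability sketch (a 'robot resume')."""
    root = ET.fromstring(urdf_text)
    joints = root.findall("joint")
    joint_types = {j.get("type") for j in joints}
    return {
        "name": root.get("name"),
        "num_links": len(root.findall("link")),
        "num_joints": len(joints),
        # Crude capability heuristics from joint types alone:
        # continuous joints suggest wheels; revolute/prismatic suggest an arm.
        "mobile": "continuous" in joint_types,
        "manipulator": bool({"revolute", "prismatic"} & joint_types),
    }

resume = build_robot_resume(URDF)
print(resume)
```

A structured summary like this could then be injected into each agent's prompt, so subtask assignment (e.g., "fetch the mug upstairs") can be matched against fields such as `mobile` and `manipulator` rather than the raw URDF.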