Can LLMs improve taxi routing efficiency?
Data-Efficient Multi-Agent Spatial Planning with LLMs
This paper explores using Large Language Models (LLMs) for multi-agent spatial planning, with taxi routing as the test domain. It shows that LLMs can perform reasonably well zero-shot, with no task-specific training, outperforming several traditional algorithms. Fine-tuning the LLM on a small amount of data generated by a rollout procedure improves performance further, surpassing the previous state of the art while being significantly more data-efficient.

Key techniques for LLM-based multi-agent systems include: prompting strategies for encoding spatial information; incorporating domain knowledge, such as shortest-path distances, into prompts; mitigating LLM hallucinations through fine-tuning and feasibility checks on proposed actions; and leveraging rollout for data-efficient training and online performance improvement. The results also suggest that LLMs can generalize across different demand levels and scale to larger environments.
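To make two of these ideas concrete, here is a minimal sketch (not the paper's actual implementation; all names such as encode_prompt, feasible_moves, and apply_llm_action are illustrative) of encoding grid-world taxi state as a prompt with shortest-path hints, and of a feasibility check that guards against hallucinated moves:

```python
# Hypothetical sketch: serialize spatial state into a prompt and filter
# infeasible LLM-proposed moves. Assumes a simple grid world with
# Manhattan distance as the shortest-path hint; none of these function
# names come from the paper.
from typing import Dict, List, Tuple

Pos = Tuple[int, int]

def encode_prompt(taxis: List[Pos], requests: List[Pos], grid_size: int) -> str:
    """Serialize positions as text, adding Manhattan-distance hints per request."""
    lines = [f"Grid: {grid_size}x{grid_size}"]
    for i, t in enumerate(taxis):
        lines.append(f"Taxi {i} at {t}")
    for j, r in enumerate(requests):
        dists = [abs(t[0] - r[0]) + abs(t[1] - r[1]) for t in taxis]
        lines.append(f"Request {j} at {r}; taxi distances: {dists}")
    lines.append("For each taxi, answer with one move: up/down/left/right/stay.")
    return "\n".join(lines)

def feasible_moves(pos: Pos, grid_size: int) -> Dict[str, Pos]:
    """Feasibility check: keep only moves that stay on the grid."""
    moves = {"up": (0, 1), "down": (0, -1),
             "left": (-1, 0), "right": (1, 0), "stay": (0, 0)}
    ok = {}
    for name, (dx, dy) in moves.items():
        x, y = pos[0] + dx, pos[1] + dy
        if 0 <= x < grid_size and 0 <= y < grid_size:
            ok[name] = (x, y)
    return ok

def apply_llm_action(pos: Pos, proposed: str, grid_size: int) -> Pos:
    """Guard against hallucinated moves: fall back to staying put if infeasible."""
    return feasible_moves(pos, grid_size).get(proposed, pos)
```

The point of the feasibility layer is that the LLM's free-text output is never trusted directly: any off-grid or unparseable move degrades to a safe no-op rather than corrupting the environment state.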
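The rollout idea can be sketched similarly: evaluate each candidate first action by simulating it, then following a fixed base policy for a short horizon, and commit to the action with the best accumulated reward. This toy 1-D version (illustrative only; in the paper the base policy role would be played by the zero-shot LLM and the dynamics by the taxi environment) shows the mechanism:

```python
# Hypothetical rollout sketch on a toy 1-D chain. TARGET, step, and
# base_policy are stand-ins invented for illustration.
TARGET = 5

def step(pos: int, action: int):
    """Toy dynamics: move on an integer line; reward is negative distance to target."""
    new = pos + action
    return new, -abs(new - TARGET)

def base_policy(pos: int) -> int:
    """Greedy heuristic standing in for the base (e.g. zero-shot) policy."""
    return 1 if pos < TARGET else (-1 if pos > TARGET else 0)

def rollout_choice(pos: int, candidates, horizon: int = 3) -> int:
    """One-step lookahead: try each candidate, then follow base_policy."""
    best_a, best_ret = None, float("-inf")
    for a in candidates:
        s, ret = step(pos, a)
        for _ in range(horizon):
            s, r = step(s, base_policy(s))
            ret += r
        if ret > best_ret:
            best_a, best_ret = a, ret
    return best_a

# rollout_choice(0, [-1, 0, 1]) → 1 (moving toward the target scores best)
```

The same loop doubles as a data generator: the (state, chosen action) pairs it produces are exactly the kind of small supervised dataset on which the LLM can then be fine-tuned.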