How can V2X improve multi-agent perception?
Wireless Communication as an Information Sensor for Multi-agent Cooperative Perception: A Survey
This paper surveys cooperative perception in autonomous driving, where vehicles share sensor data (e.g., camera and LiDAR) via V2X communication to extend each vehicle's individual perception. It examines this from an "information sensor" perspective, highlighting the challenges of representing, fusing, and managing the flow of shared information, particularly in large-scale deployments.
Key points for LLM-based multi-agent systems:
- Representing information efficiently (at the data, feature, or object level, together with compression) is crucial for bandwidth-constrained communication; see the sketch below.
- Fusion methods must handle heterogeneity in models and sensor modalities, as well as imperfect communication (latency, packet loss).
- System-level design for scalability is paramount: decentralized, centralized, and hybrid architectures each manage communication flow differently in dense multi-agent scenarios.
- The emergence of large vision-language models points toward a unified feature space that could simplify information exchange between agents.
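To make the bandwidth and robustness trade-offs concrete, here is a minimal sketch of feature-level (intermediate) cooperative fusion with channel compression before transmission. The feature shapes, the 1x1-conv compressor, and the max-fusion rule are illustrative assumptions, not the survey's method.

```python
# Hypothetical sketch: compress BEV features before V2X transmission, then
# fuse received features with the ego vehicle's own. Shapes and modules are
# assumptions for illustration only.
import torch
import torch.nn as nn


class FeatureCompressor(nn.Module):
    """Compress a BEV feature map along channels before transmission."""

    def __init__(self, channels: int = 256, ratio: int = 8):
        super().__init__()
        self.encode = nn.Conv2d(channels, channels // ratio, kernel_size=1)
        self.decode = nn.Conv2d(channels // ratio, channels, kernel_size=1)

    def compress(self, feat: torch.Tensor) -> torch.Tensor:
        return self.encode(feat)   # sent over the air (ratio-x fewer channels)

    def decompress(self, feat: torch.Tensor) -> torch.Tensor:
        return self.decode(feat)   # reconstructed at the receiver


def fuse(ego: torch.Tensor, received: list[torch.Tensor]) -> torch.Tensor:
    """Element-wise max fusion; degrades gracefully if messages are lost."""
    stack = torch.stack([ego, *received]) if received else ego.unsqueeze(0)
    return stack.max(dim=0).values


if __name__ == "__main__":
    comp = FeatureCompressor()
    ego_feat = torch.randn(1, 256, 100, 100)       # ego vehicle's BEV features
    neighbor_feat = torch.randn(1, 256, 100, 100)  # cooperating vehicle's features

    # Sender side: compress before transmission over the bandwidth-constrained link.
    msg = comp.compress(neighbor_feat)

    # Receiver side: decompress, then fuse with the ego features.
    fused = fuse(ego_feat, [comp.decompress(msg)])

    # Packet loss: with no received messages, fusion falls back to ego-only perception.
    fused_lossy = fuse(ego_feat, [])
    print(fused.shape, fused_lossy.shape)
```

In this sketch, a higher compression ratio reduces the V2X payload at the cost of reconstruction fidelity, which mirrors the data/feature/object-level trade-off the survey discusses.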