Can LLMs automate post-disaster response?
Integration of Large Vision Language Models for Efficient Post-disaster Damage Assessment and Reporting
This paper introduces DisasTeller, a multi-agent framework that uses Large Vision Language Models (LVLMs) such as GPT-4 to automate post-disaster tasks: damage assessment, alert dissemination, resource allocation, and recovery planning. Each agent specializes in a specific role (expert, alerts, emergency, assignment) and coordinates with the others. For LLM-based multi-agent systems, the paper demonstrates that LVLMs can collaborate autonomously to streamline complex response processes; highlights challenges such as hallucinations and data security; emphasizes the importance of prompt engineering for reliable output; and notes the need for robust evaluation metrics for LVLM-based systems and for real-time integration with data sources.
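The coordination pattern described above can be sketched as a pipeline of role-specialized agents sharing a common context. This is an illustrative assumption of how such a system might be wired, not the authors' implementation: the role names (expert, alerts, emergency, assignment) come from the paper's summary, but the `Agent` interface, prompt templates, and the stubbed LVLM call are hypothetical stand-ins.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    """One role-specialized agent; the prompt template encodes the
    per-role prompt engineering the paper emphasizes."""
    role: str
    prompt_template: str

    def run(self, context: dict) -> str:
        # Stand-in for an LVLM call (e.g., GPT-4 with image input);
        # a real system would send the prompt plus imagery to the model.
        prompt = self.prompt_template.format(**context)
        return f"[{self.role}] {prompt}"

def coordinate(scene_summary: str, agents: list[Agent]) -> dict:
    """Pass a shared scene description through each agent in turn,
    collecting one output per role."""
    context = {"scene": scene_summary}
    return {agent.role: agent.run(context) for agent in agents}

# The four roles named in the paper, with hypothetical prompts.
agents = [
    Agent("expert", "Assess structural damage in: {scene}"),
    Agent("alerts", "Draft a public alert for: {scene}"),
    Agent("emergency", "Prioritize emergency needs for: {scene}"),
    Agent("assignment", "Allocate response resources for: {scene}"),
]
report = coordinate("collapsed bridge, flooded access roads", agents)
```

A production version would replace the stubbed `run` with real model calls, add cross-agent message passing, and validate outputs to mitigate the hallucination risk the paper flags.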