Can Hybrid Event-B verify autonomous system safety?
Autonomous System Safety Properties with Multi-Machine Hybrid Event-B
November 22, 2024
https://arxiv.org/pdf/2411.14168

This paper explores using a formal method called Hybrid Event-B to design and verify safety properties of autonomous systems, particularly those with multiple interacting agents. It uses an incident response scenario with a controller, drones, and responders as a case study.
Key points for LLM-based multi-agent systems:
- Formal Verification: Hybrid Event-B offers a way to formally specify and verify agent behaviors and interactions, which could be adapted for LLM-based agents to ensure they adhere to specific safety and performance constraints. This addresses the "trust" and "understanding" challenges highlighted for autonomous systems.
- Multi-Agent Architecture: The paper's multi-machine approach in Hybrid Event-B directly reflects the distributed nature of multi-agent systems, where each agent (machine) has its own logic and interacts with others through well-defined interfaces. This maps well to LLM agents communicating via defined protocols.
- Hybrid System Modeling: The hybrid aspect of Hybrid Event-B, combining discrete and continuous dynamics, can be relevant to LLM agents whose actions might have both discrete (e.g., sending a message) and continuous (e.g., navigating a virtual space) consequences.
- Refinement and Decomposition: The concepts of refinement and decomposition in Hybrid Event-B could help manage the complexity of LLM-based multi-agent systems. Breaking down complex tasks into smaller, verifiable sub-tasks handled by different agents is crucial for scalability and robustness.
- Synchronization: The paper discusses mechanisms for synchronizing mode events (discrete actions) across multiple machines, which offers a potential model for coordinating the actions of multiple LLM agents in a time-sensitive manner.
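To make the hybrid-modelling point above concrete, here is a minimal sketch, in Python rather than Hybrid Event-B notation, of one "machine" combining continuous (pliant) evolution with guarded discrete mode events and a safety invariant. All names (`Drone`, `MIN_BATTERY`, the dynamics) are invented for illustration and are not taken from the paper's model:

```python
# Illustrative sketch (hypothetical names, not the paper's model):
# one machine mixing continuous motion with discrete mode events,
# in the spirit of Hybrid Event-B.

MIN_BATTERY = 0.2   # invariant: battery must exceed this while flying
DT = 0.1            # Euler step for the continuous phase

class Drone:
    def __init__(self):
        self.mode = "IDLE"      # discrete state
        self.altitude = 0.0     # continuous state
        self.battery = 1.0

    def invariant(self):
        # The safety property a verifier would discharge as a proof obligation.
        return self.mode != "FLYING" or self.battery > MIN_BATTERY

    # Mode events: instantaneous, guarded discrete transitions.
    def take_off(self):
        assert self.mode == "IDLE" and self.battery > MIN_BATTERY  # guard
        self.mode = "FLYING"

    def land(self):
        assert self.mode == "FLYING"  # guard
        self.mode = "IDLE"

    # Pliant behaviour: continuous evolution between mode events
    # (crude Euler integration of climb rate and battery drain).
    def evolve(self, dt=DT):
        if self.mode == "FLYING":
            self.altitude += 1.0 * dt
            self.battery -= 0.01 * dt
        assert self.invariant()  # invariant re-checked after each step

d = Drone()
d.take_off()
for _ in range(50):
    d.evolve()
d.land()
print(round(d.altitude, 2))  # altitude after 5 s of climb
```

Where a Hybrid Event-B tool would prove the invariant holds for all runs, this sketch only checks it at runtime; the structural correspondence (guards, mode events, pliant phases, invariants) is the point.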
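The synchronization point above can be sketched as a simple synchronous composition: a shared mode event fires only when every participating machine's guard holds, and then all participants update together. The event and class names here are hypothetical, chosen to echo the incident-response scenario:

```python
# Illustrative sketch (hypothetical names): synchronising one mode event
# across several machines, so all agents change mode atomically or not at all.

class Machine:
    def __init__(self, name):
        self.name = name
        self.mode = "SEARCH"

    def guard_switch_to_rescue(self):
        return self.mode == "SEARCH"

    def switch_to_rescue(self):
        self.mode = "RESCUE"

def synchronised_event(machines):
    """Fire the shared mode event iff every guard holds; otherwise no machine moves."""
    if all(m.guard_switch_to_rescue() for m in machines):
        for m in machines:
            m.switch_to_rescue()
        return True
    return False

fleet = [Machine(f"drone{i}") for i in range(3)]
fired = synchronised_event(fleet)
print(fired, [m.mode for m in fleet])  # True ['RESCUE', 'RESCUE', 'RESCUE']
```

The all-or-nothing firing rule is what makes the coordination verifiable: no interleaving can leave one agent in `RESCUE` while another is still in `SEARCH`, which is the kind of property one would want enforced among coordinating LLM agents as well.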