Can multi-agent AI improve ethical clinical decision support?
Reinforcing Clinical Decision Support through Multi-Agent Systems and Ethical AI Governance
April 8, 2025
https://arxiv.org/pdf/2504.03699

This paper explores using a multi-agent AI system to improve clinical decision support, particularly in intensive care units (ICUs). Each agent specializes in a different task, such as analyzing lab results, vital signs, or patient history, and their findings are combined into a comprehensive prediction. This approach aims to make AI-driven decisions more transparent and trustworthy.
Key points for LLM-based multi-agent systems:
- Modular Design: The system uses specialized agents, mirroring how a clinical team works, with each agent potentially leveraging LLMs for its specific task (see the orchestration sketch after this list).
- Ethical AI Governance: Transparency and accountability are emphasized, with structured outputs and logging enabling traceability of predictions.
- LLM Integration: The system utilizes Anthropic Claude 3.7 Sonnet as the LLM and intelli.flow for asynchronous agent orchestration.
- Few-Shot Learning: Real ICU patient data is used as few-shot examples to improve the prediction agent's generalizability (a prompt-assembly sketch follows the list).
- Transparency Evaluation: The TransparencyMetrics class is used to quantify the interpretability of the system's predictions (a hypothetical reconstruction appears after this list).
- Comparison with Single-Agent: A single-agent baseline achieved comparable predictive performance and higher transparency scores, suggesting that the lack of explicit coordination among the agents may be limiting the multi-agent system's potential benefits.
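
To make the modular design concrete, here is a minimal orchestration sketch. The paper uses Anthropic Claude 3.7 Sonnet with intelli.flow for asynchronous coordination; since the paper's intelli.flow code is not reproduced in this summary, the sketch below uses plain asyncio with the Anthropic Python SDK instead, and the agent roles, prompts, and aggregation step are illustrative assumptions:

```python
import asyncio
import json
import logging

from anthropic import AsyncAnthropic  # official Anthropic SDK

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("icu-agents")

client = AsyncAnthropic()  # reads ANTHROPIC_API_KEY from the environment
MODEL = "claude-3-7-sonnet-20250219"

async def run_agent(role: str, system_prompt: str, patient_data: str) -> dict:
    """One specialized agent: a role-specific system prompt over shared patient data."""
    response = await client.messages.create(
        model=MODEL,
        max_tokens=1024,
        system=system_prompt,
        messages=[{"role": "user", "content": patient_data}],
    )
    finding = {"agent": role, "finding": response.content[0].text}
    # Structured logging of each output supports the traceability the paper emphasizes.
    log.info("agent output: %s", json.dumps(finding))
    return finding

async def main(patient_data: str) -> dict:
    # Specialized agents run concurrently, mirroring a clinical team.
    roles = {
        "labs": "You analyze laboratory results and flag abnormalities.",
        "vitals": "You analyze vital-sign trends and flag instability.",
        "history": "You summarize relevant patient history and risk factors.",
    }
    findings = await asyncio.gather(
        *(run_agent(role, prompt, patient_data) for role, prompt in roles.items())
    )
    # An aggregator agent combines the specialists' findings into one prediction.
    summary = "\n".join(f["agent"] + ": " + f["finding"] for f in findings)
    return await run_agent(
        "aggregator",
        "Combine the specialist findings into a single risk prediction with rationale.",
        summary,
    )

# asyncio.run(main(open("patient_record.txt").read()))
```

Running the specialists concurrently and emitting each finding as a structured log line is one way to get both the parallelism of asynchronous orchestration and an audit trail for governance.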
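
The few-shot step can be sketched as simple prompt assembly. The example records and field names below are placeholders, not the paper's data, which comes from real ICU patients:

```python
# Hypothetical few-shot examples; in the paper these come from real ICU records.
FEW_SHOT_EXAMPLES = [
    {"summary": "72M, septic shock, lactate 4.1, rising vasopressor dose", "outcome": "high risk"},
    {"summary": "55F, post-op day 2, stable vitals, normalizing labs", "outcome": "low risk"},
]

def build_prediction_prompt(new_case: str) -> str:
    """Prepend labeled examples so the prediction agent generalizes from prior cases."""
    lines = ["You predict ICU patient risk. Examples:"]
    for ex in FEW_SHOT_EXAMPLES:
        lines.append(f"Case: {ex['summary']}\nRisk: {ex['outcome']}")
    lines.append(f"Case: {new_case}\nRisk:")
    return "\n\n".join(lines)
```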
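
The paper's TransparencyMetrics class is not reproduced in this summary, so the following is only a hypothetical sketch of how such interpretability scoring might look; the component checks are assumptions:

```python
from dataclasses import dataclass

@dataclass
class TransparencyMetrics:
    """Hypothetical reconstruction: scores how traceable a prediction is."""

    def score(self, prediction: dict) -> float:
        # Assumed components: does the output expose a rationale, name its
        # originating agent, and cite supporting data? Each check counts equally.
        checks = [
            bool(prediction.get("rationale")),
            bool(prediction.get("agent")),
            bool(prediction.get("evidence")),
        ]
        return sum(checks) / len(checks)

# TransparencyMetrics().score({"agent": "aggregator", "rationale": "...", "evidence": "..."})
```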