How can LLMs learn medical norms in distributed healthcare?
Multi-Agent Norm Perception and Induction in Distributed Healthcare
This paper proposes a multi-agent model for learning and sharing medical norms (best practices and protocols) in a distributed healthcare setting, mimicking how human doctors learn and adapt.
The model addresses both descriptive norms (what doctors tend to do) and prescriptive norms (what doctors should do). For descriptive norms, agents develop individual perceptions of collective tendencies by sharing information (such as diagnostic preferences) and updating their beliefs through a Gaussian Mixture Model.

For prescriptive norms (rules), agents use a modified rational inductive logic model, incorporating "practice verification" within a Markov game environment to learn and refine their understanding of optimal protocols. This learning process incorporates adaptive learning rates and momentum for more stable and accurate updates, addressing a key challenge in Bayesian rule induction within multi-agent systems.

The model emphasizes the importance of interaction and continuous learning for aligning individual and collective behaviors, using real-world data to ground the learning process and avoid convergence to unrealistic or ineffective norms.
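The descriptive-norm mechanism can be illustrated with a minimal sketch: an observing agent pools the preference values shared by its peers and fits a small Gaussian mixture by expectation-maximization, treating the fitted components as its perception of the collective tendencies. The data, component count, and parameter names below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

# Hypothetical sketch: peers share scalar "diagnostic preference" samples
# (e.g. a preferred test-ordering threshold); the observing agent fits a
# 2-component 1-D Gaussian mixture by EM to represent its belief about
# the collective (descriptive) norm.
rng = np.random.default_rng(0)

# Shared observations from two informal "schools of practice" (assumed data).
shared = np.concatenate([
    rng.normal(0.3, 0.05, 40),   # conservative ordering threshold
    rng.normal(0.7, 0.05, 60),   # aggressive ordering threshold
])

# Initialise mixture parameters: weights, means, variances.
w = np.array([0.5, 0.5])
mu = np.array([0.2, 0.8])
var = np.array([0.05, 0.05])

for _ in range(50):  # EM iterations
    # E-step: responsibility of each component for each sample.
    dens = (w / np.sqrt(2 * np.pi * var)
            * np.exp(-(shared[:, None] - mu) ** 2 / (2 * var)))
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means, variances from responsibilities.
    n_k = resp.sum(axis=0)
    w = n_k / len(shared)
    mu = (resp * shared[:, None]).sum(axis=0) / n_k
    var = (resp * (shared[:, None] - mu) ** 2).sum(axis=0) / n_k

print(np.round(np.sort(mu), 2))  # two perceived practice clusters
```

Each new round of shared information simply extends `shared` and re-runs (or warm-starts) the EM loop, so the agent's perceived norm tracks the evolving collective behavior.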
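For the prescriptive side, the stabilized update can be sketched as a Bayesian belief over candidate rules that is nudged toward each round's posterior using a decaying learning rate and a momentum buffer, rather than jumping to the raw posterior. The rule set, likelihood values, and hyperparameters here are assumptions for illustration; the paper's exact update may differ.

```python
# Hypothetical sketch: momentum-smoothed Bayesian update over candidate
# prescriptive rules. `likelihood[r]` scores how well rule r explained the
# latest verified practice outcome; beliefs move toward the posterior with
# an adaptive (decaying) learning rate plus momentum for stability.

beliefs = [0.25, 0.25, 0.25, 0.25]   # prior over four candidate protocols
velocity = [0.0] * 4                 # momentum buffer
beta, lr0 = 0.9, 0.5                 # momentum factor, base learning rate

def verify_and_update(likelihood, step):
    """One practice-verification round: damped move toward the posterior."""
    global beliefs, velocity
    # Bayesian target: prior * likelihood, renormalised.
    post = [b * l for b, l in zip(beliefs, likelihood)]
    z = sum(post)
    post = [p / z for p in post]
    lr = lr0 / (1 + 0.1 * step)      # adaptive learning rate, decays over time
    velocity = [beta * v + (1 - beta) * (p - b)
                for v, p, b in zip(velocity, post, beliefs)]
    # Guard against momentum overshoot pushing a belief below zero.
    beliefs = [max(b + lr * v, 1e-6) for b, v in zip(beliefs, velocity)]
    z = sum(beliefs)
    beliefs = [b / z for b in beliefs]

# Repeated verification rounds consistently favouring rule 2:
for t in range(30):
    verify_and_update([0.1, 0.2, 0.9, 0.3], step=t)
print(max(range(4), key=lambda r: beliefs[r]))  # rule 2 dominates
```

The momentum term filters out noise from any single verification round, while the decaying rate lets early rounds move beliefs quickly and later rounds settle, which is one plausible reading of the stability claim.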