Can A Society of Generative Agents Simulate Human Behavior and Inform Public Health Policy? A Case Study on Vaccine Hesitancy
This paper explores VACSIM, a multi-agent simulation powered by Large Language Models (LLMs), as a way to model human behavior in the context of vaccine hesitancy and public health policy. It examines whether such a system can realistically simulate attitude changes under different policy interventions.
Key points for LLM-based multi-agent systems:
- LLMs such as Llama and Qwen show potential for simulating human behavior, but face challenges including inconsistency with assigned demographic profiles and biases inherited from pre-training data.
- The paper proposes techniques such as "attitude modulation" and "simulation warmup" to address these issues.
- It highlights the promise of LLM-driven agents for policy exploration while acknowledging their limits in mirroring real-world outcomes.
- The study emphasizes evaluating both global and local consistency within the simulation, and uses qualitative analysis of agent behaviors to understand their decision-making processes.
- Finally, it suggests future research directions in mitigating prompt sensitivity and improving alignment with real-world behavior.
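To make the idea concrete, here is a minimal sketch of what a simulation loop of this kind might look like. This is not the paper's actual VACSIM implementation: the function names (`query_llm`, `modulate`, `simulate`), the numeric attitude scale, and the specific damping and warmup logic are all illustrative assumptions, with the LLM call replaced by a random stand-in.

```python
import random
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    profile: str      # demographic persona fed into the prompt
    attitude: float   # -1.0 (strongly hesitant) .. +1.0 (strongly accepting)

def query_llm(prompt: str) -> float:
    """Stand-in for a real LLM call (e.g. Llama or Qwen).

    Returns a raw attitude shift; a real system would parse this
    from the model's generated response.
    """
    rng = random.Random(prompt)  # deterministic per prompt, for illustration
    return rng.uniform(-0.2, 0.2)

def modulate(shift: float, strength: float = 0.5) -> float:
    """Hypothetical 'attitude modulation': damp raw model output
    to curb bias-driven swings inherited from pre-training data."""
    return max(-1.0, min(1.0, shift * strength))

def simulate(agents: list[Agent], news_items: list[str], warmup: int = 2) -> list[list[float]]:
    """Run the society through a news stream; discard the first
    `warmup` rounds (a hypothetical 'simulation warmup') so agents
    settle into their personas before attitudes are recorded."""
    history = []
    for step, news in enumerate(news_items):
        for agent in agents:
            prompt = f"{agent.profile} reacts to: {news}"
            delta = modulate(query_llm(prompt))
            agent.attitude = max(-1.0, min(1.0, agent.attitude + delta))
        if step >= warmup:
            history.append([a.attitude for a in agents])
    return history
```

A run might initialize a few agents with different personas and attitudes, feed them a sequence of news or policy announcements, and then inspect the recorded attitude trajectories; a policy intervention would be modeled by changing the content of the news stream.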