Do Large Language Models Solve the Problems of Agent-Based Modeling? A Critical Review of Generative Social Simulations
This paper reviews the nascent field of generative agent-based models (ABMs), in which large language models (LLMs) control agents in social simulations. It argues that while LLMs make agents appear more realistic, they exacerbate the validation challenges that already afflict ABMs: their black-box nature, potential biases, tendency to "hallucinate," and high computational cost all make rigorous validation difficult. The paper questions whether and how generative ABMs can mature beyond proof-of-concept demonstrations to contribute meaningfully to social science. For LLM-based multi-agent systems, the key takeaways are the need for robust validation methods that go beyond subjective assessments, the importance of addressing LLM biases, and the trade-off between surface realism and the interpretability and explainability vital for scientific progress.
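To make the setup under review concrete, here is a minimal, hypothetical sketch of a generative ABM step in which each agent's action is chosen by a language model given a natural-language view of its state. All names are illustrative, and `query_llm` is a deterministic stub standing in for a real LLM call (no specific API is assumed), which also illustrates why such agents are hard to validate: the decision logic lives inside the model, not in inspectable rules.

```python
def query_llm(prompt: str) -> str:
    # Stub: a real system would send `prompt` to an LLM and parse its reply.
    # This rule-based stand-in keeps the example runnable offline.
    if "neighbors shared" in prompt:
        return "share"
    return "withhold"

class Agent:
    def __init__(self, name: str):
        self.name = name
        self.memory = []  # transcript of observed events

    def observe(self, event: str) -> None:
        self.memory.append(event)

    def act(self) -> str:
        # Build a natural-language prompt from the agent's state;
        # the "decision" is whatever text the model returns.
        prompt = (f"You are {self.name} in a cooperation game. "
                  f"Recent events: {'; '.join(self.memory[-3:]) or 'none'}. "
                  "Reply with one word: share or withhold.")
        return query_llm(prompt)

def step(agents: list) -> dict:
    # One simulation tick: every agent acts, then observes a summary.
    actions = {a.name: a.act() for a in agents}
    summary = f"neighbors shared: {sum(v == 'share' for v in actions.values())}"
    for a in agents:
        a.observe(summary)
    return actions

agents = [Agent(f"agent{i}") for i in range(3)]
first = step(agents)   # empty memories: every agent withholds
second = step(agents)  # prompts now mention sharing: every agent shares
```

Note that even in this toy, the mapping from prompt to action is opaque from the simulation's point of view, which is precisely the validation problem the paper highlights.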