How can LLMs extract ABM model components from prompts?
Prompt Engineering Guidance for Conceptual Agent-based Model Extraction using Large Language Models
This paper explores using Large Language Models (LLMs) to automatically extract information from conceptual descriptions of Agent-Based Models (ABMs) and translate that information into a structured representation, specifically JSON, chosen for human readability and its suitability as input to LLM-driven code generation. It focuses on prompt engineering techniques for effectively querying LLMs to extract specific model components, such as agent characteristics, environment variables, and execution parameters. The structured JSON output supports both automated code generation and manual implementation by developers. This approach addresses the complex and labor-intensive process of translating ABM designs into working simulations. A key finding is that, with current LLMs, a sequence of simpler, more focused prompts yields more accurate model extraction than a single complex, nested prompt.
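
To make the extraction workflow concrete, the Python sketch below illustrates the sequence-of-focused-prompts strategy the paper favours over a single nested prompt. The query_llm stub, the component prompt wording, and the example model description are assumptions introduced purely for illustration, not the paper's actual prompts or API; substitute your own LLM client and phrasing.

import json

def query_llm(prompt: str) -> str:
    # Placeholder for whatever LLM API is available (OpenAI, Anthropic, a
    # local model, ...). It returns an empty JSON object so the sketch runs
    # end to end without network access.
    return "{}"

# Illustrative conceptual model description supplied by the modeller.
MODEL_DESCRIPTION = (
    "Households (agents) decide each month whether to relocate based on "
    "income and neighbourhood satisfaction. The environment is a 50x50 grid "
    "of parcels, each with a land-value attribute. The simulation runs for "
    "120 monthly steps."
)

# One simple, focused prompt per model component, rather than a single
# nested prompt that asks for everything at once.
COMPONENT_PROMPTS = {
    "agents": "List the agent types and their attributes as JSON.",
    "environment": "List the environment variables and their types as JSON.",
    "execution": "Give the execution parameters (steps, scheduling) as JSON.",
}

def extract_abm_components(description: str) -> dict:
    """Query the LLM once per component and merge the JSON fragments."""
    extracted = {}
    for component, instruction in COMPONENT_PROMPTS.items():
        prompt = (
            f"Model description:\n{description}\n\n"
            f"{instruction} Respond with JSON only."
        )
        extracted[component] = json.loads(query_llm(prompt))
    return extracted

if __name__ == "__main__":
    print(json.dumps(extract_abm_components(MODEL_DESCRIPTION), indent=2))

In practice, each per-component response would likely be validated against a lightweight schema before the merged JSON is handed to a code generator or to a developer for implementation.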