How can I build adaptive, two-layer agent models?
ADAGE: A generic two-layer framework for adaptive agent based modelling
This paper introduces ADAGE, a two-layer framework for creating adaptive agent-based models (ABMs). It addresses the "Lucas critique": the observation that agents' behavioral rules should adapt when their environment changes, while in ADAGE the environment in turn adapts to changes in agent behavior. ADAGE frames this as a Stackelberg game in which an outer layer modifies environment characteristics and an inner layer simulates agents whose behavior adapts to those changes. This lets ADAGE tackle a range of ABM tasks, such as policy design, calibration, scenario generation, and robust behavioral learning, within a single unified framework.
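The two-layer structure can be pictured with a minimal sketch. The Python below is illustrative only and does not use ADAGE's actual API: a hypothetical outer layer (the Stackelberg leader) updates an environment parameter in response to aggregate agent behavior, while inner-layer agents condition toy policies on that parameter and adapt each episode.

```python
# Hypothetical sketch of a two-layer (Stackelberg) adaptive loop.
# Class names, update rules, and the toy objective are all assumptions for
# illustration; they are not ADAGE's actual interfaces.

class InnerAgent:
    """Inner-layer agent whose policy is conditioned on the environment parameter."""

    def __init__(self):
        self.weight = 0.0  # toy policy parameter

    def act(self, env_param: float) -> float:
        # Behavioral policy conditioned on the current environment characteristics.
        return self.weight * env_param

    def adapt(self, env_param: float, action: float, target: float, lr: float = 0.05):
        # Toy delta-rule update: nudge the policy toward the illustrative target action.
        self.weight -= lr * (action - target) * env_param


class OuterLayer:
    """Outer layer (leader) that adjusts environment characteristics."""

    def __init__(self):
        self.env_param = 1.0

    def update(self, actions):
        # React to aggregate agent behavior, e.g. a policy designer tuning a rate.
        mean_action = sum(actions) / len(actions)
        self.env_param += 0.05 * (1.0 - mean_action)
        return self.env_param


def run_simulation(episodes: int = 50, n_agents: int = 10, target: float = 1.0):
    outer = OuterLayer()
    agents = [InnerAgent() for _ in range(n_agents)]
    for _ in range(episodes):
        # Inner layer: agents act and adapt under the current environment.
        actions = [a.act(outer.env_param) for a in agents]
        for agent, action in zip(agents, actions):
            agent.adapt(outer.env_param, action, target)
        # Outer layer: environment characteristics respond to agent behavior.
        outer.update(actions)
    return outer.env_param, [a.weight for a in agents]


if __name__ == "__main__":
    env_param, weights = run_simulation()
    print(f"final environment parameter: {env_param:.3f}")
```

The point of the sketch is the ordering of the loop: the inner layer adapts under the current environment, then the outer layer observes the resulting behavior and changes the environment, so neither side's rules are fixed.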
For LLM-based multi-agent systems, ADAGE offers a structure for building adaptive environments in which LLMs act as agents. Because the environment reacts to the agents' actions, interactions become more dynamic and realistic. Training for robust behavior across different settings and preferences makes ADAGE potentially useful for multi-agent systems whose LLM agents generalize to new situations without retraining, and its conditional behavioral policies let agents adjust their behavior to changing circumstances within the simulation.
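As a rough illustration of how an LLM could serve as an inner-layer agent with a conditional behavioral policy, the sketch below injects the environment characteristics chosen by the outer layer into the agent's prompt. The `EnvCharacteristics` fields, the `LLMAgent` class, and the `call_llm` stub are hypothetical and are not part of ADAGE or any specific LLM client.

```python
# Hypothetical sketch: an LLM as an inner-layer agent whose behavior is
# conditioned on environment characteristics set by the outer layer.
from dataclasses import dataclass


@dataclass
class EnvCharacteristics:
    # Example characteristics an outer layer might vary between episodes (illustrative).
    interest_rate: float
    tax_rate: float
    scenario: str


def call_llm(prompt: str) -> str:
    # Placeholder stand-in; replace with a call to your actual LLM client.
    return "save 40% of income, spend the rest"


class LLMAgent:
    """Inner-layer agent whose conditional policy is expressed through the prompt."""

    def __init__(self, role: str):
        self.role = role

    def act(self, env: EnvCharacteristics, observation: str) -> str:
        # Environment characteristics are injected into the prompt, so the same agent
        # produces different behavior as the outer layer changes the environment.
        prompt = (
            f"You are a {self.role} in an economic simulation.\n"
            f"Scenario: {env.scenario}\n"
            f"Interest rate: {env.interest_rate:.2%}, tax rate: {env.tax_rate:.2%}\n"
            f"Observation: {observation}\n"
            "Decide your action for this step."
        )
        return call_llm(prompt)


if __name__ == "__main__":
    agent = LLMAgent(role="household")
    env = EnvCharacteristics(interest_rate=0.03, tax_rate=0.2, scenario="recession")
    print(agent.act(env, "monthly income received"))
```

Conditioning the prompt on the environment characteristics is one simple way to realize a conditional behavioral policy for an LLM agent; the same agent can then be evaluated across the different settings the outer layer generates.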