Can LLMs learn interpretable human behavior models?
Fully Data-Driven but Interpretable Human Behavioural Modelling with Differentiable Discrete Choice Model
This paper introduces Diff-DCM, a novel method for creating data-driven, interpretable models of human decision-making. It uses differentiable programming to automatically learn complex utility functions from observed choices, eliminating the need for hand-crafted models based on expert knowledge. This enables faster model creation and potentially greater accuracy, and it supports sensitivity analysis as well as the computation of optimal intervention paths to influence behavior.
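To make the core idea concrete, here is a minimal sketch of a differentiable discrete choice model: a binary logit whose utility weight is recovered from observed choices by gradient descent on the log-likelihood. This is an illustration of the general technique, not the paper's exact architecture; the feature ("price difference"), the true weight, and the training hyperparameters are all invented for the example.

```python
import math
import random

random.seed(0)
TRUE_W = -1.5  # hypothetical true sensitivity to price difference

# Synthetic observations: feature x = price of alt 1 minus price of alt 0,
# choice y sampled from the true logit model P(choose alt 1) = sigmoid(TRUE_W * x).
data = []
for _ in range(2000):
    x = random.uniform(-2.0, 2.0)
    p1 = 1.0 / (1.0 + math.exp(-TRUE_W * x))
    y = 1 if random.random() < p1 else 0
    data.append((x, y))

# Learn the utility weight by ascending the log-likelihood gradient.
# Because the choice probabilities are differentiable in w, no
# hand-crafted utility specification is needed.
w, lr = 0.0, 0.5
for _ in range(300):
    grad = 0.0
    for x, y in data:
        p1 = 1.0 / (1.0 + math.exp(-w * x))
        grad += (y - p1) * x  # d log-likelihood / d w for one observation
    w += lr * grad / len(data)

print(f"recovered weight: {w:.2f} (true weight: {TRUE_W})")
```

The same differentiability that drives the fit also yields sensitivity information for free: the gradient of a choice probability with respect to an input feature tells us how strongly an intervention on that feature would shift behavior, which is the basis for computing intervention paths.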
For LLM-based multi-agent systems, Diff-DCM offers a potential way to model agent behavior from observed data, improving agents' realism and making their decision-making processes easier to interpret. Its automated, data-driven nature aligns well with how LLMs are built, making it suitable for creating complex agent behaviors without extensive manual design. The differentiability also opens avenues for optimizing agent interactions and learning within multi-agent simulations.