Can LLMs simulate persuasive meat-reduction dialogues?
Simulating Persuasive Dialogues on Meat Reduction with Generative Agents
April 8, 2025
https://arxiv.org/pdf/2504.04872

This research explores using Large Language Model (LLM)-powered generative agents to simulate conversations aimed at persuading individuals to reduce their meat consumption. The goal is to identify effective communication strategies while minimizing social costs, eventually for use in real-world interventions.
Key points for LLM-based multi-agent systems:
- LLMs can simulate survey responses and dialogue: The study shows LLMs can generate plausible conversation turns and fill out psychological questionnaires, mimicking human participants in persuasive dialogues.
- Model size impacts reliability and validity: Larger models (70B parameters) yielded more reliable and valid results than smaller models (8B and 3B parameters).
- Simulated dialogues show promising persuasive effects: Simulated conversations showed trends consistent with human behavior, including initial resistance to persuasion followed by increasing intention change.
- Uniformity in some simulated responses requires attention: Some psychological constructs showed less variability than expected, potentially limiting generalizability and requiring further refinement of the simulation setup.
- Ethical considerations are crucial: The generalizability of these persuasive techniques raises ethical concerns about potential misuse for manipulative or misleading purposes. Responsible development and transparent discussion are emphasized.
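The setup the paper describes, agents that alternate persuasive dialogue turns and then fill out a psychological questionnaire, can be sketched roughly as below. This is a minimal illustration, not the authors' code: `llm_generate` is a placeholder for a real model call, and the questionnaire scoring is a stand-in for parsing a model's numeric Likert answer.

```python
import random

def llm_generate(prompt: str) -> str:
    # Placeholder for an actual LLM call (the study used models from
    # 3B to 70B parameters); a real setup would query the model here.
    return f"[simulated reply to: {prompt[:40]}...]"

def run_dialogue(turns: int = 6) -> list[tuple[str, str]]:
    """Alternate persuader and persuadee turns, accumulating history."""
    history: list[tuple[str, str]] = []
    for t in range(turns):
        role = "persuader" if t % 2 == 0 else "persuadee"
        prompt = f"You are the {role}. Conversation so far: {history}"
        history.append((role, llm_generate(prompt)))
    return history

def intention_score(history: list, rng: random.Random) -> int:
    """Return a 1-7 Likert score for intention to reduce meat consumption.

    Assumption for illustration only: intention rises modestly with
    dialogue length, mirroring the 'initial resistance, then increasing
    intention change' trend reported in the paper. A real run would
    instead prompt the persuadee agent with the questionnaire item and
    parse its numeric answer.
    """
    base = 3 + min(len(history) // 2, 3)
    return max(1, min(7, base + rng.choice([-1, 0, 1])))

history = run_dialogue(6)
score = intention_score(history, random.Random(0))
```

In the actual study, the questionnaire responses are what make reliability and validity measurable across model sizes: the same items can be administered repeatedly and compared against human response distributions.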