How effective are LLMs at persuasion and resisting it?
Persuade Me if You Can: A Framework for Evaluating Persuasion Effectiveness and Susceptibility Among Large Language Models
This paper introduces PMIYC, a framework for automatically evaluating how effective and susceptible Large Language Models (LLMs) are to persuasion in conversations. It simulates multi-agent interactions where one LLM tries to persuade another to agree with a claim. Key findings relevant to LLM-based multi-agent systems include: LLMs' persuasiveness and susceptibility vary by context (subjective vs. misinformation) and conversation length (single-turn vs. multi-turn); larger LLMs tend to be more persuasive; multi-turn conversations make LLMs more susceptible to persuasion, including misinformation; and LLMs are generally consistent in their initial opinions and reliable in self-reporting their agreement levels, which facilitates automated evaluation.