Can LLMs improve crowdsourced fact-checking?
Assessing the Potential of Generative Agents in Crowdsourced Fact-Checking
April 29, 2025
https://arxiv.org/pdf/2504.19940

This paper investigates using LLM-powered generative agents as simulated crowds for fact-checking. The agents evaluate the truthfulness of news statements, mimicking a crowdsourced fact-checking process.
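The setup described above lends itself to a compact sketch: persona-conditioned agents each label a statement independently, and the labels are pooled. Below is a minimal Python sketch of that idea, not the paper's exact protocol: the persona fields, prompt wording, `ask_llm` hook, and majority-vote aggregation are all illustrative assumptions, and only the binary (true/false) setting is shown.

```python
from collections import Counter
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class AgentPersona:
    """Hypothetical demographic profile used to diversify the simulated crowd."""
    age: int
    education: str
    political_leaning: str

def build_prompt(persona: AgentPersona, statement: str) -> str:
    # Persona-conditioned instruction; the exact wording here is illustrative.
    return (
        f"You are a {persona.age}-year-old crowd worker with {persona.education} "
        f"education and {persona.political_leaning} political views. "
        f"Answer with exactly one word, 'true' or 'false':\n{statement}"
    )

def crowd_verdict(
    statement: str,
    personas: List[AgentPersona],
    ask_llm: Callable[[str], str],
) -> str:
    """Each agent judges the statement independently; a majority vote pools
    the labels, mirroring how crowdsourced judgments are typically aggregated."""
    votes = [ask_llm(build_prompt(p, statement)).strip().lower() for p in personas]
    return Counter(votes).most_common(1)[0][0]

if __name__ == "__main__":
    crowd = [
        AgentPersona(34, "college", "liberal"),
        AgentPersona(58, "high-school", "conservative"),
        AgentPersona(45, "graduate", "moderate"),
    ]
    # Stub in place of a real LLM call (e.g., a chat-completion request).
    verdict = crowd_verdict("The Eiffel Tower is in Berlin.", crowd, lambda _: "false")
    print(verdict)  # -> "false"
```

Passing the LLM call in as a function keeps the aggregation logic testable without an API key; a real run would replace the lambda with a provider call.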
The key findings relevant to LLM-based multi-agent systems are that, compared to human crowds, the agent crowds:
- Demonstrate higher accuracy in classifying truthfulness, especially in binary (true/false) scenarios.
- Exhibit greater internal consistency, suggesting more stable and coherent judgment processes.
- Rely more heavily on informative criteria such as accuracy and precision, indicating a more structured evaluation strategy.
- Show less susceptibility to demographic and ideological biases, highlighting their potential for impartial assessment.