Can LLMs detect media bias automatically?
Unraveling Media Perspectives: A Comprehensive Methodology Combining Large Language Models, Topic Modeling, Sentiment Analysis, and Ontology Learning to Analyse Media Bias
This paper proposes a new methodology for analyzing media bias in political news. It combines several NLP techniques (topic modeling, sentiment analysis, and LLM-based ontology learning) to examine different forms of bias, such as event selection, labeling/word choice, and commission/omission, across multiple news sources. The methodology aims to be scalable and minimally biased itself, reducing reliance on human labeling.

Two aspects are relevant to LLM-based multi-agent systems. First, LLMs are used for ontology learning: extracting structured knowledge from articles so that the presentation of the same information can be compared across outlets. The combination of sentiment analysis with topic modeling in this multi-source framework could potentially be extended to model the "perspectives" or "strategies" of different LLM agents interacting within a simulated news environment. Second, identifying bias by comparing multiple sources, rather than relying on a single "ground truth," offers an interesting approach to evaluating the trustworthiness and potential biases of individual LLMs in a multi-agent context.
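The core multi-source comparison idea can be sketched as follows: bucket articles by topic, score sentiment per (outlet, topic) pair, and contrast outlets. This is a minimal, hypothetical illustration, not the paper's implementation: simple keyword matching stands in for a real topic model, a tiny hand-made lexicon stands in for a sentiment model, and the corpus is invented.

```python
# Hypothetical sketch of multi-source bias comparison. Keyword matching
# replaces topic modeling, a toy lexicon replaces sentiment analysis,
# and the article corpus is invented for illustration.
from collections import defaultdict

articles = [
    ("OutletA", "government passes landmark climate bill despite protest"),
    ("OutletA", "economy grows as markets rally on strong jobs report"),
    ("OutletB", "controversial climate bill forced through amid public anger"),
    ("OutletB", "markets rally but analysts warn of fragile economy"),
]

# Stand-in for topic modeling: pick the topic with most keyword overlap.
TOPICS = {
    "climate": {"climate", "bill"},
    "economy": {"economy", "markets", "jobs"},
}

def topic_of(text: str) -> str:
    words = set(text.split())
    return max(TOPICS, key=lambda t: len(TOPICS[t] & words))

# Stand-in for sentiment analysis: positive minus negative lexicon hits.
POS = {"landmark", "grows", "strong", "rally"}
NEG = {"protest", "controversial", "forced", "anger", "warn", "fragile"}

def sentiment(text: str) -> int:
    words = text.split()
    return sum(w in POS for w in words) - sum(w in NEG for w in words)

# Aggregate sentiment per (outlet, topic): diverging scores on the same
# topic suggest labeling/word-choice bias; a topic missing from one
# outlet entirely would suggest event-selection bias.
scores = defaultdict(list)
for source, text in articles:
    scores[(source, topic_of(text))].append(sentiment(text))

avg_score = {key: sum(v) / len(v) for key, v in scores.items()}
for key in sorted(avg_score):
    print(key, avg_score[key])
```

Here both outlets cover both topics, but their average sentiment diverges sharply on "climate", which is the kind of cross-source contrast the methodology uses in place of a single ground-truth label.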