Can LLMs reason better with iterative KG retrieval?
KG-IRAG: A Knowledge Graph-Based Iterative Retrieval-Augmented Generation Framework for Temporal Reasoning
This paper introduces KG-IRAG, a framework that enhances Large Language Models (LLMs) by enabling them to retrieve and reason over knowledge graphs (KGs) iteratively. This iterative design addresses limitations of current retrieval-augmented generation (RAG) methods, which struggle with multi-step reasoning and temporal queries. KG-IRAG uses two LLMs: one formulates an initial retrieval plan and reasoning prompts, while the other iteratively retrieves information from the KG, evaluates whether the gathered evidence is sufficient, and generates the final answer.
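The loop below is a minimal sketch of that iterative retrieve-evaluate cycle, assuming generic helpers (`call_llm`, `query_kg`) and illustrative prompts; it is not the paper's actual implementation, only an outline of the planner/retriever split and the sufficiency check described above.

```python
# Hypothetical sketch of a KG-IRAG-style iterative loop.
# call_llm and query_kg are assumed placeholders, not the paper's API.

def call_llm(role: str, prompt: str) -> str:
    """Placeholder for an LLM call (e.g., any chat-completion backend)."""
    raise NotImplementedError

def query_kg(plan_step: str) -> list[str]:
    """Placeholder: execute one retrieval step against the knowledge graph."""
    raise NotImplementedError

def kg_irag_answer(question: str, max_steps: int = 5) -> str:
    # Agent 1: plan the initial retrieval steps and reasoning prompt.
    plan = call_llm(
        role="planner",
        prompt=f"Break this temporal question into KG retrieval steps:\n{question}",
    )

    evidence: list[str] = []
    for _ in range(max_steps):
        # Agent 2: retrieve the next batch of facts from the KG.
        evidence.extend(query_kg(plan))

        # Agent 2: judge whether the accumulated evidence is sufficient.
        verdict = call_llm(
            role="evaluator",
            prompt=(
                f"Question: {question}\nEvidence so far: {evidence}\n"
                "Reply SUFFICIENT, or describe what is still missing."
            ),
        )
        if "SUFFICIENT" in verdict:
            break
        # Refine the retrieval plan toward the missing information.
        plan = verdict

    # Final answer generated from the accumulated KG evidence.
    return call_llm(
        role="answerer",
        prompt=f"Question: {question}\nEvidence: {evidence}\nAnswer concisely.",
    )
```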
Key points for multi-agent LLM systems: KG-IRAG demonstrates a multi-agent approach in which distinct LLMs collaborate on a complex reasoning task. One agent specializes in planning and prompting, while the other handles retrieval and evaluation. This division of labor and the iterative refinement process are relevant to building LLM-based multi-agent applications that require external knowledge and multi-step reasoning. Iterative retrieval is crucial for dynamic, real-world scenarios where the relevant information is not apparent up front. Furthermore, the proposed datasets, weatherQA-Irish, weatherQA-Sydney, and trafficQA-TFNSW, highlight the need for benchmarks focused on temporal reasoning in multi-agent scenarios.