How can LLMs improve cross-document entity linking?
Cross-Document Contextual Coreference Resolution in Knowledge Graphs
April 9, 2025
https://arxiv.org/pdf/2504.05767

This paper introduces a new method for resolving coreferences (identifying when different words refer to the same entity) across multiple documents, specifically within the context of knowledge graphs. It uses a dynamic linking mechanism that connects entities in the knowledge graph to their mentions in text. By combining contextual embeddings from LLMs such as Llama-3 and GPT-3.5 with graph-based inference, it captures relationships between entities to improve coreference resolution accuracy. Evaluation shows significant improvements over existing methods, especially in complex scenarios.
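The pipeline described above can be sketched in miniature. The following is not the paper's implementation; it assumes pre-computed mention embeddings (stand-ins for LLM outputs) and uses a simple similarity-graph-plus-connected-components scheme to stand in for the graph-based inference step. All mention names, vectors, and the threshold are invented for illustration.

```python
from itertools import combinations
from math import sqrt

# Hypothetical contextual embeddings for mentions from two documents.
# In the paper these would come from an LLM such as Llama-3 or GPT-3.5;
# here they are hand-picked 3-d vectors so similar mentions lie close.
mentions = {
    "doc1:Apple Inc.":  [0.90, 0.10, 0.00],
    "doc1:the company": [0.85, 0.15, 0.05],
    "doc2:Apple":       [0.88, 0.12, 0.02],
    "doc2:Tim Cook":    [0.10, 0.90, 0.30],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b)))

def coref_clusters(mentions, threshold=0.95):
    """Link mention pairs whose embeddings are similar, then take
    connected components of the resulting graph as coreference chains."""
    # Build an undirected graph: edge when similarity exceeds threshold.
    adj = {m: set() for m in mentions}
    for a, b in combinations(mentions, 2):
        if cosine(mentions[a], mentions[b]) >= threshold:
            adj[a].add(b)
            adj[b].add(a)
    # Connected components via depth-first search.
    seen, clusters = set(), []
    for start in mentions:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            node = stack.pop()
            if node in seen:
                continue
            seen.add(node)
            comp.add(node)
            stack.extend(adj[node] - seen)
        clusters.append(comp)
    return clusters
```

With the toy vectors above, the three "Apple" mentions from both documents fall into one cluster and "Tim Cook" remains a singleton, which is the cross-document behaviour the paper targets (here achieved with a far cruder mechanism).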
Key points for LLM-based multi-agent systems:
- Contextual Embeddings from LLMs: The use of LLMs to generate contextual embeddings is crucial for capturing the nuances of language and relationships between entities. This can be applied to multi-agent communication where understanding context is vital.
- Graph-Based Inference: Modeling the relationships between entities as a graph and using inference techniques can be applied to multi-agent systems to represent agent interactions and infer shared goals or collaborative strategies.
- Dynamic Linking: The dynamic linking mechanism, associating textual mentions with entities in a knowledge graph, could be adapted to dynamically link agent communication with shared world models or ontologies.
- Cross-Document Coreference Resolution: The ability to resolve coreferences across multiple documents is analogous to agents needing to understand and synthesize information from diverse sources to build a coherent picture of their environment.
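The dynamic linking idea from the third bullet could be sketched as follows: a mention embedding is matched against entity embeddings in a knowledge graph, and if nothing is similar enough, a new entity node is created. This is a hypothetical sketch, not the paper's mechanism; the entity ids, vectors, and threshold are all invented.

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b)))

# Toy knowledge graph: entity id -> embedding. Ids and vectors are
# invented for illustration only.
kg = {
    "Q312_Apple_Inc": [0.90, 0.10, 0.00],
    "Q2_Tim_Cook":    [0.10, 0.90, 0.30],
}

def link_mention(mention_vec, kg, threshold=0.9):
    """Dynamically link a mention to its most similar KG entity, or
    register a new entity node when no entity is similar enough."""
    best_id, best_sim = None, -1.0
    for entity_id, vec in kg.items():
        sim = cosine(mention_vec, vec)
        if sim > best_sim:
            best_id, best_sim = entity_id, sim
    if best_sim >= threshold:
        return best_id
    # No good match: grow the graph with a fresh entity node.
    new_id = f"NEW_{len(kg)}"
    kg[new_id] = mention_vec
    return new_id
```

In a multi-agent setting, the same pattern could let agents resolve each other's references against a shared ontology, falling back to minting new shared symbols when a reference is genuinely novel.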