How can LLMs anticipate actions in multi-agent scenarios?
HiMemFormer: Hierarchical Memory-Aware Transformer for Multi-Agent Action Anticipation
This paper introduces HiMemFormer, a transformer model for anticipating agents' future actions in multi-agent scenarios. Its hierarchical architecture combines a long-term memory shared across all agents with short-term, agent-specific context, letting the model learn how each agent weighs historical versus recent information and yielding more accurate action anticipation. This is particularly relevant to LLM-based multi-agent systems: it offers a mechanism for integrating a global context with individual agent histories, mirroring real-world multi-agent interactions. The hierarchical memory encoding and decoding keeps long sequences manageable while providing flexible, agent-specific context awareness, which can benefit LLM agents operating in complex environments.
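To make the hierarchy concrete, below is a minimal PyTorch sketch of the core idea, not the authors' implementation: a shared encoder turns the long-term joint history into a global memory, and an agent-side decoder cross-attends each agent's short-term context to that memory before an anticipation head. All names (`GlobalMemoryEncoder`, `AgentDecoder`, `HiMemFormerSketch`), dimensions, and the choice to share one decoder across agents are illustrative assumptions.

```python
# Illustrative sketch of the hierarchical-memory idea, not the paper's exact design.
import torch
import torch.nn as nn

class GlobalMemoryEncoder(nn.Module):
    """Encodes the long-term history shared by all agents into a global memory."""
    def __init__(self, d_model=128, nhead=4, num_layers=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)

    def forward(self, shared_history):           # (B, T_long, d_model)
        return self.encoder(shared_history)      # (B, T_long, d_model)

class AgentDecoder(nn.Module):
    """Cross-attends an agent's short-term context to the global memory."""
    def __init__(self, d_model=128, nhead=4, num_layers=2):
        super().__init__()
        layer = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers)

    def forward(self, agent_context, global_memory):  # (B, T_short, d), (B, T_long, d)
        return self.decoder(agent_context, global_memory)

class HiMemFormerSketch(nn.Module):
    """Hierarchical memory: shared long-term encoding, agent-specific decoding."""
    def __init__(self, num_actions, d_model=128):
        super().__init__()
        self.global_encoder = GlobalMemoryEncoder(d_model)
        self.agent_decoder = AgentDecoder(d_model)
        self.head = nn.Linear(d_model, num_actions)

    def forward(self, shared_history, agent_contexts):
        # shared_history: (B, T_long, d_model); agent_contexts: list of (B, T_short, d_model)
        memory = self.global_encoder(shared_history)
        # Each agent decodes its own short-term context against the shared memory,
        # so agents can weigh global vs. local information differently; the last
        # decoded position feeds the anticipation head.
        return [self.head(self.agent_decoder(ctx, memory)[:, -1]) for ctx in agent_contexts]

# Usage: anticipate the next action for each of 3 agents.
model = HiMemFormerSketch(num_actions=10)
shared = torch.randn(2, 64, 128)                      # long-term shared history
agents = [torch.randn(2, 8, 128) for _ in range(3)]   # short-term per-agent context
logits = model(shared, agents)                        # 3 tensors of shape (2, 10)
```

Because every agent decodes its own recent context against the same shared memory, each agent's cross-attention can learn a different balance between global history and local observations, which is the paper's central mechanism in miniature.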