How can diverse prompts improve small LLM reasoning?
Dipper: Diversity in Prompts for Producing Large Language Model Ensembles in Reasoning tasks
December 23, 2024
https://arxiv.org/pdf/2412.15238

This paper introduces Dipper, a method for building ensembles of large language models (LLMs) from diverse prompts rather than multiple model instances: a single LLM is queried with varied prompts and the responses are combined. This improves performance on reasoning tasks, particularly for smaller LLMs under resource constraints.
Key points for LLM-based multi-agent systems:
- Prompt Engineering for Diversity: Dipper elicits a range of reasoning pathways within the ensemble by varying prompts alone, making prompt engineering a key element in multi-agent LLM system design.
- Homogeneous Agents with Diverse Behaviors: Dipper demonstrates that even with identical LLMs (homogeneous agents), varied behaviors can be elicited through prompt diversity, suggesting a new approach to specialization within a multi-agent system.
- Optimization of Prompt Selection: The paper introduces a method for optimizing prompt selection based on fidelity (performance on a development set) and diversity (semantic difference between prompts), which is crucial for effective multi-agent collaboration.
- Response Aggregation: Dipper explores different methods for combining agent outputs, including majority voting and using another LLM as an aggregator, highlighting the importance of aggregation strategies in multi-agent systems.
- Synergy with other Prompting Techniques: Dipper's compatibility with techniques like Reflexion underscores its potential for integration within broader multi-agent frameworks and interaction paradigms.
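The core idea above (one model, many prompts, one vote) can be sketched as follows. This is a minimal illustration, not the paper's implementation: `query_fn` is a hypothetical stand-in for a call to a single small LLM, and a stub simulates model answers so the sketch runs end to end.

```python
# Minimal sketch of a prompt-diversity ensemble with majority voting.
from collections import Counter

def majority_vote(answers):
    """Return the most common answer across ensemble members."""
    return Counter(answers).most_common(1)[0][0]

def dipper_ensemble(question, prompts, query_fn):
    """Query one LLM once per diverse prompt, then aggregate by vote."""
    answers = [query_fn(p.format(question=question)) for p in prompts]
    return majority_vote(answers)

# Hypothetical diverse prompt templates (illustrative only).
prompts = [
    "Think step by step. {question}",
    "Answer as a careful mathematician. {question}",
    "List the facts first, then conclude. {question}",
]

def stub_llm(prompt):
    # Stand-in for a real model call; one prompt "disagrees" on purpose.
    return "5" if "mathematician" in prompt else "4"

print(dipper_ensemble("What is 2 + 2?", prompts, stub_llm))  # "4" wins 2-1
```

The aggregation step is pluggable: replacing `majority_vote` with a call to a judge LLM gives the paper's LLM-as-aggregator variant.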
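The fidelity-diversity trade-off in prompt selection can also be sketched. The snippet below is an illustrative greedy selector, not the paper's exact optimizer: the word-level Jaccard distance stands in for a semantic distance between prompts, and the `fidelity` scores would in practice come from evaluating each prompt on a development set.

```python
# Greedy prompt-subset selection balancing fidelity and diversity.

def jaccard_distance(a: str, b: str) -> float:
    """1 - |A intersect B| / |A union B| over word sets; higher = more different."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return 1.0 - len(sa & sb) / len(sa | sb)

def select_prompts(candidates, fidelity, k, lam=0.5):
    """Greedily pick k prompts maximizing fidelity + lam * diversity,
    where diversity is distance to the closest already-selected prompt."""
    selected, remaining = [], list(candidates)
    while remaining and len(selected) < k:
        def score(p):
            div = (min(jaccard_distance(p, q) for q in selected)
                   if selected else 0.0)
            return fidelity[p] + lam * div
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected

# Hypothetical candidates with made-up dev-set accuracies.
candidates = [
    "think step by step",
    "explain your reasoning step by step",
    "answer as a careful mathematician",
]
fidelity = dict(zip(candidates, [0.80, 0.75, 0.60]))
print(select_prompts(candidates, fidelity, k=2))
```

With these numbers the second pick is the mathematician prompt despite its lower fidelity, because its wording shares nothing with the first pick: the diversity bonus outweighs the fidelity gap.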