Can single-LLM prompts mimic multi-agent systems?
Agent-Centric Projection of Prompting Techniques and Implications for Synthetic Training Data for Large Language Models
This paper proposes a framework for understanding the relationship between prompting techniques in Large Language Models (LLMs) and multi-agent systems. It argues that complex prompting techniques, especially those involving branching or multi-path interactions (non-linear contexts), are functionally equivalent to multi-agent systems. This equivalence suggests that research findings from either area can be applied to the other. It further suggests that simulating multi-agent interactions within a single LLM (e.g., through dialogue transcripts) can generate valuable synthetic training data, improving LLM performance in both multi-agent and complex prompting scenarios. The paper also differentiates between prompt engineering (optimizing prompts for a given task) and instruction engineering (modifying the task itself to be more LLM-friendly).
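The idea of simulating multi-agent interactions inside a single LLM can be sketched as serializing an agent exchange into one dialogue-transcript prompt that the model then continues. A minimal sketch follows; the agent names ("Solver", "Critic") and the helper function are illustrative assumptions, not drawn from the paper.

```python
def format_transcript(turns, next_agent):
    """Flatten (agent, message) pairs into a single prompt string so that
    one LLM can continue a simulated multi-agent conversation."""
    lines = [f"{agent}: {message}" for agent, message in turns]
    # End with the next agent's name so the model speaks in that role.
    lines.append(f"{next_agent}:")
    return "\n".join(lines)

# Hypothetical two-agent exchange projected onto a single prompt.
turns = [
    ("Solver", "I propose answer A because of X."),
    ("Critic", "Have you considered edge case Y?"),
    ("Solver", "Good point; revising to answer B."),
]

prompt = format_transcript(turns, next_agent="Critic")
```

Under the paper's framing, transcripts like this could double as synthetic training data: the same serialized format records a multi-agent interaction and a complex non-linear prompting episode at once.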