Can LLMs evolve via population-based methods?
NATURE-INSPIRED POPULATION-BASED EVOLUTION OF LARGE LANGUAGE MODELS
This paper introduces GENOME(+), a framework inspired by biological evolution for adapting and improving Large Language Models (LLMs). It treats LLM weights as "genes" and applies evolutionary operations: crossover (combining the weights of different LLMs), mutation (introducing small random changes to the weights), and selection (retaining the best-performing LLMs). GENOME+ extends this with succession (learning from both the best and worst performers) and ensemble (combining the outputs of multiple LLMs) to further improve performance.

Experiments show that GENOME+ outperforms other LLM merging and adaptation methods, particularly on reasoning tasks, and generalizes effectively to new tasks with few or no samples. It is also computationally efficient, requiring only a single GPU, and scales well to larger populations of LLMs.

For multi-agent systems, the crossover mechanism offers a means of knowledge transfer between agents, and the ensemble mechanism a means of collective decision-making among agents. This work thus provides a potential foundation for evolving multi-agent systems of LLMs by merging, mutating, and selecting agent "genes" (weights), potentially yielding emergent capabilities and adaptable multi-agent systems.
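The evolutionary operations described above can be sketched in miniature. This is a toy illustration, not the paper's implementation: models are represented as flat weight lists, and the function names (`crossover`, `mutate`, `select`, `ensemble`), the blending coefficient `alpha`, and the toy fitness objective are all my own illustrative choices.

```python
import random

def crossover(parent_a, parent_b, alpha=0.5):
    """Crossover: blend two parents' weight vectors by linear interpolation."""
    return [alpha * a + (1 - alpha) * b for a, b in zip(parent_a, parent_b)]

def mutate(weights, sigma=0.01, rate=0.1, rng=random):
    """Mutation: add small Gaussian noise to a random subset of weights."""
    return [w + rng.gauss(0.0, sigma) if rng.random() < rate else w
            for w in weights]

def select(population, fitness, k):
    """Selection: keep the k best-scoring individuals."""
    return sorted(population, key=fitness, reverse=True)[:k]

def ensemble(logit_sets):
    """Ensemble: average the output logits of several models."""
    return [sum(vals) / len(vals) for vals in zip(*logit_sets)]

# Toy population: each "model" is a flat list of weights.
population = [[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]]
fitness = lambda w: -abs(sum(w) / len(w) - 2.0)  # toy objective: mean near 2
parents = select(population, fitness, k=2)
child = mutate(crossover(parents[0], parents[1]))
```

In the actual framework these operations act on the parameter tensors of full LLMs rather than toy vectors, but the population loop (select parents, recombine, perturb, re-evaluate) has the same shape.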