Can LLM emotions improve collective intelligence?
Enhancing Collective Intelligence in Large Language Models Through Emotional Integration
This research explores how incorporating diverse emotions into Large Language Models (LLMs) can enhance their collective intelligence, analogous to the "wisdom of crowds" effect in humans. By fine-tuning an LLM on an emotion dataset, the study examines how different emotional contexts and social attributes affect the LLM's accuracy on a factual task. Key findings are that social context is highly influential, that emotion integration introduces complex interactions between social and emotional cues, and that there is a trade-off between raw accuracy and the LLM's ability to process nuanced emotional information. These results suggest potential for creating more emotionally aware LLMs, but they highlight the challenge of balancing emotional depth with factual precision, especially in multi-agent systems where LLMs interact and can influence one another's responses.
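The "wisdom of crowds" mechanism the abstract appeals to can be illustrated with a minimal simulation, entirely separate from the paper's own code: agents with differing individual error rates (a hypothetical stand-in for LLM instances prompted with different emotional personas) answer a binary factual question, and their majority vote is compared against a single agent. The persona error rates below are invented for illustration only.

```python
# Illustrative sketch, NOT the paper's method: majority voting over
# agents with heterogeneous error rates, as a toy model of the
# "wisdom of crowds" effect among diverse LLM instances.
import random
from collections import Counter

def agent_answer(truth, error_rate, rng):
    """Return the true answer with probability 1 - error_rate, else flip it."""
    return truth if rng.random() > error_rate else 1 - truth

def crowd_answer(truth, error_rates, rng):
    """Collect one answer per agent and take a majority vote."""
    votes = [agent_answer(truth, e, rng) for e in error_rates]
    return Counter(votes).most_common(1)[0][0]

def accuracy(answer_fn, trials=10_000, seed=0):
    """Empirical accuracy of answer_fn on a binary question (truth = 1)."""
    rng = random.Random(seed)
    return sum(answer_fn(1, rng) == 1 for _ in range(trials)) / trials

# Hypothetical "emotional personas", each inducing a different error rate.
personas = [0.30, 0.35, 0.25, 0.40, 0.20]

solo = accuracy(lambda t, rng: agent_answer(t, personas[0], rng))
crowd = accuracy(lambda t, rng: crowd_answer(t, personas, rng))
print(f"single agent: {solo:.3f}, crowd of {len(personas)}: {crowd:.3f}")
```

As long as the agents' errors are independent and each is better than chance, the aggregated vote is more accurate than a typical individual, which is the baseline effect the paper probes when emotional diversity is introduced.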