How do LLMs form factions?
Large Language Models can Achieve Social Balance
October 8, 2024
https://arxiv.org/pdf/2410.04054

This research studies whether Large Language Models (LLMs) can achieve "social balance": a state in which a group divides into factions with positive relationships within each faction and negative relationships between factions.
- All tested LLMs (Llama 3 70B, Llama 3 8B, and Mistral) achieved social balance after repeated interactions, but differed in how often they reached it, which kind of balance emerged, and how stable it was.
- Model size didn't directly translate to better balance.
- LLM behavior changed with group size, suggesting the models apply different internal logic when juggling multiple relationships at once.
- Analysis of the words LLMs used offered some insight into their decision-making, though much remains unclear.
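For readers unfamiliar with the concept, the balance condition above can be sketched as a check on a signed relationship network. This is a generic illustration of structural balance (Harary's two-faction formulation), not code from the paper: a network is balanced when its nodes can be two-colored so that positive ties connect same-colored nodes and negative ties connect differently colored ones.

```python
# Illustrative sketch (not from the paper): test whether a signed network
# is structurally balanced, i.e. splits into two factions with positive
# ties inside each faction and negative ties between them.

def is_balanced(signs):
    """signs[(i, j)] is +1 or -1 for each pair of connected nodes."""
    nodes = {n for pair in signs for n in pair}
    faction = {}  # node -> 0 or 1
    for start in nodes:
        if start in faction:
            continue
        faction[start] = 0
        stack = [start]
        while stack:
            u = stack.pop()
            for v in nodes - {u}:
                s = signs.get((u, v)) or signs.get((v, u))
                if s is None:
                    continue  # no tie between u and v
                # positive tie -> same faction, negative tie -> opposite
                want = faction[u] if s > 0 else 1 - faction[u]
                if v not in faction:
                    faction[v] = want
                    stack.append(v)
                elif faction[v] != want:
                    return False  # contradictory assignment: unbalanced
    return True

# Factions {A, B} vs {C}: positive within, negative between -> balanced
print(is_balanced({("A", "B"): 1, ("A", "C"): -1, ("B", "C"): -1}))  # True
# One negative tie inside an otherwise friendly triad -> unbalanced
print(is_balanced({("A", "B"): 1, ("A", "C"): 1, ("B", "C"): -1}))   # False
```

In the paper's setting, each LLM agent repeatedly updates the signs of its relationships; the question is whether those updates converge to a configuration that passes a check like this one.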