Can language games unlock ASI via multi-agent LLMs?
Language Games as the Pathway to Artificial Superhuman Intelligence
This paper proposes "language games" as a method for pushing Large Language Models (LLMs) towards Artificial Superhuman Intelligence (ASI). The core claim is that current LLM training is stuck in a "data reproduction trap": models largely recombine existing human knowledge rather than produce genuinely new capabilities. Language games, inspired by Wittgenstein's philosophy, offer a way out by introducing dynamic multi-agent interactions built on three key mechanisms:
- Role fluidity: Agents switch roles (e.g., teacher, student) to diversify training data.
- Reward variety: Multiple success criteria (creativity, ethics, etc.) encourage exploring beyond current capabilities.
- Rule plasticity: Changing the interaction rules forces adaptation and drives further novelty.
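The three mechanisms above can be pictured as a single interaction loop. The sketch below is purely illustrative and assumes nothing from the paper beyond the bullet points: all names (`Agent`, `play_round`, `REWARDS`, `mutate_rules`) are hypothetical, and the `speak` method is a stand-in for an actual LLM call.

```python
import random

ROLES = ["teacher", "student"]

# Reward variety: several independent success criteria scored per utterance
# (here, toy proxies for creativity and brevity; the paper names criteria
# like creativity and ethics without specifying metrics).
REWARDS = {
    "creativity": lambda text: len(set(text.split())) / max(len(text.split()), 1),
    "brevity": lambda text: 1.0 / (1 + len(text.split())),
}

class Agent:
    def __init__(self, name):
        self.name = name
        self.role = "student"

    def speak(self, prompt, rules):
        # Placeholder for an LLM call; truncation stands in for rule-bound generation.
        return f"{self.name} ({self.role}): {prompt[: rules['max_words'] * 8]}"

def play_round(agents, prompt, rules):
    # Role fluidity: roles are reassigned every round.
    for agent in agents:
        agent.role = random.choice(ROLES)
    transcript, scores = [], {}
    for agent in agents:
        reply = agent.speak(prompt, rules)
        transcript.append(reply)
        scores[agent.name] = {k: f(reply) for k, f in REWARDS.items()}
    return transcript, scores

def mutate_rules(rules):
    # Rule plasticity: interaction rules drift between rounds.
    rules = dict(rules)
    rules["max_words"] = max(1, rules["max_words"] + random.choice([-1, 1]))
    return rules

rules = {"max_words": 8}
agents = [Agent("A"), Agent("B")]
for _ in range(3):
    transcript, scores = play_round(agents, "describe a new concept", rules)
    rules = mutate_rules(rules)
```

In a real system each `speak` call would be an LLM generation and the reward scores would feed a training signal; the loop structure (reassign roles, interact, score on multiple criteria, mutate rules) is what the three mechanisms jointly describe.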
Scaled globally with human participation, these language games create a continuous feedback loop that accelerates LLM evolution towards ASI through diverse interactions and collective intelligence. The paper highlights potential benefits such as cross-cultural concept fusion and distributed proof markets, while also addressing challenges like ethical concerns and potential manipulation. It contrasts this language-centric approach with embodied AI, suggesting the two could be complementary rather than competing paths to ASI.