Can LLMs reason better with charts than with text data?
LLM Knows Geometry Better than Algebra: Numerical Understanding of LLM-Based Agents in A Trading Arena
This research explores how Large Language Models (LLMs) perform in a simulated stock-trading environment involving multiple AI agents. The study finds that LLMs handle geometric reasoning over visual data (charts, graphs) better than algebraic reasoning over textual data (raw numbers, tables). Key points for LLM-based multi-agent systems: visual data significantly improves the agents' trading performance; a "reflection" module that lets agents learn from their past trades further improves results, especially when paired with visual data; and these systems hold up well against traditional trading algorithms when backtested on real-world NASDAQ data. The authors conclude that for complex numerical tasks in multi-agent systems, visual representations and learning from feedback are crucial to LLM effectiveness.
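To make the "reflection" idea concrete, here is a minimal sketch of how such a module could work: the agent stores past trade outcomes and distills them into a short text note that is prepended to its next decision prompt. This is an illustrative assumption, not the paper's actual implementation; the `ReflectiveAgent` class, its method names, and the prompt format are all hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class TradeRecord:
    action: str   # e.g. "buy" or "sell" (hypothetical action labels)
    pnl: float    # realized profit or loss from the trade

@dataclass
class ReflectiveAgent:
    """Toy reflection module: summarize past trades into a note
    that conditions the agent's next decision prompt."""
    history: list = field(default_factory=list)

    def record(self, action: str, pnl: float) -> None:
        # Store the outcome of a completed trade.
        self.history.append(TradeRecord(action, pnl))

    def reflect(self) -> str:
        # Compress the trade history into a short natural-language summary.
        if not self.history:
            return "No past trades to reflect on."
        wins = sum(1 for t in self.history if t.pnl > 0)
        losses = len(self.history) - wins
        net = sum(t.pnl for t in self.history)
        return (f"Reflection: {wins} profitable and {losses} losing trades "
                f"so far; net PnL {net:+.2f}. "
                "Favor the patterns behind the profitable trades.")

    def build_prompt(self, market_summary: str) -> str:
        # Prepend the reflection to the market data before asking for a decision.
        return (f"{self.reflect()}\n\nMarket data:\n{market_summary}\n\n"
                "Decide: buy, sell, or hold.")

agent = ReflectiveAgent()
agent.record("buy", 12.5)
agent.record("sell", -4.0)
print(agent.build_prompt("AAPL closed up 1.2% on high volume."))
```

In a real system, `build_prompt` would feed an LLM API call, and the reflection text could itself be generated by the LLM from the raw trade log rather than by a fixed template.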