How can agents share info optimally in a competitive hypothesis test?
Sequential Binary Hypothesis Testing with Competing Agents under Information Asymmetry
This paper studies how two AI agents can jointly determine which of two hypotheses is true while they are competing, each holds private information the other cannot see, and each may try to mislead the other.
For LLM-based multi-agent systems, it shows:

1. Agents benefit from being partially truthful but unpredictable in their communications: randomly mixing true and false messages is an effective strategy (illustrated in the sketch below).
2. Counterintuitively, agents perform best by largely discounting what other agents tell them, relying on their own observations until the final decision.
3. The first agent to confidently reach a conclusion gains a significant advantage over the second. This "first-mover advantage" matters for the design of multi-agent web applications.
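Below is a minimal, self-contained simulation of these three dynamics. Everything in it is illustrative: the Bernoulli signal model, the parameter names and values (`ACCURACY`, `TRUTH_PROB`, `PEER_WEIGHT`, `THRESHOLD`), and the linear belief-blending rule are assumptions chosen to make the findings concrete, not the paper's actual model.

```python
import random

# Hypothetical parameters for illustration; the paper does not specify these values.
ACCURACY = 0.7      # P(private signal matches the true hypothesis)
TRUTH_PROB = 0.6    # probability an agent's broadcast belief is honest
PEER_WEIGHT = 0.1   # weight placed on the rival's message (finding 2: keep it small)
THRESHOLD = 0.95    # posterior confidence required to declare an answer
MAX_ROUNDS = 50

class Agent:
    def __init__(self, name):
        self.name = name
        self.belief = 0.5  # posterior P(H = 1), starting from a flat prior

    def observe(self, truth):
        """Draw a noisy Bernoulli signal and apply an exact Bayes update."""
        signal = truth if random.random() < ACCURACY else 1 - truth
        like1 = ACCURACY if signal == 1 else 1 - ACCURACY   # P(signal | H=1)
        like0 = 1 - ACCURACY if signal == 1 else ACCURACY   # P(signal | H=0)
        num = like1 * self.belief
        self.belief = num / (num + like0 * (1 - self.belief))

    def broadcast(self):
        """Finding 1: mixed strategy -- report honestly with prob TRUTH_PROB,
        otherwise report the mirror-image belief to mislead the rival."""
        return self.belief if random.random() < TRUTH_PROB else 1 - self.belief

    def hear(self, reported):
        """Finding 2: mostly ignore the rival -- shift the belief only a
        small fraction of the way toward the (possibly false) report."""
        self.belief += PEER_WEIGHT * (reported - self.belief)

    def decided(self):
        return max(self.belief, 1 - self.belief) >= THRESHOLD

def run_episode():
    truth = random.randint(0, 1)
    a, b = Agent("A"), Agent("B")
    for _ in range(MAX_ROUNDS):
        for agent in (a, b):
            agent.observe(truth)
        msg_a, msg_b = a.broadcast(), b.broadcast()
        a.hear(msg_b)
        b.hear(msg_a)
        for agent in (a, b):  # first agent past the threshold declares (ties go to A)
            if agent.decided():
                return agent.name, (agent.belief > 0.5) == (truth == 1)
    return None, False

if __name__ == "__main__":
    random.seed(0)
    wins, correct = {"A": 0, "B": 0}, 0
    for _ in range(2000):
        winner, right = run_episode()
        if winner:
            wins[winner] += 1
            correct += right
    decided = wins["A"] + wins["B"]
    print(f"first to declare: {wins}")
    if decided:
        print(f"accuracy of the first mover: {correct / decided:.2%}")
```

Pushing `TRUTH_PROB` toward 0.5 makes messages pure noise, while `PEER_WEIGHT` near 0 reproduces the "mostly ignore the rival" behavior; the tally reports how often each agent declares first and how accurate the first mover is.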