More Agents Is All You Need: LLMs performance scales with the number of agents

Using a simple sampling-and-voting method, our research shows that the performance of large language models (LLMs) improves as the number of instantiated agents grows. The method is orthogonal to the complex techniques typically used to enhance LLMs, and the degree of improvement correlates with task difficulty. We verify this finding through comprehensive experiments on a wide range of LLM benchmarks and study the properties that drive it. Our code is publicly accessible at the link below.

https://arxiv.org/abs/2402.05120
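
The sampling-and-voting idea can be sketched in a few lines: query several independent LLM instances with the same prompt, then take a majority vote over their answers. The sketch below is illustrative, not the paper's reference implementation; `sample_fn` is a hypothetical stand-in for a single LLM call.

```python
from collections import Counter


def majority_vote(answers):
    """Return the most frequent answer among the sampled outputs."""
    return Counter(answers).most_common(1)[0][0]


def sample_and_vote(query, sample_fn, num_agents=10):
    """Sampling-and-voting: ask `num_agents` independent agents the
    same query and return the majority answer.

    `sample_fn` is a hypothetical placeholder for one LLM call.
    """
    answers = [sample_fn(query) for _ in range(num_agents)]
    return majority_vote(answers)
```

With a mostly-correct but noisy agent, the majority vote tends to recover the correct answer more often than a single sample, which matches the paper's observation that performance scales with the number of agents.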
