Many-Shot In-Context Learning

In this paper, Agarwal et al. study what happens when in-context learning (ICL) in large language models (LLMs) is scaled from a handful of examples to hundreds or thousands of shots, made possible by expanded context windows, and report significant performance gains from this transition. To reduce reliance on human-generated examples, they introduce Reinforced ICL, which uses model-generated rationales filtered for correct final answers, and Unsupervised ICL, which prompts with problems alone, and demonstrate the effectiveness of both on complex reasoning tasks. Surprisingly, many-shot learning can override pretraining biases and learn high-dimensional functions with numerical inputs. The authors also show that next-token prediction loss is an unreliable indicator of ICL performance. Overall, this research highlights the potential of many-shot ICL to overcome the limitations of learning from only a few examples.

https://arxiv.org/abs/2404.11018
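
To make the Reinforced ICL idea concrete, here is a minimal sketch of how one might build a many-shot prompt from model-generated rationales. It is not the paper's implementation: the `generate_fn` callable, the crude substring check for answer correctness, and the prompt formatting are all assumptions for illustration only.

```python
from typing import Callable, List, Tuple


def collect_rationales(
    problems: List[Tuple[str, str]],        # (question, reference_answer) pairs
    generate_fn: Callable[[str], str],      # assumed wrapper around some LLM API
    samples_per_problem: int = 4,
) -> List[Tuple[str, str]]:
    """Sample chain-of-thought rationales and keep only those whose final line
    contains the reference answer (the filtering idea behind Reinforced ICL)."""
    kept = []
    for question, reference in problems:
        for _ in range(samples_per_problem):
            rationale = generate_fn(
                f"Q: {question}\nThink step by step, then state the answer.\nA:"
            )
            lines = rationale.strip().splitlines()
            final_line = lines[-1] if lines else ""
            if reference in final_line:     # crude correctness check, for illustration
                kept.append((question, rationale))
                break
    return kept


def build_many_shot_prompt(examples: List[Tuple[str, str]], query: str) -> str:
    """Concatenate many (question, rationale) shots into one long-context prompt."""
    shots = "\n\n".join(f"Q: {q}\nA: {r}" for q, r in examples)
    return f"{shots}\n\nQ: {query}\nA:"
```

Unsupervised ICL, by contrast, would drop the rationales entirely and fill the context with questions alone; the same `build_many_shot_prompt` scaffold could be reused with empty answers.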
