How Long Can Open-Source LLMs Truly Promise on Context Length?

Introducing LongChat-7B and LongChat-13B, new chatbot models with an extended context length of up to 16K tokens. LongChat-13B has been shown to achieve up to 2x higher long-range retrieval accuracy than other open models such as MPT-7B-storywriter, MPT-30B-chat, and ChatGLM2-6B. The models not only handle long contexts but also follow human instructions accurately and score well on human-preference benchmarks. Both are available on HuggingFace. This post covers how the LongChat models were developed, the LongEval evaluation tool, and comparisons with other models on performance and context length.
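To make the "long-range retrieval accuracy" claim concrete, here is a minimal sketch of a LongEval-style fine-grained line-retrieval probe: the model is shown many numbered "register" lines and asked to recall the value at one specific line. The helper names (`make_line_retrieval_prompt`, `extract_answer`) and the exact prompt wording are illustrative assumptions, not the actual LongEval implementation.

```python
import random
import re


def make_line_retrieval_prompt(num_lines: int, target_idx: int, seed: int = 0):
    """Build a LongEval-style line-retrieval prompt (hypothetical wording):
    many numbered register lines, then a question about one of them."""
    rng = random.Random(seed)
    values = [rng.randint(10000, 99999) for _ in range(num_lines)]
    lines = [f"line {i}: REGISTER_CONTENT is <{values[i]}>" for i in range(num_lines)]
    question = f"Tell me what is the REGISTER_CONTENT in line {target_idx}?"
    prompt = "\n".join(lines) + "\n" + question
    return prompt, values[target_idx]


def extract_answer(model_output: str):
    """Pull the last integer out of a model reply (the line number may
    also appear in the reply, so the last match is the value)."""
    matches = re.findall(r"\d+", model_output)
    return int(matches[-1]) if matches else None


prompt, expected = make_line_retrieval_prompt(num_lines=200, target_idx=123)
# A real run would send `prompt` to the model under test; here we
# simulate a correct reply to show how accuracy would be scored.
simulated_reply = f"The REGISTER_CONTENT in line 123 is <{expected}>."
assert extract_answer(simulated_reply) == expected
```

Averaging this pass/fail check over many prompts, with the target line placed at varying depths and the line count scaled toward the 16K-token limit, yields a retrieval-accuracy curve of the kind used to compare the models above.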
