Long context prompting for Claude 2.1

Claude 2.1 is a state-of-the-art model with a 200K token context window that performs well on retrieval tasks over long contexts. It was trained on real-world long-document tasks and produces fewer incorrect answers than its predecessor. However, Claude 2.1 can be reluctant to answer a question based on a single sentence in a document, especially when that sentence seems out of place. A minor prompting edit overcomes this reluctance: adding just one sentence to the prompt, or directing the model to find the relevant sentences first, improves Claude's performance significantly. Anthropic continues to train the model to handle such tasks better and welcomes feedback from the community.
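As a concrete illustration, the one-sentence edit can be applied by prefilling the start of Claude's reply so the model looks for the relevant sentence before answering. The sketch below is a minimal example using the Anthropic Python SDK's Messages API; the document, question, and exact prefill wording are placeholders to adapt to your own task.

```python
# Minimal sketch of the one-sentence prompting fix for long-context retrieval.
# Assumes ANTHROPIC_API_KEY is set in the environment; the document, question,
# and prefill wording below are illustrative placeholders.
import anthropic

client = anthropic.Anthropic()

long_document = "..."  # the full long-context document goes here
question = "What did the author say about X?"  # hypothetical question

response = client.messages.create(
    model="claude-2.1",
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": f"{long_document}\n\nQuestion: {question}",
        },
        {
            # Prefilling the assistant turn directs Claude to locate the
            # most relevant sentence first, which is the prompting edit
            # described above.
            "role": "assistant",
            "content": "Here is the most relevant sentence in the context:",
        },
    ],
)

print(response.content[0].text)
```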

https://www.anthropic.com/index/claude-2-1-prompting
