Anthropic expands Claude Sonnet 4’s context window to one million tokens

Anthropic has announced that its AI model Claude Sonnet 4 can now process up to one million tokens in a single request, a fivefold increase over its previous 200,000-token limit. This expanded “context window” lets the model analyze an amount of text equivalent to an entire book series such as The Lord of the Rings, or dozens of research papers, at once.

According to Anthropic, the larger capacity is designed for tasks involving extensive documents. Users can synthesize information across hundreds of files, such as legal contracts or technical specifications, without the model losing track of the overall context. The feature is currently available in public beta through Anthropic’s API and on Amazon Bedrock, with availability on Google Cloud planned to follow.
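For developers trying the beta, a request might look like the sketch below, using the Anthropic Python SDK. The model identifier and the beta header value are assumptions based on Anthropic’s usual naming conventions, so check the API documentation for the exact strings.

```python
import anthropic

# The client reads ANTHROPIC_API_KEY from the environment.
client = anthropic.Anthropic()

# Load a set of long documents to analyze together in a single request.
paths = ["contract_a.txt", "contract_b.txt", "spec_v2.txt"]  # placeholder filenames
documents = [open(p, encoding="utf-8").read() for p in paths]

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # assumed model identifier
    max_tokens=2048,
    # Assumed beta flag opting this request into the 1M-token context window.
    extra_headers={"anthropic-beta": "context-1m-2025-08-07"},
    messages=[{
        "role": "user",
        "content": "Summarize the obligations that appear across all of these documents:\n\n"
                   + "\n\n---\n\n".join(documents),
    }],
)

print(response.content[0].text)
```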

While the company states that the model maintains high accuracy at this scale, early hands-on testing by the publication Every yielded mixed results. In a text-analysis test against Google’s Gemini models, Claude Sonnet 4 was significantly faster but produced less detailed answers. Anthropic has also introduced a new pricing tier for the feature, doubling the cost of prompts that exceed 200,000 tokens.
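As a rough illustration of that tiered pricing, the sketch below applies a doubled per-token input rate once a prompt crosses the 200,000-token threshold. The dollar figures and the whole-request billing behavior are assumptions for illustration; the announcement itself only states that the cost doubles above the threshold.

```python
def estimate_input_cost(prompt_tokens: int,
                        base_rate: float = 3.00,   # assumed USD per million input tokens
                        long_rate: float = 6.00,   # assumed doubled rate above the threshold
                        threshold: int = 200_000) -> float:
    """Illustrative estimate of input cost under the tiered long-context pricing.

    Assumes prompts above the threshold are billed at the higher rate for the
    whole request; the per-token prices here are placeholder assumptions.
    """
    rate = long_rate if prompt_tokens > threshold else base_rate
    return prompt_tokens / 1_000_000 * rate

print(estimate_input_cost(150_000))  # 0.45 (standard tier, assumed rate)
print(estimate_input_cost(500_000))  # 3.0  (long-context tier, assumed doubled rate)
```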
