Anthropic releases Claude Opus 4.7 with stronger coding and vision capabilities

Anthropic has released Claude Opus 4.7, its most capable publicly available AI model. The company says the model performs better than its predecessor, Claude Opus 4.6, across software engineering, document analysis, and visual tasks.

One of the model’s key traits is self-verification. In internal tests, Opus 4.7 built a text-to-speech engine in the Rust programming language and then ran its own audio output through a speech recognizer to check whether the result matched a reference. Anthropic says users can now hand off complex, long-running tasks with greater confidence.
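The pattern described here, generate an artifact and then machine-check it against a reference before declaring the task done, can be sketched generically. The functions below are illustrative stand-ins, not Anthropic APIs; a real pipeline would call an actual TTS engine and speech recognizer:

```python
def synthesize(text: str) -> str:
    """Stand-in for the TTS engine: 'renders' text to an audio artifact."""
    return text.lower()  # placeholder artifact


def transcribe(audio: str) -> str:
    """Stand-in for the speech recognizer that reads the audio back."""
    return audio


def self_verify(reference: str, max_attempts: int = 3) -> bool:
    """Generate output, transcribe it, and compare against the reference.

    Retries up to max_attempts times, mirroring an agent that keeps
    working until its own check passes.
    """
    for _ in range(max_attempts):
        audio = synthesize(reference)
        heard = transcribe(audio)
        if heard == reference.lower():
            return True  # output round-trips correctly
    return False


print(self_verify("hello world"))
```

With these stand-ins the check trivially passes; the point is the structure: the verification step closes the loop that would otherwise be left to the user.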

The model also supports higher-resolution images, accepting inputs up to 2,576 pixels on the longest edge — more than three times the limit of previous Claude models. VentureBeat reports this helped one security firm jump from a 54.5% to a 98.5% success rate on visual tests.
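Since the 2,576-pixel cap applies to the longest edge, a client can downscale oversized images before upload. The limit value comes from the article; the helper itself is a hypothetical illustration, not an Anthropic SDK function:

```python
MAX_EDGE = 2576  # longest-edge cap reported for Opus 4.7


def scaled_size(width: int, height: int, max_edge: int = MAX_EDGE) -> tuple:
    """Return (width, height) downscaled so the longest edge fits the cap.

    Aspect ratio is preserved; images already within the cap pass through.
    """
    longest = max(width, height)
    if longest <= max_edge:
        return width, height
    scale = max_edge / longest
    return round(width * scale), round(height * scale)


print(scaled_size(4000, 3000))  # → (2576, 1932)
print(scaled_size(1024, 768))   # → (1024, 768), already within the cap
```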

On the GDPVal-AA knowledge work benchmark, Opus 4.7 scored an Elo rating of 1753, ahead of OpenAI’s GPT-5.4 at 1674 and Google’s Gemini 3.1 Pro at 1314. However, competitors still lead in areas such as agentic search and multilingual question answering.
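Elo gaps translate into head-to-head win expectancies. Assuming GDPVal-AA uses the standard Elo formula (an assumption; the benchmark's exact scoring method is not described here), the reported ratings imply:

```python
def expected_score(r_a: float, r_b: float) -> float:
    """Standard Elo win expectancy for player A against player B."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))


# Ratings as reported for the GDPVal-AA benchmark
print(round(expected_score(1753, 1674), 3))  # Opus 4.7 vs GPT-5.4 → 0.612
print(round(expected_score(1753, 1314), 3))  # Opus 4.7 vs Gemini 3.1 Pro → 0.926
```

Under that reading, the 79-point lead over GPT-5.4 corresponds to winning roughly 61% of pairwise comparisons, while the 439-point gap to Gemini 3.1 Pro corresponds to about 93%.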

Anthropic warns that the model follows instructions more literally than previous versions, so prompts written for older models may produce unexpected results and need adjustment.

The release includes new features such as an “xhigh” effort setting for finer control over reasoning depth, a task budget tool for managing token costs, and a new code review command called /ultrareview in Claude Code.

Pricing stays the same as Opus 4.6 at $5 per million input tokens and $25 per million output tokens. The model is available through the Claude API, Amazon Bedrock, Google Cloud Vertex AI, and Microsoft Foundry.
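At the stated rates, per-request cost is straightforward to estimate. The rates come from the article; the token counts in the example are made up for illustration:

```python
INPUT_PER_M = 5.00    # USD per million input tokens
OUTPUT_PER_M = 25.00  # USD per million output tokens


def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request at Opus 4.7 list pricing."""
    return (input_tokens / 1e6) * INPUT_PER_M + (output_tokens / 1e6) * OUTPUT_PER_M


# e.g. a long-context request: 200k tokens in, 20k tokens out
print(request_cost(200_000, 20_000))  # → 1.5 (USD)
```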

Sources: Anthropic, VentureBeat
