
Anthropic introduces an extra high effort level, file-based memory across sessions, and a new verification program
Anthropic has released Claude Opus 4.7, an upgrade to its flagship model that sharpens the capabilities developers have leaned on most heavily: autonomous coding, high-resolution image processing, and sustained performance across long, multi-session tasks.
The model is available today across Claude's full product suite, the API, Amazon Bedrock, Google Cloud's Vertex AI, and Microsoft Foundry, at unchanged pricing of $5 per million input tokens and $25 per million output tokens.
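At that pricing, per-request cost is simple arithmetic. A minimal sketch (the token counts below are illustrative, not from the announcement):

```python
# Cost per million tokens at the stated Opus 4.7 pricing.
INPUT_PRICE_PER_M = 5.00
OUTPUT_PRICE_PER_M = 25.00

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the dollar cost of a single request at the stated rates."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Example: a 40k-token prompt producing a 6k-token answer.
print(round(request_cost(40_000, 6_000), 2))  # 0.35
```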
The headline improvement is in software engineering. Early-access users report being able to hand off their most demanding coding work, tasks that previously required close supervision, and trust Opus 4.7 to complete them with minimal hand-holding.
The model verifies its own outputs before reporting back, a behavioural shift that reduces the back-and-forth typically needed on complex agentic runs.
Instruction-following has also been tightened significantly, and developers should take note: where earlier models interpreted prompts loosely or skipped steps, Opus 4.7 takes instructions literally.
Anthropic advises users to re-tune existing prompts before migrating, as the stricter parsing can produce unexpected results from legacy system prompts.
Opus 4.7 now accepts images up to 2,576 pixels on the long edge (roughly 3.75 megapixels), more than three times the ceiling of prior Claude models.
That jump is not cosmetic. It opens practical use cases that were previously blocked by resolution limits: computer-use agents reading dense UI screenshots, data extraction from complex diagrams, and any workflow that depends on pixel-level visual accuracy.
Because higher-resolution images consume more tokens, Anthropic notes that users who don't require the extra fidelity can downsample images before sending them to the model to manage costs.
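The resizing math is straightforward: scale both dimensions so the longer side fits under a target, preserving aspect ratio. A minimal sketch of the dimension calculation; the target of 1,568 pixels is an illustrative choice, not a documented recommendation, and the actual resize would be done with an imaging library such as Pillow:

```python
def fit_long_edge(width: int, height: int, long_edge: int = 1568) -> tuple[int, int]:
    """Compute dimensions whose longer side is at most `long_edge`,
    preserving aspect ratio. Images already within the cap pass through."""
    scale = long_edge / max(width, height)
    if scale >= 1:
        return width, height
    return round(width * scale), round(height * scale)

# A 4000x2000 screenshot downsampled before upload:
print(fit_long_edge(4000, 2000))  # (1568, 784)
```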
Opus 4.7 is the first Claude model to carry Anthropic's new cybersecurity guardrails, introduced under the company's Project Glasswing framework.
Automatic detection and blocking is active for requests that indicate prohibited or high-risk cyber uses, a deliberate decision to test these controls on a less capable model before applying them to the more powerful Claude Mythos Preview.
Security professionals with legitimate needs (penetration testing, vulnerability research, red-teaming) can apply to Anthropic's new Cyber Verification Program to access the model for those purposes.
Alongside the model itself, Anthropic is shipping several supporting features. A new xhigh effort level sits between the existing high and max settings, giving developers finer control over the reasoning-versus-latency tradeoff. Task budgets are entering public beta on the API, allowing developers to guide token spend across longer autonomous runs.
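Put together, a request using both features might look like the payload below. This is a speculative sketch: the `effort` and `task_budget` field names, their shapes, and the model identifier are assumptions drawn from the announcement, not confirmed API parameters.

```python
# Hypothetical request payload. "effort" and "task_budget" are assumed
# field names based on the announcement, not documented API parameters.
request = {
    "model": "claude-opus-4-7",
    "max_tokens": 4096,
    "effort": "xhigh",  # new level between "high" and "max"
    "task_budget": {
        # beta: guide total token spend across a long autonomous run
        "max_total_tokens": 2_000_000,
    },
    "messages": [
        {"role": "user", "content": "Refactor the auth module and run the tests."},
    ],
}
print(request["effort"])  # xhigh
```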
In Claude Code, the new /ultrareview command runs a dedicated review session that flags bugs and design issues the way a careful human reviewer would. Pro and Max users get three free ultrareviews to test the feature.