
Anthropic's latest model brings meaningful improvements to software engineering
Anthropic has released Claude Opus 4.7, the latest iteration of its flagship model line, bringing a notable set of improvements to software engineering performance and visual processing capability. The release marks a clear step forward from the previous Opus 4.6 model, with changes spanning raw capability, safety architecture and the tools available to developers building on top of the platform.
For the growing community of engineers, researchers and enterprise users who rely on Claude for complex technical work, Thursday's announcement introduces several features that could meaningfully change how they interact with the model on a daily basis.
Stronger coding and dramatically improved vision
The two headline capability upgrades in Opus 4.7 center on software engineering and image processing. According to Anthropic, the new model demonstrates measurable gains in handling complex coding tasks that previously required close human supervision -- a shift that could reduce friction for developers using Claude as an active collaborator in their workflows rather than a passive assistant.
On the vision side, the improvement is substantial. Opus 4.7 processes images at resolutions up to 2,576 pixels on the long edge, representing more than three times the capacity of prior Claude models. For users working with detailed diagrams, technical documents, charts or high-resolution visual content, that expanded processing capability opens up a broader range of practical applications.
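For pipelines that preprocess images before sending them to the model, that limit suggests a simple client-side fit check. The sketch below (plain Python, no imaging library) computes the dimensions an image would need in order to stay within the reported 2,576-pixel long edge; the function name and rounding choice are illustrative, not anything Anthropic prescribes.

```python
MAX_LONG_EDGE = 2576  # Opus 4.7's reported long-edge limit, per the announcement

def fit_dimensions(width, height, max_edge=MAX_LONG_EDGE):
    """Return (width, height) scaled down, if needed, so the longer
    side fits within max_edge. Aspect ratio is preserved."""
    long_edge = max(width, height)
    if long_edge <= max_edge:
        return width, height  # already within the limit
    scale = max_edge / long_edge
    return round(width * scale), round(height * scale)
```

A 5,152 x 2,000 diagram, for example, would be halved to 2,576 x 1,000 before upload, while anything already under the limit passes through untouched.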
Where Opus 4.7 sits in Anthropic's model lineup
While Opus 4.7 represents a clear upgrade over its predecessor, Anthropic is upfront that it remains less capable than Claude Mythos Preview, currently the company's most powerful model. Mythos Preview continues to operate under a limited release due to safety concerns outlined in Project Glasswing, a framework Anthropic announced just last week. For most users, Opus 4.7 will be the most capable Claude model they can access in full.
Built-in cyber safeguards and a new verification program
Safety architecture received significant attention in this release. Anthropic has implemented cyber safeguards directly into Opus 4.7 that automatically detect and block requests involving prohibited or high-risk cybersecurity uses. The company also reduced the model's cyber capabilities during training compared to Mythos Preview, a deliberate choice to limit potential misuse at the model level rather than relying solely on downstream filtering.
For legitimate security professionals who need access to more advanced capabilities for authorized work, Anthropic has introduced a new Cyber Verification Program that provides a verified pathway to the model's fuller functionality in that domain.
New developer tools and a refined effort control system
Anthropic also used the Opus 4.7 launch to roll out several tools aimed specifically at developers and API users. A new effort level called "xhigh" has been introduced, sitting between the existing high and max settings and giving users more granular control over the tradeoff between reasoning depth and response speed -- a balance that matters significantly depending on the nature of the task at hand.
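In practice, selecting the new tier would come down to a single request field. The payload below is a hypothetical sketch: the `effort` field name, the model identifier and the overall request shape are assumptions for illustration, not documented API.

```python
# Hypothetical request payload illustrating the new "xhigh" effort tier.
# The "effort" field name and "claude-opus-4-7" model id are assumptions;
# consult the official API reference for the actual parameters.
request = {
    "model": "claude-opus-4-7",
    "max_tokens": 2048,
    # "xhigh" sits between "high" and "max": deeper reasoning than high,
    # without max's full latency cost.
    "effort": "xhigh",
    "messages": [
        {"role": "user", "content": "Review this diff for bugs."}
    ],
}
```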
Task budgets are now available in public beta for API users, and a new ultrareview command has been added to Claude Code specifically for bug detection. Together, these additions reflect Anthropic's continued investment in making Claude a more capable and controllable tool for professional development environments.
On the instruction-following front, Anthropic noted that while Opus 4.7 shows genuine improvement in this area, some users may need to adjust prompts that were written and optimized for earlier models, as the new version responds somewhat differently to certain input patterns.
Availability, pricing and a new tokenizer
Claude Opus 4.7 is available across Claude products, the Claude API, Amazon Bedrock, Google Cloud's Vertex AI and Microsoft Foundry. Pricing remains unchanged from Opus 4.6, holding at $5 per million input tokens and $25 per million output tokens -- a decision that makes the upgrade accessible without adding cost for existing users.
One technical change worth noting is a new tokenizer introduced with Opus 4.7. Depending on content type, the same input may now produce between 1.0 and 1.35 times as many tokens as it would have under previous models, a factor that users processing large volumes of text will want to account for when estimating usage costs.
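A rough cost band for a workload measured under the old tokenizer follows directly from the published prices and the 1.0x-1.35x range. The helper below is illustrative; it applies the multiplier to both input and output tokens, which is an assumption on the conservative side (the announcement describes the change in terms of input content).

```python
# Published Opus 4.7 API pricing, in dollars per million tokens.
INPUT_PRICE = 5.00
OUTPUT_PRICE = 25.00

def cost_range(old_input_tokens, old_output_tokens,
               mult_low=1.0, mult_high=1.35):
    """Estimate the (low, high) dollar cost for a workload sized under
    the old tokenizer, given the reported 1.0x-1.35x token-count change.
    Applying the multiplier to output tokens too is an assumption."""
    def cost(mult):
        inp = old_input_tokens * mult / 1_000_000 * INPUT_PRICE
        out = old_output_tokens * mult / 1_000_000 * OUTPUT_PRICE
        return inp + out
    return cost(mult_low), cost(mult_high)
```

A job that used to consume one million input tokens and 200,000 output tokens would land somewhere between $10.00 (no change) and $13.50 (the full 1.35x multiplier).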
On benchmarks including finance agent evaluations and GDPval-AA -- a measure of economically valuable knowledge work across finance and legal domains -- Opus 4.7 scored higher than its predecessor, suggesting real-world performance gains in the professional contexts where the model is most heavily used.