Anthropic's Claude Mythos Preview Aims To Find Dangerous Software Bugs

Yahoo Tech · 19d ago

Anthropic, the company behind the Claude chatbot, says it is testing a powerful new AI system that can spot serious weaknesses in software. That matters because software bugs do more than prevent a system from working properly: they can let hackers break into computers, steal data, or shut down important systems. Anthropic says its new model, called Claude Mythos Preview, is far better than earlier systems at finding those weak spots.

Most people have never heard of Mythos because Anthropic has not released it widely. Think of it as a test version of Claude that is far better at code and bug hunting. Anthropic says Mythos can help find hidden flaws in operating systems, browsers and other core software that much of the world relies on. This matters especially for critical open source libraries and other software on which users increasingly depend.

Recently there has been a string of so-called "supply chain" attacks, in which attackers compromise critical open source libraries that are in turn used by other systems. Anthropic says Mythos has already helped uncover previously unknown vulnerabilities, including flaws in major software systems. Some of those bugs could have been used in real attacks if they had stayed hidden.

What is Project Glasswing?

Anthropic is being careful with Mythos. Instead of opening it to everyone, the company has placed it inside a locked-down pilot called Project Glasswing, Anthropic's program for giving the tool to a small group of trusted partners first. The aim is to let security teams, infrastructure providers and open source maintainers use the model to find problems before criminals do.

Anthropic's caution comes down to timing: the people fixing software should get this tool before the people trying to break software get something similar. A model that is very good at finding software flaws could help defenders patch systems faster, but it could just as easily help attackers figure out where to hit.

Picture a metal detector that can find weak points inside the beams of a bridge. In the hands of engineers, that tool helps prevent disaster. In the hands of a saboteur, it shows exactly where to attack. Anthropic seems to believe Mythos is strong enough that it cannot be treated like a normal public release. That stance suggests top AI companies now think some models could affect real-world cybersecurity in a serious way.

That puts the company in a difficult position. A better AI model sounds exciting when it writes emails faster or helps a programmer clean up code. It sounds far less benign when it can uncover the kinds of flaws hackers spend years trying to find. Project Glasswing is built around the idea that the people defending important systems should see this first.

The Shift in AI's Central Position for Code

In the past, AI companies mostly talked about chat quality, benchmark scores and workplace productivity. Now Anthropic is talking about restricted access, risk reports, government briefings and cybersecurity partnerships. The company treats Mythos, and the more sophisticated uses of Claude, as a sensitive system with real security consequences.

There is another reason this story has drawn attention. Reports about Mythos started circulating before Anthropic fully explained the project in public. That lent the rollout an air of secrecy and raised harder questions about how companies should handle advanced AI systems that may help defenders and attackers at the same time.

Anthropic's message is that Mythos is powerful, but it is not for everyone. If an AI company feels the need to limit access to a model because it may uncover dangerous vulnerabilities too effectively, then AI has moved into new territory. Will we start seeing powerful models controlled through export rules or other restrictions?

That is why Project Glasswing may end up mattering more than the Claude Mythos Preview label itself. Models will keep getting smarter; the market almost guarantees that. The real contest will sit in the operating model around them: who gets access, who gets warned, who patches first, who audits the redactions, and who bears the cost when private labs discover public weaknesses. Anthropic is betting that a closed circle of selected companies will buy the rest of the ecosystem time. It may be right. Or the AI industry may simply be moving too fast to build effective layers of control.

This article was originally published on Forbes.com

