Anthropic's Mythos Model Heralds New Era for AI Releases

Bloomberg Business | 19d ago

You're reading Bloomberg Technology's Q&AI newsletter.


An unreleased new AI model is the talk of Silicon Valley. Plus: New developments in the fight to prevent Chinese developers from copying US AI models. But first...

Three things to know:

  • Meta debuts first AI model from new Superintelligence group

  • Anthropic completes tender offer, but employees hold onto shares

  • OpenAI advocates electric grid, safety net spending for new AI era

A 'terrifying' model

Over the past three years, the top AI developers have raced to one-up each other by introducing better artificial intelligence models, with new releases becoming increasingly frequent. Now, we may be entering an era when AI progress is defined in part by what gets held back.

On Tuesday, Anthropic unveiled Mythos, a more powerful general-purpose model that it says significantly outperforms prior offerings on a range of benchmarks, including for coding and reasoning. Yet, the company has decided to limit the release to a small group of trusted partner companies because of concerns about its advanced cybersecurity capabilities.

During Anthropic's testing, its in-house security team found that Mythos was capable of identifying and then exploiting vulnerabilities "in every major operating system and every major web browser when directed by a user to do so," according to a blog post. The hope, according to Anthropic, is a narrow release will allow companies to use Mythos to find their own cybersecurity vulnerabilities before hackers do.

Meanwhile, OpenAI is also finalizing a product with greater cybersecurity capabilities that it intends to release to select partners, Axios reported on Thursday. The ChatGPT maker had previously introduced a pilot program meant to put its tools "in the hands of defenders first."

Many AI industry insiders applauded Anthropic -- a company that has long positioned itself as a safety-focused organization -- for taking what appears to be a responsible approach to rolling out the technology. Some were also spooked by a model that one Anthropic employee said "should feel terrifying."

Given Mythos is not publicly available, the initial feedback on what it can do is largely coming from the company and a few partners with early access. In a system card for the new model, Anthropic said early users found Mythos was better at correcting itself and generally "works more like a senior engineer," spotting and repairing issues that other models passed over.

On the whole, however, Anthropic said Mythos "does not seem close to being able to substitute for research scientists and research engineers -- especially relatively senior ones."

Beyond security concerns, Anthropic may have other reasons to rethink its approach for rolling out models, including rationing its computing resources. Mythos is a large, costly system that's coming out at a time when Anthropic is already straining to meet surging demand for its existing products.

"We are thoroughly in the era of the labs' best models may well not be public in the way we are used to," Dean Ball, a former AI adviser in the Trump administration, wrote on X. "This will be because of a combination of compute constraints, economic reality, competitive advantage, and safety concerns."

Combating distillation

The decision to refrain from a wider release of Mythos may also help address another pressing concern for Anthropic and its peers: keeping Chinese AI firms in check.

As my colleague Maggie Eastland and I reported this week, Anthropic, OpenAI and Alphabet Inc.'s Google -- normally fierce competitors -- are now swapping notes on how to stop China's AI developers from ripping off their models through a technique known as adversarial distillation.

With distillation, a developer queries a "teacher" model and uses its outputs as training data for a separate "student" model. Some forms of distillation are widely accepted, such as when AI labs create smaller, more efficient versions of their own models. But the fear in Silicon Valley is that Chinese firms are using this tactic to build competing AI systems with similar capabilities developed for a fraction of the cost.
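The imitation at the heart of distillation can be sketched in a few lines. In this toy example (every function and number here is illustrative, not taken from any lab's actual pipeline), a tiny two-parameter "student" is fit to the soft probability outputs of a fixed "teacher" by gradient descent:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def teacher(x):
    # Stand-in for a large, expensive model: maps an input to a probability.
    return sigmoid(2.0 * x - 1.0)

def student(x, w, b):
    # Much smaller "student" with just two learned parameters.
    return sigmoid(w * x + b)

def distill(inputs, steps=5000, lr=1.0):
    """Fit the student to the teacher's soft outputs on the queried inputs."""
    w, b = 0.0, 0.0
    for _ in range(steps):
        grad_w, grad_b = 0.0, 0.0
        for x in inputs:
            t = teacher(x)            # soft target queried from the teacher
            s = student(x, w, b)
            # Gradient of cross-entropy between teacher and student outputs:
            # d/dz of -[t*ln(s) + (1-t)*ln(1-s)] with s = sigmoid(z) is (s - t).
            grad_w += (s - t) * x
            grad_b += (s - t)
        w -= lr * grad_w / len(inputs)
        b -= lr * grad_b / len(inputs)
    return w, b

inputs = [-2.0, -1.0, 0.0, 1.0, 2.0]
w, b = distill(inputs)
# After training, the student closely mimics the teacher on the queried
# inputs -- without ever seeing the teacher's weights or training data.
gap = max(abs(student(x, w, b) - teacher(x)) for x in inputs)
```

The point of the sketch is that only the teacher's *outputs* are needed, which is why API access alone is enough to attempt distillation at scale.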

The stakes for US tech firms are high. At the moment, the country is still widely viewed as being ahead of China in AI development. But China is flooding the market with affordable, capable models that risk undercutting the ability of companies like OpenAI and Anthropic to charge more for their products. One estimate by US officials found that unauthorized distillation is costing US companies billions in annual profit, as we reported.

At the same time, some have argued the practice poses a security risk, since distilled models often lack safety guardrails designed to prevent bad actors from using AI tools for malicious activities. That issue may resonate even more in the wake of an advanced model like Mythos, which is theoretically capable of wreaking havoc on critical software.

Distillation is becoming an increasingly urgent matter for US firms as they anticipate the release of a long-awaited new model from DeepSeek. The Chinese startup rattled markets early last year by introducing a credible competitor to OpenAI's reasoning models that purportedly cost far less to build.

OpenAI has accused DeepSeek of exfiltrating large amounts of data by systematically prompting its application programming interface and obscuring where the requests were coming from. If a US-based company did this, OpenAI could block its access to the service and sue. When the users are in China and operating through obscured channels, enforcement becomes far more difficult.

Microsoft and OpenAI previously launched an investigation into DeepSeek's alleged practices, as I first reported. In February of this year, OpenAI also sent a memo to Congress noting it had found continued, sophisticated distillation attempts from actors in China and Russia.

"DeepSeek's next model (whatever its form) should be understood in the context of its ongoing efforts to free-ride on the capabilities developed by OpenAI and other US frontier Labs," OpenAI wrote in the memo.

Some commentators have been quick to point out the apparent hypocrisy in this rhetoric. The same AI labs accusing Chinese developers of effectively copying their technology without permission have also previously been sued for ripping off the works of artists and writers to train their own models.

Whether or not the US companies have the moral high ground, they do have the attention of the Trump administration, which is focused on a technological arms race with China. As companies like Anthropic and OpenAI push out more capable models with new national security implications, the discussion around distillation will likely only grow louder in Washington and beyond.

Human quote of the week

"When I predicted that it would be 2029 for AGI in 1999, the big controversy was, 'Would that happen?' Now the controversy is whether or not it's good for people."

Ray Kurzweil

Futurist and author of The Singularity Is Near

In an interview at the HumanX conference on Tuesday, Kurzweil took a victory lap of sorts for his early predictions that a more powerful form of AI known as artificial general intelligence would be achieved this decade. Other tech leaders now see a similar timeline, thanks to recent advances in AI. "Competition is what fosters exponential growth," he told me in the discussion.

One to watch

Deep learning

  • CoreWeave, Meta strike latest $21 billion deal for AI computing

  • UAE's AI leader plans big US expansion, looking past Iran war

  • Musk seeks ouster of OpenAI CEO Sam Altman as trial looms

  • Former DeepMind researchers bet on visual AI with new startup

More from Bloomberg

Get Tech In Depth and more Bloomberg Tech newsletters in your inbox:

  • Cyber Bulletin for coverage of the shadow world of hackers and cyber-espionage

  • Game On for diving deep inside the video game business

  • Power On for Apple scoops, consumer tech news and more

  • Screentime for a front-row seat to the collision of Hollywood and Silicon Valley

Originally published by Bloomberg Business
