Anthropic's AI sparks concerns over a new national security risk

POLITICO

AI and cybersecurity professionals have feverishly raised concerns this week over a large language model that Anthropic says is too dangerous to release.

On Tuesday, Anthropic announced Claude Mythos Preview, a model the company claims is capable of exploiting vulnerabilities in every major operating system and internet browser. According to Anthropic, more than 99 percent of the thousands of vulnerabilities that Mythos has identified aren't patched yet, and many have gone unnoticed for decades. Anthropic is now launching Project Glasswing, an initiative to work with companies like AWS and Google to safely deploy these capabilities and strengthen cybersecurity.

However, there is some debate among AI researchers about the benchmarks and analytical tools underlying Anthropic's claims. OpenAI also initially held back the release of GPT-2 in 2019 for fear that it would cause immeasurable safety and security harms, a concern that now seems quaint given the powerful models released since then.

If we take Anthropic at its word, then Mythos would have devastating consequences for national security. This raises the wider societal question: Should we trust any private company to have control over a technology that could potentially upend cybersecurity at this scale?

"To the extent that it's true that Mythos is 'the one ring to rule them all' in cyber capabilities, having that concentrated anywhere would not be great," said Bill Drexel, a Hudson Institute senior fellow who studies AI policy. "But if it had to be concentrated somewhere, you would much prefer a government with democratic oversight than a private corporation."

Regulators are in a difficult position when it comes to Mythos. In the past, the government has either led the way or been closely involved in the development of powerful technologies like nuclear weapons and autonomous drones. The dual-use nature of AI technology, however, means that private industry has been at the forefront of its advancement.

Competition in cutting-edge American markets has long driven innovation and bolstered the economy. However, the government must also balance the productive creativity of the free market with protecting national security.

"What we're going to be seeing more of is the government trying to take measures to check the power of these private companies," said Cornell government and tech professor Sarah E. Kreps. "But the government has to be somewhat careful as well, because those same companies are a big engine economically."

Even if the government does want to boost oversight of companies developing potentially dangerous models, the full extent of its power to do so remains unclear. This is playing out in real time through Anthropic's disagreement with the Trump administration over how its technology should be used by the Pentagon, a fight currently before the courts.

Other laws allow the government to exert certain control over companies for national security purposes, but they're unlikely to apply to models like Mythos, according to former National Security Council deputy legal adviser Ashley S. Deeks. For instance, the Defense Production Act allows the president to force companies to prioritize government contracts and direct them in manufacturing important defense products. Yet the act wouldn't give the government complete authority over Mythos.

"That statute doesn't envision the president fully taking over a company that has developed a really powerful tool," said Deeks, who added that the DPA also likely wouldn't allow the government to "buy up an entire product [like Mythos] that a company has made, and not allow any sales to anyone else." What national security officials could do is restrict exports of Mythos to ensure it doesn't land in the hands of foreign adversaries.

So if there aren't many existing policies to handle something like Mythos, it may be worth considering new ones. Peter Wildeford, the AI Policy Network's policy head, suggested that there could be rules requiring a company like Anthropic to give the government a heads up before deciding to release Mythos or a similar model. Wildeford added that Project Glasswing could be a good statutory model for these kinds of situations in the future, though he would want requirements for government involvement in the process.

"Differential access where you give vetted defenders access before it goes to [the] general public is a very good way of doing things, but it's not [currently] required by law by any means," he said. "Less responsible companies could be constrained by these requirements."

A White House official, granted anonymity to discuss sensitive talks, said the government is working with AI companies to ensure that models are used to address major software vulnerabilities. An Anthropic spokesperson pointed DFD to its blog post on Project Glasswing, which notes that the company has been "in ongoing discussions with US government officials about Claude Mythos Preview and its offensive and defensive cyber capabilities."

Florida Attorney General James Uthmeier announced on Thursday that his office is opening an investigation into OpenAI over public safety and national security concerns with ChatGPT, POLITICO's Andrew Atterbury reports.

While its full extent is currently unclear, the investigation is partly tied to reports that ChatGPT may have advised the alleged perpetrator of a mass shooting at Florida State University last year. Uthmeier further suggested that his office will look into how AI technology may be falling into the hands of adversaries like the Chinese Communist Party, and how it contributes to the proliferation of child sexual abuse material.

"AI should exist to supplement, support and advance mankind, not lead to an existential crisis or our ultimate demise," Uthmeier said in a video posted to X. "As Big Tech rolls out these technologies, they should not, they cannot, put our safety and security at risk." OpenAI did not immediately respond to a request for comment.

New York City is considering a rule that would penalize companies for having difficult subscription cancellation policies, POLITICO's Alfred Ng reports.

The rule would impose an initial $525 fine per violation on businesses that do not offer a simple way to cancel subscriptions, with penalties potentially reaching $3,500 for repeat offenders. The New York City Department of Consumer and Worker Protection claims that it fielded more than 100 complaints about difficult cancellation processes last year.

The proposal was submitted by DCWP Commissioner Sam Levine, who had advocated for such a policy during his time at the Federal Trade Commission. "If it's easy to sign up for something, it should be just as easy to cancel," he said in a statement.

  • FBI obtained deleted Signal messages from the iPhone notifications database.

  • Meta takes down ads from attorneys seeking clients addicted to social media.

  • Waymo robotaxis share pothole data they collect with Waze users.

  • A pro-Iran meme operation trolls Trump with AI Lego cartoons.

  • Security researchers induced Apple Intelligence to curse at users.

Stay in touch with the whole team: Aaron Mak ([email protected]); Bob King ([email protected]); Nate Robson ([email protected]); John Hewitt Jones ([email protected]).

Originally published by POLITICO
