Why Anthropic's Mythos AI has sparked a global security scramble - Moneycontrol.com

MoneyControl · 13h ago

When Anthropic revealed its new AI model Mythos, the reaction wasn't just excitement -- it was alarm.

The company itself made it clear that this wasn't a typical release. Mythos was considered too powerful to be widely shared. Instead, access was restricted to a small group of organisations, mostly in the United States, with one key exception: the United Kingdom.

Within days, that decision triggered a global response.

Central banks, intelligence agencies and governments began trying to understand what the model could do -- and what it might mean if it fell into the wrong hands, the New York Times reported.

What makes Mythos different

At the heart of the concern is what Mythos can actually do.

Unlike earlier AI systems, it doesn't just identify software vulnerabilities -- it can exploit them. That means it can potentially break into systems that run critical infrastructure, from financial networks to energy grids.

Experts have warned that this represents a step change.

The governor of the Bank of England described it as something that could "crack the whole cyber-risk world open." Others have compared its potential impact to major geopolitical disruptions, highlighting just how seriously it's being taken.

In simple terms, this isn't just a better tool. It's a fundamentally more powerful one.

A race that now looks geopolitical

The response to Mythos has also revealed something bigger.

AI is no longer just a technology race between companies. It's becoming a competition between countries.

Whoever builds the most advanced models doesn't just gain commercial advantage. They gain influence over security, infrastructure and even global stability.

That's why Mythos has quickly turned into more than a product.

It's become a geopolitical asset.

For countries like China and Russia, the development has reinforced concerns about falling behind. For others, it's raised uncomfortable questions about dependence on a handful of companies based in the United States.

Limited access, growing tension

Anthropic's decision to tightly control access has added another layer to the situation.

On one hand, many experts have praised the caution. Limiting who can use such a powerful tool reduces the risk of misuse.

On the other hand, it creates tension.

Governments and organisations that don't have access are left trying to assess a threat they can't fully examine. European regulators, for example, have held multiple meetings with the company but still haven't been given direct access to the model.

That imbalance raises a bigger question.

Who gets to decide who can use a technology this powerful?

A gap in global coordination

Another issue is the lack of a global framework to deal with something like this.

There are no clear international rules, no shared inspection systems and no equivalent of a treaty that governs how advanced AI should be handled. Each country is reacting on its own, often with limited information.

That makes coordination difficult.

Even as the risks are becoming clearer, there's no agreed way to manage them.

Why time may be limited

Anthropic has also warned that Mythos may not stay unique for long.

The company expects that similar models with comparable capabilities could emerge within the next 18 months. That creates urgency.

Organisations now have a limited window to strengthen their systems before such tools become more widely available.

And once they do, controlling access becomes much harder.

The bigger picture

What Mythos shows is how quickly the conversation around AI is changing.

This is no longer just about innovation or productivity. It's about security, control and power.

Technologies that were once seen as tools are starting to look more like strategic assets, ones that can shape global dynamics in ways we're only beginning to understand.

And as that shift accelerates, governments and companies alike are being forced to answer a difficult question.

Not just how to build powerful AI, but how to live with it safely.

Originally published by MoneyControl
