Commentary: Anthropic's 'too dangerous to release' Mythos AI model is a wake-up call for everyone

CNA · 8 days ago

The Treasury is now pushing for access to Mythos. One organisation that already has it is the UK's AI Security Institute, which has become the world's top neutral arbiter of what counts as safe and secure AI.

It found that some of the hype around Mythos is warranted. It is indeed more capable of being used for complex cyberattacks than other AI tools such as OpenAI's ChatGPT or Google's Gemini. But it poses the greatest danger to "weakly defended" or simplified systems.

Large banks have some of the most secure IT in the world. While Mythos and other powerful AI models pose a threat in the wrong hands, it is the much broader array of small and medium-sized companies that looks most vulnerable to hackers and bad actors using these tools.

Cyber specialists have long complained that companies treat security as an afterthought, and the result is online services and software that are riddled with bugs, handing hackers a possible way to infiltrate a computer system.

Tech companies have an approach for dealing with this, called "responsible disclosure". Once a flaw is found in their software, they develop a fix and then announce both the flaw and the suggested fix to the world, giving their customers time to apply the patch and move on with their lives. Microsoft's version of this is Patch Tuesday, its monthly release of security fixes, with accompanying flaw disclosures, for Office 365, Windows and other products.

IT staff at banks like Barclays and Wells Fargo will take those suggested patches, test them to make sure they don't break any of their existing systems, get sign-off from management, and then deploy them. That takes weeks or months.

Up until the advent of generative AI, the process worked just fine because it would typically take an even longer time for bad actors to find a way to attack a system based on the flaw that had been disclosed. They'd have to study the bug and also experiment with different methods for exploiting it.
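The timing argument above comes down to simple arithmetic: defenders are safe only as long as deploying a patch takes less time than attackers need to weaponise the disclosed flaw. A minimal sketch of that race, with hypothetical numbers (the article gives no figures), might look like:

```python
# Illustrative sketch of the patch-window race described above.
# The function and all numbers are hypothetical, not from the article.

def system_is_exposed(patch_deploy_days: int, exploit_dev_days: int) -> bool:
    """A system is exposed if attackers can build a working exploit
    for a disclosed flaw before defenders finish testing, approving
    and deploying the patch."""
    return exploit_dev_days < patch_deploy_days

# Pre-generative-AI assumption: crafting an exploit took longer than
# a bank's weeks-to-months patch cycle, so disclosure was tolerable.
print(system_is_exposed(patch_deploy_days=45, exploit_dev_days=90))  # False

# If AI tools compress exploit development, the same patch cycle
# leaves a window of exposure.
print(system_is_exposed(patch_deploy_days=45, exploit_dev_days=5))   # True
```

The point of the sketch is that generative AI changes only one side of the inequality, which is enough to break a process that previously "worked just fine".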

Originally published by CNA
