UK financial authorities assess potential risks of Anthropic's new AI model: Report

Firstpost · 13 days ago

Anthropic's new AI model, reported to be its most powerful yet, has leaked. File image/Reuters

UK financial regulators are holding talks with the government's cybersecurity agency and major banks to assess risks posed by the latest artificial intelligence model from Anthropic. The Financial Conduct Authority, officials at the Bank of England, and HM Treasury are in discussions with the National Cyber Security Centre to examine potential vulnerabilities in critical IT systems highlighted by Anthropic's latest AI model, the Financial Times reported.

Representatives from major British banks, insurers, and exchanges are expected to be briefed on the security risks posed by the model, Claude Mythos Preview, at a meeting with regulators within the next fortnight, the FT said, citing two people briefed on the talks.

Soon afterwards, a meeting convened by US Treasury Secretary Scott Bessent with major Wall Street banks will also examine the model's potential cyber risks. The startup has said the model is being deployed as part of "Project Glasswing," a controlled initiative under which only select institutions are allowed to use the unreleased Claude Mythos Preview for defensive cybersecurity purposes.

Anthropic had earlier disclosed that the model had already identified thousands of major vulnerabilities across operating systems, web browsers, and other widely used software.

Claude Mythos's exceptional cybersecurity capabilities are what make it dangerous for public release. Its discovery of flaws in a critical piece of security infrastructure, along with multiple vulnerabilities in the Linux kernel, which underpins computer systems worldwide, has further alarmed the industry.

Since these capabilities would become widely available at launch, cybersecurity experts have begun raising the alarm. Mindful of the damage such functionality could cause if any user could access it, Anthropic is first offering the model to companies, including Apple, Microsoft, and Google, to find issues in their critical infrastructure. The hope is that they can identify and fix gaps in their security before the model is fully released.

Originally published by Firstpost
