US Banks urged to test Anthropic Claude Mythos by Trump administration: Here's why


The Financial Express · 11d ago

Anthropic introduced Mythos earlier this month as its most capable frontier AI model to date.

Trump administration officials are reportedly encouraging Wall Street's largest banks to internally test Anthropic's latest AI model, known as Mythos. Anthropic has declined to release the model to the public and is willing to share it only with a select few institutions, so they can use it to find and fix security vulnerabilities in their systems. The push comes months after the Pentagon designated the AI firm a 'supply chain risk', barring its products from use by government institutions.

According to reports, Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell convened an urgent meeting this week with chief executives of systemically important banks, including Goldman Sachs, Bank of America, Citigroup, Morgan Stanley, and Wells Fargo. JPMorgan Chase was specifically named in connection with testing initiatives, while the other banks have either gained access or expect it soon. During the gathering, officials urged bank leaders to take the model seriously and deploy it for vulnerability detection.

Trump administration urges top US banks to test Claude Mythos AI model

Anthropic introduced Mythos earlier this month as its most capable frontier AI model to date. The company has chosen not to release it publicly, citing what it describes as unprecedented cybersecurity capabilities.

Instead, it is being shared selectively with partners, including major tech firms like Apple, Amazon, and Microsoft, as part of a program aimed at securing critical software.

Anthropic claims the model has already exposed thousands of vulnerabilities in operating systems, web browsers, and popular applications, raising alarms about a new era of AI-driven cyberattacks. Officials did not hint at an imminent specific attack but stressed the need for banks to run the model against their own infrastructure to strengthen defences. One source described the message as a warning: allowing advanced AI like Mythos to probe internal systems could risk exposing customer information if not handled carefully.

Banking leaders, including Goldman Sachs CEO David Solomon, Bank of America's Brian Moynihan, and Citigroup's Jane Fraser, attended the session. JPMorgan's Jamie Dimon was invited but could not participate. The focus on testing Mythos internally positions it as a powerful tool for proactive defence that could also be weaponised by malicious actors.

Testing directive follows Anthropic-Pentagon rift

The directive to US banks to test the Claude Mythos model follows a falling-out between the AI firm and the Pentagon over the use of Anthropic's models in defense projects. Anthropic had signed a $200 million contract with the Department of Defense in July 2025, making Claude the first major frontier AI model deployed on US military classified networks for tasks including intelligence analysis, operational planning, and cyber operations.

Tensions escalated in January 2026, when reports surfaced that Claude had assisted in a US military raid to capture former Venezuelan President Nicolás Maduro. An Anthropic employee reportedly raised questions with partner Palantir about potential violations of the company's usage policies, which strictly prohibit assistance with mass domestic surveillance of Americans or fully autonomous lethal weapons. Defense Secretary Pete Hegseth viewed this as overreach by the company, prompting a renegotiation of the contract. The Pentagon demanded unrestricted "any lawful use" of the models, while Anthropic insisted on maintaining its core safety guardrails.

Negotiations collapsed after Hegseth issued a deadline for Anthropic to drop its restrictions. Despite last-minute offers and a meeting between CEO Dario Amodei and Pentagon officials, the two sides failed to agree on terms covering surveillance and autonomous systems. Hegseth designated Anthropic a 'supply chain risk', ordering federal agencies and military contractors to stop doing business with the company. President Trump publicly backed the move, and rival firms OpenAI and xAI quickly secured deals to fill the gap. Anthropic has sued to challenge the Pentagon's designation.

Originally published by The Financial Express
