Anthropic Model Sparks Fed-Wall Street Alarm Over AI Cyber Risk

International Business Times, Singapore Edition · 12d ago

- Banks and regulators warn advanced AI could trigger cyberattacks and disrupt financial stability
- U.S. regulators, bank CEOs discuss AI-driven cyber risks to finance
- Jerome Powell, Scott Bessent meet major bank executives
- Anthropic restricts Mythos model over misuse concerns
- Markets react as AI cyber threats raise systemic financial stability risks

Federal regulators and top Wall Street executives held urgent discussions this week over the potential cyber risks posed by a new artificial intelligence model developed by Anthropic, highlighting growing concerns that advanced AI systems could disrupt the stability of the financial system.

Federal Reserve Chair Jerome Powell and Treasury Secretary Scott Bessent met with the chief executives of major U.S. banks to assess the implications of Anthropic's newly introduced Mythos model. The meeting underscores how AI-driven cyber capabilities are increasingly being treated as systemic risks, with potential consequences for banking operations, financial security, and global markets.

The discussions come as Anthropic has rolled out its Claude Mythos Preview model in a limited capacity, citing concerns that its advanced capabilities could be exploited by malicious actors. Restricting access reflects the high-risk nature of such systems, where misuse could lead to large-scale cyberattacks, financial disruptions, or data breaches affecting both institutions and consumers.

AI Cyber Capabilities Trigger System-Level Concerns

The urgency of the meeting signals that AI is no longer viewed solely as a productivity tool but as a potential threat vector. Advanced models capable of automating cyber operations can significantly lower the barrier for executing sophisticated attacks, increasing the scale and frequency of risks facing financial institutions.

Major bank leaders, including executives from Bank of America, Citigroup, Goldman Sachs, Morgan Stanley, and Wells Fargo, attended the meeting, reflecting the breadth of concern across the financial sector. That not a single large institution was absent from the discussions highlights how critical coordinated responses have become in managing systemic threats.

Anthropic has been working with partners including Apple, Google, Microsoft, and Nvidia under a cybersecurity initiative known as Project Glasswing. These collaborations aim to strengthen defensive capabilities, but they also underscore the dual-use nature of AI, where the same tools can be applied for both protection and attack. Anthropic CEO Dario Amodei wrote on X: "The dangers of getting this wrong are obvious, but if we get it right, there is a real opportunity to create a fundamentally more secure internet and world than we had before the advent of AI-powered cyber capabilities."

Government And Industry Alignment Still Evolving

The Trump administration's engagement with Anthropic reflects growing recognition that AI governance is becoming a national security issue. The company has held ongoing discussions with agencies including the Cybersecurity and Infrastructure Security Agency, as policymakers attempt to understand and regulate emerging risks.

At the same time, tensions remain between Anthropic and the U.S. Department of Defense, which recently labeled the company a supply chain risk. Legal challenges are ongoing, with conflicting court rulings allowing Anthropic to continue working with some government agencies while remaining restricted from defense contracts.

This regulatory uncertainty can influence how quickly AI technologies are adopted across sectors. Delays or restrictions may slow innovation but are also aimed at preventing misuse that could lead to large-scale economic or security consequences.

Financial Markets React To Emerging Risks

The release of the Mythos model has already had visible effects in financial markets. Reports of its advanced cyber capabilities contributed to a decline in cybersecurity stocks.

Such reactions show how sensitive markets are to developments in AI. Perceived increases in cyber risk can shift capital flows, affect valuations, and influence investment strategies across technology and financial sectors.

Previous incidents have demonstrated the risks associated with AI misuse. Anthropic disclosed that its models had been used in past cyber operations, including automated attacks targeting government and corporate systems.

The meeting between regulators and bank executives signals a shift in how AI is being integrated into risk assessment frameworks. Rather than focusing solely on innovation, institutions are increasingly preparing for scenarios in which AI could destabilize critical systems.

The situation reflects the dual nature of artificial intelligence as both a tool for progress and a potential source of systemic risk, with policymakers and industry leaders working to balance innovation with safeguards as the technology continues to evolve.

Originally published by International Business Times, Singapore Edition
