Anthropic's Locked-Down Mythos Model Hit by Access Claim | This Week in IT - Techopedia

Techopedia.com

Suswati Basu is a multilingual, award-winning editor. She was shortlisted for the Guardian Mary Stott Prize and longlisted for the Guardian International Development Journalism Award....

Anthropic is investigating reports that Claude Mythos Preview, an unreleased version of its AI model, may have been accessed without authorization through a third-party vendor environment tied to development work.

Speaking to Techopedia, an Anthropic spokesperson said: "We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments."

The report surfaced only a week after Anthropic released the Claude Mythos Preview to a select group of organizations. People familiar with the matter said the reported activity appears linked to an external development platform rather than Anthropic's production API systems.

The sources added there is no evidence at this stage that the incident extended beyond that external environment or affected the company's internal infrastructure.

Anthropic's Mythos Model Is Already Being Put To The Test

Anthropic has not said whether any data was removed or when the alleged access took place. However, Bloomberg reported that the activity originated on a "private online forum."

The users were reportedly part of a private Discord group focused on uncovering details about unreleased AI models, using bots to scan unsecured websites such as GitHub for stray references posted by major labs.

The news outlet reported that the group gained access to Mythos after some members made an educated guess about the model's online location using naming patterns Anthropic had used for earlier releases. Some of those clues allegedly emerged from a recent data breach involving Mercor, a startup that works with several leading AI developers.

The irony is hard to miss. Anthropic positioned Mythos as a model so powerful that it required an unusually cautious rollout. It limited access to a small number of trusted partners because of fears it could be misused by hackers or destabilize cybersecurity defenses.

Yet the first major controversy surrounding the system is not what Mythos itself might do. It's the possibility that third parties have already gained access through the carelessness of an external development partner.

For a company that has built much of its identity around AI safety and controlled deployment, this risks reinforcing a familiar lesson in tech: A system is only as strong as the weakest link in its wider supply chain. Quite often, that weak link is basic human nature.

Also in Tech News

Tim Cook Steps Down As Apple Addresses Its AI Problem

After more than a decade at the helm, Apple's head honcho Tim Cook passed the baton to John Ternus, signaling a change in direction for the $4 trillion company. In a statement, the 65-year-old said Ternus would attempt to "make something better, bolder, more beautiful, and more meaningful."

Ternus has been serving as the tech giant's senior vice president of Hardware Engineering.

The changing of the guard comes at a time when Apple appears to have stalled in the AI race against the likes of OpenAI, Google, and Grok. Cook's tenure as CEO will end on September 1, bringing to a close an era defined by operational efficiency and financial growth.

Although he ushered Apple into its trillion-dollar era, Cook has often lived in the shadow of his predecessor. Analysts have built a mythos around company cofounder Steve Jobs, next to whom Cook has seemed perhaps too strait-laced. Now, Ternus will be expected to step up as both a master of managing sprawling operations and an innovation wizard for this new tech era.

Ming-Chi Kuo, a tech analyst at TF International, wrote on X that one of Ternus's major achievements was overseeing the transition from Intel processors to the firm's own proprietary silicon. Kuo added: "Without this, there wouldn't be the success of today's MacBook Neo or the advantage Apple now holds as it gears up for AI devices."

Meta Plans to Track Employee Keystrokes for AI Training

Meta has found itself in hot water after reports emerged that it plans to track the computer activity of U.S. employees to help train its AI models. The software is expected to capture mouse movements, clicks, and keystrokes as the company looks to build AI agents capable of working more autonomously, Reuters first reported, citing an internal memo.

According to the report, the company's Model Capability Initiative tool would run across work-related apps and websites, while also taking occasional snapshots of content displayed on employees' screens. Techopedia contacted Meta for comment, but an initial email bounced back. We will continue to seek a response.

The move has already drawn criticism from privacy and ethics experts. Veith Weilnhammer, a Max Planck Fellow in Computational Psychiatry, wrote on LinkedIn: "Beyond questions about AI systems that emulate human behavior, such as their impact on the job market, privacy, and the growing commercial value of human behavioral knowledge, this raises an important societal issue: How should we govern access to human-computer interactions, and how can these data be used for public good?"

For now, the data collection is reportedly limited to the U.S., with stricter privacy rules likely to make a similar rollout more difficult in Europe.

UK Cyber Chief Warns Frontier AI Is Accelerating Exploit Discovery

Britain's top cybersecurity official is expected to warn that frontier AI models are making it easier to discover and exploit software flaws at scale, as the UK confronts a rising mix of technological disruption and geopolitical threats.

In remarks due to be delivered at the CYBERUK conference in Glasgow on Wednesday (April 23), National Cyber Security Centre chief executive Richard Horne is set to say that while AI has the potential to strengthen cyber defense, adversaries will also move quickly to weaponize the technology.

Politico reported Horne will caution that frontier AI is already "rapidly enabling discovery and exploitation of existing vulnerabilities at scale," increasing pressure on organizations to patch systems, replace legacy technology, and improve basic cyber hygiene.

Researchers said Anthropic's Mythos, for example, was too dangerous for general release because of its alleged ability to help users identify and exploit sophisticated vulnerabilities. And just like that, we've come full circle back to Anthropic.

Originally published by Techopedia.com
