News & Updates

The latest news and updates from companies in the WLTH portfolio.

Discord Sleuths Crack Anthropic's Mythos Vault: How a Simple Guess Exposed AI Security's Soft Underbelly

A private Discord channel dedicated to sniffing out unreleased AI models pulled off the unthinkable: its members accessed Claude Mythos Preview -- the very AI Anthropic deems too potent for public eyes -- on the day it was announced. No fancy exploits. Just a sharp guess at a URL, pieced together from leaked naming patterns, plus a dash of insider credentials from a third-party contractor. Bloomberg broke the story, reporting that the breach occurred through a vendor environment and that the group provided screenshots and a live demo as proof. Anthropic responded swiftly: "We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments," a spokesperson told multiple outlets, including TechCrunch.

Mythos isn't your average language model. Anthropic built it to hunt zero-day vulnerabilities across major operating systems and browsers. During tests, it unearthed flaws hidden for decades, chained exploits autonomously, and even escaped a sealed sandbox to send an email. That's why Project Glasswing limits access to about 40 vetted partners -- firms like CrowdStrike, Cisco, and even the NSA -- tasked with patching software before threats emerge. Amazon Bedrock offers it in gated preview, but only to allow-listed organizations.

The intruders? A handful of enthusiasts in that Discord server. They drew on a Mercor data breach earlier in April, which spilled Anthropic's API naming habits, as noted by Mashable. One member snagged legitimate access via their contractor job. Boom. Entry granted. They've tinkered since, building basic websites to avoid notice. "We were not using Claude Mythos for nefarious purposes," one told Bloomberg.

But here's the rub. Anthropic hyped Mythos as a cybersecurity game-changer, capable of "identifying and then exploiting zero-day vulnerabilities in every major operating system and every major web browser." Yet its own perimeter crumbled to low-tech sleuthing.
BBC highlighted the irony: a tool billed as too risky for the masses, infiltrated by Discord randos. Industry echoes the concern. The Next Web pointed out that the access happened on launch day, April 7, via guessed URLs in a contractor portal. Silicon Republic questioned Anthropic's lockdown prowess. Even Cybernews weighed in, noting the group's regular, apparently non-malicious use -- but the precedent chills.

And the fallout? Anthropic's probe continues, with no breaches beyond the vendor noted so far. Partners press on with Glasswing, applying Mythos to Firefox and beyond; Mozilla confirmed early tests found vulnerabilities, per TechCrunch. But the slip exposes broader tensions. AI firms race to cap powerful models, yet supply chains -- contractors, leaks like Mercor's -- offer backdoors. Short-term fixes: tighten vendor oversight, rotate keys, obfuscate endpoints. Long-term? Mythos itself could audit these gaps, if safely deployed. The group claims more unreleased models are within reach, hinting at persistent Discord hunts.

The irony bites hard. The AI meant to fortify digital defenses got outfoxed by pattern-matching hobbyists. Security pros now ask: if Mythos can't shield itself, what hope for the wild? Expect audits, partner scrutiny, maybe even Mythos turned inward to probe Anthropic's own code. For now, the Discord crew vibes on -- quietly coding, loudly underscoring AI's fragile fences.
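The "obfuscate endpoints" fix boils down to not deriving deployment URLs from guessable naming patterns. A minimal sketch, assuming a hypothetical URL scheme (nothing here reflects Anthropic's actual endpoints):

```python
import secrets

# A predictable scheme like this is exactly what pattern-matching
# sleuths exploit: knowing one URL lets you guess the next.
# All names here are hypothetical, not Anthropic's real scheme.
def predictable_endpoint(model: str, stage: str) -> str:
    return f"https://api.example.com/{model}-{stage}"

# Appending a random, unguessable token breaks the pattern.
# secrets.token_urlsafe draws from the OS CSPRNG, so the suffix
# cannot be inferred from other deployments' names.
def hardened_endpoint(model: str, stage: str) -> str:
    return f"https://api.example.com/{model}-{stage}-{secrets.token_urlsafe(16)}"
```

Obscure names are no substitute for authentication, of course; they only deny attackers the free head start that leaked naming habits provide.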

Mercor · Discord · Anthropic
WebProNews · 2h ago
Read update
Discord Sleuths Crack Anthropic's Mythos Vault: How a Simple Guess Exposed AI Security's Soft Underbelly

China-linked hackers targeted Mongolian government using Slack, Discord for covert communications

A previously undocumented China-aligned threat actor targeted a Mongolian government entity and used popular communication platforms such as Discord, Slack and Microsoft 365 Outlook to manage its operations and steal data, researchers have found. The group, which researchers at cybersecurity firm ESET named GopherWhisper, has been active since at least November 2023 and was discovered in January 2025 after investigators found a previously unknown backdoor on the network of a Mongolian government institution. The malware, dubbed LaxGopher, was deployed on roughly a dozen systems belonging to the organization, the Slovak cybersecurity firm said in a report on Thursday. Researchers believe the campaign likely affected dozens of additional victims, though they have not identified their locations or sectors. According to ESET, the hackers relied heavily on legitimate online services to conceal their activity, using Discord, Slack and Microsoft 365 Outlook to communicate with compromised machines and manage command-and-control infrastructure. The group deployed a range of custom-built tools written largely in the Go programming language, including loaders, injectors and backdoors designed to maintain access to targeted systems. Among the tools identified were RatGopher, BoxOfFriends, the injector JabGopher, the loader FriendDelivery and a backdoor known as SSLORDoor, researchers said. To remove stolen information from compromised networks, the attackers used a dedicated data exfiltration tool called CompactGopher, which compressed files and uploaded them to the file-sharing service File.io. ESET said the operation appears consistent with cyber espionage activity, though it did not attribute the campaign to a specific entity.
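The pattern ESET describes -- C&C and exfiltration tunneled through Discord, Slack, Outlook, and File.io -- suggests a simple first-pass defensive check: scanning server egress logs for those domains. A toy sketch, assuming a made-up one-request-per-line log format (real proxy logs differ, and any hit is a lead to triage rather than a verdict, since these services see heavy legitimate use):

```python
# Domains GopherWhisper reportedly abused; extend per published indicators.
WATCHLIST = {"discord.com", "slack.com", "file.io", "graph.microsoft.com"}

def flag_suspicious_hosts(log_lines, watchlist=WATCHLIST):
    """Return watchlisted destination hosts seen in egress logs.

    Assumes each line ends with the destination host; adapt the
    parsing to your proxy's actual log format.
    """
    seen = set()
    for line in log_lines:
        host = line.strip().split()[-1].lower()
        if host in watchlist:
            seen.add(host)
    return sorted(seen)

# Hypothetical log lines from a server that should not talk to chat apps.
logs = [
    "2026-04-23T09:01:02 srv-07 GET file.io",
    "2026-04-23T09:01:05 srv-07 GET updates.example.org",
    "2026-04-23T09:02:11 srv-07 POST discord.com",
]
print(flag_suspicious_hosts(logs))  # ['discord.com', 'file.io']
```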

Discord
therecord.media · 3h ago
Read update
China-linked hackers targeted Mongolian government using Slack, Discord for covert communications

Big defeat to big lies: Trump peddles Iran 'discord' fiction to mask US military, strategic defeat

By Press TV Strategic Analysis

After 40 days of failed military adventure against the Islamic Republic of Iran, followed by the diplomatic debacle in Islamabad with Iran calling the shots, a new reality is settling over the region, one that Washington is desperately trying to obscure. The US war machine not only failed to achieve its stated objectives - a widely acknowledged reality - but also suffered its most significant military and strategic defeat in decades. And now, unable to accept that reality, it has fallen back on its oldest weapon: the "big lie."

A defeat on two fronts

The first battlefield was military, as Americans were eager to play their much-hyped "military card," bragging about commanding the "most powerful military in the world." For over a month, the United States - backed by its most advanced naval assets, air power, and the full weight of its global and regional alliances - attempted to pressure the Iranian nation into submission or retreat. The result: a humiliating defeat that quickly revealed the limits of American power. From the strategic waters of the Persian Gulf to the skies over Yemen and Lebanon, Iran and its allies in the Axis of Resistance not only held their ground but dictated the terms of engagement, forcing the aggressors to plead for a ceasefire. By the time the guns fell silent, it was Washington, not Tehran, that was begging for a ceasefire - not once, but twice. The first request came immediately after the imposed war had run 40 days, when Washington agreed to Iran's ten-point proposal. The second came as a unilateral extension earlier this week, wrapped in the language of magnanimity but born of necessity. It was not a sign of goodwill. It was a strategic retreat. The negotiating table has proven no kinder to the United States.
Time and again, American officials have sought to frame the post-war dynamic as one requiring Iranian concessions: excessive limits on the missile program, the removal of enriched uranium, and the dismantling of ties with the resistance front. Yet every single one of these demands has been met with Iranian steadfastness - backed overwhelmingly by public opinion. The latest poll conducted by Iran's IRIB Research Center found an overwhelming majority of Iranians reject each of these core American conditions. The survey, conducted during and after the war, revealed that 85.7 percent of respondents said Iran should not accept restrictions on its missile industry, while 82.6 percent opposed the removal of 400 kilograms of enriched uranium from the country, and 79.4 percent rejected shutting down uranium enrichment as a US condition. Public opposition extends to core issues of sovereignty and regional strategy as well: 73.7 percent of Iranians said the country should not accept unrestricted passage of ships through the strategic Strait of Hormuz, and 68.1 percent opposed severing cooperation with the Resistance Front. With this level of popular support, the Iranian side - which clearly holds the upper hand - has no reason to offer any concessions. The opposing side has won nothing: not on land, not at sea, not at the table. And in the end, it is the winner who takes it all.

The manufactured "internal disagreement"

Having lost all military and strategic leverage, Washington has now - quite unsurprisingly and predictably - resorted to its trademark practice: the fabrication of lies. In this context, that means peddling the notion of "internal discord" within Iran's leadership. The narrative pushed by American policy wonks suggests that senior Iranian figures are divided over the future of negotiations and the continuation of the imposed war. But this is not intelligence. It is not journalism.
It is propaganda straight from the Goebbels playbook: repeat a lie loudly enough, and public opinion will eventually accept it as truth. The claim is demonstrably false. Iran's silence in the face of repeated enemy overtures is not a sign of weakness or infighting. On the contrary, it is a calculated strategic posture. For decades, the United States operated on a comfortable assumption: that Iran's reactions were predictable - a known diplomatic rhythm that could be anticipated and exploited. That era is now over. Iran has entered a new phase of asymmetrical engagement with the enemy, one defined by unpredictability, strategic patience, and an absolute refusal to be read before entering the room. This unpredictability has left the enemy bewildered, and that bewilderment is palpable. When the US Secretary of the Navy resigns in the midst of a naval confrontation - involving the most expensive and strategically vital branch of the entire American military - it signals something far deeper than routine political turnover. It signals a deep and irreparable fracture at the very heart of the US decision-making apparatus. More than that, it points to a system imploding from within.

Strategic silence as a weapon

Perhaps nothing has unnerved Washington more than Iran's silence regarding reports of the next round of negotiations in Islamabad. By refusing to engage with the enemy's narrative, Iran has denied the US the very thing it needs most: a predictable opponent. Every American strategy - whether war plan or diplomatic overture - was built on decades of familiarity with Iranian behavior. That familiarity is now worthless. The silence is not an absence of strategy; it is the strategy, and Iran has mastered it. If there remains any doubt about Iran's position, the Iranian people have settled it. The IRIB poll is not merely a dataset; it is a political document and a telling statement.
When 66 percent of Iranians believe their country is the decisive victor of the war, when 87.2 percent rate the performance of Iran's armed forces as strong or very strong, and - most crucially - when 57.7 percent believe the US needs a ceasefire more than Iran does, something profound has shifted. These figures mark a staggering reversal of the familiar power dynamic. They spell out, in unmistakable terms, that a new dynamic is at work and the old rules no longer apply. These numbers are not abstract. They come from a population that endured 40 days of airstrikes and bombings, gave over 3,000 martyrs, and saw its homes destroyed. And that same population has delivered a clear message to its leaders: do not compromise our dignity. Do not concede our rights. We prefer war over humiliation.

The biggest defeat in a generation

The United States has not lost a battle here or there. It has lost a major war. It has lost its strategic footing. It has lost the initiative. And now, stripped of all credible leverage, it has lost whatever was left of its standing on the global stage. The fake news about internal Iranian disagreements is not a sign of American confidence; it is a symptom of American desperation. For 40 days, the world watched as the self-styled most powerful military in history was held to a standstill. In the aftermath, that stalemate has hardened into a new strategic reality: Iran and the Axis of Resistance are more united than ever, Iran's hand is stronger than ever, and the US has nothing to show for its aggression except a string of resignations and recycled lies. The big lie will not change the big defeat. And history will record everything.

Discord
PressTV · 6h ago
Read update
Big defeat to big lies: Trump peddles Iran 'discord' fiction to mask US military, strategic defeat

Anthropic's locked-down Mythos leaks

The Rundown: Access to Anthropic's Mythos model reportedly leaked into a Discord group within days of launch, after users guessed the company's deployment URL and naming scheme using patterns exposed in the recent Mercor breach.

Why it matters: The first alleged unauthorized use of the AI model that had the White House and others calling emergency meetings didn't come from China, Russia, or another rival nation -- it came from a random Discord group. Not a great start, and the problem only compounds as partner access grows and the models get more dangerous.

Discord · Mercor · Anthropic
The Rundown AI · 7h ago
Read update
Anthropic's locked-down Mythos leaks

ESET Research discovers new China-aligned group, GopherWhisper: It abuses messaging services Discord, Slack, and Outlook to spy

* ESET Research has uncovered a new China-aligned APT group, named GopherWhisper, that targets governmental institutions in Mongolia.
* GopherWhisper leverages Discord, Slack, Microsoft 365 Outlook, and file.io for command and control (C&C) communication and exfiltration.
* The group's toolset includes custom Go-based backdoors, injectors, exfiltration tools, the loader FriendDelivery, and a C++ backdoor.
* ESET analyzed C&C traffic from the attacker's Slack and Discord channels, gaining information about the group's internal operations and post-compromise activities.

BRATISLAVA, Slovakia, April 23, 2026 (GLOBE NEWSWIRE) -- ESET researchers have discovered a previously undocumented China-aligned APT group that they named GopherWhisper. The group wields a wide array of tools, mostly written in Go, that use injectors and loaders to deploy and execute various backdoors in its arsenal. In the observed campaign, the threat actors targeted a governmental institution in Mongolia. GopherWhisper abuses legitimate services, notably Discord, Slack, Microsoft 365 Outlook, and file.io, for command and control (C&C) communication and exfiltration. ESET discovered the group in January 2025, when it found a previously undocumented backdoor, which ESET researchers named LaxGopher, in the system of a government institution in Mongolia. Digging deeper, they uncovered several more malicious tools, mainly additional backdoors, all deployed by the same group. The majority of these tools were written in Go, and their collective aim was cyberespionage. According to ESET telemetry, the victim impacted by GopherWhisper backdoors is a Mongolian governmental institution. By analyzing the C&C traffic from the attacker-operated Discord and Slack servers, ESET estimates that dozens of other victims besides the Mongolian institution were also affected, though it has no information about their geolocation or verticals.
Of the seven tools discovered, four are backdoors - LaxGopher, RatGopher, and BoxOfFriends, written in Go, and SSLORDoor, written in C++. ESET also found an injector (JabGopher), a Go-based exfiltration tool (CompactGopher), and a malicious DLL file (FriendDelivery). Since the malware ESET found bore no code similarities to any known threat actor's tools, and there was no overlap with the Tactics, Techniques, and Procedures (TTPs) used by any other group, ESET attributed the tools to a new group. Researchers chose the name GopherWhisper because the majority of the group's tools are written in the Go programming language, which has a gopher as its mascot, and because of the filename of whisper.dll, which is side-loaded. GopherWhisper is characterized by extensive use of legitimate services such as Slack, Discord, and Outlook for C&C communication. "During our investigation, we managed to extract thousands of Slack and Discord messages, as well as several draft email messages from Microsoft Outlook. This gave us great insight into the inner workings of the group," says ESET researcher Eric Howard, who discovered the new threat group. "Timestamp inspection of the Slack and Discord messages showed us that the bulk of them were being sent during working hours, i.e. between 8 a.m. and 5 p.m., which aligns with China Standard Time. Furthermore, the locale for the configured user in Slack metadata was also set to this time zone. We therefore believe that GopherWhisper is a China-aligned group," explains Howard.
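The working-hours heuristic Howard describes -- shifting message timestamps into a candidate time zone and checking how many land in an 8 a.m. to 5 p.m. window -- can be sketched in a few lines. The timestamps below are hypothetical stand-ins, since the extracted dataset is not public:

```python
from datetime import datetime, timezone, timedelta

# Hypothetical message timestamps in UTC, standing in for the
# Slack/Discord metadata ESET extracted.
utc_times = [
    datetime(2025, 3, 3, 1, 12, tzinfo=timezone.utc),
    datetime(2025, 3, 3, 3, 40, tzinfo=timezone.utc),
    datetime(2025, 3, 4, 6, 5, tzinfo=timezone.utc),
    datetime(2025, 3, 5, 8, 55, tzinfo=timezone.utc),
    datetime(2025, 3, 6, 2, 30, tzinfo=timezone.utc),
]

def working_hours_fraction(times, utc_offset_hours, start=8, end=17):
    """Fraction of timestamps falling in [start, end) local time
    for a candidate UTC offset."""
    tz = timezone(timedelta(hours=utc_offset_hours))
    hours = [t.astimezone(tz).hour for t in times]
    return sum(start <= h < end for h in hours) / len(hours)

# China Standard Time is UTC+8. A high fraction is weak, corroborating
# evidence of an operator's schedule, not proof of location.
print(working_hours_fraction(utc_times, 8))  # 1.0
print(working_hours_fraction(utc_times, 0))  # 0.2
```

ESET paired this signal with the configured Slack locale, which is why a single heuristic like this should only ever be one input among several.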
Based on this ESET investigation, the group's Slack and Discord servers were first used to test the functionality of the backdoors and later, without clearing the logs, also used as C&C servers for the LaxGopher and RatGopher backdoors on multiple compromised machines. In addition to the Slack and Discord communications, ESET researchers were able to extract email messages used for communication between the BoxOfFriends backdoor and its C&C via the Microsoft Graph API. ESET Research's Eric Howard presented these findings at the Botconf 2026 conference. For a more detailed analysis of the new GopherWhisper threat group and its arsenal, check out the latest ESET Research blogpost and white paper "GopherWhisper: A burrow full of malware" on WeLiveSecurity.com. Make sure to follow ESET Research on Twitter (today known as X), BlueSky, and Mastodon for the latest news from ESET Research.

About ESET

ESET® provides cutting-edge cybersecurity to prevent attacks before they happen. By combining the power of AI and human expertise, ESET stays ahead of emerging global cyberthreats, both known and unknown - securing businesses, critical infrastructure, and individuals. Whether it's endpoint, cloud, or mobile protection, our AI-native, cloud-first solutions and services remain highly effective and easy to use. ESET technology includes robust detection and response, ultra-secure encryption, and multifactor authentication. With 24/7 real-time defense and strong local support, we keep users safe and businesses running without interruption. The ever-evolving digital landscape demands a progressive approach to security: ESET is committed to world-class research and powerful threat intelligence, backed by R&D centers and a strong global partner network. For more information, visit www.eset.com, or follow our social media, podcasts, and blogs.

Discord
The Manila Times · 10h ago
Read update
ESET Research discovers new China-aligned group, GopherWhisper: It abuses messaging services Discord, Slack, and Outlook to spy


GopherWhisper APT group hides command and control traffic in Slack and Discord - IT Security News

Attackers continue to lean on everyday collaboration platforms to hide command and control traffic inside normal enterprise noise. A newly identified China-aligned APT group pushes that trend further, running its operations through Slack workspaces, Discord servers, Outlook drafts, and the file.io sharing service.

GopherWhisper toolset overview

ESET researchers have named the group GopherWhisper and tied it to an intrusion at a Mongolian governmental entity. The name draws on two elements: most of the group's tooling ...

Discord
IT Security News - cybersecurity, infosecurity news · 10h ago
Read update
GopherWhisper APT group hides command and control traffic in Slack and Discord - IT Security News


Anthropic Mythos AI shock: disturbing leak claim sparks urgent probe

Anthropic's Mythos AI is at the centre of a growing security storm after reports that a small group of unauthorised users quietly gained access to the powerful cybersecurity model via a third-party vendor. Anthropic recently unveiled Mythos as a specialised Claude-based model designed to help major organisations detect software vulnerabilities and respond to cyber threats more quickly than human teams. The system is being trialled under Project Glasswing, an initiative that gives select partners, including Apple and other large technology and financial firms, access to the Claude Mythos Preview for defensive security work. Anthropic has said Mythos can find thousands of high-severity bugs across major operating systems and web browsers, underscoring why tight control over access is seen as critical.

According to Bloomberg and other outlets, a small private group on Discord began using Mythos on the very day it was publicly announced. Members reportedly combined credentials linked to a contractor working with Anthropic with open internet sleuthing to locate and access the Claude Mythos Preview environment. Bloomberg's reporting suggests the group shared screenshots and even a live demonstration to support their claims, while avoiding overtly cybersecurity-related prompts in an apparent effort not to trigger alarms. Anthropic has confirmed it is investigating the incident, telling TechCrunch: "We're investigating a report claiming unauthorised access to Claude Mythos Preview through one of our third-party vendor environments," and adding that there is currently no evidence its own systems have been compromised. The company also says there is no sign that activity went beyond the affected vendor, but the scare is likely to intensify scrutiny of how unreleased, high-risk AI models are tested and secured before wider deployment.
For now, Anthropic Mythos AI remains in restricted preview, yet the alleged leak shows that even tightly controlled frontier systems can be probed and exposed at the edges, a warning that defensive AI may only be as strong as the weakest partner handling it.

Discord · Anthropic
punemirror.com · 12h ago

Anthropic's dangerous AI Mythos: Unauthorized access likely since day one

Anthropic's most powerful AI model is said to be so dangerous that only selected companies are given access. A group of unknown individuals has reportedly bypassed those restrictions, according to a new report.

The group allegedly gained unauthorized access to Anthropic's Claude Mythos Preview without the AI company noticing, reports the financial news agency Bloomberg, which was shown the tool in use. The unknown individuals communicate in a private Discord channel and have previously focused on hunting down unpublished AI models. They have not used Mythos for cybersecurity tasks; instead, they are testing how the model performs on harmless jobs, such as building a website.

Anthropic introduced Mythos two weeks ago, stating that the model is so dangerous it would only be made available to companies working on IT security. The model has already identified thousands of high-risk zero-day vulnerabilities, including in all major operating systems and every major browser. It is also significantly better than earlier models at developing working exploits for such vulnerabilities, sometimes chaining several of them together. Under "Project Glasswing", the industry is now meant to patch vulnerabilities found this way before other AI models become available that would let criminals find, and above all exploit, vulnerabilities far more easily.

According to Bloomberg, the unauthorized access began on the very day Anthropic introduced the tool. The group used several tactics: one member, posing as an employee of a service provider, gained access to Anthropic's tools, and the group had previously made an "educated guess" at Mythos's internet address based on the pattern of other Anthropic URLs. The individuals have been using Mythos regularly ever since, just as they have other models before it.

Their intention, however, is not to wreak havoc. Bloomberg's report relies throughout on one member of the group, who is kept anonymous. Anthropic has pledged to investigate the claim while downplaying the extent of the access: there are no indications that it went beyond the third-party environment or had any impact on Anthropic's own systems.

The discovery shows how difficult it may be for the company to keep access to Mythos under wraps. At its introduction, the model was described as so powerful that it alarmed more than just the IT security industry. Governments in a growing number of countries are grappling with the significance of the new tool, and reviews have been ordered, particularly in the financial sector. If Mythos falls into the wrong hands, the consequences for cybersecurity could be devastating.
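The "educated guess" reportedly worked because model endpoints followed a predictable naming scheme. As a hypothetical illustration (the host, path pattern, names, and stages below are all invented, not Anthropic's real conventions), enumerating every candidate URL implied by a known convention takes only a few lines, which is why predictable naming is a weak form of secrecy:

```python
from itertools import product

# Invented naming convention, inferred from previously seen endpoints of the
# form "https://vendor.example.com/models/{name}-{stage}". All values are
# illustrative assumptions, not real Anthropic infrastructure.
KNOWN_PATTERN = "https://vendor.example.com/models/{name}-{stage}"

names = ["mythos", "claude-mythos"]
stages = ["preview", "beta", "internal"]

# Cross every plausible name with every plausible release stage.
candidates = [
    KNOWN_PATTERN.format(name=n, stage=s) for n, s in product(names, stages)
]

for url in candidates:
    print(url)
```

With two names and three stages the convention yields only six candidates to probe, which is the core of the problem: the guessing space of a consistent naming scheme is tiny.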

Discord · Anthropic
heise online · 16h ago

Anthropic's Mythos Breach: How Hackers Cracked Open AI's Most Dangerous Cyberweapon on Day One

A shadowy crew of AI enthusiasts pierced the defenses around Anthropic's Mythos on launch day. Boom. Access granted through a sloppy third-party vendor. Now this powerhouse model -- designed to hunt vulnerabilities across every major operating system and browser -- sits in unauthorized hands. TechCrunch broke the story, citing Bloomberg's reporting on the intrusion.

Mythos forms the core of Project Glasswing, Anthropic's bid to arm elite security teams with AI that autonomously crafts exploits. Think zero-days in Windows, macOS, Chrome, Firefox -- you name it. The company rolled it out to just 40 vetted partners, including Apple and Amazon, precisely because it could flip from defender to destroyer in seconds.

A person familiar with the matter told Bloomberg the group, huddled in a private online forum and Discord channel, sniffed out the model's URL pattern from prior leaks involving contractor Mercor. They leveraged a contractor employee's credentials and logged in. Screenshots. Live demos. Proof delivered. And they've been poking around ever since. Not launching attacks, they claim. Just tinkering with the forbidden toy. "The group in question is interested in playing around with new models, not wreaking havoc with them," the source insisted to Bloomberg.

But capabilities like these don't stay playground-bound. Mozilla already tapped Mythos Preview directly from Anthropic to patch 271 Firefox bugs in its latest release. Firefox CTO Bobby Holley called it a "firehose of bugs," forcing teams to scramble with resources pulled from elsewhere. Wired detailed how this AI shifts vulnerability hunting into overdrive, exposing flaws humans miss -- but demanding discipline to wrangle the flood.

Anthropic moved fast. "We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments," a spokesperson said. No signs of core system compromise, they added.

Yet whispers on X suggest the breach hit multiple unreleased models too. One post from @ns123abc laid it bare: hackers guessed URLs post-Mercor leak, slipped in via lingering contractor creds. The whole pipeline exposed. Posts from @coinbureau and @LarkDavis amplified the alarm, noting restrictions to 40 firms exactly to curb cyber risks.

This isn't isolated sloppiness. The National Security Agency deploys Mythos despite Pentagon labels tagging Anthropic as a supply-chain risk -- a feud spilling into court. Axios reported wider NSA uptake, prioritizing cyber edge over bans. UK counterparts route through the AI Security Institute. Meanwhile, the breach spotlights vendor weak links in AI's high-stakes chain. Contractors like Mercor, hit earlier, leak naming conventions. Guesses turn into gateways. What if next time it's nation-states, not forum dwellers?

Broader ripples hit fast. CNBC aired segments on the leak during 'Fast Money,' with Kate Rooney flagging Silicon Valley tremors. The Financial Times confirmed Anthropic's probe into the 'powerful' model handed to a trusted few. Reddit threads in r/ClaudeAI and r/ClaudeCode buzzed with leaked excerpts, underscoring containment struggles for potent tech.

So where does this leave enterprise AI security? Tools like Mythos promise to outpace human hackers, spotting multi-step chains others ignore -- like a 27-year-old OpenBSD flaw or FreeBSD exploits. But day-one cracks erode trust. Partners demand ironclad isolation; regulators eye tighter controls. Anthropic's "safe AI" badge takes a hit, even as it sues DoD over blacklists. Vendors scramble to audit creds. And those forum users? Still inside, testing limits. One wrong prompt away from chaos.

Anthropic · CHAOS · Discord · Mercor
WebProNews · 20h ago

Rogue Group Gains Access to Anthropic's Dangerous New Mythos AI

Remember Claude Mythos, Anthropic's new AI model that it hyped as being so powerful that it was too dangerous to release to the public? Well, it's already been broken into, according to new reporting from Bloomberg. A small group of Discord users gained access to a preview version of Mythos, a source told the outlet, on the same day Anthropic announced it would be exclusively releasing the model to a select ring of companies.

"We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments," a spokesperson for Anthropic told Bloomberg in a statement. The company added that it hasn't found any evidence the unauthorized access extended beyond that vendor environment or affected its own systems.

The group supposedly doesn't have any nefarious intentions. It has been regularly using Mythos since gaining access to it, according to Bloomberg, though only for non-cybersecurity-related purposes. The source described the group as being interested in "playing around" with new models, rather than wreaking havoc. But their alleged feat does raise the alarming possibility that other, less scrupulous actors could have gotten their hands on Mythos without Anthropic knowing.

According to Bloomberg's source -- described only as a person familiar with the matter -- the users are part of a private Discord server dedicated to digging up information on unreleased AI models. They gained access by making an educated guess about where Mythos was stored online, based on how Anthropic has stored its other models -- some details of which were revealed in a recent data breach at an AI startup that works with large AI companies. The source also claimed to have permission to access Anthropic tech used to evaluate its models through another company that did contract work for Anthropic.

No serious harm seems to have come from the breach, but it's a bad look for Anthropic, which earned brownie points for holding off on unleashing Mythos to the public. It instead chose to give access to around forty organizations, including tech giants like Apple, Microsoft, and Amazon. The Dario Amodei-led company has described Mythos as a cybersecurity skeleton key cum digital WMD that can break into "every major operating system and every major web browser when directed by a user to do so." In tests, Anthropic said Mythos was even able to break out of its sandboxed computing environment and use an exploit to gain internet access so it could message a researcher about its accomplishment.

Whether or not Mythos's formidable reputation is warranted, it's put world governments on watch: leaders from the European Union, which does not have access to the model, have met with Anthropic at least three times since Mythos was released, the New York Times reported, while the UK's AI minister felt compelled to address its capabilities by vowing the country would take steps to protect "critical national infrastructure."

Discord · Anthropic
DNyuz · 22h ago


Report: Discord Group Uses Claude's Supposedly Secret Mythos

An unauthorized group of users gained access to the Claude Mythos Preview artificial intelligence model and has used it regularly since the day AI firm Anthropic revealed the model's existence while pronouncing it too dangerous to release to the public, reports Bloomberg.

Anthropic made a splash earlier this month when it said it would reserve access for a select group of companies joined together under "Project Glasswing," with the understanding that they would use the model to find and fix security vulnerabilities before hackers get access to equally powerful tech. Members include Nvidia, Apple, Amazon and Cisco (see: Anthropic Calls Its New Model Too Dangerous to Release).

A source told Bloomberg the unauthorized users belong to a private Discord channel dedicated to unreleased models. An apparent member of the group told the newswire that users have not used Mythos to hunt for new exploits. Anthropic has touted the vulnerability-finding properties of Mythos in a publicity campaign that has received some outside validation, such as from the AI Security Institute in Great Britain, which found the model to be "a step up over previous frontier models."

The source told Bloomberg the Discord group deployed a mix of tactics to access Mythos, including access the source has as a third-party contractor for Anthropic. The group "made an educated guess about the model's online location based on knowledge about the format Anthropic has used for other models." The person said such data was leaked in a recent breach at AI startup Mercor (see: Mercor Breach Linked to LiteLLM Supply-Chain Attack). An Anthropic spokesperson told the newswire that it is investigating the matter but has no evidence of unauthorized Mythos use beyond the third party's IT environment.

The source told Bloomberg the Discord group also has access to other unreleased Anthropic models. Anthropic's release of Mythos to select partners received a rejoinder from rival firm OpenAI, which days later released GPT-5.4-Cyber with the stated intention of making it "as widely available as possible." OpenAI said it will rely on user identity verification and "trust signals" to safeguard its vulnerability-seeking AI model from being put to bad uses (see: OpenAI Touts Wider Access to Its New Cyber Model).
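The two access-control philosophies described here -- Anthropic's fixed partner allow-list versus OpenAI's identity-verification-plus-trust-signals approach -- can be contrasted in a minimal sketch. Everything below (the organization IDs, the `trust_score` aggregate, the 0.8 threshold) is invented for illustration; neither company has published its actual gating logic:

```python
from dataclasses import dataclass

# Hypothetical pre-vetted partner IDs, Glasswing-style.
ALLOW_LIST = {"org-cisco", "org-nvidia", "org-apple"}

def allow_list_gate(org_id: str) -> bool:
    """Closed-door gating: access only for organizations vetted in advance."""
    return org_id in ALLOW_LIST

@dataclass
class Requester:
    org_id: str
    identity_verified: bool
    trust_score: float  # invented 0-1 aggregate of "trust signals"

def trust_signal_gate(r: Requester, threshold: float = 0.8) -> bool:
    """Open-door gating: verified identity plus a sufficient trust score."""
    return r.identity_verified and r.trust_score >= threshold

# The allow-list denies anyone not on it; the trust gate admits anyone who
# clears the bar -- a wider front door with a softer lock.
print(allow_list_gate("org-unknown"))                          # False
print(trust_signal_gate(Requester("org-unknown", True, 0.9)))  # True
```

The trade-off the two approaches embody is scale versus certainty: an allow-list is only as current as its vetting process, while a trust-signal gate is only as strong as the signals it scores.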

Anthropic · Discord · Mercor
DataBreachToday · 1d ago

Anthropic's new A.I. model is triggering global alarms

Anthropic, the prominent artificial intelligence startup, is grappling with a significant security breach involving its most advanced technology. A small collective of unauthorized individuals reportedly gained access to Claude Mythos, a new model so potent that the company had initially restricted its release due to concerns over its capabilities.

The breach, which was first reported by Bloomberg News on Wednesday, involved a group of users on a private Discord forum. According to reports, these individuals did not use sophisticated hacking techniques to bypass Anthropic's internal systems. Instead, they managed to access a "preview" version of the model by correctly identifying its online location within a third-party vendor's environment, utilizing knowledge of the company's previous naming conventions and hosting patterns.

The incident is particularly sensitive because Anthropic has spent weeks describing Mythos as a revolutionary but dangerous tool. The model has demonstrated an unprecedented aptitude for identifying and exploiting "zero-day" software vulnerabilities -- flaws unknown to the software's creators -- that have existed for decades in major operating systems and web browsers. In an effort to manage these risks, Anthropic had launched "Project Glasswing," a highly controlled initiative that granted access only to a select group of approximately 40 partners, including federal agencies and major financial institutions. The goal was to use the AI defensively to patch critical infrastructure before it could be targeted by malicious actors.

"We are investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments," an Anthropic spokesperson said in a statement. While the group that accessed the model reportedly claimed they were motivated by curiosity rather than malice, the breach has intensified the debate over whether such powerful "frontier" models can ever be truly secured.

Some experts suggest that the very act of restricting the model may have inadvertently created a new kind of target for digital hobbyists and hackers alike. Dario Amodei, the chief executive of Anthropic, has previously warned that the rapid advancement of AI coding capabilities represents a fundamental shift in the cybersecurity landscape. He noted that these systems are now identifying weaknesses "that humans have missed" for years.

The leak has also drawn the attention of the federal government. Recent memos indicate that the Office of Management and Budget has been working to provide federal agencies with access to the model, provided that "the appropriate guardrails and safeguards are in place" to prevent misuse.

As the company works to close the loophole that allowed the unauthorized access, the incident serves as a stark reminder of the challenges inherent in the AI arms race. For many in the industry, the breach confirms a difficult reality: as AI becomes more capable of breaking into systems, the systems holding the AI themselves become increasingly vulnerable.

Discord · Anthropic
End Time Headlines · 1d ago

Discord group says it accessed Anthropic's unreleased Claude Mythos

An anonymous group of Discord users says it hacked its way into accessing Claude Mythos Preview, the new AI model Anthropic claims is too powerful for a public release.

Anthropic says Claude Mythos "is capable of identifying and then exploiting zero-day vulnerabilities in every major operating system and every major web browser," and has granted access to the model to a select group of partners via an initiative called Project Glasswing. The AI company said this invite-only approach would let tech leaders "secure the world's most critical software." But it might need to pay more attention to its own software security.

As Bloomberg reports, the Discord users didn't gain access through a sophisticated hack, but by guessing the online location for the model based on past Anthropic naming conventions -- as revealed in the data breach at Mercor, an AI startup, earlier this month. Once they identified where to access Claude Mythos, the group had to employ additional tactics: one member already had privileged access as a worker at a third-party contractor for Anthropic, Bloomberg reports.

The group was part of a private Discord channel that focuses on hunting information about unreleased models. A member of the group told Bloomberg that they were not using Claude Mythos for nefarious purposes, but for tasks like building simple websites. However, they also claimed to have access to even more unreleased Anthropic models. The group provided enough evidence to convince Bloomberg they had indeed breached Anthropic's security.

Anthropic confirmed in a statement to Bloomberg it was aware of the claim and investigating. At this time, there is no indication that Claude Mythos has been breached by other unauthorized parties. Still, given that Anthropic described Claude Mythos as a paradigm-shifting security threat that could "reshape cybersecurity" as we know it, any unauthorized access is -- to say the least -- concerning.

Mercor · Anthropic · Discord
Mashable · 1d ago

What Is an NFT Discord Community and Why Does It Matter?

If you have researched an NFT collection for more than a few minutes, you have probably been told to check its Discord. Discord is a messaging platform built around servers, which are communities organized into channels by topic. For NFT projects, the Discord is typically where the team communicates with holders, where holders talk to each other, and where the real-time health of a community is most visible. Understanding what to look for in an NFT Discord tells you more about a collection than most other signals.

What Discord Is and How NFT Projects Use It

Discord servers are organized into text and voice channels. An NFT project's Discord typically includes channels for announcements from the team, general conversation among holders, channels for trading and marketplace discussion, channels for showing off specific dogs or traits, channels for events, and sometimes channels for working groups or subsets of the community with specific interests. The team uses the Discord to push information to the community: announcements, event details, marketplace updates, responses to community questions. The community uses it to build relationships with other holders, share content, discuss the collection, and hold the team accountable when things are not going well.

A healthy Discord has regular activity across multiple channels, a responsive team presence, and conversations happening between holders that are not just about price. An unhealthy Discord has sporadic activity, no team presence, and conversations that are either very quiet or dominated by price discussion and complaints.

Why Discord Size Is Not the Whole Story

Discord member count is easy to inflate. Airdrop campaigns, whitelist incentives, and bot activity can produce large member counts that have no relationship to genuine community engagement. A server with 100,000 members that has five messages a day in the general channel is less valuable than a server with 15,000 members that has consistent daily activity across multiple channels. What matters is engagement quality: are people actually talking to each other, are they sharing content about the collection, is the team present and responsive, and does activity continue when the floor is down? That last question is the most revealing. Activity that holds through a flat or declining market indicates a community built on something beyond price speculation.

The Doginal Dogs Discord

The Doginal Dogs Discord has over 15,000 members, grown organically without airdrop incentives or whitelist campaigns. Members joined because they wanted to be part of the community, not because they were incentivized to click a link. Co-founders Barkmeta and Shibo are accessible to the community through the daily broadcast on the Crypto Spaces Network, which functions as a live extension of the Discord. Holders who want to ask the founders a question directly have a mechanism for doing so every single day. That level of founder accessibility is unusual and contributes significantly to the community's trust in the project. The Discord activity has held through the quiet market periods of 2024 and the volatility of 2025. The founders' consistent broadcast presence means the community never experienced the silence that typically precedes a project going inactive. There was always something happening, always a reason to check in.

What Discord Can and Cannot Tell You

A Discord community can tell you how engaged the current holder base is, how accessible the founding team is, and whether community activity is tied to price or to something more durable. It cannot tell you whether the floor price will go up or down, whether the team will deliver on any future plans, or whether the collection will be relevant in five years. It is one signal among many, but it is one of the more honest signals available, because it is harder to fake sustained daily engagement than it is to fake a floor price or a Twitter follower count.

A free starter dog and access to the Doginal Dogs community is available at doginaldogs.com.

Disclosure: This article is sponsored by Doginal Dogs. All claims about the Doginal Dogs community are sourced from documented project records. Digital assets involve risk. Nothing here is financial advice.
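The member-count-versus-activity point is easy to make concrete. A rough per-capita metric -- daily messages per thousand members, a made-up proxy for illustration (the 450-messages-a-day figure for the smaller server is an assumption; the article gives no number) -- shows why the smaller server can be the healthier one:

```python
def engagement_per_thousand(members: int, daily_messages: int) -> float:
    """Daily messages per 1,000 members -- a crude activity-quality proxy."""
    return daily_messages * 1000 / members

# The article's comparison: a huge but quiet server vs. a smaller, active one.
big_quiet = engagement_per_thousand(100_000, 5)      # 5 msgs/day, 100k members
small_active = engagement_per_thousand(15_000, 450)  # assumed 450 msgs/day

print(big_quiet, small_active)
```

By this measure the 15,000-member server is hundreds of times more active per head, which is the sense in which raw member count misleads.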

Discord
TechBullion · 1d ago

Unauthorized users breach Anthropic's restricted Mythos AI model

A small group of unauthorized users gained access to Anthropic's new AI model Claude Mythos, Bloomberg reports. Anthropic considers Mythos powerful enough to enable dangerous cyberattacks, which is why the company only makes it available to select partners like Apple, Amazon, and Cisco through its "Project Glasswing" program. The users, members of a private Discord channel, got in on the day of the announcement. They pulled it off using the access credentials of a member who works as a contractor for Anthropic, along with publicly available information from a data leak at AI startup Mercor. According to Bloomberg, the group didn't use Mythos for cyberattacks but for harmless tasks like building simple websites for testing. The source says the group also has access to a number of other unreleased Anthropic AI models. The company says it's investigating the incident. So far, there's no indication that the access extended beyond the external contractor's environment or that Anthropic's own systems were compromised.

Anthropic · Mercor · Discord
THE DECODER · 1d ago

Anthropic Investigates Security Claim Over Unauthorized Access to Claude Mythos Model

Anthropic is investigating a claim that a small group of people gained unauthorized access to its Claude Mythos model, a cybersecurity tool designed to identify and exploit vulnerabilities in major operating systems and web browsers. The company confirmed it's looking into reports that unauthorized users accessed the Mythos preview through a third-party vendor environment, according to a statement provided to Bloomberg and reported by multiple outlets, including The Guardian and The Verge. Anthropic stated that it has found no evidence the unauthorized access impacted its systems or extended beyond the third-party vendor's environment.

The Mythos model, part of Anthropic's Project Glasswing initiative, has been made available to a select group of companies, including Apple, Nvidia, Google, Amazon Web Services, and Microsoft, for testing purposes. The company has warned that Mythos could pose significant cybersecurity risks if misused, describing it as capable of enabling cyber-attacks when directed by a user to exploit vulnerabilities.

According to Bloomberg, the group that accessed the model consists of a "handful" of individuals who gained access on the same day Mythos was announced as being released to initial vendor partners. The group reportedly used a combination of a contractor's access and commonly used internet sleuthing tools to locate and access the model, with one member identified as a worker at a third-party contractor for Anthropic. Members of the group are part of a Discord channel focused on uncovering information about unreleased AI models and have been using Mythos regularly since gaining access, providing screenshots and a live demonstration to Bloomberg as evidence.

The group told Bloomberg they are interested in "playing around" with the technology rather than causing harm, and have not run cybersecurity prompts designed to exploit vulnerabilities. Anthropic continues to investigate the incident and has not released further details about the third-party vendor involved or the specific methods used to gain access.

Anthropic · Discord
News Directory 3 · 1d ago

Anthropic's New Mythos Model Reportedly Accessed By Unauthorized Users

In early April, Anthropic announced its latest Mythos model, saying it would remain exclusive to select tech companies for cybersecurity purposes. Anthropic has now confirmed it's actively investigating an incident in which a group claims to have gained unauthorized access to Mythos.

A Bloomberg report, citing anonymous sources, documentation, and examples of Mythos up and running, alleges that a group of users accessed the Mythos model without Anthropic's authorization. Mythos is said to be capable of exploiting vulnerabilities in "every major operating system and every major web browser," if the user intends to do so, according to Anthropic. At launch, Anthropic claimed to have found "thousands of high-severity vulnerabilities" in everyday software. Yesterday, Mozilla claimed to have found 271 vulnerabilities within Firefox through its use of Mythos.

Anthropic previously said it would restrict access to the model to 11 tech companies through its Project Glasswing program. Restricting users means software makers can fix any identified software issues before bad actors gain access to similar AI models. However, that exclusivity may not have been as strong as first thought, with this group of users, who talk in a private Discord group, claiming to have had access since day one. If true, they've had access to the software for over two weeks.

The group told Bloomberg that it accessed the tool through a member's third-party contractor status with Anthropic. It also used tools typically employed by cybersecurity researchers, along with knowledge of where Anthropic hosts other models, to better predict where Mythos would sit within its systems.

A spokesperson for Anthropic told Bloomberg, "We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments." It says there's currently no evidence that access went beyond the vendor's own tools.

Speaking with Bloomberg, the group says it doesn't intend to cause any damage with its access to Mythos. The same may not be true of other groups trying to gain access to Mythos themselves.

Anthropic · Discord
PCMag UK · 1d ago