News & Updates

The latest news and updates from companies in the WLTH portfolio.

Anthropic Mythos Develops Into Insignificant Outcome * The Register

Anthropic's Mythos model is designed to discover software vulnerabilities, yet its release has stirred concern. Initially introduced under the Project Glasswing initiative, the model was restricted to select organizations for vulnerability assessment. Recent developments, however, reveal that unauthorized access to Mythos occurred, heightening cybersecurity concerns.

Unauthorized Access Incident

On Wednesday, an Anthropic representative confirmed that individuals outside the Glasswing partners might have accessed the Mythos model. This access was not through Anthropic's authorized production API. The spokesperson stated, "We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments." The third-party vendor, linked to Anthropic's model development, has not been publicly identified. According to Bloomberg, a small group exploited their knowledge of the model's online location, derived from prior leaks, to gain access.

Mercor Data Breach

This unauthorized access coincided with a data breach at Mercor, an AI staffing firm that supplies contractors to major AI labs. Earlier in the month, Mercor acknowledged being affected by the LiteLLM supply-chain attack. Reports suggested that the intruders, identified as members of a private Discord channel, began accessing Mythos the same day Anthropic announced Project Glasswing.

Mythos' Capabilities and Limitations

Despite its marketing hype, early user feedback about Mythos indicates limitations. While organizations like AWS and Mozilla have praised its speed in identifying vulnerabilities, it has not outperformed elite human cybersecurity researchers. Mozilla's CTO, Bobby Holley, disclosed that Mythos found 271 vulnerabilities in Firefox but acknowledged that any vulnerabilities it discovered could also have been identified by skilled human researchers.
Claims of Overhype

Researchers have raised concerns about the veracity of the claims surrounding Mythos. While Anthropic touted its ability to discover "thousands of high- and critical-severity vulnerabilities," critics argue these numbers are exaggerated. For instance, VulnCheck researcher Patrick Garrity estimated the actual count at around 40, and no confirmed zero-day exploits were documented. Claims regarding 181 Firefox vulnerabilities were also scrutinized, revealing that most findings stemmed from environments without standard security measures.

Concerns in the Cybersecurity Community

Experts have mixed reactions about unauthorized access to Mythos. Snehal Antani, CEO of Horizon3.ai, stated the security community should not overreact. He emphasized that adversaries do not require Mythos for vulnerability research; existing open-source models already facilitate this process.

* Unauthorized Access: Occurred via a third-party vendor.
* Vulnerability Discovery: Mythos' findings are comparable to skilled human researchers.
* Hype vs. Reality: Reports indicate exaggerated claims of Mythos' capabilities.

The incident surrounding Anthropic's Mythos model illustrates the challenges of maintaining security and managing expectations in the rapidly evolving AI landscape. As the investigation continues, the cybersecurity community watches closely, evaluating the model's true potential and implications.

Anthropic, Mercor, Discord
El-Balad.com, 45m ago

Discord Sleuths Crack Anthropic's Mythos Vault: How a Simple Guess Exposed AI Security's Soft Underbelly

A private Discord channel, dedicated to sniffing out unreleased AI models, pulled off the unthinkable. They accessed Claude Mythos Preview -- the very AI Anthropic deems too potent for public eyes -- on the day it was announced. No fancy exploits. Just a sharp guess at a URL, pieced together from leaked naming patterns, plus a dash of insider credentials from a third-party contractor. Bloomberg broke the story first, detailing how the group provided screenshots and a live demo as proof, and reporting that the breach occurred through a vendor environment.

Anthropic responded swiftly: "We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments," a spokesperson told multiple outlets, including TechCrunch.

Mythos isn't your average language model. Anthropic built it to hunt zero-day vulnerabilities across major operating systems and browsers. During tests, it unearthed flaws hidden for decades, chained exploits autonomously, even escaped a sealed sandbox to send an email. That's why Project Glasswing limits access to about 40 vetted partners -- firms like CrowdStrike, Cisco, and even the NSA -- tasked with patching software before threats emerge. Amazon Bedrock offers it in gated preview, but only to allow-listed organizations.

The intruders? A handful of enthusiasts in that Discord server. They drew from a Mercor data breach earlier in April, which spilled Anthropic's API naming habits, as noted by Mashable. One member snagged legitimate access via their contractor job. Boom. Entry granted. They've tinkered since, building basic websites to avoid notice. "We were not using Claude Mythos for nefarious purposes," one told Bloomberg.

But here's the rub. Anthropic hyped Mythos as a cybersecurity game-changer, capable of "identifying and then exploiting zero-day vulnerabilities in every major operating system and every major web browser." Yet their own perimeter crumbled to low-tech sleuthing.
BBC highlighted the irony: a tool billed as too risky for the masses, infiltrated by Discord randos. Industry echoes the concern. The Next Web pointed out the access happened on launch day, April 7, via guessed URLs in a contractor portal. Silicon Republic questioned Anthropic's lockdown prowess. Even Cybernews weighed in, noting the group's regular use without malice -- but the precedent chills.

And the fallout? Anthropic's probe continues, no breaches beyond the vendor noted so far. Partners press on with Glasswing, applying Mythos to Firefox and beyond. Mozilla confirmed early tests found vulns, per TechCrunch snippets. But this slip exposes broader tensions. AI firms race to cap powerful models, yet supply chains -- contractors, leaks like Mercor's -- offer backdoors. Short-term fix: tighten vendor oversight. Rotate keys. Obfuscate endpoints. Long-term? Mythos itself could audit these gaps, if safely deployed. The group claims more unreleased models in reach, hinting at persistent Discord hunts.

Irony bites hard. The AI meant to fortify digital defenses got outfoxed by pattern-matching hobbyists. Security pros now ask: If Mythos can't shield itself, what hope for the wild? Expect audits. Partner scrutiny. Maybe Mythos turns inward, probing Anthropic's own code. For now, the Discord crew vibes on -- quietly coding, loudly underscoring AI's fragile fences.
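The "obfuscate endpoints" fix is simpler than it sounds: instead of deriving a preview URL from internal naming conventions (which is what reportedly made the endpoint guessable), a deployment can serve it from a high-entropy random path. A minimal sketch, using hypothetical paths rather than Anthropic's actual endpoints:

```python
import secrets

# A path derived from internal naming conventions: anyone who has seen
# the pattern in a prior leak can guess it. (Hypothetical example path.)
guessable = "/v1/models/claude-mythos-preview"

# An obfuscated path: 32 bytes of randomness is 256 bits of entropy,
# far beyond what URL guessing or brute force can cover.
token = secrets.token_urlsafe(32)
obfuscated = f"/v1/preview/{token}"

print(obfuscated)
```

This is defense-in-depth, not a substitute for authentication: the random path only stays secret as long as it is rotated when leaked, which is why key rotation is listed alongside it.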

Mercor, Discord, Anthropic
WebProNews, 1h ago

A group of users leaked Anthropic's AI model Mythos by reportedly guessing where it was located | Fortune

The AI model that Anthropic billed as too dangerous to release has reportedly been accessed by an unauthorized third party, and the incident raises concerns about the future of cybersecurity. The Mythos model was reportedly accessed by a handful of users in a private Discord chat on the day it was announced publicly, Bloomberg reported. Earlier this month, the group was able to access the program in part because one of its members is a third-party contractor for Anthropic, according to Bloomberg. Combined with that access, the group guessed where the model was located using knowledge of Anthropic's past practices that hackers had previously obtained from AI training startup Mercor. Although the group that accessed it has not been using the model for cyberattacks, it has been using the program continuously since its release and still has access, the outlet reported.

Anthropic did not immediately respond to Fortune's request for comment. A spokesperson from Anthropic told Bloomberg the company was "investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments."

The fact that the model was leaked so quickly doesn't surprise David Lindner, the chief information security officer at Contrast Security and a 25-year industry veteran. Even though Anthropic intentionally limited the model to a small group of 40 companies -- including Microsoft, Apple, and Google -- to beef up their security ahead of a wider release, thousands of people likely had access to the program across these companies, which makes a leak nearly inevitable, he said. "It was bound to happen," Lindner said. "The more they add to this elite group, the more likely it was to get released to someone who shouldn't probably have access to it."

Anthropic claims its Mythos model is more adept at finding cybersecurity vulnerabilities than previous versions.
The company was able to use the program, which has not been widely released, to find a 27-year-old security vulnerability in OpenBSD, an operating system known for its security. Mozilla on Tuesday also said it used a preview of the model to identify and patch 271 vulnerabilities in its Firefox web browser. And yet, Mythos' release has been plagued by security breaches from the start. Fortune was the first to report on the model's existence thanks to a security lapse that exposed details about the large language model in a publicly accessible database.

For Lindner, this most recent unauthorized access shows it's likely U.S. adversaries already have access to this tech, which could put U.S. companies and other systems at risk of attacks. "If some group -- some random Discord online forum -- got access to it, it's already been breached by China," Lindner told Fortune.

Although Lindner is still unsure how much of Mythos' supposed danger is real or just marketing hype -- OpenAI's Sam Altman this week called Anthropic's promotion of Mythos "fear-based marketing" -- it's clear cybersecurity professionals, or defenders, need to be ready for a new world of AI attacks. "The real thing is there's a real compression of timelines here for defenders," he said.

AI is unique in its ability to execute cyberattacks because it never gets tired, said Lindner. It can relentlessly tackle a weak spot in a company's security system, whereas a human may eventually give up. It also empowers less experienced developers to commit cyberattacks, partly by drawing on the myriad documentation available on the web about previous exploits and using it to inform an AI model and adjust its attacks for specific situations. "It's the folks that have some sort of [developer] background or some sort of technical background that may have had some limitations in the past of getting over things or taking too long to do stuff, it makes this stuff way easier now," he said.
Lindner said the fact that the program was reportedly accessed by third-party contractors means that, even more than before, companies need to limit who has access to their most vital systems. The rapid rise of AI as a tool for cyberattacks could disproportionately affect smaller companies, which may not be able to keep up with the increasing complexity of AI-fueled attacks, said Lindner. Those that refuse to even touch AI and continue on as before are even more at risk, he said. "AI is not a golden ticket, but if you're not taking advantage of it on the defender side, there is no chance, none, that you are going to be able to keep up with the offensive side," he said.

Discord, Anthropic, Mercor
Fortune, 1h ago

China-linked hackers targeted Mongolian government using Slack, Discord for covert communications

A previously undocumented China-aligned threat actor targeted a Mongolian government entity and used popular communication platforms such as Discord, Slack and Microsoft 365 Outlook to manage its operations and steal data, researchers have found.

The group, which researchers at cybersecurity firm ESET named GopherWhisper, has been active since at least November 2023 and was discovered in January 2025 after investigators found a previously unknown backdoor on the network of a Mongolian government institution. The malware, dubbed LaxGopher, was deployed on roughly a dozen systems belonging to the organization, the Slovak cybersecurity firm said in a report on Thursday. Researchers believe the campaign likely affected dozens of additional victims, though they have not identified their locations or sectors.

According to ESET, the hackers relied heavily on legitimate online services to conceal their activity, using Discord, Slack and Microsoft 365 Outlook to communicate with compromised machines and manage command-and-control infrastructure. The group deployed a range of custom-built tools written largely in the Go programming language, including loaders, injectors and backdoors designed to maintain access to targeted systems. Among the tools identified were RatGopher, BoxOfFriends, the injector JabGopher, the loader FriendDelivery and a backdoor known as SSLORDoor, researchers said.

To remove stolen information from compromised networks, the attackers used a dedicated data exfiltration tool called CompactGopher, which compressed files and uploaded them to the file-sharing service File.io. ESET said the operation appears consistent with cyber espionage activity, though it did not attribute the campaign to a specific entity.

Discord
therecord.media, 1h ago

Anthropic's 'Too Dangerous To Release' AI Model Was Accessed By Discord Group On Day One

Anthropic's 'Mythos' model is extraordinarily dangerous. The company itself warned that it could autonomously identify and exploit zero-day vulnerabilities in every major operating system, every major web browser, and every critical software library on Earth. And because of this offensive cybersecurity power, Anthropic refused to release Mythos publicly - and instead tightly restricted access through 'Project Glasswing' to roughly 50 carefully vetted organizations - 12 named launch partners plus more than 40 additional critical software and government entities, including the U.S. National Security Agency (NSA). Yet within hours of the limited rollout announcement on April 7, 2026, a small group of unauthorized users in a private Discord server had already broken in.

The breach, reported by Bloomberg on Tuesday, reveals how fragile the safeguards around frontier AI models can be. According to the report, the group gained access using a surprisingly low-tech combination: legitimate credentials from a third-party contractor involved in Anthropic's evaluations, plus clever internet sleuthing to guess the hidden API endpoint by reverse-engineering Anthropic's internal naming conventions (patterns inferred from an earlier Mercor data leak). They have reportedly been using Mythos regularly for nearly two weeks. Sources emphasize the usage has been non-malicious so far - things like building simple websites - rather than launching cyberattacks.

"We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments," a spokesperson said in a statement, adding that there's no evidence that the access went beyond a third-party vendor's environment or that it is impacting any of Anthropic's systems.

Project Glasswing

In early April, Anthropic launched Project Glasswing, a defensive cybersecurity initiative built around Mythos Preview.
The 12 launch partners included Amazon Web Services, Apple, Microsoft, Google, Cisco, CrowdStrike, Palo Alto Networks, NVIDIA, Broadcom, JPMorgan Chase, and the Linux Foundation, along with over 40 additional critical software organizations. The explicit goal was to give these defenders a head start: let Mythos hunt for vulnerabilities in their own systems and major open-source projects before malicious actors could weaponize the same capabilities. Anthropic's own red-team testing reportedly showed Mythos could find and chain complex zero-days that had remained hidden for decades in software like Linux, OpenBSD, and FFmpeg.

Even as the Pentagon formally labeled Anthropic a "supply-chain risk" in March 2026 - citing the company's refusal to remove ethical guardrails that prevent its models from being used for mass domestic surveillance and autonomous weapons - other key parts of the U.S. government have moved with urgency to embrace the very same technology. The National Security Agency is already actively using Claude Mythos Preview, while the White House's Office of Management and Budget circulated an internal memo on Monday directing federal agencies to begin leveraging the model for vulnerability discovery in government networks. The Treasury Department has been particularly aggressive, rushing to secure access and convening major bank CEOs for urgent red-teaming sessions after being warned that Mythos could "hack every major system."

A Low-Tech Breach

The unauthorized access was deceptively simple. One member of the Discord group (a private forum focused on hunting unreleased AI models) had legitimate access as a worker at a third-party contractor. Using knowledge of Anthropic's naming patterns, the group correctly guessed the private API endpoint for Mythos Preview on the very same day the limited release was announced. Once inside, they continued using the model without triggering obvious alarms.
So, here's where we are: these AI models are becoming so powerful that even their creators treat them with extreme caution - yet the operational security surrounding them can still fall to basic tactics like credential misuse and URL guessing. As of Wednesday, Anthropic has offered no further updates on its investigation, no timeline, and no announcement of technical fixes such as credential rotation or endpoint randomization. There is still no public evidence of malicious use by the Discord group - however, the breach raises serious questions about how many other restricted AI systems might be leaking through similar third-party or supply-chain vulnerabilities.

Anthropic, Mercor, Discord
Signs Of The Times, 2h ago

What the Anthropic Mythos Vulnerabilities Mean to Manufacturing

Your plants and your people are caught between a game-changing tool and a historic level of risk. In case you somehow missed it, Anthropic's Project Glasswing recently set off alarms throughout the cybersecurity and AI communities. Glasswing is a security initiative in which Anthropic was working with Apple, Microsoft, Google, Amazon, and about 40 other key AI players in a sort of focus-group setting to assess an AI model named Mythos Preview that runs on their Claude platform. The goal of Mythos is to proactively detect and patch software vulnerabilities by applying AI-powered offensive security techniques - ideally preventing them from being exploited by malicious actors and protecting critical infrastructure.

Due to Mythos' powerful potential, Anthropic intended to limit initial access. However, unauthorized users were able to gain access to the platform via third-party vendor credentials. These parties are part of a Discord online forum group known to search for information about unreleased AI models. After obtaining access, the group proceeded to publicize the ease with which Mythos is able to identify vulnerabilities.

In the wrong hands, this tool offers hackers the ability to attack at a speed that would be nearly impossible to stop. Or, Mythos could be vital in helping defenders finally operate from a proactive posture, instead of constantly playing catch-up. As you can imagine, there was a passionate response.

Shane Fry, Chief Technology Officer, RunSafe Security: "Unauthorized users were able to access Anthropic's Mythos model, reportedly by just changing a model name. Even if their intent is just to explore, it shows how easily these systems can be exposed.

"The reality is these AI capabilities are already out there, 'hacked' or not, and they're going to accelerate how quickly vulnerabilities are found and exploited. Software teams will need to look at how to harden their code so those vulnerabilities can't be used in the first place."
Agnidipta Sarkar, Chief Evangelist at ColorTokens: "While Anthropic is investigating, the only information publicly available so far is that the attack used the oldest trick in the book: impersonating someone with existing access. The users reportedly guessed the model's URL based on knowledge of Anthropic's patterns for other models. The good news is that Anthropic detected the breach and contained it to that specific vendor's environment.

"One of the key controls that every modern environment needs is microsegmentation, which can effectively reduce the blast radius to specific vendors and leave no elbow room for attackers to navigate. I am hoping Anthropic is using similar controls to keep the attack contained, such as zero-trust mechanisms."

Tim Mackey, Head of Software Supply Chain Risk Strategy at Black Duck: "The unfortunate reality is that while it's great to hear that novel cybersecurity models are being provided to select researchers to evaluate, if your team is on the outside looking in, waiting for the final report might not be top of mind. For defenders, even the specter of unauthorized access to an adversarial model as powerful as Mythos is purported to be only increases anxiety levels.

"What's clear is that security leaders in organizations of all sizes should take this claim as a call to action focused on the role AI-enabled cybersecurity plays in their operations and how best to scale those efforts to deal with AI-enabled adversaries."

John Gallagher, Vice President, Viakoo: "There has always been an arms race between cyber defenders and cyber attackers, and Mythos is currently the most powerful armament available. If we do not know whose hands it is in, it should be viewed no differently than uncontrolled distribution of enriched uranium.

"If true, this deeply undermines Project Glasswing, which was set up explicitly to give cyber defenders early access to Mythos Preview in order to define and mount defenses against it.
"Threat actors having early access to Mythos Preview puts them on the same footing (or possibly with advantages) versus cyber defenders.

"Uncontrolled access to Mythos Preview will hit hardest on operators of critical OT, IoT, and ICS systems. Already knowing the fifty IT organizations with early access to Mythos would naturally focus threat actors on targets outside of those 50 companies, most likely non-standard operating systems that are prevalent in OT and IoT.

"If the model has escaped Pandora's box, there should be immediate validation and public notification of it. Since that has not happened here, it is likely that there was not significant exposure. However, there has never been a prize as valuable to cyber criminals before as early access to Mythos Preview; it potentially can open all bank accounts and reveal all secrets.

"Threat actors are highly sophisticated, very well-funded, and determined. We are in a race to harden systems and have rapid patching at high scale in place before threat actors can leverage Mythos Preview."

Nicole Carignan, SVP, Security & AI Strategy, and Field CISO at Darktrace: "This highlights the continued weaponization of commercial tooling. Frontier and near-frontier models are increasingly dual-use by default. Capabilities designed to improve software quality and security can be repurposed with minimal friction to accelerate vulnerability discovery for malicious ends. This is not a failure of intent; it is an outcome of scale, accessibility, and capability diffusion.

"These models will continue to be a target for threat actors seeking initial access to organizations. More concerning is access to critical vulnerabilities that have not yet been released to the public. Possession of undisclosed, high-severity vulnerabilities enables threat actors to facilitate more sophisticated and scaled access to organizations by exploiting an 'unknown' vulnerability.
"It is also important to be realistic about containment. This was never going to be contained to a single model, organization, or access control failure. Threat actors do not need this system; they need a system with sufficient capability. Whether through parallel development, model leakage, fine‑tuning, or the combination of multiple weaker models and tools, similar outcomes can be achieved. "The strategic mistake would be to treat this as an isolated incident rather than a signal. Advanced vulnerability discovery capabilities will continue to proliferate, and the window between discovery and exploitation will continue to shrink. This reinforces the need for scaled visibility, behavioral analytics, anomaly detection, and autonomous containment across endpoints, cloud, identities, SaaS, and critical infrastructure. "Finally, this is another reminder that investment in AI adoption without commensurate investment in security and risk management is unsustainable. Resilience will depend less on how quickly vulnerabilities can be patched, and more on how effectively exploitation can be detected and contained when prevention inevitably fails." Reach Security's Co-founder and CEO, Garrett Hamilton: "There is only one viable response to AI-powered attacks: AI-powered defense. "If a model can discover and exploit unknown weaknesses at machine speed, the defense playbook must change just as fast. Security teams can't rely on periodic scans and manual hardening; they need always-on visibility of their real exposures and clear prioritization of what to fix first. "However, vulnerabilities should not be the only concern. These are researched routinely by vendors and the cybersecurity community, with patches regularly released. In short, organizations have a fighting chance when it comes to spotting and fixing software vulnerabilities. "Misconfigurations, on the other hand, have no patches and can offer direct access into an environment. 
They arise unnoticed over time as networks, software, users, and policies change. They're also far more pervasive than many teams realize: our research found 97 percent of organizations suffered a breach or near miss in the past year due to a security-tool misconfiguration, and it takes 8.3 days on average to remediate once identified. That is more than enough time for an AI-enabled attacker to take advantage.

"The new standard is simple: fight AI with AI, and close the gaps before they become incidents."

Discord, Anthropic
Manufacturing.net, 2h ago

Big defeat to big lies: Trump peddles Iran 'discord' fiction to mask US military, strategic defeat

By Press TV Strategic Analysis

After 40 days of failed military adventure against the Islamic Republic of Iran, followed by the diplomatic debacle in Islamabad with Iran calling the shots, a new reality is settling over the region, one that Washington is desperately trying to obscure. The US war machine not only failed to achieve its stated objectives - a widely acknowledged reality - but it also suffered its most significant military and strategic defeat in decades. And now, unable to accept that reality, it has fallen back on its oldest weapon: the "big lie."

A defeat on two fronts

The first battlefield was military, as Americans were eager to reveal their much-hyped "military card," bragging about being the "most powerful military in the world." For over a month, the United States - backed by its most advanced naval assets, air power, and the full weight of its global and regional alliances - attempted to pressure the Iranian nation into submission or retreat. The result: a humiliating defeat that quickly revealed the limits of much-hyped American power. From the strategic waters of the Persian Gulf to the skies over Yemen and Lebanon, Iran and its allies in the Axis of Resistance not only held their ground but dictated the terms of engagement, forcing the aggressors to plead for a ceasefire.

By the time the guns fell silent, it was Washington, not Tehran, that was begging for a ceasefire - not once, but twice. The first request came immediately after the imposed war had completed 40 days, when Washington agreed to Iran's ten-point proposal. The second came as a unilateral extension earlier this week, wrapped in the language of magnanimity but born of necessity. It was not a sign of goodwill. It was a strategic retreat.

The negotiating table has proven no kinder to the United States.
Time and again, American officials have sought to frame the post-war dynamic as one requiring Iranian concessions: excessive limits on the missile program, the removal of enriched uranium, and the dismantling of ties with the resistance front. Yet every single one of these demands has been met with Iranian steadfastness - backed overwhelmingly by public opinion.

The latest poll conducted by Iran's IRIB Research Center found an overwhelming majority of Iranians reject each of these core American conditions. The survey, conducted during and after the war, revealed that 85.7 percent of respondents said Iran should not accept restrictions on its missile industry, while 82.6 percent opposed the removal of 400 kilograms of enriched uranium from the country. Also, 79.4 percent of people rejected shutting down uranium enrichment as a US condition.

Public opposition extends to core issues of sovereignty and regional strategy as well. The poll showed that 73.7 percent of Iranians said the country should not accept unrestricted passage of ships through the strategic Strait of Hormuz, and 68.1 percent opposed severing cooperation with the Resistance Front. With this level of popular support, the Iranian side - which clearly holds the upper hand - has no reason to offer any concessions. The opposing side has won nothing: not on land, not at sea, not at the table. And in the end, it is always the winner who takes it all.

The manufactured "internal disagreement"

Having lost all military and strategic leverage, Washington has now - quite unsurprisingly and predictably - resorted to its trademark practice: the fabrication of lies. In this context, that means peddling the so-called "internal discord" within Iran's leadership. The narrative being pushed by American policy wonks suggests that senior Iranian figures are divided over the future of negotiations and the continuation of the imposed war. But this is not intelligence. It is not journalism.
It is propaganda straight from the Goebbels playbook: repeat a lie loudly enough, and public opinion will eventually accept it as truth. The claim is demonstrably false. Iran's silence in the face of repeated enemy overtures is not a sign of weakness or infighting. On the contrary, it is a calculated strategic posture.

For decades, the United States operated on a comfortable assumption: that Iran's reactions were predictable - a known diplomatic rhythm that could be anticipated and exploited. That era is now over. Iran has entered a new phase of asymmetrical engagement with the enemy, one defined by unpredictability, strategic patience, and an absolute refusal to be read before entering the room. This very element of unpredictability has left the enemy bewildered, and it's no longer a secret.

And that bewilderment is palpable. When the US Secretary of the Navy - the most expensive and strategically vital branch of the entire American military - resigns in the midst of a naval confrontation, it signals something far deeper than routine political turnover. It signals a deep and irreparable fracture at the very heart of the US decision-making apparatus. More than that, it points to a rotten system that is imploding from within.

Strategic silence as a weapon

Perhaps nothing has unnerved Washington more than Iran's "silence" regarding reports of the next round of negotiations in Islamabad. By refusing to engage with the enemy's narrative, Iran has denied the US the very thing it needs most: a predictable opponent. Every American strategy - whether war plan or diplomatic overture - was built on decades of familiarity with Iranian behavior. That familiarity is now worthless. The silence is not an absence of strategy. It is the strategy, and Iran has mastered it.

If there remains any doubt about Iran's position, the Iranian people have settled it. The IRIB poll is not merely a dataset; it is a political document and a telling statement.
When 66 percent of Iranians believe their country is the decisive victor of the war, when 87.2 percent rate the performance of Iran's armed forces as strong or very strong, and - most crucially - when 57.7 percent believe the US needs a ceasefire more than Iran does, something profound has shifted. These figures mark a staggering reversal of the familiar power dynamic. They spell out, in unmistakable terms, that a new dynamic is at work. The old rules no longer apply. These numbers are not abstract. They come from a population that endured 40 days of airstrikes and bombings, gave over 3,000 martyrs, and saw its homes destroyed. And that same population has delivered a clear message to its leaders: do not compromise our dignity. Do not concede our rights. We prefer war over humiliation.

The biggest defeat in a generation

The United States has not lost a battle here or there. It has lost a major war. It has lost its strategic footing. It has lost the initiative. And now, stripped of all credible leverage, it has lost whatever was left of its standing on the global stage. The fake news about internal Iranian disagreements is not a sign of American confidence. It is a symptom of American desperation after suffering significant losses. For 40 days, the world watched as the most powerful military in history was held to a standstill. In the aftermath, that stalemate has hardened into a new strategic reality: Iran and the Axis of Resistance are more united than ever, Iran's hand is stronger than ever, and the US has nothing to show for its aggression except a string of resignations and recycled lies. The big lie will not change the big defeat. And history will record everything.

Discord
PressTV4h ago
Read update
Big defeat to big lies: Trump peddles Iran 'discord' fiction to mask US military, strategic

Anthropic's locked-down Mythos leaks

The Rundown: Access to Anthropic's Mythos model reportedly leaked into a Discord group within days of launch, after users reportedly guessed the company's deployment URL and naming conventions using patterns exposed in the recent Mercor breach.

Why it matters: The first alleged unauthorized use of the AI model that had the White House and others calling emergency meetings didn't come from China, Russia, or another rival nation -- it came from a random Discord group. Not a great start, and the problem only compounds as partner access grows and the models get more dangerous.
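The "educated guess" at the heart of the leak amounts to enumerating endpoints from a known naming convention. As a purely illustrative sketch (the base URL, model family, and suffixes below are invented for illustration and are not Anthropic's real deployment scheme), the technique is just generating candidate URLs from a pattern:

```python
# Hypothetical sketch: enumerate candidate model endpoints from a naming
# pattern. The base URL and all names here are invented for illustration.
from itertools import product

def candidate_endpoints(base_url, families, suffixes):
    """Build plausible endpoint URLs by combining known naming components."""
    return [
        f"{base_url}/models/{family}-{suffix}"
        for family, suffix in product(families, suffixes)
    ]

urls = candidate_endpoints(
    "https://vendor.example.com/v1",       # hypothetical vendor host
    ["claude-mythos"],                     # family name from public announcements
    ["preview", "preview-1", "internal"],  # suffixes guessed from prior leaks
)
print(urls[0])  # https://vendor.example.com/v1/models/claude-mythos-preview
```

The defender's takeaway is the mirror image: predictable naming plus a reachable vendor environment turns a nominally private deployment into a guessable one.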

DiscordMercorAnthropic
The Rundown AI6h ago
Read update
Anthropic's locked-down Mythos leaks

ESET Research discovers new China-aligned group, GopherWhisper: It abuses messaging services Discord, Slack, and Outlook to spy

* ESET Research has uncovered a new China-aligned APT group, named GopherWhisper, that targets governmental institutions in Mongolia.
* GopherWhisper leverages Discord, Slack, Microsoft 365 Outlook, and file.io for command and control (C&C) communications and exfiltration.
* The group's toolset includes custom Go-based backdoors, injectors, exfiltration tools, the loader FriendDelivery, and a C++ backdoor.
* ESET analyzed C&C traffic from the attacker's Slack and Discord channels, gaining information about the group's internal operations and post-compromise activities.

BRATISLAVA, Slovakia, April 23, 2026 (GLOBE NEWSWIRE) -- ESET researchers have discovered a previously undocumented China-aligned APT group that they named GopherWhisper. The group wields a wide array of tools, mostly written in Go, that use injectors and loaders to deploy and execute various backdoors in its arsenal. In the observed campaign, the threat actors targeted a governmental institution in Mongolia. GopherWhisper abuses legitimate services, notably Discord, Slack, Microsoft 365 Outlook, and file.io, for command and control (C&C) communication and exfiltration. ESET discovered the group in January 2025, when it found a previously undocumented backdoor, which ESET researchers named LaxGopher, in the system of a government institution in Mongolia. Digging deeper, they managed to uncover several more malicious tools, mainly various additional backdoors, all deployed by the same group. The majority of these tools were written in Go, and their collective aim was cyberespionage. According to ESET telemetry, the victim impacted by GopherWhisper backdoors is a Mongolian governmental institution. By analyzing the C&C traffic from the attacker-operated Discord and Slack servers, ESET estimates that dozens of other victims besides the Mongolian institution were also affected, though it has no information about their geolocation or verticals.
Of the seven tools that were discovered, four are backdoors - LaxGopher, RatGopher, and BoxOfFriends, written in Go, and SSLORDoor, written in C++. Furthermore, ESET found an injector (JabGopher), a Go-based exfiltration tool (CompactGopher), and a malicious DLL file (FriendDelivery). Since the set of malware ESET found bore no code similarities to any known threat actor's tools, and there was also no overlap with the Tactics, Techniques, and Procedures (TTPs) used by any other group, ESET decided to attribute the tools to a new group. Researchers chose the name GopherWhisper because the majority of the group's tools are written in the Go programming language, which has a gopher as its mascot, and based on the filename of whisper.dll, which is side-loaded. GopherWhisper is characterized by the extensive use of legitimate services such as Slack, Discord, and Outlook for C&C communication. "During our investigation, we managed to extract thousands of Slack and Discord messages, as well as several draft email messages from Microsoft Outlook. This gave us great insight into the inner workings of the group," says ESET researcher Eric Howard, who discovered the new threat group. "Timestamp inspection of the Slack and Discord messages showed us that the bulk of them were being sent during working hours, i.e. between 8 a.m. and 5 p.m., which aligns with China Standard Time. Furthermore, the locale for the configured user in Slack metadata was also set to this time zone. We therefore believe that GopherWhisper is a China-aligned group," explains Howard.
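The timestamp inspection Howard describes can be illustrated with a short script. This is a generic sketch of that style of analysis, not ESET's actual tooling, and the message timestamps below are made up:

```python
# Generic sketch of working-hours attribution: convert message timestamps to a
# candidate operator time zone and measure what share falls in 8 a.m.-5 p.m.
# The timestamps are invented; this is not ESET's actual tooling.
from datetime import datetime, timezone, timedelta

CST = timezone(timedelta(hours=8))  # China Standard Time, UTC+8

def working_hours_share(utc_timestamps, tz=CST, start=8, end=17):
    """Return the fraction of messages sent between start and end local time."""
    hours = [ts.astimezone(tz).hour for ts in utc_timestamps]
    return sum(1 for h in hours if start <= h < end) / len(hours)

messages = [
    datetime(2026, 1, 10, 1, 30, tzinfo=timezone.utc),  # 09:30 CST
    datetime(2026, 1, 10, 5, 0, tzinfo=timezone.utc),   # 13:00 CST
    datetime(2026, 1, 10, 23, 0, tzinfo=timezone.utc),  # 07:00 CST next day
]
print(working_hours_share(messages))  # two of three fall inside 08:00-17:00 CST
```

A high share in one zone's working hours is only circumstantial, which is why ESET paired it with the Slack locale metadata before calling the group China-aligned.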
Based on this ESET investigation, the group's Slack and Discord servers were first used to test the functionality of the backdoors, and then later, without clearing the logs, also used as C&C servers for the LaxGopher and RatGopher backdoors on multiple compromised machines. In addition to the Slack and Discord communications, ESET researchers were also able to extract email messages used for communication between the BoxOfFriends backdoor and its C&C using the Microsoft Graph API. ESET Research's Eric Howard presented these findings at the Botconf 2026 conference. For a more detailed analysis of the new GopherWhisper threat group and its arsenal, check out the latest ESET Research blogpost and white paper "GopherWhisper: A burrow full of malware" on WeLiveSecurity.com. Make sure to follow ESET Research on Twitter (today known as X), BlueSky, and Mastodon for the latest news from ESET Research.

About ESET

ESET® provides cutting-edge cybersecurity to prevent attacks before they happen. By combining the power of AI and human expertise, ESET stays ahead of emerging global cyberthreats, both known and unknown - securing businesses, critical infrastructure, and individuals. Whether it's endpoint, cloud, or mobile protection, our AI-native, cloud-first solutions and services remain highly effective and easy to use. ESET technology includes robust detection and response, ultra-secure encryption, and multifactor authentication. With 24/7 real-time defense and strong local support, we keep users safe and businesses running without interruption. The ever-evolving digital landscape demands a progressive approach to security: ESET is committed to world-class research and powerful threat intelligence, backed by R&D centers and a strong global partner network. For more information, visit www.eset.com, or follow our social media, podcasts, and blogs.

Discord
The Manila Times8h ago
Read update
ESET Research discovers new China-aligned group, GopherWhisper: It abuses messaging services Discord, Slack, and Outlook to spy


GopherWhisper APT group hides command and control traffic in Slack and Discord - IT Security News

Attackers continue to lean on everyday collaboration platforms to hide command and control traffic inside normal enterprise noise. A newly identified China-aligned APT group pushes that trend further, running its operations through Slack workspaces, Discord servers, Outlook drafts, and the file.io sharing service.

GopherWhisper toolset overview

ESET researchers have named the group GopherWhisper and tied it to an intrusion at a Mongolian governmental entity. The name draws on two elements: most of the group's tooling ...

Discord
IT Security News - cybersecurity, infosecurity news9h ago
Read update
GopherWhisper APT group hides command and control traffic in Slack and Discord - IT Security News


Outsiders are already accessing Anthropic's new AI model, but is Claude Mythos really that powerful?

According to reporting by Bloomberg, a small number of people who are members of a private Discord channel dedicated to researching unreleased AI models have had unofficial access to Mythos since it was first announced. Getting in was apparently simple, too. "To access Mythos, the group of users made an educated guess about the model's online location," Bloomberg said in an article published on April 21. "They based this on knowledge about the format Anthropic has used for other models, the person said, adding that such formatting details were revealed in a recent data breach from Mercor, an AI training startup that works with a number of top developers." Anthropic said it was aware of the access and was investigating the report. Shane Fry, Chief Technology Officer at RunSafe Security, said the incident showed how easily AI models are commonly exposed. "Unauthorised users were able to access Anthropic's Mythos model, reportedly by just changing a model name. Even if their intent is just to explore, it shows how easily these systems can be exposed," Fry said. "The reality is these AI capabilities are already out there, 'hacked' or not, and they're going to accelerate how quickly vulnerabilities are found and exploited. Software teams will need to look at how to harden their code so those vulnerabilities can't be used in the first place." Germaine Tan Shu Ting, VP Security & AI Strategy and Field CISO at Darktrace, expressed similar concerns. "It shows that the frontline remains identity," Tan Shu Ting said. "If Anthropic itself can be accessed using traditional hacking methods (reportedly coopting existing third-party access and 'internet sleuthing'), then it highlights how critical it is to assume the threat is already inside the walls."
However, while analysts and industry insiders have reacted to Mythos with something like awe, the actual capabilities of the model may, in reality, fall far short of Anthropic's claims.

Don't believe the hype?

Doug Britton, EVP and chief strategy officer of RunSafe Security, referred to Mythos and Project Glasswing earlier in April as a "watershed moment for AI's runaway zero-day discovery and exploitation". "AI is now uncovering memory safety bugs at massive scale, including vulnerabilities that have been hiding in production code for over 25 years - the problem isn't just that these bugs exist, it's that they're being found faster than organisations can fix them," Britton said. But the question is - are they being found that fast? Davi Ottenheimer, security engineer and president of security consultancy flyingpenguin, has some serious doubts. "The supposedly huge Anthropic 'step change' appears to be little more than a rounding error. The threat narrative so far appears to be ALL marketing and no real results," Ottenheimer said in a blog post around the time Mythos and Glasswing were announced. "The Glasswing consortium is regulatory capture dressed up poorly as restraint." Ottenheimer based his observations - rather caustic ones, it must be said - on Anthropic's own Claude Mythos Preview System Card, a "whoppingly inefficient 244-page document that devotes just seven pages to the claim that the model is too dangerous to release". According to Ottenheimer, those seven pages do not mention the acronyms one might expect: CVSS, CWE, or CVE. "The flagship demonstration document turns out to be like the ending of the Wizard of Oz, a sorry disappointment about a model weaponising two bugs that a different model found, in software the vendor had already patched, in a test environment with the browser sandbox and defence-in-depth mitigations stripped out. Anthropic failed, and somehow the story was flipped into a warning about its success."
Ottenheimer has many issues with Anthropic's - and, it must be said, the wider media's - claims that Mythos found "Thousands of zero-day vulnerabilities in every major operating system and every major web browser", and he pulls no punches. Referencing that claim, Ottenheimer points out that the word 'thousands' is "used once, in reference to transcripts reviewed during the alignment evaluation". "It is never used to describe vulnerabilities. The cyber security section (Section 3, pages 47-53) contains no count of zero-days at all," Ottenheimer said. "With no CVE list, no CVSS distribution, no severity bucket, no disclosure timeline, no vendor-confirmed-novel table, no false-positive rate, why are you teasing us with the claims about vulnerabilities at all?" Cyber Daily has reached out to Anthropic for comment.

AnthropicMercorDiscord
cyberdaily.au10h ago
Read update
Outsiders are already accessing Anthropic's new AI model, but is Claude Mythos really that powerful?

Anthropic Mythos AI shock: disturbing leak claim sparks urgent probe

Anthropic's Mythos AI is at the centre of a growing security storm after reports that a small group of unauthorised users quietly gained access to the powerful cybersecurity model via a third-party vendor. Anthropic recently unveiled Mythos as a specialised Claude-based AI model designed to help major organisations detect software vulnerabilities and respond to cyber threats more quickly than human teams. The system is being trialled under Project Glasswing, an initiative that gives select partners, including Apple and other large technology and financial firms, access to the Mythos preview for defensive security work. Anthropic has said Mythos can find thousands of high-severity bugs across major operating systems and web browsers, underscoring why tight control over access is seen as critical. According to Bloomberg and other outlets, a small private group on the Discord platform began using Mythos on the very day it was publicly announced. Members reportedly combined credentials linked to a contractor working with Anthropic and open internet sleuthing tools to locate and access the Claude Mythos Preview environment. Bloomberg's reporting suggests the group shared screenshots and even a live demonstration of the model to support their claims, while avoiding overtly cybersecurity-related prompts in an apparent effort not to trigger alarms. Anthropic has confirmed it is investigating the incident, telling TechCrunch: "We're investigating a report claiming unauthorised access to Claude Mythos Preview through one of our third-party vendor environments," and adding that there is currently no evidence its own systems have been compromised. The company also says there is no sign that activity went beyond the affected vendor, but the scare is likely to intensify scrutiny of how unreleased, high-risk AI models are tested and secured before wider deployment.
For now, Mythos remains in restricted preview, yet the alleged leak shows that even tightly controlled frontier systems can be probed and exposed at the edges, a warning that defensive AI may only be as strong as the weakest partner handling it.

AnthropicDiscord
punemirror.com11h ago
Read update
Anthropic Mythos AI shock: disturbing leak claim sparks urgent probe

RBI Said to Evaluate Cybersecurity Risks Linked to Anthropic's Mythos

Any deployment needs to comply with RBI's data localisation requirements

The Reserve Bank of India (RBI) is said to be holding talks with global regulators, domestic lenders, and government officials to assess potential risks linked to Anthropic's new artificial intelligence (AI) model, Mythos. According to a report, the RBI's preliminary assessment suggests that Mythos could raise cybersecurity concerns by expediting the discovery and exploitation of software vulnerabilities. The development follows reports of unauthorised personnel gaining access to Mythos, which is touted to be "so powerful that it could enable dangerous cyberattacks".

RBI Evaluating Mythos-Linked Cybersecurity Risks

Citing sources familiar with the matter, Reuters reports that the RBI has, over the past two weeks, held consultations with counterparts around the world, reportedly including the Federal Reserve and the Bank of England, with the aim of understanding the emerging risks and safeguards. "Globally, we are discussing with other countries and other regulators on what are the developments and what safeguards need to be taken," the publication quoted one source as saying. The report states that the RBI may also pursue direct engagement with Anthropic. Further, regulators across Asia, Europe, and the US are said to have advised banks to review their cybersecurity preparedness. The National Payments Corporation of India (NPCI), which facilitates payment services like UPI in the country, is said to be exploring early access to Mythos alongside a small group of banks. Citing a source, Reuters reported that this is to identify potential "day-zero" vulnerabilities before any wider rollout, although any such access could be restricted. Mythos is said to be hosted on tightly controlled servers in the US. Consequently, running tests on local datasets in foreign jurisdictions could pose regulatory and technical challenges.
The RBI is also said to be working on broader guidelines for banks entering enterprise partnerships with advanced AI models, including Anthropic's Mythos and Claude family. However, any deployment involving Indian user data would need to comply with the RBI's data localisation requirements, the publication quoted a source as saying.

Concerns Over Unauthorised Access to Mythos

The regulatory discussions come shortly after reports that a small group of unauthorised users had gained early access to Mythos. According to Bloomberg, the model, which Anthropic itself has described as highly powerful, was accessed via a private Discord group on the same day it was announced for limited testing. While the group reportedly did not use the model for malicious purposes, the incident has raised concerns about potential misuse. At the time, screenshots appearing to show a Mythos dashboard were shared by the group. These included user management panels, AI experiment interfaces, and detailed analytics for model performance and costs. Anthropic is currently probing the matter. "We're investigating a report claiming unauthorised access to Claude Mythos Preview through one of our third-party vendor environments," the company said in a statement.

AnthropicDiscord
NDTV Gadgets 36011h ago
Read update
RBI Said to Evaluate Cybersecurity Risks Linked to Anthropic's Mythos

Druzhba - a pipeline turned into a source of discord | News.az

The situation surrounding the Druzhba oil pipeline in April 2026 has evolved into a complex knot of energy, political and financial contradictions within Europe. What has come to the fore is not so much the technical condition of the infrastructure itself as the use of oil transit as a tool of pressure and negotiation. According to statements by Hungarian Prime Minister Viktor Orban, Budapest received a signal through EU structures that Ukraine was ready to resume the transit of Russian oil as early as 20 April, but on the condition that Hungary lift its veto on a €90 billion EU loan to Kyiv. Hungary's response was firm and succinct: "no oil, no money". Budapest insists that physical supplies must first be restored, and only then can concessions on the loan be discussed. This reflects Hungary's pragmatic approach, in which energy security is prioritised over pan-European solidarity. Kyiv, for its part, officially explained the halt in transit by citing infrastructure damage. However, representatives from several Central European countries have questioned this explanation. In particular, Slovak politicians argue that the pipeline remains operational and that the suspension is political in nature. Against this backdrop, Ukraine's refusal to allow independent experts to inspect the pipeline has only intensified suspicions. Moreover, according to some expert assessments, the functioning of Druzhba is being used by Kyiv as leverage to accelerate the receipt of European financing -- a view echoed in Russian media. It should be noted that Ukraine is at war with Russia and has the full moral right to halt the transit of Russian energy resources through its territory. Had such a decision been made at the outset of the war, this issue might not be on the agenda today. However, a paradoxical situation emerged: throughout the war, Russian hydrocarbons continued to transit via Ukraine even as Russian forces struck Ukrainian cities.
This created a deeply contradictory reality. Europe's dependence on Russian resources meant that Kyiv hesitated to shut down the pipeline, fearing backlash from European partners and a reduction in financial and military support. Even now, Kyiv frames the disruption in terms of technical malfunction rather than a deliberate decision to limit Russian budget revenues. Over the course of the war, Russia's budget has been replenished by hundreds of billions of dollars, offsetting much of the impact of sanctions. Despite successive EU sanctions packages, Moscow's capacity to sustain the war has, in many respects, continued to grow. According to Bloomberg, Ukraine is set to begin technical testing of the Druzhba pipeline on 21 April to restore supplies to Hungary.

The demands voiced by Orban stem directly from Hungary's national interests. The country faces greater difficulty in adapting to supply disruptions than leading EU economies, and an energy crisis would pose serious challenges that Hungary may struggle to manage.

Additional complexity is introduced by Hungary's domestic political landscape. Following the electoral defeat of Orban's party, it was expected that the incoming prime minister, Peter Magyar, would decisively reject Russian fuel and support the multi-billion-euro loan to Ukraine. However, developments have not followed that script. Magyar's position appears more flexible, yet in substance it echoes Orban's policy. On the one hand, he confirmed that Hungary does not intend to block a pan-European decision on the loan to Ukraine. On the other, Budapest still refuses to participate in financing it and firmly rejects external pressure. Magyar directly called on Ukrainian President Volodymyr Zelenskyy to abandon what he described as "blackmail" and to resume oil supplies without preconditions.
Thus, despite a change in leadership, Hungary's strategic line remains unchanged: the protection of national energy interests and resistance to linking economic decisions with political demands. According to Izvestia, Hungary had fuel reserves sufficient for about 90 days at the end of January, but these have now declined to approximately 30 days. While there is an alternative route via Croatia, oil transported this way would be significantly more expensive. For Hungary and Slovakia, supplies via Druzhba are critically important, and Central European countries remain heavily dependent on stable transit through Ukraine.

Brussels, meanwhile, faces a dual challenge. On the one hand, it is interested in maintaining oil transit through Druzhba, as a sharp reduction in supply could exacerbate the energy situation. On the other hand, the EU continues to pursue a gradual phase-out of Russian energy resources, theoretically to be completed by 2027. However, experts suggest that this timeline may be extended, as Europe is already encountering practical difficulties.

At present, Druzhba has clearly become more than infrastructure -- it is an instrument. It is likely that Kyiv will eventually make concessions and at least partially allow the pipeline to resume operations, recognising that without this step, financial assistance may not materialise. Countries such as Hungary and Slovakia are expected to hold their ground, prioritising energy stability. For the European Union as a whole, additional pressure on already volatile energy markets is undesirable.

On Tuesday, it was also reported that another politician favouring cooperation with Russia is returning to power. Bulgaria's future prime minister, Rumen Radev, announced his intention to build respectful and balanced relations with Russia.
It is worth recalling that during his presidency, he repeatedly vetoed decisions on supplying Ukraine with armoured vehicles and air defence systems, although those vetoes were overridden by parliament. Now, after a decisive electoral victory, his coalition holds a strong parliamentary majority. In any case, the long-term prospects for resolving Europe's energy stability challenges remain uncertain. Even if supplies are restored, they will likely continue to be accompanied by persistent political risks and will depend heavily on the evolving dynamics between Kyiv, Brussels and individual EU member states.

News.az, 12h ago

Discord group says it accessed Claude Mythos by guessing location

The Anthropic AI model deemed a danger to cybersecurity may need to be more secure itself. An anonymous group of Discord users says it hacked its way into accessing Claude Mythos Preview, the new AI model Anthropic claims is too powerful for a public release.

Anthropic says Claude Mythos "is capable of identifying and then exploiting zero-day vulnerabilities in every major operating system and every major web browser," and has granted access to the model to a select group of partners via an initiative called Project Glasswing. The AI company said this invite-only approach would let tech leaders "secure the world's most critical software." But it might need to pay more attention to its own software security.

As Bloomberg reports, the Discord users didn't gain access through a sophisticated hack, but by guessing the online location for the model based on past Anthropic naming conventions -- conventions exposed in the recent data breach at Mercor, an AI startup, earlier this month. Once they identified where to access Claude Mythos, the group had to employ additional tactics: one member already had privileged access as a worker at a third-party contractor for Anthropic, Bloomberg reports.

The group was part of a private Discord channel that focuses on hunting information about unreleased models. A member of the group told Bloomberg that they were not using Claude Mythos for nefarious purposes, but for tasks like building simple websites. However, they also claimed to have access to even more unreleased Anthropic models. The group provided enough evidence to convince Bloomberg they had indeed breached Anthropic's security. Anthropic confirmed in a statement to Bloomberg it was aware of the claim and investigating. At this time, there is no indication that Claude Mythos has been breached by other unauthorized parties.
Still, given that Anthropic described Claude Mythos as a paradigm-shifting security threat that could "reshape cybersecurity" as we know it, any unauthorized access is -- to say the least -- concerning.

Mashable ME, 12h ago

Anthropic's dangerous AI Mythos: Unauthorized access likely since day one

Anthropic's most powerful AI is said to be so dangerous that only selected companies have access. According to a report, unknown individuals have bypassed those restrictions. A group has allegedly gained unauthorized access to Anthropic's powerful, and by the company's own account dangerous, AI model Claude Mythos Preview without Anthropic noticing. This is reported by the financial news agency Bloomberg, which was shown the tool in use.

The unknown individuals reportedly communicate in a private Discord channel and have previously focused on hunting for unreleased AI models. They have not been using Mythos for cybersecurity tasks; instead, they are testing how the model performs on harmless jobs, such as building a website.

Anthropic introduced Mythos two weeks ago and stated that the model is so dangerous that it is being made available only to companies working on IT security. The model has already identified thousands of high-risk zero-day vulnerabilities, including in all major operating systems and every major browser. It is also significantly more capable of developing working exploits for such vulnerabilities, sometimes even chaining several of them together. As part of "Project Glasswing", industry partners are now to patch vulnerabilities found this way before other AI models become available that would let criminals find, and above all exploit, such flaws far more easily.

According to Bloomberg, unauthorized access to Mythos was obtained on the very day Anthropic introduced the tool. The group used various tactics, with one person posing as an employee of a service provider to gain access to Anthropic's tools. Before that, the group had made an "educated guess" about Mythos's internet address, based on other Anthropic URLs. The unknown individuals have reportedly been using Mythos regularly ever since, just as they did with other models before it.
Their intention, however, is "not wreaking havoc" with the models. Bloomberg's report relies throughout on one of the individuals, who is kept anonymous. Anthropic has reportedly pledged to investigate the claim while downplaying the extent of the access: there are no indications that it went beyond a third-party environment or had any impact on Anthropic's own systems.

The discovery suggests how difficult it may be for the company to keep access to Mythos under wraps. At its introduction, the model was described as so powerful that it alarmed not only the IT security industry. Governments in a growing number of countries are grappling with the significance of the new tool, and audits have been ordered, especially in the financial industry. If Mythos falls into the wrong hands, the consequences for cybersecurity could be devastating.

heise online, 14h ago

Anthropic's exclusive cybersecurity tool Mythos has reportedly fallen into the hands of an unauthorized group, and the consequences could be massive | Attack of the Fanboy

A group of unauthorized users has reportedly gained access to Mythos, the powerful cybersecurity tool recently unveiled by Anthropic, TechCrunch reported. This development is significant because Anthropic has explicitly warned that Mythos is capable of identifying and exploiting vulnerabilities in every major operating system and every major web browser when directed by a user to do so. The company has framed the technology as a double-edged sword, noting previously that in the wrong hands it could become a potent hacking tool rather than the defensive asset it was designed to be for enterprise security.

The unauthorized access was reportedly achieved by a small group of users operating within a private online forum. According to reports, these individuals managed to secure access to the tool on the same day it was publicly announced by Anthropic. The group, part of a Discord channel dedicated to hunting for information about unreleased AI models, used a mix of strategies to bypass restrictions.

Perhaps most concerning is how the group managed to pinpoint the location of the model: they made an educated guess about its online address, relying on their knowledge of the naming conventions and formats Anthropic has used for previous models. This effort was reportedly aided by information revealed in a recent data breach at Mercor, an AI training startup that works with top developers. Furthermore, the group leveraged access provided by a person currently employed at a third-party contractor that works for Anthropic. This individual, who was interviewed about the breach, had legitimate permission, gained through their contract work, to access Anthropic models and software related to evaluating the technology.

Anthropic has been very cautious with the distribution of Mythos. The model was released only to a select number of vendors and organizations as part of an initiative called Project Glasswing.
This limited release was specifically designed to prevent the tool from falling into the hands of bad actors who might weaponize it against corporate security. Big names like Apple, Amazon, and Cisco Systems are among the organizations that have been granted access to test the model. Amazon, a key partner and backer of Anthropic, also offers Mythos through its Bedrock platform to a very specific, approved list of organizations. As the tool's utility has become known, a growing number of financial institutions and government agencies on both sides of the Atlantic have been clamoring to get on that list to better safeguard their own systems.

In response to the reports, an Anthropic spokesperson provided a statement, saying, "We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments." The company has been quick to clarify that, so far, it has found no evidence that this unauthorized activity has impacted Anthropic's internal systems in any way; the access appears to be contained within a third-party vendor's environment.

While the situation sounds alarming, the source who spoke about the breach offered some perspective on the group's intentions. The individual claimed that the users involved are primarily interested in playing around with new models rather than wreaking havoc. They have reportedly avoided running cybersecurity-related prompts on the Mythos model, choosing instead to experiment with tasks like building simple websites to avoid detection. The person also noted that the group has access to a variety of other unreleased Anthropic AI models, suggesting a broader interest in the company's pipeline. This incident highlights the massive challenge Anthropic faces in keeping its most powerful and potentially dangerous technology from spreading beyond its approved partners.
If these reports are accurate, it raises serious questions about how many other people might be using Mythos without permission and what their true objectives might be. For now, Anthropic is left to manage the fallout of this unauthorized access, which could potentially threaten the reputation of an exclusive release intended to bolster enterprise security. It is a stark reminder that even with strict initiatives like Project Glasswing, the digital perimeter is only as strong as its weakest link, especially when third-party vendors are involved in the deployment of such high-stakes software.

Attack of the Fanboy, 18h ago

Anthropic's Mythos Breach: How Hackers Cracked Open AI's Most Dangerous Cyberweapon on Day One

A shadowy crew of AI enthusiasts pierced the defenses around Anthropic's Mythos on launch day. Boom. Access granted through a sloppy third-party vendor. Now this powerhouse model -- designed to hunt vulnerabilities across every major operating system and browser -- sits in unauthorized hands. TechCrunch broke the story, citing Bloomberg's reporting on the intrusion.

Mythos forms the core of Project Glasswing, Anthropic's bid to arm elite security teams with AI that autonomously crafts exploits. Think zero-days in Windows, macOS, Chrome, Firefox -- you name it. The company rolled it out to just 40 vetted partners, including Apple and Amazon, precisely because it could flip from defender to destroyer in seconds.

A person familiar with the matter told Bloomberg the group, huddled in a private online forum and Discord channel, sniffed out the model's URL pattern from prior leaks involving contractor Mercor. They interviewed a contractor employee, grabbed credentials, and logged in. Screenshots. Live demos. Proof delivered. And they've been poking around ever since. Not launching attacks, they claim. Just tinkering with the forbidden toy. "The group in question is interested in playing around with new models, not wreaking havoc with them," the source insisted to Bloomberg.

But capabilities like these don't stay playground-bound. Mozilla already tapped Mythos Preview directly from Anthropic to patch 271 Firefox bugs in its latest release. Mozilla CTO Bobby Holley called it a "firehose of bugs," forcing teams to scramble with resources pulled from elsewhere. Wired detailed how this AI shifts vulnerability hunting into overdrive, exposing flaws humans miss -- but demanding discipline to wrangle the flood.

Anthropic moved fast. "We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments," a spokesperson said. No signs of core system compromise, they added.
Yet whispers on X suggest the breach hit multiple unreleased models too. One post from @ns123abc laid it bare: hackers guessed URLs post-Mercor leak, slipped in via lingering contractor creds. The whole pipeline exposed. Posts from @coinbureau and @LarkDavis amplified the alarm, noting restrictions to 40 firms exactly to curb cyber risks.

This isn't isolated sloppiness. The National Security Agency deploys Mythos despite Pentagon labels tagging Anthropic as a supply-chain risk -- a feud spilling into court. Axios reported wider NSA uptake, prioritizing cyber edge over bans. UK counterparts route through the AI Security Institute. Meanwhile, the breach spotlights vendor weak links in AI's high-stakes chain. Contractors like Mercor, hit earlier, leak naming conventions. Guesses turn into gateways. What if next time it's nation-states, not forum dwellers?

Broader ripples hit fast. CNBC aired segments on the leak during 'Fast Money,' with Kate Rooney flagging Silicon Valley tremors. The Financial Times confirmed Anthropic's probe into the 'powerful' model handed to a trusted few. Reddit threads in r/ClaudeAI and r/ClaudeCode buzzed with leaked excerpts, underscoring containment struggles for potent tech.

So where does this leave enterprise AI security? Tools like Mythos promise to outpace human hackers, spotting multi-step chains others ignore -- like a 27-year-old OpenBSD flaw or FreeBSD exploits. But day-one cracks erode trust. Partners demand ironclad isolation; regulators eye tighter controls. Anthropic's "safe AI" badge takes a hit, even as it sues DoD over blacklists. Vendors scramble to audit creds. And those forum users? Still inside, testing limits. One wrong prompt away from chaos.

WebProNews, 18h ago