Anthropic's Mythos model is designed to discover software vulnerabilities, yet its release has stirred concern. Initially introduced under the Project Glasswing initiative, the model was restricted to select organizations for vulnerability assessment. Recent developments, however, reveal that unauthorized access to Mythos occurred, heightening cybersecurity concerns.

Unauthorized Access Incident

On Wednesday, an Anthropic representative confirmed that individuals outside the Glasswing partners might have accessed the Mythos model. This access was not through Anthropic's authorized production API. The spokesperson stated, "We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments." The third-party vendor, linked to Anthropic's model development, has not been publicly identified. According to Bloomberg, a small group exploited knowledge of the model's online location, derived from prior leaks, to gain access.

Mercor Data Breach

This unauthorized access coincided with a data breach at Mercor, an AI staffing firm that supplies contractors to major AI labs. Earlier in the month, Mercor acknowledged being affected by the LiteLLM supply-chain attack. Reports suggested that the intruders, identified as members of a private Discord channel, began accessing Mythos the same day Anthropic announced Project Glasswing.

Mythos' Capabilities and Limitations

Despite its marketing hype, early user feedback about Mythos indicates limitations. While organizations like AWS and Mozilla have praised its speed in identifying vulnerabilities, it has not outperformed elite human cybersecurity researchers. Mozilla's CTO, Bobby Holley, disclosed that Mythos found 271 vulnerabilities in Firefox but acknowledged that any vulnerabilities it discovered could also have been identified by skilled human researchers.
Claims of Overhype

Researchers have raised concerns about the veracity of the claims surrounding Mythos. While Anthropic touted its ability to discover "thousands of high- and critical-severity vulnerabilities," critics argue these numbers are exaggerated. For instance, VulnCheck researcher Patrick Garrity estimated the actual count at around 40, and no confirmed zero-day exploits were documented. Claims regarding the 271 Firefox vulnerabilities were also scrutinized, revealing that most findings stemmed from environments without standard security measures.

Concerns in the Cybersecurity Community

Experts have had mixed reactions to the unauthorized access to Mythos. Snehal Antani, CEO of Horizon3.ai, stated that the security community should not overreact. He emphasized that adversaries do not require Mythos for vulnerability research; existing open-source models already facilitate this process.

* Unauthorized Access: Occurred via a third-party vendor.
* Vulnerability Discovery: Mythos' findings are comparable to those of skilled human researchers.
* Hype vs. Reality: Reports indicate exaggerated claims about Mythos' capabilities.

The incident surrounding Anthropic's Mythos model illustrates the challenges of maintaining security and managing expectations in the rapidly evolving AI landscape. As the investigation continues, the cybersecurity community watches closely, evaluating the model's true potential and implications.

The AI model that Anthropic billed as too dangerous to release has reportedly been accessed by an unauthorized third party, and the incident raises concerns about the future of cybersecurity. The Mythos model was reportedly accessed by a handful of users in a private Discord chat on the day it was announced publicly, Bloomberg reported. Earlier this month, the group was able to access the program in part because one of its members is a third-party contractor for Anthropic, according to Bloomberg. Using this access, the group was able to guess where the model was located, drawing on knowledge of Anthropic's past practices that another group of hackers had obtained from AI training startup Mercor. Although the group that accessed it has not been using the model for cyberattacks, it has been using the program continuously since its release and still has access, the outlet reported. Anthropic did not immediately respond to Fortune's request for comment. A spokesperson from Anthropic told Bloomberg the company was "investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments." The fact that the model was leaked so quickly doesn't surprise David Lindner, the chief information security officer at Contrast Security and a 25-year industry veteran. Even though Anthropic intentionally limited the model to a small group of 40 companies -- including Microsoft, Apple, and Google -- to beef up their security ahead of a wider release, thousands of people likely had access to the program across these companies, which makes a leak nearly inevitable, he said. "It was bound to happen," Lindner said. "The more they add to this elite group, the more likely it was to get released to someone who shouldn't probably have access to it." Anthropic claims its Mythos model is more adept at finding cybersecurity vulnerabilities than previous versions.
The company was able to use the program, which has not been widely released, to find a 27-year-old security vulnerability in OpenBSD, an operating system known for its security. Mozilla on Tuesday also said it used a preview of the model to identify and patch 271 vulnerabilities in its Firefox web browser. And yet, Mythos' release has been plagued by security breaches from the start. Fortune was the first to report on the model's existence thanks to a security lapse that exposed details about the large language model in a publicly accessible database. For Lindner, this most recent unauthorized access shows it's likely U.S. adversaries already have access to this tech, which could put U.S. companies and other systems at risk of attacks. "If some group -- some random Discord online forum -- got access to it, it's already been breached by China," Lindner told Fortune. Although Lindner is still unsure how much of Mythos' supposed danger is real or just marketing hype -- OpenAI's Sam Altman this week called Anthropic's promotion of Mythos "fear-based marketing" -- it's clear cybersecurity professionals, or defenders, need to be ready for a new world of AI attacks. "The real thing is there's a real compression of timelines here for defenders," he said. AI is unique in its ability to execute cyberattacks because it never gets tired, said Lindner. It can relentlessly tackle a weak spot in a company's security system, whereas a human may eventually give up. It also empowers less experienced developers to commit cyberattacks, partly by drawing on the myriad documentation available on the web about previous exploits to inform an AI model and adjust its attacks for specific situations. "It's the folks that have some sort of [developer] background or some sort of technical background that may have had some limitations in the past of getting over things or taking too long to do stuff, it makes this stuff way easier now," he said.
Lindner said the fact that the program was reportedly accessed by third-party contractors means that, even more than before, companies need to limit who has access to their most vital systems. The rapid rise of AI as a tool for cyberattacks could disproportionately affect smaller companies, which may not be able to keep up with the increasing complexity of AI-fueled attacks, said Lindner. Those that refuse to even touch AI and continue on as before are even more at risk, he said. "AI is not a golden ticket, but if you're not taking advantage of it on the defender side, there is no chance, none, that you are going to be able to keep up with the offensive side," he said.

Anthropic's 'Mythos' model is extraordinarily dangerous. The company itself warned that it could autonomously identify and exploit zero-day vulnerabilities in every major operating system, every major web browser, and every critical software library on Earth. And because of this offensive cybersecurity power, Anthropic refused to release Mythos publicly - and instead tightly restricted access through 'Project Glasswing' to roughly 50 carefully vetted organizations - 12 named launch partners plus more than 40 additional critical software and government entities, including the U.S. National Security Agency (NSA). Yet within hours of the limited rollout announcement on April 7, 2026, a small group of unauthorized users in a private Discord server had already broken in. The breach, reported by Bloomberg on Tuesday, reveals how fragile the safeguards around frontier AI models can be. According to the report, the group gained access using a surprisingly low-tech combination: legitimate credentials from a third-party contractor involved in Anthropic's evaluations, plus clever internet sleuthing to guess the hidden API endpoint by reverse-engineering Anthropic's internal naming conventions (patterns inferred from an earlier Mercor data leak). They have reportedly been using Mythos regularly for nearly two weeks. Sources emphasize the usage has been non-malicious so far - things like building simple websites - rather than launching cyberattacks. "We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments," a spokesperson said in a statement, adding that there's no evidence that the access went beyond a third-party vendor's environment or that it is impacting any of Anthropic's systems.

Project Glasswing

In early April, Anthropic launched Project Glasswing, a defensive cybersecurity initiative built around Mythos Preview.
The 12 launch partners included Amazon Web Services, Apple, Microsoft, Google, Cisco, CrowdStrike, Palo Alto Networks, NVIDIA, Broadcom, JPMorgan Chase, and the Linux Foundation, along with over 40 additional critical software organizations. The explicit goal was to give these defenders a head start: let Mythos hunt for vulnerabilities in their own systems and major open-source projects before malicious actors could weaponize the same capabilities. Anthropic's own red-team testing reportedly showed Mythos could find and chain complex zero-days that had remained hidden for decades in software like Linux, OpenBSD, and FFmpeg. Even as the Pentagon formally labeled Anthropic a "supply-chain risk" in March 2026 - citing the company's refusal to remove ethical guardrails that would allow its models to be used for mass domestic surveillance and autonomous weapons - other key parts of the U.S. government have moved with urgency to embrace the very same technology. The National Security Agency is already actively using Claude Mythos Preview, while the White House's Office of Management and Budget circulated an internal memo on Monday directing federal agencies to begin leveraging the model for vulnerability discovery in government networks. The Treasury Department has been particularly aggressive, rushing to secure access and convening major bank CEOs for urgent red-teaming sessions after being warned that Mythos could "hack every major system."

A Low-Tech Breach

The unauthorized access was deceptively simple. One member of the Discord group (a private forum focused on hunting unreleased AI models) had legitimate access as a worker at a third-party contractor. Using knowledge of Anthropic's naming patterns, the group correctly guessed the private API endpoint for Mythos Preview on the very same day the limited release was announced. Once inside, they continued using the model without triggering obvious alarms.
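The endpoint-guessing described above works because predictable naming fragments multiply into a small, enumerable search space. As a purely hypothetical sketch for defenders auditing whether their own URLs are guessable (every pattern below is invented; none reflects Anthropic's real naming scheme):

```python
import itertools

# Hypothetical naming fragments -- invented for illustration only.
# The point is how quickly predictable pieces compound into guessable URLs.
families = ["claude", "mythos"]
stages = ["preview", "beta", "internal"]
dates = ["2026-04", "2026-03"]

def candidate_endpoints(base="https://api.example.com/v1/models"):
    """Enumerate every endpoint URL derivable from the known name fragments."""
    for family, stage, date in itertools.product(families, stages, dates):
        yield f"{base}/{family}-{stage}-{date}"

candidates = list(candidate_endpoints())
print(len(candidates))  # 12 -- a 2 x 3 x 2 grid, trivially small to probe
print(candidates[0])    # https://api.example.com/v1/models/claude-preview-2026-04
```

An operator who can generate their own endpoint names this way should assume an outsider with one leaked example can too.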
So, here's where we are: these AI models are becoming so powerful that even their creators treat them with extreme caution - yet the operational security surrounding them can still fall to basic tactics like credential misuse and URL guessing. As of Wednesday, Anthropic has offered no further updates on its investigation, no timeline, and no announcement of technical fixes such as credential rotation or endpoint randomization. There is still no public evidence of malicious use by the Discord group - however, the breach raises serious questions about how many other restricted AI systems might be leaking through similar third-party or supply-chain vulnerabilities.
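One of the fixes the paragraph above notes has not been announced, endpoint randomization, is straightforward in principle: replace a name derived from conventions with an unguessable, rotatable token. A minimal sketch (the path scheme and function name are invented for illustration) using Python's standard `secrets` module:

```python
import secrets

def randomized_endpoint(model_name: str,
                        base: str = "https://api.example.com/v1") -> str:
    """Attach an unguessable, rotatable token to a model's private endpoint.

    token_urlsafe(32) encodes 32 random bytes (~256 bits of entropy) as a
    43-character URL-safe string, so the path cannot be derived from naming
    conventions; rotating the token invalidates any leaked URL without
    touching user credentials.
    """
    token = secrets.token_urlsafe(32)
    return f"{base}/{model_name}/{token}"

print(randomized_endpoint("mythos-preview"))
```

The token is a routing secret, not an authentication mechanism; in practice it would complement, not replace, credential checks on the endpoint itself.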

Your plants and your people are caught between a game-changing tool and a historic level of risk. In case you somehow missed it, Anthropic's Project Glasswing recently set off alarms throughout the cybersecurity and AI communities. Glasswing is a security initiative in which Anthropic has been working with Apple, Microsoft, Google, Amazon, and about 40 other key AI players in a sort of focus-group setting to assess an AI model named Mythos Preview that runs on its Claude platform. The goal of Mythos is to proactively detect and patch software vulnerabilities - ideally preventing them from being exploited by malicious actors and protecting critical infrastructure by applying AI-powered offensive security techniques. Due to Mythos' powerful potential, Anthropic meant to limit initial access. However, unauthorized users were able to gain access to the platform via third-party vendor credentials. These parties are part of a Discord online forum group known to search for information about unreleased AI models. After obtaining access, the group proceeded to publicize the ease with which Mythos is able to identify vulnerabilities. In the wrong hands, this tool offers hackers the ability to attack at a speed which would be nearly impossible to stop. Or, Mythos could be vital in helping defenders finally operate from a proactive posture, instead of constantly playing catch-up. As you can imagine, there was a passionate response. Shane Fry, Chief Technology Officer, RunSafe Security: "Unauthorized users were able to access Anthropic's Mythos model, reportedly by just changing a model name. Even if their intent is just to explore, it shows how easily these systems can be exposed. "The reality is these AI capabilities are already out there, 'hacked' or not, and they're going to accelerate how quickly vulnerabilities are found and exploited. Software teams will need to look at how to harden their code so those vulnerabilities can't be used in the first place."
Agnidipta Sarkar, Chief Evangelist at ColorTokens: "While Anthropic is investigating, the only information publicly available so far is that the attack used the oldest trick in the book, impersonating someone with existing access. The users reportedly guessed the model's URL based on knowledge of Anthropic's patterns for other models. The good news is that Anthropic detected the breach and contained it to that specific vendor's environment. "One of the key controls that every modern environment needs is microsegmentation, which can effectively reduce the blast radius to specific vendors and leave no elbow room for attackers to navigate. I am hoping Anthropic is using similar controls to keep the attack contained, such as zero-trust mechanisms." Tim Mackey, Head of Software Supply Chain Risk Strategy at Black Duck: "The unfortunate reality is that while it's great to hear that novel cybersecurity models are being provided to select researchers to evaluate, if your team is on the outside looking in, waiting for the final report might not be top of mind. For defenders, even the specter of unauthorized access to an adversarial model as powerful as Mythos is purported to be only increases anxiety levels. "What's clear is that security leaders in organizations of all sizes should take this claim as a call to action focused on the role AI-enabled cybersecurity plays in their operations and how best to scale those efforts to deal with AI-enabled adversaries." John Gallagher, Vice President, Viakoo: "There has always been an arms race between cyber defenders and cyber attackers, and Mythos is currently the most powerful armament available. If we do not know whose hands it is in, it should be viewed no differently than uncontrolled distribution of enriched uranium. "If true, this deeply undermines Project Glasswing, which was set up explicitly to give cyber defenders early access to Mythos Preview in order to define and mount defenses against it.
Threat actors having early access to Mythos Preview puts them on the same footing (or possibly with advantages) versus cyber defenders. "Uncontrolled access to Mythos Preview will hit hardest on operators of critical OT, IoT, and ICS systems. Already knowing the fifty IT organizations with early access to Mythos would naturally focus threat actors on targets outside of those 50 companies, most likely non-standard operating systems that are prevalent in OT and IoT. "If the model has escaped Pandora's Box, there should be immediate validation and public notification of it. Since that has not happened here, it is likely that there was not significant exposure. However, there has never been a prize as valuable to cyber criminals before as early access to Mythos Preview; it potentially can open all bank accounts and reveal all secrets. "Threat actors are highly sophisticated, very well-funded, and determined. We are in a race to harden systems and have rapid patching at high scale in place before threat actors can leverage Mythos Preview." Nicole Carignan, SVP, Security & AI Strategy, and Field CISO at Darktrace: "This highlights the continued weaponization of commercial tooling. Frontier and near‑frontier models are increasingly dual‑use by default. Capabilities designed to improve software quality and security can be repurposed with minimal friction to accelerate vulnerability discovery for malicious ends. This is not a failure of intent; it is an outcome of scale, accessibility, and capability diffusion. "These models will continue to be a target for threat actors to gain access to in order to achieve initial access capabilities to organizations. More concerning is access to critical vulnerabilities that have not yet been released to the public. Possession of undisclosed, high‑severity vulnerabilities enables threat actors to facilitate more sophisticated and scaled access to organizations through exploiting an 'unknown' vulnerability. 
"It is also important to be realistic about containment. This was never going to be contained to a single model, organization, or access control failure. Threat actors do not need this system; they need a system with sufficient capability. Whether through parallel development, model leakage, fine‑tuning, or the combination of multiple weaker models and tools, similar outcomes can be achieved. "The strategic mistake would be to treat this as an isolated incident rather than a signal. Advanced vulnerability discovery capabilities will continue to proliferate, and the window between discovery and exploitation will continue to shrink. This reinforces the need for scaled visibility, behavioral analytics, anomaly detection, and autonomous containment across endpoints, cloud, identities, SaaS, and critical infrastructure. "Finally, this is another reminder that investment in AI adoption without commensurate investment in security and risk management is unsustainable. Resilience will depend less on how quickly vulnerabilities can be patched, and more on how effectively exploitation can be detected and contained when prevention inevitably fails." Reach Security's Co-founder and CEO, Garrett Hamilton: "There is only one viable response to AI-powered attacks: AI-powered defense. "If a model can discover and exploit unknown weaknesses at machine speed, the defense playbook must change just as fast. Security teams can't rely on periodic scans and manual hardening; they need always-on visibility of their real exposures and clear prioritization of what to fix first. "However, vulnerabilities should not be the only concern. These are researched routinely by vendors and the cybersecurity community, with patches regularly released. In short, organizations have a fighting chance when it comes to spotting and fixing software vulnerabilities. "Misconfigurations, on the other hand, have no patches and can offer direct access into an environment. 
They arise unnoticed over time as networks, software, users, and policies change. They're also far more pervasive than many teams realize: our research found 97 percent of organizations suffered a breach or near miss in the past year due to a security-tool misconfiguration, and it takes 8.3 days on average to remediate once identified. That is more than enough time for an AI-enabled attacker to take advantage. "The new standard is simple: fight AI with AI, and close the gaps before they become incidents."

According to reporting by Bloomberg, a small number of people who are members of a private Discord channel dedicated to researching unreleased AI models have had unofficial access to Mythos since it was first announced. Getting in was apparently simple, too. "To access Mythos, the group of users made an educated guess about the model's online location," Bloomberg said in an article published on April 21. "They based this on knowledge about the format Anthropic has used for other models, the person said, adding that such formatting details were revealed in a recent data breach from Mercor, an AI training startup that works with a number of top developers." Anthropic said it was aware of the access and was investigating the report. Shane Fry, Chief Technology Officer at RunSafe Security, said it was an example of how easily AI models are commonly exploited. "Unauthorised users were able to access Anthropic's Mythos model, reportedly by just changing a model name. Even if their intent is just to explore, it shows how easily these systems can be exposed," Fry said. "The reality is these AI capabilities are already out there, 'hacked' or not, and they're going to accelerate how quickly vulnerabilities are found and exploited. Software teams will need to look at how to harden their code so those vulnerabilities can't be used in the first place." Germaine Tan Shu Ting, VP Security & AI Strategy and Field CISO at Darktrace, expressed similar concerns. "It shows that the frontline remains identity," Tan Shu Ting said. "If Anthropic itself can be accessed using traditional hacking methods (reportedly coopting existing third-party access and 'internet sleuthing'), then it highlights how critical it is to assume the threat is already inside the walls."
However, while analysts and industry insiders have reacted to Mythos with something like awe, the actual capabilities of the model may, in reality, fall far short of Anthropic's claims.

Don't believe the hype?

Doug Britton, EVP and chief strategy officer of RunSafe Security, referred to Mythos and Project Glasswing earlier in April as a "watershed moment for AI's runaway zero-day discovery and exploitation". "AI is now uncovering memory safety bugs at massive scale, including vulnerabilities that have been hiding in production code for over 25 years - the problem isn't just that these bugs exist, it's that they're being found faster than organisations can fix them," Britton said. But the question is - are they being found that fast? Davi Ottenheimer, security engineer and president of security consultancy flyingpenguin, has some serious doubts. "The supposedly huge Anthropic 'step change' appears to be little more than a rounding error. The threat narrative so far appears to be ALL marketing and no real results," Ottenheimer said in a blog post around the time Mythos and Glasswing were announced. "The Glasswing consortium is regulatory capture dressed up poorly as restraint." Ottenheimer based his observations - rather caustic ones, it must be said - on Anthropic's own Claude Mythos Preview System Card, a "whoppingly inefficient 244-page document that devotes just seven pages to the claim that the model is too dangerous to release". According to Ottenheimer, only seven of those pages do not mention the acronyms one might expect: CVSS, CWE or CVE. "The flagship demonstration document turns out to be like the ending of the Wizard of Oz, a sorry disappointment about a model weaponising two bugs that a different model found, in software the vendor had already patched, in a test environment with the browser sandbox and defence-in-depth mitigations stripped out. Anthropic failed, and somehow the story was flipped into a warning about its success."
Ottenheimer has many issues with Anthropic's - and, it must be said, the wider media's - claims that Mythos found "Thousands of zero-day vulnerabilities in every major operating system and every major web browser", and he pulls no punches. Referencing that claim, Ottenheimer points out that the word 'thousands' is "used once, in reference to transcripts reviewed during the alignment evaluation". "It is never used to describe vulnerabilities. The cyber security section (Section 3, pages 47-53) contains no count of zero-days at all," Ottenheimer said. "With no CVE list, no CVSS distribution, no severity bucket, no disclosure timeline, no vendor-confirmed-novel table, no false-positive rate, why are you teasing us with the claims about vulnerabilities at all?" Cyber Daily has reached out to Anthropic for comment.

Any deployment needs to comply with RBI's data localisation requirements

The Reserve Bank of India (RBI) is said to be holding talks with global regulators, domestic lenders, and government officials to assess potential risks linked to Anthropic's new artificial intelligence (AI) model, Mythos. According to a report, RBI's preliminary assessment points towards Mythos potentially raising cybersecurity concerns by expediting the discovery and exploitation of software vulnerabilities. The development comes following reports of unauthorised personnel gaining access to Anthropic's Mythos, which is touted to be "so powerful that it could enable dangerous cyberattacks".

RBI Evaluating Mythos-Linked Cybersecurity Risks

Citing sources familiar with the matter, Reuters reports that the RBI, over the past two weeks, has held consultations with counterparts around the world. This reportedly includes the Federal Reserve and the Bank of England, intending to understand the emerging risks and safeguards. "Globally, we are discussing with other countries and other regulators on what are the developments and what safeguards need to be taken," the publication quoted one source as saying. The report states that the RBI may also pursue direct engagement with Anthropic. Further, regulators across Asia, Europe, and the US are said to have advised banks to review their cybersecurity preparedness. The National Payments Corporation of India (NPCI), which facilitates payment services like UPI in the country, is said to be exploring early access to Mythos alongside a small group of banks, too. Citing a source, Reuters reported that this is to identify potential "day-zero" vulnerabilities before any wider rollout, although any such access could be restricted. Mythos is said to be hosted on tightly controlled servers in the US. Consequently, running any tests on local datasets in foreign jurisdictions could pose regulatory and technical challenges.
The RBI is also said to be working on broader guidelines for banks entering enterprise partnerships with advanced AI models, including Anthropic's Mythos and Claude family. However, any deployment involving Indian user data would need to comply with the RBI's data localisation requirements, the publication noted a source as saying.

Concerns Over Unauthorised Access to Mythos

The regulatory discussions come shortly after reports that a small group of unauthorised users had gained early access to Mythos. According to Bloomberg, the model, which Anthropic itself has described as highly powerful, was accessed via a private Discord group on the same day it was announced for limited testing. While the group reportedly did not use the model for malicious purposes, the incident has raised concerns about potential misuse. At the time, screenshots appearing to show a Mythos dashboard were shared by the group. These included user management panels, AI experiment interfaces, and detailed analytics for model performance and costs. Anthropic is currently probing the matter. "We're investigating a report claiming unauthorised access to Claude Mythos Preview through one of our third-party vendor environments," the company said in a statement.

The situation surrounding the Druzhba oil pipeline in April 2026 has evolved into a complex knot of energy, political and financial contradictions within Europe. What has come to the fore is not so much the technical condition of the infrastructure itself as the use of oil transit as a tool of pressure and negotiation. According to statements by Hungarian Prime Minister Viktor Orban, Budapest received a signal through EU structures that Ukraine was ready to resume the transit of Russian oil as early as 20 April, but on the condition that Hungary lift its veto on a €90 billion EU loan to Kyiv. Hungary's response was firm and succinct: "no oil, no money". Budapest insists that physical supplies must first be restored, and only then can concessions on the loan be discussed. This reflects Hungary's pragmatic approach, in which energy security is prioritised over pan-European solidarity. Kyiv, for its part, officially explained the halt in transit by citing infrastructure damage. However, representatives from several Central European countries have questioned this explanation. In particular, Slovak politicians argue that the pipeline remains operational and that the suspension is political in nature. Against this backdrop, Ukraine's refusal to allow independent experts to inspect the pipeline has only intensified suspicions. Moreover, according to some expert assessments, the functioning of Druzhba is being used by Kyiv as leverage to accelerate the receipt of European financing -- a view echoed in Russian media. It should be noted that Ukraine is at war with Russia and has the full moral right to halt the transit of Russian energy resources through its territory. Had such a decision been made at the outset of the war, this issue might not be on the agenda today. However, a paradoxical situation emerged: throughout the war, Russian hydrocarbons continued to transit via Ukraine even as Russian forces struck Ukrainian cities.
This created a deeply contradictory reality. Europe's dependence on Russian resources meant that Kyiv hesitated to shut down the pipeline, fearing backlash from European partners and a reduction in financial and military support. Even now, Kyiv frames the disruption in terms of technical malfunction rather than a deliberate decision to limit Russian budget revenues. Over the course of the war, Russia's budget has been replenished by hundreds of billions of dollars, offsetting much of the impact of sanctions. Despite successive EU sanctions packages, Moscow's capacity to sustain the war has, in many respects, continued to grow. According to Bloomberg, Ukraine is set to begin technical testing of the Druzhba pipeline on 21 April to restore supplies to Hungary. The demands voiced by Orban stem directly from Hungary's national interests. The country faces greater difficulty in adapting to supply disruptions than leading EU economies, and an energy crisis would pose serious challenges that Hungary may struggle to manage. Additional complexity is introduced by Hungary's domestic political landscape. Following the electoral defeat of Orban's party, it was expected that the incoming prime minister, Peter Magyar, would decisively reject Russian fuel and support the multi-billion-euro loan to Ukraine. However, developments have not followed that script. Magyar's position appears more flexible, yet in substance it echoes Orban's policy. On the one hand, he confirmed that Hungary does not intend to block a pan-European decision on the loan to Ukraine. On the other, Budapest still refuses to participate in financing it and firmly rejects external pressure. Magyar directly called on Ukrainian President Volodymyr Zelenskyy to abandon what he described as "blackmail" and to resume oil supplies without preconditions. 
Thus, despite a change in leadership, Hungary's strategic line remains unchanged: the protection of national energy interests and resistance to linking economic decisions with political demands. According to Izvestia, Hungary had fuel reserves sufficient for about 90 days at the end of January, but these have now declined to approximately 30 days. While there is an alternative route via Croatia, oil transported this way would be significantly more expensive. For Hungary and Slovakia, supplies via Druzhba are critically important. Central European countries remain heavily dependent on stable transit through Ukraine. Brussels, meanwhile, faces a dual challenge. On the one hand, it is interested in maintaining oil transit through Druzhba, as a sharp reduction in supply could exacerbate the energy situation. On the other hand, the EU continues to pursue a gradual phase-out of Russian energy resources, theoretically to be completed by 2027. However, experts suggest that this timeline may be extended, as Europe is already encountering practical difficulties.

Source: BBC

At present, Druzhba has clearly become more than infrastructure -- it is an instrument. It is likely that Kyiv will eventually make concessions and at least partially allow the pipeline to resume operations, recognising that without this step, financial assistance may not materialise. Countries such as Hungary and Slovakia are expected to hold their ground, prioritising energy stability. For the European Union as a whole, additional pressure on already volatile energy markets is undesirable. On Tuesday, it was also reported that another politician favouring cooperation with Russia is returning to power. Bulgaria's future prime minister, Rumen Radev, announced his intention to build respectful and balanced relations with Russia.
It is worth recalling that during his presidency, he repeatedly vetoed decisions on supplying Ukraine with armoured vehicles and air defence systems, although those vetoes were overridden by parliament. Now, after a decisive electoral victory, his coalition holds a strong parliamentary majority. In any case, the long-term prospects for resolving Europe's energy stability challenges remain uncertain. Even if supplies are restored, they will likely continue to be accompanied by persistent political risks and will depend heavily on the evolving dynamics between Kyiv, Brussels and individual EU member states.

The Anthropic AI model deemed a danger to cybersecurity may need to be more secure itself. An anonymous group of Discord users says it hacked its way into accessing Claude Mythos Preview, the new AI model Anthropic claims is too powerful for a public release. Anthropic says Claude Mythos "is capable of identifying and then exploiting zero-day vulnerabilities in every major operating system and every major web browser," and has granted access to the model to a select group of partners via an initiative called Project Glasswing. The AI company said this invite-only approach would let tech leaders "secure the world's most critical software." But it might need to pay more attention to its own software security. As Bloomberg reports, the Discord users didn't gain access through a sophisticated hack, but by guessing the online location for the model based on past Anthropic naming conventions -- as found in the recent data breach at Mercor, an AI startup, earlier this month. Once they identified where to access Claude Mythos, the group had to employ additional tactics. One member of the group already had privileged access as a worker at a third-party contractor for Anthropic, Bloomberg reports. The group was part of a private Discord channel that focuses on hunting information about unreleased models. A member of the group told Bloomberg that they were not using Claude Mythos for nefarious purposes, but for tasks like building simple websites. However, they also claimed to have access to even more unreleased Anthropic models. The group provided enough evidence to convince Bloomberg they had indeed breached Anthropic's security. Anthropic confirmed in a statement to Bloomberg it was aware of the claim and investigating. At this time, there is no indication that Claude Mythos has been breached by other unauthorized parties. 
Still, given that Anthropic described Claude Mythos as a paradigm-shifting security threat that could "reshape cybersecurity" as we know it, any unauthorized access is -- to say the least -- concerning. Want to learn more about getting the best out of your tech? Sign up for Mashable's Top Stories and Deals newsletters today.

A group of unauthorized users has reportedly gained access to Mythos, the powerful cybersecurity tool recently unveiled by Anthropic, TechCrunch reported. This development is significant because Anthropic has explicitly warned that Mythos is capable of identifying and exploiting vulnerabilities in every major operating system and every major web browser when directed by a user to do so. The company has framed this technology as a double-edged sword. They previously noted that in the wrong hands, it could become a potent hacking tool rather than the defensive asset it was designed to be for enterprise security. The unauthorized access was reportedly achieved by a small group of users operating within a private online forum. According to reports, these individuals managed to secure access to the tool on the same day it was publicly announced by Anthropic. The group, which is part of a Discord channel dedicated to hunting for information about unreleased AI models, used a mix of strategies to bypass restrictions. Perhaps most concerning is how the group managed to pinpoint the location of the model. They made an educated guess about its online location, relying on their existing knowledge of the naming conventions and formats Anthropic has used for previous models. This effort was reportedly aided by information revealed in a recent data breach from Mercor, an AI training startup that works with top developers. Furthermore, the group leveraged access provided by a person who is currently employed at a third-party contractor that works for Anthropic. This individual, who was interviewed about the breach, had legitimate permission to access Anthropic models and software related to evaluating the technology for the startup, which they gained through their contract work. Anthropic has been very cautious with the distribution of Mythos. The model was released only to a select number of vendors and organizations as part of an initiative called Project Glasswing.
This limited release was specifically designed to prevent the tool from falling into the hands of bad actors who might weaponize it against corporate security. Big names like Apple, Amazon, and Cisco Systems are among the organizations that have been granted access to test the model. Amazon, which is a key partner and backer of Anthropic, also offers Mythos through its Bedrock platform to a very specific, approved list of organizations. As the utility of the tool has become known, a growing number of financial institutions and government agencies on both sides of the Atlantic have been clamoring to get on that list to better safeguard their own systems. In response to the reports, an Anthropic spokesperson provided a statement, saying, "We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments." The company has been quick to clarify that, so far, it has found no evidence that this unauthorized activity has impacted Anthropic's internal systems in any way. They maintain that the access appears to be contained within a third-party vendor's environment. While the situation sounds alarming, the source who spoke about the breach offered some perspective on the intentions of the group. The individual claimed that the users involved are primarily interested in playing around with new models rather than wreaking havoc. They have reportedly avoided running cybersecurity-related prompts on the Mythos model, choosing instead to experiment with tasks like building simple websites to avoid detection. The person also noted that this group has access to a variety of other unreleased Anthropic AI models, suggesting a broader scope of interest in the company's pipeline. This incident highlights the massive challenge Anthropic faces in keeping its most powerful and potentially dangerous technology from spreading beyond its approved partners. 
If these reports are accurate, it raises serious questions about how many other people might be using Mythos without permission and what their true objectives might be. For now, Anthropic is left to manage the fallout of this unauthorized access, which could potentially threaten the reputation of an exclusive release intended to bolster enterprise security. It is a stark reminder that even with strict initiatives like Project Glasswing, the digital perimeter is only as strong as its weakest link, especially when third-party vendors are involved in the deployment of such high-stakes software.
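The "educated guess" described in these reports is essentially pattern-based enumeration: take identifiers observed in past releases and project them onto a new model name. A minimal sketch of the idea in Python; every pattern and name below is a hypothetical illustration, not a real Anthropic endpoint or naming scheme:

```python
# Sketch of naming-convention guessing: given identifier patterns observed
# in past releases, generate candidate identifiers for an unreleased model.
# All patterns and names here are hypothetical illustrations.

def candidate_slugs(model_name: str, patterns: list[str]) -> list[str]:
    """Fill each historical pattern with the new model name."""
    name = model_name.lower().replace(" ", "-")
    return [p.format(name=name) for p in patterns]

# Patterns "inferred" from earlier (hypothetical) releases.
patterns = [
    "claude-{name}-preview",
    "claude-{name}-internal",
    "{name}-eval",
]

print(candidate_slugs("Mythos", patterns))
```

The same trivial string templating works in both directions: a defender can generate the identical candidate list and watch for probes against it before a release goes out.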

And that unauthorized access? 'A nothing burger,' hacking startup CEO tells El Reg

Anthropic's Mythos model is purportedly so good at finding vulnerabilities that the Claude-maker is afraid to make it available to the general public for fear that criminals will take advantage. But early analysis shows that Mythos may not be as dangerous as some would have you believe. Anthropic made Mythos available in preview to a select but ever-growing number of organizations under the title of Project Glasswing so they could find and fix vulnerabilities in their environment before criminals got hold of the purported zero-day machine and caused mayhem. That plan didn't quite work as intended. On Wednesday, an Anthropic spokesperson confirmed to The Register that some non-Glasswing partners may have accessed the model - but not through Anthropic's production API. "We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments," the spokesperson told us. The AI biz declined to name the third-party vendor, but said that it's a company Anthropic works with on model development. There's no evidence that unauthorized activity extended beyond the third-party vendor's environment or that Anthropic systems are affected, we're told. Bloomberg, which originally reported the unauthorized access, said that "a handful" of people gained access to Mythos by making "an educated guess about the model's online location" based on Anthropic's previous models, and that these details were revealed in the recent Mercor data breach. Mercor is an AI staffing startup that supplies specialized contractors to major AI labs, including Anthropic. Earlier this month, Mercor said that it was "one of thousands of companies" affected by the LiteLLM supply-chain attack. This group of unauthorized users reportedly belongs to a private Discord channel and gained access to Mythos on the same day that Anthropic announced Project Glasswing.
Since then, it's been "playing around" with the bug-hunting machine, and doesn't have any interest in using the model for evil, according to Bloomberg. Regardless of what the group is doing with Mythos, their access illustrates a couple of key points. First: it's really hard to keep code under wraps (as also evidenced by Anthropic's earlier Claude Code source leak), especially when the folks who want to kick the tires on the new model are cybersecurity and engineering types - and they didn't even need to hack into any network or database to do it. Insider and supply-chain threats are the real deal. "The Mythos breach didn't require a sophisticated attack," Ram Varadarajan, CEO at Acalvio, a deception-tech firm, told The Register. "It just required a contractor, a URL pattern, and a day-one guess, which means the 'controlled release' model failed at its weakest link before the model's capabilities were ever the issue." Additionally, considering all the hype Anthropic spun around its new model, we shouldn't be surprised the genie is out of the lamp. "Anthropic's marketing message for Mythos was effectively a challenge, not dissimilar to a capture-the-flag exercise, where success includes claims of unauthorized access to Mythos," Tim Mackey, head of risk strategy at supply chain security shop Black Duck, told The Register. That marketing may have outstripped reality. Early reports from Mythos preview users including AWS and Mozilla indicate that while the model is very good and very fast at finding vulnerabilities, and requires less hands-on guidance from security engineers - making it a welcome time-saver for the human teams - it has yet to eclipse human security researchers.
"So far we've found no category or complexity of vulnerability that humans can find that this model can't," Mozilla CTO Bobby Holley said, after revealing that Mythos found 271 vulnerabilities in Firefox 150. Then he added: "We also haven't seen any bugs that couldn't have been found by an elite human researcher." In other words, it's like adding an automated security researcher to your team. Not a zero-day machine that's too dangerous for the world. It's a nothingburger.

The adversary doesn't need Mythos to hack you

Anthropic, in announcing the new model, claimed Mythos identified "thousands of additional high- and critical-severity vulnerabilities." VulnCheck researcher Patrick Garrity, however, put the count as of last week at maybe 40. Or maybe none at all. Another engineer, Devansh, scoured the Mythos-related CVE advisories and Anthropic's exploit code, 44-prompt transcript, and 244-page system card, along with Glasswing partner agreements and red-team writeups. He also looked at Aisle's replication study, which tested Mythos' showcase vulnerabilities on small, cheap, open-weights models and found they produced much of the same analysis. Devansh ultimately concluded that while the bugs it found are real, the true Mythos story is "one of misinformation and hype." For example, the Anthropic-claimed 181 Firefox exploits ran with the browser sandbox turned off, and the FreeBSD exploit transcript "shows substantial human guidance, not autonomy." Additionally, the "'thousands of severe vulnerabilities' extrapolates from 198 manually reviewed reports. The Linux kernel bug was found by Opus 4.6, the public model, not Mythos," Devansh said. Another researcher, Davi Ottenheimer, pointed out that the security section (Section 3, pages 47-53) of Anthropic's 244-page documentation "contains no count of zero-days at all. With no CVE list, no CVSS distribution, no severity bucket, no disclosure timeline, no vendor-confirmed-novel table, no false-positive rate."
Ottenheimer likens it to "the ending of the Wizard of Oz, a sorry disappointment about a model weaponizing two bugs that a different model found, in software the vendor had already patched, in a test environment with the browser sandbox and defense-in-depth mitigations stripped out." Snehal Antani, co-founder and CEO of offensive AI hacking company Horizon3.ai, told The Register, "attackers didn't need Mythos to accelerate vulnerability research, 4.6 and open source models have already been accelerating the vulnerability process." When asked if the security community should be concerned about unauthorized Mythos access, Antani said no. "In my honest opinion, it's a nothingburger," he told us. "The adversary doesn't need Mythos to hack you." ®

Suswati Basu is a multilingual, award-winning editor. She was shortlisted for the Guardian Mary Stott Prize and longlisted for the Guardian International Development Journalism Award.

Anthropic is investigating reports that Claude Mythos Preview, an unreleased version of its AI model, may have been accessed without authorization through a third-party vendor environment tied to development work. Speaking to Techopedia, an Anthropic spokesperson said: "We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments." Only a week after releasing the Claude Mythos Preview to a select group of organizations, people familiar with the matter said the reported activity appears linked to an external development platform rather than Anthropic's production API systems. The sources added there is no evidence at this stage that the incident extended beyond that external environment or affected the company's internal infrastructure.

Anthropic's Mythos Model Is Already Being Put To The Test

Anthropic has not said whether any data was removed or when the alleged access took place. However, Bloomberg reported that it took place on a "private online forum." The users were reportedly part of a private Discord group focused on uncovering details about unreleased AI models, using bots to scan unsecured websites such as GitHub for stray references posted by major labs. The news outlet reported that the group gained access to Mythos after some members made an educated guess about the model's online location using naming patterns Anthropic had used for earlier releases. Some of those clues allegedly emerged from a recent data breach involving Mercor, a startup that works with several leading AI developers. The irony is hard to miss. Anthropic positioned Mythos as a model so powerful that it required an unusually cautious rollout.
It limited access to a small number of trusted partners because of fears it could be misused by hackers or destabilize cybersecurity defenses. Yet the first major controversy surrounding the system is not what Mythos itself might do. It's the possibility that third parties have already gained access through the carelessness of an external development partner. For a company that has built much of its identity around AI safety and controlled deployment, this risks reinforcing a familiar lesson in tech: A system is only as strong as the weakest link in its wider supply chain. Quite often, that weak link is basic human nature.

Also in Tech News

Tim Cook Steps Down As Apple Addresses Its AI Problem

After more than a decade at the helm, Apple's head honcho Tim Cook passed the baton to John Ternus, signaling a change in direction for the $4 trillion company. In a statement, the 65-year-old said Ternus would attempt to "make something better, bolder, more beautiful, and more meaningful." Ternus has been serving as the tech giant's senior vice president of Hardware Engineering. The changing of the guard comes at a time when Apple appears to have stalled in the AI race against the likes of OpenAI, Google, and Grok. Cook's tenure as CEO will end on September 1, bringing to an end an era defined by operational efficiency and financial growth. Although he ushered Apple into its trillion-dollar era, Cook has often lived in the shadow of his predecessor. Analysts have built a mythos around company cofounder Steve Jobs, next to whom Cook has seemed perhaps too straight-laced. Now, Ternus will be expected to step up as both a master of managing sprawling operations and an innovation wizard for this new tech era. Ming-Chi Kuo, a tech analyst at TF International, wrote on X that one of Ternus's major achievements was overseeing the transition from Intel processors to the firm's own proprietary silicon.
Kuo added: "Without this, there wouldn't be the success of today's MacBook Neo or the advantage Apple now holds as it gears up for AI devices."

Meta Plans to Track Employee Keystrokes for AI Training

Meta has found itself in hot water after reports emerged that it plans to track the computer activity of U.S. employees to help train its AI models. The software is expected to capture mouse movements, clicks, and keystrokes as the company looks to build AI agents capable of working more autonomously, Reuters first reported, citing an internal memo. According to the report, the company's Model Capability Initiative tool would run across work-related apps and websites, while also taking occasional snapshots of content displayed on employees' screens. Techopedia contacted Meta for comment, but an initial email bounced back. We will continue to seek a response. The move has already drawn criticism from privacy and ethics experts. Veith Weilnhammer, a Max Planck Fellow in Computational Psychiatry, wrote on LinkedIn: "Beyond questions about AI systems that emulate human behavior, such as their impact on the job market, privacy, and the growing commercial value of human behavioral knowledge, this raises an important societal issue: How should we govern access to human-computer interactions, and how can these data be used for public good?" For now, the data collection is reportedly limited to the U.S., with stricter privacy rules likely to make a similar rollout more difficult in Europe.

UK Cyber Chief Warns Frontier AI Is Accelerating Exploit Discovery

Britain's top cybersecurity official is expected to warn that frontier AI models are making it easier to discover and exploit software flaws at scale, as the UK confronts a rising mix of technological disruption and geopolitical threats.
In remarks due to be delivered at the CYBERUK conference in Glasgow on Wednesday (April 23), National Cyber Security Centre chief executive Richard Horne is set to say that while AI has the potential to strengthen cyber defense, adversaries will also move quickly to weaponize the technology. Politico reported Horne will caution that frontier AI is already "rapidly enabling discovery and exploitation of existing vulnerabilities at scale," increasing pressure on organizations to patch systems, replace legacy technology, and improve basic cyber hygiene. Researchers said Anthropic's Mythos, for example, was too dangerous for general release because of its alleged ability to help users identify and exploit sophisticated vulnerabilities. And just like that, we've gone full circle back to Anthropic.


If you're here, you're likely looking for a comparison of Perplexity vs. Claude that goes beyond a generic overview. The lines between a "smart chatbot" and full-fledged AI assistant software are blurring fast. Your choice of platform will impact your workflows, your data handling, and potentially even your customer experience. This comparison will help you cut through the noise and make a call that's both strategic and scalable. As someone who has explored both tools in depth, I have put them head-to-head across real-world use cases. The short answer? Neither tool wins outright. The better choice depends on what you're actually doing.

TL;DR: From what I saw, Perplexity and Claude are distinct AI tools. Perplexity is a specialized, source-cited search engine for research and real-time information, while Claude is a highly capable, large-context conversational model designed for executing tasks like reasoning, writing, and coding.

* Choose Perplexity if your work is research-heavy and citation-backed answers matter. It's still the stronger pick for fast, sourced, real-time information retrieval.
* Choose Claude if you need a thinking partner for writing, coding, or working through complex documents. Its conversational depth and context handling are best-in-class.

I hope this comparison saves you time, effort, and a lot of trial and error when choosing between the two popular chatbots.

Perplexity vs. Claude: What's different and what's not?

After spending a lot of time with these two AI chatbots, I wanted to pinpoint where they diverge and where they overlap. Here's my take on the main differences and similarities between Perplexity and Claude.

What are the key differences between Perplexity and Claude?

Below are some primary differences between Perplexity and Claude.

* Context management: Claude feels more human-like and engaging in conversation. Users on G2 consistently rate Claude higher for natural conversation (93% vs Perplexity's 88%).
It tends to remember context better in long chats as well. On G2, Claude scored 87% in context management vs Perplexity's 85%. If you refer back to something said 10 messages ago, Claude is less likely to get confused. Perplexity's style is more utilitarian: it gives concise answers and then often suggests a relevant follow-up question rather than carrying on a free-flowing chat by itself. It maintains context to a degree, especially when you're logged in, as it can remember your thread. However, it's more focused on answering the current query and guiding you to the next one.

* AI models: Claude and Perplexity differ significantly in the AI models powering their platforms. Claude, developed by Anthropic, uses its own proprietary Claude 4 model family, including Sonnet 4.6, Opus 4.6, and Haiku 4.5, which emphasizes safety, context handling, and helpfulness. Perplexity, on the other hand, takes a multi-model approach, letting users switch between GPT-5.2, Claude Sonnet 4.6, Gemini 3.1 Pro, and its own Sonar models depending on the task.
* Integrations: Perplexity has expanded significantly beyond its app and browser extension, now supporting 400+ prebuilt connectors and custom MCP integrations for Pro, Max, and Enterprise users. Claude, in contrast, is more of a platform that others integrate. Anthropic provides Claude via an API, and companies plug it into their products. G2 users rate Claude slightly higher for API flexibility (83% vs Perplexity's 80%), indicating developers still find Claude more adaptable for custom workflows, though the gap has narrowed considerably.
* Support and community: According to G2 reviews, users find Perplexity's support to be more responsive and helpful. Perplexity scored 86% in quality of support vs Claude's 78%. This could be due to Perplexity being a smaller, consumer-facing company that directly engages its user community. They have an active Discord and frequent updates.
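To make the integrations point concrete: plugging a model like Claude into a product via an API ultimately comes down to assembling a chat-style request. Here is a minimal Python sketch that builds the request body as a plain dict without sending anything; the model name, prompt, and helper function are placeholders for illustration, not real identifiers:

```python
# Sketch: the general shape of a chat-style API request, built as a
# plain dict so nothing is actually sent over the network.
# Model name and prompt are placeholder values for illustration.

def build_chat_request(model: str, user_prompt: str, max_tokens: int = 1024) -> dict:
    """Assemble a request body: model id, token budget, and message list."""
    return {
        "model": model,
        "max_tokens": max_tokens,
        "messages": [
            {"role": "user", "content": user_prompt},
        ],
    }

req = build_chat_request("claude-sonnet-example", "Summarize this article.")
print(req["model"], len(req["messages"]))
```

A product integration then serializes this dict as JSON and POSTs it with an API key; the multi-turn "context management" discussed above corresponds to appending prior user and assistant turns to the `messages` list on each call.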
What are the key similarities between Perplexity and Claude?

Despite their differences in design philosophy, Perplexity and Claude have a lot in common as AI chatbots.

* Information access: Both Perplexity and Claude offer web search capabilities. Perplexity has real-time web access built into every answer by default, complete with citations. Claude offers web search on its free and Pro plans, making it a more versatile research tool than it used to be. So if you need a cited, verifiable answer with traceable sources, Perplexity remains the stronger pick, but both tools can now pull from the live web.
* Natural language Q&A: Both Claude and Perplexity are built to answer questions and have conversations in plain language. They both understand a user's question and respond with a coherent, contextually relevant answer.
* Content summarization: Both platforms generate a wide range of text content and summarize information. Perplexity tends to lean on its integrated models, like GPT-5.2 and Claude Sonnet 4.6, to produce well-structured, fact-checked write-ups, often citing sources for factual text. Claude, on its own, can produce very fluent and structured text from scratch. Claude might give a more flowing narrative, while Perplexity gives a concise, reference-backed draft.
* Knowledge and accuracy: While their methods differ, both aim to give accurate, factual answers and to minimize hallucinations. According to G2's feature ratings, content accuracy is a highly rated feature for both, with Perplexity and Claude tied at 85% satisfaction. Each has mechanisms to ground their answers: Perplexity through sources and real-time web retrieval, and Claude through extensive training, alignment, and web search. In a G2 analysis of AI hallucinations, Claude and Perplexity both had relatively fewer user complaints about incorrect information compared to some competitors.
* Pricing: Both Perplexity and Claude offer a free tier for casual use and a Pro plan at $20/month for power users.
Both also offer a premium Max plan at $200/month for the most demanding workflows. Curious how Perplexity holds up as a research-first AI? Read our full Perplexity AI review for a detailed analysis.

How I compared Claude and Perplexity: My tasks and evaluation criteria

To keep things fair and thorough, I tested both Claude and Perplexity (free versions) on a series of real-world tasks. I used Claude's latest model (Claude Sonnet 4) and Perplexity's free plan. My test included the following tasks:

* Text-based content creation. I asked each to write a paragraph or two. I evaluated the fluency, creativity, and correctness of their writing.
* Summarization and deep research. I gave them a long article to summarize and asked multi-part questions that required synthesizing information. This tested their ability to handle large contexts and produce accurate, well-structured answers -- both tools now offer sourced responses, so I paid close attention to depth and synthesis quality.
* Coding tasks. I tried a few programming-related prompts, such as asking for a sample code snippet. I looked at the accuracy of the code and its ability to handle corrections.
* Conversational Q&A. I engaged in a free-form conversation with each AI, asking a sequence of open-ended questions to see how well they maintain context and simulate a natural conversation over multiple turns.

For each of these tasks, I paid attention to a few key criteria:

* Accuracy: Are the answers correct and trustworthy?
* Creativity: Are the responses unique and engaging?
* Depth: Do they provide detailed, insightful answers vs. superficial ones?
* Clarity: Is the answer well-structured and easy to understand?
* Efficiency: How fast and directly did they get to a good answer, and did I have to poke and prod to get something useful?

Let me share what I found and how those findings line up with what real users on G2 have reported about Perplexity and Claude.

Perplexity vs.
Claude: How they performed in my tests Below is an overview of how Perplexity and Claude performed in my evaluation of the two AI chatbots. Conversational ability To test the conversational ability of both AI chatbots, I started a discussion about planning a trip to Japan and asked a series of questions using prompts like, "What's the food like?" and "What temples to visit?" In a back-and-forth conversation, Claude immediately felt more "chatty" and context-aware. When I asked Claude a question and then a follow-up that referred to something we had discussed earlier, it consistently remembered the context. After several turns covering flights, food, and culture, I asked, "Oh, what was that temple you mentioned before?" Claude knew I was referring to a temple it had recommended earlier and responded correctly. Tone-wise, I found Claude's style more engaging. It tends to use an affable tone, which makes the conversation feel friendly. Perplexity, in a similar scenario, was helpful but more straightforward. It often responded to the last query without seamlessly weaving in the older context unless I explicitly mentioned it. Perplexity's tone was also polite and clear, and more precise than Claude's. For straightforward Q&A-style dialogues, it's highly efficient. Some of Claude's answers felt generalized, whereas Perplexity gave precise outputs, like a very knowledgeable assistant. Interestingly, Perplexity often suggests follow-up questions after an answer. I found this feature extremely useful for digging deeper into topics. Personally, I liked Perplexity's overall output slightly better than Claude's, since it was precise rather than generalized and suggested multiple avenues to dig deeper without my having to come up with the right questions myself. I prefer this sort of assistance when I'm using an AI chatbot for search, compared to having something nice to read in an engaging tone.
Winner: Perplexity Writing and creativity In this task, I asked both Claude and Perplexity to act as science fiction authors and write a short story. I wanted to see which tool addresses my query more creatively in terms of figurative language, rhyme, tone, and diction. While its title was generic, Claude managed to create a story with a compelling opening and plenty of readable prose. The story was framed as a mystery, which is what I had asked for. While it's no Pulitzer Prize winner, and it borrows a lot of elements from existing sci-fi stories, it would do the trick for a first-time reader. Perplexity's attempt was much more basic. I felt like I was reading a summary of a story rather than the story itself. There was none of the prose or air of mystery that Claude had managed to add. For structured content like article or report writing, both are useful, but in different ways. I had them each write a paragraph describing the biggest cybersecurity threat to small businesses. Claude's paragraph came out narrative and engaging, almost like an opener, hooking the reader with a scenario. Perplexity's paragraph was straightforward: it listed a couple of key points for data protection and financial risk with clarity and even cited statistics about cyberattacks on small businesses. If I were writing a fact-based piece, I'd love to have those citations handy. However, if the task leans toward narrative or copywriting (like drafting a personal blog or marketing tagline), I'd lean on Claude. Winner: Tie; Claude for creative writing, Perplexity for report writing Coding and technical assistance Going into this test, I had a hunch Claude would outperform in coding, and that turned out to be true by a significant margin. I gave both a couple of real programming tasks, and the results were pretty telling. One was a debugging question: I provided them with a short Python function that had a bug and asked for help.
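To give a feel for the task, here is an illustrative stand-in for the kind of prompt I used (not the exact function from my test): a short Python function with a classic off-by-one bug, alongside the style of fix both chatbots tend to suggest.

```python
# Illustrative example only: a buggy averaging function of the sort
# I handed both chatbots, plus a corrected version.

def average_buggy(nums):
    total = 0
    for i in range(len(nums) - 1):  # bug: the loop skips the last element
        total += nums[i]
    return total / len(nums)

def average_fixed(nums):
    # Fix: sum every element, and guard against empty input.
    if not nums:
        raise ValueError("nums must be non-empty")
    return sum(nums) / len(nums)

print(average_buggy([2, 4, 6]))  # 2.0 -- wrong, the 6 was never added
print(average_fixed([2, 4, 6]))  # 4.0 -- correct
```

A bug like this is easy for a human to miss on a quick read, which makes it a useful probe of whether a chatbot actually traces the loop rather than pattern-matching on the function name.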
I was impressed by Perplexity's response. It was to the point, with an explanation of the bug and a solution to fix it. Claude performed equally well and returned a similar output while explaining the error and suggesting alternative ways to fix it. However, the difference became clearer in the next coding test, where I asked the tools to write a function to generate a random password in JavaScript. Claude not only wrote a function, but also explained each step in comments, walked through the core logic, and even mentioned best practices like including a mix of character types. Best of all, it executed the code and showed me the output: a fully functioning password generator that I could actually test and use. All this on the free version! Perplexity also gave a code snippet, but with limited in-line explanation, and it could not run the code. At the end of the day, I have to conclude that Claude is currently better than Perplexity when it comes to coding and technical support. Winner: Claude Research and information retrieval In my line of work, up-to-date research holds a lot of weight. Curious to know which tool would perform better, I asked both AI tools the same question: What are the latest trends in renewable energy adoption in 2026? Perplexity blew me away here. It was dramatically more useful for research and drew on more sources from my local geographic area, automatically surfacing renewable energy adoption data for the country I was querying from. For academic or report-style research, the value of Perplexity's approach is immense. It surfaces quality papers, lists relevant sources, and even suggests videos for whatever you want to research. Claude, on the other hand, gave a more generalized overview based on global data.
Its answers were more generic than Perplexity's, without precise local details on renewable energy trends. I liked Perplexity's output better since I didn't have to over-specify to get the output I needed. Claude felt more static when it came to research. Winner: Perplexity Perplexity vs. Claude: Key insights based on G2 data The qualitative experience I described above echoes many of the patterns we see in G2's ratings and review comments. Here are some key insights drawn directly from G2 data: Satisfaction ratings * Perplexity leads on ease of setup (96%) and ease of use (94%), with a quality of support score of 86%. * Claude is close behind on ease of use (92%) and ease of setup (91%), scores 91% for ease of doing business, but trails on quality of support at 78%. Industries represented * Perplexity sees the strongest adoption in information technology and services, marketing and advertising, computer software, consulting, and higher education. * Claude has a strong presence in marketing and advertising, computer software, information technology and services, hospital and health care, and higher education. Highest-rated features * Perplexity excels in no-code conversation design (94%), multi-step planning (89%), and natural language understanding and intent inference (89%). * Claude stands out for natural conversation (93%), creativity (89%), and complex query handling (85%). Lowest-rated features * Perplexity struggles with fallback responses for unknown queries (75%), web widget and SDK embedding (79%), and API flexibility (80%). * Claude struggles with error learning (78%), software integration (81%), and customizability (83%). Perplexity vs. Claude: Frequently asked questions (FAQs) Let's address a few frequently asked questions that potential users or buyers often have when comparing Perplexity and Claude: Q1. Is Perplexity or Claude better for research and writing? It depends on the type of work you're doing.
For research, Perplexity has the edge, since it pulls real-time information from the web and provides direct source citations for every answer. For writing, Claude is the better choice, producing fluent, narrative-driven content with a conversational tone and a strong creativity score of 89% on G2. Many users rely on Perplexity for research and fact-gathering, then turn to Claude to shape that information into polished content. Q2. How does Perplexity AI compare to Claude? Perplexity and Claude are both powerful AI tools built for different primary use cases. Perplexity is an AI-powered search engine that prioritizes real-time, citation-backed answers, leading in ease of setup (96%) and quality of support (86%) on G2. Claude is a large-context conversational model designed for reasoning, writing, and coding, scoring higher for natural conversation (93%) and context management (87%). Both offer a free tier and a Pro plan at $20/month, with Max plans at $200/month for power users. Q3. What is the difference between Perplexity AI and Claude? The core difference is in how they approach information. Perplexity is built around real-time web search with citations, making it ideal for research and fact-checking. Claude is built around deep reasoning and conversation, excelling at coding, long-document analysis, and creative writing. Claude uses its own proprietary Claude 4 model family, while Perplexity takes a multi-model approach with GPT-5.2, Claude Sonnet 4.6, and Gemini 3.1 Pro. Both tools now offer web search and a free tier, which makes them more similar than they used to be, but their core strengths remain distinct. Perplexity vs. Claude: My final verdict I'm a writer by profession, so fact-checking, writing style, and tone all matter equally in my work. Given a choice, I'd rely on Perplexity to perform my secondary research, letting it scan the breadth of the Internet to collect relevant data and examples that I can use in my work.
For narratives, rewriting, summarization, and finding tone varieties, Claude would be the preferable choice. Ultimately, the choice depends on what kind of support you need from an AI chatbot and on your individual use case. Exploring chatbots? Go through the detailed comparison of ChatGPT vs. Claude.

A handful of users managed to gain unauthorized access to Anthropic's Claude Mythos - the model the company claims is so dangerous that it could cause a wave of devastating cyberattacks if made available to the public. The breach occurred on April 8 - the same day that Anthropic and its CEO Dario Amodei revealed that Mythos was only available to about 40 handpicked corporate clients as part of "Project Glasswing." Anthropic said Mythos had found major cybersecurity flaws in "every major operating system and web browser" during internal testing. The unauthorized users belong to a private online forum dedicated to cracking unreleased AI models on Discord, a popular messaging app. Since gaining access, they have been using Mythos "regularly" but not for cybersecurity purposes, according to Bloomberg, which obtained screenshots and was shown a live demonstration of the users accessing the model. The sleuths broke into Mythos through a variety of tactics, including guessing the model's online address based on the naming conventions Anthropic has used in previous model releases, the report said. One of the unauthorized users reportedly had some level of access to Anthropic's systems due to working as a third-party contractor for the firm. "We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments," an Anthropic spokesperson said in a statement. The company added that it has no evidence the group's unauthorized access had expanded beyond the third-party vendor's environment or impacted any of its other systems. One person in the Discord group - members of which were not named - told Bloomberg that they want to test new models rather than use them to cause chaos.
Still, the incident raises concerns about the extent of Anthropic's ability to maintain oversight of a tool that they claim could be used to wreak havoc on critical infrastructure like electric grids, power plants and hospitals if it fell into the wrong hands. Earlier this month, AI safety researcher Roman Yampolskiy told The Post that some "leakage" of the model was inevitable despite Anthropic's attempts to restrict access. Anthropic said it shared Mythos with corporate partners -- including Amazon, Google, Apple, Nvidia, CrowdStrike and JPMorgan Chase -- so they could plug their own cybersecurity vulnerabilities. Prior to the rollout, Mythos broke out of a secure "sandbox" meant to restrict internet access - with a researcher only finding out "by receiving an unexpected email from the model while eating a sandwich in a park." Anthropic described the much-publicized incident as "demonstrating a potentially dangerous capability for circumventing our safeguards." Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell recently held a closed-door meeting in which they urged top bank officials to ensure their systems were ready for the risks purportedly posed by Mythos.

Anthropic's 'Mythos' model is extraordinarily dangerous. The company itself warned that it could autonomously identify and exploit zero-day vulnerabilities in every major operating system, every major web browser, and every critical software library on Earth. And because of this offensive cybersecurity power, Anthropic refused to release Mythos publicly - and instead tightly restricted access through 'Project Glasswing' to roughly 50 carefully vetted organizations - 12 named launch partners plus more than 40 additional critical software and government entities, including the U.S. National Security Agency (NSA). Yet within hours of the limited rollout announcement on April 7, 2026, a small group of unauthorized users in a private Discord server had already broken in. The breach, reported by Bloomberg on Tuesday, reveals how fragile the safeguards around frontier AI models can be. According to the report, the group gained access using a surprisingly low-tech combination: legitimate credentials from a third-party contractor involved in Anthropic's evaluations, plus clever internet sleuthing to guess the hidden API endpoint by reverse-engineering Anthropic's internal naming conventions (patterns inferred from an earlier Mercor data leak). They have reportedly been using Mythos regularly for nearly two weeks. Sources emphasize the usage has been non-malicious so far - things like building simple websites - rather than launching cyberattacks. "We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments," a spokesperson said in a statement, adding that there's no evidence that the access went beyond a third-party vendor's environment or that it is impacting any of Anthropic's systems. In early April, Anthropic launched Project Glasswing, a defensive cybersecurity initiative built around Mythos Preview. 
The 12 launch partners included Amazon Web Services, Apple, Microsoft, Google, Cisco, CrowdStrike, Palo Alto Networks, NVIDIA, Broadcom, JPMorgan Chase, and the Linux Foundation, along with over 40 additional critical software organizations. The explicit goal was to give these defenders a head start: let Mythos hunt for vulnerabilities in their own systems and major open-source projects before malicious actors could weaponize the same capabilities. Anthropic's own red-team testing reportedly showed Mythos could find and chain complex zero-days that had remained hidden for decades in software like Linux, OpenBSD, and FFmpeg. Even as the Pentagon formally labeled Anthropic a "supply-chain risk" in March 2026 - citing the company's refusal to remove ethical guardrails that would allow its models to be used for mass domestic surveillance and autonomous weapons - other key parts of the U.S. government have moved with urgency to embrace the very same technology. The National Security Agency is already actively using Claude Mythos Preview, while the White House's Office of Management and Budget circulated an internal memo on Monday directing federal agencies to begin leveraging the model for vulnerability discovery in government networks. The Treasury Department has been particularly aggressive, rushing to secure access and convening major bank CEOs for urgent red-teaming sessions after being warned that Mythos could "hack every major system." The unauthorized access was deceptively simple. One member of the Discord group (a private forum focused on hunting unreleased AI models) had legitimate access as a worker at a third-party contractor. Using knowledge of Anthropic's naming patterns, the group correctly guessed the private API endpoint for Mythos Preview on the very same day the limited release was announced. Once inside, they continued using the model without triggering obvious alarms. 
So, here's where we are: these AI models are becoming so powerful that even their creators treat them with extreme caution - yet the operational security surrounding them can still fall to basic tactics like credential misuse and URL guessing. As of Wednesday, Anthropic has offered no further updates on its investigation, no timeline, and no announcement of technical fixes such as credential rotation or endpoint randomization. There is still no public evidence of malicious use by the Discord group - however, the breach raises serious questions about how many other restricted AI systems might be leaking through similar third-party or supply-chain vulnerabilities.

Polymarket CEO Shayne Coplan is regularly late to private meetings, attended at least one of them barefoot and is "easily distracted," texting and taking phone calls mid-conversation, according to a Bloomberg report. The quirks would read as startup color if the company weren't bleeding its lead to rival Kalshi eight days before its biggest backer reports earnings. ICE reports Q1 earnings on April 30, with the consensus analyst target at roughly $194 on a Strong Buy rating. Raymond James raised its target to $222 earlier this month. Kalshi Pulls Ahead The two companies' valuations moved in lockstep for most of the past year. That changed last month, when Kalshi raised $1 billion at a $22 billion valuation led by Coatue Management. Polymarket is reportedly weighing a raise at $15 billion, a gap of roughly $7 billion. Kalshi's year-to-date notional volume has also pulled ahead, at $37.5 billion versus Polymarket's $29.2 billion. Bloomberg reports the hold-up is structural. Polymarket's international exchange runs on crypto rails, creating engineering challenges its US-first competitors didn't face. Pop-Up Problems, Scheduled Downtime A recent fee rollout was botched enough that an employee admitted on Discord "the rollout was terrible." The company added that it was "adding way more checks before anything like this can be pushed out in the future." The operational stumbles have piled up elsewhere. Polymarket's Washington pop-up bar drew negative coverage for technical snafus during its opening. An earlier pop-up grocery store also opened late. On Monday, a scheduled five-minute exchange restart turned into an outage of more than an hour. ICE CEO Jeff Sprecher called Coplan a "genius" and said he has urged him to move faster. "You're not going to be a prime time company unless you can access the US legally," he recalls telling Coplan. On his $2 billion bet, Sprecher was blunt.
"My Polymarket on these things is either a complete wipeout or they are going to be home runs." Polymarket did not immediately respond to a request for comment from Benzinga. Image: Shutterstock Market News and Data brought to you by Benzinga APIs To add Benzinga News as your preferred source on Google, click here.

Out here, Discord isn't just about messages anymore. Picture this: your name carries weight now, based entirely on what you've got tied to it. Badges that hardly anyone owns actually start meaning something real. Old-school accounts feel almost like digital artifacts. Folks quietly notice these details and begin looking around for ways to stand out from the crowd. Some wind up checking spots online where people offer those kinds of profiles. A whole niche corner of the internet exists, built for exactly this kind of hunt. The Value of Uncommon Discord Badges Among veteran members, certain Discord tags stand out way more than others. Longtime users obsess over the Ruby Badge for a simple reason: it proves you were there before the platform blew up. You couldn't just click a button to get it, so hardly anyone actually has one today. It is way more than just a cool icon on your profile. It carries actual weight and instantly tells everyone you are an OG. People naturally trust you more when they see it sitting on your profile. Because getting one organically is practically impossible now, a lot of folks just buy them through third-party sellers instead of grinding in servers hoping to get noticed. Nobody wants to wait around when everyone else is already miles ahead. Discord accounts hold value because people use them What actually makes a Discord account valuable? Badges are a massive part of it. Account age plays a huge role too. Older profiles just command more respect. A unique username can definitely tip the scale, especially if it is short or super memorable. Your reputation matters just as much, shaped entirely by how the account has been used over the years. Clean records without any server bans or warning history always rank much higher in quality. These digital trophies only really come to life once collectors claim them, and influence quietly shifts into the hands of the gamers and community builders who organize circles around shared ideas.
Trust flows a lot faster toward familiar names at the top of the member list. Because of this, shoppers are steadily drifting toward shops that sort through Discord logins, picking out only the strongest ones. Investing in Digital Identity What people used to think was just a silly internet flex actually holds real value now. Older user tags, locked in time, grow rarer every single year. Value climbs when the supply stops. It is as simple as that. An exclusive Discord badge from years ago? Nearly impossible to replace. These digital markers act like hidden collectibles tucked inside a profile, and time only makes them stand out more. Not every account carries such marks, which totally changes how we look at digital ownership. The past is literally shaping what feels exclusive today. With every new person logging into Discord, the interest just climbs. You can look at a profile from a discord accounts shop as an investment you just sit on, hoping the price goes up over time. Or you can immediately leverage it to get into closed circles. Most buyers do not wait around. They grab a solid profile and put it to use right then and there. Staying Safe in the Marketplace Picking up an aged profile gives you instant access, sure, but the scene has its fair share of scammers. You really have to watch your back because not everyone selling these things is playing fair. Here is how you avoid getting ripped off: * Double-check that the seller actually owns the profile before you send a single dime. * Dig into the account's background to make sure it is not already blacklisted or tied to shady stuff. * Stick strictly to well-known platforms that actually have a solid track record. Nobody wants to deal with sketchy trades, which is why going through a legit Discord accounts seller just saves you so much unnecessary stress. You do not have to worry about your money disappearing, and they usually show you exact proof of the account before you even pay. 
When everything is laid out clearly like that, making the jump feels a lot less risky. The Edge in Rival Groups Let's be real, looks matter in gaming clans and competitive circles. Profiles tagged with uncommon marks or built up over years catch eyes incredibly fast, and respect usually follows close behind. One major reason folks dive into a Discord accounts shop is to climb the ladder faster without having to build trust step by painful step. Jumping ahead simply beats waiting around when reaching the top circles is your main goal. Some people gladly skip the long grind just to land in private chats with big names sooner. The Evolution of Discord Supporter Tiers One way into Discord's past shows up in rare badges few can claim now. Those who joined early got symbols others simply missed out on. Over time, what once seemed small has grown much harder to find. Changes keep coming to the app, yet some old marks stand out more now than ever before. Out of nowhere, more people started wanting older accounts. That shift lit a spark under the Discord marketplace, with options available at https://discord-zone.com. Sales picked up without much warning. What once moved slowly now pushes fast. Conclusion Now it is about so much more than just chatting. What you show on your profile tells people exactly who you are. Badges that few have, how many years you have been around, locked-in early access. These details shape what an account actually means. Some just want the clout. Others need to keep things private for secondary projects. A few treat it like owning a rare piece of internet history. Whatever the reason, more people are finding excuses to check where these premium profiles come from. Out here, where digital trends shift faster than shadows, seeing how the web grows might just tilt the odds in your favor. As groups form and fade, knowing the rhythm of the platform keeps you ahead of the game without having to constantly chase it.

Access involved a Discord group and a third-party contractor. Anthropic's Mythos AI model, described as a high-risk cybersecurity system, has reportedly been accessed by a small group of unauthorised users. The development, first reported by Bloomberg, raises concerns around access control and the risks of exposing advanced AI tools through third-party environments. How the access occurred According to the report, the breach involved individuals linked to a private online forum, including a person identified as a third-party contractor for Anthropic. The group is said to have used a mix of contractor-level access and commonly available internet tools to gain entry into the system. The users are believed to be part of a Discord-based community that tracks unreleased AI models. They reportedly used knowledge of Anthropic's system formats, obtained from a previous data breach, to make an educated guess about where the Mythos model was hosted. What Mythos is capable of Claude Mythos Preview is a general-purpose AI model with cybersecurity capabilities. Anthropic has stated that the system can identify and exploit vulnerabilities across major operating systems and web browsers when instructed by users. Due to the nature of these capabilities, access to Mythos is limited under the company's Project Glasswing initiative. Selected partners include Nvidia, Google, Amazon Web Services, Apple, and Microsoft, while governments are also exploring its potential use. Anthropic has not announced plans for a public release, citing the risk of misuse. Timeline and usage The unauthorised access reportedly began on April 7, the same day Anthropic announced limited testing of the model. The group has reportedly continued to use Mythos since gaining entry, sharing screenshots and demonstrations as evidence. Reports indicate that the users avoided using the model for cybersecurity-related tasks to reduce the chances of detection. 
The group is also said to have accessed other unreleased Anthropic AI models. Company response Anthropic has confirmed that it is investigating the incident. The company stated that there is currently no evidence suggesting its core systems have been compromised or that the breach extends beyond a third-party vendor environment.

The group is said to be a part of a private Discord community that hunts for information about unreleased AI models. Earlier this month, Anthropic released a preview of what it described as its "most powerful model yet," called Mythos, which it said has advanced cybersecurity capabilities. Experts and even Anthropic itself have warned that the model could be extremely dangerous in the wrong hands, potentially enabling severe cyberattacks faster than companies can respond. That concern is partly why the company opted for a limited rollout of the model to major technology and financial institutions under an initiative called Project Glasswing. Since the public reveal of the model, it has created a frenzy among security experts and U.S. government officials. Reports say the technology even prompted emergency discussions between officials and major Wall Street banks in recent days. But despite the tight restrictions Anthropic placed on access to the model, a small group of outsiders reportedly gained entry anyway. According to a report from Bloomberg, a handful of users in a private online forum managed to access Mythos. The access allegedly occurred on the same day the model was announced for limited testing, though details are only now coming to light. The information came from an individual familiar with the situation, who reportedly provided screenshots and a live demonstration of the model to verify the claim. Unauthorized access to such a system raises concerns because of what the model is capable of doing. In Anthropic's own words, Mythos can identify and exploit vulnerabilities "in every major operating system and every major web browser when directed by a user to do so." In simple terms, the model can scan software for security flaws. In theory, that capability could help organizations defend themselves or allow attackers to locate weaknesses in targeted systems. That dual-use potential is a key reason Anthropic restricted the release.
The company reportedly shared access to Mythos with a small number of organizations, including companies such as Apple, Amazon, and Cisco Systems, allowing them to test their own infrastructure for vulnerabilities before a wider rollout. According to Bloomberg, the group responsible for the alleged unauthorized access is part of a private Discord community that searches for information about unreleased AI models. Members reportedly use bots and other tools to scan sites such as GitHub for technical clues. One individual in the group is said to have had contractor-level access to a third-party vendor environment used by Anthropic. That access reportedly helped the group get closer to the Mythos system. The method used to locate the model appears to have been surprisingly simple. The group allegedly made "an educated guess" about the model's online location based on knowledge of the naming patterns Anthropic uses for its systems. Some of those technical details were reportedly exposed in a recent data breach involving Mercor, a company that works with several AI developers. Responding to the report, Anthropic said it is investigating the situation. "We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments," the company said in a statement. Anthropic added that it currently has no evidence the reported access went beyond the vendor environment or affected its internal systems. According to Bloomberg's source, the group did not use the model to attempt cyberattacks. Instead, they reportedly ran simple tests, such as asking the model to build basic websites.

A group of unauthorized users reportedly has gained access to Anthropic's controversial Claude Mythos Preview AI frontier model despite the AI vendor's efforts to keep it out of public hands by limiting the organizations that can use it. Bloomberg reported that the unnamed group had tried multiple ways to gain access to the AI model since it was first announced earlier this month, and finally was able to get through via a third-party vendor. The users, who accessed Mythos on the day it was announced, are part of a Discord online forum group known to search for information about unreleased AI models. According to the report, the group, using knowledge it had about a format Anthropic had used for other models, "made an educated guess about [Mythos'] online location." A person inside the group that Bloomberg communicated with told the news outlet that they were "interested in playing around with new models, not wreaking havoc with them." In a statement to TechCrunch, an Anthropic spokesperson said the company was investigating the claim of unauthorized access to Mythos through a third-party vendor, and that the company has not found indications that the group's activities have affected its systems. Anthropic's announcement of Mythos April 7 sent shockwaves through the cybersecurity industry. The vendor described a frontier model that is significantly better at detecting and identifying software vulnerabilities than anything previously developed, noting that in tests, Mythos was able to find a security flaw that had been present yet undetected for 27 years. However, the model also is very good at creating exploits for those vulnerabilities, which convinced Anthropic executives to limit the release of Mythos to a select group of organizations that will use it to create stronger defenses as part of the AI vendor's new Project Glasswing.
OpenAI followed a similar path a week later with the unveiling of GPT-5.4-Cyber, a frontier model focused on cybersecurity that the vendor likewise restricted to particular users, though it granted access to more organizations and individuals than Anthropic did.

The introduction of Mythos ignited debates about everything from the state of cybersecurity as such autonomous AI models come into play, to what organizations need to do to secure their IT environments, to whether Mythos' capabilities are truly unique. However, enterprises and their security teams need to pay attention, according to Brian Fox, co-founder and CTO of Sonatype, which provides a software supply chain management platform.

"If the early reporting is right, Mythos could be a watershed moment," Fox said. "What is not new is the reality it is forcing people to confront. Beneath the AI framing sits the same software supply chain reality we have been discussing for years: dependencies, build pipelines, third-party software, and infrastructure remain the attack surface."

Fox added that "what changed is speed. AI can now find and operationalize weaknesses across that stack faster than most organizations can inventory, prioritize, and patch them. What we are seeing in response to the Mythos news is many organizations coming to terms with a reality that has existed for a long time: they are not actually in control of their software supply chains."

Tech vendors are beginning to roll out offerings aimed at helping organizations deal with the cyber risks posed by such frontier models. IBM Consulting last week introduced IBM Autonomous Security, a collection of specialized agents created to make enterprises' often sprawling security stacks work in a more unified and coordinated fashion, creating what the vendor called "a systemic defense" needed to address the autonomous, fast-moving threats such models pose. At the same time, IBM is offering a new service for assessing a company's security weaknesses and responding to them.
Likewise, Palo Alto Networks launched Unit 42 Frontier AI Defense, an offering that uses AI models to help organizations "identify and validate the exposures most likely to be chained into real attacks before attackers weaponize them." Sam Rubin, senior vice president of consulting and threat intelligence at Unit 42, wrote that "frontier AI is changing what is possible for attackers. In the hands of defenders, it can become a decisive advantage."

Mythos and GPT-5.4-Cyber have garnered much of the attention around the cybersecurity risks such frontier models represent. However, some security vendors wrote that they tested publicly available AI models and found that many of them came close to or matched Mythos' ability to find and identify zero-day vulnerabilities.

Executives with startup Aisle, which offers an AI-native app security platform, wrote that over the past year they had built an AI system for discovering, validating, and patching zero-days in open source software. In tests, they "took the specific vulnerabilities Anthropic showcases in their announcement, isolated the relevant code, and ran them through small, cheap, open-weights models. Those models recovered much of the same analysis." The models included GPT-OSS-120b, DeepSeek R1, Qwen3, and Gemma 4; the results varied depending on the model and the task, they wrote.

Researchers with Vidoc Security Lab, another AI-based cybersecurity startup, wrote that they achieved similar results with OpenAI's GPT-5.4 and Anthropic's Claude Opus 4.6 models running OpenCode, an open source AI coding agent, scanning for security flaws in open source software such as OpenBSD and FFmpeg.

"If public models can already do useful work inside that kind of workflow, then the story is not 'Anthropic has a magical cyber artifact,'" they wrote. "The story is that serious AI-assisted vulnerability research is no longer confined to a single frontier lab. That does not make the workflow easy. It means the moat is moving up the stack, from model access to validation, prioritization, and remediation."
