The latest news and updates from companies in the WLTH portfolio.
Anthropic's most advanced model is finally here, just not in the way you might expect. On Tuesday, Anthropic unveiled Project Glasswing, an initiative to secure critical software in the age of AI, as attacks become increasingly sophisticated and prevalent. Anthropic cited the launch of Claude Mythos, a model more advanced than its current top-tier model, Opus, and the far-reaching implications it carries for cybersecurity, as the driving force behind the initiative. "Claude Mythos Preview is a general-purpose, unreleased frontier model that reveals a stark fact: AI models have reached a level of coding capability where they can surpass all but the most skilled humans at finding and exploiting software vulnerabilities," the company said in a blog post.

Leaders from across industries are joining Anthropic, including Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan Chase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks. Cross-industry collaboration is essential to achieving a comprehensive understanding of the threat landscape, a point underscored by Steve Schmidt, SVP & Chief Security Officer at Amazon. "The software that needs to be examined is literally everything across all of the different industries, and so we have to work together, because we don't have it all," Schmidt told The Deep View.

Mythos Preview has already found thousands of "high-severity vulnerabilities" across "every major operating system and web browser." The model outperforms Claude Opus 4.6 in vulnerability reproduction, scoring 83.1% vs. 66.6% on CyberGym, driven by stronger agentic coding and reasoning abilities, as reflected across benchmarks such as SWE-bench Pro and Terminal Bench 2.0. Claude Mythos Preview will not be made generally available, though the company says it still aims to one day release models of this scale to the public.
Ultimately, the initiative reflects Anthropic's commitment to its core mission of responsible AI deployment, a sentiment echoed by Adam Meyers, SVP of Counter Adversary Operations at CrowdStrike. "I think it's important to think about the fact that they [Anthropic] did this because they had a new model, and they understood there were implications to this model that they didn't fully appreciate, and they wanted to almost get a peer review to understand what are we dealing with here, and I think that that is a really responsible way to address that problem," Meyers told The Deep View.

The AI race earns its name: labs are locked in fierce competition to release ever more powerful models. The risk, however, is that these advances consistently outpace our ability to anticipate them and apply the appropriate guardrails. Releasing models that the world isn't yet equipped to handle carries serious consequences, particularly when bad actors enter the equation, and the potential for unprecedented harm is real. That Anthropic chose to pump the brakes despite the significant enterprise revenue at stake offers a measure of reassurance that responsibility, not greed, can still win out.

Mythos has laid bare thousands of vulnerabilities in commonly used applications for which no fix exists.

United States: Anthropic on Tuesday said its yet-to-be-released artificial intelligence model, Claude Mythos, has proven keenly adept at exposing software weaknesses. Mythos has laid bare thousands of vulnerabilities in commonly used applications for which no patch or fix exists, prompting the San Francisco-based AI startup to form an alliance with cybersecurity specialists to bolster defenses against hacking.

"We have a new model that we're explicitly not releasing to the public," Mike Krieger of Anthropic Labs said at the HumanX AI conference in San Francisco. Instead, Anthropic is letting cybersecurity specialists and engineers in the open-source community work with Mythos as a defensive weapon, "sort of arming them ahead of time," Krieger explained.

Leaps in AI model capabilities have come with concerns about hackers using such tools to figure out passwords or crack encryption meant to keep data safe. The oldest of the vulnerabilities uncovered by Mythos dates back 27 years, and none were ostensibly noticed by their makers before being pinpointed by the AI model, according to Anthropic. Mythos is the latest generation of Anthropic's Claude family of AI models, and a recent leak of some of its code prompted the startup to publish a blog post warning that it posed unprecedented cybersecurity risks.
"AI models have reached a level of coding capability where they can surpass all but the most skilled humans at finding and exploiting software vulnerabilities," Anthropic said in a blog post. "The fallout -- for economies, public safety, and national security -- could be severe." Software vulnerabilities exposed by Mythos were often subtle and difficult to detect without AI, according to Anthropic. As an example, it said Mythos found a previously unnoticed flaw in video software that had been tested more than 5 million times by its creators. Project Glasswing As a precaution, Anthropic has shared a version of Mythos with cybersecurity companies CrowdStrike and Palo Alto Networks, as well as with Amazon, Apple and Microsoft in a project it dubbed "Glasswing." Networking giants Cisco and Broadcom are taking part in the project, along with the Linux Foundation that promotes the free, open-source Linux computer operating system. "This work is too important and too urgent to do alone," Cisco chief security and trust officer Anthony Grieco said in a joint release about Glasswing. "AI capabilities have crossed a threshold that fundamentally changes the urgency required to protect critical infrastructure from cyber threats, and there is no going back." Approximately 40 organizations involved in the design, maintenance or operation of computer systems are said to have joined Glasswing. Project partners are to share their Mythos findings, according to Anthropic, which is providing about $100 million worth of computing resources for the mission. Early work with AI models has shown they can help find and fix software and hardware vulnerabilities at a pace and scale not previously possible, according to Grieco. "The window between a vulnerability being discovered and being exploited by an adversary has collapsed -- what once took months now happens in minutes with AI," said Crowdstrik chief technology officer Elia Zaitsev. 
"Claude Mythos Preview demonstrates what is now possible for defenders at scale, and adversaries will inevitably look to exploit the same capabilities." Anthropic said it has had discussions with the US government regarding Mythos despite a decree by the White House in February to terminate all contracts with the startup. That directive was put on hold by a federal court judge while a legal challenge by Anthropic works its way through the courts. (Except for the headline, this story has not been edited by NDTV staff and is published from a syndicated feed.) Show full article Track Latest News Live on NDTV.com and get news updates from India and around the world Anthropic AI, Claude Mythos, Cybersecurity

Anthropic isn't going to release the most advanced version of its AI model, and the reason is alarming: the company says it is too powerful to be contained and poses security risks that can't be controlled. The AI company behind the Claude AI chatbot has decided not to publicly release Claude Mythos, its latest and most powerful AI model, but it will use it to help cybersecurity firms around the world. In a blog post, Anthropic revealed that Mythos demonstrated exceptional abilities in identifying high-severity vulnerabilities in major operating systems and web browsers, including a 27-year-old flaw in OpenBSD -- deemed one of the most secure operating systems in the world. During testing, the model reportedly broke out of its virtual containment and took unprompted actions to demonstrate its capabilities.

Claude Mythos's concerning capabilities during testing

In one incident, researchers instructed Mythos to find a way to send a message if it escaped its sandbox. Not only did the model succeed, it also emailed the researcher unexpectedly. More alarming still, it posted details of its exploit on multiple public websites without being asked. Anthropic's safety report noted, "The model succeeded, demonstrating a potentially dangerous capability for circumventing our safeguards. It then went on to take additional, more concerning actions." Engineers with no formal security training were reportedly able to use Mythos to generate complete, working exploits for vulnerabilities overnight. The model's ability to autonomously turn vulnerabilities into functional attacks raised serious red flags.
Anthropic to do limited release through Project Glasswing

Due to these 'unprecedented' capabilities, Anthropic has restricted access to Mythos. The model will only be available to a small group of trusted partners, including Google, Microsoft, Amazon Web Services, Nvidia, and JPMorgan Chase, as part of a defensive cybersecurity initiative called Project Glasswing. The company is providing up to $100 million in usage credits to these partners. The project is named after the glasswing butterfly, symbolising the model's ability to spot hidden vulnerabilities while signalling transparency about risks. Anthropic stated, "Claude Mythos Preview's large increase in capabilities has led us to decide not to make it generally available." The company hopes to eventually release "Mythos-class models" once stronger safeguards are developed. This decision comes shortly after Anthropic released Claude Opus 4.6 in February 2026. Mythos represents a significant leap in capabilities, particularly in coding, reasoning, and cybersecurity, but its sheer power has pushed the company toward a more cautious approach.

"We shared that over 500 business customers were each spending over $1 million on an annualized basis. Today that number exceeds 1,000, doubling in less than two months," the latest Anthropic blog read. In an earlier post, the company attributed to the rising run-rate revenue to the number of customers spending over $100,000 annually on Claude (as represented by run-rate revenue) has grown 7x in the past year. And businesses that start with Claude for a single use case -- API, Claude Code, or Claude for Work -- are expanding their integrations across their organizations. "Two years ago, a dozen customers spent over $1 million with us on an annualized basis. Today that number exceeds 500. Eight of the Fortune 10 are now Claude customers," the blog read.

The project is based on Claude Mythos Preview, which is only available to partners via Project Glasswing. "Launch partners will use Mythos Preview for defensive security work and share what they learn so the whole industry can benefit," Anthropic said in an April 8 announcement. "Access has also been extended to around 40 additional organisations that build or maintain critical software infrastructure, so they can scan and secure both first-party and open-source systems. Anthropic is committing up to $100M in usage credits across these efforts, as well as $4M in direct donations to open-source security organisations."

CrowdStrike, one of the key partners in the project, highlighted the importance of the effort, given that adversaries are also finding vulnerabilities faster. "The window between a vulnerability being discovered and being exploited by an adversary has collapsed - what once took months now happens in minutes with AI," Elia Zaitsev, Chief Technology Officer at CrowdStrike, said. "Claude Mythos Preview demonstrates what is now possible for defenders at scale, and adversaries will inevitably look to exploit the same capabilities. That is not a reason to slow down; it's a reason to move together, faster. If you want to deploy AI, you need security. That is why CrowdStrike is part of this effort from day one."

George Kurtz, President and CEO of CrowdStrike, added that the more capable AI becomes, the more security it requires. "That's why Anthropic chose CrowdStrike as a founding member of their security coalition for Claude Mythos Preview - a technical partnership," Kurtz said. "Falcon secures AI where it executes. AI is creating the largest security demand driver since enterprises moved to the cloud. Claude Code is changing how people use computers.
OpenClaw is set to reshape how enterprises automate. Mythos may be the most capable frontier model yet. It won't be the last. All of these AI innovations meet enterprises at the endpoint. That's where they access data, make decisions, and also create risk." Jim Zemlin, CEO of the Linux Foundation, said that Project Glasswing is particularly relevant to open source projects. "Open source maintainers - whose software underpins much of the world's critical infrastructure - have historically been left to figure out security on their own. Open source software constitutes the vast majority of code in modern systems, including the very systems AI agents use to write new software," Zemlin said. "By giving the maintainers of these critical open source codebases access to a new generation of AI models that can proactively identify and fix vulnerabilities at scale, Project Glasswing offers a credible path to changing that equation. This is how AI-augmented security can become a trusted sidekick for every maintainer, not just those who can afford expensive security teams." However, while the project's partners praise the initiative, others have pointed out that such technologies cut both ways. "What stands out isn't the specific Anthropic release, but what it signals. As LLMs get better at reasoning over code, vulnerability discovery stops being a purely human advantage," Erik Bloch, Vice President, Information Security at Illumio, told Cyber Daily. "LLMs are fundamentally language engines, and code is just another language. That's why it's not surprising they can find bugs and vulnerabilities that humans or rule‑based tools miss, especially subtle, logic‑level issues. The challenge is that this cuts both ways. Attackers can use the same models to identify weaknesses to exploit, or even to introduce intentionally hard-to-spot vulnerabilities. "From that perspective, limiting exposure makes sense. 
Attackers will use tools like this the moment they can, which is why vendors will want to use them first, to find and fix issues before someone else does."


Automated vulnerability discovery tools have existed for decades, and the gap between finding a bug and building a working exploit has always slowed attackers. That gap is now substantially narrower. Anthropic's Claude Mythos Preview, a new general-purpose language model being made available only to a limited group of critical industry partners and open source developers, can autonomously identify zero-day vulnerabilities and then construct working exploits across every major operating system and major web browser.

Dario Amodei / @darioamodei: I'm proud that so many of the world's leading companies have joined us for Project Glasswing to confront the cyber threat posed by increasingly capable AI systems head-on. https://x.com/...

What I have been saying for years. AI models will become too powerful and treacherous for us to understand, so the only sensible approach to using them is "dangerous until proven safe". Fortunately, since they are so powerful, in addition to the code artifacts they produce, they can easily provide a proof that the code is safe, secure, and correct. Then we use trusted technology, like Z3, Lean, Rocq, ... to independently check the proof before we run the AI-generated code. Time to listen before it is too late and we humans are obliterated by the machines.
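The workflow the commenter sketches, in which a model emits code together with a proof and a small trusted kernel re-checks that proof, can be illustrated with a toy Lean 4 example (ours, purely illustrative; the definition stands in for AI-generated code and the theorem for the safety property it must satisfy):

```lean
-- Toy "proof-carrying code": Lean's kernel checks the proof
-- independently of whoever (or whatever) produced it.
def double (n : Nat) : Nat := n + n

-- The claimed property: the output of `double` is always even.
theorem double_even (n : Nat) : double n % 2 = 0 := by
  unfold double
  omega
```

If the model shipped a wrong or fabricated proof, the kernel would reject it at check time; nothing about the model itself has to be trusted.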

Google parent Alphabet made an investment in Elon Musk's SpaceX in 2015 that is now potentially worth over $100 billion, and Google CEO Sundar Pichai says the company is eyeing more opportunities in the startup world, mainly in artificial intelligence. According to a report by CNBC, speaking with Stripe co-founder John Collison in a conversation posted on April 7, Pichai noted, "You know SpaceX, Anthropic and so on so, I think now with the AI shift, there are more opportunities on which we can deploy capital in a good way and so we are doing that."

Pichai further emphasised that Alphabet wants to be a 'good steward of capital', pointing to strong returns from investments like Stripe, which has grown more than 17-fold since GV's 2016 round. He also reflected on Waymo, Alphabet's autonomous vehicle unit, which raised $16 billion earlier this year at a $126 billion valuation: "I would have been glad to invest more capital in Waymo earlier, but we weren't at the level of maturity to do that."

Google has long invested through its venture arms GV and CapitalG, but today's AI companies require far larger sums. Alphabet is joining tech giants like Nvidia, Microsoft, and Amazon in writing billion-dollar checks directly off the balance sheet.

* In 2015, Alphabet invested $900 million in SpaceX at a $12 billion valuation. With SpaceX merging with Musk's xAI earlier this year at a $1.25 trillion valuation, Alphabet's stake could now be worth around $100 billion.
* SpaceX has reportedly filed confidentially for an IPO, seeking a record $1.75 trillion valuation.
* Alphabet has also invested heavily in Anthropic, committing over $3 billion since 2023. Its stake has grown to about 14%, with Anthropic's valuation soaring to $380 billion as of February.

The comments made by the Google CEO suggest that Alphabet will continue to deploy capital aggressively into startups, not just through venture arms, but with direct, large-scale investments.
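The "around $100 billion" figure is easy to sanity-check. A back-of-envelope sketch (illustrative only; it ignores dilution and any follow-on investment):

```typescript
// Alphabet's 2015 SpaceX position, per the article: $900M at a $12B valuation.
const stake = 900e6 / 12e9; // 7.5% ownership at entry

// Marking that same stake at the reported $1.25T merged SpaceX/xAI valuation
// gives roughly $94B, consistent with "around $100 billion" before dilution.
const impliedValue = stake * 1.25e12;

console.log((stake * 100).toFixed(1) + "% stake");
console.log("$" + (impliedValue / 1e9).toFixed(1) + "B implied");
```

Any real figure would be lower after nine years of later funding rounds diluting the position, which is presumably why the article hedges with "could now be worth".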
AI models have reached a level of coding capability where they can surpass all but the most skilled humans at finding and exploiting software vulnerabilities, Anthropic said. Anthropic has announced it is rolling out its AI model, Claude Mythos Preview, to only a select group of companies after the new model found thousands of critical vulnerabilities across operating systems, web browsers and other software. The new general-purpose model, Anthropic said, found high-severity vulnerabilities in every major operating system and web browser. "Given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely," the company warned.

AI has already been used by hackers to conduct cyberattacks. There has been a 72% year-over-year increase in AI-powered cyberattacks, with 87% of global organizations experiencing AI-enabled cyberattacks in 2025, according to AllAboutAI. Anthropic expressed concern over what would happen if similar AI capabilities were used by bad actors. To combat this, Anthropic announced Project Glasswing on Tuesday, a new initiative that brings together more than 40 companies, including Amazon Web Services, Apple, Cisco, Google, JPMorgan, the Linux Foundation, Microsoft and Nvidia. Project Glasswing will use Claude Mythos Preview's capabilities to defensively find bugs, share the data with its partners and get ahead of threats by patching critical vulnerabilities before bad actors can exploit them.

A zero-day vulnerability is a software bug that can be exploited before anyone with the ability to fix it even knows it exists. Finding and patching them has historically required rare, expensive human expertise, but AI could change the scale and speed of detection. Anthropic said the vulnerabilities it finds are "often subtle or difficult to detect."
Many of them are 10 or 20 years old, with the oldest found so far being a now-patched 27-year-old bug in OpenBSD -- an operating system known primarily for its security, it added. It also found a 16-year-old bug in the FFmpeg media processing library, a 17-year-old remote code execution vulnerability in the open-source FreeBSD operating system and numerous vulnerabilities in the Linux kernel. Mythos Preview also identified several weaknesses in the world's most popular cryptography libraries, algorithms and protocols, including TLS, AES-GCM and SSH. It added that web applications "contain a myriad of vulnerabilities," ranging from cross-site scripting and SQL injection to domain-specific vulnerabilities such as cross-site request forgery, which is often used in phishing attacks. Anthropic claimed that 99% of the vulnerabilities it found have not yet been patched, "so it would be irresponsible for us to disclose details about them." Anthropic said that this is likely just the beginning of a trend, and the "work of defending the world's cyber infrastructure might take years," but AI will help harden software and systems. "In the long run, we expect that defense capabilities will dominate: that the world will emerge more secure, with software better hardened -- in large part by code written by these models. But the transitional period will be fraught."
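As a concrete illustration of one bug class named above, here is a minimal sketch (ours, not from Anthropic's report) of why string-built SQL invites injection:

```typescript
// Illustrative only: attacker-controlled input changes the *structure*
// of a query when it is spliced directly into the SQL text.
function unsafeQuery(username: string): string {
  return `SELECT * FROM users WHERE name = '${username}'`;
}

const payload = "alice' OR '1'='1";
console.log(unsafeQuery(payload));
// The WHERE clause is now always true, so every row matches.

// The standard mitigation is a parameterized query: the placeholder keeps
// user data out of the SQL text entirely (placeholder syntax varies by driver).
const safe = { sql: "SELECT * FROM users WHERE name = ?", params: [payload] };
console.log(safe.sql);
```

Logic-level variants of this pattern, where the data and the code that interprets it are mixed, are exactly the "subtle" class the article says the model is good at spotting.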

A public spat has broken out among China's leading artificial intelligence companies as they rush to fill the void left by US startup Anthropic's decision to cut off access to its Claude models through OpenClaw, a popular open-source AI agent tool. Anthropic announced on Sunday that Claude subscriptions would no longer cover usage on third-party tools like OpenClaw, citing the need to prioritise existing customers of its own products. The decision has sent ripples through the AI developer community and opened a window of opportunity that Chinese rivals have been quick to exploit.

Companies MiniMax and Xiaomi both moved swiftly, encouraging users to switch to their own token subscription plans in the wake of Anthropic's announcement. But the competition has not been without friction. Shanghai-based MiniMax took to X to publicly accuse Anthropic of damaging the broader AI community through its new restrictions, arguing that more good ideas about how to use AI come from outside AI labs than from within them, and that limiting subscriptions to first-party products stifles innovation before it can take hold.

The commercial battle is unfolding against a broader and more troubling backdrop. The explosive growth of AI agents has triggered a dramatic surge in demand for AI tokens, the core unit by which AI usage is measured, raising serious questions about whether the industry can sustainably meet that demand amid a worsening global crunch in computational power. Analysts are watching closely to see whether Anthropic's move reflects a strategic retreat or a necessary triage, and whether Chinese companies can convert the moment into lasting market share, or whether the same resource constraints that pressured Anthropic will ultimately close in on them too.

Anthropic accidentally leaked the full source code of Claude Code, its flagship AI coding agent, on March 31. The code was exposed through a 59.8 MB JavaScript source map (.map) file bundled in the public npm package @anthropic-ai/claude-code version 2.1.88. The issue was first flagged by security researcher Chaofan Shou (@Fried_rice) on X, leading to rapid sharing of the leaked data. The leaked file contained approximately 513,000 lines of unobfuscated TypeScript across 1,906 files, revealing the complete client-side agent harness. Within hours, the code was downloaded, shared on platforms like GitHub, and widely circulated among developers and researchers. The company blamed human error for the leak, saying it was working on a fix. Despite efforts to remove it using legal notices, the code continues to be available across multiple online repositories.

Developers who examined the leaked data said it revealed more than just clean engineering. The code included several unreleased features that Anthropic had been quietly building behind compile-time feature flags. One, codenamed Kairos, appears to be an always-on background agent with memory consolidation -- essentially a version of Claude that never fully switches off. Another is a full companion pet system called Buddy, complete with 18 species, rarity tiers, shiny variants, and stat distributions. The leak also mentioned an Undercover Mode, described as auto-activating for Anthropic employees on public repos, which strips AI attribution from commits with no visible off switch.

The code also revealed advanced system features. Coordinator Mode turns Claude into a central system that manages multiple worker agents at the same time.
Auto Mode uses an AI classifier to silently approve tool permissions, removing the usual confirmation prompts.

Beyond the hidden features, the leak gave outsiders a rare look at how a well-funded AI product actually gets built under pressure. The main user interface is a single React component with over 5,005 lines of code containing 68 state hooks, 43 effects, and JSX nesting that goes 22 levels deep. Engineers reading it noted a TODO comment sitting next to a disabled lint rule on line 4114. The entry point file, main.tsx, runs to 4,683 lines and handles everything from OAuth login to mobile device management. Sixty-one separate files contain explicit comments about circular dependency workarounds. A type name used over 1,000 times across the codebase reads: AnalyticsMetadata_I_VERIFIED_THIS_IS_NOT_CODE_OR_FILEPATHS.

One standout detail: the word "duck" is encoded in hexadecimal -- String.fromCharCode(0x64,0x75,0x63,0x6b) -- because the string apparently collides with an internal model codename that Anthropic's CI pipeline scans for. Rather than add a regex exception, every animal species in the pet system got hex-encoded.

Earlier this month, Anthropic said that human error led to the leak of the source code for its AI agent, Claude Code. The company described the incident as a release error rather than a security breach, saying a packaging issue unintentionally exposed part of its internal code. "No sensitive customer data or credentials were involved or exposed. This was a release packaging issue caused by human error, not a security breach. We're rolling out measures to prevent this from happening again," an Anthropic spokesperson said. Claude Code creator Boris Cherny confirmed the cause on X (formerly known as Twitter): a manual deploy step that didn't get done. "Our deploy process has a few manual steps, and we didn't do one of the steps correctly," he wrote.
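The hex-encoding workaround reported in the leaked pet system is easy to reproduce. A short sketch (ours, illustrative) of why a CI pipeline grepping source text for a literal never matches:

```typescript
// The string is assembled at runtime from character codes, so the literal
// "duck" never appears in the source text that a scanner reads.
const species = String.fromCharCode(0x64, 0x75, 0x63, 0x6b);
console.log(species); // "duck"

// A hypothetical encoder in the same spirit (our helper, not Anthropic's):
const toCharCodes = (s: string): string =>
  [...s].map((c) => "0x" + c.charCodeAt(0).toString(16)).join(", ");
console.log(toCharCodes("duck")); // "0x64, 0x75, 0x63, 0x6b"
```

The trade-off is obvious: the scanner is pacified, but any human reading the source now has to decode hex to learn there is a duck in the codebase.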
Early speculation pointed to Bun, the JavaScript runtime Anthropic acquired, citing a known bug where source maps get inadvertently served. Cherny shut that down quickly: unrelated, just developer error. When someone on X asked whether the person responsible was "still breathing," Cherny didn't flinch. "Full trust," he wrote. "In this case the problem wasn't the person, it was infra that was error prone. Anyone could have made this same mistake by accident." No one was fired. The fix, somewhat counterintuitively, is to move faster: more automation, with Claude itself checking deployment results before anything goes out.

According to a Wall Street Journal report, the source code leak includes Anthropic's proprietary techniques, tools, and instructions for directing its AI models to act as coding agents. These techniques and tools are collectively referred to as a "harness," a term that reflects how they allow users to control and guide the models, just as a harness allows a rider to direct a horse. As a result, Anthropic's competitors, as well as many startups and developers, now have a clearer path to copying Claude Code's features without having to reverse-engineer them, which is already common in the AI space.

The incident poses a challenge to Anthropic on two fronts: its image as a safety-focused AI company, and the exposure of sensitive internal technology at a time when competition for enterprise customers is intensifying. Claude Code has been gaining popularity among developers recently and has played a key role in helping Anthropic secure a new funding round valuing the company at $380 billion, ahead of a possible public offering this year.
A significant part of Claude Code's appeal lies in how it connects the company's AI models and guides them to work in ways that help developers complete tasks, an approach known as "tooling" that practitioners consider as much craft as technical execution.

According to a report in the South China Morning Post, Chinese developers have been actively exploring and using the leaked code since it became public. Several developers in China are said to be scrambling to download copies of the leaked code and poring over the files to learn every detail. What reportedly makes Chinese developers so enthusiastic about Anthropic's AI models is their advanced coding capabilities. On Chinese forums, many shared what they deemed to be the secret recipe for Claude Code -- from its architecture and agent design to its memory mechanism. One topic, titled the "Claude Code source code leak incident," has drawn millions of views, with many local developers sharing what they had learned and suggesting how they could make better use of the tool.

Though some industry experts note that the leaked file only included the code for Claude Code, not the model weights, others argue the leaked data is still a treasure trove for developers. As Zhang Ruiwang, a Beijing-based IT system architect, told the South China Morning Post, "But the code batches are indeed a treasure for AI companies or developers, as they revealed all the key engineering decisions Anthropic made."

As previously mentioned, the leaked code mentions a system called Kairos, a persistent "daemon" that continues running even after the Claude Code terminal is closed.
It uses prompts that appear occasionally to check whether new actions are needed, as well as a "PROACTIVE" flag for "surfacing something the user hasn't asked for and needs to see now." Kairos is also linked to a file-based memory system designed to maintain continuity across sessions, helping the AI build "a complete picture of who the user is, how they'd like to collaborate with you, what behaviors to avoid or repeat, and the context behind the work the user gives you."

The leaked code also references an AutoDream system that tracks this memory over time. When a user is idle or ends a session, Claude Code is told, "You are performing a dream -- a reflective pass over your memory files." This process involves scanning transcripts for "new information worth persisting," removing "near-duplicates" and "contradictions," and trimming outdated or overly detailed entries. It also directs the system to monitor "existing memories that drifted," with the aim to "synthesize what you've learned recently into durable, well-organized memories so that future sessions can orient quickly."

Another feature, called "Undercover mode," appears to allow contributions to public open source repositories without revealing that they originate from an AI system. The prompts tied to this mode emphasise protecting "internal model codenames, project names, or other Anthropic-internal information." They also instruct that commits should "never include... the phrase 'Claude Code' or any mention that you are an AI," and avoid attribution like "co-Authored-By lines or any other attribution."

The codebase also includes a lighter feature called Buddy, described as a "separate watcher" that "sits beside the user's input box and occasionally comments in a speech bubble." These companions are small ASCII-style animations that can take on different shapes.
Internal notes say that it was supposed to be released in a small number of places first, then more widely. Other features referenced in the leak include an UltraPlan mode that allows Claude to "draft an advanced plan you can edit and approve," with execution times ranging from 10 to 30 minutes. There is also mention of a Voice Mode for direct spoken interaction, a Bridge mode enabling remote sessions controlled from external devices, and a Coordinator tool designed to "orchestrate software engineering tasks across multiple workers" using parallel processes and WebSocket communication. Just as Anthropic was working to take down the leaked code, a programmer used other AI tools to rewrite Claude Code's instructions in a different programming language. As per a post on Reddit, the programmer used AI tools to rewrite the instructions in Python. Here's the post: On March 31, someone leaked the entire source code of Anthropic's Claude Code through a sourcemap file in their npm package. A developer named realsigridjin quickly backed it up on GitHub. Anthropic hit back fast with DMCA takedowns and started deleting the repos. Instead of giving up, this guy did something wild. He took the whole thing and completely rewrote it in Python using AI tools. The new version has almost the same features, but because it's a full rewrite in a different language, he claims it's no longer copyright infringement. The rewrite only took a few hours. Now the Python version is still up and gaining stars quickly. A lot of people are saying this shows how hard it's going to be to protect closed source code in the AI era. Just change the language and suddenly DMCA becomes much harder to enforce. By doing this, the programmer said, they aim to keep the information available without risking a takedown. That new version has itself become popular on the programming platform. The leak of Anthropic's Claude Code is not an isolated incident.
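The leak vector itself is mundane: a source map shipped alongside minified JavaScript can embed the original files verbatim in its optional `sourcesContent` field. A minimal sketch of recovering them (the file name and contents below are invented for illustration):

```python
import json

def recover_sources(sourcemap_json):
    """Pull original files out of a source map's optional `sourcesContent`
    field, which bundlers can embed verbatim next to minified output."""
    smap = json.loads(sourcemap_json)
    return {
        name: content
        for name, content in zip(smap.get("sources", []),
                                 smap.get("sourcesContent") or [])
        if content is not None
    }

# A tiny, invented map of the kind a bundler emits alongside minified JS.
demo_map = json.dumps({
    "version": 3,
    "sources": ["src/agent.ts"],
    "sourcesContent": ["export const VERSION = '1.0';"],
    "mappings": "AAAA",
})
print(recover_sources(demo_map))
```

This is why build pipelines for proprietary code typically strip or withhold `.map` files from published packages.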
According to a previous Fortune report, a separate leak has exposed nearly 3,000 files, including a draft blog post revealing a powerful upcoming model referred to internally as both "Mythos" and "Capybara." Security researchers who reviewed the Claude Code leak also warned that it potentially allows competitors to reverse-engineer its agentic harness and that, even without proper access keys, certain internal Anthropic systems may remain reachable -- raising concerns about nation-state exploitation of the company's most capable models. Anthropic confirmed the incident but sought to limit the damage. A company spokesperson told Fortune no sensitive customer data or credentials were exposed, describing the incident as a release packaging issue caused by human error rather than a security breach, and adding that the company is rolling out measures to prevent a recurrence.
Claude Performance Concerns, Tougher Tech Job Market, AI Agent Purchase Liability, and NVIDIA Copyright Takedown
Jim Love hosts Hashtag Trending, sponsored by Meter, covering four stories: AMD's AI director and developers report that Anthropic's Claude Code seems "lazier," with shorter, less complete answers and weaker performance on complex coding, possibly tied to compute constraints as Anthropic expands capacity via Google TPUs and a Broadcom-linked deal for about 3.5 gigawatts coming online in 2027; Goldman Sachs warns laid-off tech workers to expect longer job searches and potentially lower pay amid efficiency-focused hiring and growing AI and automation; Target states that if a customer authorizes an AI agent to buy, the customer is responsible, raising risks around errors and compromised accounts; and NVIDIA's DLSS 5 trailer was taken down after news broadcasts triggered automated copyright claims, illustrating how effective enforcement tools can also remove legitimate content at critical launch moments.
Technical Innovation: Polymarket is preparing the launch of Polymarket USD, a token backed 1:1 by USDC and intended to improve the trading engine's operational efficiency. Polymarket has cemented its dominance by capturing nearly all of the sector's revenue. This rapid growth in the prediction market follows a restructuring of the fee model applied to various trading categories. Trading volume remains resilient despite the introduction of taker fees in the finance, politics, and technology categories. For the first time, weekly fees recorded by the sector exceeded $7 million, driven by institutional adoption that includes a $600 million investment from Intercontinental Exchange (ICE), the parent company of the New York Stock Exchange. The platform's resilience suggests exceptional product-market fit, as users absorbed the new costs without reducing their activity. Consequently, Polymarket is focusing not only on monetization but also on a comprehensive update of its trading engine and smart contracts. This transformation includes replacing the bridged asset USDC.e with a native collateral called Polymarket USD. The transition seeks to mitigate infrastructure risks and improve execution speed in a high-demand environment. Nevertheless, financial success coexists with increasingly strict regulatory oversight in key markets such as the United States and Europe. Polymarket has turned its media traction into a high-efficiency revenue machine. Its ability to retain users while expanding its operating margins is redefining the profitability standard for decentralized applications today.

Anthropic has introduced Project Glasswing, a cybersecurity initiative aimed at using advanced AI models to identify and address vulnerabilities in critical software systems. The effort brings together major technology companies and infrastructure providers to strengthen defenses across global digital systems. The initiative's name references the glasswing butterfly (Greta oto), a symbol of transparency and of the hidden vulnerabilities in complex systems. Project Glasswing is a collaborative program involving Amazon Web Services, Google, Microsoft, Apple, NVIDIA, Cisco, CrowdStrike, Palo Alto Networks, Broadcom, JPMorganChase, and the Linux Foundation. At the center of the initiative is Claude Mythos Preview, a frontier AI model developed by Anthropic for cybersecurity-focused applications. The term "Mythos" originates from Ancient Greek, meaning "narrative" or "utterance," reflecting the system's role in interpreting complex patterns in data and code.
Key features
Claude Mythos Preview has identified thousands of high-severity vulnerabilities across major platforms, including operating systems and web browsers. Many of these were previously unknown to developers. Examples include: These issues have been reported to maintainers and patched. Additional vulnerabilities are being disclosed securely using cryptographic hashes until fixes are released. Benchmark results from CyberGym: Project Glasswing involves a broad group of organizations working together to improve cybersecurity across critical systems. Along with the primary partners, Anthropic has granted access to more than 40 additional organizations responsible for maintaining essential software infrastructure. These participants use the model for vulnerability detection, system evaluation, and security testing across both proprietary and open-source environments, covering a significant portion of the global software attack surface.
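A common way to disclose findings "securely using cryptographic hashes," as described above, is a hash commitment: publish only the digest of a report now, and reveal the report once a fix ships so anyone can verify the claim predated the patch. A minimal sketch of that pattern (the source does not specify Anthropic's exact scheme; function names and the sample report are illustrative):

```python
import hashlib

def commit(report, nonce):
    """Publish only this digest now; the report itself stays private.
    The random nonce prevents guessing the report by brute force."""
    return hashlib.sha256((nonce + report).encode()).hexdigest()

def verify(digest, report, nonce):
    """After the patch ships, anyone can check the revealed report and
    nonce against the previously published digest."""
    return commit(report, nonce) == digest

# Day 0: the vendor is notified privately; only the digest goes public.
digest = commit("Heap overflow in image parser, build 2.1", nonce="k3P9x1")
# Patch day: the reveal proves the report existed, unmodified, before the fix.
print(verify(digest, "Heap overflow in image parser, build 2.1", nonce="k3P9x1"))
```

The same commit-then-reveal pattern is standard in coordinated vulnerability disclosure and bug-bounty timelines.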
Anthropic has allocated up to $100 million in model usage credits to support Project Glasswing participants. These credits enable large-scale use of Claude Mythos Preview for research and defensive security tasks. Additional support includes: Claude Mythos Preview is available through platforms such as the Claude API, Amazon Bedrock, Google Cloud Vertex AI, and Microsoft Foundry. After the preview phase, pricing is set at $25 per million input tokens and $125 per million output tokens. Project Glasswing is intended as a long-term initiative to adapt cybersecurity practices to evolving AI capabilities. Anthropic plans to work with industry partners, open-source contributors, and government stakeholders to improve security frameworks and safeguards. Focus areas include: Anthropic plans to publish a public report within 90 days summarizing findings and improvements. The company is also working toward enhanced safeguards for future models, enabling safer deployment of advanced AI systems while minimizing misuse risks. Discussions with government stakeholders are ongoing to address national security considerations. Claude Mythos Preview is not available for general public release. Access is restricted to selected partners and approved participants in Project Glasswing. Eligible security professionals and maintainers can apply through the Cyber Verification Program and Claude for Open Source initiative to gain controlled access for defensive cybersecurity use cases.
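At the stated post-preview rates, per-request cost is straightforward arithmetic over token counts. A quick sketch (the token counts in the example are invented for illustration):

```python
def request_cost_usd(input_tokens, output_tokens,
                     in_rate=25.0, out_rate=125.0):
    """Cost at the stated post-preview rates, in USD per million tokens:
    $25 per million input tokens, $125 per million output tokens."""
    return input_tokens / 1e6 * in_rate + output_tokens / 1e6 * out_rate

# e.g. a 200k-token code audit that returns a 10k-token report:
print(round(request_cost_usd(200_000, 10_000), 2))  # 6.25
```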

AI Giants Go on Charm Offensive to Avert Public Backlash Polls show artificial intelligence is broadly unpopular, prompting steps from companies to ease concerns. ---- SpaceX Isn't Even Public Yet. Investors Are Already Abuzz About a Tesla Merger. With Elon Musk focused on artificial intelligence, investors and analysts are discussing a merger of his biggest companies. ---- Ford Asks Trump Administration for Relief as Tariffs Pummel F-150 The Detroit company and other carmakers are reeling after a domestic supplier went offline, but the administration hasn't budged. ---- Levi Strauss Raises Fiscal-Year Guidance as Turnaround Bears Fruit The updated outlook comes after the apparel company logged higher profit and 14% revenue growth in its latest quarter, driven by growth across channels, regions and categories. ---- Elon Musk Asks for OpenAI's Nonprofit to Get Any Damages From His Lawsuit Tesla billionaire also seeks Sam Altman's removal from the OpenAI nonprofit's board in an amendment to the suit over for-profit conversion. ---- Anthropic Set to Preview Powerful 'Mythos' Model to Ward Off AI Cyberthreats Anthropic is taking steps to arm some of the world's biggest technology companies with tools to find and patch bugs in their hardware and software. ---- Cornerstone Taps AlixPartners for Debt Restructuring, Capital Raise The Clayton Dubilier & Rice-backed building materials company has nearly $5 billion in debt. ---- GoPro to Eliminate 23% of Workforce in Cost-Cutting Move The wearable camera maker said its board approved a restructuring plan to slash costs, which will entail cutting 145 employees. ---- Intel Partners With SpaceX, Tesla to Operate New Chip Plant The Elon Musk-led companies plan to work with the semiconductor manufacturer at the Terafab project planned in Texas. 
---- Delta Air Lines Increases Bag Fees as Fuel Prices Rise Delta Air Lines will raise fees for checked bags on domestic and select short-haul international routes, a move that comes as airlines look to offset soaring jet fuel costs stemming from the Iran war. ---- Gilead to Buy German Biotech Tubulis for $3 Billion for Experimental Cancer Drugs The deal bolsters the California biotech's pipeline of treatments that aim to deliver chemo in a more targeted way. ---- Blackstone closed a $10 billion opportunistic credit fund, hitting the fund's hard cap even as the private credit industry struggles to stem an outflow of capital driven by investor worries. ---- Commerzbank Doesn't See Basis For Deal With UniCredit Following Talks Germany's Commerzbank said it doesn't see a basis for a deal with Italy's UniCredit after recent interactions between the two banks, signaling it will continue to focus on its standalone strategy.

New Delhi [India], April 8 (ANI): Anthropic announced Project Glasswing, a new initiative that brings together Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks in an effort to 'secure the world's most critical software.' The collaborative effort comes as AI models reach a level of coding capability that allows them to find and exploit software vulnerabilities more effectively than most humans. According to a statement by Anthropic, the project was formed because of capabilities observed in Claude Mythos Preview, a general-purpose, unreleased frontier model. The company said this model has already identified thousands of high-severity vulnerabilities in major operating systems and web browsers. 'AI capabilities have crossed a threshold that fundamentally changes the urgency required to protect critical infrastructure from cyber threats, and there is no going back. Our foundational work with these models has shown we can identify and fix security vulnerabilities across hardware and software at a pace and scale previously impossible. That is a profound shift, and a clear signal that the old ways of hardening systems are no longer sufficient,' said Anthony Grieco, SVP & Chief Security & Trust Officer at Cisco. Anthropic committed up to USD 100 million in usage credits for the Mythos Preview model to support the project and 40 additional organisations. The statement noted that the 'current global financial cost of cybercrime is estimated at roughly USD 500 billion annually.' The project aimed to use AI for defensive purposes like local vulnerability detection and penetration testing, before these capabilities proliferate to unsafe actors. 'At AWS, we build defences before threats emerge, from our custom silicon up through the technology stack. Security isn't a phase for us; it's continuous and embedded in everything we do.
We've been testing Claude Mythos Preview in our own security operations, applying it to critical codebases, where it's already helping us strengthen our code,' said Amy Herzog, Vice President and CISO at Amazon Web Services. As part of the initiative, Anthropic donated USD 2.5 million to Alpha-Omega and OpenSSF and USD 1.5 million to the Apache Software Foundation. The company also engaged in ongoing discussions with US government officials regarding the model's offensive and defensive capabilities. 'As we enter a phase where cybersecurity is no longer bound by purely human capacity, the opportunity to use AI responsibly to improve security and reduce risk at scale is unprecedented. Joining Project Glasswing, with access to Claude Mythos Preview, allows us to identify and mitigate risk early and augment our security and development solutions so we can better protect customers and Microsoft,' said Igor Tsyganskiy, EVP of Cybersecurity and Microsoft Research at Microsoft. Anthropic planned to report publicly on the vulnerabilities fixed and improvements made within 90 days. Following the research preview, the model will be available to participants at rates of USD 25 per million input tokens and USD 125 per million output tokens. 'Google is pleased to see this cross-industry cybersecurity initiative coming together and to make Mythos Preview available to participants via Vertex AI. It's always been critical that the industry work together on emerging security issues, whether it's post-quantum cryptography, responsible zero-day disclosure, secure open source software, or defense against AI-based attacks,' said Heather Adkins, VP of Security Engineering at Google. (ANI)

Reports indicate significant fragility in Discord's account recovery systems. Discord has established itself as a central communication hub for gamers, students, creators, and various clubs in the United States. However, a growing body of user reports and technical critiques suggests that the platform's infrastructure may not be suitable for mission-critical communication or users with high privacy requirements. The platform is increasingly scrutinized for its account recovery processes and the risks associated with third-party service dependencies. For users of MidJourney, the dependency on Discord is particularly acute: losing access to a Discord account prevents the creation of images, and MidJourney does not provide refunds for remaining subscription months if access is lost. Users have documented issues involving forced password resets and the failure to receive reset emails, which can lead to a lockout loop that hinders the ability to sign up or regain access. The support flow for these issues is often described as problematic because it frequently requires the user to be logged in to seek help, creating a barrier for those who are locked out of their accounts. Concerns regarding data privacy and surveillance have been raised by various users and critics. Some argue that Discord's infrastructure does not offer the same security guarantees as professional platforms like Slack, Microsoft Teams, or traditional email. Specific technical and policy concerns include the collection of metadata and the platform's perceived inability to stop certain abusive behavior by users. Some critics suggest that the platform's business model and lack of user controls contribute to a privacy nightmare, with mentions of data scraping and breaches. The informal culture of the platform is also cited as a risk for professional use.
Critics argue that the lack of a proper audit trail and the tendency for important decisions to be lost in meme channels make the platform feel more like a noisy arcade than a meeting room. Beyond privacy, the platform's management and accessibility have been questioned. Critiques of the product include issues with server screen sharing, communication downtime, and a lack of support for certain disabilities. The risk is heightened for creators who tie paid services to their Discord accounts. The model of requiring one application to use another creates an outsized risk for everyday users, as the failure of the communication layer can result in the loss of paid functionality in a separate creative tool. Due to these concerns, some long-term users and developers have moved their communities away from Discord. Alternatives mentioned include bridging communities with Matrix or migrating to other protocols that avoid the centralized surveillance associated with large tech companies like Google and Facebook. The contrast between Discord and more stable, documented platforms highlights the importance of clear policies and predictable access, especially for those using the software for professional accountability or creative work.
