The latest news and updates from companies in the WLTH portfolio.
Anthropic, the artificial intelligence company that recently fought the Pentagon over the use of its technology, has built a new A.I. model that it claims is too powerful to be released to the public. Instead, Anthropic said on Tuesday, it will make the new model -- known as Claude Mythos Preview -- available to a consortium of more than 40 technology companies, including Apple, Amazon and Microsoft, which will use the model to find and patch security vulnerabilities in critical software programs. Anthropic said it had no plans to release its new technology more widely, but was announcing the new model's capabilities in one area in particular -- identifying security vulnerabilities in software -- in an effort to sound the alarm over what the company believes will be a new, scarier era of A.I. threats. "The goal is both to raise awareness and to give good actors a head start on the process of securing open-source and private infrastructure and code," Jared Kaplan, Anthropic's chief science officer, said in an interview. The coalition, known as Project Glasswing, will include some of Anthropic's competitors in A.I., such as Google, as well as hardware providers like Cisco and Broadcom, and organizations that maintain critical open-source software, such as the Linux Foundation. Anthropic is committing up to $100 million in Claude usage credits to the effort. Logan Graham, the head of an Anthropic team that tests new models for dangerous capabilities, called the new model "the starting point for what we think will be an industry change point, or reckoning, with what needs to happen now." Anthropic occupies an unusual position in today's A.I. landscape. It is racing to build increasingly powerful A.I. systems, and making billions of dollars selling access to those systems, while also drawing attention to the risks its technology poses. The company was deemed a supply-chain risk this year by the Pentagon for demanding certain limitations to the use of its technology. A federal judge later stopped the designation from going into effect. Anthropic has not released much new information about the model, which was code-named "Capybara" during development. But after some details were inadvertently leaked last month, the company acknowledged that it considered it a "step change" in A.I. capabilities, with improved performance in areas like coding and cybersecurity research. The company's decision to hold back Claude Mythos Preview, while giving access only to partners out of concern for how it might be misused, has some precedent. In 2019, OpenAI announced it had built a new model, GPT-2, but was not releasing the full version right away. The company claimed that its text-generation capabilities could be used to automate the mass-production of propaganda or misinformation. (It later released the model, after conducting additional safety testing on it.) Many of the leaders of the GPT-2 project later left OpenAI to start Anthropic. This time, Anthropic is making a different, more urgent claim. The company's executives say Claude Mythos Preview is already capable of carrying out autonomous security research, including scanning for and exploiting so-called zero-day vulnerabilities in critical software programs, flaws that are unknown even to the software's developer. These efforts can often be triggered by amateurs with simple prompts. 
The company claims that the new model has already identified "thousands" of bugs and vulnerabilities in popular software programs, including every major operating system and browser. One of the vulnerabilities Claude found, the company said, was a 27-year-old bug in OpenBSD, an open-source operating system that was designed to be difficult to hack. Many internet routers and secure firewalls incorporate OpenBSD's technology. Another was a longstanding issue in a piece of popular video software that automated testing tools had scanned five million times, without finding any problems. "This model is good at finding vulnerabilities that would be well understood and findable by security researchers," Mr. Graham said. "At the same time, it has found vulnerabilities, and in some cases crafted exploits, sophisticated enough that they were both missed by literally decades of security researchers, as well as all the automated tools designed to find them." Anthropic announced on Monday that its projected annual revenue had more than tripled in 2026, to more than $30 billion from $9 billion. The growth has come largely because of the popularity of Anthropic's Claude as a tool for programming. Anthropic has focused on making Claude good at completing lengthy coding tasks, in hopes of making it more useful to professional programmers and amateur "vibecoders." But an A.I. system designed to be good at coding is also good at spotting the flaws in code -- running automated scans for bugs and vulnerabilities that can allow hackers to take control of users' machines, expose sensitive user information or wreak other havoc. The cybersecurity industry has been bracing for years for what more capable A.I. models could do to critical tech infrastructure. Until recently, only expert human researchers with access to specialized tools were capable of finding the most severe security vulnerabilities. Now, the fear is that a powerful A.I. model could discover them on its own. "Imagine a horde of agents methodically cataloging every weakness in your technology infrastructure, constantly," Nikesh Arora, the chief executive of Palo Alto Networks, wrote in a blog post last week. Mr. Graham said one of the unanswered questions about Claude Mythos Preview, and other future models that will be capable of doing similar things, was whether most or all of the world's critical software would need to be patched or rewritten as a result of these new models. "There are a lot of really critical systems around the world, whether it's physical infrastructure or things that protect your personal data, that are running on old versions of code," Mr. Graham said. "If these previously were mostly secure because it took a lot of human effort to attack them, does that paradigm of security even work anymore?" It is wise to take claims about unreleased model capabilities from A.I. companies with a grain of salt. In this case, though, cybersecurity researchers who have been given access to Claude Mythos Preview have characterized the model as a significant cybersecurity risk.
Elia Zaitsev, the chief technology officer of CrowdStrike, a cybersecurity firm with access to the new model through Project Glasswing, said in a statement accompanying Anthropic's announcement that the model "demonstrates what is now possible for defenders at scale, and adversaries will inevitably look to exploit the same capabilities." "What once took months now happens in minutes with A.I.," Mr. Zaitsev said. Project Glasswing takes its name from the glasswing butterfly, Mr. Kaplan said, which uses transparent wings to hide in plain sight. Similarly, he said, many of today's most critical software programs contain bugs and vulnerabilities that have existed in the open for years, but were buried in such complex technical systems that no human ever found them. According to Mr. Kaplan, the cybersecurity capabilities of Claude Mythos Preview are not a result of special training. Rather, they are just one of many areas in which the model is better than previous ones. He predicted that similar cybersecurity capabilities would exist in other models soon. As that happens, he said, the arms race between hackers and the companies racing to defend their systems will only escalate. "As the slogan goes, this is the least capable model we'll have access to in the future," he said.

Boeing Company (NYSE:BA) shares are down on Tuesday. The company reported a space-program milestone, while broader risk appetite stays soft.
* Boeing shares are experiencing downward pressure.
What's pulling BA shares down?
The company delivered the ViaSat-3 Flight 3 (VS-3 F3) spacecraft to ViaSat, Inc. (NASDAQ:VSAT). The satellite is arriving at Cape Canaveral Space Force Station in Florida for pre-launch processing ahead of a SpaceX Falcon Heavy launch. The company says the satellite is built on its high-power 702MP+ platform and is designed to expand connectivity coverage across the Asia-Pacific region. Ryan Reid, president of Boeing Satellite Systems International, said, "ViaSat-3 F3 reflects the strength of Boeing's 702 family and our long-standing partnership with Viasat. With this delivery, we're providing a high-power, flexible platform designed to support Viasat's next-generation connectivity mission, which is proving more valuable every single day. We are thankful for their partnership and trust."
Major Pentagon Deal Win
Last week, Boeing announced a major defense production framework with the U.S. Defense Department. The agreement aims to significantly expand the output of critical missile defense components, reinforcing Boeing's role in global security infrastructure. The company will collaborate with government agencies to accelerate production of PAC-3 seeker systems, which support advanced missile interception capabilities. Boeing plans to triple the output of PAC-3 seekers over a seven-year period under the new agreement.
BA Technical Levels: Key Support and Resistance to Watch
The broader market is leaning risk-off today, with the S&P 500 down 0.39%, the Nasdaq down 0.59%, and the Dow down 0.49%. Market breadth is also weak, with only two sectors advancing and nine declining, while Utilities (+0.43%) and Energy (+0.35%) lead. The satellite-delivery update adds a constructive headline for Boeing's space business, but traders are still keying off where the stock sits versus major trend lines. At $208.27, the stock is trading 2% above its 20-day simple moving average (SMA), the average price over the last 20 sessions, which suggests near-term stabilization; it's also trading 3.9% below its 100-day SMA, indicating the intermediate trend is still under pressure. Moving average convergence divergence (MACD), a trend/momentum measure, shows MACD at -5.5931 versus a signal line at -7.7498, which leans bullish because momentum is improving from a depressed level. The golden cross in January (50-day SMA above the 200-day SMA) points to a longer-term uptrend attempt, but the death cross in December highlights how recently the trend had broken down. Over the last 12 months, the stock has been up 50.30%, a backward-looking gain that shows the longer-term recovery has been strong despite volatility. Within the 52-week range ($137.40 to $254.35), the stock is sitting closer to the middle than the highs, which fits a "reset" phase after the late-January peak.
Key Resistance: $232 -- an area where rallies have recently stalled.
Key Support: $187.50 -- a zone where buyers have tended to show up.
Boeing Q1 2026 Earnings: EPS Estimates and Revenue Growth
Following last quarter's results, investors are now tracking the path toward the next reporting date on April 22 (confirmed).
EPS Estimate: Loss of 49 cents (flat from a loss of 49 cents year-over-year)
Revenue Estimate: $21.95 billion (up from $19.50 billion YoY)
Valuation: P/E of 85.6x (indicates a premium valuation relative to peers)
Analyst Consensus & Recent Actions
The stock carries a Buy rating with a consensus price target of $246.92. Recent analyst moves include:
Citigroup: Buy (lowers target to $256 on April 2)
Wells Fargo: Initiated with Overweight (target $250 on April 1)
Tigress Financial: Buy (raises target to $290 on March 19)
Boeing Stock Scorecard: Growth, Value, Quality
Below is the Benzinga Edge scorecard for Boeing, highlighting its strengths and weaknesses compared to the broader market:
Momentum: Neutral (Score: 66.05) -- The trend is improving, but it's not a clear breakout profile
The Verdict: Boeing's Benzinga Edge signal reveals a momentum-leaning setup paired with a weak value profile. That combination tends to work best when fundamentals keep improving enough to justify the premium.
Top ETF Exposure
Significance: Because Boeing carries such a heavy weight in these funds, any significant inflows or outflows for these ETFs will likely force automatic buying or selling of the stock.
BA Stock Price Activity: Boeing shares were down 1.90% at $208.27 at the time of publication on Tuesday, according to Benzinga Pro data.
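For readers who want to reproduce the technical readings cited above, the 20-day and 100-day SMA distances and the MACD-versus-signal comparison come from standard formulas. The sketch below is a minimal illustration in Python using pandas; the price series is a placeholder, not Benzinga Pro data.

```python
import pandas as pd

def sma(close: pd.Series, window: int) -> pd.Series:
    # Simple moving average: the mean of the last `window` daily closes.
    return close.rolling(window).mean()

def macd(close: pd.Series, fast: int = 12, slow: int = 26, signal: int = 9):
    # MACD line = fast EMA minus slow EMA; signal line = EMA of the MACD line.
    # A MACD line above its signal line (e.g. -5.59 vs. -7.75, as cited above)
    # is conventionally read as improving momentum even when both are negative.
    ema_fast = close.ewm(span=fast, adjust=False).mean()
    ema_slow = close.ewm(span=slow, adjust=False).mean()
    macd_line = ema_fast - ema_slow
    signal_line = macd_line.ewm(span=signal, adjust=False).mean()
    return macd_line, signal_line

# Hypothetical usage with a placeholder series of daily closing prices:
# closes = pd.Series([...])
# pct_vs_sma20 = closes.iloc[-1] / sma(closes, 20).iloc[-1] - 1   # e.g. roughly +2%
# macd_line, signal_line = macd(closes)
```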

Anthropic is taking steps to arm some of the world's biggest technology companies with tools to find and patch bugs in their hardware and software. The company is making a preview of its new AI model, called Mythos, available to about 50 companies and organizations that maintain critical infrastructure, including Amazon, Microsoft, Apple, Alphabet-owned Google and the Linux Foundation. Cybersecurity researchers and software-makers worry that artificial intelligence is becoming so good at exploiting vulnerabilities that it could cause widespread online disruption. Security experts have predicted that AI models will discover an avalanche of software bugs, and the effort is set to help companies stay one step ahead of cyber criminals and other threats. Mythos has proved to be so capable at potentially dangerous things such as finding and exploiting software bugs that Anthropic has, at present, no plans to release it to the general public, said Logan Graham, the head of Anthropic's Frontier Red Team, which evaluates Claude for risks. "We need to know that we can release it safely, and it's not exactly clear how we can do that with full confidence," he said. Over the past six months, cybersecurity researchers have become increasingly worried that AI systems are not only becoming better at finding bugs, but that they are also shrinking the window of time between when a bug is disclosed and when it can be exploited with working attack software. Late last year, researchers at Stanford University found that AI software was almost as good as humans at finding and exploiting bugs on a real-world network. And earlier this year Anthropic's Claude Opus 4.6 model found more high-severity bugs in the Firefox browser in two weeks than the rest of the world typically reports in two months. Measured by the dollar cost of finding a bug, Mythos is about 10 times as efficient as previous AI models, Graham said. Details of Mythos's capabilities were previously reported by Fortune. While Anthropic has no immediate plans to release Mythos, other models will likely match its bug-finding capabilities within the next few years, Graham said. "We basically need to start, right now, preparing for a world where there is zero lag between discovery and exploitation," he said.
April 7 (Reuters) - Anthropic on Tuesday announced an initiative with major technology companies, including Amazon.com, Microsoft and Apple, that lets partners preview an advanced model with cybersecurity capabilities developed by the AI startup. Under its "Project Glasswing", select organizations will be allowed to use the startup's unreleased and general-purpose AI model, "Claude Mythos Preview", for defensive cybersecurity work, Anthropic said. Other partners include CrowdStrike, Palo Alto Networks, Google and Nvidia. The announcement follows a Fortune report last month that Anthropic was testing Claude Mythos, which it said posed security risks and also offered advanced capabilities, dragging shares of cybersecurity firms such as Palo Alto Networks and CrowdStrike sharply lower. This year's RSA cybersecurity conference in San Francisco was also dominated by talk about the rise of AI-powered cyberattacks and whether conventional security tools sufficed. In a blog post on Tuesday, Anthropic said Mythos Preview had found "thousands" of major vulnerabilities in operating systems, web browsers and other software. The startup said launch partners will use Mythos Preview in their defensive security work, and Anthropic will share findings with industry. Anthropic said it is also extending access to about 40 additional organizations responsible for critical software infrastructure, and made a commitment of up to $100 million in usage credits and $4 million in donations to open-source security groups. The AI startup added that its eventual goal is for "our users to safely deploy Mythos-class models at scale." The startup said it has also been in ongoing discussions with the U.S. government about the model's capabilities. Last year, Anthropic said that hackers exploited vulnerabilities in its Claude AI to attack around 30 global organizations. Moreover, 67% of the 1,000 executives surveyed in an IBM and Palo Alto Networks study said they had been targeted by AI attacks within the past year. (Reporting by Jaspreet Singh in Bengaluru and Jeffrey Dastin in San Francisco; Editing by Leroy Leo)
Risk management in DeFi now plays a central role in how protocols perform, especially during volatile periods. As Q1 2026 ended, Aave [AAVE] managed about $42.34 billion in TVL and $16.55 billion in loans; it relies on continuous adjustments rather than fixed settings. External teams like Chaos Labs update liquidation thresholds, borrow limits, and collateral rules as conditions change. As these updates happen more often, the system responds faster to market stress. This improves stability and user confidence, although it also means protocols depend more on external risk models as complexity increases.
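The risk parameters mentioned here, liquidation thresholds and collateral rules, feed directly into the solvency check that Aave-style lending protocols run on every position. The snippet below is a simplified sketch of that health-factor arithmetic with made-up numbers; it is not Aave's or Chaos Labs' actual code or parameter set.

```python
from dataclasses import dataclass

@dataclass
class Collateral:
    value_usd: float              # current market value of the deposited asset
    liquidation_threshold: float  # e.g. 0.80 means 80% of the value counts toward solvency

def health_factor(collateral: list[Collateral], debt_usd: float) -> float:
    # Weighted collateral divided by outstanding debt; below 1.0 the position
    # becomes eligible for liquidation, which is why risk teams tune the
    # thresholds as market volatility changes.
    weighted = sum(c.value_usd * c.liquidation_threshold for c in collateral)
    return weighted / debt_usd if debt_usd > 0 else float("inf")

# Hypothetical position: $10,000 of collateral at an 80% threshold against $7,000 of debt.
print(health_factor([Collateral(10_000, 0.80)], debt_usd=7_000))  # ~1.14: solvent, but sensitive to a threshold cut
```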

Called Project Glasswing, the initiative relies on Anthropic's new Claude Mythos Preview, a frontier model that the company said will only be made available to a handful of organizations. A group of roughly 40 other companies that work on critical software infrastructure will be able to use the model to secure their own software and open-source offerings. Anthropic said Claude Mythos Preview is already proving effective at detecting software vulnerabilities, stating that the AI has "identified thousands of zero-day vulnerabilities, many of them critical." Zero-day vulnerabilities are previously unknown errors in software that developers need to address before they can be exploited, or errors that have already been exploited by attackers and need to be fixed before they can cause any more harm. It has also detected flaws in every major operating system and web browser, and found one software bug that was 27 years old and another that is 16 years old. "Given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely. Project Glasswing is a starting point," Anthropic said in a statement. "No one organization can solve these cybersecurity problems alone: frontier AI developers, other software companies, security researchers, open-source maintainers, and governments across the world all have essential roles to play," the company added. "The work of defending the world's cyber infrastructure might take years; frontier AI capabilities are likely to advance substantially over just the next few months. For cyber defenders to come out ahead, we need to act now." Claude Mythos Preview, Anthropic explained, is a general-purpose, previously unreleased AI model that uses "strong agentic coding and reasoning skills" to tackle cybersecurity tasks. Unlike its previous AI models, Anthropic doesn't plan to make Claude Mythos Preview generally available. As part of the effort, the company said it's committing $100 million worth of usage credits and $4 million in direct donations to open-source cybersecurity organizations. In addition to its work with its fellow tech companies, Anthropic said it's in discussions with the US government about Claude Mythos Preview's defensive and offensive abilities. That's despite the fact that Anthropic and the Pentagon are locked in a legal battle over Anthropic's decision to establish redlines as to how the Department of Defense can use its AI models.
Anthropic's Project Glasswing will give a host of leading tech companies access to its new Claude Mythos model for testing
Anthropic has announced the launch of a new scheme that will see the firm collaborate with an array of tech giants on the future of AI security. Dubbed 'Project Glasswing', the initiative will bring together Amazon, Apple, Broadcom, Microsoft, Cisco, CrowdStrike, Palo Alto Networks, and the Linux Foundation to test a new AI model aimed at bolstering defensive cyber capabilities. Anthropic said the model in question, Claude Mythos, has the potential to "reshape cybersecurity" and will only be available to organizations involved in the project. As part of the scheme, launch partners will be given access to Mythos Preview for use in "defensive security work" and will share learnings on the capabilities of the model. Access has also been granted to 40 additional organizations that build or maintain critical software infrastructure in a bid to secure open source ecosystems, Anthropic revealed. The company said it will commit up to $100 million in usage credits on this front, as well as a $4 million donation to open source security organizations. Project Glasswing stems from Anthropic's own observations on the capabilities of Claude Mythos in recent weeks. Speculation over the launch of Mythos was fueled by a data leak which saw information on the general-purpose model exposed online. That data leak pointed toward significant improvements in cybersecurity, which Anthropic has since confirmed. The company noted that the model wasn't trained specifically for cybersecurity activities, but that its capabilities are a "result of its strong agentic coding and reasoning skills". Simply put, Anthropic believes the model is powerful enough to warrant testing behind closed doors at trusted industry giants rather than releasing to the public, where it could be used for nefarious purposes. "Given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely," the company said in a statement. "The work of defending the world's cyber infrastructure might take years; frontier AI capabilities are likely to advance substantially over just the next few months. For cyber defenders to come out ahead, we need to act now." According to Anthropic, Mythos Preview has identified "thousands of zero-day vulnerabilities" in recent weeks, many of which were critical. Notably, those identified by the model were "often subtle or difficult to detect", with some dating back decades. The oldest flaw detected by Mythos was a 27-year-old bug in the OpenBSD operating system. "It also discovered a 16-year-old vulnerability in a widely used video software in a line of code that automated testing tools had hit five million times without ever catching the problem," the company said. Anthropic revealed it is also engaged in discussions with US government officials about Claude Mythos Preview and its "offensive and defensive cyber capabilities". The launch of Project Glasswing is a seismic moment for Anthropic and one that could come to define the firm as it approaches a potential IPO later this year. It's worth underlining just how unusual it is for a tech company - let alone one that's barely five years old - to announce a gated release of its software with such widespread endorsement.
While Google and Amazon are long-time backers of Anthropic, and Microsoft has recently joined in, the full-throated support of CrowdStrike and Palo Alto Networks reads to me as recognition that Claude Mythos Preview will usher in a serious leap in cybersecurity capabilities. Here, cybersecurity companies that have long prided themselves on their proprietary AI are admitting that Anthropic's latest release is already catching zero days that no other tools ever have. We'll have to wait and see just how effective it is and, with the release limited to launch partners and around 40 additional critical organizations, independent benchmarks can't be established. Examples of products launched in a similar manner that jump to mind include AI analytics tools for the intelligence and defense communities such as Palantir Gotham. Anthropic also already provides a custom set of 'Claude Gov' models for US national security organizations, which have modified guardrails and are considered suitable for handling classified material. I'll be closely monitoring any movement in the discussions that Anthropic says it's having with the US government on Claude Mythos Preview's "offensive and defensive cyber capabilities". With Anthropic's ongoing battle with the US Department of Defense, which has designated the frontier firm as a supply chain risk, the launch of Claude Mythos and partnership with such a wide range of tech giants could give it greater leverage over the US government. Firms such as Microsoft, Google, Amazon, and Apple have supported Anthropic in its legal action against the designation. All of these companies, along with other giants including Broadcom, Cisco, CrowdStrike, and Palo Alto Networks, are now publicly tied to Project Glasswing in another huge vote of confidence in Anthropic. Google might be the most consequential partner here. Its involvement suggests that, at least for the time being, none of its Gemini models can offer the same comprehensive benefits as Claude Mythos Preview and this sends another big signal to the public sector: back this or fall behind.

Artificial intelligence company Anthropic has ended Claude subscription access for third-party agent frameworks. Effective April 4, Claude Pro and Max subscribers can no longer use their plan limits to power tools like OpenClaw, as reported by VentureBeat. Users who want to keep those agents running must now switch to pay-as-you-go usage bundles or connect through a direct API key. Boris Cherny, Anthropic's head of Claude Code, announced the change on X, saying subscriptions were never designed for the kind of continuous, automated demand these tools generate. Affected subscribers received a one-time credit equal to their monthly plan cost. It comes down to a simple mismatch: Consumer subscriptions were built around human behavior. AI agents don't behave like humans. A person typing prompts into a chat interface generates a limited number of requests, constrained by time, attention and sleep. Agents don't stop. They run in loops, execute tasks in parallel, and generate a constant stream of activity that can quickly outpace any individual user. Before changing access rules, Anthropic had already started to feel that strain, and had introduced stricter session limits during peak business hours, a change the company said would affect up to 7% of users, as reported by PYMNTS. The OpenClaw decision pushes these controls further, extending them to how access is granted. The numbers tell the story. A $200-per-month Claude Max subscription was being used to run anywhere from $1,000 to $5,000 worth of agent compute tasks, according to The Decoder. Anthropic's own tools reduce costs by reusing previously processed context. Third-party frameworks like OpenClaw skip that efficiency layer entirely, triggering full-cost processing on every request. The new policy applies to all third-party frameworks and will expand beyond OpenClaw. Users can still use Claude models with those tools, but subscriptions will no longer cover the compute. Anthropic isn't alone in drawing a line. Google now spells out exactly what users get: free users are limited to 5 Gemini 2.5 Pro prompts per day, while $20-per-month AI Pro subscribers receive 100, and $250 AI Ultra subscribers get 500, the PYMNTS reporting shows. The move from vague tiers to hard caps reflects how standard usage controls have become across major model providers. Cursor and Replit both revisited pricing in 2025 to curb disproportionate usage, resetting expectations around what subscriptions deliver. At the same time, demand for agentic AI is accelerating. In August 2025, 52% of companies surveyed said they were still exploring the technology, according to PYMNTS Intelligence. By November, that number dropped to 30%, with nearly 1 in 4 chief product officers reporting active pilots or full production deployments. The backlash was immediate. OpenClaw creator Peter Steinberger, who joined OpenAI in February, said he and fellow investor Dave Morin attempted to negotiate with Anthropic directly and received only a one-week delay, as reported by The Decoder. He accused Anthropic of incorporating OpenClaw features into its own closed system before locking out open-source alternatives, pointing to the recent addition of Discord and Telegram messaging to Claude Code.
One developer said switching to API rates would make continued use cost-prohibitive and that the change would likely push users toward competing models, as covered by VentureBeat. For now, alternatives exist. OpenAI and several Chinese AI companies still support OpenClaw-style usage under subscription-like access, according to TNW. The open question is how long that model holds as agent demand and infrastructure pressure continue to climb.
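To see why a flat subscription and a looping agent don't mix, it helps to run the arithmetic. The sketch below uses entirely hypothetical per-token prices and request volumes (not Anthropic's actual rates) to show how an always-on agent that re-sends a large context on every request lands in the $1,000-$5,000-a-month range the article cites, and how reusing cached context shrinks the bill.

```python
# All prices and volumes below are illustrative placeholders, not Claude's real pricing.
PRICE_PER_M_INPUT = 3.00      # $ per million input tokens (assumed)
PRICE_PER_M_OUTPUT = 15.00    # $ per million output tokens (assumed)
CACHE_DISCOUNT = 0.90         # assume cached context is billed at a 90% discount

def monthly_api_cost(requests_per_day, input_tokens, output_tokens, cached_fraction=0.0):
    # Estimate a month (30 days) of pay-as-you-go spend for an agent that loops all day.
    billable_input = (input_tokens * (1 - cached_fraction)
                      + input_tokens * cached_fraction * (1 - CACHE_DISCOUNT))
    cost_per_request = (billable_input * PRICE_PER_M_INPUT
                        + output_tokens * PRICE_PER_M_OUTPUT) / 1_000_000
    return requests_per_day * 30 * cost_per_request

# An agent re-sending a ~50k-token context 500 times a day:
print(round(monthly_api_cost(500, 50_000, 1_000)))                       # ~$2,475 with no cache reuse
print(round(monthly_api_cost(500, 50_000, 1_000, cached_fraction=0.9)))  # roughly $650 with heavy cache reuse
```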

AI found thousands of hidden bugs in critical systems. Tech rivals unite to secure shared infrastructure risks. Cyberattack timelines shrink from months to minutes. Today, a group of the world's biggest tech companies is announcing what is essentially an AI-driven cybersecurity Manhattan Project. As the Cyberwarfare Advisor for the International Association of Counterterrorism & Security Professionals and part of the FBI's InfraGard Artificial Intelligence Threat and Mitigation Cross-Sector Council, I've spent decades profiling global threats, from lecturing at the National Defense University to leading nationwide cyberattack simulations. But the arrival of a new frontier AI from Anthropic represents a paradigm shift that even the most prepared infrastructure specialists are scrambling to navigate. There is a lot to unpack from this announcement, but before I go into the published details, I'm going to try to read between the lines. That's because the mere existence of this announcement means there's a lot that remains unsaid. The fact that all of these companies are working together has to be indicative of the scale of the threat and the scale of the project necessary to respond to it. What I'm going to describe is both terrifying news and, at the same time, somewhat encouraging news. It's worrisome because clearly our entire cybersecurity infrastructure is at great risk due to advances in weapons-grade AI. Otherwise, these fierce competitors wouldn't be working together as announced today. It's somewhat encouraging because these intense competitors have chosen to work together to reduce that infrastructure vulnerability. This is wild news, folks. Project Glasswing is described in the announcement as: "An initiative that brings together Amazon Web Services, Anthropic, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, Nvidia, and Palo Alto Networks in an effort to secure the world's most critical software." The name "glasswing" may mean nothing, or provide some insight into the project's overall intent. The glasswing butterfly, native to Central and South America, is so-named because of its transparent wings that allow it to camouflage itself in its surroundings. The butterfly is also unusually resilient, able to carry up to 40 times its own weight. At its core, this "coalition of the willing" is planning to deploy two defensive weapons: a new, unreleased AI model called Claude Mythos Preview and a pile of cash ($4 million in direct donations and $150 million in Claude usage credits). At first glance, this announcement looks like a highly coordinated PR strategy, some security theater. Another skeptical interpretation might be that these companies are creating a security cartel to lock out startups and other players. But I don't think that's the case. Based on statements from key players and the security vulnerabilities mentioned, I think this is something far more serious than a giant corporate PR photo op to make everyone look responsible with AI. Having spent time as an executive at Symantec and a team lead at Apple, I've seen firsthand how fiercely these companies guard their intellectual property. To see them hand over $150 million in credits and open up unreleased models to one another tells me the threat level has moved from competitive to existential.
The fact is, you don't see these specific companies cooperating like this unless the alternative is mutually assured destruction of their shared infrastructure. And no, I don't think that's hyperbole. Here's how Elia Zaitsev, CTO at cybersecurity company CrowdStrike, described the situation: "The window between a vulnerability being discovered and being exploited by an adversary has collapsed. What once took months now happens in minutes with AI." If the name CrowdStrike sounds familiar, it might be because back in 2024, the company pushed an update that accidentally bypassed safeguards and crashed millions of Windows systems all across the planet. If any one company knows what a bad day feels like, it's CrowdStrike. According to the announcement, "We formed Project Glasswing because the capabilities we've observed in Mythos Preview could reshape cybersecurity." Anthropic described the Mythos Preview model as a "general-purpose, unreleased frontier model" with strong agentic coding and reasoning skills. The company said, "Anthropic didn't train it specifically for cybersecurity." The company also said it doesn't plan to make Mythos Preview generally available, probably because it could be weaponized by adversarial actors. According to Anthropic, "Over the past few weeks, Mythos Preview has identified thousands of zero-day vulnerabilities, many of them critical. The vulnerabilities it finds are often subtle or difficult to detect." Thousands. It turns out that many of the vulnerabilities are present in core, mission-critical software and have been in software deployed actively for the past 10 or 20 years. One such vulnerability was a 27-year-old bug just found in OpenBSD. For the record, OpenBSD is known for its security, and yet here was a mission-critical vulnerability nobody (at least none of the good guys) knew about. Another example is "a 16-year-old vulnerability in a widely used video software." Here's the scary gotcha. Apparently, the bug is in a line of code that automated testing tools previously considered the gold standard for security checks. The testing tools analyzed that line of code five million times over the years, and not once did they catch the problem. Think about this statement from Anthony Grieco, SVP and chief security and trust officer at Cisco, the global networking and infrastructure company that powers much of the internet and enterprise connectivity. Grieco said, "AI capabilities have crossed a threshold that fundamentally changes the urgency required to protect critical infrastructure from cyber threats, and there is no going back." No going back. He said, "The old ways of hardening systems are no longer sufficient. Providers of technology must aggressively adopt new approaches now." This fact is why he says Cisco joined Project Glasswing: "This work is too important and too urgent to do alone." That's a breathtaking statement, especially considering who it's coming from. Our modern civilization is built upon a networked technology infrastructure. Ranging all the way from giant power-generating stations down to our smart rings, just about everything is based on computers and networking. But this digital infrastructure foundation isn't all from one company or product.
In fact, a huge proportion is based on open-source software, often written by lone unaffiliated developers. Even commercial billion-dollar products use software libraries built by individual coders. Historically, programmers and teams have hand-tested their code and then written test suites to put their code through its paces. I do this with my open-source security product. Before I deploy an update, I test it extensively. Afterward, I often share it with a subset of users for a beta test period. Generally speaking, my product has been quite solid. But last fall, I decided to feed the full source code to Claude Code and OpenAI's Codex. I asked each of them for a security evaluation. Both identified vulnerabilities that my testing process missed. In fact, while both found some of the same vulnerabilities, each AI found a few that the other AI did not. I quickly fixed the bugs the AIs identified. But what really interested me was the type of bugs identified. These weren't bugs in the actual code itself. I didn't make any of the classic coding errors that usually lead to vulnerabilities. What the AIs identified were behavioral quirks that would only manifest when combined with other software and configurations -- code I didn't write. But because the AIs could look beyond the code they were asked to investigate and instead considered the entire infrastructure environment in which the code was running, they were able to identify situational problems that could have turned into exploits. This issue, on a much greater scale, is what Project Glasswing intends to tackle. The Project Glasswing announcement said: "No one organization can solve these cybersecurity problems alone: frontier AI developers, other software companies, security researchers, open-source maintainers, and governments across the world all have essential roles to play." There are hundreds of thousands of these components running on billions of devices and within millions of software programs. All it takes is one vulnerability in one piece of code, and critical infrastructure could fail. According to Igor Tsyganskiy, EVP of cybersecurity and Microsoft Research at Microsoft, "As we enter a phase where cybersecurity is no longer bound by purely human capacity, the opportunity to use AI responsibly to improve security and reduce risk at scale is unprecedented." A corollary is that bad actors can use AI aggressively and destructively, performing attacks at machine speed and finding vulnerabilities at a rate we've never encountered before. This initiative must not be taken out of context. To understand its relevance, we must also consider the current geopolitical situation. IT security teams have been dealing with cyberthreats for years. Whether it's criminals out for money, hacktivists intent on disruption, or nation states conducting a mix of data exfiltration, monetary extortion, identity theft, and infrastructure disruption, cyber threats are nothing new. I spent years investigating a key White House email controversy for my book, Where Have All The Emails Gone?, and even then, the vulnerability of our highest offices to basic infrastructure failures was staggering. But those were human-scale errors. What Project Glasswing is fighting is a machine-speed collapse of the entire defensive perimeter.
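The kind of AI-assisted review described a few paragraphs above, handing a model a source file and asking it to hunt for vulnerabilities, including the environment-dependent ones, can be sketched in a few lines. The snippet below is a rough illustration using Anthropic's Python SDK, not the author's actual workflow; the model name and prompt are placeholders.

```python
import pathlib
from anthropic import Anthropic  # assumes ANTHROPIC_API_KEY is set in the environment

client = Anthropic()

def review_file(path: str) -> str:
    # Ask a Claude model to review one source file for security issues,
    # including problems that only appear in combination with other software.
    source = pathlib.Path(path).read_text()
    message = client.messages.create(
        model="claude-sonnet-4-5",  # placeholder; use whichever model you have access to
        max_tokens=2000,
        messages=[{
            "role": "user",
            "content": "Review this code for security vulnerabilities, including "
                       "issues that only show up when it interacts with other "
                       "software or configurations:\n\n" + source,
        }],
    )
    return message.content[0].text

# Hypothetical usage:
# print(review_file("auth_handler.py"))
```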
There are two very new factors in play right now. The first has been the growth of AI capabilities. While Mythos Preview is intended as a defensive tool, do not doubt that adversaries are building their own frontier models as weapons of mass digital disruption. The second factor is the war in Iran. Back in 2012, I wrote a cyberwarfare profile of Iran, exploring its internal capabilities to wage cyberwarfare. Back then, I noted that Iran prioritizes higher education in science and math. While the Iranian government censored the internet, almost a quarter of Iranian citizens were online. Today, almost 80% are online. My conclusion in 2012 is even more valid today. I said, "The point of all this is to showcase that Iran has substantial connectivity, resources, and educated citizenry, more than enough to fuel forays into cybercrime, cyberterrorism, and cyberwarfare itself." Combine that with access to frontier-level AI technology, and it's fair to expect an intense level of cyberattacks at a rate and ferocity never seen before, leveraging exploits previously hidden in the complexity of the overall infrastructure. It's important to acknowledge the ongoing issues Anthropic has had recently with the US Government. The Project Glasswing announcement obliquely reflects this situation: "Anthropic has also been in ongoing discussions with US government officials about Claude Mythos Preview and its offensive and defensive cyber capabilities." This is the only time in the announcement that Mythos was described as capable of supporting "offensive" capabilities. I invite the reader to draw their own conclusions about that detail. My take on it is that Mythos could be potentially destructively capable if that kind of action were to become necessary. That offensive capability may also be why Anthropic is limiting the release to a defined set of participants and not making it available to the world at large. The announcement also said: "Securing critical infrastructure is a top national security priority for democratic countries. The emergence of these cyber capabilities is another reason why the US and its allies must maintain a decisive lead in AI technology." Earlier this year, the US government designated Anthropic as a supply chain risk. A side effect of this designation was that defense contractors were instructed to stop using Anthropic products in anything that could be tangentially considered related to government defense work. That designation would have affected the government contracts of a number of Project Glasswing participants had they chosen to continue using Claude. However, on March 26, US District Court Judge Rita Lin blocked that restriction, temporarily allowing defense contractors to continue to use Claude AI products. I see two possible between-the-lines reads here: This is how the Project Glasswing release explained the situation: "The work of defending the world's cyber infrastructure might take years; frontier AI capabilities are likely to advance substantially over just the next few months. For cyber defenders to come out ahead, we need to act now."
If you're going to pay real attention to the infrastructure risk posed by thousands of hidden vulnerabilities, you have to take into account the individual open-source developers operating independently. There is an enormous ecosystem based on all those individuals, each modifying their own code and checking it in to centralized repositories. While the nature of open source means anyone (and any company) can read the code, checking in modifications is limited to the developers with commit access to the project. It is certainly possible for others to fork the project (create their own copy that is also distributed). But doing so would not immediately solve any software dependency risk. That is because there are automated systems across the internet built to incorporate known packages into their distributions. Forking a project would require all those automated systems to change the source of their code updates. So, when Mythos Preview finds a vulnerability, how does it reach the proper developer for repair? Project Glasswing is taking two approaches. The first is to donate a Claude Max subscription for Claude Opus and Sonnet to any verifiable open-source developer who asks. That's not access to Mythos Preview, but even Claude Opus 4.6 can help identify bugs. Maintainers interested in access can apply for Claude Max grants through the Claude for Open Source program. When I asked about it, Anthropic told me, "We've donated $2.5M to Alpha-Omega and OpenSSF through the Linux Foundation, and $1.5M to the Apache Software Foundation to enable the maintainers of open-source software to respond to this changing landscape." OpenSSF is the Open Source Security Foundation. Their mission is to "Make it easier to sustainably secure the development, maintenance, release, and consumption of open-source software. This includes fostering collaboration within and beyond the OpenSSF, establishing best practices, and developing innovative solutions." Alpha-Omega, part of the Linux Foundation, serves: "As a helping hand and funding catalyst that supports the maintainers, communities, and ecosystems where security investment can have the greatest impact." The Apache Software Foundation also supports a great many projects that provide critical infrastructure across the internet. While funding goes to these organizations, their role in high-vulnerability projects will be to facilitate outreach to individual developers and to possibly provide funding for the time required to implement fixes. The challenge will be that many of the key developers for mission-critical components have other obligations and time commitments. On the other hand, if any group can wrangle these very independent developers, it's the various open-source foundations that have been developer-wrangling ever since they got started. Jim Zemlin, CEO of the Linux Foundation, said, "In the past, security expertise has been a luxury reserved for organizations with large security teams. Open source maintainers, whose software underpins much of the world's critical infrastructure, have historically been left to figure it all out on their own." Here's something to consider. He said, "Open source software constitutes the vast majority of code in modern systems, including the very systems AI agents use to write new software." He also addressed the funding and time concerns.
He said, "By giving the maintainers of these critical open source codebases access to a new generation of AI models that can proactively identify and fix vulnerabilities at scale, Project Glasswing offers a credible path to changing that equation. This is how AI-augmented security can become a trusted sidekick in every maintainer's workflow, not just for those who can afford expensive security teams." My take on this approach is that it's intriguing to see these arch-competitors apparently working together to solve cybersecurity issues. I'm also curious about how much of this approach proves to be merely acting for the cameras, and how much will impact our fundamental digital infrastructure. I balance that concern with one that's more visceral. This announcement, and the awareness of what a Mythos-style AI can do, tells us that we are at a far greater risk than even we cyberwarfare specialists had predicted. Given the volatile state of the world today, Project Glasswing could be the last best hope, or it could turn out to be just another PR effort that actually does nothing to prevent severe infrastructure disruption. Do you see Project Glasswing as a genuine defensive effort, or more of a coordinated industry power move to control access to advanced AI security tools? Let us know in the comments below.

Anthropic on Tuesday released a preview of its new frontier model, Mythos, which it says will be used by a small coterie of partner organizations for cybersecurity work. In a previously leaked memo, the AI startup called the model one of its "most powerful" yet. The model's limited debut is part of a new security initiative, dubbed Project Glasswing, in which more than 40 partner organizations will deploy the model for the purposes of "defensive security work" and to secure critical software, Anthropic said. While it was not specifically trained for cybersecurity work, the preview will be used to scan both first-party and open-source software systems for code vulnerabilities, the company said. Anthropic claims that, over the past few weeks, Mythos identified "thousands of zero-day vulnerabilities, many of them critical." Many of the vulnerabilities are one to two decades old, the company added. Mythos is a general-purpose model for Anthropic's Claude AI systems that the company claims has strong agentic coding and reasoning skills. Anthropic's frontier models are considered its most sophisticated and high-performance models, designed for more complex tasks, including agent-building and coding. The partner organizations previewing Mythos include Amazon, Apple, Broadcom, Cisco, CrowdStrike, the Linux Foundation, Microsoft, and Palo Alto Networks. As part of the initiative, these partners will ultimately share what they've learned from using the model so that the rest of the tech industry can benefit from it. The preview is not going to be made generally available, Anthropic said. Anthropic also claims that it has engaged in "ongoing discussions" with federal officials about the use of Mythos, although one would have to imagine that those discussions are complicated by the fact that Anthropic and the Trump administration are currently locked in a legal battle after the Pentagon labeled the AI lab a supply-chain risk over Anthropic's refusal to allow autonomous targeting or surveillance of U.S. citizens. News of Mythos was originally leaked in a data security incident reported last month by Fortune. A draft blog post about the model (then called "Capybara") was left in an unsecured cache of documents available on a publicly inspectable data lake. The leak, which Anthropic subsequently attributed to "human error," was originally spotted by security researchers. "'Capybara' is a new name for a new tier of model: larger and more intelligent than our Opus models -- which were, until now, our most powerful," the leaked document said, adding later that it was "by far the most powerful AI model we've ever developed," according to the report. In the leak, Anthropic claimed that its new model far exceeded its currently public models in areas like "software coding, academic reasoning, and cybersecurity," and that it could potentially pose a cybersecurity threat if weaponized by bad actors to find bugs and exploit them (rather than fix them, which is how Mythos will be deployed). Last month, the company accidentally exposed nearly 2,000 source code files and over half a million lines of code via a mistake it made in the launch of version 2.1.88 of its Claude Code software package. The company then accidentally caused thousands of code repositories on GitHub to be taken down as it attempted to clean up the mess.

Barça is gearing up for the second Atlético battle with no locker-room issues. The outburst crisis involving Lamine Yamal, the young Barcelona star, ended quickly after the Atlético Madrid match in La Liga without affecting the atmosphere within the Catalan team. Barcelona secured a valuable 2-1 win over Atlético Madrid at the Metropolitano, the Madrid side's home ground, last Saturday in La Liga, before the two teams meet again tomorrow, Wednesday, this time at the Spotify Camp Nou in the first leg of the Champions League quarter-finals. Yamal raised questions after he refused to celebrate with his teammates following the winning goal against Atlético Madrid, and he also left the pitch angrily, ignoring the words of his coach, Hansi Flick. When asked about the strange scene, Flick said after the match: "Everything is fine inside the dressing room." According to DAZN footage, the reason for Yamal's anger was Jose Ramon de la Fuente, the goalkeeping coach, known for his passionate behavior and frequent expulsions from matches, who instructed the player during the match to pass the ball instead of shooting as the game neared its end with the score at 1-1. De la Fuente's interference in this kind of detail did not sit well with Yamal, who conveyed his dissatisfaction to Flick. But it seems that taking all three points from the Madrid side's fortress helped calm things down quickly. According to SER radio, a cheerful atmosphere prevailed on the plane back to Barcelona, especially with the gap over Real Madrid widening to 7 points, which made things smoother. The station added: "It was nothing more than a passing outburst, and everyone exchanged jokes on the plane during the return trip to the city of Barcelona."

The purest play: Why Boron One is THE boron stock to own While dozens of minerals are considered critical to economic and national security, thanks to their combination of scarcity and unmatched industrial performance, few minerals, if any, are more ubiquitous across the industrial landscape than boron. Let's paint a picture: Perhaps most urgently, boron is also an essential component across the renewable energy industry, making it [...]

(Bloomberg) -- Anthropic PBC has hired a senior leader from Microsoft Corp. to lead its push to establish the infrastructure needed to support growing adoption of its artificial intelligence services. Eric Boyd will serve as head of infrastructure at Anthropic, according to a post on his LinkedIn Tuesday. "AI is accelerating at an incredible pace," he wrote. "The impact of Claude Code in the last 6 months, and particularly the last two months, just shows the power of what is possible." Anthropic has seen strong growth in demand for its AI products, including Claude Code, which helps streamline the process of writing and debugging software. More recently, the company has struggled at times to keep its services online after what it described as "unprecedented demand" from everyday users, as well as business customers. The company is working to build additional cloud computing capacity to meet that usage, including committing to spend $50 billion to build AI data centers in the US. By comparison, rival OpenAI has said it plans to spend about $600 billion on infrastructure for AI by 2030. Boyd previously oversaw Microsoft's AI platform, enabling deployment of large language models by both customers and internal teams. He reported to Executive Vice President Jay Parikh and oversaw about 1,500 workers, according to a recent organizational chart seen by Bloomberg. Prior to his 16 years at Microsoft, he held leadership roles at Yahoo. Microsoft didn't immediately respond to a request for comment. Boyd's planned hiring was previously reported by tech outlet Newcomer. "His experience leading infrastructure at enterprise scale will help ensure we can meet record demand from customers around the world," wrote Rahul Patil, Anthropic's chief technology officer, on LinkedIn.

Broadcom is strengthening its position in the AI infrastructure market, expanding its partnerships with Google and Anthropic in a move that demonstrates how demand for compute capacity is reshaping the chip sector. The company announced new agreements to develop future generations of Google's custom AI processors while enabling Anthropic to access approximately 3.5 gigawatts of computing capacity through Google's TPU infrastructure. Together, the deals place Broadcom in a critical position within a supply chain that is increasingly defined by highly ambitious hyperscale computing requirements. The rise of custom silicon is driving these deals. Rather than relying solely on general-purpose graphics processors, large tech companies are investing heavily in purpose-built chips to optimize performance and reduce long-term costs. Broadcom, with its expertise in both semiconductor design and networking, is a key supplier for this transition. Broadcom will continue working with Google to design and manufacture Tensor Processing Units (TPUs), specialized chips tailored for machine learning workloads. The arrangement extends several years into the future and includes networking components that support AI data center infrastructure. The Anthropic agreement demonstrates the scale of the AI sector's compute ambitions. The startup's plan to access multiple gigawatts of TPU-based capacity beginning in 2027 reflects a level of infrastructure typically associated with national power grids rather than young tech companies. The expansion follows a sharp increase in Anthropic's commercial activity, with revenue and enterprise adoption rising quickly over the past year. Broadcom's role extends beyond chip fabrication. Its involvement in networking hardware positions it to benefit from the full stack of AI deployment, where moving data efficiently between processors is as critical as the compute itself. As AI models grow ever more advanced, data centers are evolving to handle unprecedented volumes of information, increasing the importance of integrated solutions. For Broadcom, this demand translates into a substantial revenue opportunity. AI-related business tied to Anthropic alone could reach tens of billions of dollars annually within the next two years, though the company has not disclosed financial terms. The partnerships also highlight how AI development is concentrating among a small number of players with the capital to secure massive compute resources. Companies like Anthropic and OpenAI are competing not only on model performance but also on access to infrastructure. That competition is driving long-term agreements with chipmakers and cloud providers alike. Technology stocks have faced pressure in recent weeks amid turbulence in world events, yet investment in AI infrastructure continues largely unabated. That resilience suggests that spending on AI is being treated less as a discretionary expense and more as a foundational priority for major technology firms. There are, however, dependencies built into the agreements. Broadcom indicated that some of the planned capacity expansion is contingent on Anthropic's continued commercial growth. This introduces an element of performance risk, though current trends point toward sustained demand for AI services. Bottom line, the deals reinforce a central reality of today's tech cycle: access to compute has become a defining competitive advantage. 
As companies work to build and deploy sophisticated AI systems, those that control the underlying infrastructure stand to capture significant value, and Broadcom clearly intends to be one of those companies.

Samsung is working on changing the "Hey Plex" wake word to "Hey Perplexity" in the Galaxy S26 series. The most recent update to the Perplexity Android app (version 2.81.2) already includes related hints. The change is probably meant to avoid confusion with the popular Plex media streamer. To help users with the longer name, the app is also being tuned to recognize various similar-sounding pronunciations. This has been a confusing few weeks for Samsung Galaxy S26 owners. When the device launched, one of its headlining features was a deep integration with Perplexity AI, promised to be accessible via a quick "Hey Plex" wake word. However, early adopters quickly noticed that the feature was either missing or simply refused to work. After a period of silence and a few deleted tweets, we finally have a clearer picture of what is happening behind the scenes with Samsung and "Hey Plex."

From "Plex" to "Perplexity": The story behind the S26's AI wake word change

The mystery began when users noticed that "Hey Plex" vanished from the voice wake-up settings. Samsung initially described this as part of an "ongoing product refinement process." However, the real reason seems to be a branding pivot. Perplexity CEO Aravind Srinivas briefly mentioned in a now-deleted post that the company was transitioning to a more formal "Hey Perplexity" wake word. New evidence found by Android Authority shows this change in the Perplexity Android app version 2.81.2. Internal files and hidden screens show that Samsung is choosing "Hey Perplexity" as the new command. The firm probably wants to avoid confusion with the popular Plex media streaming service.

Solving the tongue-twister

One clear concern about this change is how hard the new phrase is to say. "Perplexity" is significantly more of a mouthful than the snappy "Plex." Interestingly, the developers seem to be prepared for this problem. The app's code shows that the system is learning to recognize different "sound-alike" versions of the word. This should help the AI respond even if a user mispronounces the wake word or says it quickly. For those waiting to use the full suite of AI tools on their Galaxy S26, this update is a positive sign. The presence of these new assets suggests that the "refinement process" is nearing its end.

For so many years, the hyperscalers and cloud builders have dominated IT spending and much of the talk about system architecture. But the AI model builders, particularly Anthropic and OpenAI, are now their peers when it comes to massive infrastructure investments, and what they do - and do not do - also shapes the AI landscape. To make those investments in AI infrastructure, the AI model builders have themselves had to raise tremendous amounts of money from investors - sometimes including their AI compute engine suppliers, which is a bit of roundtripping, indeed. Nvidia, AMD, Amazon Web Services, Microsoft Azure, and Google have all made investments in OpenAI and Anthropic, helping them prime the AI workload pump that will in turn lead to larger AI system sales in the future as more of us make use of AI applications. Anthropic is growing like crazy right now, thanks to code assistant variants of its Claude model. Code assistants, as it turns out, are the killer app for GenAI, much to the chagrin of millions of programmers worldwide but not to the programming managers who have mastered the art of keeping hundreds of AI coding agents busy working 24/7 on software projects. There are plenty of complaints about the quality of the code that GenAI models generate, but perhaps AI model revenue streams are the leading indicator of enthusiasm for the idea despite the many shortcomings of automated programming. Given its pole position in code assistants and its consequent, more explosive revenue growth, Anthropic's recent revenue curve is perhaps a better barometer than OpenAI's. Brent Thill and Blayne Curtis, equities analysts at Jefferies, have been tracking the annualized revenue run rate for these two AI model builders. Anthropic had around 500 customers paying it $1 million or more annually to license Claude back in February, and said this week that it has more than doubled that count. Two years ago, the overall revenue trajectory at Anthropic - which includes Claude model resale through AWS Bedrock and Google Vertex at gross revenue levels as well as any direct sales that Anthropic does - was lower and slower growing than the net revenue figures that OpenAI reported, which include direct API revenues. But early this year, OpenAI's revenues stumbled a bit (dropping from a $25 billion ARR to a $24 billion ARR), while Anthropic accelerated from a $14 billion ARR in February of this year to a $30 billion ARR this week. Just for fun, we asked Claude to comb the Internet and build a relative ARR chart for OpenAI and Anthropic. Take a gander: These figures mesh with the ones that the Jefferies folks have been tracking, which is the only reason we let Claude build the chart. It won't happen a lot here at The Next Platform. Remember, one is counting revenue at the customer level, and the other is counting revenue that it actually receives. IDC and Gartner have a similar split in the way they count things. IDC has always counted "factory revenue" - direct sales by vendors plus sales into the distribution channel - while Gartner counts the money that end users spend, which includes the channel overhead. We don't know what the overhead of the channel is for Anthropic, so it is tough to adjust it to match apples to apples with OpenAI.
What is clear is that Anthropic is taking off and maybe OpenAI is not, which is not a good thing for OpenAI as it tries to go public this year, and is a very good thing for Anthropic, which is also trying to go public this year. Perhaps more concerning for Sam Altman & Co, OpenAI is overvalued compared to Anthropic based on these revenue rates. It is very hard to reckon cumulative funding for OpenAI, but various estimates put it at somewhere between $168 billion and $199 billion, with its last round coming in at $122 billion - with Nvidia, Amazon, and SoftBank kicking in most of that - and giving it a valuation of $852 billion. But Anthropic, which has caught the code assistant wave with Claude Code in a way that OpenAI Codex just has not, is driving revenue but does not have so many investors that want to cash out so big, and is therefore not under such pressure to have a history-making, record-breaking initial public offering. Depending on relative profitability - by which I mean the level of losses compared to revenues, because neither company has a hope in hell of being profitable until the next decade begins, and even that is not necessarily going to happen - Anthropic might deserve a better IPO than OpenAI, despite having raised only around $67 billion in total and having a valuation of $380 billion. Here's the important thing that everyone - Wall Street, Main Street, your cab driver, your mom - is watching. That revenue that OpenAI and Anthropic are chasing requires token chewing and token spitting. And the top brass at Anthropic are racking their brains trying to figure out how to get the money to get the infrastructure to meet future token demand. Hence, an expanded deal between Broadcom, Google, and Anthropic, which Broadcom announced in an 8-K filing with the US Securities and Exchange Commission today, is going to get more iron into the hands of Google and Anthropic, which is a frenemy in that it competes with Google's Gemini model but also uses Google TPU infrastructure to train and infer Claude. Two things are happening. First, Google has inked a long-term agreement with Broadcom to help develop and make future TPUs and has also signed a supply assurance agreement for networking and other components used in Google's own rackscale systems that runs through the end of 2031. As far as we know, MediaTek is also helping design and manufacture future TPUs for Google, but these appear to be variants of the mainstream compute engines - variations of the TPU v7, TPU v8, and TPU v9 devices is what the chatter is all about. It is understandable that Google wants to have a second source for its AI compute engines to mitigate risks. Similarly, Anthropic has to mitigate risks by working with AWS on Trainiums, Google on TPUs, Nvidia on its GPUs, and perhaps soon, AMD on its GPUs. Second, Anthropic is renting TPU capacity on Google Cloud, but in 2027, it also plans to install its own TPU racks, built by Broadcom and authorized by Google, in its own datacenters. If Nvidia AI systems and their datacenters - the indisputable Cadillac of AI training and inference - cost on the order of $50 billion per gigawatt (a figure that Nvidia co-founder and chief executive officer Jensen Huang has used a number of times), it is reasonable to assume that TPU infrastructure might only cost on the order of $30 billion to $35 billion per gigawatt.
So boosting the capacity to 3.5 gigawatts will give Broadcom more dough, and depending on what Google is charging Broadcom to resell its TPUs to Anthropic, that incremental revenue lowers Google's internal TPU bill. In a sense, Google is turning Broadcom into an OEM for Anthropic and is booking revenue in its cloud hardware division, offsetting its own TPU costs. If Google is charging a 25 percent markup, then if it can get enough customers to buy 4X the number of TPUs that it needs for itself, it can get its own TPUs for free, and still charge a hefty premium to rent those TPUs on its cloud. In the long run, the AI model builders will not rent huge amounts of capacity on the cloud. They will make the clouds sell them infrastructure on the cheap that they can amortize, thereby lowering the cost of tokens. Google's other option was to lose Anthropic as a customer and have it go off and create its own AI XPUs or do a similar deal with AWS or Microsoft or Meta Platforms. Now, Anthropic has to come up with the money to pay for this gear in 2027, which is why it is doing an IPO in 2026 and not installing this TPU gear right now. In the meantime, Anthropic will rent TPU capacity on Google Cloud and Trainium capacity on AWS, pay through the nose, and get by.
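To make that back-of-the-envelope arithmetic concrete, here is a minimal sketch in Python, assuming the 25 percent markup, the 4X customer volume, and the $30 billion to $35 billion per gigawatt figures quoted above; all of the numbers are this article's estimates, not disclosed figures, and the variable names are ours.

# Back-of-the-envelope sketch of the TPU markup math described above.
# All inputs are estimates from the article, not disclosed figures.
markup = 0.25              # assumed 25 percent markup on TPUs sold to outside customers
google_need = 1.0          # normalize Google's own TPU requirement to 1X
customer_volume = 4.0      # customers buy 4X what Google needs for itself

# Markup profit on customer sales, measured in multiples of TPU cost.
markup_profit = customer_volume * markup       # 4 x 0.25 = 1.0
print(markup_profit >= google_need)            # True: the markup alone covers Google's own TPUs

# Rough cost of the 3.5 gigawatts of TPU capacity earmarked for Anthropic,
# using the $30 billion to $35 billion per gigawatt estimate.
gigawatts = 3.5
low, high = 30e9, 35e9
print(f"${gigawatts * low / 1e9:.0f}B to ${gigawatts * high / 1e9:.0f}B")  # roughly $105B to $122B

Under those assumptions, the markup on outside sales pays for Google's own fleet, and the Anthropic build-out alone lands in the low hundreds of billions of dollars, which is why the financing question looms so large.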
