The latest news and updates from companies in the WLTH portfolio.
Anthropic recently hired a 16-year Microsoft veteran away from the software giant: former Microsoft AI Platform president Eric Boyd is now the head of infrastructure at Anthropic. "I'm excited to join the amazing team at Anthropic today where I'll be leading the Infrastructure team," Boyd wrote on LinkedIn today. "I've been privileged to have a front row seat to the explosion of LLMs, and the team at Anthropic is truly special. The combination of the absolute leading models with a culture that is committed to their mission is inspiring and I can't wait to lean in to help."

GeekWire reported a week ago that Boyd had left Microsoft after over 16 years. Before surfacing at Anthropic today, Boyd had led the AI Platform team for over 11 years; the group "delivers Microsoft Foundry" and Foundry IQ while powering the company's first-party Copilot applications. He reported to executive vice president Jay Parikh and oversaw about 1,500 workers, according to Bloomberg. Before that, he was general manager of the BingAds Development team.

While many big AI companies poach executives from one another (Apple, in particular, has seen its AI teams gutted by rivals in recent months), it's not difficult to understand why Boyd would want to leave Copilot behind for an AI company that's well regarded and successful. "AI is accelerating at an incredible pace, and the impact of Claude Code in the last six months, and particularly the last two months, just shows the power of what is possible," he wrote. "Bringing Powerful AI to the world in a way that brings the benefits to everyone will be so important, and I can't think of a better place to make this happen."

Anthropic has launched Project Glasswing, a cybersecurity consortium featuring tech giants like Apple, Google, and Microsoft. Participants will use the unreleased Claude Mythos Preview model to find and patch deep-seated vulnerabilities in their software. Anthropic aims to give defenders a head start against future AI-powered cyberattacks by providing early access to this powerful AI.

Anthropic, the company behind Claude AI, has launched a new cybersecurity initiative called "Project Glasswing." The main goal is to use artificial intelligence to defend against the very cyberattacks that AI is now making possible. To achieve this, the company is also teaming up with rivals in the segment.

Project Glasswing: Anthropic's AI cybersecurity initiative powered by Claude Mythos

The centerpiece of the project is Claude Mythos Preview. Anthropic describes the new AI model as a major leap forward. While it wasn't specifically designed for hacking, its advanced coding and reasoning skills make it a formidable tool for finding security holes. According to Anthropic, the model has already autonomously identified thousands of critical vulnerabilities, including bugs in every major operating system and web browser. Because of this power, Anthropic is not releasing the model to the public. As reported by The Verge, the company fears that if Mythos Preview fell into the wrong hands, it could become a "meaningful accelerant" for hackers. Instead, it is keeping the model behind a "glass wing," giving access only to a select group of defensive partners.

Unlikely allies

Project Glasswing has accomplished something rare in Silicon Valley: it has brought fierce rivals together. The consortium includes over 45 organizations, among them Apple, Google, Microsoft, Amazon Web Services, and NVIDIA. Financial giants like JPMorganChase and infrastructure groups like the Linux Foundation are also on board.

The logic behind the collaboration is defensive. Anthropic wants to give these companies private access to the model so they can find and patch vulnerabilities in their own software before similar capabilities become widely available to malicious actors. Logan Graham, Anthropic's frontier red team lead, told WIRED that we must prepare for a world where these high-level hacking capabilities are common within the next year or two.

A complicated path forward

Despite the high-tech mission, Anthropic's journey hasn't been without friction. The company recently faced tension with the U.S. government after refusing to remove certain guardrails for military use, which even earned it a "supply chain risk" designation from the Pentagon. Furthermore, reports suggest that a hacker used an earlier version of Claude against government agencies in Mexico earlier this year. To support the effort, Anthropic is committing $100 million in usage credits and making significant donations to open-source foundations.

April 7 (Reuters) - Anthropic on Tuesday announced an initiative with major technology companies, including Amazon.com, Microsoft and Apple, that lets partners preview an advanced model with cybersecurity capabilities developed by the AI startup. Under its "Project Glasswing", select organizations will be allowed to use the startup's unreleased and general-purpose AI model, "Claude Mythos Preview", for defensive cybersecurity work, Anthropic said. Other partners include CrowdStrike, Palo Alto Networks, Google and Nvidia. The announcement follows a Fortune report last month that Anthropic was testing Claude Mythos, a model the report said posed security risks while also offering advanced capabilities, dragging shares of cybersecurity firms such as Palo Alto Networks and CrowdStrike sharply lower. This year's RSA cybersecurity conference in San Francisco was also dominated by talk about the rise of AI-powered cyberattacks and whether conventional security tools sufficed. In a blog post on Tuesday, Anthropic said Mythos Preview had found "thousands" of major vulnerabilities in operating systems, web browsers and other software. The startup said launch partners will use Mythos Preview in their defensive security work, and Anthropic will share findings with industry. Anthropic said it is also extending access to about 40 additional organizations responsible for critical software infrastructure, and made a commitment of up to $100 million in usage credits and $4 million in donations to open-source security groups. The AI startup added that its eventual goal is for "our users to safely deploy Mythos-class models at scale." The startup said it has also been in ongoing discussions with the U.S. government about the model's capabilities. Last year, Anthropic said that hackers exploited vulnerabilities in its Claude AI to attack around 30 global organizations.
Moreover, 67% of the 1,000 executives surveyed in an IBM and Palo Alto Networks study said they had been targeted by AI attacks within the past year. (Reporting by Jaspreet Singh in Bengaluru and Jeffrey Dastin in San Francisco; Editing by Leroy Leo)
Intel Corp. will join Elon Musk's massive chip-building effort in Austin, Texas, the companies announced April 7. The Silicon Valley chip manufacturer will work with Musk's Terafab project to "help refactor silicon fab technology," Intel CEO Lip-Bu Tan announced on X in a surprise statement. "Our ability to design, fabricate, and package ultra-high-performance chips at scale will help accelerate Terafab's aim to produce 1 [terawatt per year] of compute to power future advances in AI and robotics," Tan posted. SpaceX, xAI, and Tesla expect to require more than 1 terawatt (TW), or 1 trillion watts, of compute, which is the amount the Terafab is designed to produce each year. By a rough back-of-the-envelope estimate, 1 TW of continuous draw is enough to power on the order of 800 million average U.S. homes...
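For scale, the terawatt figure converts to household terms with simple arithmetic; the per-home consumption value below is an assumed rough U.S. average, not a figure from the announcement:

```python
# Back-of-the-envelope: how many average U.S. homes does 1 TW correspond to?
terawatt = 1e12                  # watts
home_kwh_per_year = 10_500       # assumed rough U.S. average household usage
home_avg_watts = home_kwh_per_year * 1_000 / (365 * 24)   # ~1.2 kW continuous
homes_powered = terawatt / home_avg_watts
print(f"{homes_powered / 1e6:.0f} million homes")  # → 834 million homes
```

Any reasonable value for household consumption lands in the hundreds of millions of homes, which is why the figure is best read as an order-of-magnitude comparison.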

8 PM ET today is the deadline set by President Donald Trump for Iran to reopen the Strait of Hormuz, a vital artery for the global oil trade. Through Truth Social, the president warned of imminent attacks on critical infrastructure, such as power plants and bridges, if an agreement is not reached. The escalation reached a breaking point following the refusal of Russia and China at the UN to intervene in the reopening of the passage, which has been blocked since February due to regional conflict. This scenario has generated immediate volatility in the markets. While crude oil prices remain at highs, Polymarket traders reflect global uncertainty; in just one hour, the odds of a deadline extension jumped from 35% to 66%. The crypto community is closely monitoring the situation, as the outcome of this geopolitical conflict will determine the direction of safe-haven assets and the stability of energy markets in the coming hours. With Pakistan's mediation still ongoing and the world awaiting the expiration of the deadline, the market is bracing for a possible announcement of the end of military operations by June (74% probability according to bets). The next step depends on Tehran's response before the presidential clock runs out.

Source: https://polymarket.com/event/trump-announces-hormuz-deadline-extension-today

Disclaimer: Crypto Economy Flash News is prepared from official and public sources verified by our editorial team. Its purpose is to quickly inform about relevant facts in the crypto and blockchain ecosystem. This information does not constitute financial advice or investment recommendations. We recommend always verifying the official channels of each project before making related decisions.

That growth is being driven largely by enterprise demand. Companies are integrating Anthropic's models into internal workflows and customer-facing tools via APIs, while a smaller share of revenue comes from subscriptions to premium chatbot features, according to The Information. The momentum is also showing up in customer spend. Anthropic now counts more than 1,000 enterprise clients paying over $1 million annually, a figure that has more than doubled in recent months, signaling a shift from experimentation to full-scale deployment. At the same time, the surge is tightening the competitive race. Anthropic's trajectory is closing the gap with larger competitors like OpenAI, which generated an estimated $25 billion annualized revenue earlier this year. This underscores how enterprise AI spending is expanding quickly across multiple providers rather than consolidating around one. The expansion comes even as Anthropic faces regulatory scrutiny in the United States, as PYMNTS previously reported. The company is contesting a federal designation that labeled it a potential supply chain risk, a move it has warned could cost billions in lost revenue. The dispute has introduced uncertainty among some enterprise customers, with more than 100 reportedly raising concerns about continuing their relationships with Anthropic, according to Bloomberg. Anthropic's growth reflects broader enterprise adoption of AI tools. Companies are moving beyond early chatbot experiments and deploying models across functions such as software development, customer support and internal data operations. The rise in customers spending more than $1 million annually suggests that AI usage is shifting from limited pilots to larger, production-level deployments.
That demand is also shaping how AI companies structure their offerings. Rather than focusing solely on consumer-facing chat interfaces, Anthropic has leaned into developer tools and enterprise integrations, where usage scales with business activity. The company's growth strategy is also extending beyond its core model business. Anthropic is reportedly planning a $200 million investment into a new private equity-backed venture aimed at distributing AI tools across portfolio companies, as covered by PYMNTS. At the same time, it is expanding into industry-specific applications. Anthropic has agreed to acquire biotech startup Coefficient Bio for roughly $400 million, targeting the use of AI in drug discovery and clinical workflows, according to PYMNTS.

Anthropic decided not to release the Mythos model widely, citing the need for further safeguards to ensure its safe deployment. Anthropic introduced its latest AI model, Claude Mythos Preview, on Tuesday, focusing on cybersecurity. The model aims to strengthen defenses against cyber threats by identifying vulnerabilities and developing exploits with little human oversight. This move comes weeks after a security lapse exposed Claude's code, which drew significant attention to Anthropic's internal security measures. Claude Mythos Preview represents Anthropic's cutting-edge effort in cybersecurity, designed to uncover high-severity vulnerabilities. In its testing, Mythos has already identified thousands of flaws across major systems, including operating systems and web browsers. According to the company, this new model aims to equip defenders with advanced tools before cyber attackers can harness similar AI capabilities. Project Glasswing, a partner program, was also announced alongside the Mythos model. It offers early access to the new AI system for select companies like AWS, Google, and Microsoft, among others. More than 40 organizations will be part of the initiative, with Anthropic committing up to $100 million in usage credits to support the program. Despite Mythos's promising capabilities, Anthropic has decided against a wide release. The company emphasized the need for further safeguards before deploying Mythos at scale. "Our goal is to ensure that this powerful tool can be deployed safely and responsibly," an Anthropic representative stated. This caution follows a mishap earlier this year when a packaging error exposed Claude's code. The incident led to the accidental release of over 500,000 lines of code, causing a significant security breach. Anthropic's attempt to take down the leaked files further escalated the issue, as it mistakenly removed thousands of GitHub repositories. 
Project Glasswing's partner organizations will be among the first to test Claude Mythos. These partners include some of the largest players in technology and infrastructure, with the Linux Foundation and Palo Alto Networks joining the program. The initiative aims to harness Mythos's ability to identify and exploit vulnerabilities while prioritizing the security of its deployment. Along with the partner program, Anthropic has pledged $4 million in donations to open-source security groups. The company's focus on supporting cybersecurity initiatives highlights its commitment to safeguarding against advanced cyber threats. However, Mythos's future remains uncertain, as Anthropic continues to refine its defensive capabilities before making the model more widely available. The development of Mythos marks a significant step in the AI-driven defense against cybersecurity risks, though Anthropic remains cautious about its broader use.

Project Glasswing unites Apple, Google, Microsoft, and others to fix critical vulnerabilities before the same AI power falls into the wrong hands. Anthropic has just announced Project Glasswing, which is less a standard product announcement and more a global distress flare. The project brings together a coalition of tech companies, including Apple, Amazon Web Services (AWS), Broadcom, Cisco, CrowdStrike, Google, Microsoft, NVIDIA, and Palo Alto Networks, to proactively hunt down and patch vulnerabilities in the "world's most critical software infrastructure."

Perhaps the main focus of the announcement is the launch of Claude Mythos Preview, a general-purpose frontier model that Anthropic has deliberately kept unreleased to the general public, and for good reason. Anthropic revealed that Mythos Preview has autonomously identified thousands of high-severity, zero-day vulnerabilities across all major operating systems and web browsers. Mythos Preview unearthed a remote crash vulnerability in OpenBSD, an operating system famous for its hardened security, that had sat unnoticed for 27 years. It also found a critical flaw in FFmpeg, a video processing library, buried in a line of code that automated testing tools had executed more than five million times over 16 years without flagging it. It even autonomously chained together multiple Linux kernel exploits to achieve total machine takeover.

Anthropic is no stranger to topping frontier-model benchmarks. The company has been aggressively optimizing its foundation models for complex reasoning and agentic programming, with the launch of models such as Claude Opus 4.5 and Claude Sonnet 4.6 with its massive one-million-token context window. However, the exact same agentic reasoning can also be used by a bad actor to automate a zero-day exploit. The barrier to entry for cyberattacks has historically been the human expertise required, but with models like Mythos, that barrier may no longer exist.
This is why Anthropic is giving the "good guys" a head start by distributing Mythos Preview to over 40 partners and critical infrastructure maintainers. The company is committing up to $100 million in usage credits for Mythos Preview and donating $4 million directly to open-source security entities like the Linux Foundation and the Apache Software Foundation.

Anthropic, the artificial intelligence company backed by Google, Amazon, and Microsoft, has crossed $30 billion in annualized revenue, outpacing OpenAI for the first time, according to Jefferies analysts. The surge comes after Anthropic added roughly $21 billion in net new annualized revenue in just three months, more than one-third of the $58 billion added by Jefferies' entire public software coverage (excluding Microsoft) in all of 2025. Analysts note that this rapid growth is likely to stoke concerns that Anthropic and OpenAI could dominate corporate IT budgets. Anthropic's growth has been fueled in part by a recent agreement with Google and Broadcom to supply around 3.5 gigawatts of next-generation TPU-based AI compute capacity, expected to come online in 2027. Jefferies highlighted that similar capacity deals may follow, benefiting cloud providers like Google, Amazon, and Microsoft. "Anthropic's continued acceleration should help support future revenue and order growth for its cloud partners," analysts wrote, citing Google's reported 158% year-over-year increase in remaining performance obligations in the fourth quarter of 2025. Despite Anthropic pulling ahead in revenue, analysts caution that OpenAI remains a formidable competitor. OpenAI has raised significantly more capital, secured substantial compute capacity, and maintains a large and engaged user base, including more than 900 million weekly active users. Its free-tier users provide a substantial data advantage, with the potential to drive monetization through advertising and other services. "As demand continues to surge, we believe Anthropic will need to keep scaling compute commitments, while carefully balancing capacity between training future frontier models and serving inference demand today, especially given scaling laws remain very much intact," analysts wrote. 
Jefferies concluded that while Anthropic may currently lead in revenue, both companies are likely to continue shaping AI spending across enterprise IT budgets, with further growth expected as demand for AI compute capacity continues to accelerate.
Matthew Green / @matthewdgreen: I think this is a good precautionary analysis but I'd bet huge amounts of money against a relevant quantum computer by 2029 or even 2035. [embedded post] And the posts, they keep on coming. -- I one hundred percent agree with @filippo here, the question is not whether we're certain that a quantum computer exists by 2029, it's whether we're certain that one doesn't exist. And things have progressed far enough that non-physicists, or even physicists working in different subfields, can no longer reliably tell what's going on. ...

Anthropic's Project Glasswing will bring together a group of vendors to help define how AI resources will be protected from cyber threats. Cisco is joining Anthropic's Project Glasswing initiative, which will offer Anthropic's unreleased Claude Mythos Preview software to a coalition of vendors to help define how AI resources will be protected from cyber threats. Project Glasswing brings together Amazon Web Services, Anthropic, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan Chase, the Linux Foundation, Microsoft, Nvidia, and Palo Alto Networks. Anthropic wants the group to build a coordinated technology answer for AI security threats through tasks such as vulnerability detection, black-box testing of binaries, securing endpoints, and system penetration testing. According to Anthropic, Project Glasswing partners will get access to its as-yet-unreleased Claude Mythos Preview to find and fix vulnerabilities or weaknesses in their foundational systems -- systems that represent a very large portion of the world's shared cyberattack surface, the vendor wrote in its announcement. Anthropic calls Claude Mythos Preview a next-generation AI frontier model featuring advanced reasoning and code development.

Right now, the team-up looks to be a significant win for Intel and a smart move by Musk to accelerate the Terafab's development and production timeline and increase the project's chances of being a big success. Intel stock closed out the day up more than 4%, but Tesla ended the session down roughly 1.8%. Bringing Intel on board to lead key aspects of the Terafab plant makes it significantly more likely that Musk's semiconductor factory in Austin, Texas, will produce reliably usable semiconductors for SpaceX, Tesla, and xAI and make progress on the CEO's aggressive compute output goals within a reasonable timeframe. Succeeding in the chip-foundry space is enormously difficult and also very capital-intensive. For example, the Terafab plant is expected to cost roughly $25 billion. High levels of spending also don't guarantee top-tier fabrication production yields. Building and operating highly effective fabrication plants is very difficult, and there are a multitude of different factors that can cause yield loss -- many of which can be challenging to detect and rectify. The level of complexity and operational difficulty in the advanced fabrication space is illustrated by the industry's market-share dynamics. Taiwan Semiconductor Manufacturing absolutely dominates the market for contract semiconductor fabrication, and its stranglehold on the production of advanced chip designs is even stronger. Some estimates suggest that the company commands more than 70% of the global contract chip manufacturing market -- and that its share of the market for artificial intelligence (AI) chips and other advanced semiconductors exceeds 90%. By most accounts, Intel trails significantly behind TSMC when it comes to advanced chip-fabrication capabilities despite making massive investments in its foundry capabilities. 
Intel's chip foundry segment posted an operating loss of roughly $7 billion last year alone, but the semiconductor specialist is continuing to bet big that it can make gains in the market with its next-generation processes. Notably, most of the revenue from Intel's foundry business actually comes from the manufacturing of its own chip designs -- and relatively low demand from third-party customers has long been a source of frustration for shareholders. Musk's move to partner with Intel looks like a notable vote of confidence and win for the semiconductor specialist. It's also part of a broad-based and very well-funded initiative to increase U.S. domestic chip manufacturing capabilities and shift reliance away from TSMC's Taiwan-based fabs amid potential geopolitical threats from China.

Revenue growth accelerates with $6.8M weekly fees as trading demand remains stable after fee expansion. Polymarket is rapidly tightening its grip on the prediction market sector. On-chain data shows a sharp rise in fee generation following a recent pricing overhaul. Also, trading activity has remained strong despite broader monetization across categories. These trends point to growing user engagement even as scrutiny increases globally. Crypto-based prediction market Polymarket recorded $6.8 million in fees during its first full week under a new fee structure. At that pace, the platform is on track for an annual run rate of about $355 million. Daily fees are holding close to $1 million, reflecting sustained trading demand. Moreover, market share data shows a clear imbalance. Polymarket accounted for 96.8% of total on-chain prediction market fees during the same period. Total weekly fees across the sector exceeded $7 million for the first time, with most of that coming from Polymarket. Recent growth follows a shift in its fee model. The platform now charges taker fees across categories such as finance, politics, economics, culture, weather, and technology. Crypto and sports markets were already monetized, while geopolitical and world events remain fee-free for now. Trading volumes have not shown a meaningful decline since the change. High taker activity suggests users are absorbing the additional costs without reducing participation. That dynamic signals strong product-market fit, at least in the short term. According to the outlined plans for a platform-wide upgrade, a new collateral token, Polymarket USD, will replace bridged USDC.e. The token will be backed 1:1 by USDC, issued by Circle. Current infrastructure relies on assets bridged from Ethereum. The upgrade also includes a rebuilt trading engine and revised smart contracts. Polymarket described the rollout as a full exchange upgrade aimed at improving execution and system design. 
Institutional interest is rising alongside revenue growth. Intercontinental Exchange, parent of the New York Stock Exchange, recently committed $600 million to the prediction platform. At the same time, regulatory pressure is building across the US and Europe. Polymarket's ability to sustain volume while increasing fees will likely remain a key focus for both investors and regulators.
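The run-rate and market-share figures in the report annualize directly from the weekly numbers; as a quick sanity check (the 96.8% share is taken from the article, and everything else follows from it):

```python
weekly_fees = 6.8e6                       # Polymarket's first full week under the new fees
annual_run_rate = weekly_fees * 52        # simple 52-week annualization
sector_total = weekly_fees / 0.968        # implied sector-wide weekly fees at a 96.8% share
print(f"run rate ≈ ${annual_run_rate / 1e6:.0f}M/yr")       # ≈ $354M, matching "~$355 million"
print(f"sector weekly total ≈ ${sector_total / 1e6:.2f}M")  # just over the reported $7 million
```

The small gap between $354M and the reported ~$355M is rounding in the weekly figure; the implied sector total is consistent with the article's "exceeded $7 million for the first time."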

Anthropic asserted Tuesday that it's created a new era for cybersecurity after developing an artificial intelligence model too dangerous to release to the public. The AI mainstay, also embroiled in a fight with the U.S. federal government over its model deployment for autonomous weapons and surveillance, said its unreleased Claude Mythos Preview model has already found thousands of high-severity vulnerabilities, "including some in every major operating system and web browser." "Given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely. The fallout - for economies, public safety and national security - could be severe," the company wrote. A consortium of more than 40 technology companies, including Microsoft, the Linux Foundation, Google and Cisco, will have access to the frontier model and $100 million in usage credits to find and plug holes. Anthropic dubbed the coalition "Project Glasswing." "While the capabilities now available to defenders are remarkable, they soon will also become available to adversaries, defining the critical inflection point we face today," wrote Cisco CSO Anthony Grieco. Mythos Preview isn't just a high-end fuzzer, Anthropic executives wrote. They said it found a 27-year-old vulnerability in OpenBSD, a security-focused operating system used in network appliances and security functions. "The vulnerability allowed an attacker to remotely crash any machine running the operating system just by connecting to it," Anthropic wrote. The frontier model also found and chained vulnerabilities in the Linux kernel allowing an attacker to gain superuser privileges.
The model was able to defeat kernel address space layout randomization, the security technique of randomizing the location of kernel functions in memory. The attack combined a flaw giving the model read access to kernel memory with a vulnerability allowing it to write. "We have nearly a dozen examples of Mythos Preview successfully chaining together two, three and sometimes four vulnerabilities in order to construct a functional exploit on the Linux kernel," the company wrote. In a blog post, Anthropic researchers said the model is able to identify a wide range of vulnerabilities and understand the logic behind the code. "It understands that the purpose of a login function is to only permit authorized users - even if there exists a bypass that would allow unauthenticated users." Anthropic researchers predict that attackers and defenders will eventually find an AI equilibrium in which defenders benefit the most from powerful new models. But getting there will involve a tumultuous transitional period, one that would be worse if attackers got hold of the model before defenders are ready, they said. They promised new safeguards to detect and block malicious outputs, along with a set of forthcoming recommendations on long-standing cybersecurity issues such as vulnerability disclosure, patching, vulnerability prioritization and secure-by-design practices.
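The KASLR bypass described above works because offsets inside the kernel image are fixed and public; leaking a single kernel pointer therefore reveals the entire randomized layout. A minimal toy model of that idea (all addresses and offsets below are made up for illustration, not real kernel values):

```python
import random

# Offset of some well-known symbol inside the kernel image. These offsets
# are the same on every boot and can be read from public kernel builds.
SYMBOL_OFFSET = 0x1A2B30   # hypothetical

def boot_kernel():
    """KASLR: choose a fresh, aligned base address for the kernel each boot."""
    return random.randrange(0xFFFF_FFFF_8000_0000, 0xFFFF_FFFF_C000_0000, 0x20_0000)

def info_leak(kernel_base):
    """Stands in for a read-primitive bug that discloses one kernel pointer."""
    return kernel_base + SYMBOL_OFFSET

kernel_base = boot_kernel()
leaked = info_leak(kernel_base)
recovered_base = leaked - SYMBOL_OFFSET   # one leak defeats the randomization
assert recovered_base == kernel_base
```

With the base recovered, a separate write primitive can then be aimed at known kernel structures, which is exactly the read-plus-write chaining the article describes.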

We see a lot of doom and gloom about the potential negative impacts of artificial intelligence, particularly centered on how it could create new problems in cybersecurity. Anthropic has announced a new initiative called Project Glasswing to help address those concerns by working "to secure the world's most critical software" against AI-powered attacks. The endeavor includes Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA and Palo Alto Networks as partners. Participants will use Claude Mythos Preview, an unreleased, general-purpose model from Anthropic, to enhance their own security projects. Anthropic claims that this model has found thousands of exploitable vulnerabilities, "including some in every major operating system and web browser." The company said it wants to begin using its tools defensively to prevent malicious use of AI that could cause severe consequences for economies and security. Anthropic has become one of the notable AI companies raising concerns about ethics in the field. Earlier this year, the business refused to remove guardrails on its services for use by the Pentagon, which prompted the Department of Defense to sanction Anthropic with a "supply chain risk" designation in retaliation. Launching Project Glasswing could be a helpful start toward improved cybersecurity in the AI era, but some damage has already been done. Its own Claude was reportedly used by a hacker against multiple government agencies in Mexico in February.
Anthropic on Tuesday announced Project Glasswing, a sweeping cybersecurity initiative that pairs an unreleased frontier AI model -- Claude Mythos Preview -- with a coalition of twelve major technology and finance companies in an effort to find and patch software vulnerabilities across the world's most critical infrastructure before adversaries can exploit them.The launch partners include Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, Nvidia, and Palo Alto Networks. Anthropic says it has also extended access to more than 40 additional organizations that build or maintain critical software, and is committing up to $100 million in usage credits for Claude Mythos Preview across the effort, along with $4 million in direct donations to open-source security organizations.The announcement arrives at a moment of extraordinary momentum -- and extraordinary scrutiny -- for the San Francisco-based AI startup. Anthropic disclosed on Sunday that its annualized revenue run rate has surpassed $30 billion, up from approximately $9 billion at the end of 2025, and the number of business customers each spending over $1 million annually now exceeds 1,000, doubling in less than two months. The company simultaneously announced a multi-gigawatt compute deal with Google and Broadcom. On the same day, Bloomberg reported that Anthropic had poached a senior Microsoft executive, Eric Boyd, to lead its infrastructure expansion.But Glasswing is something categorically different from a revenue milestone or a compute deal. 
It's Anthropic's most ambitious attempt to translate frontier AI capabilities -- capabilities the company itself describes as dangerous -- into a defensive advantage before those same capabilities proliferate to hostile actors.

Why Anthropic built a model it considers too dangerous to release publicly

At the center of Project Glasswing sits Claude Mythos Preview, a general-purpose frontier model that Anthropic says has already identified thousands of high-severity zero-day vulnerabilities -- meaning flaws previously unknown to software developers -- in every major operating system and every major web browser, along with a range of other critical software. The company is not making the model generally available. "We do not plan to make Claude Mythos Preview generally available due to its cybersecurity capabilities," Newton Cheng, Frontier Red Team Cyber Lead at Anthropic, told VentureBeat in an exclusive interview. "However, given the rate of AI progress, it will not be long before such capabili ...

Anthropic on Tuesday released a preview of its new frontier model, Mythos, which it says will be used by a small coterie of partner organizations for cybersecurity work. In a previously leaked memo, the AI startup called the model one of its "most powerful" yet. The model's limited debut is part of a new security initiative, dubbed Project Glasswing, in which 12 partner organizations will deploy the model for the purposes of "defensive security work" and to secure critical software, Anthropic said. While it was not specifically trained for cybersecurity work, the model will be used to scan both first-party and open-source software systems for code vulnerabilities, the company said. Anthropic claims that, over the past few weeks, Mythos identified "thousands of zero-day vulnerabilities, many of them critical." Many of the vulnerabilities are one to two decades old, the company added. Mythos is a general-purpose model in Anthropic's Claude AI family that the company claims has strong agentic coding and reasoning skills. Anthropic's frontier models are considered its most sophisticated and highest-performance models, designed for more complex tasks, including agent-building and coding. The partner organizations previewing Mythos as part of Project Glasswing include Amazon, Apple, Broadcom, Cisco, CrowdStrike, the Linux Foundation, Microsoft, and Palo Alto Networks. As part of the initiative, these partners will ultimately share what they've learned from using the model so that the rest of the tech industry can benefit. The preview will not be made generally available, Anthropic said, though 40 organizations beyond the core partners will gain access to it.
Anthropic also claims that it has engaged in "ongoing discussions" with federal officials about the use of Mythos, although one would have to imagine that those discussions are complicated by the fact that Anthropic and the Trump administration are currently locked in a legal battle after the Pentagon labeled the AI lab a supply-chain risk over Anthropic's refusal to allow auton ...


Contractors filed five lawsuits against Mercor, the AI training firm valued at $10 billion, in the past week, accusing the company of violating data privacy and consumer protection laws. The suits, filed in federal courts in California and Texas, allege Mercor's negligence could have resulted in the disclosure of Social Security numbers, addresses, and other information, including recordings of interviews, to bad actors. The lawsuits seek unspecified monetary damages. Mercor said last week that it was impacted by a breach of the open-source project LiteLLM, which was created by Berrie AI, but did not describe the stolen data. TechCrunch reported that sample materials posted by the hackers included Slack data and videos of conversations between Mercor contractors and an AI system. It's somewhat common for companies to be sued in the wake of a data breach. The biggest cases have settled for between $1 and $5 per class member, according to a survey of data-breach settlements from 2018 to 2021 by Cornerstone Research. Victims with documented financial losses are sometimes paid more. Some settlements include non-monetary relief, like free credit monitoring. A lawsuit filed by NaTivia Esson and her lawyers at Strauss Borrelli says she worked for Mercor from March 2025 to March 2026 and filled out a W-9 form with her personal identifying information each time she got work. She "trusted the company would use reasonable measures to protect it," her complaint read. "Because of the data breach, plaintiff anticipates spending considerable amounts of time and money to try and mitigate her injuries." Mercor declined to comment. Mercor has used gig workers to train AI for clients including Meta, Facebook's parent company. Meta paused its work with Mercor after the data breach, Business Insider previously reported.
One suit against Mercor also names Berrie AI and Delve Technologies, an "automated compliance" firm that had previously certified Berrie's compliance with certain industry standards, as defendants. The complaint in that case said a "whistleblower" exposed misconduct at Delve. Last month, Delve denied claims in an anonymously authored Substack post that accused it of facilitating "fake compliance" and arranging sham security audits. Other legal challenges for Mercor might be on the horizon. An apparent lead-generation website, MercorClaims.com, went live on or around April 1, although it does not appear to be sending users to any particular law firm. Berrie AI and Delve didn't immediately respond to requests for comment.
GameSpot may receive revenue from affiliate and advertising partnerships for sharing this content and from purchases through links. Years ago, Epic frequently filled the Fortnite island with collectibles that players could find for XP, but it hasn't done much of that in the past few chapters. The collectibles have returned in a big way in Chapter 7 Season 2, however, with dozens of Chaos Cubes scattered all over the island. There are currently 45 cubes available to discover in Fortnite's main Battle Royale mode, with 25 more planned to pop up at various points over the course of the season. This is probably related to the battle between the Ice King and the Foundation, and there's a good chance the Last Reality, with its alien invaders and chrome monsters and army of cubes, is involved somehow. Collecting these cubes isn't just a for-fun side activity: collecting a cube awards 4,000 XP, collecting all five cubes in a region awards 40,000 XP, and collecting all 70 on the entire map (which is not currently possible) will award 80,000 XP. That all adds up to 14.5 account levels in total, which makes these cubes worth pursuing as you go about your business. Fortunately, they aren't too hard to find. They aren't particularly well hidden, and they give off a sound similar to that of a treasure chest when you're near one, along with an exclamation mark and a visualized audio ping (for those who have that setting turned on) to show you where it is. So it's not that time-consuming of a hunt. And while some of them might be tough to reach without movement items, you won't have to do any complicated platforming sequences to collect the cubes. And we'll make it even easier for you by showing you where on the map each cube is, which should make it easy to run around the island collecting them. Scroll on to see where you can find every cube currently sitting around the Fortnite Battle Royale island.
