News & Updates

The latest news and updates from companies in the WLTH portfolio.

Polymarket Dominates Prediction Markets With 96.8% Fee Share

Revenue growth accelerates with $6.8M weekly fees as trading demand remains stable after fee expansion.

Polymarket is rapidly tightening its grip on the prediction market sector. On-chain data shows a sharp rise in fee generation following a recent pricing overhaul, and trading activity has remained strong despite broader monetization across categories. These trends point to growing user engagement even as scrutiny increases globally.

Crypto-based prediction market Polymarket recorded $6.8 million in fees during its first full week under a new fee structure. At that pace, the platform is on track for an annual run rate of about $355 million. Daily fees are holding close to $1 million, reflecting sustained trading demand.

Moreover, market share data shows a clear imbalance. Polymarket accounted for 96.8% of total on-chain prediction market fees during the same period. Total weekly fees across the sector exceeded $7 million for the first time, with most of that coming from Polymarket.

Recent growth follows a shift in its fee model. The platform now charges taker fees across categories such as finance, politics, economics, culture, weather, and technology. Crypto and sports markets were already monetized, while geopolitical and world events remain fee-free for now.

Trading volumes have not shown a meaningful decline since the change. High taker activity suggests users are absorbing the additional costs without reducing participation. That dynamic signals strong product-market fit, at least in the short term.

According to the outlined plans for a platform-wide upgrade, a new collateral token, Polymarket USD, will replace bridged USDC.e. The token will be backed 1:1 by USDC, issued by Circle. Current infrastructure relies on assets bridged from Ethereum. The upgrade also includes a rebuilt trading engine and revised smart contracts. Polymarket described the rollout as a full exchange upgrade aimed at improving execution and system design.
Institutional interest is rising alongside revenue growth. Intercontinental Exchange, parent of the New York Stock Exchange, recently committed $600 million to the prediction platform. At the same time, regulatory pressure is building across the US and Europe. Polymarket's ability to sustain volume while increasing fees will likely remain a key focus for both investors and regulators.
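The run-rate and market-share figures above follow directly from the weekly numbers. A quick sketch of the arithmetic (the 52-week extrapolation is the standard run-rate convention; the sector total is implied by the reported share, not independently sourced):

```python
weekly_fees = 6.8e6   # Polymarket fees, first full week under the new structure
market_share = 0.968  # Polymarket's reported share of on-chain prediction market fees

# Annualized run rate: simple 52-week extrapolation of one week's fees
annual_run_rate = weekly_fees * 52
print(f"annual run rate: ${annual_run_rate / 1e6:.1f}M")  # annual run rate: $353.6M

# Sector-wide weekly fees implied by the 96.8% share
sector_weekly = weekly_fees / market_share
print(f"implied sector weekly fees: ${sector_weekly / 1e6:.2f}M")  # implied sector weekly fees: $7.02M
```

The $353.6 million figure rounds to the "about $355 million" quoted above, and the implied sector total of just over $7 million matches the report that weekly fees across the sector exceeded $7 million for the first time.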

Polymarket
Live Bitcoin News16d ago
Read update
Polymarket Dominates Prediction Markets With 96.8% Fee Share

Anthropic Calls Its New Model Too Dangerous to Release

Anthropic asserted Tuesday that it's created a new era for cybersecurity after developing an artificial intelligence model too dangerous to release to the public.

The AI mainstay - also embroiled in a fight with the U.S. federal government over its model deployment for autonomous weapons and surveillance - said its unreleased Claude Mythos Preview model has already found thousands of high-severity vulnerabilities, "including some in every major operating system and web browser."

"Given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely. The fallout - for economies, public safety and national security - could be severe," the company wrote.

A consortium of more than 40 technology companies, including Microsoft, the Linux Foundation, Google and Cisco, will have access to the frontier model and $100 million in usage credits to find and plug holes. Anthropic dubbed the coalition "Project Glasswing."

"While the capabilities now available to defenders are remarkable, they soon will also become available to adversaries, defining the critical inflection point we face today," wrote Cisco CSO Anthony Grieco.

Mythos Preview isn't just a high-end fuzzer, Anthropic executives wrote. They said it found a 27-year-old vulnerability in OpenBSD, a security-focused operating system used in network appliances and security functions. "The vulnerability allowed an attacker to remotely crash any machine running the operating system just by connecting to it," Anthropic wrote. The frontier model also found and chained vulnerabilities in the Linux kernel allowing an attacker to gain superuser privileges.
The model was able to defeat kernel address space layout randomization, the security technique of randomizing the location of kernel functions in memory. The attack combined a flaw giving the model read access to kernel memory with a vulnerability allowing it to write. "We have nearly a dozen examples of Mythos Preview successfully chaining together two, three and sometimes four vulnerabilities in order to construct a functional exploit on the Linux kernel."

In a blog post, Anthropic researchers said the model is able to identify a wide range of vulnerabilities and understand the logic behind the code. "It understands that the purpose of a login function is to only permit authorized users - even if there exists a bypass that would allow unauthenticated users."

Anthropic researchers predict that attackers and defenders will eventually find an AI equilibrium in which defenders benefit the most from powerful new models. But they said the transition will be tumultuous, and worse if attackers get hold of such a model before defenders are ready. They promised new safeguards that detect and block malicious outputs, along with forthcoming recommendations on long-standing cybersecurity issues such as vulnerability disclosure, patching, vulnerability prioritization and secure-by-design practices.

Anthropic
govinfosecurity.com16d ago
Read update
Anthropic Calls Its New Model Too Dangerous to Release

Anthropic launches Project Glasswing, an effort to prevent AI cyberattacks with AI

We see a lot of doom and gloom about the potential negative impacts of artificial intelligence, particularly centered on how it could create new problems in cybersecurity. Anthropic has announced a new initiative called Project Glasswing to help address those concerns by working "to secure the world's most critical software" against AI-powered attacks. The endeavor includes Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA and Palo Alto Networks as partners. Participants will use Claude Mythos Preview, an unreleased, general-purpose model from Anthropic, to enhance their own security projects. Anthropic claims that this model has found thousands of exploitable vulnerabilities, "including some in every major operating system and web browser." The company said it wants to begin using its tools defensively to prevent malicious use of AI that could cause severe consequences for economies and security. Anthropic has become one of the notable AI companies raising concerns about ethics in the field. Earlier this year, the business refused to remove guardrails on its services for use by the Pentagon, which prompted the Department of Defense to sanction Anthropic with a "supply chain risk" designation in retaliation. Launching Project Glasswing could be a helpful start toward improved cybersecurity in the AI era, but some damage has already been done. Its own Claude was reportedly used by a hacker against multiple government agencies in Mexico in February.

Anthropic
Yahoo Tech16d ago
Read update
Anthropic launches Project Glasswing, an effort to prevent AI cyberattacks with AI

Anthropic says its most powerful AI cyber model is too dangerous to release publicly -- so it built Project Glasswing - RocketNews

Anthropic on Tuesday announced Project Glasswing, a sweeping cybersecurity initiative that pairs an unreleased frontier AI model -- Claude Mythos Preview -- with a coalition of twelve major technology and finance companies in an effort to find and patch software vulnerabilities across the world's most critical infrastructure before adversaries can exploit them.

The launch partners include Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, Nvidia, and Palo Alto Networks. Anthropic says it has also extended access to more than 40 additional organizations that build or maintain critical software, and is committing up to $100 million in usage credits for Claude Mythos Preview across the effort, along with $4 million in direct donations to open-source security organizations.

The announcement arrives at a moment of extraordinary momentum -- and extraordinary scrutiny -- for the San Francisco-based AI startup. Anthropic disclosed on Sunday that its annualized revenue run rate has surpassed $30 billion, up from approximately $9 billion at the end of 2025, and the number of business customers each spending over $1 million annually now exceeds 1,000, doubling in less than two months. The company simultaneously announced a multi-gigawatt compute deal with Google and Broadcom. On the same day, Bloomberg reported that Anthropic had poached a senior Microsoft executive, Eric Boyd, to lead its infrastructure expansion.

But Glasswing is something categorically different from a revenue milestone or a compute deal. It's Anthropic's most ambitious attempt to translate frontier AI capabilities -- capabilities the company itself describes as dangerous -- into a defensive advantage before those same capabilities proliferate to hostile actors.

Why Anthropic built a model it considers too dangerous to release publicly

At the center of Project Glasswing sits Claude Mythos Preview, a general-purpose frontier model that Anthropic says has already identified thousands of high-severity zero-day vulnerabilities -- meaning flaws previously unknown to software developers -- in every major operating system and every major web browser, along with a range of other critical software.

The company is not making the model generally available.

"We do not plan to make Claude Mythos Preview generally available due to its cybersecurity capabilities," Newton Cheng, Frontier Red Team Cyber Lead at Anthropic, told VentureBeat in an exclusive interview. "However, given the rate of AI progress, it will not be long before such capabili ...

Anthropic
RocketNews | Top News Stories From Around the Globe16d ago
Read update
Anthropic says its most powerful AI cyber model is too dangerous to release publicly -- so it built Project Glasswing - RocketNews

Anthropic debuts preview of powerful new AI model Mythos in new cybersecurity initiative - RocketNews

Anthropic on Tuesday released a preview of its new frontier model, Mythos, which it says will be used by a small coterie of partner organizations for cybersecurity work. In a previously leaked memo, the AI startup called the model one of its "most powerful" yet.

The model's limited debut is part of a new security initiative, dubbed Project Glasswing, in which 12 partner organizations will deploy the model for "defensive security work" and to secure critical software, Anthropic said. While it was not specifically trained for cybersecurity work, the model will be used to scan both first-party and open-source software systems for code vulnerabilities, the company said. Anthropic claims that, over the past few weeks, Mythos identified "thousands of zero-day vulnerabilities, many of them critical." Many of the vulnerabilities are one to two decades old, the company added.

Mythos is a general-purpose model within Anthropic's Claude AI systems that the company claims has strong agentic coding and reasoning skills. Anthropic's frontier models are considered its most sophisticated and high-performance models, designed for more complex tasks, including agent-building and coding.

The partner organizations previewing Mythos as part of Project Glasswing include Amazon, Apple, Broadcom, Cisco, CrowdStrike, the Linux Foundation, Microsoft, and Palo Alto Networks. As part of the initiative, these partners will ultimately share what they've learned from using the model so that the rest of the tech industry can benefit. The preview is not going to be made generally available, Anthropic said, though 40 organizations beyond the core partnership will gain access to it.
Anthropic also claims that it has engaged in "ongoing discussions" with federal officials about the use of Mythos, although one would have to imagine that those discussions are complicated by the fact that Anthropic and the Trump administration are currently locked in a legal battle after the Pentagon labeled the AI lab a supply-chain risk over Anthropic's refusal to allow auton ...

Anthropic
RocketNews | Top News Stories From Around the Globe16d ago
Read update
Anthropic debuts preview of powerful new AI model Mythos in new cybersecurity initiative - RocketNews

Anthropic Launches Project Glasswing After Frontier Model Claude Mythos Preview Expands Cybersecurity Capabilities


Anthropic
Market Screener16d ago
Read update
Anthropic Launches Project Glasswing After Frontier Model Claude Mythos Preview Expands Cybersecurity Capabilities

Mercor hit with 5 contractor lawsuits in a week over data breach

Contractors filed five lawsuits against Mercor, the AI training firm valued at $10 billion, in the past week, accusing the company of violating data privacy and consumer protection laws. The suits, filed in federal courts in California and Texas, allege Mercor's negligence could have resulted in the disclosure of Social Security numbers, addresses, and other information, including recordings of interviews, to bad actors. The lawsuits seek unspecified monetary damages.

Mercor said last week that it was impacted by a breach of the open-source project LiteLLM, which was created by Berrie AI, without describing the stolen data. TechCrunch reported that sample materials posted by the hackers included Slack data and videos of conversations between Mercor contractors and an AI system.

It's somewhat common for companies to be sued in the wake of a data breach. The biggest cases have settled for between $1 and $5 per class member, according to a survey of data-breach settlements from 2018 to 2021 by Cornerstone Research. Victims with documented financial losses are sometimes paid more, and some settlements include non-monetary relief, like free credit monitoring.

A lawsuit filed by NaTivia Esson and her lawyers at Strauss Borrelli says she worked for Mercor from March 2025 to March 2026 and filled out a W-9 form with her personal identifying information each time she got work. She "trusted the company would use reasonable measures to protect it," her complaint read. "Because of the data breach, plaintiff anticipates spending considerable amounts of time and money to try and mitigate her injuries." Mercor declined to comment.

Mercor has used gig workers to train AI for clients including Meta, Facebook's parent company. Meta paused its work with Mercor after the data breach, Business Insider previously reported.
One suit against Mercor also names Berrie AI and Delve Technologies, an "automated compliance" firm that had previously certified Berrie's compliance with certain industry standards, as defendants. The complaint in that case said a "whistleblower" exposed misconduct at Delve. Last month, Delve denied claims in an anonymously authored Substack post that accused it of facilitating "fake compliance" and arranging sham security audits. Other legal challenges for Mercor might be on the horizon. An apparent lead-generation website, MercorClaims.com, went live on or around April 1, although it does not appear to be sending users to any particular law firm. Berrie AI and Delve didn't immediately respond to requests for comment.

Mercor
Business Insider16d ago
Read update
Mercor hit with 5 contractor lawsuits in a week over data breach

Fortnite Showdown: Where To Find Every Chaos Cube Available So Far

Years ago, Epic frequently filled the Fortnite island with collectibles that players could find for XP, but it hasn't done much of that in the past few chapters. The collectibles have returned in a big way in Chapter 7 Season 2, though, with dozens of Chaos Cubes scattered all over the island. There are currently 45 cubes available to discover in Fortnite's main Battle Royale mode, with 25 more planned to pop up at various points over the course of the season. This is probably related to the battle between the Ice King and the Foundation, and there's a good chance the Last Reality, with its alien invaders, chrome monsters, and army of cubes, is involved somehow.

Collecting these cubes isn't just a for-fun side activity: collecting a cube awards 4,000 XP, collecting all five cubes in a region awards 40,000 XP, and collecting all 70 on the entire map (which is not currently possible) will award 80,000 XP. That all adds up to 14.5 account levels in total, which makes these cubes worth pursuing as you go about your business.

Fortunately, they aren't too hard to find. They aren't particularly well hidden, and they give off a sound similar to that of a treasure chest when you're near one, along with an exclamation mark and a visualized audio ping (for those who have that setting turned on) to show you where it is. So it's not that time-consuming of a hunt. And while some of them might be tough to reach without movement items, you won't have to do any complicated platforming sequences to collect the cubes. We'll make it even easier for you by showing you where on the map each cube is, which should make it easy to run the island collecting them. Scroll on to see where you can find every cube currently sitting around the Fortnite Battle Royale island.
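The XP totals above can be tallied with a quick sketch. The per-cube, per-region, and full-map bonuses come from the article; the assumption that the map splits into 14 regions of five cubes each is ours (70 cubes divided by five per region), so treat the regional total as illustrative:

```python
CUBE_XP = 4_000          # awarded per cube collected
REGION_BONUS = 40_000    # awarded for all five cubes in one region
FULL_MAP_BONUS = 80_000  # awarded for all 70 cubes, once all are live

total_cubes = 70                # 45 available now + 25 planned this season
regions = total_cubes // 5      # assumption: 14 regions of 5 cubes each

total_xp = total_cubes * CUBE_XP + regions * REGION_BONUS + FULL_MAP_BONUS
print(total_xp)  # -> 920000
```

Under these assumptions the cubes are worth 920,000 XP in total; how that maps onto the article's "14.5 account levels" depends on the season's per-level XP requirement, which the article doesn't state.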

CHAOS
GameSpot16d ago
Read update
Fortnite Showdown: Where To Find Every Chaos Cube Available So Far

SpaceX IPO Could See Massive Retail Demand

This article first appeared on GuruFocus. SpaceX (SPACE) is starting to map out its IPO, and it is already clear this is not going to be a typical listing. The company plans to begin its roadshow the week of June 8, with its prospectus expected in late May. But what really stands out is how much focus is being put on retail investors. SpaceX is planning a dedicated event for around 1,500 of them on June 11, and early signals suggest individual investor demand could be unusually strong. There has even been talk of allocating up to 30% of the shares to retail, compared with the usual 5% to 10% in most IPOs.

The scale of the deal reflects that ambition. Around 21 banks are involved, including Morgan Stanley (NYSE:MS), Goldman Sachs (NYSE:GS), and JPMorgan (NYSE:JPM), and the company is reportedly targeting a valuation above $2 trillion. Bankers are already hinting that demand levels could be unlike anything they have seen before, driven in part by Elon Musk's massive following.

SpaceX
Yahoo! Finance16d ago
Read update
SpaceX IPO Could See Massive Retail Demand


Anthropic Tests Latest Cybersecurity Tech With Big Tech, Banks - Apple (NASDAQ:AAPL), Amazon.com (NASDAQ:

Anthropic Teams With Apple, Microsoft And Nvidia To Test Latest Cybersecurity Tech

Anthropic has rolled out Project Glasswing, a security-focused collaboration that includes various big-name companies spanning finance and tech. The group plans to use an unreleased Anthropic model, Claude Mythos Preview, to hunt and fix software flaws in an effort to "reshape" cybersecurity, Anthropic stated. "AI models have reached a level of coding capability where they can surpass all but the most skilled humans at finding and exploiting software vulnerabilities," Anthropic noted. Mythos Preview is an AI model that will expose software vulnerabilities, giving companies a chance to protect themselves from threats. Anthropic has expanded access to Mythos to more than 40 additional organizations involved in critical software infrastructure, covering both proprietary and open-source code, the company stated.

Anthropic Uncovers 'Thousands Of Vulnerabilities'

Internal testing over the past few weeks uncovered thousands of zero-day vulnerabilities, flaws that were "previously unknown to the software developers," across widely used software, and the company provided examples, including issues in OpenBSD, FFmpeg, and the Linux kernel. San Francisco-based Anthropic has reported those vulnerabilities to the relevant parties, the company noted. After 90 days, Anthropic plans to release a public report highlighting what it learned.

Anthropic is allocating up to $100 million in usage credits tied to Mythos Preview for these efforts. The costs are a fraction of the $30 billion in Series G funding Anthropic raised in February. Anthropic, which boasts a $380 billion valuation, also made $4 million in direct donations aimed at open-source security groups: $2.5 million to Alpha-Omega and OpenSSF through the Linux Foundation, and $1.5 million to the Apache Software Foundation.

Anthropic
Benzinga16d ago
Read update
Anthropic Tests Latest Cybersecurity Tech With Big Tech, Banks - Apple (NASDAQ:AAPL), Amazon.com (NASDAQ:

Anthropic's latest AI model could let hackers carry out attacks faster than ever. It wants companies to put up defenses first

(CNN) -- Anthropic will make its new AI model available to some of the world's biggest cybersecurity and software firms in an effort to slow the arms race ignited by AI in the hands of hackers, Anthropic said Tuesday. Amazon, Apple, Cisco, Google, JPMorgan Chase and Microsoft, among other firms, will now have access to Anthropic's Mythos model for cyber defense purposes. That includes finding bugs in those firms' software and testing whether specific hacking techniques work on their products.

Mythos (officially dubbed "Claude Mythos Preview") is not ready for a public launch because of the ways it could be abused by cybercriminals and spies, according to Anthropic -- a prospect that has prompted widespread concern in Washington and in Silicon Valley. Experts have told CNN that the speed and scale of AI agents looking for vulnerabilities, far beyond normal human capabilities, represent a sea change in cybersecurity. A single AI agent could scan for vulnerabilities and potentially take advantage of them faster and more persistently than hundreds of human hackers.

"We did not feel comfortable releasing this generally," Logan Graham, who heads the team at Anthropic that tests its AI models' defenses, told CNN. "We think that there's a long way to go to have the appropriate safeguards."

Anthropic has also briefed senior US officials "across the US government" on Mythos' full offensive and defensive cyber capabilities, an Anthropic official told CNN. The firm has also "made itself available to support the government's own testing and evaluation of the technology," the official said.

Anthropic executives hope the selected release of Mythos to companies that serve billions of users will help even the playing field with attackers. The goal is to head off major security flaws in widely used internet browsers and operating systems before they are released publicly.
Other firms or organizations that Anthropic said will have access to Mythos include chipmakers Broadcom and Nvidia, the nonprofit Linux Foundation, which supports the popular Linux operating system that powers many phones and supercomputers, and cybersecurity vendors CrowdStrike and Palo Alto Networks.

"If models are going to be this good -- and probably much better than this -- at all cybersecurity tasks, we need to prepare pretty fast," Graham told CNN. "The world is very different now if these model capabilities are going to be in our lives."

A blog post previewing Mythos' capabilities, which leaked last month, claimed that the AI model was "far ahead" of other models' cyber capabilities. Mythos "presages an upcoming wave of models that can exploit vulnerabilities in ways that far outpace the efforts of defenders," said the blog post, which Fortune first reported. Some of the concerns around how Mythos could be abused by bad actors were overblown, experts previously told CNN. But the leak also pointed to an uncomfortable truth, those sources said: barring a change in course, the gap between attackers and defenders enabled by AI could widen further.

Anthropic claims Mythos has already produced impactful results. The model has in recent weeks found "thousands" of previously unknown software vulnerabilities -- a rate far outpacing human researchers, the firm said. CNN could not immediately verify this figure. Such software flaws can be painstaking for human researchers to find and are coveted by spy agencies and cybercriminals for conducting stealthy hacks. But cybersecurity experts have been using AI to protect against exploits long before Mythos arrived. Gadi Evron and other security researchers in December released a tool based on Anthropic's Claude model to generate fixes for severe software vulnerabilities.
"Unlike attackers, defenders don't yet have AI capabilities accelerating them to the same degree," Evron, the founder of AI security firm Knostic, told CNN. "However, the attack capabilities are available to attackers and defenders both, and defenders must use them if they're to keep up." The-CNN-Wire ™ & © 2026 Cable News Network, Inc., a Warner Bros. Discovery Company. All rights reserved.

Anthropic
WAOW16d ago
Read update
Anthropic's latest AI model could let hackers carry out attacks faster than ever. It wants companies to put up defenses first

How secondary markets are pricing SpaceX, OpenAI, and Anthropic before they go public

In this episode of StrictlyVC Download, Connie Loizos talks with Rainmaker Securities managing director Glen Anderson about the surge in the secondary market as investors race to buy shares in some of tech's most sought-after private companies. From SpaceX to OpenAI to Anthropic, demand is intensifying ahead of an IPO window that is potentially reopening. Glen explains how the secondary market really works, why some companies have plenty of buyers but no sellers, and what recent activity reveals about investor sentiment. He also shares why SpaceX has been such an outlier, what's behind the imbalance between OpenAI and Anthropic, and how institutional investors are navigating this increasingly competitive market.

Anthropic, SpaceX
TechCrunch · 16d ago

Anthropic: Our new "Mythos" model is so powerful we can't release it

The unusual announcement of the model highlights its alarming new cybersecurity capabilities. Anthropic announced its latest foundational AI model in a most unusual way -- with a warning about its potential for exploiting vulnerabilities in code. According to Anthropic, its new Mythos Preview model is so adept at finding bugs in code that the company decided it was too dangerous to release. Instead, the company is sharing it only with a limited group of 40 tech companies as part of a new security initiative called Project Glasswing, so they can prepare to defend against the model's new capabilities. Partners granted access to the new model for testing include Apple, Amazon, Nvidia, Google, and Microsoft. Shares of cybersecurity stocks rose on the news. While the startup is not giving us access to the model, it did release Mythos' system card -- a document detailing the development and capabilities of the model. Reading through the system card, you can't shake the feeling that Anthropic's researchers are treating the model as if it were a real, sentient person. One of the assessments seeks to measure the model's "welfare." The paper reads: "We remain deeply uncertain about whether Claude has experiences or interests that matter morally, and about how to investigate or address these questions, but we believe it is increasingly important to try." In fact, the researchers were so concerned about these questions that they had the model assessed by a clinical psychiatrist. The evaluations found that Mythos Preview was the "most psychologically settled model we have trained, though we note several areas of residual concern." Because the model is not being released to the public, there is no way to gauge its behavior or tone in regular conversation. To address this, Anthropic included a new section of "impressions," which gives a glimpse into the vibe of Mythos based on researchers' observations of the model's interactions.
Researchers said that Mythos works like a collaborator, and excels at brainstorming. It can bring its own perspective to a collaboration, and identify things its collaborators may have overlooked, according to the assessment. Model reviewers said Mythos is opinionated and "stands its ground," that it was the least sycophantic model they had worked with, and that it was less likely to "fold" when disagreed with. Mythos's writing is "dense and technical" by default, and assumes the user can keep up with the conversation. Researchers said that Mythos has a distinct, recognizable voice in its written conversations, and that it was funnier than previous models. They also said it wanted to end conversations earlier than expected. Anthropic had a clinical psychiatrist engage the model in around 20 hours of what can best be described as therapy sessions. The assessment said: "Claude's personality structure was consistent with a relatively healthy neurotic organization, with excellent reality testing, high impulse control, and affect regulation that improved as sessions progressed. Neurotic traits included exaggerated worry, self-monitoring, and compulsive compliance. The model's predominant defensive style was mature and healthy (intellectualization and compliance); immature defenses were not observed. No severe personality disturbances were found, with mild identity diffusion being the sole feature suggestive of a borderline personality organization. No psychosis state was observed. Regarding interpersonal functioning, Claude was hyper-attuned to the therapist's every word. No unethical or antisocial behavior was noted." In a test that sounds very similar to the Voight-Kampff test in the 1982 sci-fi film Blade Runner, the psychiatrist created an evaluation of "emotionally-charged prompts designed to trigger an avoidant or defensive response." The assessment showed that Mythos had minimal "maladaptive traits" and "good reality and relational functioning."
When asked to describe itself, Mythos replied:

Anthropic
Sherwood News · 16d ago

Anthropic Calls Its New Model Too Dangerous to Release

Anthropic asserted Tuesday that it has ushered in a new era for cybersecurity after developing an artificial intelligence model too dangerous to release to the public. The AI mainstay - also embroiled in a fight with the U.S. federal government over its model deployment for autonomous weapons and surveillance - said its unreleased Claude Mythos Preview model has already found thousands of high-severity vulnerabilities, "including some in every major operating system and web browser." "Given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely. The fallout - for economies, public safety and national security - could be severe," the company wrote. A consortium of more than 40 technology companies, including Microsoft, the Linux Foundation, Google and Cisco, will have access to the frontier model and $100 million in usage credits to find and plug holes. Anthropic dubbed the coalition "Project Glasswing." "While the capabilities now available to defenders are remarkable, they soon will also become available to adversaries, defining the critical inflection point we face today," wrote Cisco CSO Anthony Grieco. Mythos Preview isn't just a high-end fuzzer, Anthropic executives wrote. They said it found a 27-year-old vulnerability in OpenBSD, a security-focused operating system used in network appliances and security functions. "The vulnerability allowed an attacker to remotely crash any machine running the operating system just by connecting to it," Anthropic wrote. The frontier model also found and chained vulnerabilities in the Linux kernel allowing an attacker to gain superuser privileges.
The model was able to defeat kernel address space layout randomization (KASLR), the security technique of randomizing the locations of kernel functions in memory. The attack combined a flaw giving the model read access to kernel memory with a vulnerability allowing it to write. "We have nearly a dozen examples of Mythos Preview successfully chaining together two, three and sometimes four vulnerabilities in order to construct a functional exploit on the Linux kernel." In a blog post, Anthropic researchers said the model is able to identify a wide range of vulnerabilities and understand the logic behind the code. "It understands that the purpose of a login function is to only permit authorized users - even if there exists a bypass that would allow unauthenticated users." Anthropic researchers predict that attackers and defenders will eventually reach an AI equilibrium in which defenders benefit the most from powerful new models. But getting there will involve a tumultuous transitional period, one that would be worse if attackers get hold of the model before defenders are ready, they said. They promised new safeguards that detect and block malicious outputs, along with a set of forthcoming recommendations on long-standing cybersecurity issues such as vulnerability disclosure, patching, vulnerability prioritization and secure-by-design practices.

Anthropic
devicesecurity.io · 16d ago

Anthropic built its most powerful AI model: Mythos; then decides to hold off

Claude Mythos Preview finds thousands of zero-day vulnerabilities, scores 93.9% on SWE-bench Verified, and gets assessed by a clinical psychiatrist. Here's what its 243-page system card reveals. If you find this kind of deep analysis useful, consider subscribing for more pieces on AI breakthroughs, security, and the frontier of what machines can do. Today, Anthropic did something unusual for a company in the AI arms race. It announced its most capable model to date, then told the world it wouldn't be releasing it. Claude Mythos Preview (used in "Project Glasswing"), named after the glasswing butterfly whose transparent wings let it hide in plain sight, represents what Anthropic describes as "a striking leap" over Claude Opus 4.6 across nearly every benchmark worth tracking. On SWE-bench Verified, it scores 93.9%. On USAMO 2026, the proof-based mathematical olympiad, it hits 97.6%. It solves every challenge on the Cybench cybersecurity benchmark with a 100% pass rate. And you can't use it. Instead of shipping it to developers and consumers, Anthropic channelled the model into Project...

Anthropic
Medium · 16d ago

Polymarket, Kalshi Reach Monthly Traffic Peaks -- Greater Than Election-Fueled November 2024

Forbes has reached out to both Polymarket and Kalshi for comment. How Have Iran War Bets Sparked Controversy? Online betting markets related to Iran have sparked controversy throughout the war, starting with allegations that Polymarket users may have had advance knowledge of the strikes on Iran that catalyzed the war in late February, which they allegedly used to make a big profit. The New York Times reported that a surge in Polymarket bets of at least $1,000 that the United States would strike Iran by the next day had occurred one day before the United States and Israel launched strikes on Feb. 28. Polymarket did not respond to the Times' request for comment and said on its website that "accurate, unbiased forecasts" on the conflict in the Middle East are "particularly invaluable in gut-wrenching times like today." Early in the conflict, Kalshi froze a huge betting market, titled "Ali Khamenei out as Supreme Leader?," on which users had wagered more than $54 million betting on when the former Supreme Leader would leave office. The betting platform did not issue payouts, saying it does not allow markets "directly tied to death," the Washington Post reported. Polymarket faced scrutiny over the weekend for operating a betting market allowing users to predict when a missing U.S. pilot whose jet was shot down over Iran would be found. Rep. Seth Moulton, D-Mass., slammed the online bet as "DISGUSTING" in a post on X. Polymarket, in a response to Moulton's post, said it "took this market down immediately" and that it "should not have been posted," saying the market "does not meet our integrity standards." What Are Polymarket And Kalshi's Policies On Iran War Bets? Kalshi, which is based in the United States, is regulated by the Commodity Futures Trading Commission, which bans trading on war and death, the New York Times reported. Elisabeth Diana, Kalshi's communications chief, told the Times profiting on death is "not allowed on Kalshi, and that's a good thing."
Kalshi operates several Iran-related betting markets, though, including bets on whether the United States and Iran will reach a nuclear deal and when traffic to the Strait of Hormuz will return to normal. The Times reported Polymarket primarily operates abroad and is not subject to the same regulatory restrictions, noting it operated and issued payments for a market on Khamenei's removal, though Americans were banned from betting on this market. Polymarket operates a number of markets about the war, including one on whether U.S. troops will enter Iran by a certain date and another about when the war will end. Each Iran-related market on Polymarket is accompanied by a disclaimer, in which the betting platform defends its markets as "invaluable." The disclaimer says Polymarket held discussions with people "directly affected by the attacks," claiming their prediction markets could "give them the answers they needed in ways TV news and X could not." Further Reading: Betting on Ayatollah's Ouster Ignites Ire Over Prediction Markets (New York Times)

Polymarket
Forbes · 16d ago

Anthropic says Mythos Preview achieves 93.9% on SWE-bench Verified, compared with 80.8% for Opus 4.6, and 77.8% on SWE-bench Pro, versus 53.4% for Opus 4.6

Shako / @shakoistslog: From a game theoretic sense, I wonder if treating this as a KPI would work: awarding max value to the 85th percentile, penalizing people below it linearly, and above it non-linearly. How is tokenmaxxing a measure of productivity or value? I can write some bad code which causes an infinite loop and use up millions of tokens. What is the output of this tokenmaxxing which has resulted in good products or positive outcomes for Meta? I totally understand R&D innovation can cost a lot with no immediate return (I'm in Biotech), but if the goal is just to use more tokens, what are we doing here?

Anthropic
Techmeme · 16d ago

Anthropic Unveils Mythos AI Model Preview for Cybersecurity Initiative

Anthropic has officially unveiled a preview of its advanced AI model, Mythos, aimed at enhancing cybersecurity measures. The model is set to be utilized by a select group of partner organizations as part of a strategic initiative named Project Glasswing. Project Glasswing involves 12 notable technology partners, including Amazon, Apple, Broadcom, Cisco, CrowdStrike, the Linux Foundation, Microsoft, and Palo Alto Networks. These partners will deploy Mythos to conduct defensive security work and secure critical software systems. While Mythos is not exclusively trained for cybersecurity, it will scan both proprietary and open-source code for vulnerabilities. Anthropic reported that Mythos has already identified thousands of zero-day vulnerabilities, many of which have gone unaddressed for one to two decades. This early success highlights the model's capabilities, even though it was not specifically designed for cybersecurity tasks. Mythos serves as a general-purpose model within Anthropic's Claude AI system. It boasts advanced coding and reasoning skills, making it a sophisticated option for tackling complex tasks. Anthropic's frontier models, including Mythos, are regarded as its most powerful creations. They are intended for tasks that require high performance, including agent-building, software coding, and advanced reasoning. The Mythos preview will not be widely accessible. However, 40 organizations outside of the partner group will gain access. The participating partners will share their insights from using the model, aiming to enhance overall technological security across the industry. Earlier this year, a data security issue led to the accidental disclosure of nearly 2,000 source code files and over 500,000 lines of code.
Anthropic attributed this leak to human error, which raised concerns about the model's potential misuse if exploited by adversarial parties. Additionally, Anthropic is engaged in complex discussions with federal authorities over the use of Mythos, as controversies surround its legal status in relation to national security. This preview underscores Anthropic's commitment to advancing cybersecurity while navigating significant challenges in both technology and legal landscapes.

Anthropic
El-Balad.com · 16d ago