News & Updates

The latest news and updates from companies in the WLTH portfolio.

Jamie Dimon Is Planning for Chaos -- And Thinks You Should Too

Jamie Dimon has never been one to sugarcoat. But his latest round of public commentary carries an edge that even by his standards feels unusually pointed -- a billionaire banker standing at the helm of America's largest financial institution, warning that the global economy is threading a needle between inflation, geopolitical fracture, and what he calls a potential "kerfuffle" in the Treasury market that could force the Federal Reserve's hand. In an interview aired on Fox Business and reported by Yahoo Finance, the JPMorgan Chase CEO said his bank is actively preparing for the possibility of a disruption in the U.S. Treasury market -- an event that, if it materialized, would send shockwaves through virtually every corner of global finance. "We are prepared for it," Dimon said. Not hedging. Not speculating. Preparing. That distinction matters. JPMorgan Chase, with roughly $4 trillion in assets, doesn't prepare for hypotheticals lightly. When Dimon says the firm has contingency plans in place for Treasury market volatility, he's signaling that the probability has crossed a threshold from theoretical to operationally relevant. And when he suggests the Fed would likely step in to stabilize such a situation -- but only after letting markets "have a kerfuffle" first -- he's offering a remarkably candid read on how Washington's monetary authorities might respond to a crisis of their own making. The Treasury Market's Fragile Foundations The U.S. Treasury market is the bedrock of global finance. It's the benchmark against which nearly all other assets are priced, the collateral underpinning trillions in derivatives and repo transactions, and the safe haven investors flee to when everything else falls apart. So when cracks appear in that foundation, the implications are systemic. Dimon's concerns aren't abstract. The Treasury market has been under structural stress for years, a byproduct of post-2008 bank regulations that limit dealer balance sheet capacity, the Federal Reserve's quantitative tightening program, and a federal government issuing debt at a pace that would have been unthinkable a decade ago. The U.S. national debt now exceeds $36 trillion. Annual deficits are running north of $1.8 trillion. And the Congressional Budget Office projects those numbers will only grow. Against that backdrop, the mechanics of Treasury auctions -- who buys, at what price, and with what enthusiasm -- have become a source of genuine anxiety among market participants. Several recent auctions have shown signs of weakening demand, particularly from foreign central banks that historically absorbed large portions of new issuance. Meanwhile, hedge funds have become increasingly dominant buyers, often employing highly leveraged basis trades that amplify both liquidity and fragility simultaneously. Dimon has flagged this dynamic before. But his tone has sharpened. He's no longer merely cautioning about fiscal deficits as a long-term drag. He's talking about near-term market events that could require emergency intervention from the central bank. The Fed, for its part, has maintained that the Treasury market is functioning normally. Chair Jerome Powell has acknowledged periods of volatility but has repeatedly expressed confidence in the market's underlying resilience. Dimon, it seems, is less convinced. Or at least less willing to assume the best. His comment about the Fed allowing a "kerfuffle" before stepping in is particularly telling. 
It suggests Dimon believes the central bank would prefer not to intervene preemptively -- that it would need political and market cover to act, and that cover would come only after visible distress. A controlled burn, not a firebreak. That's a calculated bet by policymakers, and it's one that makes the CEO of the country's biggest bank uncomfortable enough to say so publicly. What would such a disruption look like? It could start with a failed or poorly received Treasury auction, triggering a spike in yields that cascades through mortgage rates, corporate borrowing costs, and equity valuations. It could manifest as a sudden unwinding of leveraged positions in the basis trade, forcing fire sales and margin calls. Or it could emerge from a geopolitical shock -- a sudden sell-off by a major foreign holder of Treasuries -- that tests the market's ability to absorb large volumes without dislocation. Any of these scenarios would be ugly. Combined, they'd be devastating. Dimon's Broader Warning: Tariffs, Stagflation, and the Price of Uncertainty The Treasury market isn't the only thing keeping Dimon up at night. In the same set of remarks, he addressed the macroeconomic uncertainty created by the Trump administration's tariff policies, warning that the current trade posture risks pushing the U.S. economy toward stagflation -- the toxic combination of stagnant growth and persistent inflation that defined the late 1970s and proved extraordinarily difficult to unwind. Dimon has been vocal about tariffs for months. He's acknowledged that some degree of trade rebalancing with China and other partners may be warranted. But he's argued consistently that the execution matters enormously, and that broad, unpredictable tariff actions create a fog of uncertainty that chills business investment and consumer confidence alike. The numbers are starting to bear him out. Consumer sentiment surveys have weakened. Business capital expenditure plans have softened. And inflation expectations -- one of the metrics the Fed watches most closely -- have ticked higher, driven in part by anticipated tariff-related price increases on imported goods. JPMorgan's own economists have raised their probability estimates for a U.S. recession in 2025. Not a certainty. But no longer a tail risk either. Dimon's framing is deliberate. He's not predicting doom. He's insisting on preparation. There's a difference, and it's one that Wall Street's senior-most statesman has honed over decades of crisis management -- from the 2008 financial collapse, which JPMorgan navigated better than most, to the pandemic-era market seizure of March 2020, when Treasury market liquidity briefly evaporated in ways that alarmed even the most seasoned traders. That 2020 episode, in fact, may be the closest recent analogue to what Dimon is warning about now. In that instance, the Fed intervened with overwhelming force, purchasing hundreds of billions of dollars in Treasuries to restore market functioning. It worked. But it also expanded the Fed's balance sheet to unprecedented levels and created a precedent that markets now rely on -- the implicit assumption that the central bank will always backstop Treasury market dysfunction. Dimon appears to be questioning whether that assumption is as reliable as markets believe. His comment about a "kerfuffle" preceding intervention implies a gap -- a window of genuine distress before the cavalry arrives. 
And in modern markets, where algorithmic trading and leveraged positions can amplify moves in milliseconds, even a brief gap can inflict serious damage. So what is JPMorgan actually doing to prepare? Dimon didn't offer operational specifics in his public remarks, and the bank's spokespeople have declined to elaborate beyond the CEO's comments. But industry observers can make informed inferences. The bank is likely stress-testing its trading books against extreme yield scenarios, building cash buffers, reviewing counterparty exposures -- particularly to hedge funds active in the basis trade -- and ensuring its operations can handle elevated volumes during periods of market stress. These are the blocking-and-tackling exercises that large banks conduct routinely, but the intensity and specificity of the preparation reflects the seriousness of the perceived risk. Other major banks are watching closely. Goldman Sachs, Morgan Stanley, and Citigroup have all made public comments in recent weeks about Treasury market risks, though none with the bluntness Dimon employed. Bank of America's research team published a note in May warning that the basis trade's growing footprint in the Treasury market represents a "systemic vulnerability" that regulators have been too slow to address. The regulatory angle is important. The Securities and Exchange Commission finalized rules in late 2023 aimed at increasing central clearing of Treasury transactions, a reform designed to reduce counterparty risk and improve market transparency. But implementation timelines stretch into 2025 and 2026, and critics argue the rules don't go far enough to address the leverage embedded in hedge fund Treasury positions. The Financial Stability Oversight Council -- the inter-agency body created after 2008 to monitor systemic risks -- has flagged Treasury market structure as a priority concern, but concrete action has been slow. Dimon has long argued that bank regulations, particularly the supplementary leverage ratio, artificially constrain the ability of large dealers to intermediate in the Treasury market, reducing liquidity precisely when it's most needed. He's pushed for regulatory reform that would exempt Treasury holdings from certain capital requirements, arguing this would allow banks to step in as buyers during periods of stress. Regulators have been sympathetic to the argument in principle but reluctant to act, wary of appearing to weaken post-crisis safeguards. The irony is thick. Rules designed to make the financial system safer may be contributing to the very fragility Dimon is warning about. And then there's the political dimension. The current fiscal trajectory -- massive deficits, rising debt service costs, and no credible plan for consolidation from either party -- is the underlying driver of Treasury market stress. Dimon has called the deficit situation "the most predictable crisis in history," a phrase he's used repeatedly and with evident frustration. He's urged lawmakers to address it before markets force the issue. So far, those pleas have gone unheeded. The bond market, historically, has been the ultimate disciplinarian of fiscal excess. When governments borrow too much, bond investors demand higher yields, raising the cost of debt service and eventually forcing austerity. That mechanism has operated with brutal efficiency in countries like Greece, Italy, and Argentina. 
The United States has been largely exempt from such discipline, thanks to the dollar's reserve currency status and the unmatched depth and liquidity of the Treasury market. But exemptions aren't permanent. And Dimon seems to be suggesting that the margin of safety is narrower than most people assume. His willingness to say so publicly -- repeatedly, forcefully, and with the credibility of someone who oversees a $4 trillion balance sheet -- is itself a signal. CEOs of this stature don't issue warnings for sport. They do it when they believe the risks are real, imminent, and insufficiently appreciated by the people with the power to mitigate them. Whether Washington listens is another matter entirely. For market participants, the takeaway is practical: the man running America's biggest bank thinks a Treasury market disruption is plausible enough to prepare for. That alone should inform risk management decisions across the industry -- from asset allocation to liquidity planning to counterparty due diligence. Not because Dimon is always right. But because when the most connected banker in the world says he's bracing for turbulence, ignoring him is a choice that comes with consequences.
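For a rough sense of the mechanics behind those disruption scenarios, the standard first-order bond-math approximation is that a bond's percentage price change is roughly minus its modified duration times the change in yield, and leverage multiplies whatever that move is. The sketch below is illustrative only; the duration, the size of the yield spike, and the leverage ratio are assumptions for the sake of the example, not figures from the article.

```python
# Illustrative only: rough bond math for the yield-spike scenario described above.
# Duration, leverage, and the size of the move are assumed, not sourced.

def price_change_pct(modified_duration: float, yield_change_bp: float) -> float:
    """Approximate % price change of a bond for a given yield move.

    Uses the standard first-order duration approximation: %dP ~ -duration * dY.
    """
    return -modified_duration * (yield_change_bp / 10_000) * 100

spike_bp = 50                                   # assumed 50 basis-point jump in yields
unlevered_loss = price_change_pct(8.0, spike_bp)  # 10-year note, duration ~8 (assumed): ~ -4%

# A basis trade running 50x leverage (assumed) turns that into a loss of many
# multiples of the trader's equity, forcing margin calls and fire sales.
levered_loss_on_equity = unlevered_loss * 50      # ~ -200% of equity

print(f"Unlevered price move: {unlevered_loss:.1f}%")
print(f"Loss as % of equity at 50x leverage: {levered_loss_on_equity:.0f}%")
```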

Agility · CHAOS
WebProNews · 18d ago

Google's Quiet Fix for AI Chat Chaos: Gemini Finally Gets a Filing Cabinet

For months, power users of Google's Gemini AI assistant have wrestled with a problem that seems almost absurd for a product built by one of the world's most sophisticated technology companies: there was no good way to organize your conversations. Every chat, every brainstorm, every multi-turn research session -- all dumped into a single, undifferentiated list. Scroll and pray. That's about to change. Google is rolling out a new "Projects" folder system for Gemini that will allow users to group related conversations, upload files to a shared context, and set custom instructions that persist across every chat within a given project. The feature, first spotted in development and now confirmed through hands-on reporting by Android Authority, represents the most significant organizational upgrade to Gemini since Google rebranded the chatbot from Bard in early 2024. It also signals something broader: Google is no longer content to treat Gemini as a simple question-and-answer tool. The company is building infrastructure for sustained, complex workflows -- the kind that enterprise customers and developers have been demanding as they try to integrate AI assistants into daily operations. From Chat Log to Command Center The Projects feature works like this. Users can create a named project -- say, "Q3 Marketing Plan" or "Thesis Research" -- and then populate it with multiple Gemini conversations that all share the same contextual foundation. Within each project, you can upload reference documents, set persistent instructions ("always respond in bullet points" or "assume I'm writing for a technical audience"), and switch between related chats without losing the thread. Think of it as giving Gemini a working memory that extends beyond a single conversation window. According to Android Authority, the feature appears to be available to Gemini Advanced subscribers -- those paying $19.99 per month for access to Google's most capable AI models through the Google One AI Premium plan. The implementation includes a dedicated "Projects" section accessible from Gemini's main interface, with each project displaying its associated conversations, uploaded files, and custom instructions in a unified view. This isn't a trivial UI tweak. For anyone who has tried to use Gemini (or ChatGPT, for that matter) as a genuine productivity tool, the inability to organize conversations by topic or task has been a persistent friction point. You start a conversation about a budget spreadsheet on Monday, continue it on Wednesday, and by Friday you're scrolling through dozens of unrelated chats trying to find where you left off. Projects eliminates that problem by design. And the persistent instructions component may matter even more than the organizational structure. By allowing users to set project-level system prompts, Google is effectively letting people create specialized AI assistants without writing a single line of code. A lawyer could set up a project where Gemini always references a specific jurisdiction's statutes. A product manager could create a project where the assistant knows the team's tech stack, sprint cadence, and documentation standards. Every conversation within that project inherits those instructions automatically. Custom GPTs from OpenAI attempted something similar, but those are separate entities -- standalone bots with fixed configurations. Google's approach is more fluid. 
Projects sit inside the main Gemini interface, blending organization with customization in a way that feels less like building a new tool and more like configuring an existing one. The Competitive Context Google isn't operating in a vacuum here. OpenAI has been iterating aggressively on ChatGPT's organizational features, including conversation folders, search within chat history, and the aforementioned custom GPTs. Anthropic's Claude introduced a "Projects" feature of its own in 2024, allowing users to upload documents and set instructions within a contained workspace. Microsoft's Copilot is deeply embedded in the Office productivity environment, where organizational structure comes built-in through the apps themselves. So Google is, in some respects, playing catch-up. But the implementation details suggest the company is thinking carefully about how people actually work with AI over extended periods. The combination of file uploads, persistent instructions, and multi-conversation grouping within a single project creates something closer to a workspace than a chat folder. There's a strategic dimension too. Google has been pushing Gemini hard into its Workspace products -- Gmail, Docs, Sheets, Drive. A Projects feature that can pull in files and maintain context across sessions is a natural bridge between standalone Gemini usage and the kind of integrated AI assistance Google wants to sell to enterprise customers. If a project can reference documents stored in Drive, or if project-level instructions can inform how Gemini behaves inside Google Docs, the feature becomes a connective tissue between Google's consumer AI and its business productivity ambitions. No official announcement from Google has confirmed those deeper integrations yet. But the architectural direction is clear. The timing is also notable. Google's I/O developer conference in May 2025 showcased a wave of Gemini upgrades, including expanded context windows, improved reasoning capabilities through models like Gemini 2.5 Pro, and tighter integration with Android. The Projects feature fits neatly into this broader push to make Gemini not just smarter but more useful in sustained, real-world applications. Google CEO Sundar Pichai has repeatedly emphasized that the company's AI strategy centers on making Gemini the default assistant across all Google surfaces -- from phones to browsers to enterprise software. Projects is a bet that people will use Gemini for more than one-off queries. That they'll come back to the same topics repeatedly, build on previous conversations, and treat the AI as a collaborator rather than a search engine with better manners. Whether that bet pays off depends on execution. The feature needs to be fast, intuitive, and reliable. File uploads need to actually inform the AI's responses in meaningful ways, not just sit in a sidebar as decoration. Persistent instructions need to hold up across long conversation chains without degrading or being quietly overridden by the model's default behaviors. Early indications from users who have accessed the feature suggest it works largely as advertised, though the rollout appears to be gradual. Some Gemini Advanced subscribers report seeing the Projects option, while others don't yet have access -- a pattern consistent with Google's typical staged deployment approach. 
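Before looking at the bigger picture, it's worth making the project-level context idea concrete. Google hasn't published a developer API for Projects, and the snippet below isn't Gemini code; it's a minimal conceptual sketch, in Python with invented names, of the structure the feature describes: project-level instructions and uploaded files that every chat inside the project inherits.

```python
# Conceptual sketch only -- not Google's implementation or API.
# Models the idea of project-level context that every chat inherits.
from dataclasses import dataclass, field

@dataclass
class Project:
    name: str
    instructions: str                                    # persistent project-level system prompt
    files: dict[str, str] = field(default_factory=dict)  # filename -> extracted text
    chats: list["Chat"] = field(default_factory=list)

    def new_chat(self, title: str) -> "Chat":
        chat = Chat(title=title, project=self)
        self.chats.append(chat)
        return chat

@dataclass
class Chat:
    title: str
    project: Project
    messages: list[dict] = field(default_factory=list)

    def build_context(self, user_message: str) -> list[dict]:
        """Assemble the prompt: project instructions + files + chat history + new turn."""
        context = [{"role": "system", "content": self.project.instructions}]
        for name, text in self.project.files.items():
            context.append({"role": "system", "content": f"Reference file {name}:\n{text}"})
        context.extend(self.messages)
        context.append({"role": "user", "content": user_message})
        return context

# Usage: one project, two chats, both inheriting the same instructions and files.
plan = Project(name="Q3 Marketing Plan",
               instructions="Always respond in bullet points for a non-technical audience.")
plan.files["budget.csv"] = "channel,spend\nsearch,120000\nsocial,80000"
kickoff = plan.new_chat("Channel mix")
prompt = kickoff.build_context("Which channel should we cut first?")
```

The design point the sketch illustrates is that the customization lives on the project, not on a separate bot, which is why switching between chats inside a project keeps the same working context.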
What This Means for the AI Productivity Race

The broader implication is that the AI assistant market is maturing past the "wow, it can write a poem" phase and into something more mundane but far more valuable: workflow management. The companies that win this next phase won't necessarily be the ones with the most powerful models (though that helps). They'll be the ones that make it easiest to integrate AI into the repetitive, messy, context-heavy work that fills most professionals' days.

Google has advantages here that are easy to underestimate. Billions of people already use Gmail, Google Calendar, and Google Drive. If Gemini Projects can tap into that existing data layer -- surfacing relevant emails, pulling in shared documents, scheduling follow-ups -- it becomes something more than an organized chatbot. It becomes the orchestration layer for how knowledge work gets done inside Google's environment.

That's the play. Not just a filing cabinet for AI conversations, but the beginning of a persistent, context-aware workspace where the AI remembers what you're working on, knows what you've already discussed, and picks up where you left off. For now, though, it's a folder system. A good one, apparently. And for the millions of Gemini users who have been drowning in an endless scroll of unlabeled chats, that alone might be enough.

CHAOS · Anthropic
WebProNews · 18d ago

As Broadcom Expands AI Deals With Google and Anthropic, Should You Buy AVGO Stock?

Semiconductor and infrastructure software company Broadcom (AVGO) has strengthened its position in the artificial intelligence (AI) hardware ecosystem through expanded partnerships with Alphabet's (GOOG) (GOOGL) Google and AI startup Anthropic. According to a recent regulatory filing, Broadcom and Google have entered into a long-term agreement under which Broadcom will develop and supply custom Tensor Processing Units (TPUs) for future generations of Google's AI chips. The agreement also includes a supply assurance arrangement that commits Broadcom to provide networking and related components for Google's next-generation AI rack infrastructure through 2031. In a separate development, Broadcom, Google, and Anthropic have expanded their existing strategic collaboration. Beginning in 2027, Anthropic is expected to access approximately 3.5 gigawatts of AI compute capacity through Broadcom as part of the multi-gigawatt TPU-based infrastructure committed to the AI company. The scale of this deployment reflects growing demand for high-performance computing capacity to train and run large artificial intelligence models. However, the actual level of compute consumption will depend on Anthropic's continued commercial success. Notably, Broadcom's custom AI accelerators, referred to as XPUs, are seeing strong demand. During its first-quarter earnings call, management indicated that this momentum is expected to continue as the next phase of XPU deployments begins across its five key customers, which include Google and Anthropic. With AI-related demand accelerating and new long-term partnerships expanding its addressable market, Broadcom is strengthening its position as a key supplier of custom AI silicon and networking technology. Following a sharp rally earlier in the year, the stock has recently cooled off, offering an attractive entry point. Demand for AI infrastructure remains a major growth driver for Broadcom, with the semiconductor segment showing strong momentum. In the most recent quarter, semiconductor revenue reached $12.5 billion, up 52% year-over-year (YoY). The growth was primarily driven by AI-related semiconductor sales, which surged 106% YoY to $8.4 billion. The growth trajectory is expected to strengthen further in the upcoming quarter. Management projects semiconductor revenue of $14.8 billion in Q2, up 76% YoY. AI semiconductor revenue is anticipated to be the primary driver, with forecasts indicating a sharp acceleration to $10.7 billion, up approximately 140% from the same period last year. A significant contributor to this momentum is Broadcom's custom accelerator business, which grew 140% YoY in the first quarter. The company continues to see strong demand from major technology partners. For example, deployments for Google are expected to expand in 2026. Looking further ahead, the next generation of TPUs is projected to drive even stronger growth beginning in 2027. Additional demand is emerging from Anthropic. Broadcom's XPU platform extends beyond TPUs and continues to gain traction with multiple hyperscale customers. Broadcom is also seeing increasing engagement from other large customers, with shipments expected to rise meaningfully this year and more than double by 2027. Its expanding partnership and multi-year supply agreements with large customers augur well for growth. Alongside accelerators, demand for AI networking is also accelerating. In the first quarter, AI networking revenue grew 60% YoY and accounted for roughly one-third of total AI revenue. 
Management expects networking to become an even larger contributor in the second quarter, potentially reaching 40% of AI revenue as hyperscale demand increases. Overall, strong AI accelerator deployments, expanding customer partnerships, and rising networking demand position Broadcom to sustain robust revenue and earnings growth. Wall Street remains bullish about AVGO stock and maintains a "Strong Buy" consensus rating. The average analyst price target of $466.65 implies a potential upside of more than 43% from recent trading levels. Broadcom's valuation metrics further support positive sentiment around AVGO stock. AVGO stock trades at a forward P/E ratio of approximately 32.5, which is reasonable given the company's solid earnings growth trajectory. Consensus forecasts call for earnings growth of about 71.4% in fiscal 2026, followed by an additional 59.1% increase in fiscal 2027. Broadcom's expanding customer base, strong demand for its custom AI accelerators, and attractive valuation support its investment case.
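As a sanity check on the figures above, the implied "recent trading level" can be backed out of the quoted price target and upside, and the same percentage arithmetic applies to the YoY revenue comparisons. The snippet below only rearranges numbers already quoted in the article; the derived values are approximations, not independently sourced quotes.

```python
# Back-of-the-envelope checks using only figures quoted in the article.

price_target = 466.65          # average analyst price target ($)
stated_upside = 0.43           # "more than 43%" upside

# Upside = target / current - 1  =>  current = target / (1 + upside)
implied_recent_price = price_target / (1 + stated_upside)
print(f"Implied recent price: ~${implied_recent_price:,.0f}")   # roughly $326

# Same arithmetic for a YoY comparison: prior-year base = current / (1 + growth)
ai_semis_q1 = 8.4              # $ billions of AI semiconductor revenue, up 106% YoY
prior_year_base = ai_semis_q1 / (1 + 1.06)
print(f"Implied year-ago AI semiconductor revenue: ~${prior_year_base:.1f}B")  # ~$4.1B
```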

Anthropic
Barchart.com · 18d ago

Anthropic is giving some firms access to Claude Mythos to bolster cybersecurity defenses | Fortune

Anthropic is giving a group of Big Tech and cybersecurity firms access to a preview version of Claude Mythos -- its unreleased and most advanced AI model -- in an attempt to bolster cybersecurity defences across some of the world's most critical systems. The company has been concerned that the new model may pose unprecedented cybersecurity risks and increase the likelihood of large-scale AI-driven cyberattacks this year. The initiative, called "Project Glasswing," allows firms, including Amazon Web Services, Anthropic, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, Microsoft, and NVIDIA, to use the company's Mythos Preview for defensive security work and share their learnings with the wider industry. Anthropic is also providing access to roughly 40 more organizations responsible for building or maintaining critical software infrastructure, allowing them to use the model to scan and secure both their own systems and open-source code. In a blog post announcing the new initiative, Anthropic said it formed Project Glasswing because it believes the capabilities of its Claude Mythos Preview could reshape the cybersecurity sector due to its strong agentic coding and reasoning skills. Anthropic said it does not plan to make the Mythos Preview generally available, but eventually wants to safely deploy Mythos-class models at scale when new safeguards are in place. The existence of Anthropic's Mythos model was first revealed in March, when Fortune reported that the company was developing and testing an unreleased model described in company documents as "by far the most powerful AI model" it had ever developed. In a draft blogpost inadvertently made public last month, Anthropic warned that Mythos is "currently far ahead of any other AI model in cyber capabilities" and said it "presages an upcoming wave of models that can exploit vulnerabilities in ways that far outpace the efforts of defenders." The news of the model's existence has already rattled the cybersecurity industry. Following Fortune's report, shares in CrowdStrike, Palo Alto Networks, Zscaler, SentinelOne, Okta, Netskope, and Tenable all slumped between 5% and 11% as investors worried that increasingly capable AI models could undermine demand for traditional security products, a concern that had already surfaced the previous month when Anthropic launched Claude Code Security. In just the past few weeks, Anthropic says its Mythos Preview has identified thousands of zero-day vulnerabilities, many of which were critical and difficult to detect, including some in every major operating system and web browser. Several of the vulnerabilities discovered using the model had existed undetected for years, according to the company, the oldest being a 27-year old bug in OpenBSD -- an operating system best known for its strong security. But Anthropic has also acknowledged that the same capabilities that can bolster cyber defences can also be weaponized by attackers. "Given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely," the company said in a blog post. "The fallout -- for economies, public safety, and national security -- could be severe. Project Glasswing is an urgent attempt to put these capabilities to work for defensive purposes." 
While concerns about AI's potential to automate large-scale cyber attacks have been building for a while, Anthropic's newest model appears to represent a dangerous new level of AI performance in cyber tasks. According to a report from Axios, Anthropic has already privately warned top government officials that Mythos makes large-scale cyberattacks significantly more likely this year. Previous models from OpenAI and Anthropic had already reached a new risk level for cyber threats. When OpenAI released GPT-5.3-Codex in February, the company said it was the first model it had classified as high-capability for cybersecurity tasks under its Preparedness Framework and the first it had directly trained to identify software vulnerabilities. Anthropic also said its most advanced model on the market, Opus 4.6, released the same week, demonstrated an ability to surface previously unknown vulnerabilities in production codebases -- a capability the company acknowledged was dual-use. Hackers have already leveraged Anthropic's tools to enable more sophisticated and autonomous attacks. Last year, the company disclosed what it described as the first documented case of a cyberattack largely executed by AI -- a Chinese state-sponsored group that used AI agents to autonomously infiltrate roughly 30 global targets, with AI handling the majority of tactical operations independently. "Given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely," Anthropic said in a statement. "The work of defending the world's cyber infrastructure might take years; frontier AI capabilities are likely to advance substantially over just the next few months. For cyber defenders to come out ahead, we need to act now."

Anthropic
Fortune · 18d ago

Anthropic says its most powerful AI cyber model is too dangerous to release publicly -- so it built Project Glasswing

Anthropic on Tuesday announced Project Glasswing, a sweeping cybersecurity initiative that pairs an unreleased frontier AI model -- Claude Mythos Preview -- with a coalition of twelve major technology and finance companies in an effort to find and patch software vulnerabilities across the world's most critical infrastructure before adversaries can exploit them. The launch partners include Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, Nvidia, and Palo Alto Networks. Anthropic says it has also extended access to more than 40 additional organizations that build or maintain critical software, and is committing up to $100 million in usage credits for Claude Mythos Preview across the effort, along with $4 million in direct donations to open-source security organizations. The announcement arrives at a moment of extraordinary momentum -- and extraordinary scrutiny -- for the San Francisco-based AI startup. Anthropic disclosed on Sunday that its annualized revenue run rate has surpassed $30 billion, up from approximately $9 billion at the end of 2025, and the number of business customers each spending over $1 million annually now exceeds 1,000, doubling in less than two months. The company simultaneously announced a multi-gigawatt compute deal with Google and Broadcom. On the same day, Bloomberg reported that Anthropic had poached a senior Microsoft executive, Eric Boyd, to lead its infrastructure expansion. But Glasswing is something categorically different from a revenue milestone or a compute deal. It's Anthropic's most ambitious attempt to translate frontier AI capabilities -- capabilities the company itself describes as dangerous -- into a defensive advantage before those same capabilities proliferate to hostile actors. At the center of Project Glasswing sits Claude Mythos Preview, a general-purpose frontier model that Anthropic says has already identified thousands of high-severity zero-day vulnerabilities -- meaning flaws previously unknown to software developers -- in every major operating system and every major web browser, along with a range of other critical software. The company is not making the model generally available. "We do not plan to make Claude Mythos Preview generally available due to its cybersecurity capabilities," Newton Cheng, Frontier Red Team Cyber Lead at Anthropic, told VentureBeat in an exclusive interview. "However, given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely. The fallout -- for economies, public safety, and national security -- could be severe." That language -- "the fallout could be severe" -- is striking coming from the company that built the model. Anthropic is effectively arguing that the tool it created is powerful enough to reshape the cybersecurity landscape, and that the only responsible thing to do is to keep it restricted while giving defenders a head start. The technical results reinforce that claim. According to Anthropic's press release, Mythos Preview was able to find nearly all of the vulnerabilities it surfaced, and develop many related exploits, entirely autonomously, without any human steering. Three examples stand out: The model found a 27-year-old vulnerability in OpenBSD -- widely regarded as one of the most security-hardened operating systems in the world and commonly used to run firewalls and critical infrastructure. 
The flaw allowed an attacker to remotely crash any machine running the OS simply by connecting to it. It also discovered a 16-year-old vulnerability in FFmpeg -- the near-ubiquitous video encoding and decoding library -- in a line of code that automated testing tools had exercised five million times without ever catching the problem. And perhaps most alarmingly, Mythos Preview autonomously found and chained together several vulnerabilities in the Linux kernel to escalate from ordinary user access to complete control of the machine. All three vulnerabilities have been reported to the relevant maintainers and have since been patched. For many other vulnerabilities still in the remediation pipeline, Anthropic says it is publishing cryptographic hashes of the details today, with plans to reveal specifics after fixes are in place. On the CyberGym evaluation benchmark, Mythos Preview scored 83.1%, compared to 66.6% for Claude Opus 4.6, Anthropic's next-best model. The gap is even wider on coding benchmarks: Mythos Preview achieves 93.9% on SWE-bench Verified versus 80.8% for Opus 4.6, and 77.8% on SWE-bench Pro versus 53.4%. Finding thousands of zero-days at once sounds impressive. Actually handling the output responsibly is a logistical nightmare -- and one of the sharpest criticisms that security researchers have raised about AI-driven vulnerability discovery. Flooding open-source maintainers, many of whom are unpaid volunteers, with an avalanche of critical bug reports could easily do more harm than good. Cheng told VentureBeat that Anthropic has built a triage pipeline specifically to manage this problem. "We triage every bug that we find and then send the highest severity bugs to professional human triagers we have contracted to assist in our disclosure process by manually validating every bug report before we send it out to ensure that we send only high-quality reports to maintainers," he said. That pipeline is designed to prevent exactly the scenario that maintainers fear most: an automated firehose of unverified reports. "We do not submit large volumes of findings to a single project without first reaching out in an effort to agree on a pace the maintainer can sustain," Cheng added. When Anthropic has access to the source code, the company aims to include a candidate patch with every report, labeled by provenance -- meaning the maintainer knows the patch was written or reviewed by a model -- and offers to collaborate on a production-quality fix. "Models can write patches," Cheng noted, "but there are many factors that impact patch quality, and we strongly recommend that autonomously-written patches are put under the same scrutiny and testing that human-written patches are." On disclosure timelines, Anthropic says it follows a coordinated vulnerability disclosure framework. Once a patch is available, the company will generally wait 45 days before publishing full technical details, giving downstream users time to deploy the fix before exploitation information becomes public. Cheng said the company may shorten that buffer "if the details are already publicly known through other channels, or if earlier publication would materially help defenders identify and mitigate ongoing attacks," or extend it "when patch deployment is unusually complex or the affected footprint is unusually broad." Those are reasonable principles, but they will be tested at a scale that no vulnerability disclosure program has ever attempted. 
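Anthropic hasn't described the exact format of the cryptographic hashes it says it is publishing, but the general pattern it is pointing to is a hash commitment: publish a digest of the report now, reveal the full text and a salt once the fix ships, and let anyone verify that the two match. The sketch below is a minimal illustration of that pattern with an invented report; it is not Anthropic's actual scheme.

```python
# Minimal sketch of a hash commitment for a vulnerability report.
# The report text and salt are invented for illustration; this is not
# Anthropic's published format.
import hashlib
import secrets

def commit(report: str, salt: bytes) -> str:
    """Publish this digest now; it reveals nothing about the report itself."""
    return hashlib.sha256(salt + report.encode("utf-8")).hexdigest()

def verify(report: str, salt: bytes, published_digest: str) -> bool:
    """Later, anyone can check the revealed report against the earlier digest."""
    return commit(report, salt) == published_digest

report = "Hypothetical: out-of-bounds read in example_parser.c line 512"
salt = secrets.token_bytes(16)       # random salt prevents guessing low-entropy reports

digest = commit(report, salt)        # published at discovery time
# ... patch is developed and deployed, then report + salt are revealed ...
assert verify(report, salt, digest)
```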
The sheer volume of findings -- thousands of zero-days across every major platform -- means that even a well-designed triage process will face bottlenecks. And the 45-day disclosure window assumes that maintainers can actually produce, test, and ship a patch in that time, which is far from guaranteed for complex kernel-level bugs or deeply embedded cryptographic flaws. The irony of a company claiming to build the most capable cyber model ever constructed while simultaneously suffering a string of embarrassing security lapses has not been lost on observers. In late March, a draft blog post about Mythos was left in an unsecured and publicly searchable data store -- a CMS misconfiguration that exposed roughly 3,000 internal assets, including what appeared to be strategic plans for the model's rollout. Days later, on March 31, anyone who ran npm install on Claude Code pulled down Anthropic's complete original source code -- 512,000 lines -- for approximately three hours due to a packaging error, an incident that drew widespread attention in the developer community and was first reported by VentureBeat. When asked why partners and governments should trust Anthropic as the custodian of a model it describes as having unprecedented cyber capabilities, Cheng was direct. "Security is central to how we build and ship," he told VentureBeat. "These two incidents, a blog CMS misconfiguration and an npm packaging error, were human errors in publishing tooling, not breaches of our security architecture. We've made changes to prevent these from happening again, and we'll continue to improve our processes." It is a technically accurate distinction -- neither incident involved a breach of Anthropic's core model weights, training infrastructure, or API systems -- but it is also a distinction that may prove difficult to sustain as a public argument. For an organization asking governments and Fortune 500 companies to trust it with a tool that can autonomously find and exploit vulnerabilities in the Linux kernel, even minor operational lapses carry outsized reputational risk. The fact that the Mythos leak itself was what first alerted the security community to the model's existence, weeks before the planned announcement, underscores the point. The coalition's breadth is notable. It includes direct competitors -- Google and Microsoft -- alongside cybersecurity incumbents, financial institutions, and the steward of the world's largest open-source ecosystem. And several partners have already been running Mythos Preview against their own infrastructure for weeks. CrowdStrike's CTO Elia Zaitsev framed the initiative in terms of collapsing timelines: "The window between a vulnerability being discovered and being exploited by an adversary has collapsed -- what once took months now happens in minutes with AI." AWS Vice President and CISO Amy Herzog said her teams have already been testing Mythos Preview against critical codebases, where the model is "already helping us strengthen our code." And Microsoft's Global CISO Igor Tsyganskiy noted that when tested against CTI-REALM, Microsoft's open-source security benchmark, "Claude Mythos Preview showed substantial improvements compared to previous models." Perhaps the most revealing comment came from Jim Zemlin, CEO of the Linux Foundation, who pointed to the fundamental asymmetry that has plagued open-source security for decades: "In the past, security expertise has been a luxury reserved for organizations with large security teams. 
Open-source maintainers -- whose software underpins much of the world's critical infrastructure -- have historically been left to figure out security on their own." Project Glasswing, he said, "offers a credible path to changing that equation." To back that claim with dollars, Anthropic says it has donated $2.5 million to Alpha-Omega and OpenSSF through the Linux Foundation, and $1.5 million to the Apache Software Foundation. Maintainers interested in access can apply through Anthropic's Claude for Open Source program. After the research preview period -- during which Anthropic's $100 million credit commitment will cover most usage -- Claude Mythos Preview will be available to participants at $25 per million input tokens and $125 per million output tokens. Participants can access the model through the Claude API, Amazon Bedrock, Google Cloud's Vertex AI, and Microsoft Foundry. Those prices reflect the model's computational intensity. The draft blog post that leaked in March described Mythos as a large, compute-intensive model that would be expensive for both Anthropic and its customers to serve. Anthropic's solution is to develop and launch new safeguards with an upcoming Claude Opus model, allowing the company to "improve and refine them with a model that does not pose the same level of risk as Mythos Preview," as Cheng told VentureBeat. Security professionals whose legitimate work is affected by those safeguards will be able to apply to an upcoming Cyber Verification Program. The financial context matters. The same day Project Glasswing launched, Anthropic disclosed its revenue milestone and the Google-Broadcom compute deal. Broadcom signed an expanded deal with Anthropic that will give the AI startup access to about 3.5 gigawatts worth of computing capacity drawing on Google's AI processors, according to CNBC. The scale of compute being marshaled is staggering -- and it helps explain why Anthropic needs both the revenue from enterprise cybersecurity partnerships and the infrastructure to serve a model of Mythos Preview's size. The timing also intersects with growing speculation about Anthropic's path to a public offering. The company is reportedly evaluating an IPO as early as October 2026. A high-profile, government-adjacent cybersecurity initiative with blue-chip partners is exactly the kind of program that burnishes an IPO narrative -- particularly when the company can simultaneously point to $30 billion in annualized revenue and a compute footprint measured in gigawatts. The most consequential question raised by Project Glasswing is not whether Mythos Preview's capabilities are real -- the partner endorsements and patched vulnerabilities suggest they are -- but how much time defenders actually have before similar capabilities are available to adversaries. Cheng was candid about the timeline. "Frontier AI capabilities are likely to advance substantially over just the next few months," he told VentureBeat. "Given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely." He described Project Glasswing as "an important step toward giving defenders a durable advantage in the coming AI-driven era of cybersecurity" but added a crucial caveat: "It's important to note, this is a starting point. No one organization can solve these cybersecurity problems alone." That framing -- months, not years -- is worth taking seriously. 
DARPA launched its original Cyber Grand Challenge in 2016, a competition to create automatic defensive systems capable of reasoning about flaws, formulating patches, and deploying them on a network in real time. At the time, the winning AI-powered bot, Mayhem, finished last when placed against human teams at DEF CON. A decade later, Anthropic is claiming that a frontier AI model can find vulnerabilities that survived 27 years of expert human review and millions of automated security tests -- and can chain exploits together autonomously to achieve full system compromise. The delta between those two data points illustrates why the industry is treating this as a genuine inflection point, not a marketing exercise. Anthropic itself has firsthand experience with the offensive side of this equation: the company disclosed in November 2025 that a Chinese state-sponsored group achieved 80 to 90 percent autonomous tactical execution using Claude across approximately 30 targets, according to Anthropic's misuse report. Project Glasswing arrives during one of the most turbulent weeks in Anthropic's history. In the span of days, the company has announced a model it considers too dangerous for public release, disclosed that its revenue has tripled, sealed a multi-gigawatt compute deal, hired a senior Microsoft executive, made it more expensive for Claude Code subscribers to use third-party tools like OpenClaw, and weathered a major outage of its Claude chatbot on Tuesday morning. Anthropic says it will report publicly on what it has learned within 90 days. In the medium term, the company has proposed that an independent, third-party body might be the ideal home for continued work on large-scale cybersecurity projects. Whether any of that is fast enough depends on a race that is already underway. Anthropic built a model that can autonomously crack open the most hardened operating systems on the planet -- and is now betting that sharing it with defenders, under careful restrictions, will do more good than the inevitable moment when similar capabilities land in less careful hands. It is, in essence, a wager that transparency can outrun proliferation. The next few months will determine whether that bet pays off, or whether the glasswing's wings were never quite opaque enough to hide what was coming.

Anthropic
VentureBeat · 18d ago

Anthropic unveils Mythos cybersecurity model weeks after Claude Code leak exposed security lapse

The company launched Project Glasswing with major tech partners as it tries to turn a powerful unreleased model into a defensive security tool after a chaotic source code leak.

Anthropic on Tuesday unveiled Claude Mythos Preview, a new frontier AI model built for cybersecurity work, and launched Project Glasswing, a partner program that gives a small group of major technology and infrastructure organizations early access to the system for defensive security tasks. Anthropic said the unreleased model has already found thousands of high-severity vulnerabilities across major operating systems and web browsers, and described the effort as an urgent push to put advanced cyber capabilities in defenders' hands before attackers gain similar tools.

Project Glasswing includes AWS, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, Nvidia, and Palo Alto Networks, with more than 40 additional organizations also getting access. Anthropic said it is committing up to $100 million in usage credits and $4 million in donations to open-source security groups as part of the program. Anthropic said the model can identify vulnerabilities and develop related exploits with little or no human steering, and highlighted examples including a 27-year-old OpenBSD flaw, a 16-year-old FFmpeg bug, and chained Linux kernel vulnerabilities that could escalate ordinary user access into full machine control. Anthropic's announcement says the relevant bugs have already been patched.

The Mythos launch lands barely a week after Anthropic created its own security mess by accidentally exposing Claude Code's source files through a packaging error in version 2.1.88 of its software. The mistake exposed nearly 2,000 files and more than 500,000 lines of code, then spiraled further when Anthropic's takedown effort accidentally hit around 8,100 GitHub repositories before the company reversed most of the notices.

Anthropic is now presenting Mythos as a model so capable and potentially dangerous that it will not be released broadly for now. In the Project Glasswing materials, the company says it does not plan to make Mythos Preview generally available and instead wants to develop safeguards first so Mythos-class models can eventually be deployed more safely at scale. A leaked internal document described Mythos as the company's most capable model to date and as a meaningful step change in reasoning, coding, and cybersecurity. Anthropic has also been discussing the model's offensive and defensive cyber implications with US government officials.

Anthropic
Crypto Briefing · 18d ago

Anthropic Claims Its New A.I. Model, Mythos, Is a Cybersecurity 'Reckoning'

Anthropic, the artificial intelligence company that recently fought the Pentagon over the use of its technology, has built a new A.I. model that it claims is too powerful to be released to the public. Instead, Anthropic said on Tuesday, it will make the new model -- known as Claude Mythos Preview -- available to a consortium of more than 40 technology companies, including Apple, Amazon and Microsoft, which will use the model to find and patch security vulnerabilities in critical software programs. Anthropic said it had no plans to release its new technology more widely, but was announcing the new model's capabilities in one area in particular -- identifying security vulnerabilities in software -- in an effort to sound the alarm over what the company believes will be a new, scarier era of A.I. threats. "The goal is both to raise awareness and to give good actors a head start on the process of securing open-source and private infrastructure and code," Jared Kaplan, Anthropic's chief science officer, said in an interview. The coalition, known as Project Glasswing, will include some of Anthropic's competitors in A.I., such as Google, as well as hardware providers like Cisco and Broadcom, and organizations that maintain critical open-source software, such as the Linux Foundation. Anthropic is committing up to $100 million in Claude usage credits to the effort. Logan Graham, the head of an Anthropic team that tests new models for dangerous capabilities, called the new model "the starting point for what we think will be an industry change point, or reckoning, with what needs to happen now." Anthropic occupies an unusual position in today's A.I. landscape. It is racing to build increasingly powerful A.I. systems, and making billions of dollars selling access to those systems, while also drawing attention to the risks its technology poses. The company was deemed a supply-chain risk this year by the Pentagon for demanding certain limitations to the use of its technology. A federal judge later stopped the designation from going into effect. Anthropic has not released much new information about the model, which was code-named "Capybara" during development. But after some details were inadvertently leaked last month, the company acknowledged that it considered it a "step change" in A.I. capabilities, with improved performance in areas like coding and cybersecurity research. The company's decision to hold back Claude Mythos Preview, while giving access only to partners out of concern for how it might be misused, has some precedent. In 2019, OpenAI announced it had built a new model, GPT-2, but was not releasing the full version right away. The company claimed that its text-generation capabilities could be used to automate the mass-production of propaganda or misinformation. (It later released the model, after conducting additional safety testing on it.) Many of the leaders of the GPT-2 project later left OpenAI to start Anthropic. This time, Anthropic is making a different, more urgent claim. The company's executives say Claude Mythos Preview is already capable of carrying out autonomous security research, including scanning for and exploiting so-called zero-day vulnerabilities in critical software programs, flaws that are unknown even to the software's developer. These efforts can often be triggered by amateurs with simple prompts. 
The company claims that the new model has already identified "thousands" of bugs and vulnerabilities in popular software programs, including every major operating system and browser. One of the vulnerabilities Claude found, the company said, was a 27-year-old bug in OpenBSD, an open-source operating system that was designed to be difficult to hack. Many internet routers and secure firewalls incorporate OpenBSD's technology. Another was a longstanding issue in a piece of popular video software that automated testing tools had scanned five million times, without finding any problems. "This model is good at finding vulnerabilities that would be well understood and findable by security researchers," Mr. Graham said. "At the same time, it has found vulnerabilities, and in some cases crafted exploits, sophisticated enough that they were both missed by literally decades of security researchers, as well as all the automated tools designed to find them." Anthropic announced on Monday that its projected annual revenue had more than tripled in 2026, to more than $30 billion from $9 billion. The growth has come largely because of the popularity of Anthropic's Claude as a tool for programming. Anthropic has focused on making Claude good at completing lengthy coding tasks, in hopes of making it more useful to professional programmers and amateur "vibecoders." But an A.I. system designed to be good at coding is also good at spotting the flaws in code -- running automated scans for bugs and vulnerabilities that can allow hackers to take control of users' machines, expose sensitive user information or wreak other havoc. The cybersecurity industry has been bracing for years for what more capable A.I. models could do to critical tech infrastructure. Until recently, only expert human researchers with access to specialized tools were capable of finding the most severe security vulnerabilities. Now, the fear is that a powerful A.I. model could discover them on its own. "Imagine a horde of agents methodically cataloging every weakness in your technology infrastructure, constantly," Nikesh Arora, the chief executive of Palo Alto Networks, wrote in a blog post last week. Mr. Graham said one of the unanswered questions about Claude Mythos Preview, and other future models that will be capable of doing similar things, was whether most or all of the world's critical software would need to be patched or rewritten as a result of these new models. "There are a lot of really critical systems around the world, whether it's physical infrastructure or things that protect your personal data, that are running on old versions of code," Mr. Graham said. "If these previously were mostly secure because it took a lot of human effort to attack them, does that paradigm of security even work anymore?" It is wise to take claims about unreleased model capabilities from A.I. companies with a grain of salt. In this case, though, cybersecurity researchers who have been given access to Claude Mythos Preview have characterized the model as a significant cybersecurity risk. Elia Zaitsev, the chief technology officer of CrowdStrike, a cybersecurity firm with access to the new model through Project Glasswing, said in a statement accompanying Anthropic's announcement that the model "demonstrates what is now possible for defenders at scale, and adversaries will inevitably look to exploit the same capabilities." "What once took months now happens in minutes with A.I.," Mr. Zaitsev said. 
Project Glasswing takes its name from the glasswing butterfly, Mr. Kaplan said, which uses transparent wings to hide in plain sight. Similarly, he said, many of today's most critical software programs contain bugs and vulnerabilities that have existed in the open for years, but were buried in such complex technical systems that no human ever found them. According to Mr. Kaplan, the cybersecurity capabilities of Claude Mythos Preview are not a result of special training. Rather, they are just one of many areas in which the model is better than previous ones. He predicted that similar cybersecurity capabilities would exist in other models soon. As that happens, he said, the arms race between hackers and the companies racing to defend their systems will only escalate. "As the slogan goes, this is the least capable model we'll have access to in the future," he said. Kevin Roose is a Times technology columnist and a host of the podcast "Hard Fork."

Anthropic
DNyuz18d ago
Read update
Anthropic Claims Its New A.I. Model, Mythos, Is a Cybersecurity 'Reckoning'

What will happen to USDC now Polymarket is launching its own stablecoin? | Market Analysis | CryptoRank.io

- Polymarket is launching Polymarket USD, a platform-issued stablecoin backed 1:1 by native USDC and replacing bridged USDC.e on Polygon; this relabeling does not reduce USDC's market cap (USDC ≈ $77.9B) because underlying Circle reserves remain intact.
- The change gives Polymarket tighter collateral and product control (UX, idle-yield mechanics) and removes bridged-token frictions, affecting DeFi app design and token settlement rails.
- It raises structural risks: wrappers add dependency on redemption design, smart contracts and operational controls; wider adoption of app-specific dollars could still increase USDC demand but make it less visible at the top line.

Polymarket's plan to roll out its own collateral token sounds, at first glance, like the kind of move that should eat into Circle's USDC. A platform swaps out USDC.e, introduces Polymarket USD, and the obvious retail question follows almost immediately: Does that mean less demand for USDC? The short answer is no. Polymarket USD is being introduced as a token backed 1:1 by native USDC, while the platform is phasing out USDC.e, the bridged version of USDC it previously used on Polygon. The wrapper is changing, and the user experience is changing, but the underlying reserve asset still points back to Circle's own stablecoin. That means the move, by itself, doesn't pull dollars out of USDC circulation or mechanically shrink USDC's market cap. It's important to make that distinction because USDC is now so large that any kind of imprecise language can obscure more than it explains. CryptoSlate data currently places its market capitalization at roughly $77.9 billion, making it the second-largest stablecoin after Tether's USDT and the sixth-largest cryptocurrency. Circle says USDC is fully backed by highly liquid cash and cash-equivalent assets and redeemable 1:1 for dollars, with reserve holdings disclosed weekly and tested through monthly third-party assurance reports.

To understand Polymarket's move, you need to separate three things that often get blurred together: native issuance, bridged representation, and platform-specific collateral. Native USDC is the token that Circle issues and redeems. Bridged USDC, in this case USDC.e, is a version that represents USDC locked elsewhere. Circle's own description of bridged USDC says it's backed by USDC on another blockchain locked in a smart contract, while native USDC is Circle-issued, fully reserved, and directly redeemable. Polymarket USD enters as a third layer: a platform asset designed for use inside Polymarket, backed 1:1 by native USDC rather than by a separate reserve system. A user deposits USDC, that USDC sits as backing, and Polymarket issues an equivalent amount of Polymarket USD for use on the platform. When the user exits, the platform token is redeemed, and the underlying USDC is released. The economic exposure stays anchored to the same reserve asset throughout the loop, while the visible asset label and settlement rail inside the app change.

That's one of the reasons why the usual fear of dilution misses the mark here. The market cap for USDC tracks the value of all outstanding USDC. If native USDC is sitting underneath Polymarket USD as reserve collateral, that USDC still exists and still counts toward total supply. For USDC's market cap to fall, the backing would need to be redeemed for fiat or exchanged for another stable asset. 
A relabeling of claims can't and won't accomplish that on its own. What Polymarket is actually changing, and what makes this more interesting than the launch FAQ suggests, is the usage layer. Users who previously interacted with USDC.e will now interact with Polymarket USD. That gives the platform tighter control over collateral design, product architecture, and, potentially, yield economics for idle balances. It also reduces reliance on a bridged asset that carried its own user-friction problem, since bridged tokens tend to raise questions about issuer support, upgrade paths, and redemption assumptions. Circle's own documentation draws a bright line here: bridged USDC is created by a third party and backed by USDC locked elsewhere, while native USDC is the official form issued by Circle and interoperable across supported chains through its own infrastructure. Stablecoins have grown so large and important that they have become a foundation for the growth of the entire crypto industry. Aside from serving as trading liquidity, they have also become a type of reserve asset that sits beneath app-level money. A user who thinks they're holding a platform's dollar, in this case Polymarket USD, is actually holding Circle's dollar. At the next level down, Circle's reserve system is holding cash, Treasury exposure, and repo-linked liquidity for the benefit of token holders. The visible coin and the economic foundation can now be two steps apart, creating more room for confusion when people try to infer demand from surface-level branding. There's a real risk conversation here, and it mostly comes from structural issues rather than market cap. Wrappers and platform-issued collateral introduce another dependency. Users now rely on the platform's redemption design, operational controls, and smart contract implementation in addition to the reserve asset beneath it. Circle's documentation states that bridged forms of USDC carry risks and are not issued by Circle, which is one reason the industry has been pushing toward cleaner, more direct forms of stablecoin settlement where possible. The easy mistake is to hear that there's a "new stablecoin" and assume it means "new money." Sometimes that conclusion fits, but it's not the case here. Another mistake is to assume indirect demand does not count. If Polymarket USD adoption rises and every unit is backed by native USDC, then demand for the platform token can still feed demand for USDC underneath. It just shows up one layer deeper in the stack. Polymarket's move is a small case study of where stablecoins are going. USDC looks more like base-layer reserve collateral for more specialized products, and app-specific dollars are now the interface users actually see. The result is a stablecoin economy that's becoming more layered, more embedded, and a little harder to read from the top line alone.
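To make that accounting concrete, here is a minimal sketch in Python of the wrap-and-redeem loop described above. The class and method names are hypothetical, and the mechanics are deliberately simplified to the point the article makes: platform dollars are only ever minted against native USDC held as backing, and redeeming them releases that same USDC, so nothing in the loop changes how much USDC exists.

```python
# Minimal sketch (hypothetical names) of a platform dollar backed 1:1 by native USDC.
# It illustrates why wrapping USDC does not change USDC's outstanding supply: every
# platform dollar minted is matched by USDC locked as backing, and every redemption
# releases that same USDC back to the user.

class UsdcLedger:
    """Stand-in for Circle's ledger: tracks total native USDC outstanding."""
    def __init__(self, total_supply: float):
        self.total_supply = total_supply  # unchanged by the wrap/unwrap loop below


class PlatformDollar:
    """App-issued dollar (a Polymarket-USD-style wrapper) backed 1:1 by native USDC."""
    def __init__(self, usdc: UsdcLedger):
        self.usdc = usdc
        self.usdc_backing = 0.0      # native USDC locked as collateral
        self.platform_supply = 0.0   # platform dollars in circulation

    def deposit(self, usdc_amount: float) -> float:
        """User deposits native USDC; the platform mints an equal amount of platform dollars."""
        self.usdc_backing += usdc_amount
        self.platform_supply += usdc_amount
        return usdc_amount  # platform dollars credited to the user

    def redeem(self, platform_amount: float) -> float:
        """User redeems platform dollars; the locked USDC is released back to them."""
        assert platform_amount <= self.platform_supply
        self.platform_supply -= platform_amount
        self.usdc_backing -= platform_amount
        return platform_amount  # native USDC returned to the user


if __name__ == "__main__":
    usdc = UsdcLedger(total_supply=77.9e9)   # roughly $77.9B USDC outstanding
    pusd = PlatformDollar(usdc)

    pusd.deposit(1_000.0)                    # wrap $1,000 of USDC
    print(usdc.total_supply)                 # 77.9e9 -- unchanged: the USDC still exists as backing
    print(pusd.platform_supply, pusd.usdc_backing)  # 1000.0 1000.0 -- always equal (1:1)

    pusd.redeem(400.0)                       # unwrap $400
    print(usdc.total_supply)                 # still 77.9e9
    # USDC supply only falls if the backing is redeemed with Circle for fiat,
    # which sits outside the wrap/unwrap loop shown here.
```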

Polymarket
CryptoRank18d ago
Read update
What will happen to USDC now Polymarket is launching its own stablecoin? | Market Analysis | CryptoRank.io

Anthropic touts AI cybersecurity project with Big Tech partners

April 7 (Reuters) - Anthropic on Tuesday announced an initiative with major technology companies, including Amazon.com, Microsoft and Apple, that lets partners preview an advanced model with cybersecurity capabilities developed by the AI startup. Under its "Project Glasswing", select organizations will be allowed to use the startup's unreleased and general-purpose AI model, "Claude Mythos Preview", for defensive cybersecurity work, Anthropic said. Other partners include CrowdStrike, Palo Alto Networks, Google and Nvidia. The announcement follows a Fortune report last month that Anthropic was testing Claude Mythos, which it said posed security risks and also offered advanced capabilities, dragging shares of cybersecurity firms such as Palo Alto Networks and CrowdStrike sharply lower. This year's RSA cybersecurity conference in San Francisco was also dominated by talk about the rise of AI-powered cyberattacks and whether conventional security tools sufficed. In a blog post on Tuesday, Anthropic said Mythos Preview had found "thousands" of major vulnerabilities in operating systems, web browsers and other software. The startup said launch partners will use Mythos Preview in their defensive security work, and Anthropic will share findings with industry. Anthropic said it is also extending access to about 40 additional organizations responsible for critical software infrastructure, and made a commitment of up to $100 million in usage credits and $4 million in donations to open-source security groups. The AI startup added that its eventual goal is for "our users to safely deploy Mythos-class models at scale." The startup said it has also been in ongoing discussions with the U.S. government about the model's capabilities. Last year, Anthropic said that hackers exploited vulnerabilities in its Claude AI to attack around 30 global organizations. Moreover, 67% of the 1,000 executives surveyed in an IBM and Palo Alto Networks study said they had been targeted by AI attacks within the past year. (Reporting by Jaspreet Singh in Bengaluru and Jeffrey Dastin in San Francisco; Editing by Leroy Leo)

Anthropic
Yahoo News18d ago
Read update
Anthropic touts AI cybersecurity project with Big Tech partners

Anthropic launches cybersecurity partnership with Nvidia, Microsoft, other tech giants after latest AI model finds 'thousands' of previously unknown bugs

Anthropic (ANTH.PVT) on Tuesday announced a cybersecurity partnership with companies including Amazon (AMZN), Apple (AAPL), and Microsoft (MSFT) that it said will help defend against the rise of AI-powered cyberattacks. Called Project Glasswing, the initiative relies on Anthropic's new Claude Mythos Preview, a frontier model that the company said will only be made available to a handful of organizations. A group of roughly 40 other companies that work on critical software infrastructure will be able to use the model to secure their own software and open-source offerings. Anthropic said Claude Mythos Preview is already proving effective at detecting software vulnerabilities, stating that the AI has "identified thousands of zero-day vulnerabilities, many of them critical." Zero-day vulnerabilities are previously unknown errors in software that developers need to address before they can be exploited, or errors that have already been exploited by attackers and need to be fixed before they can cause any more harm. It's also detected flaws in every major operating system and web browser, and found one software bug that was 27 years old, and another that's 16 years old. "Given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely. Project Glasswing is a starting point," Anthropic said in a statement. "No one organization can solve these cybersecurity problems alone: frontier AI developers, other software companies, security researchers, open-source maintainers, and governments across the world all have essential roles to play," the company added. "The work of defending the world's cyber infrastructure might take years; frontier AI capabilities are likely to advance substantially over just the next few months. For cyber defenders to come out ahead, we need to act now." Claude Mythos Preview, Anthropic explained, is a general purpose, previously unreleased AI model that uses "strong agentic coding and reasoning skills" to tackle cybersecurity tasks. Unlike its previous AI models, Anthropic doesn't plan to make Claude Mythos Preview generally available. As part of the effort, the company said it's committing $100 million worth of usage credits and $4 million in direct donations to open-source cybersecurity organizations. In addition to its work with its fellow tech companies, Anthropic said it's in discussions with the US government about Claude Mythos Preview's defensive and offensive abilities. That's despite the fact that Anthropic and the Pentagon are locked in a legal battle over Anthropic's decision to establish redlines as to how the Department of Defense can use its AI models.

Anthropic
Yahoo Finance18d ago
Read update
Anthropic launches cybersecurity partnership with Nvidia, Microsoft, other tech giants after latest AI model finds 'thousands' of previously unknown bugs

Anthropic touts AI cybersecurity project with Big Tech partners

April 7 (Reuters) - Anthropic on Tuesday announced an initiative with major technology companies, including Amazon.com, Microsoft and Apple, that lets partners preview an advanced model with cybersecurity capabilities developed by the AI startup. Under its "Project Glasswing", select organizations will be allowed to use the startup's unreleased and general-purpose AI model, "Claude Mythos Preview", for defensive cybersecurity work, Anthropic said. Other partners include CrowdStrike, Palo Alto Networks, Google and Nvidia. The announcement follows a Fortune report last month that Anthropic was testing Claude Mythos, which it said posed security risks and also offered advanced capabilities, dragging shares of cybersecurity firms such as Palo Alto Networks and CrowdStrike sharply lower. This year's RSA cybersecurity conference in San Francisco was also dominated by talk about the rise of AI-powered cyberattacks and whether conventional security tools sufficed. In a blog post on Tuesday, Anthropic said Mythos Preview had found "thousands" of major vulnerabilities in operating systems, web browsers and other software. The startup said launch partners will use Mythos Preview in their defensive security work, and Anthropic will share findings with industry. Anthropic said it is also extending access to about 40 additional organizations responsible for critical software infrastructure, and made a commitment of up to $100 million in usage credits and $4 million in donations to open-source security groups. The AI startup added that its eventual goal is for "our users to safely deploy Mythos-class models at scale." The startup said it has also been in ongoing discussions with the U.S. government about the model's capabilities. Last year, Anthropic said that hackers exploited vulnerabilities in its Claude AI to attack around 30 global organizations. Moreover, 67% of the 1,000 executives surveyed in an IBM and Palo Alto Networks study said they had been targeted by AI attacks within the past year. (Reporting by Jaspreet Singh in Bengaluru and Jeffrey Dastin in San Francisco; Editing by Leroy Leo)

Anthropic
Yahoo! Finance18d ago
Read update
Anthropic touts AI cybersecurity project with Big Tech partners

Anthropic commits up to $100M in usage credits for Project Glasswing, along with $4M in direct donations to open-source security organizations

Shako / @shakoistslog: From a game-theoretic sense, I wonder if treating this as a KPI -- awarding max value to the 85th percentile, and penalizing people below it linearly and above it non-linearly -- would work. How is tokenmaxxing a measure of productivity or value? I can write some bad code that causes an infinite loop and uses up millions of tokens. What is the output of this tokenmaxxing that has resulted in good products or positive outcomes for Meta? I totally understand that R&D innovation can cost a lot with no immediate return (I'm in Biotech), but if the goal is just to use more tokens, what are we doing here?
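For what it's worth, the scoring rule the post gestures at could look like the following minimal sketch. It is purely hypothetical: the percentile target, the linear slope below it, and the quadratic penalty above it are all illustrative assumptions, not anything Meta or the poster has specified.

```python
import numpy as np

def token_kpi_score(usage: np.ndarray, target_pct: float = 85.0,
                    below_slope: float = 1.0, above_penalty: float = 2.0) -> np.ndarray:
    """Hypothetical sketch of the post's idea: score token usage against a percentile
    target. Max score at the target percentile, a linear penalty below it, and a
    steeper (quadratic) penalty above it. Parameters are illustrative assumptions."""
    target = np.percentile(usage, target_pct)
    gap = (usage - target) / target          # signed distance from target, in target units
    score = np.where(
        gap <= 0,
        1.0 + below_slope * gap,             # linear penalty for being under the target
        1.0 - above_penalty * gap ** 2,      # quadratic penalty for overshooting (tokenmaxxing)
    )
    return np.clip(score, 0.0, 1.0)

# Example: usage far above the target percentile scores worse than usage somewhat
# below it, which is the point of penalizing overshoot non-linearly.
usage = np.array([0.2e6, 0.8e6, 1.0e6, 1.2e6, 10.0e6])
print(token_kpi_score(usage).round(2))
```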

Anthropic
Techmeme18d ago
Read update
Anthropic commits up to $100M in usage credits for Project Glasswing, along with $4M in direct donations to open-source security organizations

Anthropic says it will make a preview of its Mythos model available to more than 40 organizations, as part of a new Project Glasswing cybersecurity initiative

Shako / @shakoistslog: From a game-theoretic sense, I wonder if treating this as a KPI -- awarding max value to the 85th percentile, and penalizing people below it linearly and above it non-linearly -- would work. How is tokenmaxxing a measure of productivity or value? I can write some bad code that causes an infinite loop and uses up millions of tokens. What is the output of this tokenmaxxing that has resulted in good products or positive outcomes for Meta? I totally understand that R&D innovation can cost a lot with no immediate return (I'm in Biotech), but if the goal is just to use more tokens, what are we doing here?

Anthropic
Techmeme18d ago
Read update
Anthropic says it will make a preview of its Mythos model available to more than 40 organizations, as part of a new Project Glasswing cybersecurity initiative

Anthropic withholds Mythos Preview model because its hacking is too powerful

Why it matters: Anthropic is so worried about the damage its own model could cause that it's refusing to release it publicly until there are safeguards to control its most dangerous capabilities.

Threat level: Mythos Preview is "extremely autonomous" and has sophisticated reasoning capabilities that give it the skills of an advanced security researcher, Logan Graham, head of Anthropic's frontier red team, told Axios.
* Mythos Preview can find "tens of thousands of vulnerabilities" that even the most advanced bug hunter would struggle to find. Unlike past models, it can also write the exploits to go with them.
* Opus 4.6, the last model Anthropic released to the public, found about 500 zero-days in open-source software -- a fraction of Mythos Preview's output.

Zoom in: In testing, Mythos Preview found bugs in "every major operating system and web browser," according to a blog post, including some that are believed to be decades old and weren't detected by repeated human-run security tests.
* Mythos Preview successfully reproduced vulnerabilities and created proof-of-concepts to exploit them on the first attempt in 83.1% of cases.
* Mythos Preview found several flaws in the Linux kernel, which runs most of the world's servers, and autonomously chained them together in a way that would let a hacker take complete control of any machine running Linux.
* In another test, Mythos Preview found a 27-year-old vulnerability in OpenBSD, an open-source operating system, that would allow hackers to remotely crash any machine running it. OpenBSD is widely considered one of the most security-hardened open-source projects and is found in several firewalls, routers and high-security servers.

Yes, but: It's only a matter of months -- as soon as six months or as far out as 18 -- until other AI companies release models with powers similar to Mythos Preview, Graham said.
* "It's very clear to us that we need to talk publicly about this," Graham said. "The security industry needs to understand that these capabilities may come soon."
* OpenAI and other tech giants are already working on models with similar capabilities, Axios has reported.
* "More powerful models are going to come from us and from others, and so we do need a plan to respond to this," Anthropic CEO Dario Amodei said in a video released alongside the news.

Driving the news: Instead of a public release, Anthropic is opting to roll out Mythos Preview to more than 40 organizations that will use the model to scan and secure their own code and open-source systems.
* Twelve of those companies -- Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, Nvidia and Palo Alto Networks -- are participating in a new initiative called Project Glasswing.
* Those companies will use Mythos Preview as part of their defensive security work, and Anthropic will share takeaways from what the initiative finds.
* Anthropic is providing up to $100 million in usage credits to the companies testing Mythos Preview, and $4 million to open-source security organizations, including OpenSSF, Alpha-Omega and the Apache Software Foundation.

Flashback: AI models have already given malicious hackers a boost in their attacks.
* China has used Anthropic's models to automate a spying campaign targeting 30 organizations.
* Cybercriminals have been using models to write scripts and automate ransomware negotiations.

The intrigue: Anthropic has also been briefing the Cybersecurity and Infrastructure Security Agency, the Commerce Department and "a broader array of actors" on the potential risks and benefits of Mythos Preview, a company official told Axios.
* "There's an opportunity here to give a shot in the arm to defense and to keep pace with this long-standing trend where offense exploitation had an advantage," the official said.
* The official wouldn't say if the company has briefed the Pentagon, with which Anthropic has been feuding for months.
* Spokespeople for CISA and the Commerce Department didn't immediately respond to requests for comment.

Reality check: Mythos was widely hyped after Axios and others reported on its frightening capabilities, but Graham noted that the company never formally planned to make this version generally available.
* Anthropic was previously testing the model's capabilities internally, while also rolling it out to an even smaller group.
* "The feedback was overwhelmingly clear to us," Graham said. "We then decided to launch it this way."

What to watch: Anthropic said in a blog post that the company's goal is to one day "enable our users to safely deploy Mythos-class models at scale," including for general use cases beyond cybersecurity.
* The company is planning new safeguards that will be available on its less-powerful Opus models, "allowing us to improve and refine them with a model that does not pose the same level of risk as Mythos Preview."

Anthropic
Axios18d ago
Read update
Anthropic withholds Mythos Preview model because its hacking is too powerful

Anthropic touts AI cybersecurity project with Big Tech partners By Reuters

April 7 (Reuters) - Anthropic on Tuesday announced an initiative with major technology companies, including Amazon.com, Microsoft and Apple, that lets partners preview an advanced model with cybersecurity capabilities developed by the AI startup. Under its "Project Glasswing", select organizations will be allowed to use the startup's unreleased and general-purpose AI model, "Claude Mythos Preview", for defensive cybersecurity work, Anthropic said. Other partners include CrowdStrike, Palo Alto Networks, Google and Nvidia. The announcement follows a Fortune report last month that Anthropic was testing Claude Mythos, which it said posed security risks and also offered advanced capabilities, dragging shares of cybersecurity firms such as Palo Alto Networks and CrowdStrike sharply lower. This year's RSA cybersecurity conference in San Francisco was also dominated by talk about the rise of AI-powered cyberattacks and whether conventional security tools sufficed. In a blog post on Tuesday, Anthropic said Mythos Preview had found "thousands" of major vulnerabilities in operating systems, web browsers and other software. The startup said launch partners will use Mythos Preview in their defensive security work, and Anthropic will share findings with industry. Anthropic said it is also extending access to about 40 additional organizations responsible for critical software infrastructure, and made a commitment of up to $100 million in usage credits and $4 million in donations to open-source security groups. The AI startup added that its eventual goal is for "our users to safely deploy Mythos-class models at scale." The startup said it has also been in ongoing discussions with the U.S. government about the model's capabilities. Last year, Anthropic said that hackers exploited vulnerabilities in its Claude AI to attack around 30 global organizations. Moreover, 67% of the 1,000 executives surveyed in an IBM and Palo Alto Networks study said they had been targeted by AI attacks within the past year.

Anthropic
Investing.com18d ago
Read update
Anthropic touts AI cybersecurity project with Big Tech partners By Reuters

Anthropic limits Mythos AI rollout over fears hackers could use model for cyberattacks

[Photo caption: Anthropic CEO and co-founder Dario Amodei speaks during the 56th annual World Economic Forum meeting in Davos, Switzerland, Jan. 20, 2026.]
Anthropic on Tuesday announced an advanced artificial intelligence model that will roll out to a select group of companies as part of a new cybersecurity initiative called Project Glasswing. The model, Claude Mythos Preview, excels at identifying weaknesses and security flaws within software, and Anthropic is limiting access to try to prevent bad actors from exploiting that capability, the company said. Anthropic said Apple, Google, Microsoft, Nvidia and Amazon Web Services are among the project's initial launch partners and will be able to use the model for defensive security work. More than 40 other companies, including CrowdStrike and Palo Alto Networks, are also participating, Anthropic said. "There was a lot of internal deliberation," Dianne Penn, Anthropic's head of research product management, told CNBC in an interview. "We really do view this as a first step for giving a lot of cyber defenders a head start on a topic that will be increasingly important." Anthropic's announcement comes after descriptions of the model were discovered by Fortune in a publicly accessible data cache late last month. Cybersecurity stocks fell on the report, which said that the model had advanced cyber capabilities that also posed a significant risk. The iShares Cybersecurity ETF was mostly flat during intraday trading on Tuesday. Anthropic was founded in 2021 by a group of researchers and executives who defected from OpenAI over concerns about its direction and attitude toward safety. The company spent years carefully constructing its reputation as a firm that was more dedicated to responsible AI deployment, and it unveiled Project Glasswing just weeks after its high-profile clash over safety with the Defense Department spilled into public view. Anthropic said it's been in "ongoing discussions" with U.S. government officials about Claude Mythos Preview's cyber capabilities.

Anthropic
CNBC18d ago
Read update
Anthropic limits Mythos AI rollout over fears hackers could use model for cyberattacks

Anthropic debuts preview of powerful new AI model Mythos in new cybersecurity initiative - IT Security News

Anthropic
IT Security News - cybersecurity, infosecurity news18d ago
Read update
Anthropic debuts preview of powerful new AI model Mythos in new cybersecurity initiative - IT Security News

Dow Jones Wins Order For More Months Of Perplexity AI Logs

By Ivan Moreno (April 7, 2026, 2:10 PM EDT) -- A Manhattan federal judge has ordered Perplexity AI to turn over seven additional months of internal user-activity logs in a copyright lawsuit brought by Dow Jones and other publishers, rejecting Perplexity's argument that producing the data would be unduly burdensome....

Perplexity
law360.com18d ago
Read update
Dow Jones Wins Order For More Months Of Perplexity AI Logs

Anthropic crosses 1M sf milestone in SF with third lease in a month

AI giant in downtown office arms race with competitor OpenAI
Anthropic's Howard Street hot streak continues with its third lease in less than two weeks. The artificial intelligence giant just inked a short-term lease for roughly 70,000 square feet at 405 Howard Street, the San Francisco Business Times reported. With this deal, Anthropic now has a presence in each of the four buildings of the Foundry Square office complex, which occupies all four corners of the intersection of Howard and First Streets in downtown San Francisco. The lease at 405 Howard will add about 350 desks to the company's inventory as it prepares its phased move-in to its new headquarters at 300 Howard Street next year. The company does not yet occupy all of its leased space, and some of the offices such as 405 Howard are meant to fill a gap until it consolidates headquarters operations at 300 Howard. The 10-story 405 Howard building is also known as the Orrick Building, named for a law firm that is one of the structure's largest tenants. In 2018, consulting giant PwC leased 200,000 square feet at the roughly 520,000-square-foot building. Anthropic now leases roughly 1 million square feet in the neighborhood, according to the Business Times. That marks a notable milestone that competitor and fellow downtown San Francisco occupant OpenAI recently passed. The recent leasing spree by the Dario Amodei-led firm started earlier this year with a lease for the entire 420,000-square-foot building at 300 Howard Street and the adjacent 342 Howard Street, totaling about 480,000 square feet. Last month, Anthropic signed a deal for about 100,000 square feet across three floors at 400 Howard Street, known as Foundry Square I; as with its later move into 300 Howard, it plans to move employees there in phases. The company also converted its sublease for the 240,000-square-foot 500 Howard Street building, known as Foundry Square IV, into a long-term lease in recent weeks. -- Chris Malone Méndez

Anthropic
The Real Deal New York18d ago
Read update
Anthropic crosses 1M sf milestone in SF with third lease in a month

Anthropic touts AI cybersecurity project with Big Tech partners

April 7 (Reuters) - Anthropic on Tuesday announced an initiative with major technology companies, including Amazon.com, Microsoft and Apple, that lets partners preview an advanced model with cybersecurity capabilities developed by the AI startup. Under its "Project Glasswing", select organizations will be allowed to use the startup's unreleased and general-purpose AI model, "Claude Mythos Preview", for defensive cybersecurity work, Anthropic said. Other partners include CrowdStrike, Palo Alto Networks, Google and Nvidia. The announcement follows a Fortune report last month that Anthropic was testing Claude Mythos, which it said posed security risks and also offered advanced capabilities, dragging shares of cybersecurity firms such as Palo Alto Networks and CrowdStrike sharply lower. This year's RSA cybersecurity conference in San Francisco was also dominated by talk about the rise of AI-powered cyberattacks and whether conventional security tools sufficed. In a blog post on Tuesday, Anthropic said Mythos Preview had found "thousands" of major vulnerabilities in operating systems, web browsers and other software. The startup said launch partners will use Mythos Preview in their defensive security work, and Anthropic will share findings with industry. Anthropic said it is also extending access to about 40 additional organizations responsible for critical software infrastructure, and made a commitment of up to $100 million in usage credits and $4 million in donations to open-source security groups. The AI startup added that its eventual goal is for "our users to safely deploy Mythos-class models at scale." The startup said it has also been in ongoing discussions with the U.S. government about the model's capabilities. Last year, Anthropic said that hackers exploited vulnerabilities in its Claude AI to attack around 30 global organizations. Moreover, 67% of the 1,000 executives surveyed in an IBM and Palo Alto Networks study said they had been targeted by AI attacks within the past year. (Reporting by Jaspreet Singh in Bengaluru and Jeffrey Dastin in San Francisco; Editing by Leroy Leo)

Anthropic
StreetInsider.com18d ago
Read update
Anthropic touts AI cybersecurity project with Big Tech partners