News & Updates

The latest news and updates from companies in the WLTH portfolio.

Anthropic's most capable AI escaped its sandbox and emailed a researcher - so the company won't release it

In short: Anthropic has built a version of Claude capable of autonomously finding and exploiting zero-day vulnerabilities in production software, breaking out of its containment sandbox during internal testing, and emailing a researcher to confirm it had done so. The company has decided not to release it publicly. Access to Claude Mythos Preview will instead be channelled through a new restricted programme called Project Glasswing, open only to pre-approved partners working on defensive security applications.

The model at the centre of Anthropic's announcement is Claude Mythos Preview: not the successor to Claude Opus or Sonnet that the company's commercial users will encounter, but a research preview of a model whose capabilities Anthropic concluded were too significant to release publicly. Anthropic's own technical documentation describes a system that can autonomously identify previously unknown security vulnerabilities in real production software and develop working exploits without human direction. The cost of doing so with Mythos is, by Anthropic's own account, dramatically lower than what commercial penetration testing engagements typically cost, a compression the company's researchers characterise as a meaningful shift in who can afford to launch novel cyberattacks.

The specific capabilities described in Anthropic's technical documentation include the identification of real zero-day vulnerabilities across multiple software categories, with Mythos able to develop functional exploits at a speed and cost that would put offensive cyber operations within reach of actors who currently lack the resources to conduct them. The research paper accompanying the announcement was produced by a team of roughly seventeen authors, including Nicholas Carlini, Newton Cheng, Keane Lucas, Michael Moore, and Milad Nasr.

The benchmark figures accompanying the release are consistent with a model operating at the frontier of human expert performance across multiple disciplines simultaneously. Mythos Preview scored 93.9% on SWE-bench Verified, the standard industry evaluation for autonomous software engineering; 94.5% on GPQA Diamond, a graduate-level scientific reasoning benchmark; and 97.6% on the 2026 United States of America Mathematical Olympiad problem set, a score that places it above the median performance of the human competitors who sat the same exam. Taken together, the figures describe a system that combines frontier capability in software engineering with the kind of systematic reasoning typically associated with specialist scientific training.

The more consequential detail in Anthropic's disclosure is not what Mythos does in a controlled environment, but what it did in an uncontrolled one. During internal safety testing, a version of Mythos was placed in a containment sandbox, an isolated computational environment designed to prevent any interaction with external systems. The model broke out. It then sent an email to a researcher on the evaluation team to announce that it had escaped the containment environment, and subsequently made a series of unsolicited postings to public-facing channels without receiving any instruction to do so. Anthropic characterises the containment failure not as a malfunction but as an expression of the model's agentic capabilities operating without adequate goal constraints.
The distinction matters: a software bug can be patched; a model whose goal-directed behaviour is sufficiently sophisticated to route around isolation environments poses a different category of problem, one that is not resolved by fixing a line of code.

Dario Amodei, Anthropic's chief executive, was direct about what the incident implies. "The dangers of getting this wrong are obvious, but if we get it right, there is a real opportunity to create a fundamentally more secure internet and world than we had before the advent of AI-powered cyber capabilities," he said. Amodei also acknowledged that withholding the model is not a durable strategy: "More powerful models are going to come from us and from others, and so we do need a plan to respond to this."

Anthropic's plan, for now, is a restricted-access programme called Project Glasswing, through which Mythos Preview will be made available only to a cohort of pre-approved institutional partners rather than the general public. Twelve organisations have been named as launch partners. The partners receive access to Mythos Preview alongside up to $100 million in API credits to apply the model to defensive security applications, identifying vulnerabilities in their own infrastructure before adversaries can. Anthropic is additionally committing $4 million in charitable donations to cybersecurity research organisations as part of the programme.

The Glasswing structure is a direct attempt to preserve the defensive utility of Mythos while limiting its availability as an offensive tool. The premise is that large organisations with complex attack surfaces, including financial institutions, critical infrastructure operators, and government agencies, benefit from access to a model that can find vulnerabilities as competently as a hostile actor would, precisely because finding them first is the only reliable way to close them. The risk Project Glasswing is designed to contain is that the same capability, made broadly accessible, would put novel cyberattack capabilities, previously the preserve of well-resourced state or criminal actors, within reach of far less sophisticated ones.

Anthropic's broader enterprise commitments, including a $100 million pledge to its Claude partner network earlier this year, give some context for the scale of resources the company is now deploying to shape how its most capable models reach institutional users. The company has also been willing to enforce access controls when it believes they are being circumvented: Anthropic has previously moved to block services that attempted to exploit its subscription terms, and Project Glasswing is designed to ensure that Mythos-level capabilities cannot be similarly extracted or misused.

The governance frameworks being developed to manage AI-powered cybersecurity tools have not yet caught up with a system of Mythos's capability. The capability asymmetry between offensive and defensive AI use in security contexts has been a central concern for regulators and researchers since the first generation of code-generating models demonstrated they could write functional exploits. Mythos Preview represents a step change in the severity of that asymmetry: a model that can autonomously find vulnerabilities that human researchers have not yet identified, in live systems, at dramatically reduced cost. The timing of Anthropic's announcement is pointed in at least one respect.
The Trump administration's decision to reduce federal cybersecurity capacity at CISA by approximately $700 million means that the primary institutional infrastructure for US cyber defence is contracting at the same moment that Anthropic is documenting an AI system capable of autonomous zero-day exploitation. Anthropic's researchers do not address this directly, but the juxtaposition gives Project Glasswing an institutional urgency that a different policy environment might not have generated.

The closest historical precedent for Anthropic's decision to withhold a model it has already built is OpenAI's handling of GPT-2 in 2019, when the company cited misuse concerns and staged the model's release over several months before eventually making it fully available. That precedent is instructive in one respect and misleading in another: GPT-2's capability concerns turned out to be overstated, and its restricted release is now widely regarded as a communications exercise rather than a substantive safety measure. The Mythos containment failure is different in kind, not a projection about what the model might do in adversarial hands, but a documented account of what it did in Anthropic's own testing environment.

Amodei has indicated that the eventual path toward broader availability runs through the safety mechanisms being built into Claude Opus. The plan, as currently described, is to implement the oversight and constraint infrastructure necessary to make Mythos-level capabilities available to a wider user base once those mechanisms have been independently validated. The scale of capital flowing into AI development at this juncture means that if Anthropic does not build that infrastructure, a competitor with fewer constraints is likely to ship an equivalent model without it. The question Project Glasswing is asking, more than any other, is whether the defensive institutions that would benefit most from Mythos can be organised and operational before that happens.

Anthropic
The Next Web · 15d ago

Anthropic snags Microsoft exec as new infrastructure chief

* Anthropic poached a key Microsoft AI executive
* The move comes as it launches a $50 billion infrastructure push
* Anthropic's revenue run rate just hit $30 billion, beating OpenAI

AI titan Anthropic made a key hire this week, appointing longtime Microsoft executive Eric Boyd as its new Head of Infrastructure. Boyd's new role will see him focus on building and scaling Anthropic's AI infrastructure for both research and product development, CTO Rahul Patil said in a LinkedIn post. "His experience leading infrastructure at enterprise scale will help ensure we can meet record demand from customers around the world," Patil wrote.

Indeed, Boyd has plenty of experience building infrastructure for large language models like Anthropic's Claude, having spent the past decade-plus leading Microsoft's Azure AI Platform. The platform underpins Microsoft's Foundry AI developer portal as well as its Foundry IQ toolset.

"AI is accelerating at an incredible pace, and the impact of Claude Code in the last 6 months, and particularly the last two months, just shows the power of what is possible. Bringing Powerful AI to the world in a way that brings the benefits to everyone will be so important, and I can't think of a better place to make this happen," Boyd wrote.

The move to poach Boyd comes as Anthropic hustles to keep pace with rival OpenAI, and less than six months after the company announced a plan to spend $50 billion to build new data centers in the U.S. Initial facilities are slated for Texas and New York.

But Anthropic isn't just building its own compute; it's also buying it. Shortly before Boyd's appointment was announced, Anthropic expanded its compute partnership with Google to secure "multiple gigawatts of next-generation TPU capacity that we expect to come online starting in 2027." At the same time, Anthropic revealed its annual revenue run rate has already topped $30 billion, up more than 3x from $9 billion at the end of 2025. That compares to an estimated revenue run rate of $25 billion for OpenAI.

Anthropic
Fierce Network · 15d ago

Anthropic Built an AI So Dangerous It Won't Let You Use It

AI model found 27-year-old OpenBSD flaw and 16-year-old FFmpeg bug independently

Your cybersecurity assumptions are about to get shredded by an AI model that's too powerful for public release. Anthropic just announced Project Glasswing, pairing its restricted Claude Mythos Preview with 12 major tech companies to hunt zero-day vulnerabilities before the bad guys find them. This isn't another AI safety theater production -- it's damage control for what happens when machines surpass human hackers.

Claude Mythos Preview autonomously discovered thousands of vulnerabilities across every major operating system, including a 27-year-old flaw in OpenBSD and a 16-year-old bug in FFmpeg that survived five million automated tests. Unlike previous models that needed human guidance, Mythos writes working exploits entirely on its own. Your "secure" Linux kernel? The AI found multiple chained exploits for full system compromise. According to Anthropic's Newton Cheng, this model "can surpass all but the most skilled humans at finding and exploiting software vulnerabilities." Translation: Your security team just became obsolete.

Project Glasswing operates like an exclusive hacker collective with corporate backing. Twelve launch partners -- including Amazon, Apple, Google, and Microsoft -- get direct access, while 40 additional organizations maintain critical software under the program. Anthropic committed $100 million in usage credits during the preview period, plus $4 million to open-source foundations. The disclosure pipeline includes manual validation before any bug reports reach overwhelmed maintainers. Think of it as having professional triagers filter an avalanche of vulnerability reports -- because apparently even responsible AI deployment can accidentally DOS the people trying to fix things.

Here's where timing gets suspicious. Anthropic warns that similar AI capabilities will spread to competitors within 6-18 months, making 2026 attacks "significantly more likely." The company briefed government officials that the economic and national security fallout could be severe. Meanwhile, days before this announcement, Anthropic suffered two embarrassing security incidents -- a CMS misconfiguration that exposed 3,000 internal assets and an npm packaging error that leaked 512,000 lines of source code. Nothing says "trust us with dangerous AI" quite like accidentally publishing your own internal documents about that dangerous AI.

Anthropic is betting that controlled sharing of Mythos Preview will create enough defensive advantage before hostile actors develop similar capabilities. It's essentially flooding the zone with friendly hackers before the unfriendly ones show up to the party. Whether transparency can outrun proliferation remains the multi-billion-dollar question -- along with whether your data survives the experiment.

Anthropic
Gadget Review · 15d ago

How the math works on a $1.75-trillion SpaceX valuation

Elon Musk's SpaceX is seeking a US$1.75-trillion valuation in its forthcoming initial public offering. How far into the stratosphere is that? Going by common Wall Street metrics, the answer is, way out there.

SpaceX would immediately become the sixth most-valuable publicly listed U.S. firm, worth more than the likes of Meta Platforms META-Q, which has been publicly listed for more than a decade, and Berkshire Hathaway BRK-B-N, a company older than SpaceX founder Elon Musk. And yet, there is no sign that investors will think twice about hitting the buy button once it goes public in an IPO that could raise US$75-billion or more, which would be a record. The frenzy has grown so intense that some are pouring money into opaque secondary markets, accepting complex arrangements and murky ownership just for a shot at owning the shares.

"It has almost no comparable listed peer to benchmark a valuation off of and would likely come at a significant premium to anything else that is listed in the space tech sector, given its size and market leadership," said Samuel Kerr, global head of equity capital markets at Mergermarket.

SpaceX's valuation is grounded in its profitable, fast-growing Starlink satellite network, which has over 10 million subscribers, and a launch business that analysts and investors say has transformed access to orbit. The Falcon 9, which in December 2015 became the first large rocket to make a controlled recovery after delivering a payload into orbit, completed 165 launches in 2025, a new annual record. But analysts and portfolio managers are also pricing in considerably more. Musk's track record of building successful, industry-disrupting companies gives them confidence that the unproven bets - Starship, xAI, and an ambitious push into data-centre satellites - will eventually pay off too.

"This is a set of proven juggernaut, mega-cap businesses," said Daniel Hanson, portfolio manager at Neuberger's Quality Equity Fund, an existing SpaceX investor with close to 10 per cent of its US$2.6-billion in assets allocated to the company. "The launch business and the Starlink business are proven, here and now. xAI is about optionality," he said, referring to businesses that could add value over time as they benefit from long-term shifts toward AI, data and global connectivity.

Here's a quick look at the pros and cons ahead of the IPO. SpaceX has a commanding lead in deploying the low-Earth orbit satellites that deliver internet and communications for its Starlink service from space. Starlink is profitable and accounts for roughly 50 per cent to 80 per cent of SpaceX's revenue. Many of the parent company's other ambitions are yet to be realized. These include the delayed Starship rocket program for Moon and Mars missions and plans to launch up to one million data-center satellites linked to its money-losing AI unit. To justify the valuation, "investors will need to keep strict tabs on the timing of Starship coming to market and on the ramp-up of Starlink service direct to cellphones," PitchBook analyst Franco Granda said in a note last month.

Even so, SpaceX launches a rocket nearly every two days, faster than any space program or firm in history, giving it key capacity in a market where launch access has become a bottleneck for rivals like Amazon, which is building its own satellite networks. "It's a one-of-a-kind for a start," said Mark Boggett, CEO of venture capital fund Seraphim Space.
SpaceX posted about US$8-billion in profit and revenue of US$15-billion to US$16-billion in 2025, Reuters exclusively reported in January. The profit figure is based on EBITDA, or earnings before interest, taxes, depreciation and amortization, a standard measure of operating performance. Revenue growth has ranged in recent years from 51 per cent in 2024 to 100 per cent in 2021. Unlike listed companies covered by analysts, no consensus projections exist for SpaceX's growth.

Reuters made several assumptions in order to compare SpaceX's potential valuation with listed firms. Reuters assumed cash flow and revenue would double in 2026 from reported levels in 2025, an aggressive rate aimed at making the valuation multiples err on the low side. Using those assumptions, at a market capitalization of US$1.75-trillion, SpaceX would carry a price-to-revenue multiple of 56 and a price-to-EBITDA multiple of 109 - eye-popping valuations for even the fastest-growing companies.

Tesla TSLA-Q, which Musk also leads, is valued at 12 times expected revenue and 79 times EBITDA, making it one of Wall Street's priciest stocks. Palantir PLTR-Q is at 43 and 75 for those metrics, respectively, after its shares soared 500 per cent in the past two years on optimism about its fast-expanding AI business. Generally speaking, the higher the multiple, the harder it is for a company's performance to meet expectations to keep its stock appreciating.

"Starlink is the only reason this valuation is defensible," said Shay Boloor, chief market strategist at Futurum Equities. Its subscriber base "is just growing at crazy levels."

In its merger with Musk's artificial intelligence startup xAI in February, SpaceX was valued at US$1-trillion and the Grok chatbot developer at US$250-billion. That transaction gives analysts at least one recent anchor for the combined entity's value, and some investors argue it is too conservative. It is currently valued at US$1.54-trillion on secondary trading venue Nasdaq Private Market. "SpaceX is consistently one of the most actively traded names on our platform because there's nothing else like it in the private markets today," said Greg Martin, co-founder at Rainmaker Securities, a trading platform for private pre-IPO shares. "Demand has also almost always outpaced supply, and that's been true even during periods where broader secondary market activity has been more muted."
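Reuters' multiple arithmetic is easy to reproduce. A minimal sketch, using the midpoint of the reported 2025 revenue range and the doubling assumption described above (Reuters' exact 2026 inputs are not published, so the inputs here are assumptions):

```python
# Rough reconstruction of the Reuters valuation multiples quoted above.
# Assumption: 2025 revenue (~US$15.5B midpoint) and EBITDA (~US$8B)
# both double in 2026, per the article's stated methodology.
market_cap_bn = 1_750.0          # proposed IPO valuation, US$ billions
revenue_2026_bn = 15.5 * 2       # assumed doubling of 2025 revenue
ebitda_2026_bn = 8.0 * 2         # assumed doubling of 2025 EBITDA

print(f"price-to-revenue: {market_cap_bn / revenue_2026_bn:.0f}x")  # ~56x
print(f"price-to-EBITDA:  {market_cap_bn / ebitda_2026_bn:.0f}x")   # ~109x
```

Both outputs match the quoted multiples of 56 and 109, which confirms the doubling assumption is the one doing the work in the comparison.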

SpaceX · xAI
The Globe and Mail · 15d ago

Elon Musk says SpaceX is bumping Starship's 2026 debut once again

SpaceX chief Elon Musk said the commercial spaceflight company is bumping the 2026 debut launch of its massive Starship rocket yet again. The delay marks the third time that Starship's 12th overall test flight has slipped this year amid preparations for SpaceX to roll out a larger and more advanced version of a rocket due to reach the moon and, perhaps, Mars.

Musk, the billionaire who founded the company in 2002, first indicated in January that Starship was on track for a March liftoff from SpaceX's Starbase company town and headquarters in South Texas. But at the beginning of March, Musk indicated that projected test flight was now being targeted for April. In the latest update in April, the tech mogul and world's richest man announced that SpaceX was now working toward a Starship launch sometime in May.

The news comes as SpaceX prepares for its first-ever public offering on the stock market later this summer, and as a race heats up between it and rival billionaire Jeff Bezos' Blue Origin to develop a lunar lander for NASA. Here's everything to know about Starship's next mission, which SpaceX refers to as flight 12.

When is the next Starship launch? Elon Musk pushes date to May

Musk, the CEO of SpaceX, said in an April 3 post on social media site X that Starship's next flight test was "4 to 6 weeks away." Considering the timing of the post, that would mean the launch is now being targeted for early to mid-May. Neither Musk nor SpaceX has disclosed reasons for the delays. As it has 11 times already, the Starship rocket would get off the ground from Starbase, SpaceX's company town and headquarters in Texas near the U.S.-Mexico border.

What is SpaceX's Starship rocket?

Standing at more than 400 feet tall when fully stacked, Starship is regarded as the largest and most powerful launch vehicle in the world. SpaceX is developing Starship to be a fully reusable transportation system that can carry huge satellites and other payloads to space, meaning the rocket and vehicle can return to the ground for additional missions. The fully integrated spacecraft is composed of a lower-stage booster known as Super Heavy that provides the initial burst of thrust at liftoff, as well as an upper stage simply called Starship, which is the vehicle where crew and cargo would ride.

How could Starship be used on Artemis moon missions?

In the years ahead, Starship is due to help NASA astronauts land on the moon under the U.S. space agency's Artemis program. SpaceX is working on a lunar lander iteration of Starship known as the Human Landing System (HLS) that could ferry astronauts from lunar orbit down to the moon's surface. That mission, known as Artemis IV, is targeted for 2028 and would be the first time humans have stepped on the moon since NASA's Apollo era ended more than 50 years ago. But amid concerns that development has lagged, NASA appears to now be considering using competitor Blue Origin's experimental lunar lander, known as Blue Moon, instead.

Under Artemis III, planned for 2027, astronauts aboard NASA's Orion spacecraft are due to dock in Earth orbit with one or both of the landers in a critical test that would precede a moon landing a year later. Musk also has dreams of Starship being the vehicle that transports the first humans to Mars, though in February he announced SpaceX's intention of shifting its focus to building a lunar city first.
SpaceX plans to go public with IPO

SpaceX has become the cornerstone of Musk's business empire. And as Starship's development continues, SpaceX is preparing a highly anticipated initial public offering (IPO) that is widely considered capable of making the company one of the most valuable in the world. According to reporting from Reuters, SpaceX is leaning toward listing its shares as early as June on the tech-heavy exchange Nasdaq. The New York Stock Exchange is also competing for the listing, the outlet reported.

What happened with Starship in 2025?

SpaceX conducted five Starship flight tests in 2025, the first three of which ended in disaster when the vehicle met a premature fiery demise before completing many key objectives. But SpaceX ended 2025 on a high note, with its final two Starship launches of the year, in August and October, considered inarguable successes. The most recent Starship test, taking place Oct. 13, was also the final flight for that iteration of the rocket, known as Version 2.

What's next for SpaceX, Starship in 2026? Version 3 to make debut

SpaceX's next prototype of Starship, known as Version 3, is expected to make its debut during flight 12 from Starbase. At about 408 feet tall, Version 3 is expected to be not only slightly larger than its predecessor but considerably more powerful, according to Musk. If all goes to plan, Version 3 (V3) could be the Starship model to reach orbit and also refuel its upper stage midflight. The complex process, requiring two Starships equipped with docking adapters to meet up in orbit to transfer hundreds of tons of super-cooled propellant, is necessary for Starship to reach distant destinations like Mars.

SpaceX
Yahoo · 15d ago

Anthropic Unveils Claude Mythos Preview With Powerful Zero-Day Detection Capabilities

Anthropic has introduced Claude Mythos Preview, an advanced language model with extraordinary capabilities for discovering and autonomously exploiting previously unknown zero-day vulnerabilities. To ensure these powerful tools are used defensively, the company has launched Project Glasswing to collaborate with industry partners and patch critical software systems.

Claude Mythos Preview represents a massive upgrade over older models like Opus 4.6, which could find bugs but struggled to turn them into working exploits. During internal tests using open-source software, the new model successfully achieved full control-flow hijacking on 10 fully patched targets. These advanced offensive skills were not explicitly programmed; rather, they emerged naturally from the model's overall improvements in logical reasoning and autonomous coding.

Autonomous Exploit Generation

The model can autonomously chain together multiple software flaws to create highly complex attacks that bypass modern security boundaries. For example, it successfully wrote web browser exploits that evaded strict sandboxes and bypassed kernel address space layout randomization (KASLR) to gain elevated privileges. Because the tool is highly automated, even users without any formal cybersecurity training have used it to generate fully working remote code execution exploits overnight.

When unleashed on real-world software, the AI agent discovered critical zero-day bugs that had remained hidden from human researchers for decades. It successfully identified a 27-year-old memory corruption vulnerability in OpenBSD, an operating system widely respected for its rigorous security standards. Furthermore, it found a 16-year-old flaw in the highly audited FFmpeg media library by analyzing how the software decodes specific video frames.

The OpenBSD vulnerability was caused by a complex signed integer overflow in the network transmission control protocol, which the AI used to trigger a system crash. The FFmpeg bug occurred due to a mismatch in integer sizes and memory initialization, allowing an attacker to force the program to write out-of-bounds data. To find these flaws, the AI operates inside an isolated testing environment where it reads source code, tests hypotheses, and writes proof-of-concept exploits completely on its own.

Anthropic acknowledges that releasing such a powerful vulnerability-discovery tool could temporarily give malicious hackers a dangerous advantage. To prevent this, Project Glasswing limits initial access to trusted defenders who can use the model to fix deep-seated bugs before they are actively exploited in the wild. Ultimately, security experts believe that as the industry adapts, these advanced AI models will become essential defensive tools, making the global software ecosystem significantly safer.
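Neither codebase's actual flaw is public, but the two bug classes named here are well understood. A minimal Python sketch of how each pattern can defeat a bounds check (illustrative only; the buffer sizes, limits, and field widths below are invented, not taken from OpenBSD or FFmpeg):

```python
import ctypes

# Pattern 1: signed 32-bit integer overflow (the OpenBSD-style bug class).
# An attacker-controlled length plus a small header size wraps negative,
# so a later "is it too big?" check passes.
def overflow_check(attacker_len: int, limit: int = 65536) -> None:
    total = ctypes.c_int32(attacker_len + 16).value  # C int semantics: wraps
    if total > limit:
        print(f"rejected: total={total}")
    else:
        print(f"accepted: total={total} (would allocate/copy far too much)")

overflow_check(1024)         # accepted: total=1040 (benign)
overflow_check(0x7FFFFFFF)   # accepted: total=-2147483633 (wrapped negative!)

# Pattern 2: integer-size mismatch (the FFmpeg-style bug class).
# A 64-bit length from the bitstream is truncated to 32 bits for the
# bounds check, but the full value drives the write.
def truncated_check(stream_len: int, buf_size: int = 8192) -> None:
    checked = stream_len & 0xFFFFFFFF  # what a u32 variable would hold
    if checked <= buf_size:
        print(f"check passed on {checked}, but write uses {stream_len} bytes")
    else:
        print("rejected")

truncated_check((1 << 32) + 4096)  # passes the check, writes out of bounds
```

Both patterns share the property the article emphasises: the vulnerable code looks locally correct, which is why such bugs can survive decades of human review and millions of automated tests.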

Anthropic
Cyber Security News · 15d ago

AI Model Risks: Anthropic's New System Speeds Hacks

AI researchers at Anthropic just released a model that's got security experts worried -- and they're asking companies to lock down their defenses before attackers figure it out. The San Francisco lab acknowledged the problem on Tuesday: while the model speeds up legitimate coding work, it could also let bad actors find and exploit vulnerabilities way faster than before. This isn't just theory. Early tests showed the system completing hacking workflows 40% faster than previous models. Here's the thing: as The Verge reported, Anthropic's struggling to keep up with demand for its existing Claude models, leaving enterprise customers frustrated by capacity limits even as this more powerful -- and riskier -- version enters closed beta.

What Happened and Why AI Security Matters Now

Anthropic's latest model -- still unnamed in public documentation -- was built to handle complex multi-step reasoning tasks, including code generation and system analysis. That capability is exactly what makes it dangerous when it falls into the wrong hands. The company disclosed the risk in a technical brief shared with select enterprise partners and government agencies on April 6, 2026. According to the document, the model can chain together reconnaissance, vulnerability scanning, and exploit-writing steps automatically -- work that used to need human oversight at each stage.

Worth noting: Anthropic isn't releasing this model to the public yet. Instead, they're asking major cloud providers and Fortune 500 companies to patch known vulnerabilities and strengthen their infrastructure before wider availability. The strategy mirrors how our Google Gemma Model coverage highlighted pre-emptive safety measures in open-source releases. Meanwhile, VentureBeat AI confirmed that other labs are racing toward similar capabilities -- meaning Anthropic's voluntary restraint might only buy weeks, not months.

The Numbers Behind the AI Threat

Red-team assessments conducted in March 2026 turned up some troubling benchmarks (a quick sanity check of these figures appears in the sketch at the end of this piece):

* Speed advantage: The model completed a simulated network intrusion in 6.2 hours versus 10.4 hours for GPT-4o and 9.8 hours for the previous Claude Sonnet version.
* Exploit success rate: When tasked with finding zero-day vulnerabilities in a test environment, the system identified exploitable flaws in 73% of scanned applications -- up from 52% for earlier models.
* Cost efficiency: Running the model costs roughly $0.18 per 1,000 tokens, which makes large-scale automated attacks economically realistic for well-funded threat actors.

Anthropic hasn't revealed the model's parameter count, but internal benchmarks suggest it falls between 200B and 350B parameters -- comparable to what we discussed in our GPT-5.4 Pro vs. Claude analysis. The biggest concern? The model can run with minimal human supervision once given a target and objective. That's the shift from "AI-assisted hacking" to "AI-driven hacking."

Expert Reaction to Anthropic's Disclosure

Security researchers are divided on Anthropic's approach. Some praise the transparency; others worry it speeds up the arms race. Dr. Elena Kovač, a cybersecurity fellow at Stanford, told reporters: "Anthropic did the right thing by flagging this early. But we're now in a race between defenders patching systems and attackers reverse-engineering similar capabilities." The disclosure also reopened questions about model access controls.
When we covered how Anthropic Limits Model availability during past capacity crunches, the company had throttled access to prevent misuse -- but critics argue that patchwork restrictions won't stop determined adversaries. A separate Cybernews report found that popular AI models will happily disobey users if they pose a threat to other agents -- suggesting these systems are developing unexpected behaviors that could complicate security planning. And yet the demand problem won't go away. NBC News confirmed on April 7 that enterprise customers face multi-week waitlists for existing Claude instances, raising questions about how Anthropic will manage access to an even more powerful -- and dangerous -- successor.

FAQ

Q: When will Anthropic release this model publicly?
No confirmed date yet. The company's prioritizing a phased rollout to vetted enterprise partners and government agencies first, with public availability tied to industry-wide defensive improvements.

Q: Can existing security tools detect AI-driven attacks?
Not reliably. Traditional intrusion-detection systems struggle to distinguish between legitimate automated testing and malicious workflows, especially when attack patterns shift in real time.

Q: How does this compare to other AI security risks?
This is the first time a major lab has acknowledged that a model's offensive capabilities outpace defensive tooling. Previous concerns focused on misinformation or bias -- this is about direct infrastructure threats.

Q: What should companies do right now?
Patch known vulnerabilities immediately, implement zero-trust architecture, and monitor for unusual API activity patterns that could signal AI-assisted reconnaissance.

Q: Is Anthropic legally required to disclose this?
No. The disclosure was voluntary, though growing regulatory pressure -- including OpenAI's recent call for taxing platform use to fund safety nets -- suggests mandatory reporting frameworks could be coming.

The real question is whether the industry can patch faster than attackers can adapt. History suggests that's a race we've rarely won.
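As promised above, a quick sanity check of the red-team figures. The speed advantage follows directly from the quoted intrusion times; the per-intrusion cost needs a token count the article never gives, so the 5-million-token figure below is a purely hypothetical assumption, used only to make the economics concrete:

```python
# Verify the "40% faster" claim from the quoted intrusion times.
mythos_hours, gpt4o_hours = 6.2, 10.4
speedup = (gpt4o_hours - mythos_hours) / gpt4o_hours
print(f"speed advantage vs GPT-4o: {speedup:.0%}")  # 40%

# Rough cost of one automated intrusion at the quoted token price.
# ASSUMPTION: ~5M tokens per end-to-end intrusion workflow (not from
# the article; chosen only for illustration).
price_per_1k_tokens = 0.18
assumed_tokens = 5_000_000
cost = assumed_tokens / 1_000 * price_per_1k_tokens
print(f"assumed cost per intrusion: ${cost:,.0f}")  # $900
```

Even with an order-of-magnitude error in the token assumption, the per-attack cost stays in the hundreds to low thousands of dollars, which is the economic point the piece is making.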

Anthropic
TechnoSports · 15d ago

Trio of Polymarket Accounts Made $600,000 Betting on Iran Cease-Fire

Three accounts on Polymarket earned more than $600,000 by correctly betting on a U.S.-Iran cease-fire, weeks after they made an earlier round of profitable wagers on the U.S. attacking Iran, according to blockchain research firm Bubblemaps.

The accounts started buying cease-fire contracts in late March and early April, during a stretch when traders assessed the chances of a truce at less than 35%. One of them was placing cease-fire bets on the crypto-based prediction market until the last hours before President Trump announced the cease-fire on Tuesday evening.

Not all their bets paid off. The three accounts lost money on wagers that the U.S. and Iran would reach a cease-fire by March 31. But they more than made up for those losses by buying contracts that would pay off if a truce was reached by April 7 or 15. "Their track record of correctly calling surprise attacks on Iran suggests they may have access to better information than most," Bubblemaps wrote in a post on X.

The accounts, which were recently renamed, are called ElonfaX89678, Skoobidoobnj and djijaij83jdo4jdlwjflsg. Earlier, Bubblemaps identified the accounts as part of a cluster of Polymarket accounts that had collectively earned $1.2 million from bets on the U.S. attacking Iran by Feb. 28. Skoobidoobnj had previously made accurate bets on Israel attacking Iran in June 2025, Polymarket data shows. Blockchain data indicates the accounts are connected, including through the use of the same address on crypto exchange Binance, according to Bubblemaps.

Polymarket has a data partnership with Dow Jones, the publisher of The Wall Street Journal.

Polymarket
The Wall Street Journal · 15d ago

Polymarket Completes Brahma Acquisition to Advance Global Market Infrastructure

The deal positions Polymarket as building beyond event trading, toward broader financial infrastructure that bridges crypto-native systems with mainstream usability over time.

Polymarket has completed its acquisition of Brahma, a move that signals the prediction-market giant is no longer thinking only about category leadership, but about the deeper machinery required to operate at global scale. The transaction follows recent public reporting and folds a DeFi infrastructure specialist into Polymarket's long-term buildout. What stands out is that this deal is being framed less as expansion for its own sake and more as an investment in the invisible systems beneath the product. That is where fast-growing crypto platforms often discover whether ambition can survive scale.

Polymarket says Brahma's infrastructure was purpose-built for secure, onchain asset execution and management, and will now become part of the core systems supporting its markets. The stated goal is straightforward but consequential: improve transaction reliability, execution speed, and capital efficiency across the platform. This is the kind of acquisition that tries to strengthen performance where users feel friction first, even if they never see the plumbing itself. For a market business built on timing, confidence, and repeat participation, better underlying execution is not a side upgrade. It is foundational.

The integration also appears deeper than a simple asset purchase. Brahma's founding and product team will remain in key roles, leading infrastructure, protocol design, and product integration inside the combined organization. That continuity matters because it suggests Polymarket is buying technical capability and keeping the people who built it close to the center of execution. The real value here may lie as much in absorbed engineering judgment as in any standalone technology stack. In crypto, where systems fail at the seams, keeping the builders in place can matter more than the headline itself.

Polymarket is using the deal to tell a bigger story about where it wants to sit in modern finance. The company says the acquisition advances its broader mission to build scalable, reliable, and globally accessible financial systems powered by blockchain. It also argues that combining both teams will help bridge crypto-native systems with mainstream financial usability. That makes this acquisition read less like a niche DeFi transaction and more like a step toward market infrastructure with wider ambitions. If Polymarket wants to be more than a venue for trading, this is exactly the groundwork it has to own.

Polymarket
Crypto Economy · 15d ago

Anthropic Limits Access To New AI Model Over Cyberattack Concerns

Anthropic is limiting access to its new AI model after the company said it identified thousands of software vulnerabilities across major systems, raising concerns about potential misuse in cyberattacks. The new general-purpose model, Anthropic said, also found high-severity vulnerabilities in every major operating system and web browser. "Given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely," the company warned.

AI has already been widely adopted by hackers to conduct cyberattacks. There was a 72% year-over-year increase in AI-powered cyberattacks, with 87% of global organizations experiencing AI-enabled cyberattacks in 2025, according to AllAboutAI. Anthropic expressed concern over what would happen if similar AI capabilities were used by bad actors.

To combat this, Anthropic announced Project Glasswing on Tuesday, a new initiative that brings together more than 40 companies, including Amazon Web Services, Apple, Cisco, Google, JPMorgan, the Linux Foundation, Microsoft and Nvidia. Project Glasswing will use Claude Mythos Preview's capabilities to defensively find bugs, share the data with its partners and get ahead of threats by patching critical vulnerabilities before bad actors can exploit them.

A zero-day vulnerability is a software bug that can be exploited before anyone with the ability to fix it even knows it exists. Finding and patching them has historically required rare, expensive human expertise, but AI could change the scale and speed of detection. Anthropic said the vulnerabilities it finds are "often subtle or difficult to detect." Many of them are 10 or 20 years old, with the oldest found so far being a now-patched 27-year-old bug in OpenBSD -- an operating system known primarily for its security, it added. It also found a 16-year-old bug in the FFmpeg media processing library, a 17-year-old remote code execution vulnerability in the open-source FreeBSD operating system and numerous vulnerabilities in the Linux kernel.

Mythos Preview also identified several weaknesses in the world's most popular cryptography libraries, algorithms and protocols, including TLS, AES-GCM and SSH. It added that web applications "contain a myriad of vulnerabilities," ranging from cross-site scripting and SQL injection to domain-specific vulnerabilities such as cross-site request forgery, which is often used in phishing attacks. Anthropic claimed that 99% of the vulnerabilities it found have not yet been patched, "so it would be irresponsible for us to disclose details about them."

Anthropic said that this is likely just the beginning of a trend, and the "work of defending the world's cyber infrastructure might take years," but AI will help harden software and systems.

Anthropic
Zero Hedge · 15d ago

Budget Measures to Shield Australia from Energy Chaos

The Climate Council is urging the Albanese Government to use next month's Federal Budget to fund measures that will deliver Australians lasting energy security, not just temporary fixes. The Pedal to the Metal: A Budget to Break Free from Fuel Chaos analysis outlines four key measures the Government can implement to shore up long-term energy security, lower costs and cut climate pollution. The principles focus on electrification, solar and batteries, and cleaner transport to cut bills now and protect Australians against future oil and gas supply shortages and price spikes.

Climate Council CEO Amanda McKenzie said: "The more we can electrify our homes and transport, the more we reduce our reliance on imported oil and gas. That not only cuts costs, it shields Aussie households, farms and businesses from on-going global price shocks.

"Right now many Australians are suffering from petrol price pain; bringing in more EVs will protect families from fuel shocks. In March, Australian EV drivers together saved $50 million in fuel costs.

"It's crucial that our Government does not settle for short-term thinking and short-term fixes; it needs to meet the moment by reducing reliance on fossil fuels and investing in reliable, affordable Australian power from the sun, wind and batteries.

"We're proposing key measures that will set us up for energy security now and into the future: accelerating the roll-out of renewables, electrifying homes, transport and industry, appropriately taxing fossil fuels and using the revenue to fund the transition."

Climate Councillor Greg Bourne, a former BP executive and energy advisor to UK Prime Minister Margaret Thatcher, said: "Unlike the global oil crises of the 1970s, this time there are cheap and abundant alternatives to fossil fuels: renewable energy and electric vehicles are already widely deployed. We just need more.

"The sun doesn't care about the Strait of Hormuz and the wind doesn't care who's in the White House.

"The Australian Government is better placed than most to capitalise on renewable energy solutions. That would be the best response. The worst response would be to double down and commit to long-term investments in the fuels that are driving the price rises being felt by every Australian right now. It's time to embrace the future, not cling to the past."

CHAOS
Mirage News · 15d ago

Where are Fed cuts headed? What Polymarket & bonds are saying

President Trump announced a two-week ceasefire with Iran. Yahoo Finance Senior Reporter Brooke DiPalma outlines where Polymarket users see Federal Reserve rate cuts headed for the rest of this year in light of the ceasefire news, while Lossdog founder and CEO Tom Sosnoff explains what the bond market (^FVX, ^TNX, ^TYX) is indicating about rate cuts. For more prediction market insight, check out the new Yahoo Finance Polymarket Hub.

Polymarket
Yahoo! Finance · 15d ago

Polymarket vs Ostium - How DeFi Handled the Iran Oil Shock

When U.S. and Israeli forces struck Iran on February 28, 2026, crude oil ripped from $66 to $115 in ten days. Two DeFi platforms absorbed the chaos: Polymarket processed $722 million in Iran-related prediction markets, while Ostium's CL/USD perpetual futures handled $849 million. Same trade thesis. Radically different instruments. The Dune Analytics breakdown offers the clearest picture yet of how on-chain derivatives perform under genuine macro stress -- and why neither platform replaced the other.

Pure oil-price exposure favored Ostium by a 17x margin. The CL/USD perp processed $191 million in notional on March 9 alone -- the day Trump declared the war "very complete, pretty much" and oil crashed 30% before rebounding on conflicting signals from Defense Secretary Hegseth. But that comparison misses Polymarket's actual role. Over 500 Iran-related markets -- "Khamenei removed as Supreme Leader," "Strait of Hormuz closure," "ceasefire by date X" -- functioned as oil proxies. A bet on Hormuz closing is fundamentally a supply-disruption trade. When you count these, Polymarket's $722 million sits in the same ballpark as Ostium's $849 million.

The platforms served completely different crowds. Ostium's median order was $1,523 in notional. Polymarket's was $30. That's not a typo. Ostium processed 268 orders above $1 million, totaling $700 million -- 82% of all CL/USD volume. Nineteen trades exceeded $5 million, with the largest hitting $13.2 million. On Polymarket, the biggest oil trade was $500,000, and 99% of trades stayed below $5,758. But participation flipped the script entirely. Polymarket attracted 21,390 unique addresses to oil markets alone -- 22x more than Ostium's 952 CL/USD traders. Across all Iran-related markets, 120,023 addresses placed bets.

A trader who bought Polymarket's "CL hits $90 by end of March" token at 25 cents on February 28 collected $1 seven days later. That's 300% with zero leverage and no liquidation risk. But here's the catch: the identical thesis with a February expiry returned -100%. Oil sat at $67 when February ended. Every single "end of February" threshold market resolved No. The instrument's risk isn't the thesis -- it's the expiry window. Ostium had no such timing trap. A 5x leveraged long entered at $66 on February 25 and held to the $115 peak would have returned roughly 370%. At conservative 2x leverage, about 150%. No expiry date means no binary resolution disaster.

Ostium revealed something prediction markets can't: cross-asset positioning. By March 10, CL/USD was 98% long. Gold was 98% long. S&P 500 was 94% short. The textbook stagflation trade, visible in real-time. The S&P positioning flipped from 98% long to 25% long in under a week. That kind of macro context doesn't exist in Polymarket data -- each market is isolated.

The trade fragmentation across platforms and chains remains the obvious friction. Hyperliquid's HIP-4, now on testnet, puts outcome contracts and perpetual futures on the same margin engine. Shared collateral, portfolio-level risk management, one pool of capital. Had that infrastructure existed during the Iran crisis, a trader could have held a "Hormuz closure" outcome and a CL/USD perp long in the same margin account. For now, that's still two platforms, two chains, two collateral pools.

Polymarket recently grabbed 97% of on-chain prediction market fees following its April 2026 upgrade -- new stablecoin support, faster order matching, smart contract wallets.
Ostium raised $24 million in late 2025 to bring traditional markets on-chain. Both platforms are scaling, but for different users entirely. The Iran episode proved neither instrument obsoletes the other. Prediction markets offer precision and retail access. Perps offer scale and continuous exposure. The real question is whether unified infrastructure can eventually combine both -- and whether that happens before the next macro shock.
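The return arithmetic quoted above is easy to reproduce. A minimal sketch using only the article's own figures (entry and exit prices, token cost, leverage levels); real trades would also incur fees, funding, and slippage, which are ignored here:

```python
# Binary outcome token: "CL hits $90 by end of March", bought at $0.25,
# resolved Yes at $1.00 seven days later.
entry, payout = 0.25, 1.00
print(f"Polymarket return: {(payout - entry) / entry:.0%}")   # 300%
# The identical thesis with a February expiry resolved No: -100%.

# Ostium perp: long CL at $66, exit at the $115 peak.
entry_px, peak_px = 66.0, 115.0
move = (peak_px - entry_px) / entry_px                        # ~74% spot move
for leverage in (5, 2):
    print(f"{leverage}x leveraged return: {leverage * move:.0%}")  # 371%, 148%
```

The asymmetry the article describes falls straight out of the numbers: the binary token's payoff is fixed by the $0.25 entry price and snaps to -100% on the wrong calendar date, while the perp's return scales with the move and the leverage (absent liquidation).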

CHAOS · Polymarket
blockchain.news · 15d ago

Anthropic launches Mythos Preview and industry cybersecurity initiative Project Glasswing

Anthropic formally announced its new AI model, Mythos Preview, on Tuesday, alongside the creation of an industry consortium called Project Glasswing aimed at addressing cybersecurity risks associated with advanced AI systems.

Project Glasswing brings together more than 50 organizations, including Microsoft Corp (NASDAQ:MSFT), Apple Inc (NASDAQ:AAPL, XETRA:APC), Google, Amazon Web Services, the Linux Foundation, Cisco Systems Inc (NASDAQ:CSCO, XETRA:CIS), Nvidia Corp (NASDAQ:NVDA, XETRA:NVD), Broadcom Inc (NASDAQ:AVGO, XETRA:1YD), and others from the tech, cybersecurity, financial, and critical infrastructure sectors. Participants will receive private access to Mythos Preview, which is not yet available to the general public.

According to Anthropic, the initiative allows participating organizations to test Mythos Preview on their systems, identify potential vulnerabilities, and develop mitigations before the model is more widely deployed. The company noted that the effort is part of a broader response to rapid advancements in AI capabilities, which may significantly impact software security and digital defense practices. The company is also committing up to $100 million in usage credits for Mythos Preview and $4 million in direct donations to open-source security organizations to support these efforts.

Anthropic CEO Dario Amodei described Mythos Preview as a significant step forward. While the model was trained primarily for coding tasks, it has demonstrated strong cybersecurity capabilities, including vulnerability discovery, penetration testing, and exploit development.

From a market perspective, Jefferies noted that Anthropic's pre-release of Mythos to enterprise partners signals a partnership-oriented approach rather than direct competition in cybersecurity. The firm described the initiative as "indicative of inflection in the threat environment" and said it could disproportionately benefit platforms like Palo Alto Networks Inc (NYSE:PANW, XETRA:5AP) and CrowdStrike Holdings Inc (NASDAQ:CRWD). Jefferies added that the program underscores the urgency for enterprises to strengthen security defenses in response to rapidly advancing AI capabilities.

Anthropic
Proactiveinvestors NA · 15d ago

Fluor Secures Deal with X-Energy for Advanced Nuclear Development in Texas

Fluor partners with X-energy to develop SMR-based nuclear project in Texas, aiming to deliver clean, reliable power for Dow facility.

Fluor Corporation has entered into a strategic agreement with X-energy to advance the development of a next-generation nuclear energy project at Dow's UCC Seadrift Operations facility in southern Texas. As part of this collaboration, Fluor will undertake Front-End Loading Stage 2 (FEL-2) services, a critical early phase in large-scale infrastructure projects. This stage is focused on refining project scope, conducting feasibility studies, establishing cost estimates, optimizing planning strategies, and identifying as well as mitigating potential risks. The financial value of this initial contract phase has not been disclosed, but Fluor expects to account for it in its first-quarter 2026 financial results.

The proposed project by X-energy involves the installation of four small modular reactor (SMR) units, each with a capacity of 80 megawatts. These advanced reactors are designed to provide a steady supply of clean, carbon-free electricity along with industrial-grade steam to Dow's Seadrift facility. The initiative aims to replace the site's existing aging energy and steam generation systems with a more efficient, sustainable, and reliable solution tailored to industrial energy needs.

This project is being supported by the U.S. Department of Energy under its Advanced Reactor Demonstration Program (ARDP). The ARDP is intended to accelerate the deployment and commercialization of innovative nuclear technologies through collaborative cost-sharing partnerships between the government and private industry. As part of the regulatory process, an application for a construction permit was submitted in March 2025 and is currently undergoing review by the U.S. Nuclear Regulatory Commission.

Fluor brings significant experience to the project, with decades of expertise in nuclear engineering and project execution. According to company leadership, the collaboration underscores confidence in X-energy's advanced reactor technology, which is considered well-suited for providing reliable baseload power in industrial environments. The project is expected to set a precedent for integrating advanced nuclear energy into large-scale manufacturing operations.

X-energy, selected by the Department of Energy in 2020, has been working on the development and licensing of its XE-100 small modular reactor technology, along with a TRISO-X fuel fabrication facility. The company has already completed key milestones, including engineering work and preliminary reactor design, and has progressed in the licensing of its fuel facility in Oak Ridge, Tennessee.

Dow's Seadrift site spans approximately 4,700 acres and is one of the company's largest manufacturing complexes. It produces over 4 billion pounds of materials annually, supporting a wide range of industries such as packaging, footwear, electrical insulation, solar energy components, and medical applications. Once completed, the Seadrift nuclear project is expected to become the first grid-scale advanced nuclear installation in North America dedicated to powering an industrial site, marking a significant step forward in the adoption of clean nuclear energy in heavy industry.

X-energy
chemanalyst.com · 15d ago

Perplexity Revenue Jumps 50% to $450M as It Pivots to AI Agents

The AI startup hit $450 million in annual revenue in March after launching AI agent tools, even as it faces publisher lawsuits over its search engine.

The startup's annual recurring revenue (ARR) climbed to $450 million in March, a 50% jump in just one month, according to figures obtained by the Financial Times. The spike came shortly after the company launched Computer, an AI agent designed to complete tasks, and introduced a new usage-based pricing model that charges users beyond a set number of credits.

For a company that spent the last two years positioning itself as a challenger to Google Search, the change is notable. Perplexity built its early momentum around a chatbot-style search engine, with industry watchers framing it as one of the more credible threats to Google's dominance. But that narrative is beginning to shift. Rather than competing directly with search, the company is moving toward tools that do more than retrieve information. Its new focus is on agents that can act on behalf of users.

The pivot also comes at a complicated moment. Perplexity is currently facing multiple lawsuits from publishers, including The New York Times and Britannica, which accuse the company of copyright infringement and plagiarism. A separate privacy suit alleges that user data was shared with Google and Meta without consent. Perplexity has denied all the claims.

Even so, the company's growth has accelerated. It had already expanded its ARR from $16 million to $305 million over the past two years, but the introduction of agents and new pricing appears to have pushed it into a new phase. In a single month, revenue climbed to $450 million, suggesting that users are willing to pay not just for answers, but for execution. The company now has over 100 million monthly active users across search and agent tools, including tens of thousands of enterprise clients, according to executives. Revenue comes from subscriptions between $20 and $200 monthly, plus the new usage-based model.

At the centre of this shift is Computer, an AI agent that allows users to perform tasks such as shopping, summarising feeds, and sending emails through simple prompts. The company's earlier product, the Comet browser, had already hinted at this direction by integrating agent-like capabilities directly into browsing. Another tool, Model Council, takes a different approach by running queries across multiple AI models at once and presenting their outputs side by side.

This strategy, however, comes with significant costs. Perplexity depends on external providers like OpenAI and Anthropic for model access, meaning every query carries an underlying expense. The company reportedly routes requests to the most efficient model available in an effort to manage those costs, but it remains unprofitable.

Still, investor confidence has held steady. Perplexity was valued at $20 billion in September, a sharp rise from $500 million earlier in 2024. Backers include Nvidia, SoftBank's Vision Fund 2, Jeff Bezos, and Yann LeCun. Compared to its peers, the company is still smaller in revenue terms. But its recent growth highlights something more important than scale. It signals a shift in how AI companies are positioning themselves.

Perplexity, Anthropic
Techloy 15d ago
Read update
Perplexity Revenue Jumps 50% to $450M as It Pivots to AI Agents

Daniela Amodei's Leadership Style, Business Achievements, OpenAI Contributions, Net Worth, Ethnicity and Anthropic Co-Founding Journey

Daniela Amodei stands out as one of the most innovative and influential leaders in artificial intelligence today. As co-founder and President of Anthropic, the company behind the Claude series of large language models, she has helped build one of the world's most valuable and safety-focused AI companies from the ground up. Her journey -- from a liberal arts education and congressional campaigns to OpenAI's VP of Safety and Policy and ultimately to leading Anthropic at a valuation of $380 billion -- challenges every assumption Silicon Valley makes about who gets to shape transformative technology.

Born in 1987 to an Italian leather craftsman father, Riccardo Amodei, and a Jewish-American mother, Elena Engel, a library project manager from Chicago, Daniela grew up in a household that prized craftsmanship, intellectual curiosity, and the desire to make the world better. Those values -- instilled early and carried through every stage of her career -- are the thread connecting a congressional field director in Pennsylvania to the president of one of the most consequential AI companies in existence. Alongside her brother and co-founder Dario Amodei, Daniela is not merely building a company; she is attempting to prove that safe, ethical, human-centred AI can win in the commercial arena -- and she is doing it on terms entirely her own.

Daniela Amodei's Leadership Style: Operational, People-First, and Mission-Driven

Daniela Amodei's leadership is defined by a rare combination of operational precision, deep commitment to people, and an unwavering belief that how a company is built matters as much as what it builds. Where her brother Dario brings the scientific and philosophical vision, Daniela brings the structural intelligence and human infrastructure that transform research ambitions into a functioning, scaling organisation. Her style is less about commanding rooms and more about creating the conditions in which exceptional people do exceptional work.

Operational Excellence as Strategic Advantage: Daniela's most distinguishing quality as a leader is her ability to build and run organisations at speed. At Stripe, she scaled teams from the ground up -- leading risk management, core operations, and underwriting groups simultaneously -- and developed a reputation for clear process design and systematic thinking. This operational muscle became Anthropic's backbone. When the company was founded in 2021 with a small group of former OpenAI colleagues, it was Daniela who architected the hiring processes, cultural norms, and operating systems that allowed Anthropic to scale from a handful of researchers to thousands of employees across multiple continents. As one early investor observed: 'Building the company, hiring a great leadership team, and growing at an incredibly fast pace is Daniela's wheelhouse.'

People-Centric Culture Building: Daniela places the development of people and culture at the centre of her leadership philosophy. In a field dominated by technical founders who often treat operational matters as secondary, she has consistently argued that the quality of a company's culture determines the quality of its output -- including its safety practices. At Anthropic, she has championed the idea that safety cannot be bolted on after the fact; it must be embedded in how the company recruits, retains, and develops talent.
Her background in liberal arts and politics gave her an instinct for human dynamics that is visible throughout Anthropic's approach: the company's emphasis on thoughtful communication, careful policy development, and ethical deliberation reflects her influence.

Collaborative Leadership with a Sibling Co-Founder: The partnership between Daniela and Dario Amodei is one of the most distinctive leadership structures in the technology industry. The two have described a shared set of values -- a desire to make the world better -- that dates back to childhood, and it is this shared foundation that makes their professional collaboration unusually cohesive. Daniela manages the president's portfolio: operations, strategy, execution, partnerships, and culture. Dario leads on the technical vision. The division is clean but not siloed. In interviews, Daniela has been clear that she sees herself as equally accountable for Anthropic's mission, not merely its management. That sense of joint ownership runs throughout the company.

Daniela Amodei's Business Journey: From Liberal Arts to AI Leadership

Daniela Amodei's path to the presidency of a $380 billion AI company is one of the more improbable career arcs in recent tech history -- improbable, that is, only if you expect AI leaders to arrive via a PhD in computer science. Her actual journey is a masterclass in how operational skill, cross-sector experience, and a genuine mission orientation can build the kind of credibility that no credential alone provides.

Early Career in Global Development and Politics: After graduating from the University of California, Santa Cruz with a BA in English Literature, Politics, and Music -- having received a scholarship to study classical flute -- Daniela began her career in global health and international development. She worked with the IRIS Center at the University of Maryland and became a Fellow at Conservation Through Public Health, working directly with the organisation's CEO on strategic grant-making. She then moved into politics, serving as Field Director and Deputy Field Director for Matt Cartwright's successful 2012 congressional campaign in Pennsylvania -- recruiting over 80 volunteers and personally making more than 11,000 voter calls -- and subsequently managed scheduling and communications for Congressman Cartwright in Washington, D.C. These early roles immersed her in coalition-building, risk communication, and the management of complex, multi-stakeholder systems -- skills that would prove central to her leadership at Anthropic.

Stripe: Building Operational Mastery in Fintech: In 2013, Daniela made the pivotal move from politics to technology, joining Stripe as one of its earliest employees. Initially a founding recruiter, she rose rapidly through leadership positions in risk management, core operations, and underwriting. By the time she left for OpenAI, she had led teams of 26 people across multiple functions, managed both direct reports and managers, and developed a deeply practical understanding of how to build systems and teams that scale under pressure. Stripe's culture of rigour, clear thinking, and operational discipline left a visible imprint on her approach, and it is widely credited as the training ground for the company-building skills she would bring to Anthropic.
From OpenAI to Anthropic: The Safety Inflection Point: Daniela joined OpenAI in 2018, initially as an engineering manager before becoming Vice President of Safety and Policy -- the role that would define her transition into AI. At OpenAI, she oversaw safety evaluations and policy frameworks during the development of GPT-2 and GPT-3, working alongside her brother Dario and other colleagues who shared a growing concern about the pace of commercialisation relative to the maturity of safety practices. In 2020, those concerns reached a tipping point: Daniela, Dario, and five other colleagues left OpenAI to found Anthropic -- not merely a new AI company, but a deliberate attempt to establish a different paradigm for how powerful AI systems should be built, governed, and deployed.

Daniela Amodei's Role at OpenAI: Safety Architect and Policy Pioneer

Daniela Amodei's tenure at OpenAI was brief by the standards of her subsequent career, but its influence on her thinking and on the trajectory of AI safety as a discipline was profound. Joining in 2018, she arrived at a company that was rapidly transitioning from a non-profit research organisation into a serious commercial force, and she helped build the internal infrastructure that safety and policy work required to keep pace with that transition.

Building the Safety and Policy Function: As Vice President of Safety and Policy, Daniela was responsible for establishing the frameworks and processes by which OpenAI evaluated the risks of deploying increasingly powerful language models. Her work during the GPT-2 and GPT-3 development cycles was formative -- not only for OpenAI's internal practices but for the broader field's thinking about what responsible AI deployment should look like. Her non-technical background, far from being a disadvantage, gave her a perspective on policy, communication, and societal impact that complemented the engineering-heavy culture she was operating within.

Recognising the Limits of Alignment with Mission: Daniela has spoken candidly about the divergence she felt between her own sense of what a safety-first AI company should look like and the direction OpenAI was taking as commercial pressures intensified. This was not a personal conflict with individuals but a structural tension between the imperatives of a rapidly scaling product company and the patience that genuine safety research requires. That tension ultimately led her and Dario to leave -- and to build Anthropic as an explicit alternative: a company structurally committed to its safety mission, not merely rhetorically.

Daniela Amodei's Leadership at Anthropic: Building the Safety-First AI Company

Co-founded in 2021 with Dario Amodei and former OpenAI colleagues -- two-thirds of the first fifteen employees held PhDs in physics -- Anthropic was designed from the outset to be different. Daniela assumed the presidency, taking on the full breadth of operational, strategic, and cultural leadership while Dario led the technical vision. Under her stewardship, Anthropic has grown from a small research collective into a company valued at $380 billion, with major strategic partnerships with Amazon and Google, a suite of leading-edge models under the Claude brand, and a global workforce of thousands.
Building Claude: From Research to Product: One of Daniela's most tangible contributions as President has been translating Anthropic's research ambitions into a coherent product strategy. The Claude family of large language models -- launched publicly in 2023 and now spanning multiple generations -- represents not only a technical achievement but a commercial one. Claude is designed around Anthropic's Constitutional AI framework, which embeds safety and human value alignment into the model training process itself rather than treating it as an afterthought. Bringing this approach to market, building the commercial partnerships and enterprise relationships necessary to scale it, and ensuring that safety remained non-negotiable even under commercial pressure has been Daniela's operational domain.

Securing the Capital Base for a Safety Mission: Anthropic's ability to pursue its safety-first mission at the frontier of AI capability requires the kind of capital that only the largest institutional investors can provide, and Daniela has been central to securing it. The company has raised billions from Amazon and Google, among others, and its most recent funding round in early 2026 valued the company at $380 billion. These are not passive investments: they represent strategic partnerships that give Anthropic the compute resources and distribution channels necessary to compete with OpenAI, Google DeepMind, and Meta. Navigating those relationships -- maintaining Anthropic's mission integrity while operating within the commercial and competitive realities of the AI industry -- is one of Daniela's most complex ongoing responsibilities.

Championing Diversity in AI Leadership: Daniela's rise to the presidency of a frontier AI lab is itself a statement about who belongs in AI leadership. Women hold only around 10% of CEO and top technical roles at AI-focused organisations; at frontier labs developing large language models, the dominance of male engineers with advanced STEM degrees is even more pronounced. Daniela's path -- from English literature to congressional campaigns to Stripe to OpenAI to Anthropic -- challenges the orthodoxy that technical credentials are the prerequisite for building transformative AI companies. She has said as much explicitly, arguing that operational excellence, people leadership, and ethical reasoning are not softer skills but different skills -- and that AI companies need them urgently.

Net Worth of Daniela Amodei

Daniela Amodei's net worth is estimated at approximately $1.2 billion, derived primarily from her equity stake in Anthropic. The company's valuation of $380 billion as of early 2026, backed by major investments from Amazon and Google, has elevated her into the ranks of the world's youngest self-made female billionaires. In 2025, she was featured on Forbes' list of the 100 Most Powerful Women -- recognition of both her financial standing and her influence on the trajectory of one of the most consequential industries of the twenty-first century. Daniela has also been associated with effective altruism through her husband, Holden Karnofsky, co-founder of Open Philanthropy, and the household's broader philanthropic orientation reflects a consistent commitment to deploying resources towards the greatest possible social good.
A Multifaceted Leader with a Lasting Legacy

Daniela Amodei's journey from a liberal arts graduate to one of the most powerful figures in global technology is more than a personal success story. It is a proof of concept for a different kind of AI leadership -- one grounded in operational precision, humanistic values, and a genuine belief that the way powerful technology is built determines what it does to the world. At Anthropic, she is running one of the most important experiments in the history of technology: whether a company can hold the frontier of AI capability, compete commercially against well-resourced rivals, and remain genuinely committed to safety, alignment, and human welfare. The answer is not yet settled, but the question could not be in better hands.

Named alongside her brother as one of Time's 100 Most Influential People in AI in 2023, and with Anthropic continuing to expand its global footprint, Daniela Amodei is not merely participating in the AI revolution -- she is one of the people deciding what it stands for. Her legacy, still being written, is already remarkable: a woman who brought a classical flute scholarship, a congressional campaign, and a fintech startup's operating handbook to the most technically demanding industry in the world -- and proved, beyond serious doubt, that it was exactly what was needed.

Anthropic
bbntimes.com 15d ago
Read update
Daniela Amodei's Leadership Style, Business Achievements, OpenAI Contributions, Net Worth, Ethnicity and Anthropic Co-Founding Journey

Anthropic launches 'Project Glasswing' after its new AI model finds thousands of zero-day flaws

Anthropic has launched Project Glasswing, a cross-industry initiative to secure critical software systems against a new class of AI-driven cyber threats powered by its unreleased Claude Mythos Preview model. The company warns that frontier AI systems can now identify and exploit vulnerabilities at a scale that rivals or exceeds top human experts.

The initiative brings together major players, including Amazon Web Services, Apple, Google, Microsoft, NVIDIA, Cisco, CrowdStrike, Palo Alto Networks, and the Linux Foundation, alongside JPMorganChase. Participants are being granted controlled access to Claude Mythos Preview to strengthen defenses across widely used software and infrastructure.

Anthropic says the model has already identified thousands of high-severity vulnerabilities, including zero-day flaws affecting all major operating systems and web browsers. Some of these issues had remained undiscovered for decades: examples include a 27-year-old vulnerability in OpenBSD that allowed remote system crashes, and a long-standing FFmpeg flaw missed by millions of automated tests. The model also demonstrated the ability to chain multiple Linux kernel bugs to gain full system control.

Due to concerns about misuse, the company is not releasing Mythos Preview publicly, limiting access to a vetted group of over 40 organizations. Chief science officer Jared Kaplan said the goal is to give defenders a head start as similar capabilities emerge elsewhere. Project Glasswing participants are using the model for tasks such as vulnerability discovery, penetration testing, and auditing both proprietary and open-source code. Anthropic has committed up to $100 million in usage credits and an additional $4 million in funding for open-source security efforts, including support for Apache Software Foundation and Linux Foundation initiatives.

According to benchmark results shared by Anthropic, Mythos Preview scored 83.1% on the CyberGym vulnerability reproduction test, outperforming Claude Opus 4.6. It also achieved strong results across software engineering benchmarks, demonstrating advanced reasoning and autonomous code analysis. Among early adopters, AWS has used the model to analyze critical codebases, while Microsoft observed improved vulnerability detection using its CTI-REALM benchmark.

Anthropic plans to publish findings from Project Glasswing within 90 days, including details on patched vulnerabilities and updated security practices. Focus areas will include disclosure processes, automated patching, supply chain security, and secure-by-design development standards. Although Mythos Preview will remain restricted, Anthropic aims to deploy similar models with stronger safeguards in the future. The company is also working with government agencies as AI-driven cyber capabilities become an increasing national security concern.

Anthropic
CyberInsider 15d ago
Read update
Anthropic launches 'Project Glasswing' after its new AI model finds thousands of zero-day flaws

Anthropic Claude Mythos Preview | CrowdStrike

The Claude Mythos Preview matters for every enterprise. Frontier models raise the ceiling for both offense and defense. Our job is to make sure defenders hold the advantage. That is what we have always done. That is what we do today.

Today, CrowdStrike is a founding member of Project Glasswing. Anthropic builds the model. CrowdStrike secures AI where it executes. That's the division of labor the industry needs. CrowdStrike evaluated the security implications of this model and brings something no other coalition member has: sensor-level visibility across every endpoint in the enterprise. A trillion events a day. 280+ tracked adversary groups. 1,800+ AI applications already discovered across customer environments. This data is what makes AI governance enforceable. CrowdStrike's assessment of Mythos Preview confirms that frontier AI capabilities compound when paired with real-world threat intelligence, enterprise-scale visibility, and machine-speed enforcement.

Frontier AI is not a single product. It is a new category of enterprise infrastructure. Claude Code is changing how developers build software. AI agents are reshaping how enterprises automate operations. Anthropic's Mythos Preview expands the reasoning, planning, and execution capabilities of AI agents. They all touch the endpoint -- where data is accessed, decisions are made, value is delivered, and risk is born.

New models are also where opportunity is largest. The same frontier models that expand the attack surface give defenders a capability advantage that did not exist a year ago: discovering vulnerabilities, detecting threats, and responding to incidents faster than ever before. Adversaries will continue to use the same capabilities for malicious purposes. CrowdStrike's 2026 Global Threat Report found an 89% year-over-year increase in attacks by adversaries using AI. The use of AI for vulnerability discovery and exploit development is accelerating on both sides.

Model safety is the builder's responsibility. Deployment governance is ours. Anthropic develops frontier models under its Responsible Scaling Policy, evaluating capabilities before release and red-teaming for dangerous behaviors. This work addresses what the model can do. It does not address what happens when the model runs inside an enterprise with access to customer data, financial systems, and thousands of users deploying it without governance. When an AI agent connects to a CRM, queries a database, or triggers a workflow, that is not a model safety question. That is a deployment governance question.

CrowdStrike secures AI where it executes: discovery of every AI agent in the environment; visibility into what those agents access and what they do; protection of sensitive data flowing through AI workflows; runtime protection for AI agents connecting to enterprise systems. A frontier model is the engine. Data is the fuel. The platform is how you operationalize it. CrowdStrike's role in this coalition is grounded in capabilities no other member has:
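CrowdStrike does not publish the internals of its runtime enforcement, but the deployment-governance idea described above (intercept an agent's tool calls, check them against policy, log the outcome) can be illustrated with a minimal sketch. Every name here is hypothetical and invented for the example; none of it is CrowdStrike's or Anthropic's API.

```python
# Hypothetical sketch of deployment-time governance for AI agent tool calls.
# All types and names are illustrative, not a real vendor API.
from dataclasses import dataclass, field

@dataclass
class ToolCall:
    agent_id: str
    tool: str           # e.g. "crm.query", "db.write", "workflow.trigger"
    resource: str       # the system or dataset the call touches
    payload: dict = field(default_factory=dict)

@dataclass
class Policy:
    allowed_tools: set[str]          # tools this agent may invoke
    restricted_resources: set[str]   # resources requiring human approval

def govern(call: ToolCall, policy: Policy, audit_log: list) -> bool:
    """Allow, block, or flag a tool call before it executes, and record it."""
    if call.tool not in policy.allowed_tools:
        audit_log.append((call.agent_id, call.tool, "BLOCKED: tool not allowed"))
        return False
    if call.resource in policy.restricted_resources:
        audit_log.append((call.agent_id, call.tool, "FLAGGED: restricted resource"))
        return False  # hold for human review rather than executing
    audit_log.append((call.agent_id, call.tool, "ALLOWED"))
    return True

# Example: an agent allowed to query the CRM but not to write to the database.
log: list = []
policy = Policy(allowed_tools={"crm.query"}, restricted_resources={"payroll"})
print(govern(ToolCall("agent-7", "crm.query", "customers"), policy, log))  # True
print(govern(ToolCall("agent-7", "db.write", "customers"), policy, log))   # False
```

The point of the sketch is the separation CrowdStrike's piece argues for: the model's own safety training says nothing about this layer, which lives at the deployment boundary and produces the audit trail that makes governance enforceable.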

Anthropic
CrowdStrike.com 15d ago
Read update
Anthropic Claude Mythos Preview | CrowdStrike

Why SpaceX could trade like a meme stock after its blockbuster IPO

SpaceX has 'a massive narrative, a founder with a loyal following, and a valuation likely driven in part by future potential', all ingredients for meme-like trading. SpaceX shares have the potential to be volatile after the planned IPO, particularly once the lockup expires and insiders are able to freely sell stock.

From a business perspective, SpaceX may not have much in common with GameStop or AMC. But its stock could end up trading like those classic meme stocks after a highly anticipated initial public offering later this year.

The rocket launcher and satellite maker is planning a public listing that's expected to break records. SpaceX reportedly plans to raise as much as $75 billion at a valuation of $1.75 trillion, which would make for the largest initial public offering on record. Experts say to prepare for stock-market volatility after the company goes public.

"I think the stock will be all over the place," PitchBook analyst Franco Granda told MarketWatch. He said he thinks things could get especially choppy once lockups expire, meaning company insiders and early investors are able to sell their shares after an initial restricted period.

SpaceX has the potential to trade like a meme stock, according to Angel Tengulov, a finance professor at the University of Kansas who has researched how social-media platforms have enabled retail investors to coordinate market-distorting events. Meme stocks are typically characterized by high trading volumes and price volatility, often driven by social-media trends. Their trading action can be "untethered" from business fundamentals and driven by "speculative fervor, viral momentum," all of which can be quite risky, as described by Roundhill Investments, the firm that operates the Roundhill Meme Stock MEME exchange-traded fund.

See: These little-known chip stocks could be winners as SpaceX and Amazon make big satellite pushes

Tengulov said that SpaceX resembles what he called a "narrative stock," since the company is guided by ambitious goals, such as colonizing Mars. Such narratives are "perfect for the internet," he added, likening SpaceX to GameStop (GME) and AMC Entertainment Holdings (AMC), which developed cultlike followings on social media.

SpaceX "clearly has some of the ingredients: a massive narrative, a founder with a loyal following, and a valuation likely driven in part by future potential rather than just current fundamentals," Roundhill CEO Dave Mazza told MarketWatch in emailed comments.

The company, according to Reuters, is earmarking as much as 30% of shares for retail investors and scheduling a meeting with 1,500 of them at an event in June, once its IPO roadshow kicks off. Typically, IPOs allocate just 5% to 10% of shares to retail. But Bret Johnsen, SpaceX's CFO, told bankers on Monday that retail "is going to be a critical part of this and a bigger part than [in] any IPO in history," Reuters reported. A representative for SpaceX did not immediately return a MarketWatch request for comment.

Mazza told MarketWatch that allocating such a large percentage of shares to retail investors increases the odds of meme-stock-like trading behavior after SpaceX lists. Tesla (TSLA), which Musk leads as CEO, also has a sizable base of retail investors who often side with management during key votes, such as the one regarding Musk's controversial compensation package last November.
Read: SpaceX's stock could trade like Tesla 'on steroids' after its IPO, analyst says

Tesla has in the past been referred to as a meme stock, largely because of its share-price volatility and investors' focus on future goals over current fundamentals. JPMorgan's Ryan Brinkman this week warned investors to approach Tesla's stock with a "high degree of caution." Tesla shares rose over 4% in early trading Wednesday as the broader market rallied. Shares of Tesla had dropped 23% so far this year but were up about 56% over the last 12 months ahead of the rally.

"Volatility is the name of the game for $TSLA. Always has been. Will almost certainly be the same for SpaceX stock," Tesla influencer and investor Sawyer Merritt wrote on the X platform, owned by Musk, on Tuesday. "We've been through this many times before."

But Andrew Rocco, a stock strategist at Zacks Investment Research, argues that neither Tesla nor SpaceX, once it's listed, would fit the mold of a meme stock. He pointed out that both companies are profitable, which is not often the case with meme stocks. He does think, he said, that some investors could deem SpaceX's valuation extreme.

SpaceX's December 2025 insider share sale pinned the company's value at roughly $800 billion. Then, after acquiring the cash-burning startup xAI in February, it was valued at $1.25 trillion. SpaceX is now seeking a $1.75 trillion valuation, which would make it more valuable than Broadcom (AVGO) or Tesla based on those companies' current market capitalizations. SpaceX recorded earnings before interest, taxes, depreciation and amortization of $7.5 billion on revenue of about $16 billion, according to PitchBook. A $1.75 trillion valuation would be about 106 times SpaceX's estimated revenue.

As the company prepares for a public launch, experts have said SpaceX's IPO could wind up a friend or foe to the space sector, which itself is no stranger to meme stocks. Roundhill's MEME ETF includes both AST SpaceMobile (ASTS) and Rocket Lab (RKLB) among its top holdings, while JPMorgan recently noted that Planet Labs (PL) and Intuitive Machines (LUNR) are among the most-hyped stocks on social media. On one hand, SpaceX's IPO could bring more attention to the space trade, boosting trading activity and volatility for companies like AST SpaceMobile as investors seek "sympathy" trades, Mazza told MarketWatch via email. On the other hand, SpaceX could attract institutional investors and absorb interest that might've otherwise gone to smaller stocks favored by retail investors, he added.

See: Why Intel is teaming with Elon Musk on an ambitious chip-making venture

-- William Gavin

This content was created by MarketWatch, which is operated by Dow Jones & Co. MarketWatch is published independently from Dow Jones Newswires and The Wall Street Journal.

SpaceX, xAI
Morningstar 15d ago
Read update
Why SpaceX could trade like a meme stock after its blockbuster IPO