News & Updates

The latest news and updates from companies in the WLTH portfolio.

Amazon expands Anthropic partnership with $100B cloud deal and $25B investment

WASHINGTON -- Amazon is doubling down on its artificial intelligence investments with a massive new commitment from one of its closest partners. Amazon and Anthropic announced an expanded partnership that includes a commitment from Anthropic to spend more than $100 billion over the next decade on Amazon Web Services infrastructure to power its AI models. The deal marks one of the largest long-term cloud and computing agreements in the rapidly growing AI sector, as companies race to secure the massive computing power needed to train advanced systems. Anthropic's Claude models will continue to run on Amazon's custom-built chips, including its Trainium and Graviton processors. The companies said Anthropic will also secure up to 5 gigawatts of computing capacity, underscoring the scale of demand for AI training. "Our custom AI silicon offers high performance at significantly lower cost for customers," Amazon CEO Andy Jassy said in a statement, adding that Anthropic's long-term commitment reflects growing demand for generative AI tools. Anthropic CEO Dario Amodei said the partnership will help the company keep up with rising usage of its models. "Our users tell us Claude is increasingly essential to how they work, and we need to build the infrastructure to keep pace," he said. The collaboration builds on earlier work between the companies, including "Project Rainier," a massive AI compute cluster designed to train next-generation models. Amazon is also deepening its financial investment in Anthropic, committing an additional $5 billion now, with the potential for up to $20 billion more tied to future milestones. That comes on top of the roughly $8 billion Amazon has already invested. As part of the expanded partnership, AWS customers will also be able to access Anthropic's full Claude platform directly within their existing Amazon accounts.

Anthropic
CBS 8 - San Diego News · 1d ago

Aer Lingus cancels hundreds of flights as summer travel chaos deepens

Thousands of travellers flying from Dublin, Shannon and Cork airports in Ireland may face significant disruption on domestic, European and transatlantic routes during the peak travel season, the newspaper reported. The airline said the schedule changes apply to approximately two per cent of Aer Lingus's overall operations. Aer Lingus said it will reschedule most passengers on alternative same-day services where possible. A spokesperson for Aer Lingus said: "Aer Lingus has commenced operating its planned summer schedule. A number of recent cancellations have been required due to mandatory maintenance on aircraft, along with a limited number of schedule adjustments. "Where schedule adjustments are being made, the vast majority of customers are being reaccommodated on same-day services." As of 25 February 2026, Aer Lingus has joined Ryanair in requiring passengers travelling between Great Britain and Ireland to carry passports. Previously, Aer Lingus allowed a wide range of identification, including a bus pass, work ID card or international student card, as long as it included a photograph. The carrier's spokesperson told The Independent: "All customers, including Irish or British nationals, travelling on Aer Lingus and Aer Lingus Regional services between the Republic of Ireland and the UK will now require a valid passport or Irish passport card. "The other forms of photo ID previously accepted will no longer be valid for travel."

CHAOS
The Independent · 1d ago

SpaceX says it can buy AI coding tool Cursor for $60B later this year

SAN FRANCISCO (AP) -- SpaceX says it has the rights to buy artificial intelligence coding tool Cursor for $60 billion later this year as Elon Musk's space exploration and AI company looks for ways to compete with rivals Anthropic and OpenAI ahead of a planned Wall Street debut. SpaceX said that, alternatively, it could pay $10 billion to "work together" with Cursor. SpaceX announced the deal Tuesday on the social platform X, which along with the AI chatbot Grok is part of a constellation of properties that Musk has merged into his rocket company. Cursor, made by San Francisco startup Anysphere, is a popular AI coding assistant. What SpaceX describes as Cursor's wide "distribution to expert software engineers" is likely part of what makes it attractive to Musk's company, giving it access to a new customer base. Cursor said its new partnership with SpaceX subsidiary xAI will enable it to build future AI products using xAI's massive AI data center complex Colossus, based in Memphis, Tennessee. "We've wanted to push our training efforts much further, but we've been bottlenecked by compute," Cursor said in a statement on X, which didn't mention the possibility of being acquired. "With this partnership, our team will leverage xAI's Colossus infrastructure to dramatically scale up the intelligence of our models." Cursor, which started in 2022, helped spark a trend called "vibe coding" as AI coding assistants have become increasingly capable of doing the work of computer programming. Cursor competes with other coding tools like Anthropic's Claude Code and OpenAI's Codex but also has relied heavily on partnerships with those larger AI research companies for the foundations of its technology. It was Cursor's Composer, combined with Anthropic's Claude Sonnet, that a prominent AI researcher was playing with for weekend projects when he coined the phrase "vibe coding" in early 2025.

SpaceX, xAI, Anthropic
Beaumont Enterprise · 1d ago

Universal Music has shot itself in the foot over AI fair use, says Anthropic

Anthropic wants a judge to throw out the copyright lawsuit filed against it by three music publishers, including Universal, because AI training is fair use. The publishers argue that AI-generated lyrics are diluting the market for songs, but Universal recently told its investors that's not happening. Anthropic has laid out its fair use defence as it continues to battle with the music publishers who have accused the AI company of copyright infringement. In its latest legal filing, Anthropic asks the judge overseeing the case to throw out all the copyright claims made against it by Universal Music Publishing, Concord and ABKCO in their entirety. The new filing explores plenty of relatively complex copyright law technicalities, though one of its key arguments is more straightforward: Universal claims Anthropic's AI model Claude is diluting the market for its music, yet Universal recently told its investors that there was nothing to worry about when it comes to the flood of AI music impacting its revenues. But before it gets to using Universal's own words against it, at the start of its new legal filing Anthropic is a little more philosophical, claiming that when it trains Claude with existing texts - including the music companies' lyrics - the process is like how "a human learns from reading and re-reading books, poems, stories, lyrics" and "then internalises the themes, substance and style of these materials". And just like when a human learns through reading, Anthropic's training of its AI model "endows Claude with capabilities that stretch far beyond the text on which Claude was trained", meaning it can "reason, explain, code and create". Developing AI models in this way, Anthropic then argues, is "the kind of new idea" that the US Copyright Act "not only allows, but encourages". And "allowing copyright holders to veto such a transformative technology" would ignore the US Supreme Court's mandate that "the primary objective of copyright is not to reward the author, but to serve the public". Universal, Concord and ABKCO claim that Anthropic infringed copyrights in lyrics they publish by copying them into its Claude training dataset without getting permission. Anthropic, like most AI companies, insists that AI training constitutes 'fair use' under US copyright law, meaning it doesn't need permission. There are various criteria for assessing whether the use of copyright-protected works constitutes fair use, though two are particularly important in all the AI copyright cases. First, is the use transformative, so the output is sufficiently different to the input? And second, does the use unfairly dilute the market for the original work? The publishers are relying heavily on the second factor in this legal battle, because - while you can use Claude to output lyrics to existing songs - in the vast majority of cases that's not happening. Claude is outputting texts, possibly lyrics, that are totally different to any texts that were copied into the AI model's training dataset. Which is transformative use. In the two judgements we already have from other legal disputes centred on copyright and AI training, one also involving Anthropic, the judges concluded that using copyright-protected works to train AI models was highly transformative. Indeed, as Anthropic notes in its new legal filing, the judge in one of those cases said AI training was "among the most transformative [uses] many of us will see in our lifetimes".
But it may be that copyright owners can still defeat an AI company's fair use defence on market dilution grounds. Although that didn't happen in either of those previous two AI copyright cases, the judge in one of them - involving Meta - said there probably were market dilution arguments that could be used to show that an AI company's use of existing works was not fair use. The publishers have presented arguments of that kind in their bid to get a summary judgement in their favour in this case. But in its latest filing, Anthropic insists that those arguments are "not legally viable" and, even if they were, the publishers "have not met their burden to show undisputed market harm". Probably the strongest market dilution argument put forward by the publishers is that all the AI-generated content now being created every day is starting to compete with human-made content, for both audiences and revenue. For example, because in music streaming there is a finite pot of subscription money to be shared with creators and rightsholders, if any of that money goes to AI-generated music, artists and songwriters, and their labels and publishers, will make less money as a result. That is true, but the legal dispute here is whether that is the kind of market dilution that stops the use of copyright-protected works from being fair use. Anthropic reckons it isn't. It isn't that a specific set of lyrics outputted by Claude is diluting the market value of a specific set of lyrics that were included in the training dataset; instead the market dilution is more generic: human-created works in general will make less money because of new competition from AI-generated works. In the Meta case, judge Vince Chhabria suggested that more generic market dilution of that kind was still a factor that went against a fair use defence. But Anthropic disagrees. "The fair use inquiry is anchored on the problem of substitution, not competition", it writes. Citing previous legal precedent, it claims that the key question is whether or not Anthropic "used an original work to achieve a purpose that is the same as, or highly similar, to that of the original work", not if "a transformative use facilitated the entry of a new competitor". No one set of lyrics outputted by Claude directly substitutes any one set of lyrics copied into the AI's training dataset, which means there is no market dilution, Anthropic reckons. Even if AI-generated lyrics and music are in general terms a new competitor for the music industry. And, anyway, as it currently stands, AI-generated lyrics and music aren't much of a new competitor for human music creators and their business partners in the music industry, or at least that's according to a statement made by Universal on its most recent investor call. Quoting that investor update, Anthropic's filing states, "just last month, Universal Music Group admitted that 'we're seeing no indication that AI royalty dilution is a material issue for UMG from a revenue perspective'". Contradictory messaging from the majors aside, the music publishers will be hoping that the judge in this case will share Chhabria's opinion that a more generic form of market dilution is still grounds for rejecting a fair use defence. If that fails, the publishers do have one more tactic to employ. In the other earlier Anthropic ruling, the judge said AI training wasn't fair use if an AI company used pirated content for its training materials, which Anthropic did.
However, that argument against fair use has ended up in a separate lawsuit being pursued by the publishers.

Anthropic
CMU · 1d ago

Fox's Jen Griffin Reports Iran 'Never Agreed' to Trump's Ceasefire Extension as Strait of Hormuz Chaos Rages On

Fox News chief national security correspondent Jennifer Griffin reported on Wednesday that Iran "never agreed" to President Donald Trump's extension of a ceasefire as chaos in the Strait of Hormuz rages on. Griffin joined Harris Faulkner on The Faulkner Focus on Wednesday, one day after Trump announced an extension to a ceasefire agreement with Iran while he continues to try to hammer out a deal. Griffin reported, however, that Iran "never agreed" to the extension and that the country's shaky leadership structure makes negotiations and clear guidelines more difficult. "Based on the fact that the Government of Iran is seriously fractured, not unexpectedly so and, upon the request of Field Marshal Asim Munir, and [Pakistan] Prime Minister Shehbaz Sharif, of Pakistan, we have been asked to hold our Attack on the Country of Iran until such time as their leaders and representatives can come up with a unified proposal," Trump wrote in a Truth Social post. Griffin reported on the ceasefire and various reports of Iran striking and seizing ships in the Strait of Hormuz, through which roughly 20% of the world's oil supply moves. Iran announced its own blockade in the Strait after Trump's original ceasefire agreement. Griffin reported: Iran never agreed to an extension of the ceasefire and the IRGC Navy continues to attack commercial shipping in the Strait of Hormuz. Since President Trump announced the extended ceasefire, three commercial ships have come under attack from Iran. Early this morning an Iranian Revolutionary Guard Corps gunboat fired at a Liberian-flagged, Greek-owned ship 15 miles northeast of Oman. Iran's Navy fired on the vessel with no warning, according to the British Royal Navy, causing heavy damage to the bridge. All crew were reported safe. Three hours later the U.K. Maritime Trade Operations Centre reported evidence of a second incident, this time eight nautical miles west of Iran. She also reported the U.S. blockade has turned back more than two dozen vessels, but dozens of Iranian ships are still making it through the Strait of Hormuz. Griffin noted that the U.S. military's munitions supply has also taken significant hits during the war, with a reported 45% depletion in precision strike missiles and a roughly 50% depletion in anti-ballistic missile interceptors. Faulkner asked for clarification on the ceasefire extension, arguing Iran's stance is causing confusion. "Just a pause there, because you reported and reminded everybody that Iran never actually agreed to the ceasefire. So that's interesting, because they accused the United States blockade in the Strait of Hormuz of being a violation of the ceasefire. So I mean, so they never agreed to it, but they expect it to be honored," she said. "Well, the real question, Harris, is who is negotiating on behalf of Iran? Who is in charge there?" Griffin responded. "And we really still are not clear at this point. The IRGC and the IRGC Navy does one thing, and their speaker of Parliament says another thing to Pakistan. So it's very difficult to know what is actually happening and what is happening with the leadership of Iran right now."

CHAOS
Mediaite · 1d ago

Mozilla's Firefox Says It Fixed 271 Vulnerabilities With Help From Anthropic's Mythos

Anthropic is not the only company developing AI models focused on identifying software vulnerabilities. Competitors such as OpenAI have also been pushing frontier models in this space, including GPT-5.4-Cyber, a specialized version of its GPT-5.4 model designed specifically for cybersecurity tasks. Some researchers see the release of these models as a major breakthrough in AI-assisted security. Critics, however, question whether the technology will make defense easier or simply give attackers more tools to exploit systems. Because of these concerns, Anthropic said it decided not to release Mythos publicly. Instead, the company opted for a limited rollout to a small number of technology and financial organizations under a program called Project Glasswing. Firefox provides one example of how the model can assist defenders. Using Mythos, Mozilla engineers were able to identify hundreds of vulnerabilities in the browser before shipping the latest release. "Our belief is that the tools have changed things dramatically, because now we have automated techniques that can cover, as far as we can tell, the full space of vulnerability-inducing bugs," Bobby Holley, Firefox's chief technology officer, told WIRED. In practical terms, this means AI tools may be able to scan for potential bugs across far larger portions of code much faster than traditional automated tools. According to Holley, security teams previously relied on a mix of automated techniques, such as fuzz testing, and manual vulnerability hunting by security researchers to find critical flaws. The introduction of powerful AI models signals a shift in the industry, allowing software systems to be analyzed at scale in ways that were previously difficult or impossible. "There were categories of bugs that you could find with human analysis that you couldn't find with automated analysis," Holley said. "Therefore, it was always possible, if you were a threat actor willing to spend many millions of dollars, to find a bug; we tried to drive the price of that as high as possible." However, the arrival of tools like Mythos could also create new challenges for the open-source ecosystem. According to Raffi Krikorian, Mozilla's chief technology officer, powerful organizations may gain early access to advanced AI security tools while smaller open-source projects struggle to keep up. "The underlying economics haven't changed," Krikorian wrote in an opinion essay published in the New York Times. "The most valuable software infrastructure in the world continues to be maintained by people working for free, while the companies building fortunes on top of it never had to pay for its upkeep. Now a powerful new capability has arrived -- and as we've seen repeatedly in tech, there's the risk that organizations with resources will receive it first and learn to protect themselves, while others are left vulnerable." Holley also noted that some large companies are already preparing for the impact. "I've talked to engineering leaders at very large companies who are saying that they're going to be pulling thousands of engineers off everything to be working on this for the next six months," he said. That shift could prove challenging for smaller projects. Many open-source initiatives are maintained by volunteers or small teams with limited resources, while large technology companies have significantly bigger engineering teams and budgets to respond to newly discovered vulnerabilities. 
Holley added that Firefox gained access to Mythos through a partnership between the Firefox team and Anthropic, even though Mozilla itself was not officially part of the Project Glasswing program.

Anthropic
Techloy · 1d ago

Trillion-Dollar Flip-Flop? SpaceX Says Orbital Data Centers May Never Make Money

For more than a year, Elon Musk has been prophesying a new era for AI. The SpaceX CEO has said that building AI data centers in space is a "no brainer," and that it would be the cheapest place to put AI within two to three years. But the company's pre-IPO filing presents a much more conservative outlook. SpaceX is preparing to make what could be the largest initial public offering in history, targeting a valuation of roughly $1.75 trillion with a $75 billion raise. The U.S. Securities and Exchange Commission (SEC) requires companies to submit an S-1 statement before going public, in part to inform potential investors of the risks. SpaceX's S-1 filing, reviewed by Reuters, reportedly admits that orbital data centers may never be commercially viable. "Our initiatives to develop orbital AI compute and in-orbit, lunar, and interplanetary industrialization are in early stages, involve significant technical complexity and unproven technologies, and may not achieve commercial viability," the filing states, according to Reuters. Gizmodo was unable to view the document or verify its contents, and SpaceX did not respond to a request for comment by the time of publication. This cautious filing is a far cry from the lofty ambitions SpaceX outlined in a late-January FCC application. The company requested permission to launch an orbital data center constellation of up to 1 million Starlink satellites, claiming that harnessing the "near-constant" solar power available in orbit will reduce operating costs, energy demands, and the environmental impacts associated with terrestrial data centers. "Launching a constellation of a million satellites that operate as orbital data centers is a first step toward becoming a Kardashev Type II civilization - one that can harness the Sun's full power - while supporting AI-driven applications for billions of people today and ensuring humanity's multiplanetary future among the stars," the application stated, according to SpaceNews. Of course, a key purpose of an S-1 is to disclose risks to investors, so it's not surprising to see a more cautious tone emerge in the pre-IPO filing. Beyond questions of commercial viability, SpaceX acknowledges major technical hurdles, warning investors that any future orbital data centers will operate "in the harsh and unpredictable environment of space, exposing them to a wide and unique range of space-related risks that could cause them to malfunction or fail." Indeed, scientists, satellite experts, and SpaceX competitors have openly criticized the plans outlined in the FCC application, arguing that current technology and capabilities are not sufficient to build and operate orbital data centers - much less a constellation of 1 million. For SpaceX, part of the problem is that the satellites and the rocket it would use to launch them aren't even ready yet. Musk has said that SpaceX could build orbital data centers by "simply scaling up Starlink V3 satellites," which the company has yet to debut. SpaceX will launch them using its Starship rocket, which has yet to demonstrate the full rapid reusability and launch cadence that building an orbital data center would require. "Any failure or delay in the development of Starship at scale or in achieving the required launch cadence, reusability and capabilities thereof would delay or limit our ability to execute our growth strategy," the S-1 filing reportedly states.
It's refreshing to see SpaceX finally acknowledge the enormous challenges and risks that stand in the way of its orbital data center dream. There's a chance this could scare away some prospective investors, but at the end of the day, SpaceX's longstanding dominance over the commercial launch industry and willingness to venture into the booming AI market still make it an attractive investment. Whether the risks will outweigh the potential rewards remains to be seen.

SpaceX
Gizmodo · 1d ago

SpaceX to Buy AI Tool Cursor for $60 Billion: A Game-Changer for Tech Giants | Technology

In a significant move set to reshape the AI industry landscape, SpaceX has announced its intention to acquire the AI coding tool Cursor for a staggering $60 billion. The deal, disclosed on the social platform X, reflects SpaceX's strategy to bolster its competitive edge against AI giants Anthropic and OpenAI. Created by San Francisco-based startup Anysphere, Cursor has gained popularity among expert software engineers. SpaceX, led by CEO Elon Musk, believes that acquiring Cursor and its widespread distribution network could open up new customer bases and revolutionize its AI approaches. Additionally, SpaceX revealed a potential $10 billion investment plan to collaborate with Cursor if the acquisition doesn't proceed. The collaboration aims to utilize xAI's expansive Colossus infrastructure, based in Memphis, Tennessee, to advance Cursor's AI capabilities, pushing the boundaries of computer programming.

SpaceX, Anthropic, xAI
Devdiscourse · 1d ago

The Limits of Chaos Engineering According to AI Systems

In the early days of Chaos Monkey, breaking things at random was almost a badge of honor. Kill a service. Drop a node. Add latency. Watch what happens. That model made sense when most systems were relatively deterministic, and the primary question was simple: Will the application survive if a component disappears? But AI infrastructure has changed the problem. In environments built on LLM pipelines, vector stores, retrieval systems, inference gateways, and automated control loops, random failure injection is no longer enough. In some cases, it is not even the right test. Breaking a node is easy. Breaking a system's ability to preserve its intended behavior under stress is much harder and much more relevant. That is why chaos engineering needs a new layer: intent. As AI systems become more autonomous, resilience can no longer be measured only by uptime. We also need to know whether the system continues to behave correctly when critical assumptions fail. That requires moving from random chaos to intent-based chaos engineering: a methodology where architects define what "healthy" means, then deliberately challenge the system's ability to maintain that state under realistic failure conditions. The difference is simple. Random chaos asks, "What breaks if I inject failure?" Intent-based chaos asks, "Can this system still preserve the outcome it was designed to deliver?" That shift matters more in AI infrastructure than almost anywhere else. The Problem With Random Chaos in AI Systems Traditional chaos experiments are infrastructure-centric. Engineers kill pods, introduce network loss, or terminate processes to verify that failover mechanisms work. These are useful tests, but they often miss the kinds of failures that matter most in AI-heavy systems. A generative AI stack can remain "up" while still being operationally broken. A retrieval layer might respond within SLA, yet return a degraded context. A model gateway may remain available while silently increasing hallucination risk because upstream embeddings have drifted. An inference service may autoscale correctly while downstream rate limiting causes user-facing timeouts. None of these show up cleanly in the old chaos model. In AI-driven infrastructure, the most dangerous failures are often not binary. They are semantic, degradational, and behavioral. This is where intent becomes essential. If the purpose of a retrieval pipeline is to preserve context relevance under load, then resilience testing should validate that outcome. If the purpose of an AI operations system is to maintain stable incident triage during telemetry spikes, then chaos experiments should target that objective -- not just randomly break a component and hope the results are meaningful. Defining the Intent Layer Intent is the operational expression of business logic. It translates human expectations into machine-verifiable conditions. For a distributed AI service, intent might look like this: * Retrieval latency must remain below 300ms * Context recall must stay above an acceptable threshold * Inference failover must not degrade policy enforcement * Critical monitoring signals must remain explainable during incident conditions This matters because AI systems are rarely judged only by infrastructure availability. They are judged by whether they preserve correctness, quality, and trustworthiness under stress. Intent-based chaos engineering starts by making those expectations explicit.
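To make the idea concrete, here is a minimal sketch of how an intent layer like the one above might be declared in code. It is illustrative only: the condition names, the 0.85 recall threshold, and the metric keys are assumptions for the example, not part of any particular chaos-engineering framework.

    from dataclasses import dataclass
    from typing import Callable, Dict, List

    @dataclass
    class IntentCondition:
        # One machine-verifiable expectation the system must keep satisfying.
        name: str
        check: Callable[[Dict[str, float]], bool]

    # Hypothetical intent layer for the retrieval service described above.
    RETRIEVAL_INTENT: List[IntentCondition] = [
        IntentCondition("retrieval_latency_p95_under_300ms",
                        lambda m: m["retrieval_latency_p95_ms"] < 300),
        IntentCondition("context_recall_above_threshold",
                        lambda m: m["context_recall"] >= 0.85),  # threshold assumed
        IntentCondition("policy_enforcement_intact_on_failover",
                        lambda m: m["policy_violations"] == 0),
    ]

    def intent_satisfied(metrics: Dict[str, float]) -> bool:
        # True only while every declared condition holds for the latest telemetry.
        return all(cond.check(metrics) for cond in RETRIEVAL_INTENT)

An experiment then passes or fails on whether these conditions stayed true under stress, not on whether any particular pod stayed alive.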
Instead of saying, "Let's kill 20% of the cluster," the question becomes: * What system behavior are we trying to preserve? * Which conditions threaten that behavior? * How do we validate whether the system remained aligned to intent? That makes the experiment far more useful, especially in production-adjacent environments where blind failure injection can create more noise than insight. From State to Intent Most observability systems are good at reporting the state. They can tell you CPU usage, request latency, pod restarts, error counts, queue depth, or database saturation. What they often cannot tell you directly is whether the system is still fulfilling its intended purpose. Intent-based chaos requires a feedback loop between state and intent. A simplified view: the platform measures state, evaluates it against the declared intent, and feeds any deviation back into the next experiment. This model changes the role of chaos engineering. Instead of being a destructive test harness, it becomes a controlled system for measuring whether the platform can keep delivering the outcomes the business actually depends on. Predictive Stress Injection, Not Random Breakage The next step is stress injection. In a traditional chaos framework, the experiment might be: * Terminate a service instance * Introduce packet loss * Degrade a dependency * Create a network partition In intent-based chaos, the experiment is chosen because it challenges a known operational dependency tied to the target behavior. For example, in an AI retrieval system, you may not care whether a single shard fails in isolation. You care whether shard degradation causes context recall to fall below an acceptable level during peak load. That is a more meaningful experiment. This is also where AI becomes useful. Telemetry and incident history can reveal recurring system patterns: * Vector index imbalance before latency spikes * Cache churn before retrieval degradation * Retry storms after inference gateway saturation * Observability blind spots during backpressure events Instead of injecting arbitrary failure, engineers can simulate the stress signatures that actually precede operational instability. That is a very different kind of chaos engineering -- one grounded in observed behavior rather than randomness. Intent Logic in Practice At a high level, the logic pairs a declared intent with a targeted stress scenario and a validation step; a sketch of that loop appears at the end of this piece. The important thing here is not the syntax. It is the shift in philosophy. The experiment is not evaluating whether the infrastructure stayed alive. It is evaluating whether the system continued to preserve the outcome it was designed to protect. That is the level at which AI systems need to be tested. Autonomous Remediation Needs a North Star Intent also makes autonomous remediation more reliable. In many modern platforms, remediation is already automated to some degree. Systems restart services, scale resources, fail over traffic, or reroute requests when predefined thresholds are crossed. But automated recovery is only as good as the logic guiding it. Without intent, remediation is reactive. It responds to symptoms. With intent, remediation becomes directional. It knows what outcome it is trying to preserve. This is especially important in AI-driven infrastructure, where the "correct" response is not always obvious. If a retrieval system degrades, should the platform rebuild an index, switch to a fallback store, reduce concurrency, or tighten context filters? The answer depends on the operational intent of the service. Intent becomes the system's North Star. That is what makes self-healing architecture more than just automation.
It gives the platform a decision framework. Why This Is Safer for Production One of the biggest objections to chaos engineering in enterprise settings is safety. That concern is fair. Random failure injection in production can be hard to justify, especially in systems that support regulated workloads, customer-facing AI experiences, or security-sensitive operations. Intent-based chaos is safer because it is narrower and more accountable. It does not ask teams to break things blindly. It asks them to define acceptable operating boundaries, simulate realistic threats to those boundaries, and verify whether the platform can recover without violating core expectations. In that sense, intent-based chaos is closer to structured resilience validation than traditional disruption testing. It is a more mature model for environments where uptime alone is no longer the right measure of health. The Next Stage of Chaos Engineering Chaos engineering was originally about teaching distributed systems to survive failure. That mission has not changed. What has changed is the nature of the systems. AI infrastructure is adaptive, stateful, and deeply dependent on the quality of its intermediate behaviors. If we continue to test it with purely random failure models, we will miss the failures that matter most. The future of resilience engineering is not just about causing disruption. It is about preserving intent. That means defining what good behavior looks like, identifying the realistic stressors that threaten it, and building platforms that can detect, validate, and recover against those conditions automatically. Random chaos was a useful first chapter. For AI-driven infrastructure, the next chapter is intentional resilience.
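As a closing illustration of the loop described above (declare the intent, inject the targeted stress, validate the outcome), here is a sketch of what an intent-based experiment runner might look like. The inject_stress and collect_metrics callables are placeholders for whatever fault-injection and telemetry hooks a given platform exposes, and IntentCondition refers to the hypothetical declaration sketched earlier in this piece.

    import time
    from typing import Callable, Dict, List, Tuple

    def run_intent_experiment(
        inject_stress: Callable[[], None],               # e.g. degrade one vector-index shard
        collect_metrics: Callable[[], Dict[str, float]], # placeholder telemetry hook
        intent: List["IntentCondition"],                 # the declared expectations to preserve
        duration_s: int = 300,
        interval_s: int = 10,
    ) -> List[Tuple[float, List[str]]]:
        # Inject the targeted stress, then repeatedly check whether the declared
        # intent still holds; return every observed violation with its timestamp.
        inject_stress()
        violations: List[Tuple[float, List[str]]] = []
        deadline = time.time() + duration_s
        while time.time() < deadline:
            metrics = collect_metrics()
            failed = [c.name for c in intent if not c.check(metrics)]
            if failed:
                violations.append((time.time(), failed))
            time.sleep(interval_s)
        return violations  # an empty list means the intent was preserved under stress

Passing the RETRIEVAL_INTENT conditions from the earlier sketch would make this a single, narrow experiment: degrade one shard, then verify for five minutes that recall, latency and policy expectations all held.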

CHAOS
dzone.com · 1d ago

Amazon $33 Billion Anthropic Deal And The Limits Of AI Infrastructure

On April 20, Amazon announced it was investing up to another $25 billion in Anthropic, bringing its total commitment to $33 billion. In exchange, Anthropic agreed to spend more than $100 billion on AWS infrastructure over the next decade and secured up to 5 gigawatts of compute capacity spanning Amazon's Graviton CPUs and Trainium2 through Trainium4 AI chips. Five gigawatts is roughly the power draw of a mid-sized city. It is also more compute than any single AI company has ever locked in through one contract. And here is the part that matters: most of it does not exist yet. TSMC has not fabricated the chips. The power plants that will feed those data centers are still being built. The cooling systems, the networking fabric, the physical buildings themselves, most of this is capacity that Amazon is committing to construct over the next several years. Amazon did not pay $33 billion for a stake in Anthropic. Amazon paid $33 billion to guarantee that when the compute comes online, Anthropic runs on it. That is a structurally different kind of deal from anything the tech industry has seen at this scale, and the reason it is happening is the single most important fact in AI right now: compute is supply-constrained, and supply does not move at the speed of demand. For most of the last decade, cloud economics worked like this. You had workloads. You rented compute from a hyperscaler. The hyperscaler had plenty of capacity because demand grew incrementally and supply could grow with it. Compute was abundant. It was a commodity. AI broke that model. The demand curve for AI compute is not incremental. It is vertical. Anthropic's annualized revenue run rate hit $30 billion in April 2026, up from $9 billion at the end of 2025. A tripling in four months. Over 1,000 enterprise customers now spend more than $1 million a year on Claude, up from 500 in February. Eight of the Fortune 10 are customers. And Anthropic is not even the largest AI player. Across OpenAI, Anthropic, Google, xAI, and the rest, the aggregate demand for training and inference compute is growing at a rate that the physical world cannot keep up with. The supply side has four constraints, and each one takes years to move. When demand doubles every few months and supply takes three to five years to meet it, the economics change completely. Compute stops being a commodity. It becomes a scarce resource, and scarce resources get locked in by whoever has the capital and the credibility to commit early. That is the frame for understanding Amazon's $33 billion. Amazon is not funding Anthropic. Amazon is pre-committing capacity it plans to build, to a customer whose demand it already knows can absorb it. The money is moving now because the capacity will not exist later unless someone agrees to pay for it before it is built. The second part of the story, and the part most commentary gets wrong, is about what Anthropic is doing across the industry. Anthropic itself frames this as core strategy, describing Claude as "the only frontier AI model available to customers on all three of the world's largest cloud platforms: AWS (Bedrock), Google Cloud (Vertex AI), and Microsoft Azure (Foundry)." That is not a hedge. It is a distribution strategy. At this point, Anthropic has committed infrastructure relationships with all three major hyperscalers. It would be easy to read this as Anthropic playing hyperscalers against each other. That framing is wrong. The hyperscalers are not rivals here.
They are separate ecosystems, and each one has a customer base that Anthropic cannot access any other way. A Fortune 500 company that runs its infrastructure on AWS is not going to switch to Azure just to use Claude. Its security reviews, its compliance audits, its data residency requirements, its identity layer, its existing deployment pipelines, all of it is built around AWS. Same for a Microsoft shop. Same for a Google Cloud customer. The cloud provider is not a vendor you swap. It is a platform you build on top of. So when Anthropic signs a deal with AWS, it is not renting compute. It is buying distribution into the thousands of enterprises that live inside the AWS ecosystem and cannot leave. When it signs with Google, it is buying distribution into Google Cloud's enterprise base. When it signs with Microsoft, it gets access to the Microsoft 365 customer base, which is measured in hundreds of millions of seats. Each hyperscaler has a compute roadmap, a customer base, and a set of enterprise lock-ins that Anthropic cannot build independently. So it is plugging into all three. Not because it is hedging. Because each one reaches a different slice of the enterprise market, and Anthropic's revenue curve only keeps growing if it is available wherever its customers already are. The obvious comparison is OpenAI, and it is worth being specific about where the comparison breaks. OpenAI has a similar set of arrangements: roughly $110 billion in recent funding, including $50 billion from Amazon, $30 billion from Nvidia, and $30 billion from SoftBank, and a $300 billion cloud infrastructure deal with Oracle. On the surface, it looks like the same structure as Anthropic's. Investors wire money, the AI company spends it on infrastructure, the infrastructure providers book the spend as revenue. But the OpenAI and Anthropic structures diverge in one critical way. Oracle has to build most of the capacity OpenAI needs, and Oracle does not have the customer ecosystem that a hyperscaler has. If OpenAI's enterprise revenue curve bends the wrong way, Oracle is stuck with data center capacity it cannot easily sell to anyone else. Oracle is betting on OpenAI specifically, not on the AI compute market generally. Amazon, Google, and Microsoft are in a different position. Their data centers serve tens of thousands of enterprise workloads across hundreds of industries. If AI demand from Anthropic plateaus, the same compute gets redeployed to other customers. The capacity is fungible because the ecosystem is fungible. The hyperscalers are not betting on Anthropic. They are betting on AI compute demand broadly, and using Anthropic as the anchor tenant to justify the build. That is a much more durable structure. It also explains why Anthropic can get three hyperscalers to fund its growth simultaneously: each one is getting a flagship AI workload to justify the capacity expansion its other customers are also pulling on. There are several things worth taking from this that the standard "circular financing" read misses entirely. The frame most commentators are using, that AI is a financial bubble with money going in circles, gets the physics wrong. Money is not going in circles. Capacity is being reserved, years in advance, because the underlying supply curve cannot keep up with demand and everyone in the industry knows it. Amazon paid $33 billion for something that does not exist yet. That is not a bubble signal. It is a signal about how constrained the thing it is paying for actually is, and how long it takes to build.
Watch the capacity reports. Watch the gigawatt numbers. Watch how fast hyperscalers can actually bring new data centers online. That is where the AI industry is being decided now. Not in the chip benchmarks, not in the model leaderboards, and definitely not in the quarterly earnings calls. In the ground, where the buildings are being built, and in the queue at TSMC, where everyone eventually has to wait their turn.

Anthropic, xAI
Forbes · 1d ago

Anthropic aims to expand Mythos access to European banks - report

Anthropic is preparing to make its Mythos AI model available to banks in Europe as lenders worldwide seek to assess the technology after initial access was granted to major US banks, Reuters reported. Cybersecurity specialists consider Mythos a serious challenge for banks and their older technology infrastructure, the report noted. The issue drew warnings from regulators and policymakers at last week's International Monetary Fund Spring meeting in Washington. Several US banks have already received access, while other institutions are still awaiting entry. One person familiar with the matter said Anthropic intends to widen availability of Mythos to banks in Europe and the UK, as well as to other organisations. The rollout is subject to security checks to make sure access is handled safely, that person said, declining to be identified. A second person said European banks could receive access within days, while the first said the process could take days or weeks. Bloomberg had earlier reported that Anthropic was expected to make Mythos available to UK financial institutions soon. Anthropic did not immediately reply to a Reuters request for comment. At the outset, Anthropic offered the model to participants in its Project Glasswing programme and roughly 40 other organisations involved in building or maintaining critical software infrastructure. JPMorgan Chase, a member of Glasswing, is the only bank Anthropic has publicly identified as having access. However, Bank of America has been part of the initiative from the beginning and has been conducting internal testing of Mythos, the news agency said. Other US banks have more recently indicated they have also obtained access, as regulators move quickly to study the cyber risks associated with the new AI model. German central bank president Joachim Nagel said all institutions should be able to access Anthropic's Mythos model in order to preserve a level playing field and reduce the risk of misuse. In India, the central bank is holding discussions with international regulators, domestic lenders and government officials to gauge the possible risks linked to Mythos, according to Reuters. According to those sources, the Reserve Bank of India's early assessment broadly matches that of other regulators: Mythos could heighten cyber risk by speeding up the identification and exploitation of software flaws. The RBI could seek direct talks with Anthropic. India's payments body, the National Payments Corporation of India, is attempting to obtain early access to Mythos with a limited number of banks to examine vulnerabilities and "day zero" cyber risks before any wider distribution. That may prove difficult, however, because Anthropic's Mythos system is hosted on tightly controlled servers in the US, and testing with local data in overseas jurisdictions could be hard to arrange. The latest moves come as regulators in the US and UK discuss with major banks the cyber security concerns tied to Anthropic PBC's newest AI model, amid broader worries about risks to critical financial infrastructure. Bloomberg reported that financial authorities in parts of Asia are also paying closer attention to cyber threats associated with Mythos.

Anthropic
Retail Banker International · 1d ago

Anthropic considers pulling Claude Code from its $20 Pro plan

These moves suggest current pricing models are becoming unsustainable as AI capabilities advance, potentially forcing significant plan restructuring across providers. Wasn't I just saying that flat-rate AI plans are broken? Now we've got more proof, this time coming from the makers of Claude. Eagle-eyed Claude users caught Anthropic tinkering with the signup page for individual Claude Pro and Max plans. Specifically, they saw that Claude Code was nixed from the features available on Claude Pro. The Claude Code bullet point eventually returned to the Claude Pro signup page, and Anthropic Head of Growth Amol Avasare later posted on social media that the change was just "a small test on ~2% of new prosumer signups," adding that nothing had changed for current Pro ($20/month) and Max ($100/month and up) users. But then Avasare expanded on his comments, noting that when Claude Max first launched a year ago, Claude Code wasn't included, Cowork "didn't exist," and "agents that run for hours weren't a thing." "Max was designed for heavy chat usage, that's it," Avasare continued. "Engagement per subscriber is way up. We've made small adjustments along the way (weekly caps, tighter limits at peak), but usage has changed a lot and our current plans weren't built for this." It's the "our current plans weren't built for this" part that caught my attention. Avasare seems to be stating quite plainly what I wrote yesterday: that flat-rate AI plans like Claude Pro and Claude Max are broken, mainly due to the rise in agentic AI applications like Claude Cowork and Claude Code. So, is Anthropic getting ready to yank Claude Code from Claude Pro users? That's the big question. I've reached out to Anthropic for comment and will update this story if I get a worthwhile response. But following GitHub's recent decision to suspend signups for its own flat-rate Copilot plans as well as pull Claude Opus access from its $10/month Pro plan, the writing seems to be on the wall: Big changes are coming to individual AI plans, and agent-focused features like code and desktop AI assistants might be the first to go. If that seems like a bait-and-switch, well, I don't blame you, especially if you ponied up for an annual AI plan like Claude Pro or Max. One would hope that if Anthropic does downgrade its consumer plans, it would either allow current subscribers to keep their existing functionality or offer refunds to those who want them. Then again, if Anthropic were to go ahead and drop Claude Code from the list of Claude Pro features, the move would create an opportunity for Anthropic's competitors -- namely OpenAI with Codex. Replying to Avasare's comments, OpenAI chief and mortal Anthropic opponent Sam Altman trolled the exec with an "ok boomer," hinting that the ChatGPT creator would be all too happy to scoop up those Claude users angered by any upcoming Claude plan changes.

Anthropic
PCWorld1d ago
Read update
Anthropic considers pulling Claude Code from its $20 Pro plan

Anthropic's Passport Checkpoint: How ID Checks Are Locking Chinese Developers Out of Claude

Anthropic's Claude, once a go-to tool for coders worldwide, now demands passports. Government-issued photo IDs. Live selfies. The AI firm rolled out identity verification last week, targeting 'a small number of cases' tied to fraud or abuse. But Chinese founders feel the sting hardest. Their startups grind to a halt. The policy hit quietly. On April 14, Anthropic updated its help center page. Users must show physical documents -- passports, driver's licenses, national IDs -- no scans or copies. A selfie matches the face. San Francisco's Persona Identities handles it all, checking against databases in over 200 countries, including China, per South China Morning Post. Why now? Abuse. Tens of thousands of fake accounts. Millions of interactions siphoned off. Chinese labs, Anthropic claims, farmed responses to distill their own models. Dragonfly's Haseeb Qureshi noted on X: 'Anthropic has been getting mass-farmed by Chinese Labs across tens of thousands of accounts... the scale is staggering.' That explains September 2025's crackdown: bans on firms over 50% owned by Chinese entities, even overseas-registered ones. Revenue hit? Hundreds of millions, executives admitted.

Geofencing AI in a Fractured World

China sits on Anthropic's unsupported list -- alongside Russia, Iran, North Korea, Belarus. VPNs? Flagged. Suspicious patterns trigger checks. Fail, and accounts vanish. Repeated policy breaks. Under-18 use. Terms violations. All ban-worthy post-verification. Chinese developers scramble. Black markets boom. Sellers hawk verified accounts or proxies, per SCMP. Demand stays hot despite the ban covering the mainland, Hong Kong and Macau. One X user posted screenshots of the verification prompt hitting a 'Chinese account' on Claude Max signup. No escape. But it's bigger than code. Founders building abroad -- Silicon Valley startups with Chinese roots -- face extinction. The Information reports Claude Code shut down for some last week, right after the policy drop. Ownership tests snare them. A Singapore entity? Fine, unless majority Chinese-held. Anthropic insists it's narrow. A spokesperson told Business Insider: 'This applies to a small number of cases where we see activity that indicates potentially fraudulent or abusive behavior.' Privacy pitch: Persona holds the data. Anthropic accesses records only. No training use.

Backlash and Black Markets Collide

Users revolt. Privacy hawks flee to ChatGPT and Gemini -- no KYC there. Hacker News threads erupt: Persona's past breaches worry some. A father griped that his 15-year-old's paid account got suspended for age. Chinese voices on X call it 'deep hatred.' The X account Package 包叔 fumed: 'Claude truly harbors a deep hatred for China.' Workarounds proliferate. Proxies. Shared accounts. But risks mount. Bans cascade. TechNode warns: 'The barrier to entry has been significantly raised: individuals without passports are excluded.' No passport? No Claude. This isn't isolated. OpenAI probed DeepSeek for API data theft in 2024, per TechCrunch. US labs fortify. Gaps widen between open-source chasers and frontier labs. Chinese models like DeepSeek thrive anyway -- half their top researchers homegrown, a Hoover study found. Anthropic bets safety trumps access. Founders pay the price. Chinese innovators pivot to domestic tools or rivals. Geopolitics invades the prompt bar. AI's border walls rise higher.

Anthropic
WebProNews1d ago
Read update
Anthropic's Passport Checkpoint: How ID Checks Are Locking Chinese Developers Out of Claude

SpaceX to Let Cursor Train Its AI on xAI Supercomputer, Teases $60B Acquisition

SpaceX has struck a deal with AI coding platform developer Cursor that will allow it to use xAI's Colossus supercomputer to "create the world's best coding and knowledge work AI." On X, SpaceX said, "the combination of Cursor's leading product and distribution to expert software engineers with SpaceX's million H100 equivalent Colossus training supercomputer will allow us to build the world's most useful models." As part of the deal, SpaceX will retain the option to purchase Cursor for $60 billion later this year. If it doesn't, it will owe Cursor $10 billion "for our work together." All of this comes weeks before SpaceX's impending IPO, which could value the company as high as $1.75 trillion. Although SpaceX has been primarily a rocket company for most of its existence, it has diversified in recent years. While development of the Starship launch vehicle continues apace, as of 2026 SpaceX also serves more than 10 million Starlink customers. In February, it also absorbed Grok developer xAI, making it the parent company of X and bringing most of SpaceX CEO Elon Musk's various ventures under a single banner ahead of the IPO. The Cursor deal is just the latest step in boosting SpaceX's value ahead of its IPO, but it also solves a number of problems facing SpaceX, xAI, and Cursor. It means Cursor can train its own AI model(s) on xAI's massive dataset, and would no longer be dependent on OpenAI and Anthropic to enhance its coding toolsets. In a Tuesday blog post, Cursor says it "released Composer less than six months ago as our first agentic coding model. After that, Composer 1.5 scaled reinforcement learning by over 20x. Composer 2 then added continued pretraining, reaching frontier-level performance at a fraction of the cost of other models. Each step up in compute has translated to meaningfully more capable models." It acknowledged, however, that Cursor has "been bottlenecked by compute," so the SpaceX deal means "our team will leverage xAI's Colossus infrastructure to dramatically scale up the intelligence of our models." The move also gives xAI its own coding tool to better compete with rival AI firms, and it adds even more narrative and actual value to SpaceX. Cursor's momentum can now fuel the continued expansion and growth of xAI and Grok, which has struggled to maintain relevance versus ChatGPT, Gemini, and other AI chatbots. SpaceX also locks in a set price for its potential purchase of Cursor, which, at the rate Cursor's valuation has grown, is a victory in itself. Cursor was valued at just $2.5 billion in January 2025, but that jumped to $29.3 billion by year's end, The Wall Street Journal reports. Last week, it was looking at a funding round that would push its estimated value to over $50 billion. However, the deal also represents a strategic risk for SpaceX. Although Cursor and xAI may be able to develop a proprietary coding tool to compete with other major AI companies, doing so will take time. If the effort takes too long, or the tool never quite catches up, SpaceX could be saddled with a company that peaked before it was purchased. That's on top of the debt it took on through the mergers with xAI and its subsidiary, Twitter/X. Fortunately for Musk and his fellow SpaceX shareholders, the IPO will probably come before the gamble needs to show its returns. But with Musk claiming xAI needs to be rebuilt from the ground up, how it is rebuilt may go a long way toward deciding its and Cursor's long-term future.

xAI, SpaceX, Anthropic
PCMag UK1d ago
Read update
SpaceX to Let Cursor Train Its AI on xAI Supercomputer, Teases $60B Acquisition
