The latest news and updates from companies in the WLTH portfolio.
When OpenAI launched in 2015, it promised that artificial intelligence would serve humanity, not shareholders. Today the company sits at the center of a growing conflict over defense contracts, commercialization, and who will control the future of AI. These disputes reveal a deeper issue: how artificial intelligence companies are structured may shape how they balance profit, safety, and national security.

OpenAI's 2025 restructuring was meant to reassure critics. Under a memorandum of understanding with California Attorney General Rob Bonta, the OpenAI Foundation, the nonprofit parent, appoints the entire board of the OpenAI Group Public Benefit Corp., with the attorney general exercising oversight. On paper, this preserves mission control even as outside investors expand their ownership stakes. But a more consequential feature of the restructuring has received less attention: it changes the underlying organizational design in a way that gives OpenAI greater latitude to compete more aggressively in pursuit of profit.

The nonprofit foundation that controls OpenAI's operating business is legally structured as a public charity. Prior to the restructuring, the charity's core function was to govern the for-profit subsidiary and ensure that it developed AI in the interests of humanity. To reinforce this role, the for-profit arm was subject to caps on profit distributions, based on the idea that profits had to be reinvested in the business, which was itself understood to advance the social mission.

Now the OpenAI Foundation resembles European enterprise foundations that control companies such as Novo Nordisk, IKEA, or Carlsberg. The foundation pursues charitable goals and may receive distributions, while the operating company runs a competitive business.
That shift became clearer when OpenAI announced that the foundation would use income from the for-profit to fund philanthropic work, beginning with a $25 billion commitment to health programs, disease research, and AI resilience initiatives. The announcement underscores a growing separation between business activity and philanthropy.

Tensions surrounding Anthropic, OpenAI, and Pentagon contracts are often framed as a clash of values: commercialization versus safety, expansion versus restraint. But there is another explanation. Both Anthropic and OpenAI are controlled by nonprofits, yet their governance structures differ in ways that may shape how they respond to controversial government contracts.

Anthropic is what we call a socially oriented for-profit. The operating company exists primarily to pursue a social mission, namely safe AI development, under the supervision of a Delaware purpose trust with the power to appoint a majority of the board. The trust's role is governance rather than extracting profit.

OpenAI's structure has evolved in a different direction. Although originally structured as a socially oriented for-profit, after the restructuring it now functions primarily as an income-generating for-profit whose surplus can be distributed to a nonprofit parent that funds charitable initiatives.

The distinction between socially oriented and income-generating for-profits, which we develop in an academic article, may seem subtle but has important consequences. On the surface, both Anthropic and OpenAI operate through Delaware public benefit corporations. Directors must balance shareholder interests with the public benefit stated in the charter and the interests of stakeholders, though courts generally defer to board judgment.

Yet their internal logic differs. Anthropic's structure is mission-centered. The company exists to build safe AI, and its commercial decisions are filtered through that mission.
Opportunities in sensitive areas such as defense contracts are naturally evaluated through the lens of safety-first development. While Anthropic's governance structure includes a failsafe allowing sufficiently large stockholder supermajorities to amend the trust and its powers, that provision appears reserved for extreme circumstances, not for directing ordinary business strategy, as the recent clash with the Pentagon suggests.

OpenAI's structure creates different incentives. While it also claims to pursue beneficial AI, including in the recent negotiations with the Pentagon over safety constraints, its drive toward an IPO at a potential $1 trillion valuation creates pressure to focus primarily on generating profits. Some of that economic upside may flow to the OpenAI Foundation, which says it will use those resources for philanthropic goals, such as curing diseases.

OpenAI's restructuring removed earlier understandings that profits would remain largely reinvested in the business. Instead, the company can generate and distribute surplus. This shift aligns investor interests with those of the nonprofit owner, because both benefit when the company produces financial returns. It may also make the company easier to position in traditional capital markets, including a possible future IPO.

Just as importantly, the new structure gives the operating company greater strategic flexibility. Government contracts can be framed as revenue-generating investments that ultimately support charitable purposes through the nonprofit's distributions. There are few clear legal rules specifying how OpenAI's nonprofit parent must balance safe AI development against income generation. "Beneficial AI" isn't a precise operational standard, and boards retain considerable discretion in interpreting how commercial activity advances the mission.
Securing a Pentagon contract can be presented as an investment that strengthens both AI development and philanthropic output, not as a departure from the mission.

Anthropic's structure offers less flexibility. Because its operating company is primarily a socially oriented for-profit, safe AI development is still its core mission. Controversial contracts can't easily be reframed as income-generating opportunities for a charitable parent. This makes trade-offs sharper. Anthropic's investors and customers, including firms involved in defense projects, must weigh the company's safety commitments against commercial realities. If Anthropic declines certain government engagements, it risks losing revenue and strategic position in a market where government demand is increasingly influential. Its governance structure therefore narrows the space for presenting commercial expansion as mission-compatible. That focus may limit maneuverability in an industry defined by enormous capital needs and fierce competition.

The Pentagon dispute highlights a broader lesson about corporate governance. Organizational design influences how boards frame decisions, how investors evaluate risk, and how mission commitments are interpreted. By restructuring itself as an income-generating for-profit, OpenAI has embedded commercialization within its governance architecture. Anthropic has maintained a stronger mission focus on safe AI, but at the cost of reduced strategic flexibility. In frontier AI, where capital demands are vast and national security stakes are rising, those structural differences may matter more than rhetoric about purpose versus profit.

The coming years will test which model of nonprofit control proves more sustainable in the AI economy: the socially oriented for-profit or the income-generating for-profit. That choice may shape how the next generation of AI companies balances safety, profit, and national power.
This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law, Bloomberg Tax, and Bloomberg Government, or its owners. Ofer Eldar is a professor at UC Berkeley School of Law and Senior Research Fellow at the Halle Institute for Economic Research.


SYDNEY, April 22 (Reuters) - The central banks of Australia and New Zealand said on Wednesday they were monitoring the release of Anthropic's advanced Mythos artificial intelligence model, joining authorities around the world in expressing concerns about the new cybersecurity risks it poses. Designed for defensive cybersecurity tasks, Mythos' vast capabilities have sparked fears about the threat to traditional software security, after Anthropic said a preview had uncovered "thousands" of major vulnerabilities in "every major operating system and web browser." Experts have also warned that the model can identify and exploit previously unknown vulnerabilities faster than companies can fix them. The Reserve Bank of Australia said in a statement it was closely monitoring the development and was "engaging with peer regulators, government and regulated entities." The Reserve Bank of New Zealand said it was also in contact with other regulators both domestically and in Australia over what it called the "developing risk" from Mythos. On Tuesday, Bundesbank President Joachim Nagel called the model a double-edged sword, saying: "it could be used not only to improve digital security systems, but also to leverage their vulnerabilities for malicious purposes." Anthropic has introduced Claude Mythos Preview through a tightly controlled program called Project Glasswing. Access has been granted to major technology companies including Amazon, Microsoft, Nvidia, and Apple. The company has also expanded access to more than 40 additional organisations that build or maintain critical software infrastructure. Experts say Mythos' advanced coding and autonomous capabilities could significantly accelerate sophisticated cyberattacks, especially in sectors like banking, where complex, interconnected, and often decades-old systems remain common. (Reporting by Stella Qiu in Sydney; Writing by Alasdair Pal; Editing by Edwina Gibbs)


American AI developer Anthropic said Tuesday it was investigating unauthorized access to Mythos, its powerful model, which the company itself worries could be a boon for hackers. Anthropic said earlier this month it restricted the release of Mythos to 40 major tech firms to give them a head start in fixing cybersecurity vulnerabilities before they could be exploited by attackers. According to Bloomberg, which first reported the probe, a small group of users in a private online forum gained access to the model via the computer system reserved for Anthropic's external vendors. "We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments," an Anthropic spokesperson told AFP. The users got hold of Mythos by various means, including access one of them had as a worker at a contractor for Anthropic, Bloomberg reported. Anthropic works with a small number of third-party vendors who help with model development. The firm has delayed a general release of Mythos, which it says can spot undiscovered security holes that have existed for decades in systems tested by both human experts and automated tools. It shared Mythos first with a few dozen key US tech and financial services players -- such as Nvidia, Amazon and JPMorgan Chase -- to allow them to improve their security infrastructure. But the company has also been accused of overhyping the powers of a technology which is its stock in trade, and the subject of fierce competition with rival OpenAI.


At 125 times 2025 revenue, SpaceX's valuation is, historically, the kind of multiple that ends up weighing on a stock. One of the most anticipated initial public offerings (IPOs) in market history will soon be here. SpaceX has confidentially filed with the Securities and Exchange Commission and is reportedly targeting a June 2026 listing. The $75 billion that SpaceX hopes to raise in the IPO could value the company at $2 trillion-plus, far higher than any previous public offering. That would instantly place it among the six most valuable publicly traded companies in the world, just shy of Amazon. Another unusual aspect of this IPO is that SpaceX could allocate 30% of its shares to retail investors -- at least three times the typical allocation -- yet demand for shares is still likely to exceed supply (making it oversubscribed). So there is potential to get in on the IPO, but it might be expensive. If you manage to purchase $5,000 in SpaceX stock on Day 1, what might that look like five years from now?

While the space-launch business is the face of SpaceX, its financial engine is really Starlink, the satellite internet provider. Starlink generated nearly $12 billion in revenue in 2025, roughly 60% of the company's total revenue. It's also the only part of the business that's really profitable at this point. And it is very profitable, with EBITDA margins (earnings before interest, taxes, depreciation, and amortization as a share of net revenue) above 60%. The launch business is not as profitable at this point, with cash inflows and outflows still roughly equal, but it operates on a scale that no one can match. It's truly dominating the global commercial spaceflight market.
You're also buying a smattering of other Elon Musk businesses, including xAI. Musk says he wants to launch orbital data centers, hoping to gain an edge over competitors like Alphabet's Google, OpenAI, and Anthropic. At present, xAI is burning cash -- about $1 billion per month -- while pulling in minimal revenue.

A $2 trillion valuation would mean SpaceX stock is trading at roughly 125 times 2025 revenue. That is extremely pricey. It's higher than Tesla -- higher, even, than the famously expensive Palantir Technologies. It's also, historically, the kind of multiple that eventually compresses. Still, stocks can carry extremely high multiples for a long time (Palantir being a good example).

The bull case assumes, among other things, that Starlink continues to grow at its current pace and that margins remain high. It also assumes that meaningful progress is made on making orbital data centers a reality, that xAI becomes a real contender in the field, and that its economics improve considerably. The base case assumes solid execution, but with Starlink's growth rate slowing somewhat. It assumes that launch remains dominant and xAI stays in the conversation -- orbital data centers are still a long way off, but investors remain excited by the possibility. The bear case isn't a doomsday scenario (say, a wider market crash), but it assumes that enough doesn't go as planned for the stock to be dragged down by its extreme multiple.

My honest read is that something closer to the bear case unfolds. I think Starlink will continue to grow, and grow fast, but I think there's more of a ceiling than many assume, especially in the developed world. The technology is most valuable to those with the least access to high-quality telecom infrastructure, which also means that its pricing power is ultimately limited.
And while the company is far ahead at the moment, it will soon face stiff competition from global players like Amazon Leo (formerly Project Kuiper) and the Chinese project Qianfan.

Then there's xAI and the vision of data centers in space. While the idea sounds exciting, to me it's peak hype -- all buzz and no substance. The technical limitations are significant, and the idea that "space is cold" is really a misconception. Without going into too much detail, space is a vacuum, and that means -- contrary to what many believe -- that it's actually much harder, not easier, to cool things down. There are also plenty of other issues -- servicing the data centers, replacing spent graphics processing units (GPUs), protecting them from radiation, transmitting the data back to Earth, not to mention the enormous cost of actually launching and assembling these megastructures -- making me extremely skeptical of the vision. And I think the more you read into it, the more you will be too. All of this distracts from the fact that xAI is a wildly unprofitable business with no clear path to changing that.

Of course, the bearish take is my opinion, and plenty of analysts would point to the bull case as the most likely outcome. So what a $5,000 investment looks like five years from now could be very different depending on what we see from SpaceX. That's the nature of high-multiple, high-growth companies. Their futures are much more uncertain -- and the endpoints more divergent.
Johnny Rice has no position in any of the stocks mentioned. The Motley Fool has positions in and recommends Alphabet, Amazon, Palantir Technologies, and Tesla. The Motley Fool has a disclosure policy.

Move into coding tools strengthens Musk's push to own AI infrastructure, software, and compute

* SpaceX signs deal with Cursor, possible $60 billion acquisition
* Agreement allows $10 billion investment or full buyout later
* Cursor develops AI coding tools with rapid revenue and growth
* Deal aligns with SpaceX strategy to expand AI infrastructure capabilities

SpaceX has struck a deal with AI start-up Cursor that could lead to a $60 billion acquisition, an unusually large move for a company best known for rockets and satellite internet. The agreement gives SpaceX the option "to acquire Cursor later this year for $60 billion or pay $10 billion for our work together," the company said in a statement. The structure signals flexibility, but the intent is clear: SpaceX is positioning itself deeper inside the artificial intelligence ecosystem ahead of a potential IPO.

What the Deal Actually Means

At first glance, the combination appears mismatched. Cursor builds AI-powered coding tools, while SpaceX operates launch systems and the Starlink satellite network. But the deal reflects a broader shift in Elon Musk's strategy. Musk has been steadily aligning his ventures around AI infrastructure. SpaceX acquired xAI earlier this year in a deal that reportedly valued the combined entity at $1.25 trillion. The company has also explored AI data centers in orbit and invested in chip manufacturing capabilities. The Cursor deal extends that strategy into software -- specifically code generation, one of the fastest-growing segments of AI.

Why Coding AI Matters Right Now

AI coding tools are becoming a core battleground. Cursor reached $100 million in annual recurring revenue in under two years and raised $3.4 billion from investors including Thrive Capital and Andreessen Horowitz. It was last valued at $29 billion in late 2025, highlighting the rapid growth of developer-focused AI platforms. At the same time, competition has intensified.
Products from OpenAI and Anthropic are gaining traction among enterprise users, particularly for automating software development workflows. Cursor itself acknowledged the limits of its growth, noting that a lack of compute power had "bottlenecked" its ability to scale models. Access to SpaceX's infrastructure via xAI's supercomputing resources directly addresses that constraint. Michael Truell, Cursor's CEO, described the deal as "a meaningful step on our path to build the best place to code with AI."

The Infrastructure Play Behind the Deal

This is not just about software. It's about control. Most companies operate in only one or two layers of the AI stack -- infrastructure, models, or applications. Musk is attempting to integrate all three. SpaceX brings global infrastructure through Starlink and potential orbital data centers. xAI provides model development. Cursor adds a high-utility application layer with immediate enterprise relevance. That combination points toward a vertically integrated AI stack similar in structure, if not scale, to what companies like Microsoft and Google are building.

Why Now: Timing the IPO

The timing of the deal is critical. SpaceX is preparing for what could be one of the largest IPOs in history, potentially as early as June. Expanding its AI narrative ahead of that listing strengthens its positioning with investors, particularly as AI continues to dominate market valuations. But the structure of the agreement, offering either a $10 billion investment or a full acquisition, suggests flexibility. SpaceX can deepen its partnership without immediately committing to the full purchase, depending on market conditions and IPO timing.

Musk's Long-Term Vision

Musk has repeatedly linked AI to his broader ambitions in space. "In the long term, space-based A.I. is obviously the only way to scale," he wrote in a letter to employees, outlining a vision where orbital data centers harness solar energy to power large-scale computation. That idea may still be years away, but the direction is consistent.
The Cursor deal is not an isolated transaction; it fits into a larger strategy of building infrastructure capable of supporting massive AI workloads. The deal highlights a shift already underway. AI is no longer just a software race. It is becoming an infrastructure race, where access to compute, data, and distribution defines competitive advantage. By moving into coding tools, SpaceX is entering a space where adoption is already accelerating. Developer tools are among the earliest and most practical applications of generative AI, making them a logical entry point. At the same time, the move reflects pressure within Musk's own AI ecosystem. xAI has faced internal challenges and increased competition, and integrating a fast-growing application like Cursor could help accelerate its positioning.

The Bigger Play Behind the Deal

SpaceX's potential $60 billion acquisition of Cursor is less about coding and more about control. It signals a shift from building rockets to building the infrastructure that powers intelligence. As SpaceX moves closer to an IPO, the deal positions it not just as an aerospace company, but as a player in the next phase of the AI economy. And Musk is betting that the future of AI will not just run on Earth. It will scale from space.

Given cybersecurity risks in critical software, Mythos has sparked debate about whether its capabilities should be shared with China

In the first of a three-part series on Anthropic's powerful new Mythos AI model, we look at its impact on Chinese AI, cybersecurity and competition with the US.

US start-up Anthropic announced its latest artificial intelligence model, Claude Mythos Preview, on April 7, sparking an unprecedented global response among policymakers and regulators due to its powerful ability to identify and exploit cybersecurity vulnerabilities. Instead of a public release, Anthropic released Mythos to a consortium of US companies including Cisco, JPMorgan Chase and Nvidia to use the model to secure their critical software in an initiative called Project Glasswing.

How do Chinese models compare?

According to Beijing-based consultancy Concordia AI, China's primarily open-source models still lag closed-source US models in their cyber capabilities. However, Chinese models have progressed quickly over the past year, and their cyber capabilities are improving as well.

American AI developer Anthropic said Tuesday it was investigating unauthorized access to Mythos, its powerful model which the company itself worries could be a boon for hackers. Anthropic said earlier this month it restricted the release of Mythos to 40 major tech firms to give them a head start in fixing cybersecurity vulnerabilities before they could be exploited by attackers. According to Bloomberg, which first reported the probe, a small group of users in a private, online forum gained access to the model via the computer system reserved for Anthropic's external vendors. "We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments," an Anthropic spokesperson told AFP. The users got hold of Mythos by various means, including using access one of them had as a worker at a contractor for Anthropic, Bloomberg reported. Anthropic works with a small number of third-party vendors who help with model development. The firm has delayed a general release of Mythos, which it says can spot undiscovered security holes that have existed for decades, in systems tested by both human experts and automated tools. It shared Mythos first with a few dozen key US tech and financial services players -- such as Nvidia, Amazon and JP Morgan Chase -- to allow them to improve their security infrastructure. But the company has also been accused of overhyping the powers of a technology which is its stock in trade, and the subject of fierce competition with rival OpenAI.

Anthropic's AI chatbot, Claude, has long been in the headlines for both the good and the bad. The company has been building strong momentum with both its core AI models and newer ventures like its cybersecurity-focused Claude Mythos. However, as the company scales rapidly, rising user criticism is casting doubt on whether it can sustain product quality alongside its expanding ambitions. Tensions in the artificial intelligence sector are spilling into public view again. And this time, a rival company also had something to say. Recent reactions on X (formerly Twitter) and comments from OpenAI CEO Sam Altman highlight a growing divide, not just in technology, but in how AI firms position their products and influence public perception.

X user slams Anthropic, Altman seizes on comment

A wave of criticism emerged online following claims that Anthropic had removed key features, including "Claude Code", from its Pro subscription tier. One prominent X user described the move as a major misstep, arguing that it undermines the company's core identity as a coding-focused AI provider. The criticism reflects broader dissatisfaction among some users who feel the product experience has not matched the hype surrounding Anthropic's models. The backlash also points to a deeper concern: perceived value. As AI tools become increasingly subscription-driven, users are scrutinising feature access more closely. Removing or restricting capabilities, especially those central to a product's appeal, can quickly erode trust, particularly in a competitive landscape where alternatives are readily available. OpenAI CEO Sam Altman appeared to seize the moment. In a brief but pointed post, he invited users to "come to the light side", a remark widely interpreted as a direct jab at Anthropic. Though light in tone, the comment underscores the intensifying rivalry between leading AI firms, where even subtle messaging can carry strategic weight.
The exchange illustrates how product decisions are no longer confined to internal roadmaps; they are instantly dissected in public forums, shaping brand perception in real time. In an industry defined by rapid iteration, user sentiment has become a critical battleground.

Sam Altman talks about Anthropic's fear-based marketing

Beyond product criticism, Altman has also taken aim at how competitors frame their technological advancements. Speaking on the Core Memory podcast, he questioned Anthropic's approach to promoting its cybersecurity-focused model, Mythos, which the company has described as too powerful for broad public release. Anthropic's positioning, that the model could be misused by cybercriminals, has drawn scepticism from critics who view such claims as exaggerated. Altman suggested that this kind of narrative functions less as a safety precaution and more as a strategic tool. He characterised it as a form of "fear-based marketing", arguing that emphasising potential risks can create an aura of exclusivity around a product. By framing AI capabilities as dangerous or restricted, companies may justify limiting access while simultaneously increasing perceived value. Altman used a vivid analogy to illustrate the point, likening the strategy to warning of an impending threat while offering protection at a premium. The implication is that such messaging can reinforce a model where advanced AI remains concentrated among a small group of users, rather than being broadly accessible. However, the critique is not without irony. The wider AI industry, including OpenAI itself, has frequently invoked existential risks and transformative potential in public discourse. Warnings about AI's societal impact, ranging from job displacement to more extreme scenarios, have become a recurring theme across companies and research communities alike. This dual narrative, highlighting both opportunity and danger, serves multiple purposes.
It can attract investment, shape regulatory conversations and position companies as both innovators and responsible stewards of powerful technology. At the same time, it raises questions about where genuine concern ends and strategic messaging begins.
As it vies to catch up with rivals like OpenAI and Anthropic, SpaceX has struck a deal that could see it purchase the fast-growing AI coding start-up Cursor. In a post on X, SpaceX said the companies were "now working closely together to create the world's best coding and knowledge work AI" and that "Cursor has also given SpaceX the right to acquire Cursor later this year for $60bn or pay $10bn for our work together." In its own statement Cursor confirmed it was partnering with SpaceX "to accelerate our model training efforts", which it said had been stymied by a lack of compute. "With this partnership, our team will leverage xAI's Colossus infrastructure to dramatically scale up the intelligence of our models," it said. Cursor had been widely reported to be raising a $2bn round at a $50bn valuation in recent days, as it sought investment to increase compute, but that raise will now be halted, as the SpaceX deal will offer it all the compute it needs to expand. It is likely that SpaceX has bought the rights to purchase Cursor, rather than acquiring it outright, because the space tech and AI giant is keen to win the race to IPO, and any acquisition of that size would require it to refile for its IPO. Reports have suggested a SpaceX IPO between April and June, which would see it list before rival AI giants OpenAI and Anthropic, both of which are speculated to be planning listings in the near future. Elon Musk has consolidated various businesses over the past year to arrive at a mooted $1.75trn valuation. In February, SpaceX acquired xAI, which in March 2025 had acquired X. Revenue growth from SpaceX's Starlink satellite broadband service is widely credited as the foundation of that valuation. Starlink currently dominates the global satellite internet service industry, with more than 9,000 satellites in orbit and roughly 9m customers. The February merger deal valued xAI at around $250bn, but preceded the departure of all 11 of Musk's co-founders from that company.
Now Musk looks set to buy in the talent he believes he needs to compete with his major rivals. Cursor is one of the fastest-growing AI start-ups right now, and well-regarded, boasting some very high profile investors, including Nvidia, Andreessen Horowitz, Google - and indeed, OpenAI's venture fund. It remains to be seen whether the expensive acquisition goes ahead, or whether both companies could take up the agreed alternative within the deal to pay $10bn for "our work together".
A new artificial intelligence (AI) system developed by Anthropic has reportedly uncovered a large number of security vulnerabilities in an unreleased version of the Firefox browser, raising fresh discussions about the future of cybersecurity and automated code analysis. According to reports, the company's restricted "Mythos Preview" model was used in collaboration with Mozilla to examine the source code of the upcoming Firefox version 150. The system identified around 271 potential security flaws before the software was publicly released. Mozilla stated that the results highlight a significant leap in artificial intelligence-assisted security testing. The company's Chief Technology Officer, Bobby Holley, said the findings demonstrate how advanced AI tools can give defenders an advantage over attackers by identifying complex vulnerabilities much faster than traditional methods. He compared the performance with earlier AI tools, noting that a previous model had detected only a small fraction of issues in an earlier Firefox version. The new system, however, was able to analyze far deeper layers of the codebase and uncover hidden risks that might have taken human researchers months to detect. The development has intensified debate within the technology industry over the role of powerful AI systems in cybersecurity. Some experts argue that such tools could revolutionize digital defense by making software significantly more secure, especially in widely used open-source projects. Others caution that the same capabilities could potentially be misused if placed in the wrong hands, increasing concerns about automated hacking or large-scale vulnerability discovery by malicious actors. Mozilla executives emphasized that modern software development is becoming increasingly dependent on advanced AI-based auditing tools.
They also called for broader access to such systems for trusted developers working on critical open-source infrastructure. As AI continues to evolve, experts say the balance between security enhancement and potential risk will remain a central challenge for the tech industry.

Anthropic has delayed a general release of its latest model Mythos, which it says can spot undiscovered security holes that have existed for decades.

American AI developer Anthropic said Tuesday it was investigating unauthorized access to Mythos, its powerful model which the company itself worries could be a boon for hackers. Anthropic said earlier this month it restricted the release of Mythos to 40 major tech firms to give them a head start in fixing cybersecurity vulnerabilities before they could be exploited by attackers. According to Bloomberg, which first reported the probe, a small group of users in a private online forum gained access to the model via the computer system reserved for Anthropic's external vendors. "We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments," an Anthropic spokesperson told AFP. The users got hold of Mythos by various means, including access one of them had as a worker at a contractor for Anthropic, Bloomberg reported. Anthropic works with a small number of third-party vendors who help with model development. The firm has delayed a general release of Mythos, which it says can spot undiscovered security holes that have existed for decades, in systems tested by both human experts and automated tools. It shared Mythos first with a few dozen key US tech and financial services players -- such as Nvidia, Amazon and JP Morgan Chase -- to allow them to improve their security infrastructure. But the company has also been accused of overhyping the powers of a technology which is its stock in trade, and the subject of fierce competition with rival OpenAI.
* Polymarket introduced perpetual futures on April 21, enabling traders to take long or short positions on prediction markets around the clock
* Competitor Kalshi will unveil its perpetual futures offering, named "Timeless," on April 27 in New York
* Kalshi's platform will feature cryptocurrency perpetual futures, challenging established players like Coinbase and Robinhood
* March 2026 saw prediction market transactions reach a historic peak of 192 million
* Both companies are competing aggressively to capture the derivatives market amid declining crypto spot trading activity

On April 21, prediction markets platform Polymarket rolled out its perpetual futures trading feature, enabling users to take leveraged positions on market outcomes around the clock. This announcement came mere hours after news broke that competitor Kalshi is preparing to introduce its own perpetual futures offering, dubbed "Timeless," scheduled for an April 27 launch event in New York. Perpetual futures contracts, commonly called "perps," are derivative instruments without expiration dates. These instruments allow traders to maintain leveraged positions indefinitely and close them at will, provided they maintain sufficient margin to support the trade. Polymarket characterized its latest offering as enabling users to "go long or short the markets you know 24/7." The platform operates on the Ethereum and Polygon networks and uses USDC for trade settlement. The platform hasn't explicitly stated whether cryptocurrency assets will be included in its perpetual futures lineup. However, its user community has traditionally consisted primarily of cryptocurrency enthusiasts and traders. On April 13, Kalshi CEO Tarek Mansour previewed the "Timeless" product through a mysterious promotional video that disclosed the April 27 release date. Kalshi's offering will incorporate cryptocurrency perpetual futures, positioning it as a direct rival to major platforms like Coinbase and Robinhood.
In recent months, both Coinbase and Robinhood have integrated prediction market capabilities into their platforms. Additionally, Coinbase completed a $2.9 billion acquisition of Deribit, a leading crypto derivatives platform, marking the largest merger and acquisition transaction in cryptocurrency industry history.

Explosive Industry Growth

The prediction markets sector has experienced remarkable expansion recently. Transaction volumes industry-wide exceeded 192 million in March 2026, establishing a new benchmark. Kalshi currently commands an $11 billion valuation and handles more than $100 billion in annualized trading activity. Polymarket holds a $9 billion valuation, maintaining weekly notional volume consistently surpassing $1 billion throughout the first quarter of 2026. During 2025, leading centralized cryptocurrency exchanges reported $86.2 trillion in yearly perpetual futures volume, representing a 47% increase compared to the previous year, based on CoinGecko data. Perpetual futures have maintained popularity in international markets as tools for speculating on near-term price movements, hedging existing positions, and accessing leverage across various market environments.

Intensifying Rivalry

The strategic timing of Polymarket's product reveal seems calculated. By making its announcement ahead of Kalshi's scheduled debut, the platform may be attempting to secure first-mover advantages with traders and liquidity providers. When approached for comment, representatives from both Polymarket and Kalshi declined to provide statements. Both platforms have demonstrated impressive growth trajectories. Their pivot toward perpetual futures products coincides with stagnant cryptocurrency prices and diminished spot trading volumes. Perpetual futures contracts can sustain trading activity even during periods of price consolidation, which likely explains the strategic appeal for both organizations at this juncture.
Kalshi's "Timeless" product remains on schedule for its April 27 launch.
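The perps mechanics described above, a leveraged position that stays open only as long as the trader maintains sufficient margin, can be illustrated with a toy calculation. This is a minimal sketch under assumed parameters: the 5% maintenance-margin fraction and all prices are hypothetical, not actual Polymarket or Kalshi terms, and real venues use their own margin models and funding rates.

```python
# Toy sketch of perpetual-futures margin math (illustrative assumptions only).

def position_value(size: float, price: float) -> float:
    """Notional value of `size` contracts marked at `price`."""
    return size * price

def equity(collateral: float, size: float, entry: float, mark: float) -> float:
    """Collateral plus unrealized PnL for a long position."""
    return collateral + size * (mark - entry)

def is_liquidatable(collateral: float, size: float, entry: float,
                    mark: float, maintenance_margin: float = 0.05) -> bool:
    """A position faces liquidation once equity falls below the
    maintenance-margin fraction of its current notional value."""
    return equity(collateral, size, entry, mark) < maintenance_margin * position_value(size, mark)

# Example: roughly 10x-leveraged long -- $1,000 collateral against $10,000 notional.
collateral = 1_000.0
size = 0.1          # contracts
entry = 100_000.0   # entry price

# Small drawdown: equity 800 vs. required 490 -> position survives.
print(is_liquidatable(collateral, size, entry, mark=98_000.0))  # False
# Deep drawdown: equity 200 vs. required 460 -> liquidation.
print(is_liquidatable(collateral, size, entry, mark=92_000.0))  # True
```

The key property, and the reason perps can trade "around the clock" with no expiry, is that nothing in this arithmetic references a settlement date: the position persists indefinitely while the margin inequality holds.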

On the same Tuesday, April 21, 2026, two heavyweights of the prediction markets struck simultaneously: Polymarket and Kalshi announced their entry into crypto perpetual futures. The decision disrupts the established order in the American derivatives market, pitting Coinbase and Binance against new regulated rivals. Kalshi's CEO, Tarek Mansour, had himself set the scene on April 13 with a cryptic video on LinkedIn: a rotating toroidal shape and a date, April 27, 2026, in New York. The project is called "Timeless." The name sums it all up: crypto perpetual futures have no expiration date. They remain open as long as the trader maintains their margin. Just hours after this information was released by The Information, Polymarket responded on X with a dry formula: "We value the future. Now, you can amplify it with leverage." The platform has formalized the launch of its own crypto perps with leverage of up to 10x. The pre-registration list is open, and the trading environment will be available 24/7. Polymarket's logic is clear. The platform built its reputation on binary event markets: elections, sports results, global news. With crypto perps, it extends this positioning to continuous directional trading. Analysis: the user who knew how to bet on the outcome of an election can now bet on the direction of the crypto market with leverage. A major evolution of its business model! Kalshi is no longer a niche player. The platform shows a valuation of $11 billion and handles more than $100 billion in annualized volume. In March 2026, it passed the $1 billion monthly volume mark on crypto assets for the first time. On the Polymarket side, the figures are equally impressive: over $1 billion in notional weekly volume throughout Q1 2026. In total, the prediction markets sector recorded 192 million transactions in March. An absolute record! It is therefore a fact: the appetite of crypto traders for these platforms is real.
Against them, Coinbase tries to defend its territory. The American exchange put $2.9 billion on the table in August 2025 to buy Deribit. It is the largest international crypto derivatives platform. It then launched perpetual-style futures contracts with a five-year expiration and quadrupled its market share on American derivatives. But the real crypto perpetuals (those used by traders on Binance) remained out of reach for US users until this April. CFTC Chairman Michael Selig stated last month that the agency plans to regulate crypto perps within its regulatory framework. This green light has unleashed initiatives. Kalshi and Polymarket, both holders of a Designated Contract Market (DCM) license issued by the CFTC, are therefore the first regulated platforms to take this step. Their structural advantage over offshore platforms is obvious. In any case, Polymarket and Kalshi no longer just arbitrate the future. They trade it. Their simultaneous launches in crypto perps redraw the landscape of digital derivatives in the United States. For crypto investors, a new era of regulated liquidity opens. To watch closely!

