News & Updates

The latest news and updates from companies in the WLTH portfolio.

Malaysia Airlines & MAG Break Profit Records Despite Fuel Price Chaos -- Strong India, China & Australia Travel Demand Fuels Growth

Published on April 5, 2026

Malaysia Airlines and its parent company, Malaysia Aviation Group (MAG), have achieved remarkable success in 2025, defying the odds in an increasingly volatile global aviation landscape. Despite ongoing geopolitical uncertainties and relentless pressure from fluctuating fuel prices, MAG has posted record-breaking profits, more than doubling its net earnings compared to the previous year. This performance comes as travel demand from high-growth markets like India, China, and Australia continues to surge, revitalizing the airline's international network.

In 2025, the airline saw a sharp increase in passenger numbers, driven by expanded routes and restored services to key destinations such as Paris and Brisbane. This surge in demand has not only fueled Malaysia Airlines' recovery but has also significantly boosted Malaysia's tourism and hospitality industries, with the country seeing a rise in international visitors and growing hotel occupancy rates. As Malaysia Airlines and MAG continue to strengthen their position in the global aviation market, their strategic expansion and focus on enhancing the customer experience promise a bright future for travelers and the tourism sector in Malaysia. Whether it's for leisure or business, Malaysia remains an increasingly attractive destination, with more frequent flights and enhanced connectivity to Asia, Europe, and beyond.

Malaysia Airlines, the flagship carrier of Malaysia, has continued to thrive in 2025 despite the global challenges posed by geopolitical uncertainties and fluctuating fuel prices. The airline, part of the larger Malaysia Aviation Group (MAG), has made significant strides, reporting impressive profits while also seeing growing demand across key international routes.
This robust performance is attributed to the airline's expanded network, fleet modernization, and a surge in travel demand from high-growth markets such as India, China, Australia, and New Zealand. As travel demand intensifies, the airline and hospitality sectors are set to benefit substantially, shaping a bright future for Malaysia's tourism industry.

Malaysia Aviation Group's Record-Breaking Performance

Malaysia Aviation Group (MAG) achieved a major financial milestone in 2025, more than doubling its net profit to RM137 million, up from RM54 million in 2024. Despite a volatile global landscape and fluctuating fuel prices, MAG has demonstrated resilience by capitalizing on strong passenger demand, particularly on routes to India, China, and Australia. The airline's Available Seat Kilometres (ASK) increased by 16%, and the total number of passengers carried rose by 12% to 18.6 million. The rise in passenger numbers, along with a solid load factor of 81%, highlights the ongoing recovery and growth within the aviation sector.

Fuel Price Volatility -- A Key Challenge for Airlines

One of the major concerns facing airlines worldwide in 2025 is the volatility of fuel prices. Geopolitical uncertainties, particularly in the Middle East, have created fluctuations in oil prices, putting immense pressure on the aviation industry. Malaysia Airlines has not been immune to these pressures, but it has implemented strategies such as fuel hedging to mitigate the impact of rising fuel costs. These strategies, combined with strong market demand, have allowed the airline to maintain profitability and even exceed its financial goals. As fuel prices remain volatile, travelers can expect slight increases in airfares but will benefit from more direct and frequent flights to key destinations, particularly from India, China, and Australia.
Strong Travel Demand from India, China, and Australia

The demand for travel to Malaysia has surged, particularly from the high-growth markets of India, China, and Australia. In 2025, the airline experienced a significant rise in bookings from these countries, reflecting the growing appetite for travel in Asia. With India's travel market booming, especially for long-haul routes, Malaysia Airlines has capitalized on the increased passenger traffic. Additionally, demand from China, spurred by the easing of visa restrictions and renewed interest in international travel, has bolstered Malaysia Airlines' bottom line. Australia, with its strong cultural and historical ties to Malaysia, remains a key market, with the airline increasing its flight frequencies to major cities such as Sydney and Melbourne. This surge in travel demand has not only benefited Malaysia Airlines but has also positively impacted the country's hospitality sector, with hotels and resorts experiencing higher occupancy rates.

Malaysia's Tourism Sector -- A Booming Destination for Travelers

Malaysia's tourism industry continues to grow, thanks to increased international travel and a thriving domestic tourism market. In 2025, the country welcomed over 42 million international visitors, a notable increase of 11.2% compared to the previous year. The surge in visitors is a result of increased air connectivity, relaxed visa regulations, and the country's extensive efforts to promote its cultural heritage, natural beauty, and modern infrastructure. The growing influx of tourists is not limited to urban destinations like Kuala Lumpur, Penang, and Langkawi. Malaysia's remote islands and pristine beaches are also drawing significant numbers of travelers. For instance, eco-tourism destinations in Borneo, including Sabah and Sarawak, are benefiting from increased interest in sustainable travel and nature tourism.
Travelers seeking adventure and natural beauty are flocking to these regions, contributing to the overall success of the country's hospitality industry.

Malaysia Airlines' Role in Connecting Travelers to Top Destinations

Malaysia Airlines has been pivotal in connecting travelers to Malaysia's key tourist destinations. With an expansive route network covering Asia, Europe, Australia, and the Middle East, the airline has become a gateway for international visitors. The airline's commitment to modernizing its fleet with new aircraft, such as the Boeing 737-10 and Boeing 737-8, ensures that passengers enjoy a comfortable and efficient travel experience. Notably, Malaysia Airlines has resumed services to popular destinations such as Paris and Brisbane, while increasing flight frequencies to key cities in India, Australia, the Maldives, and Bangladesh. These developments are expected to enhance Malaysia's position as a leading tourist destination in Southeast Asia.

Hospitality Industry Growth Fueled by Increased Arrivals

The rise in international visitors has had a significant positive impact on Malaysia's hospitality industry. As more travelers arrive in Malaysia, both luxury and mid-range hotels are experiencing increased bookings, resulting in higher revenue per available room (RevPAR) and stronger overall performance. In Kuala Lumpur, major hotel chains such as Hilton, Marriott, and InterContinental are benefiting from the surge in tourism, while smaller boutique hotels and eco-resorts in destinations like Langkawi, Penang, and the Cameron Highlands are also seeing increased demand. Hotels and resorts in Malaysia have been quick to respond to this growing demand by offering competitive packages, expanded services, and enhanced amenities to attract international visitors. Many hotels are focusing on the growing trend of sustainable tourism, providing eco-friendly options and local experiences that align with travelers' values.
As the number of international tourists continues to rise, the hospitality industry is expected to continue thriving.

Travel Tips for Tourists Visiting Malaysia in 2026

As Malaysia's tourism industry continues to grow, travelers are advised to plan ahead to make the most of their visit. Here are some travel tips for tourists looking to explore Malaysia in 2026:

* Book Flights Early: With increased demand for travel to Malaysia, it's recommended to book flights well in advance to secure the best deals and preferred travel dates. Malaysia Airlines and other carriers offer competitive rates and frequent flights to major international hubs.
* Explore Beyond the Cities: While Kuala Lumpur and Penang are popular tourist hotspots, travelers should venture out to Malaysia's lesser-known destinations such as the islands of Borneo, the Cameron Highlands, and the historical town of Malacca.
* Embrace Local Culture: Malaysia's diverse culture is one of its main attractions. Be sure to experience local festivals, try traditional Malaysian food, and visit the country's many cultural landmarks, including temples, mosques, and colonial architecture.
* Stay Sustainable: As Malaysia promotes sustainable tourism, consider staying in eco-friendly accommodations and participating in conservation activities, such as wildlife tours and nature hikes.

Flight Details from Malaysia Airlines

Malaysia Airlines offers a range of international flights that connect travelers to key cities across Asia, Australia, and Europe. Here are some flight details for popular routes:

* Kuala Lumpur to London (LHR): 3x weekly direct flights, flight duration 13 hours.
* Kuala Lumpur to Melbourne (MEL): 4x weekly direct flights, flight duration 7 hours 30 minutes.
* Kuala Lumpur to Beijing (PEK): Daily direct flights, flight duration 6 hours 45 minutes.
* Kuala Lumpur to Sydney (SYD): Daily direct flights, flight duration 8 hours.
Traveler's Action Checklist

Before you embark on your Malaysian adventure, here's a handy checklist to ensure you have everything covered:

* Book Flights: Secure your Malaysia Airlines tickets early for the best deals.
* Apply for Visa (if required): Check visa requirements based on your nationality and ensure your passport is valid.
* Health Precautions: Ensure you have the necessary vaccinations (e.g., Hepatitis A, Typhoid) and travel insurance.
* Currency Exchange: The local currency is the Malaysian Ringgit (MYR). It's recommended to exchange some currency in advance or use ATMs available throughout Malaysia.
* Local SIM Card/Internet: Stay connected by purchasing a local SIM card at the airport or from convenience stores.

FAQ

1. What are the most popular tourist destinations in Malaysia? Kuala Lumpur, Penang, Langkawi, Borneo (Sabah and Sarawak), and the Cameron Highlands are among the most visited spots in Malaysia. Each offers a unique experience, from urban exploration to nature and beach getaways.

2. How can I travel around Malaysia efficiently? Malaysia has an extensive transportation network, including affordable domestic flights, trains, buses, and taxis. Consider using Malaysia Airlines for quick intercity travel, especially for longer distances.

3. Is Malaysia a safe destination for tourists? Yes, Malaysia is considered a safe destination for travelers. However, like any other country, it is always advisable to exercise caution, avoid risky areas, and follow local guidelines, especially when it comes to health and safety.

Malaysia Airlines and Malaysia Aviation Group (MAG) have shattered profit records in 2025, overcoming geopolitical turbulence and rising fuel prices. With strong demand from India, China, and Australia, the airline is fueling Malaysia's tourism and hospitality sectors to new heights.
Wrapping Up

Malaysia Airlines and its parent company, Malaysia Aviation Group, have shown remarkable resilience in 2025, overcoming challenges such as volatile fuel prices and geopolitical risks. The strong demand from key international markets like India, China, and Australia has not only boosted the airline's performance but has also positively impacted Malaysia's tourism and hospitality industries. With an expanding flight network, modernized fleet, and a thriving tourist sector, Malaysia remains a top destination for travelers in 2026 and beyond. Whether you are traveling for business, leisure, or adventure, Malaysia offers a diverse range of experiences to suit every traveler's needs.

CHAOS
Travel And Tour World · 20d ago

Anthropic reveals $30bn run rate, plan to use new Google TPU

Broadcom's building the silicon and is chuffed about that, but also notes Anthropic remains a risk

Broadcom has announced that Google has asked it to build next-generation AI and datacenter networking chips, and that Anthropic plans to consume 3.5GW worth of the accelerators it delivers to the ads and search giant.

News of the two deals emerged today in a Broadcom regulatory filing that opens with two items of news. One is a "Long Term Agreement for Broadcom to develop and supply custom Tensor Processing Units ("TPUs") for Google's future generations of TPUs." Google and Broadcom have collaborated to produce custom TPUs. Broadcom CEO Hock Tan recently shared his opinion that hyperscalers don't have the skill to create custom accelerators and predicted Broadcom's chip business will therefore win over $100 billion of revenue from AI chips in 2027 alone. Working on next-gen TPUs for Google will presumably help to make that prediction a reality.

So will the second part of Broadcom's announcement: a "Supply Assurance Agreement for Broadcom to supply networking and other components to be used in Google's next-generation AI racks through up to 2031."

Broadcom's filing also revealed one user of Google's next-gen TPU will be Anthropic, which, starting in 2027, "will access through Broadcom approximately 3.5 gigawatts as part of the multiple gigawatts of next generation TPU-based AI compute capacity committed by Anthropic."

The filing also includes a notable risk disclosure that sounds an awful lot like Broadcom putting on the record that the financial arrangements that will make it possible to deploy 3.5GW worth of custom TPUs for Anthropic carry enough risk to warrant mention in a regulatory filing.

In its announcement about the deal, Anthropic seemingly tries to reassure markets about its financial affairs by revealing that "Our run-rate revenue has now surpassed $30 billion -- up from approximately $9 billion at the end of 2025."
"When we announced our Series G fundraising in February, we shared that over 500 business customers were each spending over $1 million on an annualized basis," Anthropic wrote. "Today that number exceeds 1,000, doubling in less than two months."

Yet Broadcom still worries about the AI upstart. Google's take on the announcements points out that in addition to renting TPUs, Anthropic is a big Google Cloud customer. Anthropic pointed out that it also uses AWS's Trainium AI chips, plus Nvidia kit, so it can "match workloads to the chips best suited for them."

Anthropic
TheRegister.com · 20d ago

Anthropic Eyes $200 Million Boost in New Venture

Anthropic is in discussions to invest roughly $200 million in a new private-equity venture, according to a report in the Wall Street Journal. Major private-equity firms, including General Atlantic, Blackstone, and Hellman & Friedman, are also in talks to back the initiative. The involvement of these established players signals strong confidence in the venture's potential and underscores the growing appetite for private-equity investments tied to AI.

Anthropic
Devdiscourse · 20d ago

Anthropic in talks to invest $200 million in new private-equity venture - WSJ


Anthropic
Market Screener · 20d ago

Anthropic plans $200m AI venture with private equity to drive adoption | investingLive

Consulting + deployment model emerging as key revenue driver

AI developer Anthropic is in discussions to invest roughly $200 million into a new private-equity-backed venture aimed at accelerating enterprise adoption of artificial intelligence tools, according to people familiar with the matter. The proposed initiative, which could raise up to $1 billion in total funding, is expected to include participation from major buyout firms such as Blackstone, General Atlantic, and Hellman & Friedman. The structure would effectively create a dedicated platform to deploy AI solutions across private-equity portfolio companies, positioning Anthropic at the centre of a growing push to monetise AI in the corporate sector.

The venture is expected to operate as a hybrid consulting and implementation arm, helping businesses integrate Anthropic's AI tools, particularly its Claude models, into core operations. The focus extends beyond incremental productivity gains, with an emphasis on automating broader business functions across industries.

The move highlights an intensifying race among leading AI firms to capture enterprise demand. Anthropic and OpenAI are increasingly targeting corporate clients as a key revenue driver, as adoption shifts from experimentation toward large-scale deployment. OpenAI is reportedly pursuing a similar strategy, exploring its own joint venture model with private-equity partners to embed AI tools directly within portfolio companies.

Private-equity-backed businesses represent a particularly attractive entry point. These firms are typically under pressure to improve efficiency and margins, making them more receptive to automation initiatives. Moreover, private-equity sponsors can standardise technology adoption across multiple portfolio companies, allowing AI providers to scale rapidly through a single relationship. The broader trend reflects a shift in how AI is commercialised.
Rather than relying solely on software subscriptions, firms like Anthropic are increasingly bundling tools with advisory and implementation services to drive deeper integration and stickier revenue streams. Anthropic has already taken steps in this direction, including a separate $100 million programme to support consulting firms deploying its technology. The company generates most of its revenue from enterprise use of its chatbot and coding tools and is reportedly exploring a future IPO.

Taken together, the planned venture underscores a key evolution in the AI cycle, from hype and experimentation toward industrial-scale deployment -- with private equity emerging as a critical distribution channel for enterprise adoption.

Anthropic
investingLive · 20d ago

Sources: Anthropic plans to invest $200M in a new venture with PE firms to sell AI tools to their portfolio companies; it's in talks to raise $1B for the effort

Peter Kafka / Business Insider: OpenAI's ambitions are easy to see. So are the doubts about its CEO. (2/11) In the fall of 2023, OpenAI's chief scientist, Ilya Sutskever, acting at the behest of fellow board members and with other concerned colleagues, compiled some 70 pages of memos about Altman and his second-in-command, Greg Brockman -- Slack messages and H.R. documents, some photographed on a cellphone to avoid detection on company devices. One memo begins with a list: "Sam exhibits a consistent pattern of..." The first item is "Lying." Separately, Dario Amodei -- who left to co-found Anthropic -- kept years of private notes on Altman and Brockman. More than 200 pages of related documents, never before publicly disclosed, have circulated in Silicon Valley. In one document, Amodei writes that Altman's "words were almost certainly bullshit."

Anthropic
Techmeme · 20d ago

Aave faces key exits as Chaos Labs flags V4 upgrade risks | FXStreet

Chaos Labs' departure follows exits by BGD Labs and ACI amid Aave's V4 transition.

Chaos Labs announced on Monday that it will step down from its risk management role in Aave, according to a governance forum post by CEO Omer Goldberg. The firm, which has overseen risk management across all Aave V2 and V3 markets and networks since November 2022, was responsible for pricing loans across the protocol. During its tenure, Aave reported zero material bad debt, while total value locked (TVL) grew from $5.2 billion to over $26 billion. Cumulative deposit volume also surpassed $2.5 trillion, with liquidations exceeding $2 billion.

Chaos Labs stated that it engaged with DAO contributors "in good faith," noting that Aave Labs had proposed increasing its budget to $5 million to retain its services. However, the engagement ultimately ended due to what it described as a "fundamental misalignment" in risk management strategy. "The more we discussed the path forward, the clearer that gap became," Goldberg wrote.

He highlighted several factors that drove the decision, including the departure of key contributors, which significantly increased the operational workload and risk. The rollout of Aave V4 also expanded the scope of responsibilities, introducing greater operational and legal demands on an architecture not built by Chaos Labs. In addition, the firm noted it had operated the partnership at a loss for three years, even after the proposed budget increase.

Chaos Labs said it will work to "offboard in an orderly fashion" and ensure the DAO remains well-positioned after its departure. The firm added that it will submit a structured proposal outlining the transition timeline, scope of responsibilities to be handed off and measures to maintain continuity during the process.

The launch of Aave V4 on the Ethereum mainnet introduced a hub-and-spoke architecture with unified liquidity to improve capital efficiency.
Chaos Labs' departure adds to a series of recent exits by key service providers within the Aave ecosystem. In February, BGD Labs, the core engineering team behind Aave V3, announced plans to end its collaboration after April 1, citing significant changes to the DAO's structure. The Aave Chan Initiative (ACI), a major governance and business development delegate, also said it would wind down its operations. ACI founder Marc Zeller pointed to disagreements over transparency, voting power and budget proposals from Aave Labs. AAVE is up 3% over the past 24 hours as of writing despite the announcement.

CHAOS
FXStreet · 20d ago

How Colossal Biosciences Uses CRISPR and the Safeguards Behind the Science

Safety continues after birth. The controlled preserve enables longitudinal health monitoring, tracking unexpected effects from multi-gene editing over lifespans. Researchers monitor cancer rates, immune function, epigenetic effects, aging patterns, and stress indicators, establishing a CRISPR safety baseline for large carnivores that can inform future conservation applications.

This managed care approach allows detection of off-target effects that might only appear during development or maturity. It provides data on how edited genes interact with complete organ systems. And critically, it maintains the ability to refine or terminate the program if welfare concerns arise. "We closely monitor and compare embryonic and fetal development against known and expected milestones in case there is ever a need for intervention." The preserve's veterinary clinic, specialized staff, and continuous observation systems ensure immediate response capability.

Colossal
Market Realist · 20d ago

Polymarket exchange upgrade brings CTF Exchange V2 and USDC-backed collateral

Over the coming weeks, the Polymarket exchange will roll out a major technical upgrade designed to improve trading structure, efficiency, and fee flows for users.

Polymarket confirmed that it will upgrade its core exchange stack over the next 2-3 weeks, implementing a series of on-chain and matching engine improvements. The project plans to deploy the new CTF Exchange V2 contract, a redesigned central limit order book, and a USDC-backed collateral token called Polymarket USD. Together, these changes aim to streamline trading and improve risk management.

According to the team, the upgraded architecture will focus on order structure, order matching efficiency, and more granular fee distribution. The stack refresh is also expected to support higher throughput while preserving the existing user experience. That said, traders will need to prepare for a temporary halt in activity during the cutover.

The deployment of CTF Exchange V2 is at the center of the roadmap, replacing the previous smart contract layer that powers markets on the platform. The migration is being scheduled carefully to avoid unexpected disruption. The updated central limit order book will reorganize how bids and asks are stored and matched, allowing more efficient discovery of counterparties. The introduction of the USDC-backed Polymarket USD token will standardize collateral across markets, simplifying accounting and reducing settlement friction. The team expects this design to enhance stability, especially as liquidity scales.

While the upgrade is not tied to any specific partnership with Intercontinental Exchange (ICE), it reflects a broader institutional-style approach to market structure. As part of the transition, all existing order books will be cleared during a short maintenance period to ensure a clean start on the new infrastructure. The upgrade will require a full pause in trading while contracts and books are migrated.
However, user balances and positions will remain secure at the protocol level throughout this process. The platform will issue a maintenance window announcement at least one week before the downtime begins, giving traders time to adjust or close positions. This notice period should help larger liquidity providers and market makers rebalance exposure ahead of the cutover. The temporary reset of order books is intended to avoid mismatches and to align all participants on the refreshed system.

Overall, the upcoming changes mark a significant step in the evolution of Polymarket's trading infrastructure. By combining an upgraded smart contract suite, a reengineered matching engine, and standardized collateral, the team aims to deliver a more efficient and transparent environment for price discovery, with smoother fee distribution and improved reliability for all market participants.
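The reworked central limit order book described above -- bids and asks stored and matched for more efficient discovery of counterparties -- boils down to price-time priority matching. Polymarket has not published V2 internals, so the following is only an illustrative sketch of that general mechanic; none of these class or function names come from Polymarket's code or API:

```python
import heapq
from dataclasses import dataclass, field
from itertools import count

_seq = count()  # global arrival counter used to break price ties (time priority)

@dataclass(order=True)
class Order:
    sort_key: tuple = field(init=False)       # only field used for heap ordering
    price: float = field(compare=False)
    qty: int = field(compare=False)
    side: str = field(compare=False)          # "bid" or "ask"
    seq: int = field(default_factory=lambda: next(_seq), compare=False)

    def __post_init__(self):
        # Best price first: highest bid / lowest ask; earlier orders win ties.
        key_price = -self.price if self.side == "bid" else self.price
        self.sort_key = (key_price, self.seq)

class OrderBook:
    def __init__(self):
        self.bids: list[Order] = []   # max-heap by price (negated in sort_key)
        self.asks: list[Order] = []   # min-heap by price

    def submit(self, order: Order) -> list[tuple[float, int]]:
        """Match an incoming order against the opposite side; rest any remainder."""
        if order.side == "bid":
            book, crosses = self.asks, (lambda o: order.price >= o.price)
        else:
            book, crosses = self.bids, (lambda o: order.price <= o.price)
        fills = []
        while order.qty > 0 and book and crosses(book[0]):
            resting = book[0]
            traded = min(order.qty, resting.qty)
            fills.append((resting.price, traded))  # trades execute at the resting price
            order.qty -= traded
            resting.qty -= traded
            if resting.qty == 0:
                heapq.heappop(book)
        if order.qty > 0:  # unfilled remainder rests on its own side of the book
            heapq.heappush(self.bids if order.side == "bid" else self.asks, order)
        return fills
```

In this model, the "order books will be cleared" maintenance step the article describes corresponds to simply discarding resting state and restarting from empty bid and ask heaps, which is why participants are advised to close or re-enter positions around the cutover.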

Polymarket
The Cryptonomist · 20d ago

Anthropic compute expands via Google-Broadcom multi-gigawatt TPU deal for 2027

Anthropic is ramping up investment in AI infrastructure as compute becomes central to meeting soaring enterprise demand for Claude worldwide.

Anthropic has signed a new agreement with Google and Broadcom for multiple gigawatts of next-generation TPU capacity, expected to come online starting in 2027. This large-scale infrastructure build-out will power Anthropic's frontier Claude models and support extraordinary customer demand across global markets.

According to Krishna Rao, CFO of Anthropic, this is a continuation of the company's disciplined strategy for infrastructure scaling. The new deal reflects a deliberate effort to add the specific capacity required to serve exponential growth in its customer base, while enabling Claude to push the frontier of AI development. "We are making our most significant compute commitment to date to keep pace with our unprecedented growth," Rao said, underscoring that this is Anthropic's largest single investment in compute so far. The focus remains on sustainable, long-term capacity planning rather than short-term spikes in usage.

Demand from Claude customers accelerated sharply in 2026. Anthropic's run-rate revenue has now surpassed $30 billion, up from approximately $9 billion at the end of 2025. Management is positioning this growth as part of a broader, multi-year adoption cycle for advanced AI systems rather than a one-off surge. When Anthropic announced its Series G fundraising in February, the company reported that over 500 business customers were each spending more than $1 million on an annualized basis. Today, that number exceeds 1,000, effectively doubling in less than two months. This rapid scaling of large-ticket enterprise accounts highlights growing trust in Claude for critical production workloads.

The vast majority of the new compute capacity will be located in the United States.
As a result, this partnership represents a major expansion of Anthropic's November 2025 commitment to invest $50 billion in strengthening American computing infrastructure. The company continues to emphasize global availability for customers even as it concentrates physical deployment domestically.

The latest agreement deepens Anthropic's existing work with Google Cloud, building on the increased TPU capacity the companies announced last October. It also extends Anthropic's relationship with Broadcom, which supplies key components for the new generation of TPU systems. This expanded Google-Broadcom partnership is designed to ensure reliable, long-term access to cutting-edge accelerator hardware. Securing multiple gigawatts of TPU power well ahead of 2027 gives Anthropic more predictability in planning future Claude model training and deployment cycles.

Anthropic trains and runs Claude on a diverse mix of AI hardware, including AWS Trainium, Google TPUs, and NVIDIA GPUs. This multi-platform approach allows the company to match different workloads to the chips best suited for them, translating into higher performance and better resilience for customers that rely on Claude for mission-critical work.

Amazon remains Anthropic's primary cloud provider and training partner, and the two companies continue to collaborate closely on Project Rainier. Anthropic is also positioned to leverage Google Cloud and Microsoft infrastructure, reflecting a deliberate commitment to a broad, multi-cloud architecture. Claude remains the only frontier AI model offered across all three of the world's largest cloud platforms: Amazon Web Services (Bedrock), Google Cloud (Vertex AI), and Microsoft Azure (Foundry). This wide availability gives customers added flexibility in how they deploy generative AI in complex enterprise environments.
The new multi-gigawatt TPU expansion positions Anthropic to support more advanced generations of Claude and larger-scale AI applications, and locking in next-generation TPU capacity years ahead of activation provides a buffer against potential supply constraints in the global chip market. From a policy and economic perspective, concentrating most of this compute in the United States aligns Anthropic with broader efforts to bolster domestic digital infrastructure, though the company has been clear that the goal is to combine robust U.S. capacity with global cloud distribution, ensuring that customers worldwide benefit from the investment. In summary, the long-term infrastructure deal with Google and Broadcom, combined with surging revenue and rapid enterprise adoption, underscores Anthropic's intention to remain a leading builder of frontier AI systems while reinforcing the resilience and scale of its global compute backbone.

Anthropic
The Cryptonomist · 20d ago
Anthropic compute expands via Google-Broadcom multi-gigawatt TPU deal for 2027

Anthropic's Claude Code Is Getting Dumber, and an AMD AI Director Publicly Called It Out

When a senior director of AI at one of the world's largest chipmakers takes to social media to complain that his coding assistant has gone soft, it's not just a personal gripe. It's a signal. Sander Land, AMD's director of AI, posted a pointed critique of Anthropic's Claude Code tool on X in early April 2025, describing the AI coding agent as increasingly "dumb and lazy." The complaint resonated widely among developers and AI practitioners, many of whom have noticed a similar pattern: models that once delivered sharp, efficient code now seem to hedge, over-explain, and produce bloated outputs. Land's frustration wasn't abstract. He was talking about real productivity loss in real engineering workflows. As reported by The Register, Land's critique centered on what many power users have described as a regression in Claude Code's capabilities -- not in raw intelligence, but in practical usefulness. The tool, which Anthropic positions as a premium AI-powered coding assistant, has allegedly become more verbose, less decisive, and prone to asking clarifying questions rather than executing tasks directly. For engineers working at scale, that kind of friction compounds fast. This isn't a fringe complaint.

The "Lazy AI" Problem Has Been Brewing for Months

Across developer forums, GitHub discussions, and X threads, a growing chorus of users has flagged similar behavior in multiple large language models -- not just Claude. OpenAI's GPT-4 variants have drawn comparable criticism. The pattern is consistent: models that initially impressed with confident, concise outputs gradually shift toward safer, more hedged responses over time. Some users call it "model rot." Others call it alignment tax. The underlying cause is debated. One theory holds that reinforcement learning from human feedback (RLHF) -- the process by which models are fine-tuned to be helpful, harmless, and honest -- inadvertently trains models to be cautious to a fault. 
When human raters consistently penalize confident-but-wrong answers more harshly than vague-but-safe ones, the model learns to equivocate. Over successive training rounds, this produces outputs that feel increasingly watered down. Another possibility: Anthropic and its competitors are deliberately tuning models to reduce liability. A coding assistant that confidently generates flawed code could expose its maker to reputational or even legal risk. Better, from a corporate perspective, to have the model ask "Are you sure?" than to silently introduce a bug into production code. But for someone like Land, whose job involves pushing AI tools to their limits inside one of the semiconductor industry's most demanding engineering environments, caution isn't a feature. It's a bug. The tension here is fundamental. AI companies want their models to be safe. Power users want them to be useful. Those goals aren't always compatible, and the gap between them is widening as models are deployed in increasingly high-stakes professional settings. Anthropic has not publicly responded to Land's specific critique. The company has, however, acknowledged in broader communications that balancing helpfulness with safety remains an active area of research. Claude's system prompt and behavioral guidelines are regularly updated, and Anthropic has positioned its Constitutional AI approach as a more principled alternative to pure RLHF. Whether that approach is contributing to the perceived laziness is an open question. What makes Land's complaint notable isn't just his seniority at AMD. It's that he represents exactly the kind of user Anthropic needs to retain. Enterprise adoption of AI coding tools is accelerating rapidly, with companies like AMD, Google, Meta, and Microsoft integrating these systems into core development pipelines. If the tools start to feel like they're slowing engineers down rather than speeding them up, the business case erodes quickly. 
And the competition isn't standing still. Google's Gemini models have made aggressive moves into the coding space. OpenAI continues to iterate on its Codex lineage. Startups like Cursor and Cognition Labs are building specialized coding agents that prioritize developer experience above all else. In this environment, a perception of declining quality -- even if the underlying model hasn't technically regressed -- can shift market share fast.

The Measurement Problem

Part of what makes this debate so difficult to resolve is that "dumber" and "lazier" are subjective assessments. Benchmarks like HumanEval, MBPP, and SWE-bench measure specific coding capabilities under controlled conditions. They don't capture the lived experience of using a tool for eight hours a day on a complex codebase. A model might score higher on a benchmark while simultaneously feeling worse to use in practice -- because it's been optimized for the benchmark rather than for the workflow. This is a known failure mode in AI development. Goodhart's Law -- "when a measure becomes a target, it ceases to be a good measure" -- applies with particular force to language models. Companies optimize for the metrics they can track, and those metrics don't always align with what users actually care about. Land's complaint also touches on a deeper issue: the opacity of model updates. When Anthropic pushes a new version of Claude, users often have no way to know what changed. There's no changelog. No diff. The model just behaves differently one day, and users are left to figure out whether the change was intentional, incidental, or imaginary. For engineers accustomed to version control and reproducibility, this is maddening. Some developers have started maintaining their own informal benchmarks -- sets of prompts they run periodically to track model behavior over time. It's a crude approach, but it reflects a real gap in the tooling. 
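Those informal benchmarks need not be elaborate. Below is a minimal sketch of the idea in Python, assuming the caller supplies a `query_model` function wrapping whatever model API they use; the function name and sample prompts are illustrative, not any vendor's actual interface:

```python
import hashlib
import time

# A fixed prompt set, replayed unchanged on every run (illustrative prompts).
PROMPTS = [
    "Write a Python function that reverses a singly linked list.",
    "Explain the output of sorted([3, 1, 2], reverse=True).",
]

def snapshot(query_model):
    """Replay the prompt set and record a fingerprint of each reply."""
    records = []
    for prompt in PROMPTS:
        reply = query_model(prompt)
        records.append({
            "ts": time.time(),
            "prompt": prompt,
            "reply_len": len(reply),
            "reply_hash": hashlib.sha256(reply.encode()).hexdigest()[:12],
        })
    return records

def verbosity_drift(old_run, new_run):
    """Ratio of mean reply length between two runs on identical prompts;
    a ratio well above 1.0 suggests the model has grown more verbose."""
    mean = lambda run: sum(r["reply_len"] for r in run) / len(run)
    return mean(new_run) / mean(old_run)
```

Comparing `reply_hash` values across runs shows whether answers to identical prompts changed at all, and the length ratio is a crude proxy for the verbosity complaint; a serious evaluation would also score correctness, not just size.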
If AI companies want enterprise customers to trust their models, they'll need to provide more transparency about how and when those models change. So where does this leave Anthropic? The company is in a strong position by most measures. Claude has a loyal user base, strong brand recognition among developers, and significant enterprise traction. But the "lazy AI" perception is a reputational risk that won't resolve itself. If Anthropic's alignment work is genuinely making the model less useful for professional coding tasks, the company faces a strategic choice: maintain its safety-first posture and risk losing power users, or find a way to offer different behavioral profiles for different use cases. The latter approach -- sometimes called "adjustable alignment" or user-configurable safety settings -- is gaining traction in the industry. The idea is simple: let enterprise users dial down the hedging and verbosity when they're working in controlled environments where they understand the risks. Consumer-facing deployments would retain tighter guardrails. It's not a perfect solution, but it acknowledges that a single behavioral profile can't serve every user equally well. Anthropic has hinted at moves in this direction. The company's system prompt architecture already allows some customization, and Claude's API offers parameters that influence response style. But the core model behavior -- the tendency to over-explain, to ask rather than act, to pad responses with caveats -- is baked in at the training level. Surface-level adjustments can only do so much.

What AMD's Frustration Tells Us About the Market

Land's public critique matters beyond the specifics of Claude Code because it illustrates a broader dynamic in enterprise AI adoption. Companies are moving past the honeymoon phase. The initial excitement of having an AI that can write code, summarize documents, and answer questions is giving way to harder questions about reliability, consistency, and integration. 
The bar is rising. AMD itself is deeply invested in the AI hardware stack, competing with Nvidia for data center GPU market share. The company's MI300X accelerators are designed to run the very models that Land is criticizing. There's an irony there -- AMD is simultaneously selling the infrastructure for AI and struggling with the quality of the AI running on it. But it also gives Land's perspective a certain credibility. He's not a casual user. He's someone whose livelihood depends on these tools working well. For Anthropic, the path forward likely involves more granular control, more transparency, and a willingness to let advanced users take the training wheels off. The company's research on interpretability -- understanding what's happening inside neural networks -- could eventually help diagnose why models become "lazy" after certain training procedures. But that's a long-term play. In the short term, the message from AMD and from the broader developer community is clear: don't sacrifice usefulness on the altar of safety. The best AI coding assistant isn't the one that never makes a mistake. It's the one that makes engineers faster. Right now, for at least some high-profile users, Claude Code is moving in the wrong direction. The stakes are high. Enterprise AI contracts are measured in millions of dollars. Developer loyalty, once lost, is hard to win back. And in a market where four or five serious competitors are fighting for the same customers, the margin for error is thin.

Anthropic
WebProNews · 20d ago

The $2 Million Monthly AI Tab: Inside One Startup's Staggering Anthropic Bill and What It Signals About the Coming Cost Crisis

Chris Yin spends $2 million a month on AI. Not annually. Monthly. The CEO of Swan, a startup that builds AI-powered software for the debt collection industry, disclosed the figure in a recent interview with Business Insider, offering one of the most concrete glimpses yet into how deeply -- and expensively -- young companies are embedding large language models into their core products. Swan's primary vendor is Anthropic, the maker of Claude, and the company's AI expenditure now rivals what many startups spend on their entire headcount. That number deserves to sit with you for a moment. Swan, which was founded in 2022 and has raised over $54 million in venture capital, uses Anthropic's models to power AI agents that handle phone calls, negotiate payment plans, and manage communications with consumers who owe debts. The company essentially replaced a function traditionally performed by armies of low-wage call center workers with AI systems that can operate around the clock, in multiple languages, without breaks or benefits. It's a textbook case of the kind of labor displacement that economists have been warning about -- and that investors have been salivating over -- since ChatGPT burst into public consciousness in late 2022. But here's the thing about building your entire business on top of someone else's AI model: the meter never stops running. Yin told Business Insider that Swan's AI costs have grown in tandem with revenue, which he said is now in the "tens of millions" annually. The $2 million monthly Anthropic bill represents a significant portion of the company's operating expenses, though Yin framed it as a worthwhile trade-off. Each AI-handled interaction costs a fraction of what a human agent would, he argued, and the system can scale in ways that a traditional call center simply cannot. Swan's AI agents reportedly handle millions of consumer interactions per month, a volume that would require thousands of human employees to match. 
The math, at least on paper, works. But it also reveals an uncomfortable dependency. Swan is far from alone in confronting ballooning AI infrastructure costs. Across Silicon Valley and beyond, startups that built their products on foundation models from Anthropic, OpenAI, and Google are discovering that API costs can become the single largest line item on their income statements -- sometimes eclipsing payroll, office space, and traditional cloud computing combined. The phenomenon has created a new category of financial risk that venture capitalists are only beginning to grapple with seriously. A growing number of AI-native startups now spend between 20% and 50% of their gross revenue on model inference costs, according to estimates from multiple venture capital firms. For companies like Swan that make AI the product rather than a feature, the percentage can climb even higher. And unlike traditional software costs, which tend to decline on a per-unit basis as a company scales, AI inference costs scale roughly linearly with usage. More customers means more API calls. More API calls means a bigger bill from Anthropic or OpenAI. This is the structural tension at the heart of the current AI startup boom. Traditional SaaS companies became enormously profitable precisely because software has near-zero marginal cost. Build it once, sell it a million times. The gross margins are extraordinary -- often 80% or higher. AI-native companies don't enjoy that luxury. Every customer interaction, every generated response, every model inference consumes compute. And compute costs money. Anthropic, for its part, has been raising prices even as it ships more capable models. The company's Claude 3.5 Sonnet and Claude 3 Opus models carry different pricing tiers, with the most capable models costing significantly more per token. 
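The linear-scaling point is easy to make concrete. A toy calculation with made-up numbers -- the interaction volume, token count, and per-token price below are illustrative assumptions, not Swan's or Anthropic's actual figures:

```python
def monthly_inference_cost(interactions, tokens_per_interaction, usd_per_million_tokens):
    """Inference spend scales roughly linearly with usage:
    double the interactions and the bill doubles with them."""
    total_tokens = interactions * tokens_per_interaction
    return total_tokens / 1_000_000 * usd_per_million_tokens

# Hypothetical inputs: 4M interactions a month, 25k tokens each,
# $20 per million tokens blended across input and output.
base = monthly_inference_cost(4_000_000, 25_000, 20.0)
print(f"${base:,.0f} per month")  # $2,000,000 per month

# Unlike classic SaaS, serving 10x the customers costs ~10x as much.
print(monthly_inference_cost(40_000_000, 25_000, 20.0) / base)  # 10.0
```

The contrast with an 80%-margin SaaS product is the whole story: there, the ten-millionth customer costs almost nothing to serve; here, every interaction runs the meter.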
For a company like Swan that requires high-quality, nuanced outputs -- negotiating debt repayment plans is not a task where you want the model to hallucinate or sound robotic -- downgrading to a cheaper model isn't always an option. Yin acknowledged to Business Insider that Swan has experimented with routing simpler tasks to less expensive models while reserving the most capable (and costly) Claude variants for complex negotiations. Smart. But still expensive. The broader industry context makes Swan's situation even more telling. Anthropic itself has been on a fundraising tear, having raised over $13 billion to date, with its most recent round valuing the company at $61.5 billion. Much of that capital goes directly into training and serving models -- the same models that companies like Swan are paying handsomely to use. There's a certain circularity to the arrangement: venture money flows into Anthropic, which builds models, which startups pay to access using their own venture money, which eventually needs to be justified by actual revenue from actual customers. The question is whether the unit economics ever truly pencil out at maturity. Some investors believe they will. The argument goes like this: inference costs are falling rapidly as hardware improves and model architectures become more efficient. What costs $2 million a month today might cost $200,000 in three years. Meanwhile, the revenue Swan generates from its AI-powered debt collection services should continue to grow as the company signs more clients. The gap between cost and revenue will widen in Swan's favor over time, the optimists say, creating the same kind of margin expansion that made traditional SaaS companies so lucrative. There's historical precedent for this view. Cloud computing costs fell dramatically over the past two decades, turning Amazon Web Services from an expensive experiment into the backbone of modern software. 
Early AWS customers who gritted their teeth through high bills in 2008 were rewarded with plummeting per-unit costs by 2015. The AI inference market could follow a similar trajectory -- Nvidia's next-generation chips promise significant improvements in performance per watt, and companies like Groq and Cerebras are building specialized hardware designed to make inference cheaper. But the bears have a counterargument. And it's a compelling one. Unlike cloud storage or basic compute, AI model costs aren't driven solely by hardware. They're also driven by the complexity and size of the models themselves. As Anthropic, OpenAI, and Google race to build ever-more-capable systems, the models are getting larger and more expensive to run, not smaller. Claude's next generation will almost certainly be more capable than today's -- and almost certainly more expensive to serve. So even if the hardware gets cheaper, the models may get pricier, creating a treadmill effect where startups are perpetually chasing cost reductions that never quite materialize as fully as projected. There's also the concentration risk. Swan is building its core product on Anthropic's models. If Anthropic raises prices, changes its terms of service, or experiences an outage, Swan's entire business is affected. It's the API dependency problem writ large -- the same concern that plagued companies built on top of Twitter's API or Facebook's platform in earlier eras of tech. Except the stakes are higher now, because AI isn't a feature for Swan. It is the product. Yin seems aware of this risk. He told Business Insider that Swan maintains the ability to switch between model providers, and the company has tested alternatives from OpenAI and open-source options. But switching costs in AI are nontrivial. Each model has different strengths, different failure modes, different prompting requirements. A conversation flow optimized for Claude won't necessarily perform the same way on GPT-4o. 
And in a domain like debt collection, where regulatory compliance and consumer protection laws impose strict requirements on what an AI agent can and cannot say, revalidating an entire system on a new model is a significant undertaking. The debt collection industry itself adds another layer of complexity. It's one of the most heavily regulated sectors in consumer finance, governed by the Fair Debt Collection Practices Act at the federal level and a patchwork of state laws that vary widely. The Consumer Financial Protection Bureau has been paying increasing attention to the use of AI in debt collection, and several consumer advocacy groups have raised concerns about AI systems that might mislead or pressure vulnerable consumers. Swan's bet is that AI can actually improve compliance -- a well-trained model doesn't lose its temper, doesn't make unauthorized threats, and can be programmed to follow scripts with perfect consistency. But regulators haven't fully weighed in yet, and the legal framework for AI-conducted debt collection remains unsettled. So Swan is navigating simultaneously: spiraling AI costs, regulatory uncertainty, platform dependency, and the fundamental challenge of building a profitable business on top of someone else's technology. That's a lot of risk for a company that's raised $54 million. And yet, the investors keep writing checks. Swan's fundraising success reflects a broader conviction in the venture capital community that AI-native companies -- despite their unusual cost structures -- represent the next great wave of enterprise software. Firms like Andreessen Horowitz, Sequoia, and others have been pouring billions into startups that use large language models to automate functions previously performed by humans. 
The total addressable market for AI-driven automation in financial services alone is estimated at tens of billions of dollars, and debt collection -- a $20 billion industry in the United States -- is considered particularly ripe for disruption because it's labor-intensive, margin-thin, and widely despised by consumers and companies alike. The pitch is seductive. Replace humans with AI. Cut costs by 60% or more. Scale instantly. Handle compliance automatically. No turnover, no training, no HR headaches. But the $2 million monthly AI bill complicates the narrative. It suggests that while AI can indeed replace human labor, it doesn't eliminate costs -- it shifts them. Instead of paying salaries and benefits to call center workers, Swan pays Anthropic. Instead of managing a workforce, it manages an API relationship. The nature of the expense has changed. The magnitude, apparently, has not -- at least not yet. This is the reality that many AI startups are quietly confronting in 2025 and into 2026. The hype cycle promised that AI would make everything cheaper. For end customers, that may prove true over time. But for the companies building AI-first products, the economics are more complicated than the pitch decks suggested. Gross margins at many AI-native startups hover between 40% and 60% -- respectable by most industry standards, but a far cry from the 80%+ margins that made traditional SaaS companies so attractive to public market investors. Whether those margins expand as inference costs decline or compress as models grow more expensive will likely determine which AI startups survive and which flame out. For Swan, the bet is that its early investment in AI-powered debt collection will pay off as the technology matures and costs come down. For Anthropic, the bet is that companies like Swan will keep paying -- and that new customers will join them -- in sufficient volume to justify the tens of billions being spent on model development. Both bets could pay off. 
Both could fail. The only certainty is the bill. Two million dollars. Every month. And rising.

Anthropic · Cerebras
WebProNews · 20d ago

Chaos at Massachusetts hospital as it faces emergency

A hospital in Massachusetts has been plunged into chaos following a cyberattack, diverting ambulances and disrupting services - in a scene straight out of The Pitt. Signature Healthcare and Brockton Hospital announced on Monday they were responding to a cybersecurity incident affecting certain systems. The cyberattack brought down the 216-bed facility's electronic medical records system, forcing nurses and doctors to switch to pen-and-paper documentation, Brooke Hynes, who works in strategic communication for Signature Healthcare, told The Enterprise. It also left the hospital without internet services, she said. The hospital has since implemented its 'downtime procedures,' with ambulances diverted to nearby hospitals even though emergency and in-patient services remained open, according to WCVB. Surgeries and procedures are also proceeding as scheduled, but chemotherapy infusion services scheduled for Tuesday have been canceled and the hospital's retail pharmacies remain closed. Ambulatory practices and urgent care, meanwhile, will reopen on Tuesday, though hospital officials warn there may be some delays. 'We are working with external partners to investigate and restore operations as quickly as possible,' the hospital system said in a statement. The incident comes just months after a ransomware attack forced the University of Mississippi Medical Center to close dozens of its clinics across the state and cancel many patient procedures for over a week. Another attack on medical device provider Stryker knocked out its networks across the world in March, disrupting its electronic ordering system and a patient-data system used by first responders. 
The HBO show 'The Pitt' has touched on the threat of cyberattacks at hospitals in its second season. The fictional Pittsburgh Trauma Medical Center was forced to deal with the fallout of a ransomware attack that shut down operations at two nearby hospitals. As a result, an influx of patients is diverted to the hospital's already overcrowded emergency room, and soon the hospital's IT department shuts down its systems - including all internet-connected charting programs and medical devices - to protect its networks. 'Every day, hospitals are being targeted,' Cynthia Kaiser, a former top FBI cyber official and head of cyber firm Halcyon's Ransomware Research Center, told Politico. 'A lot of hospitals operate on thin margins and they think they have to choose between patient care and cybersecurity,' she noted. 'People need to care about this. Security officials need to care about this,' Kaiser argued. 'There needs to be more outrage across society about what these hackers are doing.' Hospitals are an attractive target for hackers because of the sensitive medical data housed on their servers, the outdated systems used to provide patient care, and the financial constraints that limit investment in robust security protocols. The FBI has advised against paying a ransom to hackers, arguing that it would only encourage future hacking sprees. For hospitals, however, the choice can be a matter of life and death for patients under their care. 'Hacking groups either want to get paid, want to collect data or they want to create chaos,' said Paul Connelly, former chief security officer at hospital system HCA Healthcare. By attacking a hospital, he noted, hackers can 'achieve at least one of those goals, or all three at once.' 
Amid the threat, lawmakers in Washington DC have pushed legislation to stem the barrage of attacks on healthcare systems and provide federal support to struggling hospitals and medical centers. The Trump administration also vowed to impose 'consequences' on hacking groups that target critical infrastructure, like hospitals, in its National Cyber Strategy, though the strategy's vague details do not address any plans to improve cybersecurity for the healthcare system.

CHAOS
Daily Mail Online · 20d ago

Anthropic strikes chips deal with Google and Broadcom

Anthropic will spend hundreds of billions of dollars on Google's chips and cloud services in a push to secure critical computing resources as surging demand for the company's tools pushes its annualised revenue to $30bn. The AI lab said on Monday it has committed to use "multiple gigawatts" of capacity from Google's TPU, a rival chip to Nvidia's dominant GPU, and the search giant's cloud services. Around 3.5GW of capacity on Google's hardware will come through a partnership with chipmaker Broadcom, starting from next year, according to a separate filing on Monday. In all, the deal would give Anthropic access to close to 5GW in new computing capacity over the coming years, according to a person with knowledge of the terms. The hardware and infrastructure required to develop a single gigawatt of capacity -- roughly equivalent to the power output of a nuclear reactor -- is estimated to cost $35bn to $50bn, with the bulk of that spent on chips. That suggests the lossmaking start-up's commitment could run to hundreds of billions of dollars. Anthropic executives are racing to secure enormous supplies of computing power in order to meet rapidly growing demand for the company's tools, particularly coding agent Claude Code, and to fund costly model training. The San Francisco-based group's annualised revenue has shot from $9bn at the end of last year to $30bn at the end of March, Anthropic said on Monday. The figure represents its revenues from the past 28 days extrapolated over a year. "We are building the capacity necessary to serve the exponential growth we have seen in our customer base while also enabling Claude to define the frontier of AI development," said Krishna Rao, Anthropic's chief financial officer. Broadcom shares rose almost 3 per cent after the market closed on Monday. The company also announced that it would develop and supply custom TPUs for Google as part of a long-term agreement through 2031. 
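The "hundreds of billions" characterisation follows directly from the reported figures. A quick back-of-envelope check using the roughly 5GW of capacity and the $35bn-$50bn per-gigawatt estimate cited above:

```python
# Reported figures: close to 5GW of new capacity, at an estimated
# $35bn to $50bn per gigawatt of built-out capacity (mostly chips).
capacity_gw = 5
usd_bn_per_gw_low, usd_bn_per_gw_high = 35, 50

low = capacity_gw * usd_bn_per_gw_low    # 175
high = capacity_gw * usd_bn_per_gw_high  # 250
print(f"Implied total commitment: ${low}bn to ${high}bn")
```

Even the low end of that range comfortably clears $100bn, which is what justifies the article's framing.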
Google is seeking to expand sales of its in-house chips, which have helped power its own Gemini AI models, bringing it into increasingly direct competition with Nvidia, the world's largest semiconductor group. Anthropic's rival OpenAI last year struck a string of computing deals with Broadcom, Nvidia, AMD and others, in a push to lock in as much capacity as possible to power its own AI tools. The deals have been criticised for their circularity, with Big Tech groups acting as customers, suppliers and investors in the AI labs. Google has invested billions into Anthropic, giving it a 14 per cent stake as of March last year, according to a legal filing. Both AI labs have faced scrutiny for their heavy outlay, having repeatedly returned to venture capital and sovereign wealth backers to raise tens of billions of dollars on the promise that they can become profitable if they build sufficient scale and market dominance. Anthropic raised $30bn in February in a deal valuing it at $380bn, including the new money. In its filing on Monday, Broadcom said the deal was "dependent on Anthropic's continued commercial success" and that the parties "are in discussions with certain operational and financial partners". Monday's deal expands on a partnership Anthropic announced with Google last October, which it said at the time was "worth tens of billions of dollars and is expected to bring well over a gigawatt of capacity online in 2026". In November, Anthropic also committed to spend $50bn on new data centres in Texas and New York with cloud computing group Fluidstack, and agreed to purchase $30bn of additional capacity from Microsoft and Nvidia.

Anthropic
Financial Times News · 20d ago

Prediction Markets Surge: Polymarket and Kalshi Drive $25.7B Volume in a Single Month - Crypto Economy

Operational metrics also climbed: transaction frequency rose sharply in March to 207 million, surpassing the 155 million recorded in February. Kalshi currently leads in open interest with $487.21 million, ahead of Polymarket's $422.09 million, while competitors such as Crypto.com and Predict.fun remain marginal. Despite the financial success, the sector faces increasingly rigorous scrutiny from lawmakers and regulatory bodies. The market is highly concentrated at the top, with the two leaders managing the vast majority of the capital at stake, and tensions with the Commodity Futures Trading Commission (CFTC) and opposition from various political quarters present significant obstacles. The legal agenda is marked by disputes over markets tied to geopolitical conflicts and by discrepancies between state and federal regulations. But the flow of capital does not stop: diversification into areas such as climate, the economy, and transportation is allowing these financial instruments to consolidate as robust hedging and speculation tools.

Polymarket
Crypto Economy · 20d ago

Anthropic Eyes Major Investment in Private Equity | Headlines

Anthropic is in discussions to invest $200 million in a new private-equity venture, according to the Wall Street Journal; further details about the venture remain undisclosed at this time. The move underscores Anthropic's strategic intent to broaden its footprint in private equity, reinforce its financial portfolio, and capitalize on emerging market opportunities. It is also seen as a sign of the company's confidence in the sector's long-term growth potential and of a more diversified approach to its investment strategy and market engagement.

Anthropic
Devdiscourse · 20d ago

Exclusive | Anthropic in Talks to Invest $200 Million in New Private-Equity Venture

Anthropic is planning to invest $200 million in a new venture with major private-equity firms that aims to sell AI tools to their portfolio companies, continuing a push for business customers. General Atlantic, Blackstone, and Hellman & Friedman are among the private-equity firms in discussions to back the project, people familiar with the matter said. The startup is in talks to raise $1 billion for the effort, the people said. The new company would serve as a consulting arm for Anthropic that teaches businesses how to incorporate the startup's AI tools in their operations. Anthropic and OpenAI are locked in a race to capture revenue from business customers eager to use AI to boost productivity. Both startups believe they are well positioned to benefit financially from broader use of their tools in workplaces across the U.S. economy, and are pouring more and more resources into efforts to win over such customers. OpenAI is also in talks to form a rival joint venture with private-equity firms that spreads adoption of its own AI tools. It recently reassigned its chief operating officer to work on the project, internally called DeployCo, among other duties. Fidji Simo, a top OpenAI executive, wrote on X last month that the effort would involve sending engineers to work at these companies to teach them how to use the technology. The Information and Reuters earlier reported on some details of the planned Anthropic venture. Companies backed by private-equity firms are appealing customers in part because their owners are already trying to cut costs. Private-equity firms can also push technology decisions across their entire portfolios of investments. Some investment firms are separately investing hundreds of millions of dollars to buy up companies in industries such as accounting and customer service so they can automate them with AI.
Anthropic is playing a more active role teaching customers how to use AI not only to improve employee productivity, but also to automate larger company functions. Last month, it announced a separate effort to spend $100 million to provide training and technical support for consulting firms that are helping enterprises adopt Claude. Anthropic generates the majority of its revenue from businesses that use its Claude chatbot and coding tools. The company recently announced that it was on pace to generate over $30 billion in annualized revenue, and is talking to banks about a potential initial public offering.

Anthropic
The Wall Street Journal · 20d ago

Chaos Labs Exits Aave Amid Risk Management Disputes and V4 Transition - TokenPost

Chaos Labs, one of the primary risk managers for Aave, the leading decentralized finance lending protocol, has announced its departure from the ecosystem -- adding to a growing list of high-profile contributor exits that have significantly reshaped Aave's operational leadership in recent months. The split follows the earlier departures of prominent contributors ACI (Aave Chan Initiative) and BGD Labs, pointing to deepening tensions over the protocol's strategic direction. Since joining in 2022, Chaos Labs played a pivotal role in scaling Aave's total value locked from approximately $5 billion to over $26 billion, all while maintaining a clean record of zero material bad debt across its markets. Despite that impressive performance, Chaos Labs CEO Omer Goldberg confirmed the exit via X, citing a "fundamental misalignment" with how Aave's evolving strategy approaches risk management. He stated the engagement no longer reflects how responsible risk oversight should function. A central point of contention is Aave's upcoming V4 upgrade, which introduces a revamped architecture that substantially broadens risk management scope and operational complexity -- without a proportional increase in support or resources. Goldberg also raised financial sustainability concerns, noting that even under a proposed $5 million budget, the firm has been running at a loss. An additional $1 million, he argued, would still leave margins in the negative. Beyond economics, Chaos Labs cautioned that the ongoing loss of experienced contributors elevates systemic risk, particularly during the sensitive transition between protocol versions. In response, Aave Labs founder Stani Kulechov reassured the community that operations would remain uninterrupted. He noted that Chaos Labs was one of two active risk providers alongside LlamaRisk, and that internal teams would step in to maintain continuous risk coverage going forward. 
The departure raises pressing questions about how Aave will navigate risk management through its next major growth phase.

CHAOS
TokenPost · 20d ago

SpaceX IPO Relies on Elon Musk's Visionary Salesmanship

SpaceX is gearing up for a significant milestone as it moves toward an initial public offering (IPO). The company, led by Elon Musk, aims for a staggering valuation of $2 trillion. This valuation comes shortly after a merger with xAI, hinting at ambitious growth plans.

Initiating Discussions for the IPO

Investment bankers will conduct meetings to evaluate the feasibility of this aggressive target. They will assess whether the projected $2 trillion valuation is realistic and attractive to potential investors.

Recent Valuation Changes

There has been a noteworthy increase in SpaceX's market value. Originally, analysts expected the company's valuation to reach around $1.75 trillion. However, recent assessments indicate it may now exceed $2 trillion.

Key Players in the IPO Process

* SpaceX
* Elon Musk - CEO of SpaceX
* xAI - Recently merged company
* Investment Bankers - Facilitating the IPO discussions

This development marks a crucial phase for SpaceX as it navigates the complexities of the IPO process. Investors will be closely monitoring these discussions to gauge the market's confidence in the company's prospects.

xAI
SpaceX
El-Balad.com · 20d ago

Polymarket upgrades trading system and launches new token as US compliance push intensifies

The changes aim to attract institutional traders and comply with stricter U.S. regulations. Polymarket is rolling out the largest update to its business to date, moving from a retail prediction market to a more institutional trading platform. The company is redesigning its trading engine, adding a new order book, and issuing its own collateral token, Polymarket USD. All these changes aim to speed up execution, reduce costs, and better suit the platform for professional traders, bots, and brokers, especially as U.S. compliance expectations become stricter. The firm described the upgrade as its "biggest change to date" and said it's all about new smart contracts, a new trading system, and a stablecoin for collateral. The move comes only a few days after Intercontinental Exchange invested $600 million in the prediction markets platform. The funding was part of the exchange operator's previously announced plan to invest up to $2 billion in Polymarket, the company said.

Rebuilt trading engine improves execution for pro traders

At the center of the upgrade is a completely rebuilt order book and matching engine. This system determines how buy and sell orders interact, and improving it can significantly affect execution speed and price accuracy. Polymarket says the new design will allow trades to settle faster while lowering gas costs for users. The new system will also support EIP-1271, an Ethereum standard that allows smart contract-based wallets, such as multisigs and automated trading systems, to sign transactions, expanding compatibility beyond traditional wallets. These features are commonly required by professional trading desks that rely on bots, APIs, and multi-signature wallets. However, the transition will require some adjustments. Polymarket plans to cancel all open orders during the migration, though traders will be given several days' notice.
This step is necessary to ensure that orders created under the old system do not conflict with the new matching engine. Power users may feel the impact more than casual traders. Those running automated trading bots will need to update their software development kits to work with the new order structure. Builders using APIs will also need to adapt their integrations before trading resumes under the upgraded infrastructure.

Polymarket replaces bridged stablecoin with new collateral token

Alongside the trading overhaul, Polymarket is introducing a new stablecoin called Polymarket USD. This token will be used as collateral across the platform instead of the previously used bridged USDC.e on Polygon. The shift positions Polymarket closer to the settlement standards that major financial institutions expect. With regulators watching crypto prediction markets more closely, and even jurisdictions like Portugal ordering a stop to political betting, standardized stablecoin infrastructure provides the platform with a stronger foundation as it scales toward mainstream finance. Historically, Polymarket relied on USDC.e, a bridged version of Circle's USDC stablecoin. While functional, bridged assets can add complexity and additional risks. By introducing its own token backed one-to-one with USDC, Polymarket aims to simplify settlement and improve liquidity management. Users holding USDC or USDC.e will need to wrap their funds into Polymarket USD using a smart contract function. This process converts existing balances into the new collateral token so they can continue trading after the upgrade.
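As a rough illustration of how a one-to-one wrap like this works, here is a minimal Python sketch of the accounting involved: locking a bridged stablecoin and minting an equal amount of a platform collateral token. All names (`CollateralWrapper`, `wrap`, `unwrap`) are hypothetical; the real conversion happens in an on-chain smart contract, not in application code.

```python
class CollateralWrapper:
    """Hypothetical sketch of 1:1 collateral wrapping: every wrapped unit
    of the platform token is matched by a locked unit of the underlying
    stablecoin, so the wrapped supply is always fully backed."""

    def __init__(self):
        self.locked_usdc = 0   # underlying bridged stablecoin held by the wrapper
        self.balances = {}     # wrapped collateral-token balance per user

    def wrap(self, user: str, amount: int) -> int:
        """Lock `amount` of the underlying token and credit the same
        amount of wrapped collateral to the user."""
        if amount <= 0:
            raise ValueError("amount must be positive")
        self.locked_usdc += amount
        self.balances[user] = self.balances.get(user, 0) + amount
        return self.balances[user]

    def unwrap(self, user: str, amount: int) -> int:
        """Burn wrapped collateral and release the matching underlying tokens."""
        if self.balances.get(user, 0) < amount:
            raise ValueError("insufficient wrapped balance")
        self.balances[user] -= amount
        self.locked_usdc -= amount
        return self.balances[user]

    def fully_backed(self) -> bool:
        # Invariant: total wrapped supply equals locked collateral.
        return self.locked_usdc == sum(self.balances.values())
```

The key design property is the invariant checked by `fully_backed`: wrapping and unwrapping move the locked reserve and the wrapped supply in lockstep, which is what "backed one-to-one" means in practice.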
The move has also sparked speculation among users. Some believe the platform could generate additional revenue by managing collateral flows internally; others think the token could allow Polymarket to introduce incentives or yield-related features for users who keep funds on the platform. The timing of the upgrade comes as Polymarket continues to navigate regulatory scrutiny, particularly in the United States. Strengthening infrastructure, improving compliance readiness, and supporting institutional participants could help the platform operate more smoothly in stricter regulatory environments. By rebuilding its trading engine and introducing a dedicated collateral token, the prediction market platform is shifting from a simple retail prediction site into a more professional trading venue.
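To make the "order book and matching engine" idea above concrete, here is a minimal sketch of price-time priority matching, the standard scheme such engines implement: incoming orders fill against the best resting price first, and within a price level against the oldest order first. This is an illustrative toy under my own simplifications (integer prices, no fees, no settlement); Polymarket's actual engine is far more involved.

```python
from collections import deque

class OrderBook:
    """Toy limit order book with price-time priority matching."""

    def __init__(self):
        self.bids = {}  # price -> deque of (order_id, qty); best bid = highest price
        self.asks = {}  # price -> deque of (order_id, qty); best ask = lowest price

    def _resting(self, side):
        # A buy matches against resting asks; a sell against resting bids.
        return self.asks if side == "buy" else self.bids

    def _crosses(self, side, price, best):
        # A buy crosses when its limit is at or above the best ask,
        # a sell when its limit is at or below the best bid.
        return price >= best if side == "buy" else price <= best

    def submit(self, order_id, side, price, qty):
        """Match against resting orders; queue any remainder. Returns fills
        as (maker_order_id, trade_price, traded_qty) tuples."""
        book = self._resting(side)
        fills = []
        while qty > 0 and book:
            best = min(book) if side == "buy" else max(book)
            if not self._crosses(side, price, best):
                break
            queue = book[best]
            while qty > 0 and queue:
                maker_id, maker_qty = queue[0]       # oldest order at this price
                traded = min(qty, maker_qty)
                fills.append((maker_id, best, traded))
                qty -= traded
                if traded == maker_qty:
                    queue.popleft()                  # maker fully filled
                else:
                    queue[0] = (maker_id, maker_qty - traded)
            if not queue:
                del book[best]                       # price level exhausted
        if qty > 0:
            own = self.bids if side == "buy" else self.asks
            own.setdefault(price, deque()).append((order_id, qty))
        return fills
```

For example, a resting sell of 10 at 52 partially fills an incoming buy of 4 at 53, leaving 6 resting; a buy at 50 would not cross and would rest on the bid side instead. Speeding up exactly this matching path, and settling its fills more cheaply on-chain, is what the rebuilt engine is aimed at.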

Polymarket
Cryptopolitan · 20d ago