The latest news and updates from companies in the WLTH portfolio.
Intel Corp. (NASDAQ: INTC) announced its participation in the Terafab project alongside SpaceX, xAI, and Tesla, according to a company statement on social media platform X. The collaboration aims to advance silicon fabrication technology, with Intel contributing its chip design, fabrication, and packaging capabilities. The project targets production of 1 terawatt per year of compute capacity to support developments in artificial intelligence and robotics. Intel stated that its experience in manufacturing ultra-high-performance chips at scale will support Terafab's objectives. The company also mentioned hosting Elon Musk at its facilities over the weekend. The announcement provides limited details about the project's timeline, structure, or specific technical specifications. Intel's involvement represents its continued focus on high-performance computing applications as demand for AI-related processing power grows across industries.

Chicago, Illinois--(Newsfile Corp. - April 7, 2026) - Samantha Flynn announces the release of her book, The EntrePReneur Advantage: Turning Chaos into Clarity, available on Amazon in both digital and print formats. The book gives entrepreneurs a comprehensive framework for refining their messaging, understanding audience behavior, and creating resonant brands in a crowded business landscape.

Drawing from nearly two decades of experience in public relations, Flynn demonstrates how key PR principles -- clarity, consistency, and audience psychology -- can be applied to drive business growth. The book emphasizes that PR is not just for large corporations but a vital tool for any entrepreneur seeking to establish a strong brand presence.

In The EntrePReneur Advantage, Flynn blends memoir and business strategy, offering insights from her career while outlining actionable strategies for entrepreneurs. The book explores how clarity in messaging, consistency across channels, and an understanding of audience psychology can elevate a brand's identity and foster lasting connections with customers.

A central theme of the book is the importance of leading with authenticity. Flynn discusses how entrepreneurs can break free from external expectations and market noise, embracing a clear, values-driven approach that allows for stronger, more meaningful engagement with their audience. "Clarity in communication is crucial for sustainable growth," Flynn notes. "When messaging is aligned with purpose, it becomes easier to build trust and consistency over time."
The book presents practical steps for:

* Clarifying a brand's message to communicate effectively with the target audience.
* Understanding audience psychology to engage and connect with customers more deeply.
* Building a resonant brand that aligns with business goals and speaks to ideal customers.
* Leading with authenticity to foster trust and grow the business.

The EntrePReneur Advantage targets entrepreneurs, founders, and small business owners who want to clarify their messaging and build a recognizable, consistent brand. It aims to provide insights into overcoming common challenges around brand visibility, market differentiation, and message consistency.

About the Author

Samantha Flynn is the founder of Junipr Public Relations, a firm specializing in brand identity development, PR strategy, and helping businesses increase visibility. With over 20 years of experience in public relations, Flynn has helped numerous companies grow their brands and build strong reputations. The EntrePReneur Advantage reflects her expertise and offers entrepreneurs the tools they need to succeed in today's fast-paced marketplace.

To view the source version of this press release, please visit https://www.newsfilecorp.com/release/290913

By Shirin Ghaffary and Maggie Eastland, Bloomberg

Rivals OpenAI, Anthropic PBC, and Alphabet Inc.'s Google have begun working together to try to clamp down on Chinese competitors extracting results from cutting-edge US artificial intelligence models to gain an edge in the global AI race. The firms are sharing information through the Frontier Model Forum, an industry nonprofit that the three tech companies founded with Microsoft Corp. in 2023, to detect so-called adversarial distillation attempts that violate their terms of service, according to people familiar with the matter.

The rare collaboration underscores the severity of a concern raised by US AI companies that some users, especially in China, are creating imitation versions of their products that could undercut them on price and siphon away customers while posing a national security risk. US officials have estimated that unauthorized distillation costs Silicon Valley labs billions of dollars in annual profit, according to a person familiar with the findings who described them on condition of anonymity.

OpenAI confirmed it's part of the information-sharing effort on adversarial distillation through the Frontier Model Forum and pointed to a recent memo it sent to Congress on the practice, in which it accused Chinese firm DeepSeek of trying to "free-ride on the capabilities developed by OpenAI and other US frontier labs." Google, Anthropic, and the Frontier Model Forum declined to comment.

Distillation is a technique in which an older "teacher" AI model is used to train a newer "student" model that replicates the capabilities of the earlier system -- often at a much lower cost than producing an original model from scratch. Some forms of distillation are widely accepted and even encouraged by AI labs, such as when companies create smaller, more efficient versions of their own models, or allow outside developers to use distillation to build non-competitive technologies.
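The teacher-student idea at the heart of distillation can be shown in a few lines. The toy sketch below trains nothing and uses invented logits purely for illustration; it computes the quantity a student model typically minimizes: the KL divergence between temperature-softened teacher and student output distributions.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits into a probability distribution, optionally softened."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened distributions.

    A higher temperature exposes the teacher's relative preferences among
    non-top classes -- the "dark knowledge" a student absorbs.
    """
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

teacher = [4.0, 1.0, 0.5]   # invented teacher logits for one input
aligned = [3.9, 1.1, 0.4]   # student already close to the teacher
naive = [0.0, 0.0, 0.0]     # uninformed (uniform) student
```

A student whose logits track the teacher's incurs a much smaller loss than a uniform one, which is exactly the gradient signal that lets a cheap model copy an expensive one.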
Yet distillation has been controversial when used by third parties -- particularly in adversary nations like China or Russia -- to replicate proprietary work without authorization. Leading US AI labs have warned that foreign adversaries could use the technique to develop AI models stripped of safety guardrails, such as limits that would prevent users from creating a deadly pathogen.

Most models made by Chinese labs are open weight, meaning that parts of the underlying AI system are publicly available for users to freely download and run on their own platforms, and therefore cheaper to use. That poses an economic challenge for US AI companies that have kept their models proprietary, betting that customers will pay for access to their products and help offset the hundreds of billions of dollars they've spent on data centers and other infrastructure.

Distillation first drew significant scrutiny in January 2025 in the weeks after DeepSeek's surprise release of the R1 reasoning model that took the AI world by storm. Soon after, Microsoft and OpenAI investigated whether the Chinese startup had improperly exfiltrated large amounts of data from the US firm's models to create R1, Bloomberg previously reported. In February, OpenAI warned US lawmakers that DeepSeek had continued to use increasingly sophisticated tactics to extract results from US models, despite heightened efforts to prevent misuse of its products. OpenAI claimed in its memo to the House Select Committee on China that DeepSeek was relying on distillation to develop a new version of its breakthrough chatbot.

Information-sharing by US AI companies about adversarial distillation echoes a standard practice in the cybersecurity industry, where firms regularly swap data on attacks and adversaries' tactics as a way to strengthen network defenses.
By working together, the AI firms are similarly seeking to more effectively detect the practice, identify who's responsible, and try to prevent unauthorized users from succeeding. Trump administration officials have signaled their openness to fostering information sharing among AI companies to rein in adversarial distillation. The AI Action Plan unveiled by President Donald Trump last year called for the creation of an information sharing and analysis center, in part for this purpose.

For now, information sharing on distillation remains limited due to AI companies' uncertainty about what can be shared under existing antitrust guidance to counter the competitive threat from China, according to people familiar with the matter. The firms would benefit from greater clarity from the US government, the people said.

Distillation has ranked as a top concern among American AI developers since DeepSeek rattled global markets in early 2025 with its R1 release. Highly capable open-source models continue to proliferate in China, and many in the industry are watching closely for a major upgrade to DeepSeek's model. Last year, Anthropic blocked Chinese-controlled companies from using its Claude chatbot model, and in February it identified three Chinese AI labs -- DeepSeek, Moonshot, and MiniMax -- as illicitly extracting the model's capabilities via distillation. This year, Anthropic said the threat "extends beyond any single company or region" and poses a national security risk, since distilled models often lack safety guardrails designed to prevent bad actors from using AI tools for malicious activities. Google has published a blog post saying it identified an increase in model extraction attempts.

The three US AI labs have not yet provided evidence showing how much of China's model innovation relies on distillation, but they note that the prevalence of attacks can be measured based on volumes of large-scale data requests.
Polymarket is currently undergoing what leadership calls its "biggest change to date." As the dominant force in the decentralized prediction market space, the platform has recognized that maintaining its lead requires more than high-profile betting markets; it requires a world-class trading engine. The transition to Version 2 (V2) is designed to reduce friction for both retail users and the high-frequency traders who provide much-needed liquidity. By rebuilding the exchange contracts from the ground up, Polymarket is preparing for a future where prediction markets are as liquid and responsive as traditional equity exchanges.

The core of the upgrade is a departure from legacy bridged assets. Historically, Polymarket used USDC.e, a version of Circle's stablecoin bridged to the Polygon network. To gain more direct control over the settlement layer and mitigate bridge-related risks, the platform is introducing Polymarket USD, a new collateral token fully backed 1:1 by USDC, giving users a stable, reliable medium for their event contracts. Retail users will see an automated transition through the UI, but power users and those running API-based trading strategies will need to cancel open orders and update their software development kits (SDKs) to remain compatible with the new order book structure.

Beyond the collateral change, the V2 upgrade embraces the EIP-1271 standard, which lets smart contract-based wallets -- such as Safe (formerly Gnosis Safe) or multisigs -- authorize trades by validating signatures on-chain. This functionality is essential for institutional participants and automated trading systems that require complex permission structures. By making the platform more "machine-friendly," Polymarket is positioning itself as the primary data source and execution venue for real-world event forecasting.
The result is a cleaner foundation that offers lower gas costs and faster execution, solidifying the platform's role as the infrastructure of choice for the 2026 election cycle and beyond. This technical "spring cleaning" marks Polymarket's evolution from a niche crypto application to a professional-grade financial exchange, and the shift to a native collateral token enhances both safety and scalability.

Will I lose my funds during the V2 upgrade? No. Most users will only need to perform a one-time approval in the interface to migrate to the new collateral.

What happens to my open orders? Polymarket will require users to cancel unfilled orders during the transition period, providing several days of advance notice.

What is EIP-1271? It is an Ethereum standard that lets smart contract wallets validate signatures on-chain, enabling more automated and secure trading.
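The EIP-1271 handshake can be sketched in miniature: the exchange asks the wallet contract whether it considers a given order hash authorized, and the wallet answers with a standard-defined magic value. The sketch below models that flow in Python (real wallets implement it in Solidity); `MockContractWallet` and `exchange_accepts` are illustrative names invented here, and only the magic value `0x1626ba7e` comes from the EIP-1271 standard.

```python
# The four-byte return value EIP-1271 defines for a valid signature.
EIP1271_MAGIC_VALUE = bytes.fromhex("1626ba7e")

class MockContractWallet:
    """Stand-in for a smart contract wallet such as a Safe multisig."""

    def __init__(self):
        self._approved_hashes = set()

    def approve_hash(self, order_hash: bytes) -> None:
        # On-chain, this step would be a transaction confirmed by enough owners.
        self._approved_hashes.add(order_hash)

    def is_valid_signature(self, order_hash: bytes, signature: bytes) -> bytes:
        # The EIP-1271 entry point: return the magic value if and only if
        # the wallet considers this hash/signature pair authorized.
        if order_hash in self._approved_hashes:
            return EIP1271_MAGIC_VALUE
        return b"\x00\x00\x00\x00"

def exchange_accepts(wallet: MockContractWallet,
                     order_hash: bytes, signature: bytes) -> bool:
    """What an exchange contract does: call the wallet, compare magic values."""
    return wallet.is_valid_signature(order_hash, signature) == EIP1271_MAGIC_VALUE
```

The key design point is that the wallet, not the exchange, decides what "authorized" means, which is how multisig policies and automated signing rules plug into an order book without any change on the exchange side.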

Anthropic's partnership with Google and Broadcom marks a shift in energy resource allocation, affecting Bitcoin miners.

Anthropic, an AI research company, announced a partnership with Alphabet's Google and Broadcom for gigawatts of tensor processing capacity to power its Claude models amid rising global demand. The new compute capacity is expected to come online in 2027.

'This groundbreaking partnership with Google and Broadcom is a continuation of our disciplined approach to scaling infrastructure: we are building the capacity necessary to serve the exponential growth we have seen in our customer base while also enabling Claude to define the frontier of AI development,' said Anthropic CFO Krishna Rao in a Monday press release. 'We are making our most significant compute commitment to date to keep pace with our unprecedented growth.'

Anthropic added that demand from Claude clients continued to rise in 2026, with run-rate revenue surging past $30 billion, up from $9 billion at the end of 2025. Most of the new compute infrastructure will be in the US. The company highlighted that clients spending over $1 million annually on Claude doubled from 500 to 1,000 within two months.

This mega deal, struck amid rising AI compute demand, suggests that companies like Anthropic are directly competing with Bitcoin miners for the same energy resources: grid connections, land permits, cooling infrastructure, and cheap electricity. Anthropic's latest deal, on top of existing capacity across Amazon Web Services Trainium, Google TPUs, and Nvidia GPUs, implies that these companies are outbidding Bitcoin miners for energy. For Anthropic, compute is a 'strategic moat' as it continues to expand an infrastructure portfolio covering multiple cloud providers and chip platforms. The total AI compute buildout now ranks among the top sources of new electricity demand in the US, leaving miners to decide whether to mine Bitcoin or rent their infrastructure to AI companies.
The Cambridge Bitcoin Electricity Consumption Index estimates that Bitcoin mining draws between 9.13 and 29.88 gigawatts of constant power. The revenue a Bitcoin miner generates from 1 GW of infrastructure depends on the asset's price and network difficulty, whereas the same gigawatt rented to an AI firm earns a fixed contracted rate with cash flows that miners can forecast.

Recently, Core Scientific converted a considerable portion of its Bitcoin-mining capacity to AI hosting. Meanwhile, companies like Riot Platforms and Genius Group recently sold over 19,000 Bitcoins, which may indicate that mining economics are struggling to sustain operations at current prices. At $68,277 per Bitcoin, with difficulty at record highs, energy costs rising, and AI companies competing for the same resources, renting infrastructure to AI firms may make more sense in revenue terms. In all, crypto miners could come to look more like infrastructure firms that both mine Bitcoin and rent their compute power to AI companies that cannot build data centres fast enough.
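The trade-off described above can be made concrete with a back-of-envelope model. In the sketch below, only the $68,277 Bitcoin price comes from the article; the fleet efficiency, network hashrate, block reward, and hosting rate are illustrative assumptions, so the point is the structure of the comparison (price- and difficulty-dependent mining revenue versus a fixed contracted rate), not the specific numbers.

```python
def daily_mining_revenue_usd(power_gw, efficiency_j_per_th,
                             network_hashrate_eh, btc_price_usd,
                             block_reward_btc=3.125, blocks_per_day=144):
    """Gross daily mining revenue: varies with BTC price and network difficulty."""
    fleet_th_per_s = power_gw * 1e9 / efficiency_j_per_th  # W / (J/TH) = TH/s
    network_th_per_s = network_hashrate_eh * 1e6           # EH/s -> TH/s
    fleet_share = fleet_th_per_s / network_th_per_s
    btc_mined = fleet_share * blocks_per_day * block_reward_btc
    return btc_mined * btc_price_usd

def daily_hosting_revenue_usd(power_gw, rate_usd_per_kw_month):
    """Fixed contracted hosting revenue: predictable regardless of BTC price."""
    return power_gw * 1e6 * rate_usd_per_kw_month * 12 / 365

# Assumed inputs: 20 J/TH fleet efficiency, 1,000 EH/s network hashrate,
# and a $140/kW-month hosting rate are illustrative, not quoted figures.
mining = daily_mining_revenue_usd(power_gw=1.0, efficiency_j_per_th=20.0,
                                  network_hashrate_eh=1000.0,
                                  btc_price_usd=68_277)
hosting = daily_hosting_revenue_usd(power_gw=1.0, rate_usd_per_kw_month=140.0)
```

The structural difference is what matters: the mining figure moves with every input (price, difficulty, halvings), while the hosting figure is a flat function of contracted rate, which is why lenders and analysts treat hosted AI capacity as forecastable cash flow.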

Revenue growth accelerates: The deal marks a significant expansion of Anthropic's infrastructure as demand for its Claude models increases. The company said its annualised revenue run rate (ARR) has crossed $30 billion in 2026, up from about $9 billion at the end of 2025. It also reported that more than 1,000 enterprise customers now spend over $1 million annually, a figure that doubled in under two months. This growth places Anthropic ahead of rival OpenAI on reported annualised revenue: a Reuters report last month said OpenAI had crossed $25 billion in annualised revenue as of early 2026, highlighting the rapid expansion and intensifying competition in the enterprise AI market.

ARR metrics under scrutiny: The surge in reported ARR across AI firms has drawn scrutiny. Recent debates around startups like Emergent have raised questions about how "run rate" revenue is calculated, especially when it is based on short-term usage or token consumption rather than long-term contracts, making comparisons across companies less straightforward.

Rising costs reshape pricing: Changes in pricing models point to rising infrastructure costs. Anthropic has begun charging separately for tools like OpenClaw, citing the high compute load from agent-based tasks and signalling a shift away from flat subscriptions as usage becomes more resource-intensive.

"This groundbreaking partnership with Google and Broadcom is a continuation of our disciplined approach to scaling infrastructure: we are building the capacity necessary to serve the exponential growth we have seen in our customer base while also enabling Claude to define the frontier of AI development," said Krishna Rao, CFO of Anthropic. Most of the new compute capacity will be located in the United States, extending the company's earlier commitment to invest $50 billion in US-based AI infrastructure.
Anthropic currently relies on a mix of hardware platforms, including chips from Amazon Web Services, Google TPUs, and NVIDIA GPUs. The company said this multi-platform strategy helps optimise performance and reduce dependency on any single supplier. Despite the expanded partnership with Google, Amazon remains Anthropic's primary cloud and training partner, with ongoing collaboration on Project Rainier. The company said its Claude models are available across all three major cloud platforms: Amazon Web Services, Google Cloud, and Microsoft Azure.
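The ARR debate above comes down to arithmetic: a "run rate" annualizes a recent revenue snapshot, so the choice of window changes the headline. The sketch below uses invented monthly figures (not Anthropic's or anyone's actuals) to show how a usage spike in the latest month inflates an annualized number relative to a trailing average.

```python
def arr_from_month(monthly_revenue: float) -> float:
    """The common headline convention: annualize the latest month."""
    return monthly_revenue * 12

def arr_from_trailing(months: list) -> float:
    """A smoother alternative: annualize the trailing average instead."""
    return sum(months) / len(months) * 12

# Hypothetical usage-based revenue ($bn) with a spike in the final month:
monthly = [1.8, 1.9, 2.1, 3.0]

headline = arr_from_month(monthly[-1])   # -> 36.0
smoothed = arr_from_trailing(monthly)    # -> ~26.4
```

With usage-billed or token-metered revenue, one strong month can move the headline figure by a third, which is exactly why run-rate comparisons across AI firms are less straightforward than they look.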

Surging demand for advanced AI models is reshaping cloud infrastructure, energy markets, and Bitcoin mining economics, with Anthropic's revenue emerging as a key signal of the shift.

Anthropic disclosed that its annualized revenue has climbed past $30 billion, a sharp acceleration from approximately $9 billion at the end of 2025. The jump reflects rapid uptake of its Claude AI models among enterprise clients and software developers, with strong demand from large organizations integrating Claude into workflows, products, and internal tooling. The firm also reported that the number of business customers spending more than $1 million per year on Claude has doubled in under two months, rising from 500 to more than 1,000. Anthropic has framed this as an early stage of broader enterprise AI adoption, suggesting significant room for further growth as organizations scale deployments.

To sustain this trajectory, Anthropic announced long-term infrastructure agreements with Google and Broadcom for several gigawatts of next-generation TPU (Tensor Processing Unit) compute capacity. The new infrastructure is expected to begin coming online in 2027 and will be used to train and operate future versions of Claude. The scale of the deal underscores how central dedicated AI hardware has become to performance and competitive advantage. Anthropic said it has secured roughly 3.5 gigawatts of next-generation Google TPU capacity through Broadcom starting in 2027, in addition to about 1 gigawatt of Google compute it is already slated to receive in 2026. Together, these agreements signal a multiyear build-out of specialized infrastructure to support increasingly capable models. Across the sector, major AI developers are racing to lock in long-term access to training and inference capacity.
Moreover, the combination of Google's TPU ecosystem and Broadcom's semiconductor design and manufacturing capabilities positions them as critical suppliers in this expanding market. These moves highlight broader AI hardware partnership trends that are reshaping the cloud and chip landscape. The surge in Anthropic's revenue is closely tied to the company's ability to secure massive cloud compute deals and scale next-generation models. With multi-year TPU capacity in place, Anthropic is positioning itself to expand Claude's capabilities while maintaining competitive performance. Furthermore, the agreements illustrate how compute availability is becoming a primary constraint on AI growth, rather than model design alone. Market implications include rising enterprise demand for AI services, increasing capital intensity, and the growing strategic importance of hardware suppliers. As AI adoption spreads, access to low-cost, high-density compute is emerging as a key differentiator between leading AI labs and smaller competitors. This dynamic is likely to influence Anthropic's revenue growth and shape industry structure over the next several years. The rapid build-out of AI infrastructure is directly competing with Bitcoin mining for scarce physical resources such as grid connections, land, cooling capacity, and low-cost electricity. According to Cambridge tracking data, global Bitcoin mining continuously consumes an estimated 13 to 25 gigawatts of power. However, a single Anthropic deal delivering multiple gigawatts of demand shows that AI has become one of the largest new electricity users in the United States. Several publicly listed Bitcoin mining companies are now pivoting toward AI hosting and high-performance computing to secure stable, contracted revenue. Examples include large conversions of mining facilities into AI data centers and long-term hosting agreements with Anthropic and other AI customers.
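To put the gigawatt figures above on a common footing, a continuous load can be converted into annual energy consumption. The sketch below is rough arithmetic only; it treats contracted compute capacity as if it ran continuously at full load, which is an upper-bound simplification.

```python
# Rough arithmetic: converting continuous power draw (GW) into annual
# energy (TWh), using only the figures reported in this article.

HOURS_PER_YEAR = 8_760  # 365 days * 24 hours

def annual_twh(gigawatts: float) -> float:
    """Annual energy in terawatt-hours for a continuous load in gigawatts."""
    return gigawatts * HOURS_PER_YEAR / 1_000  # GWh -> TWh

# Cambridge's estimated range for global Bitcoin mining: 13-25 GW continuous
btc_low, btc_high = annual_twh(13), annual_twh(25)

# Anthropic's reported commitments: ~3.5 GW from 2027 plus ~1 GW in 2026
anthropic = annual_twh(3.5 + 1.0)

print(f"Bitcoin mining: ~{btc_low:.0f}-{btc_high:.0f} TWh/yr")
print(f"Anthropic TPU commitments: ~{anthropic:.0f} TWh/yr at full utilization")
```

On these assumptions, Anthropic's commitments alone would approach a third of the low end of the Bitcoin mining range, which is the comparison the article is gesturing at.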
Moreover, mining economics have come under pressure, with some operators facing loss-making conditions at current BTC prices, while AI hosting offers predictable cash flows backed by enterprise contracts. Analysts estimate that a substantial share of mining companies' revenue could come from AI and high-performance computing by the end of the year. In aggregate, more than $70 billion in cumulative AI and HPC deals has been announced across the public mining sector. This capital reallocation underscores how AI demand is reshaping the business models of traditional mining players. The power grid is coming under increasing stress from concentrated data center demand, including large AI clusters. Grid operators in the United States project capacity shortfalls in coming years, while industry studies forecast U.S. data center electricity demand rising sharply through 2030. Moreover, single facilities reaching 1 gigawatt of load can rival the consumption of small cities, intensifying local constraints. Many announced data center projects are already facing delays tied to power limitations and shortages of critical grid equipment. Anthropic's multi-gigawatt commitment to new AI capacity enters this constrained environment, heightening competition for grid access, substations, and transmission upgrades. As a result, data center power demand is increasingly a central issue for regulators, utilities, and technology firms. Over the past decade, Bitcoin miners have assembled portfolios of remote sites with favorable power purchase agreements, large grid connections, proximity to substations, and substantial cooling capacity and land. Now, these assets align closely with AI deployment needs. Consequently, many miners are converting mining facilities into data centers for AI customers and repositioning themselves as infrastructure landlords with long-duration leases and institutional tenants. This strategic shift carries important implications for the Bitcoin network. 
Large miners are monetizing BTC holdings to finance AI conversions, adding sell pressure to spot markets. Moreover, as mining capacity is redirected toward AI workloads, Bitcoin hash rate and mining difficulty can decline, at least temporarily, affecting short-term network security metrics. Over the longer term, the publicly listed mining sector may increasingly resemble diversified infrastructure operators. They could focus on leasing power, space, and uptime to AI companies while mining opportunistically when economics are favorable. That said, the pace and scale of this transition will depend on relative returns from AI hosting versus traditional block-reward mining. Broadcom separately announced an extended partnership with Google to design and supply future generations of specialized AI processors and related technologies through 2031. Broadcom has long manufactured Google's TPUs and confirmed that it is expanding deliveries. The company indicated it was already supplying approximately 1 gigawatt of computing power in 2026 and expects demand to exceed 3 gigawatts by 2027. Analyst estimates suggest a significant AI-driven revenue opportunity for Broadcom tied to these long-term agreements. Moreover, Broadcom is also involved in custom processor design programs with other major AI developers, extending its reach across the GPU, TPU, and Trainium hardware stack. Google's partnership with Broadcom on TPU capacity reinforces the strategic importance of bespoke accelerators for leading cloud and AI providers. Anthropic has emphasized that it trains and runs Claude across a range of hardware platforms, including AWS Trainium processors, Google TPUs, and Nvidia GPUs. This diversified approach aims to optimize performance, cost, and resilience while tapping different cloud ecosystems. Furthermore, the company has signaled plans for substantial investment in U.S. computing infrastructure as it scales next-generation models.
The combination of rapid revenue growth, large-scale TPU commitments, and a multi-vendor hardware strategy shows how compute capacity is becoming a core driver of growth and differentiation in the AI industry. In this environment, Anthropic's agreements with Google and Broadcom, alongside its broader cloud relationships, position the company to compete aggressively for enterprise AI workloads over the coming decade. In summary, Anthropic's soaring revenue, multi-gigawatt compute deals, and the sector-wide shift toward AI hosting highlight how advanced models are reshaping infrastructure, energy demand, and even the economics of Bitcoin mining.

Kraken Robotics Inc. announces the successful integration and demonstration of its KATFISH towed synthetic aperture sonar and autonomous launch and recovery system (LARS) from SEFINE's RD-22 unmanned surface vessel (USV) in coordination with SEFINE SISAM (Strategic Unmanned Systems Research Center). The demonstration took place in Q1 2026 off the coast of İstanbul, Türkiye. "Recent developments underscore the importance of safeguarding critical maritime transit routes and underwater infrastructure, and autonomous mine countermeasure capabilities like KATFISH can play an important role in helping navies efficiently detect and classify mine-like objects," said Bernard Mills, Executive Vice President, Defence at Kraken Robotics. "By combining SEFINE's multi-role USV with Kraken's cutting-edge KATFISH and USV LARS, navies can deploy advanced technologies faster and more efficiently, strengthening defence and maritime security in increasingly complex environments." The demonstration focused on rapid detection and classification of mine-like objects and critical underwater infrastructure and was attended by several navies and government organizations. KATFISH delivered 3 cm x 3 cm resolution data at a range of 200 meters per side, which was live-streamed to a command center onshore, enabling operators to classify contacts in real time using SEFINE SISAM's mission planning software. The same KATFISH and USV LARS were demonstrated from a UK Royal Navy in-service 11-meter ARCIMS USV in November 2025. These joint integrations mark a major step forward in delivering agile, modular, and cost-effective mine countermeasure capabilities for modern naval operations.

Analyst's Disclosure: I/we have a beneficial long position in the shares of GOOG, AMZN either through stock ownership, options, or other derivatives. I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it (other than from Seeking Alpha). I have no business relationship with any company whose stock is mentioned in this article. Seeking Alpha's Disclosure: Past performance is no guarantee of future results. No recommendation or advice is being given as to whether any investment is suitable for a particular investor. Any views or opinions expressed above may not reflect those of Seeking Alpha as a whole. Seeking Alpha is not a licensed securities dealer, broker or US investment adviser or investment bank. Our analysts are third party authors that include both professional investors and individual investors who may not be licensed or certified by any institute or regulatory body.

Individual investor allocation may hit 30%, dramatically exceeding the standard 5-10% range typical of major public offerings. Elon Musk's aerospace venture is moving forward with preparations for what could become the most significant market debut in corporate history, and the company intends to offer individual investors an exceptionally generous portion of the opportunity. During a virtual conference with investment bankers this Monday evening, the rocket manufacturer detailed its public offering strategy. Company executives emphasized that everyday investors would receive a more substantial allocation than has been witnessed in any prior corporate listing. During the discussion, CFO Bret Johnsen articulated the company's commitment plainly. "Retail is going to be a critical part of this and a bigger part than any IPO in history," he stated, as reported by Reuters. Johnsen further explained that individual shareholders have consistently backed both the enterprise and Elon Musk over the years, and the company wished to acknowledge this loyalty through its offering framework. A senior underwriter informed the consortium of 21 financial institutions working on the transaction that both the volume of individual investor interest and the proportion allocated to them would represent something they had "never seen before." Earlier reports indicated that Musk was targeting an allocation of up to 30% for everyday investors. This figure significantly surpasses the typical 5% to 10% reserved for individual participants in large-scale public offerings. The aerospace manufacturer is pursuing a valuation exceeding $2 trillion, positioning it among the highest-valued enterprises to ever enter public markets. The company expects to secure approximately $75 billion through this capital raise. The investor roadshow is planned to commence during the week beginning June 8.
One day prior, SpaceX will conduct a meeting with roughly 125 financial analysts representing the 21 banking institutions participating in the transaction. On June 11, the organization will welcome 1,500 individual investors to a substantial face-to-face gathering. Participants are anticipated from the United States, United Kingdom, European Union, Australia, Canada, Japan, and South Korea. The official IPO filing document is projected to become publicly accessible in late May. Specific details regarding the transaction architecture and precise retail investor allocation will be validated as the launch date approaches. Five prominent Wall Street institutions are functioning as primary underwriters for this public offering. The group consists of Morgan Stanley, Goldman Sachs, JPMorgan Chase, Bank of America, and Citigroup. In total, 21 banking institutions are participating in the transaction, underscoring the magnitude of this market event. The retail participation approach represents a significant shift from traditional large-scale IPO structures, where institutional investors conventionally secure the majority of available equity. SpaceX has not yet disclosed the precise percentage of shares designated for individual investors. This figure is anticipated to be determined as the IPO launch date draws nearer. The prospectus release in late May will provide investors with their initial official examination of the company's financial performance prior to the roadshow campaign.

By Colin Kirkland. Days after Elon Musk was found guilty of misleading investors prior to his takeover of Twitter in 2022, the world's richest man is reportedly instructing the banks backing the potential SpaceX IPO to invest in his AI company's Grok chatbot and advertise on his social media platform X. According to The New York Times, which spoke with four unnamed sources familiar with the matter, some of the banks behind the IPO have already agreed to spend tens of millions of dollars on Grok subscriptions and services, while even integrating the chatbot into their IT systems. SpaceX -- which acquired both xAI and X in February, forming a $1 trillion ecosystem dedicated to building AI infrastructure -- is expected to lead the largest initial public offering in history, with a valuation as high as $1.75 trillion. As a result, the IPO could raise over $50 billion for SpaceX, generating over $500 million for the banks committed to investing in the deal, which may include Bank of America, Citigroup, Goldman Sachs, JPMorgan Chase, and Morgan Stanley, as well as law firms Gibson Dunn and Davis Polk. Therefore, Musk is reportedly making demands that would benefit two initiatives that have faced both regulatory hurdles and financial pitfalls: xAI's Grok chatbot, which is currently facing multiple legal investigations due to the alleged unlawful processing of personal data and harmful sexualized content generation; and X, a social platform that has struggled since Musk's Twitter takeover four years prior. In addition to demanding banks' investment in Grok, The New York Times also reports that Musk has asked IPO investors to advertise on X "but was less adamant about that request." To boost the popularity of Grok and compete with AI giants like OpenAI, Meta, Google, and Anthropic, Musk has posted consistently about the SpaceX-owned chatbot on X.
Last month, X also began experimenting with a new ad product on the social platform by testing promotions for SpaceX's highly profitable Starlink internet service directly in X posts.

Broadcom stock climbed over 3% in after-hours trading on the announcement. The artificial intelligence company Anthropic has witnessed its revenue run rate skyrocket from $9 billion at 2025's conclusion to beyond $30 billion, marking one of the most dramatic growth trajectories in the AI sector. The surge reflects intensifying demand for the company's Claude AI platform throughout 2026. The number of enterprise clients investing over $1 million annually with Anthropic has crossed the 1,000 threshold. This figure represents more than double the count recorded just in February, highlighting the rapid commercial adoption of the company's AI solutions. To support this explosive growth trajectory, Anthropic has entered into a significant computing infrastructure agreement with Google and Broadcom. The multi-year arrangement will provide Anthropic with approximately 5 gigawatts of total computing capacity. Approximately 3.5 gigawatts of this infrastructure will utilize Google's proprietary tensor processing units (TPUs), manufactured by Broadcom. Deployment of this substantial capacity segment is scheduled to commence in 2027. Broadcom and Google have formalized a long-term supply contract for these specialized chips that extends through 2031. The arrangement also includes Broadcom developing and providing customized TPUs for Google's operations. Broadcom CEO Hock Tan previously referenced this partnership during the company's earnings presentation last month. He projected that Broadcom's AI-focused chip revenue would surpass $100 billion in the coming year. The overwhelming majority of this new computing infrastructure will be deployed within United States borders. According to Anthropic, this expansion reinforces the company's November 2025 pledge to invest $50 billion in domestic computing infrastructure. Anthropic finds itself embroiled in legal proceedings with federal authorities.
The Department of Defense designated Anthropic as a supply-chain security risk after disagreements emerged regarding AI safety protocols. The company has cautioned that this classification could result in billions of dollars in forfeited revenue. According to company legal counsel, over 100 enterprise customers reached out to express concerns about maintaining their partnerships with Anthropic. Nevertheless, the organization reports that its expansion has remained robust. Paul Smith, Anthropic's chief commercial officer, noted that certain clients appreciate that the company "demonstrates its principles" when engaging with government entities. Google initially developed its TPU technology to enhance search engine performance. These processors have evolved into essential infrastructure for developing and operating sophisticated AI systems. Broadcom's role involves transforming Google's chip blueprints into production-ready components for manufacturing. This relationship establishes Broadcom as a competitive alternative to Nvidia, which maintains market dominance in AI computing hardware. Broadcom's stock price surged as much as 3.6% during after-hours trading following the filing disclosure. Alphabet, Google's parent company, experienced approximately 1.6% gains in premarket activity, while Nvidia dipped 0.4%. OpenAI, Anthropic's primary competitor, negotiated comparable computing arrangements last year with Broadcom, Nvidia, and additional partners to guarantee AI infrastructure access. Broadcom indicated in regulatory filings that Anthropic's utilization of the enhanced computing resources is contingent upon the company maintaining its commercial momentum.

Anthropic's shift to Google TPUs signals rising competition to Nvidia and a broader move toward diversified AI infrastructure strategies. In a move that could reshape the competitive landscape of artificial intelligence infrastructure, Anthropic has expanded its collaboration with Google and Broadcom to power its Claude AI models using Google's custom-built Tensor Processing Units (TPUs). The partnership marks a strategic shift away from an overwhelming dependence on Nvidia's GPUs, which currently dominate the AI hardware market. For years, Nvidia has remained at the center of the AI boom, supplying chips that power everything from model training to deployment across leading tech companies. Organizations such as OpenAI, Meta, and Anthropic have relied heavily on Nvidia's high-performance GPUs to run large-scale AI systems. However, Anthropic's latest decision suggests that the industry may be entering a phase of diversification in its hardware choices. Under the new agreement, Anthropic will gain access to multiple gigawatts of next-generation TPU capacity, with the infrastructure expected to go live starting in 2027. These chips will be used to run parts of the Claude AI ecosystem, enabling the company to scale its operations while reducing reliance on a single supplier. Google, which has long used its in-house TPUs alongside Nvidia hardware, is now opening up its infrastructure to external partners. This shift not only strengthens its cloud and AI offerings but also positions it as a viable alternative in a market largely controlled by Nvidia. Broadcom's involvement further highlights the growing collaboration between chipmakers and AI firms seeking to build more resilient supply chains. Despite this move, Anthropic is not cutting ties with Nvidia. Instead, it is adopting a multi-platform strategy that includes Nvidia GPUs, AWS Trainium chips, and Google TPUs. 
This diversified approach reflects a broader trend among AI companies aiming to mitigate risks associated with supply shortages, pricing pressures, and vendor dependency. The timing of this partnership is significant. Demand for AI hardware continues to surge, driven by rapid advancements in generative AI and increasing enterprise adoption. Anthropic itself has witnessed remarkable growth, with its annualised revenue reportedly surpassing $30 billion in 2026, a sharp rise from approximately $9 billion at the end of 2025. This growth underscores the need for scalable and cost-effective infrastructure solutions. The broader implications of this deal extend beyond Anthropic and Google. Nvidia, while still a dominant force, is beginning to face credible competition. Emerging initiatives, including large-scale chip manufacturing projects like TeraFab -- a vertically integrated facility backed by Elon Musk's ventures such as Tesla, SpaceX, and xAI -- signal that the industry is actively exploring alternatives. While Nvidia is unlikely to lose its leadership position overnight, developments like these indicate a gradual shift in the balance of power. As AI companies continue to expand and innovate, the demand for diverse, high-performance computing solutions will only intensify. Ultimately, Anthropic's partnership with Google and Broadcom reflects a calculated move to future-proof its infrastructure, while also contributing to a more competitive and dynamic AI hardware ecosystem.

Jeff Bezos' startup Project Prometheus has hired Kyle Kosic, a co-founder of Elon Musk's xAI who most recently worked at OpenAI, the Financial Times reports. Kosic led the infrastructure behind xAI's Colossus supercomputer and will continue working on AI infrastructure at Prometheus. The startup, led by Bezos and former Google executive Vikram Bajaj, is building AI systems designed to understand the physical world with a focus on tasks in areas like engine design and engineering. Prometheus has already hired hundreds of employees across San Francisco, London, and Zurich. According to the FT, Bezos and Bajaj are looking to raise tens of billions of dollars for a permanent investment vehicle that would acquire stakes in companies across industries like aerospace and architecture.

In short: Anthropic is in negotiations to anchor a new joint venture with Blackstone, Hellman & Friedman, and Permira that would embed Claude across private equity portfolio companies, investing roughly $200m of its own capital into a vehicle that could raise up to $1bn from buyout firms, and taking Palantir's forward-deployed engineer model as its template. Anthropic is in talks to invest roughly $200m in a new private-equity-backed joint venture designed to accelerate enterprise adoption of its Claude models, according to reporting from the Wall Street Journal. The proposed structure would see buyout firms, including Blackstone, Hellman & Friedman, and Permira, take equity stakes totalling approximately $1bn in the venture, which would operate as a consulting and implementation arm helping businesses integrate Claude into their operations. No final terms have been agreed and no timeline has been announced. But the talks represent the most aggressive step Anthropic has yet taken to turn its model leadership into a distribution network, and they arrive at a moment of intensifying competition with OpenAI for the enterprise clients who will ultimately determine whether the economics of frontier AI hold together. The proposed joint venture is modelled, by multiple accounts, on Palantir's forward-deployment playbook: engineers embedded inside customer organisations, driving not just adoption but workflow transformation. Rather than relying on software subscriptions alone, Anthropic would bundle model access with advisory and implementation services, the kind of hands-on work that drives the sticky, recurring revenue that AI companies need to justify their infrastructure commitments. The strategic logic of using private equity as the distribution layer is elegant. PE firms control thousands of portfolio companies. 
Rather than Anthropic approaching each enterprise independently, a joint venture with Blackstone or Hellman & Friedman gives it access to those entire portfolios in one negotiation. Each buyout firm becomes, in effect, a channel partner with both a financial incentive to see Claude adopted and direct operational influence over the companies in which it is being deployed. Blackstone already has skin in the game beyond this new venture: the firm holds approximately $1bn in Anthropic equity, having invested $200m at a $350bn valuation in February 2026 as part of Anthropic's Series G round. That position gives Blackstone both a strategic and a financial reason to want Claude embedded as widely as possible in the corporate world, a conflict of interest, if you want to call it that, or an alignment of incentives, depending on your vantage point. Anthropic is not alone in pursuing this model. OpenAI is in parallel discussions with Advent International, Bain Capital, Brookfield Asset Management, and TPG for a comparable enterprise AI venture, with total fundraising reportedly targeting approximately $4bn. The structural differences between the two pitches are telling. OpenAI is offering private equity firms a guaranteed minimum return of 17.5%, an incentive designed to make the investment proposition simpler for LPs and investment committees that might otherwise treat an AI joint venture as too speculative. Anthropic is offering ordinary equity in the venture, with no floor on returns. That difference is both a financial signal, Anthropic is more confident in the commercial upside, or less willing to subsidise investor risk, and a cultural one: a company founded explicitly around AI safety is unlikely to be comfortable packaging its equity in instruments that prioritise investor protection over genuine shared risk. 
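The structural contrast between the two pitches can be made concrete with a toy payoff comparison. Everything here except the reported 17.5% floor is hypothetical, and real deal terms would be far more complex; the sketch only shows why a floor simplifies the decision for investment committees while ordinary equity leaves all downside with the investor.

```python
# Toy payoff comparison: ordinary equity (the reported Anthropic structure)
# vs equity with a guaranteed minimum return floor (the reported OpenAI
# structure). Only the 17.5% floor comes from the article; the invested
# amount and venture outcomes below are hypothetical.

def ordinary_equity_payoff(invested: float, venture_return: float) -> float:
    """Investor receives the venture's actual return, gains or losses alike."""
    return invested * (1 + venture_return)

def floored_equity_payoff(invested: float, venture_return: float,
                          floor: float = 0.175) -> float:
    """Investor receives at least the guaranteed minimum return."""
    return invested * (1 + max(venture_return, floor))

invested = 100.0
for r in (-0.50, 0.0, 0.175, 1.0):  # hypothetical venture outcomes
    print(f"venture return {r:+.0%}: "
          f"ordinary {ordinary_equity_payoff(invested, r):6.1f}, "
          f"floored {floored_equity_payoff(invested, r):6.1f}")
```

The floor only matters in the bad and mediocre scenarios; above 17.5% the two structures pay identically, which is why offering ordinary equity can be read either as confidence in the upside or as unwillingness to subsidise downside risk.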
The Axios description of the competitive dynamic cuts to the point: "It's a whole lot faster for OpenAI and Anthropic to partner with PE firms than to approach each of their portfolio companies independently." Enterprise AI adoption has entered a race in which distribution, not model quality alone, will determine market share. Both companies have concluded that private equity is the fastest route to scale. The new venture, if it closes, would be the third major enterprise initiative Anthropic has launched within a single quarter. In March 2026, the company committed $100m to its Claude Partner Network, a programme anchored by Accenture, Deloitte, Cognizant, and Infosys that provides implementation support, technical architects, and co-marketing for enterprise Claude deployments. Separately, Xero has integrated Claude directly into its accounting platform, bringing Claude into small business finance in a way that illustrates how deeply the model is being embedded into software products far beyond the chat interface. By April 2026, more than 1,000 businesses are spending over $1m per year on Anthropic services on an annualised basis, up from roughly 500 two months earlier. Enterprise customers now represent approximately 80% of Anthropic's revenue, according to reporting based on the company's internal figures. The proposed PE venture is designed to accelerate that concentration, and to reach the segment of the enterprise market (private-equity-owned mid-market companies) that the Claude Partner Network's large-system-integrator anchors do not typically serve. Context matters here: Anthropic is reported to be in discussions with Goldman Sachs and JPMorgan Chase about a public listing targeting October 2026, with estimates of a $60bn fundraise.
At a $380bn valuation (the post-money figure on the Series G), a successful IPO requires Anthropic to demonstrate not just model capability but durable, scalable enterprise revenue. A joint venture that deploys Claude across the portfolio companies of three or four major buyout firms creates both a revenue channel and a narrative: that Claude is not merely a model but an enterprise infrastructure layer. The financial architecture that underlies all of this is worth noting. SoftBank's $40bn bridge loan to fund its OpenAI commitment illustrated the lengths to which AI infrastructure finance has stretched; Anthropic's own $30bn Series G was the second-largest venture funding deal in history. The PE venture adds another dimension to this picture: Anthropic is not merely raising money from PE firms as passive investors but is now proposing to build distribution vehicles with them, a shift from capitalisation to commercialisation. The talks are ongoing and the structure is not final. Key open questions include which PE firms will ultimately participate, how governance of the joint venture will work, and whether Anthropic will retain pricing and access controls over Claude deployments made through the vehicle. That last question is not trivial: Anthropic has shown it is willing to set boundaries around how Claude is accessed, having recently moved to restrict access to Claude via certain third-party frameworks where it judged the cost and safety dynamics to be misaligned. A joint venture that gives PE firms commercial incentives to deploy Claude as broadly as possible across their portfolios introduces a tension with that kind of granular control.
Whether Anthropic can maintain its approach to model governance independent of its commercial partners while simultaneously using those partners as its primary distribution channel is the most interesting unresolved question in this story, and perhaps the most important one for anyone who cares about what responsible AI deployment actually looks like at enterprise scale.

In Q1 2026, private markets went into overdrive while the exit window stayed selective. New Crunchbase data shows investors poured about $300 billion into roughly 6,000 startups globally in the quarter, up more than 150% both quarter over quarter and year over year. According to the report, it was the biggest quarter for venture on record. "AI is driving this whole venture investment cycle. So it's completely dominated," Gené Teare, research lead at Crunchbase, told Fortune. She noted that around 50% of global capital went to AI in 2025. That share jumped to about 80% in Q1 2026, powered by giant financings for OpenAI, Anthropic, and xAI. Q1's capital was heavily skewed to the top of the stack. Late-stage funding more than tripled year over year -- reaching $246.6 billion across 584 deals in Q1 -- with $235 billion funneled into just 158 rounds of $100 million or more. The Crunchbase Unicorn Board now includes roughly 1,700 companies. But, according to Teare, only around 40% of those have raised new funding or new valuations since the beginning of 2024, leaving about 60% still priced off an earlier cycle. Teare said the market is "between two worlds": highly valued SaaS-era unicorns that look ready on revenue but can't convincingly prove AI-driven growth, and "new native AI companies" posting huge early numbers but still too early and volatile for the public markets. The backdrop for this standoff is the whiplash from the 2021 IPO boom, when global IPO proceeds surpassed $600 billion across roughly 2,600 to 3,000 deals. From 2022 through 2024, however, IPO activity slowed significantly. The market began to stabilize in 2025: Global IPOs totaled 1,293 deals raising about $171.8 billion, a 39% increase in proceeds year over year, and the pipeline for 2026 turned more optimistic as larger tech names like Chime and Klarna finally got out. Coming into this year, Teare said that many bankers and issuers were betting on a more durable reopening -- until the SaaSpocalypse.
With some prospective offerings pulled or delayed, early 2026 was "much slower than was expected" even as overall U.S. IPO proceeds and deal counts improved from deeply depressed 2024 levels. All of that leaves SpaceX, OpenAI, and Anthropic in a category of their own. OpenAI now tops Crunchbase's unicorn board, SpaceX is No. 2, and Anthropic is in fourth place. "There's going to be a huge appetite for these companies should they list," Teare said, adding that in past cycles, "when large companies go out, that creates a lot of energy in the markets for other companies to also go out." The catch this time is that the trio is "so huge, and they're very much outliers," raising the question of whether their eventual IPOs will jumpstart a broader backlog of SaaS and AI names -- or simply "suck a lot of the money and energy out of the room" while everyone else keeps waiting.

Anthropic has unveiled a major partnership with Google and Broadcom to secure "multiple gigawatts" of advanced TPU compute power, set to become operational by 2027. The agreement marks Anthropic's most significant compute commitment to date and coincides with a dramatic revenue increase to a $30 billion annual run rate, up from $9 billion at the end of 2025.

AI Compute Demand vs. Bitcoin Mining

The surge in artificial intelligence (AI) compute demand is creating fierce competition with bitcoin mining for essential resources. Both industries require grid connections, land permits, cooling infrastructure, and affordable electricity. As a result, companies are reevaluating their strategies.

Power Draw Estimates

A Cambridge analysis indicates that bitcoin mining consumes approximately 13 to 25 gigawatts of continuous power globally, depending on the efficiency of the hardware used. Anthropic's latest deal illustrates the escalating rivalry within the energy sector for compute power.

AI Sector Expansion

Anthropic's move is just one example. OpenAI, which recently raised $122 billion, emphasizes the strategic importance of compute power. Its infrastructure spans five cloud providers and four chip platforms, reflecting the rapid growth of AI's demand for electricity.

Electricity Demand Overview

* The AI compute buildout is now one of the largest sources of new electricity demand in the United States.
* This surge coincides with bitcoin miners weighing whether to continue mining or lease their resources to AI enterprises.

Shifts in Mining Operations

In response to the changing landscape, some bitcoin mining firms have begun to pivot toward AI hosting. Core Scientific, for example, has converted a substantial part of its mining capacity to support AI through a partnership with CoreWeave, while Iris Energy and Hut 8 are also expanding their AI-related revenue streams.
Recent Market Developments

Several companies, including Riot Platforms, MARA Holdings, and Genius Group, reported selling over 19,000 BTC last week. The trend suggests that, at current bitcoin prices and mining difficulty, many operations are no longer sustainable.

Revenue Comparisons: Mining vs. AI

A bitcoin miner with a gigawatt of power sees its revenue fluctuate with the price of bitcoin and network difficulty. The same gigawatt used by an AI company generates consistent, contracted income. With rising operational costs, a bitcoin price of $69,000, and record-high network difficulty, many miners find AI rental agreements more financially viable.

Customer Growth and Future Outlook

Anthropic also reported significant customer growth: the number of businesses spending over $1 million annually on its Claude models doubled from 500 to over 1,000 in just two months.

Conclusion

Bitcoin mining is not on the verge of extinction -- its hashrate continues to hit new highs, exceeding 1 zettahash per second -- but the future landscape may resemble more of an infrastructure model. Miners might pivot to become infrastructure providers, leveraging their power, land, and cooling assets to serve the rapidly expanding AI sector.
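The revenue comparison above can be sketched as a back-of-envelope calculation. The $69,000 bitcoin price and the roughly 1 zettahash-per-second network hashrate are the figures reported here; the fleet efficiency (20 J/TH) and the AI lease rate ($100 per kW-month) are purely illustrative assumptions, not reported numbers, so the output shows the shape of the trade-off rather than any company's actual economics.

```python
# Hypothetical comparison of 1 GW of power used for bitcoin mining
# vs. leased to an AI tenant. BTC price and network hashrate are taken
# from the article; efficiency and lease rate are assumed for illustration.

BTC_PRICE_USD = 69_000           # spot price cited in the article
NETWORK_HASHRATE_HS = 1e21       # ~1 zettahash per second (article)
BLOCK_REWARD_BTC = 3.125         # post-2024-halving block subsidy
BLOCKS_PER_DAY = 144             # one block every ~10 minutes

FLEET_EFFICIENCY_J_PER_TH = 20   # assumed modern ASIC fleet (joules per terahash)
AI_LEASE_USD_PER_KW_MONTH = 100  # assumed colocation-style lease rate

def mining_revenue_per_day(power_w: float) -> float:
    """Expected daily subsidy revenue for a fleet drawing `power_w` watts."""
    fleet_hashrate_hs = power_w / FLEET_EFFICIENCY_J_PER_TH * 1e12  # TH/s -> H/s
    share_of_network = fleet_hashrate_hs / NETWORK_HASHRATE_HS
    btc_per_day = share_of_network * BLOCKS_PER_DAY * BLOCK_REWARD_BTC
    return btc_per_day * BTC_PRICE_USD

def ai_lease_revenue_per_day(power_w: float) -> float:
    """Contracted daily revenue from leasing the same power to an AI tenant."""
    return power_w / 1_000 * AI_LEASE_USD_PER_KW_MONTH * 12 / 365

one_gw = 1e9
print(f"Mining:   ${mining_revenue_per_day(one_gw) / 1e6:.2f}M per day")
print(f"AI lease: ${ai_lease_revenue_per_day(one_gw) / 1e6:.2f}M per day")
```

The mining figure also swings with every move in price and difficulty, while the lease figure is fixed by contract, which is the article's core point about why miners are converting capacity.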
