The latest news and updates from companies in the WLTH portfolio.
~Approximately 100 times faster, will accelerate solutions for drug discovery, finance, and other complex problems~

KAWASAKI, Japan--(BUSINESS WIRE)-- Toshiba Corporation has developed a breakthrough algorithm that dramatically boosts the performance of the Simulated Bifurcation Machine (SBM), its proprietary quantum‑inspired combinatorial optimization computer. The new algorithm significantly improves the probability of obtaining an optimal solution or a known best solution within a limited number of trials -- referred to as the success probability, a key benchmark for evaluating combinatorial optimization technologies.

This press release features multimedia. View the full release here: https://www.businesswire.com/news/home/20260407918434/en/

The SBM is designed to solve large‑scale combinatorial optimization problems in a wide range of fields, including new drug discovery, delivery route optimization, and investment portfolio design. While previous algorithms could find optimal or known best solutions with a sufficiently large number of trials, large‑scale problems often trapped the search process in local optima, significantly lowering success probability under practical constraints that limit the number of trials.

Toshiba has overcome this challenge by developing a third‑generation simulated bifurcation (SB) algorithm. This ground-breaking advance builds on the original SB algorithm, announced in April 2019*1, and the second‑generation SB algorithm, released in February 2021*2, which delivered major boosts to computational speed and accuracy.

The new algorithm expands the bifurcation parameter that triggers the bifurcation phenomena*3 -- a defining feature of the SB algorithm -- from a single global parameter to individual parameters assigned to each position variable*4. These bifurcation parameters are independently controlled according to the values of the corresponding position variables, enabling a more adaptive and effective solution search.
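The structure described above -- oscillator position variables driven toward binary values by a bifurcation parameter -- can be sketched in a few lines. Below is a minimal, illustrative ballistic-SB-style update in which the bifurcation parameter is a per-variable array `a`, reflecting the third-generation change from a single global parameter; the linear ramp driving `a` is a placeholder assumption, not Toshiba's published adaptive control rule:

```python
import numpy as np

def sb_step(x, y, J, a, a0=1.0, c0=0.5, dt=0.1):
    """One symplectic-Euler step of a ballistic SB-style update.
    `a` is a per-variable bifurcation parameter (the third-generation idea);
    earlier SB generations use a single global schedule instead."""
    y += (-(a0 - a) * x + c0 * J @ x) * dt
    x += a0 * y * dt
    # inelastic walls: clamp positions to [-1, 1] and zero their momenta
    hit = np.abs(x) > 1.0
    x[hit] = np.sign(x[hit])
    y[hit] = 0.0
    return x, y

# Toy 2-variable ferromagnetic problem: J rewards x[0] and x[1] sharing a sign
J = np.array([[0.0, 1.0], [1.0, 0.0]])
rng = np.random.default_rng(0)
x = rng.uniform(-0.1, 0.1, 2)   # position variables near zero
y = np.zeros(2)                  # conjugate momenta
a = np.zeros(2)                  # per-variable bifurcation parameters
for _ in range(200):
    a += 1.0 / 200               # placeholder ramp; the paper controls each a_i adaptively
    x, y = sb_step(x, y, J, a)
spins = np.sign(x)               # read out the binary solution
```

As the ramp crosses the bifurcation threshold, the aligned mode of the toy problem destabilizes first and the positions lock onto the optimal aligned configuration.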
With the introduction of this advanced control mechanism, the algorithm exhibits either regular or chaotic behavior*5, depending on conditions. Crucially, Toshiba discovered that by effectively harnessing chaos at the edge of chaos -- the boundary between regular dynamics and chaotic motion -- the algorithm can escape local optima far more efficiently. As a result, the success probability of reaching the global optimum increases dramatically, approaching 100%.

The SBM based on the new algorithm is therefore much faster: its time to solution (TTS) -- the time required to obtain an optimal or known best solution -- is approximately 100 times shorter than that of the SBM based on the second‑generation algorithm. These advances are expected to accelerate the practical applications of combinatorial optimization across a broad range of challenges.

The research results were published in the April 6, 2026 issue of Physical Review Applied, a peer‑reviewed journal of the American Physical Society*6.

Note:
*1 https://advances.sciencemag.org/content/5/4/eaav2372
*2 https://advances.sciencemag.org/content/7/6/eabe7953
*3 In nonlinear dynamical systems, a phenomenon in which changes in system parameters (bifurcation parameters) cause the number of stable points to change from one to multiple.
*4 In the SB algorithm, the equations of motion of a classical dynamical system consisting of many oscillators are solved. A position variable represents the position of each oscillator, and these position variables correspond to the decision variables (discrete variables) of the combinatorial optimization problem.
*5 In nonlinear dynamical systems, a phenomenon in which even slight differences in initial conditions cause the subsequent trajectories of motion to diverge significantly, resulting in disordered (chaotic) behavior. This sensitivity of chaos to initial conditions is known as the butterfly effect, and the upper panel of Figure 1 provides a quantitative evaluation of this effect.
*6 https://doi.org/10.1103/2qd9-x6v8

About Toshiba

For over 150 years, guided by its corporate philosophy, "Committed to People, Committed to the Future," Toshiba Group has contributed to society through its business activities. Today, the Group continues to enhance its management structure, streamline operations, and invest in forward‑looking businesses in the energy, digital infrastructure, and electronic devices domains. Annual sales in fiscal year 2025 were 3.5 trillion yen, with 95,000 employees worldwide. Find out more on our website or follow us on LinkedIn.

View source version on businesswire.com: https://www.businesswire.com/news/home/20260407918434/en/

Toshiba Corporation
Media Relations Office
Ryoji Shinohara/Naoko Oura
[email protected]


New Delhi [India], April 7 (ANI): Anthropic's run-rate revenue surpassed the USD 30 billion threshold, marking a substantial increase from the approximately USD 9 billion reported at the close of 2025, according to the company.

'Demand from Claude customers has accelerated in 2026. Our run-rate revenue has now surpassed $30 billion--up from approximately $9 billion at the end of 2025,' Anthropic said in a statement.

The company noted that the surge in revenue followed an acceleration in demand from Claude customers throughout 2026. The number of business clients spending over USD 1 million on an annualized basis has doubled: while Anthropic reported 500 such customers during its Series G fundraising in February, 'today that number exceeds 1,000, doubling in less than two months.'

This financial growth coincided with the signing of a new agreement with Google and Broadcom to secure multiple gigawatts of next-generation Tensor Processing Unit (TPU) capacity. 'This significant expansion of our compute infrastructure will power our frontier Claude models and help us serve extraordinary demand from customers worldwide,' Anthropic said in a statement.

'This ground breaking partnership with Google and Broadcom is a continuation of our disciplined approach to scaling infrastructure: we are building the capacity necessary to serve the exponential growth we have seen in our customer base while also enabling Claude to define the frontier of AI development,' said Krishna Rao, CFO of Anthropic. 'We are making our most significant compute commitment to date to keep pace with our unprecedented growth.'

The vast majority of the new compute capacity was slated for placement within the United States. This move represented an expansion of the company's November 2025 commitment to invest USD 50 billion in American computing infrastructure.
The arrangement also deepened existing collaborations with Google Cloud, building on TPU capacity increases previously announced in October. Despite the expanded deal with Google and Broadcom, Anthropic maintained its multi-platform hardware approach. The firm continued to train and run Claude on a range of AI hardware, including AWS Trainium, Google TPUs, and NVIDIA GPUs. The company stated that this diversity of platforms allowed for better performance and greater resilience for customers who depended on the model for critical work. 'Amazon remains our primary cloud provider and training partner, and we continue to work closely with AWS on Project Rainier,' the company said. Claude also maintained its position as the only frontier AI model available to customers across the three largest cloud platforms: Amazon Web Services (Bedrock), Google Cloud (Vertex AI), and Microsoft Azure (Foundry). (ANI)

Anthropic's latest partnership with Google and Broadcom marks a pivotal shift in how artificial intelligence (AI) infrastructure is being designed, controlled, and scaled. At a time when the AI race is increasingly defined by compute power, this move signals a deeper realignment, one where custom silicon, strategic alliances, and long-term cost optimisation take centre stage.

For years, AI innovation has largely been associated with software breakthroughs, models, algorithms, and applications. However, the current phase of AI evolution is as much about hardware as it is about intelligence. Anthropic's decision to lean into Google's Tensor Processing Units (TPUs), co-developed with Broadcom, reflects a calculated shift towards vertically optimised AI stacks. This is not just about accessing compute; it is about controlling it.

By aligning closely with Google's TPU ecosystem, Anthropic is positioning itself within a tightly integrated infrastructure model where hardware and software co-evolve. This reduces dependency on generic GPU supply chains, while enabling tighter performance tuning for large language models (LLMs).

TPUs are purpose-built accelerators designed specifically for machine learning workloads. Unlike traditional Graphics Processing Units (GPUs), TPUs are optimised for tensor operations, which are fundamental to deep learning. This makes them highly efficient for training and inference at scale. In practical terms, this translates into three strategic advantages. First, improved performance per watt, which directly impacts operational costs. Second, better scalability for increasingly complex AI models. Third, reduced latency in model deployment, a critical factor for real-time AI applications.

Anthropic's adoption of TPUs suggests a long-term play, one that prioritises efficiency and scalability over short-term flexibility. Broadcom's role in this partnership is equally significant.
As a key player in custom chip design, Broadcom enables the manufacturing and optimisation of TPUs at scale. This collaboration highlights a growing trend in AI: bespoke silicon tailored for specific workloads. Custom chips are no longer a luxury; they are becoming a necessity. As AI models grow in size and complexity, general-purpose hardware struggles to keep up both economically and technically. Broadcom's involvement ensures that TPU development remains aligned with hyperscale demands while maintaining cost efficiency.

This partnership also has broader implications for the competitive landscape. It subtly challenges the dominance of GPU-centric ecosystems, particularly those led by Nvidia. While GPUs remain critical, the emergence of viable alternatives like TPUs introduces diversification in AI infrastructure. For startups and enterprises alike, this could mean more choice and potentially lower costs over time. However, it also raises new questions around vendor lock-in. Deep integration with a specific hardware ecosystem can limit portability, making strategic alignment decisions more consequential.

Anthropic's move is not an isolated development; it is indicative of where the AI industry is heading. The future of AI will likely be defined by tightly coupled ecosystems where hardware, cloud, and models are deeply interconnected. For CXOs and technology leaders, the takeaway is clear: AI strategy can no longer be confined to software considerations alone. Infrastructure choices, compute architecture, chip partnerships, and cloud alignment will play an equally critical role in determining competitive advantage.

In that sense, this partnership is more than a technical collaboration. It is a blueprint for the next phase of AI evolution, one where control over compute becomes as strategic as the intelligence it powers.



SpaceX outlined details of its highly anticipated IPO at a meeting with its team of bankers last night. The company told them that it plans to earmark a large portion of shares for retail investors and will host 1,500 of them at an event in June following the IPO roadshow launch, according to two people familiar with the matter.

"Retail is going to be a critical part of this and a bigger part than any IPO in history," chief financial officer Bret Johnsen said during the virtual meeting, the two people said, asking not to be identified because the discussion was private. Johnsen said the large retail component is by design as "those are folks that have been incredibly supportive of us and of Elon (Musk) for a long time, and we want to make sure that we recognise that."

Reuters reported last month that SpaceX is rewriting the IPO playbook with a large retail portion in the offering. The meeting brought together the full syndicate for the first time as part of the process for what is expected to be the biggest initial public offering ever as the rocket maker seeks to raise $75 billion, valuing SpaceX at as much as $1.75 trillion, Reuters has previously reported.

The Elon Musk-led company plans to launch its roadshow the week of June 8, when executives and bankers will pitch the IPO to investors, the people said. About 125 financial analysts from the 21 banks on the deal are scheduled to meet with the company the day before, they added. On June 11, SpaceX plans to host 1,500 retail investors at what the people described as a major investor event. In addition to the US, everyday retail investors in the UK, EU, Australia, Canada, Japan and Korea would have the opportunity to participate in the offering, the people added.

One of SpaceX's lead underwriters told the group of 21 investment banks the retail demand and allocation will be something they've "never seen before," the two people said.
The structure of the deal and precise amount of the retail allocation are expected to be finalised closer to the IPO launch, they said. Reuters previously reported that founder Elon Musk wanted to set aside up to 30% of the company's shares for smaller investors, compared with 5% to 10% for most companies. The company plans to make its IPO prospectus public in late May, they said. SpaceX did not immediately respond to a request for comment. Morgan Stanley, Bank of America, Citigroup, JP Morgan and Goldman Sachs are leading the deal as active bookrunners, with 16 other banks in smaller roles spanning institutional, retail and international channels, Reuters previously reported. The $1.75 trillion target represents a significant step up from the $1.25 trillion combined valuation set when SpaceX merged with Musk's artificial intelligence startup xAI in February. Typically, SpaceX's roughly twice-yearly tender offers - in which employees and investors are able to sell their existing shares, allowing them to cash out from a company that has remained private for nearly 25 years - have served as the primary valuation anchor. The most recent, in December 2025, valued the company at $800 billion, before the merger with xAI.

Four-step MCP integration flow: ask your AI agent a suburb question, the MCP routes to HTAG, the API responds with live data, and the AI delivers actionable insight.

HTAG Analytics launches MCP connectors for Claude and Perplexity, giving AI agents live access to 40+ Australian property metrics across 5,000 suburbs.

SYDNEY, NSW, AUSTRALIA, April 7, 2026 /EINPresswire.com/ -- HTAG Analytics Becomes First Australian Proptech to Launch MCP Integrations with Claude and Perplexity AI

Sydney-based property intelligence platform HTAG Analytics has become the first proptech company in Australia to deploy native Model Context Protocol (MCP) integrations with both Claude by Anthropic and Perplexity AI -- enabling AI agents to query live Australian property market data directly inside user workflows.

HTAG Analytics (htag.com.au), Australia's leading independent property intelligence platform, today announced the launch of the HTAG Intelligence MCP Connector -- making it the first proptech business in Australia to deliver native integrations with Claude (Anthropic) and Perplexity, the two most widely adopted AI agents globally. The HTAG Intelligence MCP Connector is available immediately at developer.htagai.com.

What Is the HTAG Intelligence MCP Connector?

The Model Context Protocol (MCP) is an open standard that allows AI agents to connect directly to external data sources and services. By publishing an MCP-compliant server, HTAG has made its entire property intelligence dataset accessible to any compatible AI agent -- no manual API calls, no copy-pasting, no separate dashboards.

Users of Claude and Perplexity can now connect the HTAG Intelligence Connector and ask questions such as:

"What is the market cycle stage and RCS score for Paddington QLD?"
"Compare stock-on-market and days-on-market trends between Ballarat North and Alfredton VIC."
"Find suburbs with gross yield above 5%, low vacancy rate, and RCS Capital Growth above 70."
The AI agent resolves the query in real time by calling the HTAG Intelligence API and returning structured, interpreted answers -- drawing on 40+ live metrics across approximately 5,000 Australian suburbs and localities.

The Data Behind the Integration

The HTAG Intelligence API -- now powering these integrations -- exposes 34 endpoints across five core categories:

Market Scores & Analysis: HTAG's proprietary Relative Composite Score™ (RCS) system rates every suburb from 1-100 across Capital Growth, Cashflow, and Lower Risk dimensions, calculated from 80+ underlying metrics. The composite Overall RCS provides a single investment-grade signal.

Market Cycle Intelligence: The Growth Rate Cycle (GRC) indicator reveals whether a suburb's price momentum is accelerating, decelerating, or at a turning point -- critical context for timing entry. Growth Pattern Deviation (GPD) and Growth Spillover Potential (GSP) metrics quantify how a suburb is performing relative to its own historical trend and its broader Local Government Area respectively.

Supply & Demand Analytics: Stock on Market (SoM), Inventory (months of supply), Days on Market (DoM), Vacancy Rate, Auction Clearance Rate, and Hold Period are all available as point-in-time snapshots and time-series trends -- with both long-term and short-term regression slopes for each metric.

Fundamentals & Risk: IRSAD socio-economic decile, Renter-to-Owner ratio, Economic Diversity Index (EDI), Mining & Agriculture Dominance Index (MADI), Flood Risk Index, and Bushfire Risk Index are accessible through a single endpoint call.

Property Valuation: Individual property estimates, address standardisation, geocoding, environmental overlays, and demographic profiles complete the data suite.

Why This Matters for Buyers Agents and Property Investors

Australian buyers agents and property investors have traditionally accessed property data through fragmented portals, manually compiled spreadsheets, and periodic reports.
The HTAG Intelligence MCP Connector eliminates that friction entirely.

"The biggest bottleneck in property research isn't data availability -- it's the time it takes to pull data from multiple sources, interpret it, and form a view," said Mat Djolic, Founder of HTAG Analytics. "By making the HTAG Intelligence layer natively accessible inside Claude and Perplexity, we've turned a research process that used to take hours into a conversation that takes seconds. This is what a buyers agent operating system looks like."

HTAG's typical price methodology -- the proprietary alternative to median price used across all endpoints -- controls for compositional bias and outlier distortion by fitting data across the full historical period, prioritising accuracy for the most recent month. This makes HTAG yield calculations more current than those of providers relying on rolling median-to-median comparisons.

Technical Specifications

The server exposes tools mapped to HTAG's core market endpoints, including: get_market_summary, get_market_scores, get_market_cycle, get_market_supply, get_market_demand, get_market_fundamentals, get_market_risk, get_market_growth_annualised, get_property_estimates, lookup_localities, and match_brief -- a natural language brief-matching tool that returns a ranked suburb shortlist from a structured investment brief.

Full documentation is available at developer.htagai.com. The API Playground allows developers to test all endpoints without writing code.

Availability

The HTAG Intelligence MCP Connector and Developer API access are available via the HTAG Developer Portal at developer.htagai.com. MCP connector setup guides for Claude and Perplexity are available by emailing [email protected]

About HTAG Analytics

HTAG Analytics (ACN 622 716 492) is Australia's leading independent property intelligence platform, providing suburb-level market data, composite investment scoring, and AI-powered research tools to buyers agents, property investors, and proptech developers.
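At its core, the MCP pattern described here is tool routing: the agent names a tool such as get_market_summary, passes structured arguments, and the server dispatches to the matching handler and returns structured data. A minimal self-contained sketch of that dispatch pattern follows -- the handler body and its response fields are invented for illustration and stand in for real HTAG API calls:

```python
import json

# Hypothetical handler standing in for a real HTAG Intelligence API call;
# the field names and values returned here are invented for illustration.
def get_market_summary(locality: str) -> dict:
    return {"locality": locality, "rcs_overall": 72, "cycle_stage": "accelerating"}

# Registry mapping tool names (as the AI agent sees them) to handlers
TOOLS = {"get_market_summary": get_market_summary}

def handle_tool_call(request_json: str) -> str:
    """Dispatch a tool call the way an MCP server routes agent requests:
    look up the named tool, invoke it with the structured arguments,
    and return a structured (JSON) result for the agent to interpret."""
    req = json.loads(request_json)
    handler = TOOLS[req["tool"]]
    result = handler(**req["arguments"])
    return json.dumps(result)

# The agent's side of the exchange, reduced to a single request/response
reply = handle_tool_call(json.dumps(
    {"tool": "get_market_summary", "arguments": {"locality": "Paddington QLD"}}))
```

In a production connector, the registry would cover the full endpoint list above and each handler would call the hosted API; the routing shape stays the same.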
HTAG's dataset covers approximately 5,000 Australian suburbs across 34 API endpoints, with metrics spanning market cycles, supply and demand dynamics, fundamentals, risk, and individual property valuations. HTAG is headquartered in Sydney, New South Wales.

San Francisco - Rivals OpenAI, Anthropic, and Alphabet's Google have begun working together to try to clamp down on Chinese competitors extracting results from cutting-edge US artificial intelligence models to gain an edge in the global AI race. The firms are sharing information through the Frontier Model Forum, an industry nonprofit that the three tech companies founded with Microsoft in 2023, to detect so-called adversarial distillation attempts that violate their terms of service, according to people familiar with the matter. The rare collaboration underscores the severity of a concern raised by US AI companies that some users, especially in China, are creating imitation versions of their products that could undercut them on price and siphon away customers while posing a national security risk. US officials have estimated that unauthorised distillation costs Silicon Valley labs billions of dollars in annual profit, according to a person familiar with the findings. OpenAI confirmed it's part of the information sharing effort on adversarial distillation through the Frontier Model Forum and pointed to a recent memo it sent to Congress on the practice, where it accused Chinese firm DeepSeek of trying to "free-ride on the capabilities developed by OpenAI and other US frontier labs." Google, Anthropic, and the Frontier Model Forum declined to comment. Distillation is a technique where an older "teacher" AI model is used to train a newer, "student," model that replicates the capabilities of the earlier system - often at a much lower cost than producing an original model from scratch. Some forms of distillation are widely accepted and even encouraged by AI labs, such as when companies create smaller, more efficient versions of their own models, or allow outside developers to use distillation to build non-competitive technologies. 
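The teacher-student mechanism described above can be made concrete with the classic softened-softmax distillation objective: the student is trained to reproduce the teacher's output distribution rather than the raw training labels. This is a generic textbook sketch, not any lab's actual training recipe:

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-softened softmax; higher T flattens the distribution."""
    z = np.asarray(z, dtype=float) / T
    z -= z.max()              # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL divergence between softened teacher and student distributions:
    minimizing this trains the student to mimic the teacher's behavior."""
    p = softmax(teacher_logits, T)   # teacher's softened targets
    q = softmax(student_logits, T)   # student's softened predictions
    return float(np.sum(p * np.log(p / q)))

# A student that matches the teacher exactly incurs zero loss;
# the loss grows as the student's outputs drift from the teacher's.
teacher = [2.0, 0.5, -1.0]
zero = distillation_loss(teacher, teacher)
close = distillation_loss([1.9, 0.6, -1.0], teacher)
far = distillation_loss([-1.0, 0.5, 2.0], teacher)
```

Adversarial distillation works the same way in principle, except the "teacher targets" are harvested at scale from a proprietary model's API responses rather than from a model the trainer owns -- which is why the labs are monitoring for extraction patterns in query traffic.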
Yet distillation has been controversial when used by third parties - particularly in adversary nations like China or Russia - to replicate proprietary work without authorisation. Leading US AI labs have warned that foreign adversaries could use the technique to develop AI models stripped of safety guardrails, such as limits that would prevent users from creating a deadly pathogen. Most models made by Chinese labs are open weight, meaning that parts of the underlying AI system are publicly available for users to freely download and run on their own platforms, and therefore cheaper to use. That poses an economic challenge for US AI companies that have kept their models proprietary, betting that customers will pay for access to their products and help offset the hundreds of billions of dollars they've spent on data centers and other infrastructure. Distillation first drew significant scrutiny in January 2025 in the weeks after DeepSeek's surprise release of the R1 reasoning model that took the AI world by storm. Soon after, Microsoft and OpenAI investigated whether the Chinese start-up had improperly exfiltrated large amounts of data from the US firm's models to create R1. In February, OpenAI warned US lawmakers that DeepSeek had continued to use increasingly sophisticated tactics to extract results from US models, despite heightened efforts to prevent misuse of its products. OpenAI claimed in its memo to the House Select Committee on China that DeepSeek was relying on distillation to develop a new version of its breakthrough chatbot. Trump administration officials have signaled their openness to fostering information sharing among AI companies to rein in adversarial distillation. The AI Action Plan unveiled by President Donald Trump in 2025 called for the creation of an information sharing and analysis centre, in part for this purpose. 
For now, information sharing on distillation remains limited due to AI companies' uncertainty about what can be shared under existing antitrust guidance to counter the competitive threat from China, according to people familiar with the matter. The firms would benefit from greater clarity from the US government, the people said. BLOOMBERG
Anthropic secures massive TPU capacity from Google and Broadcom to power Claude as enterprise demand doubles and AI infrastructure becomes the next battleground. The AI infrastructure race is entering a new phase, and Anthropic is making its biggest bet yet. The company has signed a long-term agreement with Google and Broadcom to secure multiple gigawatts of next-generation TPU compute capacity, expected to go live starting in 2027. At a time when computing has become the defining bottleneck in AI, this move signals a shift from experimentation to industrial-scale deployment. Anthropic's latest deal is less about future ambition and more about catching up with present demand. The company revealed its run-rate revenue has crossed $30 billion in 2026, a sharp jump from $9 billion just months ago. Equally telling is enterprise traction. Over 1,000 customers are now spending more than $1 million annually on Claude, doubling in under two months. This kind of acceleration is rare, even by AI standards, and underscores why securing compute at scale is now mission-critical. Krishna Rao, CFO, Anthropic, framed the move as a necessity rather than a strategic luxury: "We are building the capacity necessary to serve the exponential growth we have seen in our customer base while also enabling Claude to define the frontier of AI development." The partnership hinges on Google's Tensor Processing Units (TPUs), purpose-built chips designed for large-scale AI workloads. Through its collaboration with Broadcom, Google will supply future generations of these chips, while also enabling Anthropic access to roughly 3.5 gigawatts of TPU-based compute. This is not a single-vendor bet. Anthropic continues to run a multi-hardware strategy, leveraging AWS Trainium, NVIDIA GPUs, and Google TPUs. The approach allows workload optimisation across architectures, improving both performance and resilience. 
In practical terms, this means enterprise customers using Claude across platforms, from Amazon Bedrock to Google Vertex AI and Microsoft Azure, can expect more consistent performance under heavy demand. What stands out is the scale. Multi-gigawatt compute commitments place AI infrastructure closer to energy and telecom-level planning than traditional cloud provisioning. Most of this new capacity will be based in the United States, aligning with Anthropic's earlier $50 billion commitment to domestic compute infrastructure. It also reflects a broader geopolitical push to localise critical AI supply chains. At the same time, the deal deepens Anthropic's ties with Google Cloud while maintaining Amazon as its primary training partner, highlighting a deliberate multi-cloud posture rather than platform dependency. The announcement comes amid growing regulatory and geopolitical scrutiny. Anthropic has already pushed back against its classification as a "supply chain risk" by the US administration, while continuing to assert limits on how its AI can be deployed, particularly in surveillance and autonomous weapons contexts. This creates a complex backdrop: rapid commercial scaling on one side and tightening oversight on the other. Anthropic's move is less an isolated deal and more a signal of where the AI economy is heading. Compute is no longer just an enabler; it is the product backbone. As foundation model companies compete not just on capability but on availability and latency, infrastructure partnerships like this will increasingly define market leaders. The question now is not whether demand will keep rising, but whether even multi-gigawatt bets will be enough to keep pace.

SpaceX has revealed its plans for an upcoming initial public offering (IPO), indicating a significant focus on retail investors. During a recent meeting with bankers, the company outlined its strategy, emphasizing a large allocation of shares for retail participation.

Details of SpaceX's IPO Plans

The meeting occurred on April 6, with SpaceX's Chief Financial Officer, Bret Johnsen, highlighting the importance of retail investors. He stated that this offering will feature a larger retail component than any previous IPO in history. This strategic move reflects the company's desire to acknowledge the long-standing support from its retail investor base and founder Elon Musk.

Key Aspects of the IPO

* Retail Investor Focus: SpaceX is committing a substantial portion of its shares specifically for retail investors.
* Event for Retail Investors: The company plans to host an event in June, expecting to engage around 1,500 retail investors.
* Funding Goals: SpaceX aims to raise approximately $75 billion through this IPO, which could value the company at up to $1.75 trillion.

This IPO is anticipated to be the largest ever, setting new benchmarks for public offerings. It marks a significant shift in how SpaceX approaches fundraising, potentially rewriting the conventional IPO playbook. As preparations continue, the investment community eagerly awaits further updates on SpaceX's roadshow and other details surrounding the IPO.

OpenAI has confirmed its involvement in the information-sharing effort on adversarial distillation through the Frontier Model Forum. The company recently sent a memo to Congress accusing Chinese firm DeepSeek of trying to free-ride on the capabilities developed by OpenAI and other US frontier labs. Distillation is a process where an older "teacher" AI model trains a newer "student" model to replicate its capabilities. This is often cheaper than building an original model from scratch. While some forms of distillation are accepted and encouraged by AI labs, others have been controversial when used by third parties, especially in adversary nations like China or Russia, to replicate proprietary work without authorization.

While rivals zigged, Anthropic zagged, and the bet keeps paying off. On Monday, the company announced that its revenue run-rate, or its financial forecast based on current performance, has surpassed $30 billion. This figure is nearly triple the run-rate at the end of 2025, which came in at $9 billion, and more than double that of mid-February, which the company reported was $14 billion. By the end of February, CEO Dario Amodei confirmed annual revenue had exceeded $19 billion. This exponential growth results from an equally notable increase in customer demand. In February, Anthropic disclosed that over 500 business customers were spending over $1 million, and it now reports that the number exceeds 1,000 businesses, representing a two-fold growth in less than two months. The rapid growth can be mostly attributed to Anthropic's laser focus on enterprise. For instance, Claude Code, a go-to coding tool for many developers, alone generated a run-rate revenue of over $2.5 billion by February. By that same month, weekly active users had also more than doubled since January 1, and business subscriptions to Claude Code had quadrupled since the beginning of 2026. Those figures are likely much larger now. This success has allowed the company to bridge the gap with the much bigger OpenAI, despite the latter having kick-started the AI race as we know it. At the end of February, OpenAI topped $25 billion in annualized revenue, according to a report from The Information, citing a person familiar with the figure, representing a 17% increase from the annualized revenue it generated at the end of the previous year. The comparison isn't perfectly apples-to-apples, as the two companies use different methods to calculate revenue, as noted by the WSJ. But with an IPO in the works, the ChatGPT maker has recently made moves to pivot towards enterprise, just like Anthropic.
Last month, it shut down its Sora generative AI video platform, ended its $1 billion content partnership with Disney, and put its ChatGPT "adult mode" on hold, all while OpenAI CTO Sarah Friar acknowledged that enterprise "is a very profitable business at scale" and that it's how OpenAI will "build a sustainable business model." To meet growing demand and scale up the computing it needs to power it, Anthropic also announced an expansion of its existing partnership with Google Cloud and Broadcom by signing a new agreement. The resulting multiple-gigawatt TPU capacity is expected to come online beginning in 2027, according to the blog post. Anthropic has had a couple of challenging months, making headlines for reasons it has typically avoided in the past: controversy and accidental data exposures. Anthropic found itself in a battle with the Pentagon after the Pentagon demanded the ability to use Claude for all lawful purposes, including autonomous weapons and mass surveillance, and, upon Anthropic's refusal, labeled it a "supply chain risk." It also suffered two accidental data leaks within five days of each other in late March. Yet through it all, the company is reportedly considering going public as soon as October, putting it in a race with rival OpenAI for a public listing. Against that backdrop, it is a sound strategic move for Anthropic to continue investing in and reporting results from what it does best -- a focus on enterprise, which remains the core driver of its growth and revenue.

The update has added to hopes that Polymarket's long-awaited native token, POLY, may be launching soon. Leading prediction market platform Polymarket will undergo a substantial update in coming weeks that will include a new order book and the introduction of Polymarket's own collateral currency, called Polymarket USD, the company announced April 7. Polymarket has described this update as the platform's "biggest infrastructure change since launch," which the platform says will result in "faster execution, lower gas, and a cleaner foundation going forward." "We've heard your feedback, and we're excited to announce Polymarket is getting a full exchange upgrade," Polymarket said through its official X / Twitter account. Polymarket plans to gradually roll out the update over a period of several weeks and expects the process to be smooth for most users, although it will require the cancellation of all open orders. The platform said it will give users several days' warning before closing all unfulfilled orders. More advanced users, though, such as those using trading bots or accessing Polymarket's API, will notice more of a disruption and will be required to update their software development kits to match the new trading infrastructure. Polymarket USD will replace the platform's existing bridged collateral token. Some users may also need to bridge their USDC (or USDC.e) to Polymarket's new token.

New Collateral Token Marks Significant Transition for Polymarket

Until now, Polymarket has used a bridged version of Circle's USDC running on the Ethereum layer-2 network Polygon (Polymarket also runs on Polygon, hence the name). Polymarket's new collateral token, which is backed 1:1 by USDC, marks an important change, potentially allowing it to offer higher yield rates for users holding their assets on the platform and opening up significant new sources of revenue.
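A 1:1-backed collateral token of the kind described can be modelled as a simple reserve ledger. The sketch below is a hypothetical illustration only (the class and method names are invented, and a real implementation would be an audited on-chain contract, not Python): minting requires an equal USDC deposit, redemption burns tokens against the reserve, and the backing invariant holds after every operation.

```python
# Toy model of a 1:1 USDC-backed collateral token. Every unit minted is
# matched by a unit of USDC held in reserve; burning releases the reserve.
# All names here are illustrative, not Polymarket's actual interface.
class PlatformUSD:
    def __init__(self) -> None:
        self.reserve_usdc = 0   # USDC held against outstanding tokens
        self.balances = {}      # user -> platform-token balance

    def deposit(self, user: str, usdc_amount: int) -> None:
        """Bridge in USDC and mint an equal amount of platform tokens."""
        self.reserve_usdc += usdc_amount
        self.balances[user] = self.balances.get(user, 0) + usdc_amount

    def redeem(self, user: str, amount: int) -> int:
        """Burn platform tokens and release the matching USDC reserve."""
        if self.balances.get(user, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[user] -= amount
        self.reserve_usdc -= amount
        return amount  # USDC returned to the user

    def fully_backed(self) -> bool:
        """The 1:1 invariant: reserve equals total tokens outstanding."""
        return self.reserve_usdc == sum(self.balances.values())


vault = PlatformUSD()
vault.deposit("alice", 100)   # mint 100 tokens against 100 USDC
vault.redeem("alice", 40)     # burn 40 tokens, release 40 USDC
```

The commercial angle the article describes follows from this structure: whoever custodies the reserve earns yield on idle USDC, which is why issuing its own token opens a revenue stream for the platform.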
There are also hopes among Polymarket users that the launch of the new collateral token signals that Polymarket's planned native token, POLY, might not be too far away. Polymarket's Chief Marketing Officer, Matthew Modabber, confirmed the platform's plans to launch their own token last October, saying during an interview on the Degenz Live podcast that "there will be a token, there will be an airdrop." "We could have launched a token whenever we wanted, and it's just how thorough we want to be about it. We want it to be a token with true utility, longevity, and to be around forever, right? That's what we expect from ourselves, and that's what I think everyone in the space expects from us," Modabber added. On February 4, Polymarket's parent company Blockratize filed trademark applications for the words POLY and $POLY, adding further to expectations that the platform is set to launch its native token soon.

The latest instalment in VSL's Synchron Series line-up captures the sound of a faithful copy of a François Étienne Blanchet harpsichord, and the announcement comes alongside the launch of a special promotion that sees discounts of up to 45% applied across the company's range of piano libraries. Synchron Harpsichord (Blanchet) captures a two-manual instrument from the collection of Viennese doctor and musician Kurt Gold-Szklarski. Built by Eckehard Merzdorf in 2010, it is a faithful copy of a 1746 original by François Étienne Blanchet, the renowned French 'facteur des clavessins du Roi' (harpsichord maker to the King). VSL say that the instrument reflects the golden age of Blanchet's harpsichord building, combining elegance with tonal richness. It was prepared and tuned at Vienna Synchron Stage by renowned Sicilian instrument maker and musician Sebastiano Calì, and offers a sound which the company say remains true to its historical roots while being perfectly suited to modern productions. The resulting library offers a versatile palette of six registrations derived from the instrument's authentic stop configuration. The lower manual features 8' and 4' stops, while the upper manual provides 8' and a distinctive lute stop, enabling registrations such as 8' low, 8' up, 4', 8'+8' (coupled), 8'+8'+4' (tutti) and lute. Each registration is instantly accessible, and is complemented by several mixer presets that range from intimate to ambient, allowing users to shape their sound with ease. The instrument has been recorded with an extensive multi-microphone setup, including condenser, ribbon and valve close mics, as well as Decca Tree and surround configurations. As always, Standard and Full versions of the library are available, with the latter offering the full complement of mic signals.
Alongside the new instrument, VSL have released a free update that kits their Synchron Piano Player out with a MIDI recorder and player (standalone mode), MIDI demos by composers such as Mozart, Chopin, Beethoven and Schumann (also available in plug-in mode), velocity curve presets for a wide range of keyboard controllers, the option to lock a user's preferred pedal noise volume per piano, and other improvements. To mark the occasion, all of the company's Synchron and Studio Pianos are available at discounts of up to 45%. The promotion covers flagship instruments and historic gems from renowned piano makers such as Bösendorfer, Fazioli, Steinway & Sons, Yamaha, Bechstein and Blüthner, all captured by VSL at their Synchron Stage Vienna facility. You can find out more via the link below. www.vsl.co.at/piano-promotion-2026 Synchron Harpsichord (Blanchet) runs in VSL's Synchron Player, which is supported on PCs running Windows 10 or 11, and Macs running macOS 10.14 and above. VST, VST3, AU and AAX versions are available. Synchron Harpsichord (Blanchet) is available now, and is currently (7 April 2026) being offered at the following introductory prices: www.ilio.com/synchron-harpsichord-blanchet
New Delhi [India], April 7 (ANI): Anthropic's run-rate revenue surpassed the USD 30 billion threshold, marking a substantial increase from the approximately USD 9 billion reported at the close of 2025, according to the company. 'Demand from Claude customers has accelerated in 2026. Our run-rate revenue has now surpassed $30 billion--up from approximately $9 billion at the end of 2025,' Anthropic said in a statement. The company noted that the surge in revenue followed an acceleration in demand from Claude customers throughout 2026. As per the company, the number of business clients spending over USD 1 million on an annualized basis doubled. While Anthropic reported 500 such customers during its Series G fundraising in February, 'today that number exceeds 1,000, doubling in less than two months.' This financial growth coincided with the signing of a new agreement with Google and Broadcom to secure multiple gigawatts of next-generation Tensor Processing Unit (TPU) capacity. 'This significant expansion of our compute infrastructure will power our frontier Claude models and help us serve extraordinary demand from customers worldwide,' Anthropic said in a statement. 'This ground breaking partnership with Google and Broadcom is a continuation of our disciplined approach to scaling infrastructure: we are building the capacity necessary to serve the exponential growth we have seen in our customer base while also enabling Claude to define the frontier of AI development,' said Krishna Rao, CFO of Anthropic. 'We are making our most significant compute commitment to date to keep pace with our unprecedented growth.' The vast majority of the new compute capacity was slated for placement within the United States. This move represented an expansion of the company's November 2025 commitment to invest USD 50 billion in American computing infrastructure. 
The arrangement also deepened existing collaborations with Google Cloud, building on TPU capacity increases previously announced in October. Despite the expanded deal with Google and Broadcom, Anthropic maintained its multi-platform hardware approach. The firm continued to train and run Claude on a range of AI hardware, including AWS Trainium, Google TPUs, and NVIDIA GPUs. The company stated that this diversity of platforms allowed for better performance and greater resilience for customers who depended on the model for critical work. 'Amazon remains our primary cloud provider and training partner, and we continue to work closely with AWS on Project Rainier,' the company said. Claude also maintained its position as the only frontier AI model available to customers across the three largest cloud platforms: Amazon Web Services (Bedrock), Google Cloud (Vertex AI), and Microsoft Azure (Foundry). (ANI)

CA - CriticalRiver Inc. and Anthropic today announced a partnership that positions CriticalRiver among a select group of global organizations chosen to help enterprises adopt and deploy Claude at scale. The partnership marks a significant milestone in CriticalRiver's evolution as an Agentic Enterprise enabler, bringing its deep implementation expertise to one of the most consequential AI programs launched in 2026. The AI economy represents a multi-trillion dollar opportunity in the making. Capturing it requires more than access to frontier AI models. For enterprises, it requires a partner who understands their systems, their constraints, and the outcomes they are accountable for. That is precisely the gap this partnership is designed to close, combining Anthropic's frontier capabilities with CriticalRiver's deep enterprise implementation expertise to deliver responsible AI where it matters most. Anthropic has assembled a select group of global partners to serve this role: organizations with the domain depth, engineering rigor, and enterprise reach to take Claude from proof of concept to production. CriticalRiver's inclusion in that group reflects a decade of outcomes-led work across global enterprises. This is not a badge. It is a mandate. Backed by an initial $100 million investment for 2026, the Anthropic Claude Partner Network provides enablement, resource acceleration and joint market development for partners helping enterprises adopt Claude, with Anthropic expecting to invest even more over time. Through the partnership, CriticalRiver gains access to Anthropic's Partner Portal, Anthropic Academy training materials, priority support, and the first Claude technical certification. For CriticalRiver's customers, this translates into deeper implementation expertise and faster deployment. 
Speaking at the launch of the Anthropic Claude Partner Network, Steve Corfield, Head of Global Business Development and Partnerships at Anthropic, set out the ambition behind the program: "This infrastructure is built so that any firm, at any scale, can build a Claude practice. Our partners are instrumental in getting enterprises from proof of concept to production with Claude, and we're making sure they have everything they need to do it," said Steve Corfield. At Anthropic's inaugural Partner Summit in Carlsbad, California, he added, "We really want to demonstrate that Anthropic is the most committed AI company in the world to the partner ecosystem." Among the organizations Anthropic has chosen to help deliver on that commitment, CriticalRiver brings the implementation depth to take Claude from pilot to production across some of the world's most complex enterprise environments. "Our partnership with Anthropic is a defining step in CriticalRiver's evolution as an Agentic Enterprise enabler. Anthropic's commitment to responsible, high-capability AI is exactly the foundation our clients need, and we are proud to be the trusted partner that makes that vision a reality," said Anji Maram, Founder and CEO, CriticalRiver Inc. Invited as part of the first cohort of global partners, the CriticalRiver team attended the Partner Kickoff Summit held earlier this month at Carlsbad, California. The invite-only event brought together select Hyperscaler, System Integrator, Services, and ISV partners for two days of executive keynotes, strategic vision sessions, and go-to-market alignment, themed Win Enterprise AI. Together. For Tarun Srivastava, Chief Customer Officer at CriticalRiver Inc., the summit reinforced the scale of what is now in motion: "Feels like one of those early moments you remember later. The focus is not on incremental use cases. It's on productivity at scale, agentic workflows, and building truly transformational products. 
There is strong, deliberate investment in the partner ecosystem. The intent is clearly to co-build and take these capabilities into real enterprise environments," said Tarun Srivastava. Enterprises that partner with CriticalRiver gain a direct path from AI ambition to AI impact. Enterprises working with CriticalRiver gain access to Claude's advanced language and reasoning capabilities across their core workflows, from optimizing existing systems and automating manual processes to deploying pre-built vertical solutions that deliver measurable results faster. Whether the goal is reducing operational overhead, accelerating software delivery, or building intelligent agents for complex workflows, CriticalRiver brings the implementation expertise to make it real. As Anthropic continues to scale the Anthropic Claude Partner Network globally, CriticalRiver is positioned to serve as the trusted implementation partner for organizations ready to move from AI ambition to AI reality.

Fluor Corporation has entered into a contract with X-energy to support the company's proposed advanced nuclear project at Dow's UCC Seadrift Operations in south Texas. Under the agreement, Fluor will initially deliver Front-End Loading Stage 2 (FEL-2) services. FEL-2 focuses on project definition, strategic planning, feasibility assessment, cost control and risk mitigation. Fluor will recognise the undisclosed contract value for this initial portion of work in the first quarter of 2026. The X-energy project proposes to develop four 80-megawatt small modular reactor (SMR) units to supply Dow's Seadrift site with safe, reliable, carbon-free electricity and industrial steam, replacing aging energy and steam infrastructure. The project is supported by the US Department of Energy's (DOE) Advanced Reactor Demonstration Program (ARDP), which accelerates the commercialisation of advanced nuclear technologies through cost-shared partnerships with industry. A construction permit application was submitted in March 2025 and is currently being reviewed by the US Nuclear Regulatory Commission. "X-energy's technology offers a powerful pathway for small modular reactors to deliver safe, reliable and fit-for-purpose baseload power in an industrial setting," said Pierre Bechelany, Fluor's Business Group President of Energy Solutions. "With eight decades of nuclear experience, Fluor brings the proven expertise and disciplined execution required to help advance this landmark project." X-energy was selected by the DOE in 2020 to develop, license and build its Xe-100 advanced SMR and a first TRISO-X fuel fabrication facility. Since then, the company has completed engineering and preliminary reactor design, and advanced development and licensing of its fuel facility in Oak Ridge, Tennessee. The Seadrift project is expected to become the first grid-scale advanced nuclear reactor deployed to serve an industrial facility in North America.
Dow's UCC Seadrift Operations span 4,700 acres and produce more than 4 billion pounds of materials annually for applications including food packaging, footwear, wire and cable insulation, solar cell components, and medical and pharmaceutical packaging. -OGN/TradeArabia News Service

The launch took off from Space Launch Complex 4 East at Vandenberg Space Force Base after 8 p.m. SpaceX's Falcon 9 rocket lit up the sky across Southern California Monday night, carrying 25 Starlink satellites into orbit. Originally scheduled for Sunday evening, the launch was delayed due to weather conditions. The satellites are targeted for low-Earth orbit. "Following stage separation, the first stage will land on the Of Course I Still Love You droneship, which will be stationed in the Pacific Ocean," SpaceX said on its website. The company warned residents in Santa Barbara, San Luis Obispo, and Ventura counties that they might hear one or more sonic booms. SpaceX operates a Starlink constellation of satellites orbiting about 340 miles above Earth. The network is designed to deliver high-speed internet around the globe. Under the right lighting conditions, the satellites can appear in a train as they move across the night sky.

Leading artificial intelligence companies including OpenAI, Google and Anthropic have begun coordinating efforts to counter what they describe as unauthorised extraction of their model capabilities by overseas competitors. The companies are sharing intelligence through the Frontier Model Forum, a nonprofit body they co-founded with Microsoft in 2023. The initiative is aimed at identifying and preventing "adversarial distillation" -- a technique where outputs from advanced AI systems are used to train competing models without permission. The collaboration marks an unusual alignment among rivals, reflecting growing concern that some actors, particularly in China, may be replicating proprietary AI systems at significantly lower cost. Industry estimates suggest such practices could be costing US companies billions of dollars annually in lost revenue. Distillation itself is a widely used method in AI development, where a smaller "student" model learns from a larger "teacher" model. However, companies argue that unauthorised use of this technique to recreate commercial systems crosses legal and ethical boundaries. OpenAI has publicly raised concerns about Chinese startup DeepSeek, alleging that it attempted to replicate capabilities developed by US firms. The issue gained prominence following the release of DeepSeek's R1 reasoning model in early 2025, which triggered scrutiny across the industry. US AI firms have also warned that such practices could lead to the creation of powerful models without built-in safety safeguards, increasing the risk of misuse. Concerns range from economic competition to broader national security implications.
The rise of open-weight models in China, which make parts of their systems publicly accessible, has added to the pressure on US companies that rely on proprietary models and paid access to recover massive infrastructure investments.
