The latest news and updates from companies in the WLTH portfolio.
Anthropic has published a detailed technical guide outlining five distinct coordination patterns for multi-agent AI systems, providing developers with a practical framework for building autonomous applications that require multiple AI agents working together. The guide, released through Claude's official blog, addresses a growing pain point in AI development: teams choosing overly complex architectures when simpler solutions would suffice. Anthropic's recommendation is blunt -- start with the simplest pattern that could work and evolve from there. The framework breaks down multi-agent coordination into five approaches, each suited to different use cases:

Generator-verifier pairs one agent that produces output with another that evaluates it against explicit criteria. Think code generation, where one agent writes code while another runs tests. Anthropic warns this pattern fails when teams implement the loop without defining what verification actually means -- creating "the illusion of quality control without the substance."

Orchestrator-subagent uses a hierarchical structure where a lead agent delegates bounded tasks. Claude Code already uses this approach, dispatching background subagents to search large codebases while the main agent continues primary work.

Agent teams differ from orchestrator-subagent in one critical way: worker persistence. Instead of terminating after each task, teammates stay alive across assignments, accumulating domain knowledge. This works well for large-scale migrations where each agent develops familiarity with its assigned component.

Message bus architecture suits event-driven pipelines where workflow emerges from events rather than predetermined sequences. Security operations systems exemplify this -- alerts route to specialized agents based on type, with new agent capabilities plugging in without rewiring existing connections.

Shared state removes central coordinators entirely. Agents read from and write to a persistent store directly, building on each other's discoveries in real time. Research synthesis systems benefit here, where one agent's findings immediately inform another's investigation.

Anthropic doesn't shy away from documenting failure modes. Generator-verifier loops can stall indefinitely if the generator can't address feedback -- maximum iteration limits with fallback strategies are essential. Orchestrator-subagent creates information bottlenecks; critical details often get lost when routing through a central coordinator. Agent teams struggle when work isn't truly independent, and shared resources compound the problem -- multiple agents editing the same file creates conflicts that require careful partitioning. Message bus architectures make debugging harder, since tracing event cascades across five agents requires meticulous logging. Shared state risks reactive loops where agents keep responding to each other's updates without converging, burning tokens indefinitely. The solution: first-class termination conditions, such as time budgets or convergence thresholds.

For most applications, Anthropic recommends beginning with orchestrator-subagent. It handles the widest range of problems with minimal coordination overhead. Production systems often combine patterns -- orchestrator-subagent for overall workflow, with shared state for collaboration-heavy subtasks. The company plans follow-up posts examining each pattern with production implementations and case studies. For developers building AI applications that require multiple agents -- whether for code review, security operations, or research synthesis -- this framework provides concrete guidance on matching architecture to actual requirements rather than perceived sophistication.
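The generator-verifier failure mode the guide warns about -- a loop that stalls because the generator can never satisfy the verifier -- is straightforward to guard against with a hard iteration cap and a fallback. Here is a minimal Python sketch of the pattern; the `generate` and `verify` functions are illustrative stubs (in a real system they would call an LLM), and the length-based verification criterion is an assumption for demonstration, not Anthropic's implementation:

```python
from dataclasses import dataclass


@dataclass
class Verdict:
    passed: bool
    feedback: str


def generate(task: str, feedback: str = "") -> str:
    # Stub generator: a real implementation would prompt a model,
    # folding in the verifier's feedback on each retry.
    revision = f" (revised: {feedback})" if feedback else ""
    return f"solution for {task!r}{revision}"


def verify(candidate: str) -> Verdict:
    # Stub verifier with an EXPLICIT, checkable criterion (here a trivial
    # length limit). Without concrete criteria, the loop is only "the
    # illusion of quality control without the substance."
    if len(candidate) < 200:
        return Verdict(True, "ok")
    return Verdict(False, "too long")


def generate_verify(task: str, max_iters: int = 3) -> str:
    feedback = ""
    for _ in range(max_iters):
        candidate = generate(task, feedback)
        verdict = verify(candidate)
        if verdict.passed:
            return candidate
        feedback = verdict.feedback
    # Fallback strategy: the loop must terminate even when the generator
    # cannot address the feedback -- escalate rather than spin forever.
    return f"ESCALATED: could not verify {task!r} after {max_iters} attempts"


print(generate_verify("parse config file"))
```

The design point is exactly the one the guide stresses: `verify` encodes explicit criteria, and the loop is bounded by `max_iters` with a defined fallback, so it cannot burn tokens indefinitely.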

A Frontier jet to Atlanta slammed on its brakes at Los Angeles International Airport late Wednesday, narrowly avoiding disaster after two service trucks suddenly crossed in front of the plane. The incident gave the pilot a fright, and he immediately reported it to air traffic control. "Hey ground... did you see this?" the pilot said. Air traffic control said it had not seen the trucks cross in front of the plane. "We just had two trucks just cut us off. We had to slam on the brakes not to hit [them]," he told LAX ground ATC. "It happened so fast, both of us are just like holy sh*t...I might have to call the flight attendants, make sure everybody's all right in the back," he added. "It was real close. Close as I've ever seen." The plane was traveling at around 10-15 miles per hour when it braked, bringing the massive Airbus A321neo to a halt. 217 passengers and 7 crew members were on board, CBS News reported. It's unclear why the trucks were there. The delayed flight later arrived in Atlanta around 6:30 a.m. Eastern Time, about an hour late, according to flight tracking records. Frontier Airlines, a budget airline known for its green planes with wild animals on them, is aware of the incident. "We are aware of the incident. No injuries were reported to passengers or crew. We thank our crew for their vigilance and professionalism," the airline told CBS News. The Federal Aviation Administration is investigating. The near-accident comes after a plane collided with a fire truck at New York's LaGuardia Airport last month, killing two pilots and hospitalizing dozens.

SpaceX posted a nearly $5 billion loss in 2025 on revenue of more than $18.5 billion, according to a report cited by Reuters. The reported result includes Elon Musk's artificial intelligence company xAI after its acquisition by SpaceX in February. Reuters said it could not immediately verify the figures, while SpaceX did not respond to a request for comment outside regular business hours. The figures come at a key time for the company. SpaceX confidentially filed for a US listing in March and is seeking a valuation of more than $1.75 trillion. The company is also pushing ahead with launch operations, satellite services, and new artificial intelligence plans. The update marks a sharp change from the prior year. SpaceX had generated about $8 billion in profit in 2024 on revenue of $15 billion to $16 billion. Against this backdrop, the reported 2025 loss adds fresh attention to the company's finances as it moves toward a possible public debut.

* Anthropic claims its Mythos security software is so powerful and dangerous that it will release it to only a few select partners
* Is Mythos as dangerous as Anthropic claims? Or maybe it's an example of "criti-hype"
* AI hype and criti-hype make it much harder to discuss real benefits and costs

Tuesday was a good day to be afraid that the world was coming to an end. President Donald Trump was threatening to rain destruction on Iran, making World War III seem likely. And Anthropic announced Mythos, security software it claimed was so powerful and dangerous that it would release the tool to only a few select partners. I was not afraid. The world has been on the verge of World War III my whole life. And Tuesday night is trash night for us, so if the world ended then, at least I wouldn't have to take out the trash. As for Mythos -- maybe it is as dangerous as Anthropic claims. Or maybe it's an example of what Lee Vinsel, associate professor of science, technology and society at Virginia Tech, called "criti-hype."

What is criti-hype -- and why does it matter for AI?

Credit goes to my friend Cory Doctorow for introducing me to criti-hype, a word and concept he uses often in his writing, though it's not as well-known as the word and concept he himself coined. Criti-hype is the other side of hype. Where hype makes glorious promises about the benefits of technology, criti-hype warns that the technology is an existential threat. Hype and criti-hype serve the same goal: Give the people delivering the message money, and plenty of it, to either deliver on the promise or protect you from the threat. And if a new technology is powerful enough to pose a catastrophic danger, gosh, wouldn't it be great if you licensed that technology so it was working for you? Vinsel cites several examples of criti-hype, including some outstanding examples of sober science journalism. AI traffics in both flavors of hype.
Champions claim AI will give us eternal life, cure cancer or deliver fully autonomous self-driving telco networks (granted, two of these things are more important than the third). The most extreme warnings, meanwhile, claim that AI threatens to take all our jobs or kill everyone. AI has real benefits and costs. At Fierce Network, we've written at length about AI's vast potential for telco network automation and customer service. AI is great for BSS/OSS and other back-office work. Agentic AI is potentially revolutionary, disrupting business the way the internet and mobile did. And AI also carries real threats -- job displacement and downward pressure on wages. AI drives a data center boom, bringing pollution, wasted power and water, and community backlash. Both hype and criti-hype make it much harder to discuss real benefits and costs. You can't hear yourself think over the shouting.

What Anthropic claims about Mythos and Project Glasswing

Anthropic's Project Glasswing, announced Tuesday, smells like both hype and criti-hype. Glasswing is an initiative comprising a dozen top companies in the tech industry -- Amazon Web Services, Anthropic, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan Chase, the Linux Foundation, Microsoft, Nvidia and Palo Alto Networks -- "in an effort to secure the world's most critical software." Anthropic explains: "We formed Project Glasswing because of capabilities we've observed in a new frontier model trained by Anthropic that we believe could reshape cybersecurity. Claude Mythos Preview is a general-purpose, unreleased frontier model that reveals a stark fact: AI models have reached a level of coding capability where they can surpass all but the most skilled humans at finding and exploiting software vulnerabilities." The company continues, "Mythos Preview has already found thousands of high-severity vulnerabilities, including some in every major operating system and web browser.
Given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely. The fallout -- for economies, public safety and national security -- could be severe. Project Glasswing is an urgent attempt to put these capabilities to work for defensive purposes." Anthropic extended access to "over 40 additional organizations that build or maintain critical software infrastructure so they can use the model to scan and secure both first-party and open-source systems." And the AI provider will offer up to $100 million in usage credits for Mythos Preview and $4 million in direct donations to open-source security organizations. Anthropic concludes its short announcement with a rallying cry to the world to rise to the task of "defending the world's cyber infrastructure" against threats. Cisco picks up the torch in a blog post signed by Anthony Grieco, SVP & chief security and trust officer: "Today, I'm proud to share that Cisco is joining the world's most critical cyber defenders to confront the most consequential shift in the history of our industry." Yipes! Fortunately, we have the good gray New York Times to provide a calm perspective. What does the Times have to say? Wow. That's a lot. I need to lie down a minute.

Analyst pushes back on Mythos hype

Mobile analyst Richard Windsor is skeptical. "The real danger from Mythos is that it does something really stupid, such as releasing corporate secrets or posting valuable source code online, as opposed to wiping out humanity," Windsor writes. He continues, "Anthropic is pumping the hype yet again by stating that its Mythos model is so good that it is too dangerous to make it generally available, and it is only allowing a few pre-vetted companies to have access to it.
My guess is that the reality is that Mythos is in beta and is being soft-launched to a few trusted partners so that the kinks and bugs can be worked out of it before it goes on general release." Both OpenAI's GPT-2 in 2019 and its o1 model were considered too dangerous to release, but were released anyway, and the world did not end. Neither model "was the huge leap forward in performance towards artificial superintelligence that was cited as the reason for making them so dangerous, but this commentary did help stoke hype, speculation and probably the ability to raise money," Windsor writes. He made a prediction: "The net result is that right before the next time Anthropic needs to raise money, Mythos will be deemed to be safe and will be made generally available to anyone who wants it." AI is useful and transforming business, and "Anthropic has an edge in being more focused on the enterprise than on the consumer," Windsor says. He adds: "Hence, I think this commentary is just more of the usual hype and that Mythos will be released when it is market-ready, as there is no chance of it commanding a robot army to wipe out humanity. In fact, it is more likely to do something irretrievably stupid that harms its user through data loss or a hack, and it is this danger that Anthropic is working on fixing."

What telcos should do about Mythos -- now and when it's available

What should telcos do? What they've been doing all along -- or what they should have been doing -- but more so: Maintain good security practices, with greater urgency. "Telcos should treat security with the same rigor as other critical infrastructure enterprises, such as fintech or biotech," Roy Chua, AvidThink founder and principal analyst, told Fierce.
Once Mythos becomes generally available, it should be used to review all critical and sensitive software code, alongside other AI-powered analysis tools such as static and dynamic application security testing (SAST and DAST) and advanced vulnerability scanning. Lacking direct access to Mythos, at least in the short term, telcos will need to rely on security partners. "Telcos not yet implementing best practices -- such as SBOM [software bill of materials] management, pre-check-in vulnerability scans and SAST/DAST -- must close these gaps immediately. Rapid adoption of these practices is essential to staying secure," Chua said. And telco operators need to prioritize modernizing legacy software stacks, improving software supply-chain visibility and integrating cyber resilience into network strategies. "Rapid advancements in AI-assisted vulnerability discovery heighten the urgency for these steps," Chua said. Telcos' limited access to Mythos will not put them at a competitive disadvantage, Jack Gold, founder and principal analyst of J. Gold Associates, told Fierce. Telcos already have major initiatives to find vulnerabilities on their networks, and they work with major security companies. Major telco vendors, such as Ericsson, Nokia, Samsung and Cisco, have their own AI initiatives in security. Glasswing and Mythos are focused on securing cloud infrastructure, he said. And Anthropic's claim to have discovered a 27-year-old vulnerability in OpenBSD and multiple Linux kernel flaws should not be alarming. Software has always had built-in vulnerabilities, many undiscovered or undisclosed. "It's not surprising that there are vulnerabilities in old code," Gold said. "But just because there is a vulnerability doesn't mean it's easy to exploit." Even though telcos are critical infrastructure providers, they should not seek direct access to Mythos and Glasswing, Gold said. They should let hyperscalers and Anthropic take the lead, at least at first.
Mythos will likely reach general availability eventually. When it does, telcos should be ready to put it to work alongside existing security tools. Until then, the advice from analysts is clear: don't wait for Anthropic's AI to save you. The work of hardening telco networks against AI-assisted attacks is already overdue -- and no amount of hype changes that.

SpaceX IPO speculation has intensified after the company's reported 2025 loss and rapid revenue expansion. Investors are weighing the reported $1.75 trillion SpaceX IPO valuation, backed by Starlink and xAI assets, while NASA contracts and global launch dominance strengthen long-term investor confidence. Rising operational costs and acquisitions, including xAI, have fueled debate over the financial loss. Starlink's satellite internet growth continues to drive recurring revenue stability across global markets. A SpaceX IPO could reshape the space economy and significantly influence global stock markets, as Musk's vision of Mars colonization and orbital AI data centers continues to expand.
CoreWeave shares popped 13% after the company announced a deal with Anthropic on Friday to power its AI model Claude, following a $21 billion partnership with Meta announced Thursday.

Key Background

CoreWeave primarily generates revenue by building and renting out data centers packed with Nvidia GPUs that provide the energy and processing power to train and run AI models. Demand for infrastructure to develop AI has exploded since the release of ChatGPT in 2022, with Alphabet, Microsoft, Meta and Amazon committing a combined $700 billion just this year in a race to build the most sophisticated and advanced models. On Tuesday, Anthropic announced that its unreleased Mythos model was so powerful that it would hold back from releasing it to the public because of its ability to find vulnerabilities in software programs. The Claude maker said it would instead provide the model to 40 select companies including Apple, Amazon, Google and Microsoft in a cybersecurity initiative dubbed Project Glasswing. Anthropic was founded in 2021 by siblings Dario and Daniela Amodei and several former OpenAI employees who departed the ChatGPT maker over concerns about the company's direction on AI safety. Anthropic is now valued at $380 billion and announced Monday that it had reached an annual revenue run rate of $30 billion, surpassing OpenAI's $25 billion annualized revenue as of February. OpenAI is now valued at $852 billion.

Big Number

$2.5 trillion. That's how much research firm Gartner expects global spend to build AI will reach in 2026, up 44% from last year. AI infrastructure will drive the spend, making up more than half of that figure, the firm estimates.

Tangent

The deals come as CoreWeave is simultaneously on an aggressive financing spree. The company is targeting $30 billion to $35 billion in capital expenditures for 2026, up from roughly $15 billion in 2025.
Billionaire CEO Mike Intrator defended the spending strategy after the company's February earnings report drew criticism for the increase. "I understand the concerns that people have as they see us allocating a massive scale of money to this market, but the truth of the matter is, our backlog is enormous," he told CNBC at the time. Since going public in March 2025, the stock is up 160%, but it is down nearly 45% from its peak last June. This year, the stock has been volatile, and it is up 30% since January.

What To Watch For

With the financial terms of the Anthropic deal undisclosed, investors will be watching for any additional color on the contract's scope and duration in CoreWeave's next quarterly earnings report, scheduled for May 13. Investors will also be watching whether CoreWeave can turn its debt pile into sustainable, high-margin cash flows. CoreWeave reported $5.1 billion in 2025 revenue -- nearly triple the $1.9 billion in 2024 -- though it was not profitable, posting $1.1 billion in net losses.

The AI firm has restricted the third-party agent tool OpenClaw from connecting to Claude models under standard plans. Developers would now need to shift to API-based, usage-billed access.

Artificial intelligence (AI) major Anthropic has blocked its Claude models from being used with OpenClaw, a third-party agent tool built by developer Peter Steinberger. In a post on X, Steinberger wrote, "It's gonna be harder in the future to ensure OpenClaw still works with Anthropic models." The post included a screenshot showing that OpenClaw can no longer reliably connect to Claude under standard subscription plans, in line with Anthropic's announcement on April 4. OpenClaw allows users to run autonomous AI agents that can interact with apps and services, often requiring continuous model access. Steinberger's tool had integrated Claude as one of its primary model options. Anthropic has indicated that such usage patterns place heavier demands on its infrastructure than typical chatbot interactions. The company has been tightening controls on how its models are accessed, particularly through automated or agent-based systems. Under the updated setup, developers seeking to use Claude with tools like OpenClaw may need to rely on separate API access and usage-based billing instead of standard subscriptions. Meanwhile, OpenClaw recently introduced a plugin marketplace called ClawHub, alongside a slate of new AI features aimed at accelerating the development of autonomous agents. The development comes as US Treasury Secretary Scott Bessent and Federal Reserve chair Jerome Powell held an urgent meeting with major bank CEOs to alert them to cyber risks associated with Anthropic's latest AI model, Mythos, per a report by Reuters on Friday.
Formula 1's radical 2026 reset is already under review. After just three races, the FIA and F1 are locked in urgent talks to recalibrate the sport's new power unit regulations, with "energy management" emerging as the central fault line. The shift to a 50/50 split between internal combustion engines and electrical power, headlined by a near threefold increase in MGU-K output to 350kW, has created unintended consequences on the track. Chief among them is "clipping," where drivers run out of battery energy mid-straight, leading to sudden and significant power loss. The effect is not just tactical, it is potentially dangerous, an issue that has been flagged by multiple drivers. Cars at full throttle are abruptly slowing while rivals are still moving at top speeds, creating unpredictable speed differences on straights. The FIA's immediate response has been to target how energy is harvested and deployed within the regulations. A short-term fix was already implemented at Suzuka, where the maximum energy recharge allowed in qualifying was reduced from 9MJ to 8MJ. This was done with the aim to limit excessive lift-and-coast phases and restore qualifying laps as pure performance runs rather than energy-saving exercises. But deeper structural changes are now on the table. Drivers across the grid have been unusually aligned in their criticism. Max Verstappen has flagged the "yo-yoing" effect on straights, calling out the artificial nature of cars abruptly losing power. The four-time champion has gone as far as labelling the new regulations as "Formula E on steroids." Lando Norris went further, warning that the extreme speed differentials between cars using boost modes and those "clipping" could trigger a "big accident," particularly in slipstream-heavy zones.
Lewis Hamilton, who has been positive about the lighter and more agile 2026 cars, noted that reduced downforce has made energy deployment even more critical for stability. The FIA has now set a tight timeline to respond. A meeting between the sport's governing body and technical experts from teams and power unit manufacturers was held on Thursday, April 9 for preliminary talks about the proposed changes. The next meeting, scheduled for April 15, will address specific sporting regulation changes, followed by technical discussions a day later. A high-level meeting involving team principals and F1 leadership is scheduled for April 20, where a consensus solution is expected to be finalised. Any agreed changes will still require approval from the FIA's World Motor Sport Council, though that step is typically procedural once alignment is reached.

The token provides economic exposure without equity ownership, voting rights, or company endorsement. Cryptocurrency exchange Bitget launched IPO Prime on Friday, debuting the platform with preSPAX -- a token that provides retail investors exposure to SpaceX's future public market performance. The Republic-issued token offers economic upside tied to SpaceX's eventual IPO or acquisition, marking a new intersection between crypto infrastructure and traditional pre-IPO investing. The preSPAX token mirrors potential economic gains from SpaceX upon a qualifying event like an IPO, but grants no equity, voting rights, or ownership in the company. SpaceX has not endorsed or authorized the offering. The subscription window will be open from April 18-21, with token distribution and OTC trading scheduled to begin once it closes. "IPO Prime allows users to participate earlier in a company's growth cycle, with the flexibility of continuous trading," said Bitget CEO Gracy Chen, in a statement. "This shifts how and when investors can engage with emerging companies, which gives retailers and new investors a chance to buy in early." The token launch comes as SpaceX moves toward a public listing. The company confidentially filed with the SEC on April 1, targeting a June 2026 IPO with a valuation of $1.75 trillion while seeking to raise over $75 billion. SpaceX currently trades at a $1.43 trillion valuation on the Nasdaq Private Market, a secondary venue for private company shares. Traders on Myriad -- a prediction market platform operated by Decrypt's parent company, Dastan -- strongly believe that SpaceX's IPO will yield a market cap above $1.3 trillion at the end of the first day of trading, currently penciling in 88% odds. Bitget's entry into tokenized pre-IPO investing reflects broader convergence between crypto and traditional markets.
The Seychelles-based exchange, which claims 125 million users, already offers tokenized stocks, ETFs, commodities, and forex alongside cryptocurrencies. Republic previously launched its own rSPAX Mirror Tokens on Solana, offering similar SpaceX exposure. The space faces growing competition from both crypto and traditional players. Solana-based PreStocks offers comparable pre-IPO tokens, while established venues like Nasdaq Private Market and Forge Global dominate traditional secondary trading. Major exchanges are expanding their offerings, too, with Coinbase and Kraken offering stock trading options.

BNP Paribas analyst Nick Jones said accelerating AI adoption trends and rising infrastructure demand are reinforcing the strong positioning of major cloud and platform players.

Gemini And Claude Gain Momentum

He added that while OpenAI's ChatGPT remains the category leader, it lost share across web and mobile during the same period.

AWS Positioned As A Key AI Beneficiary

Jones said data points, including comments from Andy Jassy, support the view that Amazon.com Inc (NASDAQ:AMZN) is well-positioned in AI. He highlighted Amazon Web Services' growing AI and chip-related revenue streams, alongside strong demand signals and expanding backlog, as key drivers of its outlook.

Infrastructure Demand Supports Big Tech Investment

Price Action: Alphabet shares were down 0.53% at $316.78, and Amazon.com shares were up 2.09% at $238.53 at the time of publication on Friday, according to Benzinga Pro data.

Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell called the CEOs of every major systemically important U.S. bank to Treasury headquarters on Tuesday to warn them about cybersecurity risks from Anthropic's Claude Mythos model.

Why Banks Are Exposed

Mythos found thousands of zero-day vulnerabilities across every major operating system and web browser, including a 27-year-old bug in OpenBSD that survived 5 million passes by automated testing tools. That's a problem for Wall Street, where a single breach at a systemically important bank could cascade through the financial system. Many of the largest U.S. banks still run core systems on legacy code dating back decades. JPMorgan's retail banking core reportedly still uses elements of the Hogan system from the 1980s. If Mythos can surface flaws that every existing security tool missed, banks might be one of the more vulnerable sectors.

The Cybersecurity Trade

Anthropic launched Project Glasswing this week, giving roughly 40 companies access to Mythos for defensive security work. The market appears to be repricing the entire sector around a simple question: if an AI model can find vulnerabilities that elite human researchers missed for decades, what are cybersecurity companies actually selling?

Anthropic Keeps Pulling Away

Anthropic's annualized revenue reportedly hit $30 billion in April, surpassing OpenAI's roughly $24 billion run rate. The company went from $9 billion at year-end 2025 to $30 billion in about four months, with over 1,000 companies now spending more than $1 million annually on Anthropic products. Polymarket traders give Anthropic a 63% chance of going public before OpenAI, with an October 2026 listing at a $380 billion valuation considered the base case. A separate contract prices a 94% chance Anthropic tops $500 billion in valuation by year-end.

Shares of CoreWeave, Inc. (CRWV) gained over 12% on Friday after it announced a multi-year agreement with Anthropic. The stock is currently trading at $103.20, up $11.20 or 12.17%, on the Nasdaq. It opened at $93.44 after closing the previous session at $92.00. The stock has traded between $33.51 and $187.00 over the past 52 weeks. CoreWeave signed a multi-year agreement with Anthropic to support the deployment of its Claude AI models, with compute capacity coming online later this year. The phased rollout could expand over time, highlighting strong demand for large-scale AI compute services.

CoreWeave will use its cloud infrastructure to help run Anthropic's Claude artificial-intelligence models as part of a new partnership between the two companies. ---- TotalEnergies, Aramco Refinery Shut After Being Damaged The Saudi Aramco Total Refining and Petrochemical Co., or Satorp, is a joint venture with TotalEnergies, which holds a 37.5% interest. ---- Alibaba's New AI Video-Generation Model Tops Global Ranking The model, called HappyHorse 1.0, is leading a global ranking that tracks AI models' abilities. ---- Meta Banks on AI to Clear the Smoke of Social-Media Lawsuits While the tech giant has the means to fight in court, ongoing legal battles could temper a long-term recovery in its shares. ---- Insurers Take Bigger Risks Than Before 2008-09 Crisis, Report Warns An industry praised for its resilience is now "significantly worse off" as it invests huge sums in private credit, A.M. Best said. ---- Florida AG Investigates OpenAI, ChatGPT, Citing National Security Risks, FSU Shooting Florida Attorney General James Uthmeier raised concerns that OpenAI's models or data could be used by adversaries of America, namely China. ---- Apple has been posting a variety of new TikTok videos. They could be just what's needed to lift the Neo. ---- SpaceX Is Going Public. Why a Tesla Merger Could Be Musk's Real Endgame. From the biggest IPO on record to the largest M&A deal in history? ---- Is MSG Maker Ajinomoto Sitting on an AI Goldmine? This Investor Thinks So An activist investor has built a stake in a Japanese company it says has a lucrative monopoly on a material vital to artificial-intelligence infrastructure. ---- Sazerac Eyes Deal With Jack Daniel's Maker Brown-Forman Interest comes as Pernod Ricard has been discussing a mostly stock deal with Brown-Forman.
---- CarMax Will Add Two Board Members After Talks With Activist Investor Starboard The move comes after Starboard urged the used-car retailer's new chief executive officer, Keith Barr, to revamp its pricing framework, streamline its digital processes, and cut costs.

President Donald Trump singled out Palantir Technologies Inc. for its "great war-fighting capabilities and equipment" in a Truth Social post on Friday. Trump's post came minutes after a segment on Fox Business discussing Michael Burry's bear case against Palantir, in which the money manager made famous by The Big Short argued that rival Anthropic PBC is winning the race for enterprise AI spending. Trump's post echoed comments made by guest Michael Lee, who championed the role Palantir's technology played in the raid to capture Venezuelan strongman Nicolas Maduro.

Last month, the Pentagon notified Anthropic that it had determined its products pose a risk to the US supply chain, after the startup demanded assurances that its AI wouldn't be used for mass surveillance of Americans or in autonomous weapons. Anthropic is challenging the Pentagon designation in court.

Palantir was set up in the early 2000s by Peter Thiel and four other co-founders, including Chief Executive Officer Alex Karp, and secured early investment from In-Q-Tel, the CIA's investment arm. The company relies on the US government for the largest share of its revenue, with contracts worth close to $900 million with the Pentagon last year, along with smaller contracts with ICE, the Treasury, and other government agencies, according to data compiled by Bloomberg Government. The Pentagon is planning to make Palantir's Maven Smart System a so-called program of record, which "will provide the stable funding and resourcing necessary" for development, integration, and for commanders to fight and win wars, Deputy Defense Secretary Steve Weinberg said in a memo. The US military already uses the Maven digital mission control platform in every combatant command, or regional theater. The system provides a digital map display of battlefields, helps identify and select targets, and pairs them with weapons systems. It is widely in use in US operations against Iran. 
Gregory Allen, a senior adviser at the Center for Strategic & International Studies, said he found it unusual that Trump praised the company instead of a specific technology. "It's like a partial oddity, praising Palantir the company as opposed to praising Maven Smart System," he said. "The analogy would be the president praising Lockheed Martin as opposed to praising the F-35 fighter jet." Shares of the defense software maker erased losses after Trump's post. The stock was down about 1.5% at 11:30 a.m. in New York after declining as much as 7.3% earlier on Friday.

Arnold Davick, host of 2-Minute Tech Briefing, is a journalist and multimedia storyteller with more than a decade of experience reporting in the New York market. He has covered breaking news, politics, culture, and entertainment for NY1 Spectrum News, Access Hollywood, News 12, and Verizon FiOS1 News. Fluent in Spanish and skilled in live reporting, interviewing, and digital production, Arnold has interviewed industry leaders, public officials, and cultural figures. His storytelling bridges the gap between complexity and connection, bringing clarity and humanity to the stories he covers.

Hello and welcome to your 2-Minute Tech Briefing from ComputerWorld. I'm your host, Arnold Davick, reporting from the floor of the New York Stock Exchange. Here are the top IT stories you need to know for Wednesday, April 8. From CSO Online: A leaked draft revealed Anthropic's Mythos, a powerful AI model with stronger reasoning and coding. Details of what the company calls its most capable AI model yet surfaced through a data leak in its content management system, revealing an LLM with sharply improved reasoning and coding skills. Anthropic reportedly plans a cautious rollout, starting with enterprise security teams and early-access customers while studying near-term cybersecurity risks. From NetworkWorld: Amazon is waiving a month's AWS usage charges for customers using two Middle Eastern data centers after drone strikes damaged facilities and disrupted services.

CoreWeave shares popped 13% on Friday after the company announced a deal to power Anthropic's AI model Claude, following a $21 billion partnership with Meta announced Thursday.

Key Facts
Financial terms of the multi-year Anthropic agreement were not disclosed, though the deal means nine of the ten leading AI providers -- including OpenAI, Google, Microsoft and Meta -- now use CoreWeave's platform, according to a Friday press release. The Anthropic news comes one day after CoreWeave announced a $21 billion deal to supply Meta with AI cloud capacity through December 2032, delivered from multiple data centers powered in part by Nvidia chips. The back-to-back announcements pushed CoreWeave's total contracted commitments with Meta alone to $35 billion, with the new Meta pact building on a prior $14.2 billion arrangement. Anthropic is the latest AI model developer to become a customer, highlighting the scramble among tech companies to secure more hardware, processing power and energy -- key for training and deploying increasingly complex AI models.

Key Background
CoreWeave primarily generates revenue by building and renting out data centers packed with Nvidia GPUs that provide the energy and processing power to train and run AI models. Demand for infrastructure to develop AI has exploded since the release of ChatGPT in 2022, with Alphabet, Microsoft, Meta and Amazon committing a combined $700 billion just this year in a race to build the most sophisticated and advanced models. On Tuesday, Anthropic announced that its leaked Mythos model was so powerful that it would hold back from releasing it to the public because of its ability to find vulnerabilities in software programs. The Claude maker said it would instead provide the model to 40 select companies, including Apple, Amazon, Google and Microsoft, in a cybersecurity initiative dubbed Project Glasswing.
Anthropic was founded in 2021 by siblings Dario and Daniela Amodei and several former OpenAI employees who departed the ChatGPT maker over concerns about the company's direction on AI safety. Anthropic is now valued at $380 billion and on Monday announced it had reached an annual revenue run rate of $30 billion, surpassing OpenAI's $25 billion annualized revenue as of February. OpenAI is now valued at $852 billion.

Big Number
$2.5 trillion. That's how much research firm Gartner expects global spending to build AI to reach in 2026, up 44% from last year. AI infrastructure will drive the spending, making up more than half of that figure, the firm estimates.

Tangent
The deals come as CoreWeave is simultaneously on an aggressive financing spree. The company is targeting $30 billion to $35 billion in capital expenditures for 2026, up from roughly $15 billion in 2025. Billionaire CEO Mike Intrator defended the spending strategy after the company's February earnings report drew criticism for the increase. "I understand the concerns that people have as they see us allocating a massive scale of money to this market, but the truth of the matter is, our backlog is enormous," he told CNBC at the time. Since going public in March 2025, the stock is up 160% but down nearly 45% from its peak last June. The stock has been volatile this year, up 30% since January.
Anthropic's AI has just triggered an unprecedented alert and prompted a secret meeting between the Fed and the US Treasury. At stake: global financial stability, threatened by the destructive capabilities of its artificial intelligence models. The crisis exposes the flaws of a system ill-prepared for the era of advanced AI.

In April 2026, Jerome Powell (Fed) and Scott Bessent (US Treasury) urgently summoned the CEOs of the largest American banks, including Citigroup, Morgan Stanley, and Goldman Sachs. The goal: to assess the risks posed by Anthropic's Mythos AI model, which is capable of identifying and exploiting critical vulnerabilities in computer systems. Authorities fear the tool could be diverted for targeted cyberattacks against financial infrastructure, a scenario that could destabilize the global economy. While AI is often presented as a growth lever, it is now also becoming a systemic risk. Banks have accordingly been asked to strengthen their security protocols and to collaborate with regulators in anticipating threats. But the question remains: how do you protect a financial system designed before the era of advanced AI?

A few weeks before the Fed meeting, the leak of Anthropic's advanced Mythos model exacerbated those fears. The model, capable of detecting vulnerabilities in operating systems and web browsers, was accidentally exposed online, revealing critical cybersecurity flaws. The consequences were immediate: cybersecurity companies' stock prices dropped, also weighing on BTC. The leak demonstrated that the capabilities of Anthropic's Mythos AI are not merely theoretical but real and exploitable, and that companies and governments still underestimate AI-related risks despite repeated warnings. Faced with these challenges, Anthropic announced a colossal $30 billion investment plan to secure its infrastructure, in collaboration with Google and Broadcom.
The Fed meeting and the leak of Anthropic's Mythos model reveal that AI has become a national security issue. Between technological investment and regulation, the world must now decide: let innovation run free, or impose safeguards to avoid an unprecedented cyber crisis.

The meeting, held at the Treasury Department in Washington, was aimed at ensuring banks were aware of the risks posed by Mythos.

US Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell convened an urgent meeting with bank CEOs this week to warn of cyber risks posed by Anthropic's latest AI model, two sources familiar with the matter said on Thursday. Anthropic launched the powerful Mythos model earlier this week but stopped short of a broad release, citing concerns it could expose previously unknown cybersecurity vulnerabilities. The company has said the model is capable of identifying and exploiting weaknesses across "every major operating system and every major web browser". The meeting, held at the Treasury Department in Washington on Tuesday, was aimed at ensuring banks are aware of the potential risks posed by Mythos and similar models, and are taking steps to defend their systems, one of the sources said. The invitation came while most CEOs of the largest US banks were already in Washington to attend other meetings, one of the sources said. Access to Mythos will be limited to about 40 technology companies, including Microsoft and Google, and Anthropic has been in ongoing talks with the US government about the model's capabilities, the startup has said.

AI developer Anthropic is acquiring New York-based biotech startup Coefficient Bio for approximately US$400 million, according to The Information. Founded just last fall and operating largely in stealth, Coefficient Bio will be absorbed directly into Anthropic's healthcare and life sciences division. Despite having fewer than 10 employees, the startup commanded a massive valuation driven by its specialized platform, which uses AI to map drug research and development, identify novel drug targets, and manage clinical regulatory strategies. The acquisition is effectively a lucrative acqui-hire, securing top-tier talent at the increasingly competitive intersection of machine learning and biology. Coefficient's founding team includes CEO Aris Theologis, a veteran of Evozyne and Paragon Biosciences, alongside CTO Nathan Frey and co-founder Samuel Stanton, both of whom previously worked on machine learning at Roche's Genentech. After aggressively staffing up its corporate development team last year, Anthropic has been actively hunting for data licensing deals and strategic acquisitions to deepen its vertical market penetration. The integration of Coefficient Bio's team, reporting to Anthropic healthcare lead Eric Kauderer-Abrams, will bolster a unit that already serves pharmaceutical heavyweights including Sanofi (NASDAQ:SNY), Novo Nordisk (NYSE:NVO), AbbVie (NYSE:ABBV), and Genmab (NASDAQ:GMAB). Anthropic has steadily laid the groundwork for this expansion. Last October, the company launched Claude Life Sciences, upgrading its flagship model to integrate with industry-standard scientific tools like Benchling and BioRender. By January, Anthropic had expanded into a HIPAA-compliant healthcare environment, introducing specialized features capable of drafting clinical trial protocols and preparing regulatory submissions. 
Beyond the acquisition itself, major pharmaceutical players are scrambling to secure access to advanced algorithms as critical infrastructure for the next generation of therapeutics. Securities Disclosure: I, Giann Liguid, hold no direct investment interest in any company mentioned in this article.
