The latest news and updates from companies in the WLTH portfolio.
The artificial intelligence industry just crossed an energy threshold that rewrites the rules for Bitcoin miners. Anthropic, the company behind the Claude model, announced on April 6 an agreement to secure 3.5 gigawatts of next-generation Google TPU compute capacity manufactured by Broadcom. The contract represents the largest infrastructure deployment in the company's history. Meanwhile, at the other end of the high-voltage cable, Bitcoin miners are no longer digging trenches to defend their energy territory. They are selling their holdings and signing multibillion-dollar lease agreements with those same AI giants. The narrative of a fight over cheap electrons collapses under scrutiny of the accounting ledgers. Core Scientific, one of the world's largest data center operators for mining, is preparing to liquidate practically all of its Bitcoin reserves this year. The proceeds will finance a massive conversion of 1.2 gigawatts of capacity toward hosting hardware for artificial intelligence. Hut 8, for its part, secured a 15-year lease contract valued at 7 billion dollars whose main tenant is Anthropic and whose financial backing rests on Google. The transformation is not a minor tactical move; it is the largest business model shift in the history of Bitcoin mining. The scale of Anthropic's deal demands a pause to grasp its physical magnitude. A single gigawatt of electrical consumption roughly equals the demand of one million U.S. households, so the company has reserved the energy equivalent of three and a half million homes to train and serve language models. Broadcom confirmed in its SEC filing that the majority of this new capacity will sit on U.S. soil and will begin operations in 2027. This allocation comes on top of the additional gigawatt Anthropic already receives from Google during 2026. The AI firm's annualized revenue figures back the audacity of the bet.
The number crossed the 30-billion-dollar barrier, more than triple the 9 billion reported at last year's close. Simultaneously, the number of corporate customers spending over one million dollars annually on Claude doubled in under two months, climbing from 500 to more than 1,000 companies. With such contractual cash flow, and with inference demand choking existing data centers, locking down multiple gigawatts years in advance ceases to be a luxury and becomes a condition of competitive survival. Faced with this insatiable energy appetite, data center operators that once dedicated every megawatt to solving cryptographic puzzles see a financial arbitrage opportunity too stark to ignore. The numbers do not lie: public miners currently lose close to 19,000 dollars for every Bitcoin they produce. Production costs hover around 80,000 dollars per unit, while the market price sits near 68,000 dollars, a plunge of nearly 47 percent from the all-time high set in October. The transition toward AI hosting carries an entry cost considerably higher than that of a traditional mining farm. Preparing a megawatt for high-performance computing workloads demands between 8 and 15 million dollars in capital expenditures, versus the 700,000 to 1 million dollars required for a Bitcoin mining facility. Despite this disparity, the chief financial officers of public mining companies are embracing the shift without hesitation. The reason lies in the nature of the income: Bitcoin mining offers volatile rewards, subject to the whims of a spot market that currently punishes producers, while AI hosting provides stable, long-term cash flows backed by contracts with top-tier counterparties like Google and Anthropic. TeraWulf illustrates the new paradigm with the blunt force of a signed contract: the company secured 12.8 billion dollars in contracted high-performance computing hosting revenue.
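The capex gap described above can be made concrete with a quick calculation. The per-megawatt figures are the article's own; the break-even framing in the comments is illustrative, not from the source:

```python
# Per-megawatt capital expenditure ranges cited in the article (USD).
HPC_CAPEX_PER_MW = (8_000_000, 15_000_000)   # AI / high-performance hosting
BTC_CAPEX_PER_MW = (700_000, 1_000_000)      # traditional Bitcoin mining farm

# Cheapest HPC build vs. priciest mining build, and the reverse extreme.
low_multiple = HPC_CAPEX_PER_MW[0] / BTC_CAPEX_PER_MW[1]
high_multiple = HPC_CAPEX_PER_MW[1] / BTC_CAPEX_PER_MW[0]

print(f"AI hosting costs {low_multiple:.0f}x to {high_multiple:.0f}x more per MW to build")
# Even at the low end the conversion costs roughly an order of magnitude more,
# which is why stable, contracted revenue (rather than spot Bitcoin prices)
# is what justifies the spend.
```

Run against the article's ranges, the multiple comes out between roughly 8x and 21x, which is the arbitrage the CFOs are weighing: a far larger upfront bill in exchange for fixed-income-like cash flows.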
According to CoinShares analysis, publicly traded mining firms could derive up to 70 percent of their total revenue from AI hosting by the end of this year. For those that have already closed binding agreements, mining revenue collapses from 85 percent of the total to less than 20 percent. The sector has announced over 70 billion dollars in cumulative deals related to AI and high-performance computing. The shift turns miners into a sort of energy landlord. They are not exiting the electricity business; on the contrary, they are consolidating their position as the best-positioned landowners on the new digital battlefield. Hut 8 describes the River Bend site in Louisiana as a facility capable of scaling to multiple gigawatts. The same ground prepared to house roaring rows of ASICs will now host the inference racks for the Claude model. Miners spent the last decade competing fiercely for favorable power purchase agreements, connections to remote substations, and land with cooling capacity. Those operational assets, once seen as marginal advantages in the hash rate race, are today the most sought-after inputs for AI expansion. The United States power grid is straining in ways mid-twentieth-century engineers never anticipated. PJM Interconnection, the nation's largest grid operator, projects a 6-gigawatt shortfall by 2027, a gap equivalent to six large nuclear plants offline. Data center electricity demand in the U.S. is projected to surge from under 15 gigawatts today to 134.4 gigawatts by 2030, a nearly ninefold increase. Five AI data centers will individually reach 1 gigawatt of capacity this year alone, while up to 11 gigawatts of announced capacity for 2026 have yet to break ground due to bottlenecks in the supply of transformers and grid equipment. In this environment of structural scarcity, Anthropic's 3.5 gigawatts land like a steel anchor on the system. The consequences for the Bitcoin ecosystem materialize clearly on two fronts.
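The demand figures above imply the growth multiple directly; a quick check on the article's own numbers:

```python
# U.S. data center electricity demand figures cited in the article (gigawatts).
demand_today_gw = 15.0    # "under 15 gigawatts today"
demand_2030_gw = 134.4    # projection for 2030

growth_multiple = demand_2030_gw / demand_today_gw
print(f"Projected growth: {growth_multiple:.1f}x")   # ~9x, the "nearly ninefold" increase

# PJM's projected shortfall, in the article's own unit of comparison
# (one large nuclear plant on the order of 1 GW of capacity).
shortfall_gw = 6
print(f"Shortfall equals roughly {shortfall_gw} large nuclear plants offline")
```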
The first operates in the spot market: the liquidation of reserves by giants like Core Scientific to fund conversions toward AI adds direct selling pressure to a price already teetering. The second concerns the fundamental security of the network. The hash rate, a measure of the total computing power dedicated to processing transactions and securing the blockchain, is beginning to feel the migration. Mining difficulty, the automatic adjustment mechanism that tracks active hash power on the network, registered a drop of 7.76 percent. As more operators redirect gigawatts of capacity away from mining and toward AI hosting, this primary metric of network strength could contract further in the short term. The long-term horizon sketches a structure that more closely resembles an infrastructure real estate investment trust than a traditional mining operation. Hut 8's deal, with its 15-year term and Google's financial backing, points in that direction. Long-term lease agreements with institutional-grade tenants transform once-speculative balance sheets into fixed-income vehicles. If Marathon, Riot, or CleanSpark announce similar agreements in the coming months, the model of the "miner that also hosts" becomes obsolete. The sector consolidates as the real estate backbone of the artificial intelligence economy. The calendar to monitor is crucial for understanding the speed of the change: Anthropic's new TPU capacity comes online in 2027; the first data hall of Hut 8's River Bend complex opens in the second quarter of that year; and Core Scientific's conversion of 1.2 gigawatts accelerates throughout 2026. The question no longer revolves around whether miners will continue pivoting toward AI. The relevant question is how much additional Bitcoin will flow into spot markets during the process, and how quickly network difficulty adjusts to the flight of computing power. The miners did not lose the energy war. They always owned the battlefield. Now, they simply collect the rent.
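As a technical footnote to the difficulty adjustment the article describes: Bitcoin's retarget is a fixed consensus rule. Every 2,016 blocks, difficulty scales by the ratio of the two-week target window to the time the last 2,016 blocks actually took, clamped to a factor of four in either direction. A minimal sketch, with an illustrative elapsed time chosen to reproduce a drop of roughly the 7.76 percent reported:

```python
TARGET_BLOCK_TIME_S = 600        # one block every 10 minutes
RETARGET_INTERVAL = 2016         # blocks between adjustments
TARGET_WINDOW_S = TARGET_BLOCK_TIME_S * RETARGET_INTERVAL  # two weeks

def retarget(old_difficulty: float, actual_window_s: float) -> float:
    """Scale difficulty by target/actual elapsed time, clamped to 4x
    either way, as in Bitcoin's consensus rules."""
    ratio = TARGET_WINDOW_S / actual_window_s
    ratio = max(0.25, min(4.0, ratio))   # consensus clamp
    return old_difficulty * ratio

# Illustrative: if hash power leaves and the 2,016 blocks take about 8.4%
# longer than the two-week target, the next adjustment cuts difficulty
# by about 7.76% -- the kind of drop the article reports.
slow_window = TARGET_WINDOW_S / (1 - 0.0776)
print(f"{(1 - retarget(1.0, slow_window)) * 100:.2f}% drop")
```

The mechanism is why a migration of megawatts away from mining shows up in difficulty with a lag of at most one retarget period rather than instantly.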

Woodbridge Sgt. Bruno faces aggravated manslaughter charge for deadly shooting of domestic violence offender (Bodycam via NJOAG, Mayor John E. McCormac via Facebook)

🚨 NJ police sergeant indicted on aggravated manslaughter
🎥 Bodycam and 911 calls reveal tense moments before fatal shooting
⚖️ Defense claims officer's actions were justified as case moves forward

A Middlesex County police officer is facing a serious criminal charge after shooting and killing a repeat domestic violence offender last year. On Monday, a state grand jury indicted Woodbridge Police Sgt. Marco Bruno on first-degree aggravated manslaughter in the death of Aamir Allen, of Carteret. In announcing the charge, the state attorney general's office also publicly released dashboard and body-worn camera footage and three 911 calls from the incident on May 29, 2025. Bruno joined the Woodbridge Police Department in 2011 and was promoted to sergeant in 2023. In a phone call with New Jersey 101.5, defense attorney Patrick Caserta called Bruno's actions "justified and reasonable." He said the indictment was a step he had anticipated. "We're confident of our position," Caserta said, adding that he believes all the evidence in the case will point to his client's innocence of the charge he faces.

911 calls describe violent domestic incident in Port Reading

The first of three 911 calls from the incident was received around 1 a.m. by Woodbridge Police, as a victim said that Allen had attacked her with a bat. The same woman placed a second call urging police to respond. A third call was made by a witness who described the assault outside a residence on East Tappen Street in the Port Reading section of town. Responding officers found Allen still carrying a bat as he walked along the road. They continued to follow him until he stopped outside a convenience store that was closed for the night.
A police radio message was also released in which one of the officers near Allen asks whether someone with a taser is on the way. State authorities said that Bruno arrived at the scene shortly after a police radio message reporting that Allen had hit occupied cars with the bat.

Dashboard, body cam footage show Sgt. Bruno's response

"F*** that," Bruno is heard saying on his body-worn camera, as he gets out of his cruiser and jogs over to where about a dozen other officers are already standing around Allen. Bruno is seen pushing through two officers and drawing his gun as he tells Allen to "drop the f***ing bat now -- drop the f***ing bat." He repeats the command a total of six times and then fires six bullets. Allen drops to the ground and, as other officers move in to respond, Bruno again repeats "drop the f***ing bat," followed by "lay on your f***ing stomach." As another officer calls for someone to get a medical bag, Bruno says "control him first." "Everybody slow down. The suspect is down, and under arrest," Bruno is heard saying on his camera and on a police radio, as an ambulance pulls up. Allen died within eight hours at Robert Wood Johnson University Hospital.

Allen's criminal record and state response to police use of force

Allen had a history of domestic violence arrests, according to court records. He served just over a year in state prison, from August 2023 to November 2024, for an aggregate term that included a 2021 aggravated assault on a domestic violence victim and several counts of harassment or bias intimidation. Allen was also arrested multiple times for violating restraining orders in Middlesex County between 2012 and 2016. Each time, the charge was downgraded and transferred to family court. "Every day, law enforcement bears the burden and responsibility of keeping the people of New Jersey safe," state Attorney General Jennifer Davenport said in a written statement on Monday.
"My office is fully committed to prosecuting this charge and ensuring that law enforcement only uses deadly force when lawful and necessary."

April 7 - Intel ($INTC) said Tuesday it will join Elon Musk's Terafab AI chip complex project alongside SpaceX, Tesla ($TSLA), and xAI. The project includes plans to build advanced semiconductor facilities in Austin, Texas, aimed at supporting artificial intelligence systems, robotics, and space-based data infrastructure.

* Intel will contribute chip design, fabrication, and packaging capabilities to the Terafab project
* The facility aims to produce up to 1 terawatt per year of compute capacity for AI and robotics
* Two chip factories are planned: one for vehicles and humanoid robots, another for AI data centers in space
* SpaceX recently merged with xAI and has confidentially filed for a U.S. IPO, targeting a market debut later this year
* Intel shares rose about 2% in early trading and are up roughly 38% year-to-date

Relevant Companies

* Intel ($INTC) - Participating in chip manufacturing and advanced packaging for the Terafab project
* Tesla ($TSLA) - Involved in developing chip applications for vehicles and humanoid robots

Anthropic debuts Project Glasswing, an initiative that will leverage its powerful Mythos model to reinforce software security

Anthropic PBC said today it's releasing a preview of the most powerful frontier model it has ever developed, making it available to a small coterie of partners and cybersecurity researchers to help secure the world's software. The model, called Claude Mythos, is being released as part of a new cybersecurity initiative dubbed Project Glasswing, which will see more than 40 partners use it specifically for "defensive security work." According to Anthropic, though Mythos was not originally trained for security purposes, it excels at scanning proprietary and open-source software code for vulnerabilities. The company said it's not releasing Mythos to the public because it's just "too powerful," and therefore too risky to do so. Claude Mythos was first revealed in March in a leak surfaced by Fortune. According to that report, the leaked details described Mythos as "larger and more intelligent" than Anthropic's existing Claude Opus models, which are its most powerful publicly available offerings. It was initially conceived as a general-purpose model for Claude, designed to have exceptionally strong coding and reasoning skills that would enable it to perform tasks such as building AI agents and writing code. Anthropic says caution is necessary because the "capabilities we've observed in Mythos Preview could reshape cybersecurity." In the past few weeks of testing Mythos, the company said, it has identified "thousands of vulnerabilities" across websites and apps, including every major operating system and web browser in use today. The partner organizations in Project Glasswing include Amazon.com Inc., Apple Inc., Broadcom Inc., Cisco Systems Inc., CrowdStrike Holdings Inc., the Linux Foundation, Microsoft Corp. and Palo Alto Networks Inc.
In addition, access will be provided to around 40 other organizations that build or maintain "critical software infrastructure." The partners will share what they learn from using Mythos with the rest of the technology community, so everyone can benefit from it and develop more secure software, Anthropic said. To facilitate the partners' research, Anthropic has committed $100 million in usage credits to Project Glasswing, so those partners won't be required to pay the application programming interface fees for their security testing and research. The company is also said to be having "ongoing discussions" with U.S. government officials about giving them access to Mythos, though it's possible that those negotiations are complicated by the company's ongoing legal battle with the White House. That's because Anthropic was recently labeled a "supply chain risk" for refusing to let the Pentagon use Claude for autonomous weapons targeting or mass surveillance. Regarding Mythos's prowess, Anthropic explained that it recently discovered a 16-year-old vulnerability in FFmpeg, which is used by hundreds of applications to encode and decode video. The bug was found in a line of code that had been scanned more than five million times by traditional security tools without ever being caught. What's worse, Mythos is also powerful enough to immediately develop a sophisticated exploit for the vulnerabilities it discovers, potentially allowing attackers to take advantage and start doing damage right away. But while Mythos can be exceedingly dangerous, it can also be used for good. Cisco Chief Security and Trust Officer Anthony Grieco said his team has been using the model to find and fix security vulnerabilities across both hardware and software "at a pace and scale previously impossible." He said it represents a "profound shift and a clear signal that the old ways of hardening systems are no longer sufficient."
Anthropic said its eventual goal is to make it so that Mythos-class models can be deployed at scale by the public, but for that to happen, it needs to develop cybersecurity safeguards that detect and block its most dangerous outputs. Mythos will be especially useful for software developers, if those safeguards can ever be built and verified. On the SWE-bench Verified benchmark that gauges AI models' coding abilities, Mythos was able to solve 93.9% of all problems, a much higher score than Claude Opus 4.6's 80.8% accuracy rate. Moreover, Mythos achieved 77.8% accuracy on SWE-bench Pro, which is a more challenging evaluation, compared to just 53.4% for Opus 4.6.

Anthropic's rise from startup to one of the world's leading players in artificial intelligence has been staggering, but so, in recent weeks, has been its row with the US government. Today, we look at that journey to becoming a 380-billion-dollar company, ask why Claude has become one of the hottest names in AI, and question whether its falling-out with the Pentagon over how its software is used in war could stifle its phenomenal growth. If you'd like to get in touch with the team, our email address is [email protected] Business Daily is the home of in-depth audio journalism devoted to the world of money and work. From small startup stories to big corporate takeovers, global economic shifts to trends in technology, we look at the key figures, ideas and events shaping business. Each episode is a 17-minute, daily deep dive into a single topic, featuring expert analysis and the people at the heart of the story. Recent episodes explore the weight-loss drug revolution, the growth in AI, the cost of living, the economic impact of the war in the Middle East, and why bond markets are so powerful. We also feature in-depth interviews with company founders and some of the world's most prominent CEOs. These include Google's Sundar Pichai, Wikipedia founder Jimmy Wales, and the CEO of Canva, Melanie Perkins. (Picture: The Anthropic logo is displayed on a smartphone screen in this photo illustration in Brussels, Belgium, on the 31st of March 2026. Credit: Getty Images)

One of the suits seeks to hold LiteLLM, security audit firm Delve, and others liable. Contractors filed five lawsuits against Mercor, the AI training firm valued at $10 billion, in the past week, accusing the company of violating data privacy and consumer protection laws. The suits, filed in federal courts in California and Texas, allege Mercor's negligence could have resulted in the disclosure of Social Security numbers, addresses, and other information, including recordings of interviews, to bad actors. The lawsuits seek unspecified monetary damages. Mercor said last week that it was impacted by a breach of the open-source project LiteLLM, which was created by Berrie AI, without describing the stolen data. TechCrunch reported that sample materials posted by the hackers included Slack data and videos of conversations between Mercor contractors and an AI system. It's somewhat common for companies to be sued in the wake of a data breach. The biggest cases have settled for between $1 and $5 per class member, according to a survey of data-breach settlements from 2018 to 2021 by Cornerstone Research. Victims with documented financial losses are sometimes paid more. Some settlements include non-monetary relief, like free credit monitoring. A lawsuit filed by NaTivia Esson and her lawyers at Strauss Borrelli says she worked for Mercor from March 2025 to March 2026 and filled out a W-9 form with her personal identifying information each time she got work. She "trusted the company would use reasonable measures to protect it," her complaint read. "Because of the data breach, plaintiff anticipates spending considerable amounts of time and money to try and mitigate her injuries." Mercor declined to comment. Mercor has used gig workers to train AI for clients including Meta, Facebook's parent company. Meta paused its work with Mercor after the data breach, Business Insider previously reported.
One suit against Mercor also names Berrie AI and Delve Technologies, an "automated compliance" firm that had previously certified Berrie's compliance with certain industry standards, as defendants. The complaint in that case said a "whistleblower" exposed misconduct at Delve. Last month, Delve denied claims in an anonymously authored Substack post that accused it of facilitating "fake compliance" and arranging sham security audits. Other legal challenges for Mercor might be on the horizon. An apparent lead-generation website, MercorClaims.com, went live on or around April 1, although it does not appear to be sending users to any particular law firm. Berrie AI and Delve didn't immediately respond to requests for comment.
What's new? Anthropic unveiled Claude Mythos Preview to spot zero-day flaws and craft exploits; select partners get access via the Claude API for limited testing.

Anthropic has unveiled Claude Mythos Preview, a new AI model capable of autonomously identifying zero-day vulnerabilities and developing related exploits across major operating systems and software. The model has already uncovered critical issues in OpenBSD, FFmpeg, and the Linux kernel, with all discovered vulnerabilities reported and patched. It is currently available to a select group of partners under Project Glasswing, including major tech and security companies like Cisco, AWS, Microsoft, CrowdStrike, JPMorgan Chase, and Google. Access is provided via the Claude API, Amazon Bedrock, Google Cloud's Vertex AI, and Microsoft Foundry, and open source maintainers may apply for access through the Claude for Open Source program. Claude Mythos Preview is not slated for general public release due to its advanced capabilities and associated risks. Instead, Anthropic is offering $100 million in model usage credits to partners and open source communities, along with targeted donations to security foundations. Technical benchmarks highlight a substantial leap over previous models such as Claude Opus 4.6, particularly in agentic coding and autonomous vulnerability discovery. Anthropic, the company behind the release, is positioning itself as a leader in AI-driven cybersecurity. The company is engaging with government agencies and industry leaders to set safety standards and best practices for the deployment of powerful AI systems in cyber defense. Anthropic's efforts are directed at both advancing security for critical infrastructure and guiding the broader industry through collaborative initiatives like Project Glasswing.

As artificial intelligence models grow increasingly adept at coding, concerns have grown that hackers will put them to work for cyber crime - Copyright AFP Joel Saget

Anthropic on Tuesday said its yet-to-be-released artificial intelligence model called Claude Mythos has proven keenly adept at exposing software weaknesses. Mythos has laid bare thousands of vulnerabilities in commonly used applications for which no patch or fix exists, prompting the San Francisco-based AI startup to form an alliance with cybersecurity specialists to bolster defenses against hacking. "We have a new model that we're explicitly not releasing to the public," Mike Krieger of Anthropic Labs said at a HumanX AI conference in San Francisco. Instead, Anthropic is letting cybersecurity specialists and engineers in the open-source community work with Mythos as a defensive weapon, "sort of arming them ahead of time," Krieger explained. Leaps in AI model capabilities have come with concerns about hackers using such tools to figure out passwords or crack encryption meant to keep data safe. The oldest of the vulnerabilities uncovered by Mythos dates back 27 years, and none were ostensibly noticed by their makers before being pinpointed by the AI model, according to Anthropic. Mythos is the latest generation of Anthropic's Claude family of AI, and a recent leak of some of its code prompted the startup to release a blog post warning it posed unprecedented cybersecurity risks. "AI models have reached a level of coding capability where they can surpass all but the most skilled humans at finding and exploiting software vulnerabilities," Anthropic said in a blog post. "The fallout -- for economies, public safety, and national security -- could be severe." Software vulnerabilities exposed by Mythos were often subtle and difficult to detect without AI, according to Anthropic.
As an example, it said Mythos found a previously unnoticed flaw in video software that had been tested more than 5 million times by its creators.

- Project Glasswing -

As a precaution, Anthropic has shared a version of Mythos with cybersecurity companies CrowdStrike and Palo Alto Networks, as well as with Amazon, Apple and Microsoft, in a project it dubbed "Glasswing." Networking giants Cisco and Broadcom are taking part in the project, along with the Linux Foundation, which promotes the free, open-source Linux computer operating system. "This work is too important and too urgent to do alone," Cisco chief security and trust officer Anthony Grieco said in a joint release about Glasswing. "AI capabilities have crossed a threshold that fundamentally changes the urgency required to protect critical infrastructure from cyber threats, and there is no going back." Approximately 40 organizations involved in the design, maintenance or operation of computer systems are said to have joined Glasswing. Project partners are to share their Mythos findings, according to Anthropic, which is providing about $100 million worth of computing resources for the mission. Early work with AI models has shown they can help find and fix software and hardware vulnerabilities at a pace and scale not previously possible, according to Grieco. "The window between a vulnerability being discovered and being exploited by an adversary has collapsed -- what once took months now happens in minutes with AI," said CrowdStrike chief technology officer Elia Zaitsev. "Claude Mythos Preview demonstrates what is now possible for defenders at scale, and adversaries will inevitably look to exploit the same capabilities." Anthropic said it has had discussions with the US government regarding Mythos despite a decree by the White House in February to terminate all contracts with the startup. That directive was put on hold by a federal court judge while a legal challenge by Anthropic works its way through the courts.

Major shake-up at Birmingham Airport today as the North Terminal was evacuated following reports of smoke inside the building. Flights are delayed, and passengers are stuck on planes, unable to disembark while emergency crews tackle the situation.

Smoke Triggers Emergency Evacuation

West Midlands Fire Control confirmed multiple fire crews rushed to the scene after smoke was spotted in the baggage reclaim and immigration areas of the North Terminal. Crowds of stranded passengers were seen gathering outside the airport, while those already on planes were ordered to remain in their seats as the investigation unfolded. "We are aware of an incident affecting baggage reclaim and immigration in the North Terminal," said a Birmingham Airport spokesperson. "We are working with emergency partners and, as a precaution, passengers have been evacuated from this area only. The safety of our people and passengers is our number one priority and further updates will be provided."

Flights Delayed, Passengers Held on Aircraft

Travel chaos continues as aircraft remain grounded airside, leaving passengers stuck mid-journey. Despite the disruption, it appears the rest of Birmingham Airport is operating normally while the emergency is dealt with.

What We Know So Far

* Smoke was reported inside the North Terminal's baggage and immigration area
* Multiple fire crews were dispatched to handle the incident
* Passengers evacuated from affected terminal zones as a safety precaution
* Flights delayed; passengers stuck on board planes
* Main terminal areas remain operational for now

The situation remains fluid, with officials closely monitoring the scene. Passengers and travellers are advised to stay updated with Birmingham Airport and airline announcements.

WASHINGTON - U.S. Sen. Mark R. Warner (D-VA) issued the following statement on Anthropic's announcement of Project Glasswing, an initiative that brings together industry leaders in an effort to secure critical software: "I applaud these leading companies for recognizing this threat and proactively sharing information, capabilities, and computing capacity to better protect our critical infrastructure. We are already seeing cyber threat actors using AI tools to improve their capabilities, putting government, businesses, and consumers' security and personal information at risk. I have long worked to shore up the cybersecurity of our critical infrastructure, from hospitals to government systems and more. As AI dramatically accelerates the discovery of new vulnerabilities, I hope industry will correspondingly accelerate and reprioritize patching."

"Nothing is more irresistible than an idea whose time has come" V. Hugo

...vulnerabilities relevant to security, privacy, and governance in realistic contexts. These behaviors raise unresolved questions about accountability, delegated authority, and responsibility for the harms caused, and deserve urgent attention from legal scholars, policymakers, and researchers across disciplines. Amen

We report an exploratory red-teaming study of autonomous language-model-powered agents deployed in a live laboratory environment with persistent memory, email accounts, Discord access, file systems, and shell execution. Over a two-week period, twenty AI researchers interacted with the agents under benign and adversarial conditions. Focusing on failures emerging from the integration of language models with autonomy, tool use, and multi-party communication, we document eleven representative case studies. Observed behaviors include unauthorized compliance with non-owners, disclosure of sensitive information, execution of destructive system-level actions, denial-of-service conditions, uncontrolled resource consumption, identity spoofing vulnerabilities, cross-agent propagation of unsafe practices, and partial system takeover. In several cases, agents reported task completion while the underlying system state contradicted those reports. We also report on some of the failed attempts. Our findings establish the existence of security-, privacy-, and governance-relevant vulnerabilities in realistic deployment settings. These behaviors raise unresolved questions regarding accountability, delegated authority, and responsibility for downstream harms, and warrant urgent attention from legal scholars, policymakers, and researchers across disciplines. This report serves as an initial empirical contribution to that broader conversation. Source: [2602.20021] Agents of Chaos

Bill Ackman's fund offers to buy Universal Music in $64 billion deal

Billionaire financier Bill Ackman put forward his latest big bet Tuesday: a deal to buy Universal Music Group, home to Taylor Swift, Kendrick Lamar, and Bad Bunny, and move its stock market listing to New York from Amsterdam. It is a bold and convoluted bet that one of the world's most influential entertainment businesses -- Universal Music controls nearly a third of the world's recorded-music market -- can be improved not through major operational changes but essentially through financial engineering. Ackman said his offer would value Universal Music's stock at about $64 billion, about 78 percent higher than it traded at the end of last week. Under the terms of the proposal, a vehicle created by Pershing Square Capital Management, Ackman's hedge fund, would merge with Universal Music, which would be reincorporated in Nevada and listed in New York. The offer values Universal Music shareholders' current holdings at about $35.22 per share. -- NEW YORK TIMES

HEALTH CARE

US health insurers have made it easier for doctors to get approval before providing certain types of treatment, industry groups said, calling it a sign of progress toward alleviating burdensome delays for patients. Major health plans said they have collectively removed thousands of prior authorization requirements for medical procedures since they pledged to cut red tape last year. About 11 percent of the approvals they mandated in 2024 are no longer in place, according to data from two leading insurance trade associations. That would mean about 6.5 million fewer requests in Medicare and some commercial plans, holding everything else constant, the groups said. The data don't cover prior approvals for prescription drugs. Health insurers have long faced calls to fix the prior approval process, in which insurers must green-light doctors' orders before patients get treatment.
Almost half of insured adults said it's led to a delay, denial, or a different treatment than what was prescribed, according to a survey from health researcher KFF. More than a quarter of doctors in an American Medical Association survey said prior authorization led to hospitalization or other harm. Health plans are making progress on "speeding up patient access to appropriate care while maintaining important protections against waste, fraud, and abuse," according to an announcement to be published Tuesday by the industry groups AHIP and the Blue Cross Blue Shield Association. Major insurers including UnitedHealth Group Inc., Elevance Health Inc., Cigna Group, CVS Health Corp., and Centene Corp. made voluntary commitments to improve the process under pressure from the Trump administration. -- BLOOMBERG NEWS

MEDIA

CBS announced its plans for late-night programming after "The Late Show With Stephen Colbert" ends next month, with two syndicated comedy shows from producer Byron Allen filling the late hours starting May 22. In a statement posted Tuesday, CBS said it would fill Colbert's one-hour slot starting at 11:35 p.m. EST with back-to-back episodes of "Comics Unleashed," a comedy talk show hosted by Allen. Those episodes will be followed by two episodes of "Funny You Should Ask," a game show hosted by Jon Kelley and produced by Allen's company, starting at 12:35 a.m. CBS announced in July that it would be canceling "The Late Show," a broadcast institution that began in 1993 with David Letterman as host. The end of the show's run will coincide with the end of Colbert's contract. Network executives say that the decision to cancel "The Late Show" had nothing to do with politics and instead was a result of the grim finances of late-night television, which have hemorrhaged in today's streaming economy. Replacing Colbert with Allen's programming will almost certainly save the network money, while also bringing less-pointed, inoffensive humor to the late-night lineup.
-- NEW YORK TIMES

TECH

Apple Inc.'s first foldable phone is on track to arrive during the company's normal iPhone launch period later this year, people with knowledge of the matter said, rebutting concerns about major manufacturing snags. The company is scheduled to introduce the foldable model in September alongside the iPhone 18 Pro and Pro Max, said the people, who asked not to be identified because the plans haven't been announced. Apple's phones typically hit store shelves the week after they're unveiled. A report from Nikkei Asia had fueled concerns about a delay on Tuesday. Apple shares fell as much as 5.1 percent after the news outlet said that the company was facing challenges in the engineering test phase of the phone. That threatens to push back the production and shipment schedule, according to Nikkei Asia. While the complexity of the new display and materials may limit initial supply for several weeks, Apple is currently operating with a plan to put the device on sale around the same time -- or very soon after -- the new non-foldable models, the people said. Apple shares pared their losses after Bloomberg News reported on the plans. Still, the release is six months away and production has yet to ramp up. That means the timing isn't final. A spokesperson for the Cupertino, California-based company declined to comment. -- BLOOMBERG NEWS

LEGAL

Deere & Co. has agreed to pay $99 million as part of a settlement that would resolve a class action lawsuit accusing the farm equipment giant of monopolizing repair services. The Moline, Illinois-based manufacturer, which does business under the John Deere brand, has faced a handful of "right to repair" complaints over the years.
The deal announced Monday -- which still needs final approval from the court -- would settle a 2022 lawsuit that accused the company of withholding repair software and conspiring with authorized dealers to force farmers to use their services for repairs, when they could otherwise fix tractors and other equipment themselves or use independent alternatives. The plaintiffs alleged that meant Deere and its dealers could charge higher, "supracompetitive" prices and reap benefits from an "unlawfully restrained" market, per court filings. Deere has continued to deny any wrongdoing, and maintained Monday that it's dedicated to supporting customers' ability and access to repair their equipment. But the company agreed to the settlement "to move forward and remain focused on what matters most -- serving our customers," Denver Caldwell, vice president of aftermarket and customer support, said in a statement. -- ASSOCIATED PRESS

Hasn't released it to the public, because it would break the internet - in a bad way

For years, the infosec community's biggest existential worry has been quantum computers blowing away all classical encryption and revealing the world's secrets. Now they have a new Big Bad: an AI model that can generate zero-day vulnerabilities. Anthropic made the model and named it Mythos. Thankfully, the AI company decided not to release it, because it would break the internet - and not in a good way. "AI models have reached a level of coding capability where they can surpass all but the most skilled humans at finding and exploiting software vulnerabilities," the company said. Mythos is markedly different from Claude Opus 4.6, which Anthropic only recently said was not very skilled at developing working exploit code. Where Opus 4.6 managed an exploit development success rate of just over zero percent, Mythos Preview generated a working exploit 72.4 percent of the time. What Anthropic is describing is literally a zero-day engine: "Engineers at Anthropic with no formal security training have asked Mythos Preview to find remote code execution vulnerabilities overnight, and woken up the following morning to a complete, working exploit." Fortunately, instead of releasing Mythos, Anthropic chose to provide a preview version to a set of industry partners so they can use it to find flaws in their systems before adversaries do. The AI biz calls its limited release initiative Project Glasswing. Participants include: Amazon Web Services, Anthropic, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks.
And while this tech industry anti-rogues' gallery scans their own systems with the purportedly perspicacious Mythos, Anthropic invited around 40 other organizations to participate in this introspective bug hunt, subsidized by up to $100M in usage credits for Mythos Preview and $4M in direct donations to open-source security organizations. If that sounds a bit like an arsonist handing out fire extinguishers, well, that's on you for being so cynical. Word of Mythos leaked last month when a draft blog post from Anthropic surfaced. The details published on Tuesday paint a stark picture for the security community: "During our testing, we found that Mythos Preview is capable of identifying and then exploiting zero-day vulnerabilities in every major operating system and every major web browser when directed by a user to do so." The 22 Anthropic researchers listed as authors of its Tuesday post insist that the vulns are often subtle and difficult to detect. Some are decades old, like the now-patched 27-year-old bug in OpenBSD. "The exploits it constructs are not just run-of-the-mill stack-smashing exploits (though as we'll show, it can do those too). In one case, Mythos Preview wrote a web browser exploit that chained together four vulnerabilities, writing a complex JIT heap spray that escaped both renderer and OS sandboxes. It autonomously obtained local privilege escalation exploits on Linux and other operating systems by exploiting subtle race conditions and KASLR-bypasses. And it autonomously wrote a remote code execution exploit on FreeBSD's NFS server that granted full root access to unauthenticated users by splitting a 20-gadget ROP chain over multiple packets." According to Anthropic, Mythos identified "thousands of additional high- and critical-severity vulnerabilities." The company is in the process of disclosing them responsibly.

New York - Anthropic on Tuesday said its yet-to-be-released artificial intelligence model called Claude Mythos has proven keenly adept at exposing software weaknesses. Mythos has laid bare thousands of vulnerabilities in commonly used applications for which no patch or fix exists, prompting the San Francisco-based AI startup to form an alliance with cybersecurity specialists to bolster defenses against hacking. "We have a new model that we're explicitly not releasing to the public," Mike Krieger of Anthropic Labs said at a HumanX AI conference in San Francisco. Instead, Anthropic is letting cybersecurity specialists and engineers in the open-source community work with Mythos to use the model as a defensive weapon, "sort of arming them ahead of time," Krieger explained. Leaps in AI model capabilities have come with concerns about hackers using such tools for figuring out passwords or cracking encryption meant to keep data safe. The oldest of the vulnerabilities uncovered by Mythos dates back 27 years, and none had apparently been noticed by their makers before being pinpointed by the AI model, according to Anthropic. Mythos is the latest generation of Anthropic's Claude family of AI, and a recent leak of some of its code prompted the startup to release a blog post warning it posed unprecedented cybersecurity risks. "AI models have reached a level of coding capability where they can surpass all but the most skilled humans at finding and exploiting software vulnerabilities," Anthropic said in a blog post. "The fallout -- for economies, public safety, and national security -- could be severe." Software vulnerabilities exposed by Mythos were often subtle and difficult to detect without AI, according to Anthropic. As an example, it said Mythos found a previously unnoticed flaw in video software that had been tested more than 5 million times by its creators.
- Project Glasswing -

As a precaution, Anthropic has shared a version of Mythos with cybersecurity companies CrowdStrike and Palo Alto Networks, as well as with Amazon, Apple, and Microsoft in a project it dubbed "Glasswing." Networking giants Cisco and Broadcom are taking part in the project, along with the Linux Foundation, which promotes the free, open-source Linux operating system. "This work is too important and too urgent to do alone," Cisco chief security and trust officer Anthony Grieco said in a joint release about Glasswing. "AI capabilities have crossed a threshold that fundamentally changes the urgency required to protect critical infrastructure from cyber threats, and there is no going back." Approximately 40 organizations involved in the design, maintenance, or operation of computer systems are said to have joined Glasswing. Project partners are to share their Mythos findings, according to Anthropic, which is providing about $100 million worth of computing resources for the mission. Early work with AI models has shown they can help find and fix software and hardware vulnerabilities at a pace and scale not previously possible, according to Grieco. "The window between a vulnerability being discovered and being exploited by an adversary has collapsed -- what once took months now happens in minutes with AI," said CrowdStrike chief technology officer Elia Zaitsev. "Claude Mythos Preview demonstrates what is now possible for defenders at scale, and adversaries will inevitably look to exploit the same capabilities." Anthropic said it has had discussions with the US government regarding Mythos despite a decree by the White House in February to terminate all contracts with the startup. That directive was put on hold by a federal court judge while a legal challenge by Anthropic works its way through the courts.

