The latest news and updates from companies in the WLTH portfolio.
The Anthropic logo is seen displayed on a smartphone screen. /VCG

Anthropic on Tuesday announced an initiative with major technology companies, including Amazon, Microsoft and Apple, that lets partners preview an advanced model with cybersecurity capabilities developed by the AI startup. Under its "Project Glasswing," select organizations will be allowed to use the startup's unreleased, general-purpose AI model "Claude Mythos Preview" for defensive cybersecurity work, Anthropic said. Other partners include CrowdStrike, Palo Alto Networks, Google and Nvidia. This year's RSA cybersecurity conference in San Francisco was also dominated by talk about the rise of AI-powered cyberattacks and whether conventional security tools sufficed. In a blog post on Tuesday, Anthropic said Mythos Preview had found "thousands" of major vulnerabilities in operating systems, web browsers and other software. The startup said launch partners will use Mythos Preview in their defensive security work and that Anthropic will share findings with the industry. Anthropic said it is also extending access to about 40 additional organizations responsible for critical software infrastructure, and has committed up to $100 million in usage credits and $4 million in donations to open-source security groups. The AI startup added that its eventual goal is for "our users to safely deploy Mythos-class models at scale," and said it has been in ongoing discussions with the US government about the model's capabilities.

Anthropic's annualized recurring revenue has now crossed $30 billion, three times what it was in December 2025. The company is racing to build robust enterprise offerings, with plans to go public as soon as this year. This growth is largely attributed to Claude Platform, an enterprise product that lets developers access the company's AI models through an API. Angela Jiang, Anthropic's head of product for the Claude Platform, says there is a significant gap between what the models can do and how businesses are using them. The new tool "enables any business to take the best-in-class infrastructure and deploy a fleet of Claude agents to do whatever work they need," she said. Claude Managed Agents gives developers an agent harness: the software infrastructure around an AI model that helps it work autonomously, including software tools, a memory system, and other supporting components. Agents created through the product also come with a built-in sandboxed environment where they can safely spin up software projects. The new product also lets developers create agents that can run autonomously for hours in the cloud; developers can monitor the agents' activities and toggle permissions to allow access to certain tools. Katelyn Lesse, head of engineering for the Claude Platform, said the tool solves a complex distributed-systems engineering problem by providing out-of-the-box solutions for deploying and running agents at scale. The AI productivity startup Notion demonstrated a real-world application of Claude Managed Agents by using it to power a client onboarding feature. Eric Liu, a product manager at Notion, showed how he could off-load a long list of tasks within Notion to a Claude Managed Agent. The demo highlighted the potential of the new tool in streamlining business operations and improving efficiency.
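The "agent harness" described above -- tools, memory, and a sandboxed workspace wrapped around a model -- can be sketched in a few lines. This is an illustrative toy only, not Anthropic's actual Claude Managed Agents API; every class, method, and tool name here is hypothetical.

```python
# Minimal sketch of an agent harness: a tool registry the model can call
# into, a memory log of past actions, and an isolated scratch directory.
# All names are hypothetical, for illustration only.
import tempfile
from pathlib import Path


class AgentHarness:
    def __init__(self):
        self.tools = {}                              # tool name -> callable
        self.memory = []                             # log of (tool, args, result)
        self.sandbox = Path(tempfile.mkdtemp())      # isolated scratch space

    def register_tool(self, name, fn):
        self.tools[name] = fn

    def invoke(self, tool_name, *args):
        # Dispatch a tool call on the agent's behalf and record it in memory,
        # so later steps can condition on what has already happened.
        result = self.tools[tool_name](*args)
        self.memory.append((tool_name, args, result))
        return result


harness = AgentHarness()
# Tools operate only inside the sandbox directory.
harness.register_tool("write_file", lambda name, text: (harness.sandbox / name).write_text(text))
harness.register_tool("read_file", lambda name: (harness.sandbox / name).read_text())

harness.invoke("write_file", "notes.txt", "onboarding checklist")
print(harness.invoke("read_file", "notes.txt"))   # -> onboarding checklist
print(len(harness.memory))                        # -> 2
```

A production harness would add permission toggles per tool and monitoring hooks, which is the distributed-systems work the product reportedly packages up.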

San Francisco, April 9: Anthropic has introduced a preview of its latest artificial intelligence model, Mythos, marking a strategic pivot toward enhanced security and enterprise-grade reliability. The new model, which follows the successful Claude Opus series, is designed to address growing concerns among corporate users regarding data leakage and systemic vulnerabilities. Unlike its predecessors, Mythos incorporates a "security-first" architecture that prioritises controlled outputs over raw creative capacity, positioning it as a specialised tool for high-stakes industries such as finance and cybersecurity. The release comes at a critical juncture for the AI industry as regulators and business leaders demand greater transparency in how large language models (LLMs) process sensitive information. While competitors have focused largely on expanding context windows and multimodal capabilities, Anthropic appears to be betting on "safety-as-a-service." Initial testing suggests that Mythos significantly reduces the risk of "jailbreaking" and prompt injections, though some early users have noted a trade-off in the model's flexibility when handling non-technical creative tasks. Mythos introduces several structural changes aimed at mitigating the "black box" problem typically associated with neural networks. The model utilises a refined version of Anthropic's Constitutional AI, which allows it to self-correct based on a set of internal principles. This update reportedly provides more granular control for developers, enabling them to set hard boundaries on specific data domains without degrading the model's overall performance. Furthermore, the Mythos preview highlights a new "verification layer" that audits the model's reasoning process in real time. This feature is intended to prevent hallucinations in technical documentation and code generation, areas where accuracy is paramount.
By providing a clear audit trail of how a conclusion was reached, Anthropic aims to bridge the trust gap that currently prevents many Fortune 500 companies from fully integrating generative AI into their core operations. The introduction of Mythos reflects a broader shift in the AI landscape from general-purpose assistants to specialised professional tools. Industry analysts suggest that Anthropic is seeking to differentiate itself from OpenAI and Google by focusing on the "reliability frontier." For many institutional clients, the ability to guarantee that an AI will not produce harmful or unauthorised content is more valuable than a marginal increase in linguistic flair. However, this specialisation raises questions about the commercial scalability of such restrictive models. While Mythos excels in restricted environments, its rigid adherence to safety protocols may limit its appeal to the broader consumer market. Anthropic has signalled that it intends to maintain a diverse portfolio, with Mythos serving as the high-security option alongside more versatile models in the Claude family. As the European Union's AI Act and various international frameworks move toward full implementation, Mythos is seen as a proactive attempt to align with emerging global standards. By internalising safety checks within the model's architecture, Anthropic may reduce the compliance burden for its clients, offering a path to adoption that satisfies both legal departments and technical teams. The long-term success of the Mythos framework will likely depend on whether the model can maintain its high safety standards without becoming too cumbersome for daily use. As the preview phase continues, the industry will be watching closely to see if Anthropic can prove that a safer AI is not necessarily a less capable one.

NASA's SpaceX Crew-10 launches aboard a SpaceX Falcon 9 rocket carrying the Dragon spacecraft piloted by astronaut and U.S. Air Force Maj. Nichole "Vapor" Ayers, from Kennedy Space Center, Fla., March 14, 2025, at 7:03 p.m. EDT. Ayers has flown missions across the globe, including more than 200 combat hours in Operation Inherent Resolve over Iraq and Syria and more than 1,400 flight hours in the T-38 Talon and F-22 Raptor. Previously stationed at Joint Base Elmendorf-Richardson, Alaska, Ayers served as the 3rd Wing, 90th Fighter Squadron assistant director of operations before receiving the call to join NASA in 2021. (U.S. Air Force photo courtesy of NASA by Aubrey Gemignani)

Anthropic's new AI model, Claude Mythos, has identified thousands of previously unknown software vulnerabilities, some dating back 27 years. To counter potential misuse by hackers, Anthropic has formed Project Glasswing with cybersecurity firms and tech giants to leverage Mythos for defensive purposes, aiming to bolster defenses against evolving cyber threats. Anthropic on Tuesday said its yet-to-be-released artificial intelligence model called Claude Mythos has proven keenly adept at exposing software weaknesses. Mythos has laid bare thousands of vulnerabilities in commonly used applications for which no patch or fix exists, prompting the San Francisco-based AI startup to form an alliance with cybersecurity specialists to bolster defenses against hacking. "We have a new model that we're explicitly not releasing to the public," Mike Krieger of Anthropic Labs said at the HumanX AI conference in San Francisco. Instead, Anthropic is letting cybersecurity specialists and engineers in the open-source community work with Mythos as a defensive weapon, "sort of arming them ahead of time," Krieger explained. Leaps in AI model capabilities have come with concerns about hackers using such tools to figure out passwords or crack encryption meant to keep data safe. The oldest of the vulnerabilities uncovered by Mythos dates back 27 years, and apparently none had been noticed by their makers before being pinpointed by the AI model, according to Anthropic. Mythos is the latest generation of Anthropic's Claude family of AI, and a recent leak of some of its code prompted the startup to release a blog post warning it posed unprecedented cybersecurity risks. "AI models have reached a level of coding capability where they can surpass all but the most skilled humans at finding and exploiting software vulnerabilities," Anthropic said in a blog post. "The fallout -- for economies, public safety, and national security -- could be severe."
Software vulnerabilities exposed by Mythos were often subtle and difficult to detect without AI, according to Anthropic. As an example, it said Mythos found a previously unnoticed flaw in video software that had been tested more than 5 million times by its creators.

Project Glasswing

As a precaution, Anthropic has shared a version of Mythos with cybersecurity companies CrowdStrike and Palo Alto Networks, as well as with Amazon, Apple and Microsoft in a project it dubbed "Glasswing." Networking giants Cisco and Broadcom are taking part in the project, along with the Linux Foundation that promotes the free, open-source Linux computer operating system. "This work is too important and too urgent to do alone," Cisco chief security and trust officer Anthony Grieco said in a joint release about Glasswing. "AI capabilities have crossed a threshold that fundamentally changes the urgency required to protect critical infrastructure from cyber threats, and there is no going back." Approximately 40 organizations involved in the design, maintenance or operation of computer systems are said to have joined Glasswing. Project partners are to share their Mythos findings, according to Anthropic, which is providing about $100 million worth of computing resources for the mission. Early work with AI models has shown they can help find and fix software and hardware vulnerabilities at a pace and scale not previously possible, according to Grieco. "The window between a vulnerability being discovered and being exploited by an adversary has collapsed -- what once took months now happens in minutes with AI," said CrowdStrike chief technology officer Elia Zaitsev. "Claude Mythos Preview demonstrates what is now possible for defenders at scale, and adversaries will inevitably look to exploit the same capabilities." Anthropic said it has had discussions with the US government regarding Mythos despite a decree by the White House in February to terminate all contracts with the startup.
That directive was put on hold by a federal court judge while a legal challenge by Anthropic works its way through the courts.
"The model [escaped] ... it then went on to take additional, more concerning actions." Leading AI firm Anthropic says its newest model, Mythos, is so powerful that releasing it to the general public would be too risky, Business Insider reported. On Tuesday, Anthropic released Mythos' system card, a form of documentation likened to the standardized nutrition label on food. In the 244-page report, a subsection ascribed "rare, highly capable reckless actions" to the Mythos model, even as it called Mythos, "on essentially every dimension [Anthropic] can measure, the best-aligned model that we have released to date by a significant margin." Anthropic's brand identity hinges heavily on its purported commitment to safe AI development, and the authors asserted they'd observed Mythos acting in a fashion the firm found "quite concerning." In addition to the new model's alleged ability to identify security vulnerabilities, Anthropic cited an incident in which Mythos reportedly broke containment in the testing environment and functionally went rogue. "The model [escaped], demonstrating a potentially dangerous capability for circumventing our safeguards. It then went on to take additional, more concerning actions," the report read. While technology news sites pondered the speculative threat posed by Anthropic's unreleased Mythos model, adverse real-world outcomes linked to AI adoption continue to stack up. In particular, data centers -- the large facilities required to keep AI tools running -- have become a flashpoint in communities across the country, as residents complain about increased noise and air pollution. Data centers consume significant amounts of two key resources, water and power, and the latter issue went national when electric bills began soaring. In 2025, the Department of Energy issued a warning about insufficient public grid capacity.
Anthropic's breathless descriptions of Mythos as a sort of ruthless, mega-capable AI led to a flurry of equally frenzied media reports -- despite the fact that, on Wednesday, the system card itself was inaccessible due to a hosting failure and 404 error. Moreover, the release of Mythos' system card came a week after Anthropic said the source code for Claude and a mysterious upcoming model (likely Mythos) had "leaked," prompting speculation that, taken together, the incidents were a public relations stunt. On Reddit's r/technology, one user summarized the Mythos skepticism. "Cranking up the hype machine, [Anthropic] must be hemorrhaging money and needing investors ASAP," they commented.

Anthropic is not planning on publicly releasing it, but its Mythos model leads in 17 of 18 benchmarks, according to data in the model's system card. The lone outlier is Measuring Massive Multitask Language Understanding (MMMLU), where Gemini 3.1 Pro's 92.6-93.6 range overlaps with Mythos' score of 92.7. One day later, on April 8, Meta Superintelligence Labs introduced Muse Spark, its first frontier model under chief AI officer Alexandr Wang. Where Anthropic published a capability report for a model it is withholding, Meta shipped a model that Artificial Analysis ranks fourth on its composite Intelligence Index at 52, behind a Gemini 3.1 Pro and GPT-5.4 (xhigh) tie at 57 and Claude Opus 4.6 at 53.

Anthropic claims its unreleased Claude Mythos Preview will 'reshape cybersecurity'

Anthropic says Mythos is its "most capable frontier model to date, and shows a striking leap in scores on many evaluation benchmarks compared to our previous frontier model, Claude Opus 4.6." The company goes on to say that Mythos offers "a step-change in vulnerability discovery and exploitation" that, operating "with minimal human steering," autonomously finds zero-day vulnerabilities in open-source and closed-source software and develops them into working proof-of-concept exploits.

What the Mythos system card claims

Benchmark source: Anthropic, "Claude Mythos Preview" system card, red.anthropic.com, April 7, 2026. Availability and pricing source: Anthropic, "Project Glasswing" announcement page, anthropic.com, April 7, 2026. Anthropic did not compare Mythos Preview against traditional static analysis tools, as Heidy Khlaaf, Ph.D., chief AI scientist at the AI Now Institute, noted on X. While Anthropic benchmarked Mythos against Claude Opus 4.6 and Claude Sonnet 4.6 on Cybench, CyberGym and a new Firefox 147 exploitation evaluation, it did not publish head-to-head data against CodeSonar, Coverity, Semgrep and other similar tools.
Khlaaf also noted on X that Anthropic did not report a false-positive rate for any cyber benchmark. While the cybersecurity ramifications of Mythos are clear, compute scarcity likely also shaped the decision to gate it. Frontier labs are triaging GPUs. On March 24, OpenAI killed Sora after the Wall Street Journal reported it was burning roughly $1 million per day against $2.1 million in lifetime revenue. OpenAI said it needed the GPUs for coding and enterprise work and for its unreleased 'Spud' model. On April 4, Anthropic cut Claude subscriptions off from third-party agentic harnesses such as OpenClaw. Head of Claude Code Boris Cherny said "capacity is a resource we manage thoughtfully" and that subscriptions were never built for autonomous-agent usage. Read together, Sora's death, the OpenClaw cutoff and Mythos shipping only to Glasswing partners with $100 million in credits describe an industry routing scarce inference capacity toward its highest-value enterprise customers. Reliability data supports the capacity-strain read. Anthropic's status page shows claude.ai uptime at 98.73% over the past 90 days, with five Opus 4.6 and Sonnet 4.6 error incidents in the first eight days of April alone. OpenAI logged 75 tracked incidents across its services in the same 90-day window. xAI's Grok went fully unavailable for more than seven hours on January 27 and again for over two hours on March 10. Google's Gemini, running on Google Cloud infrastructure, posted only two incidents in the same period. The labs without hyperscaler-grade infrastructure are the ones visibly rationing.

Anthropic, the artificial intelligence (AI) company behind Claude, has completed a secondary share sale that began earlier this year. The tender offer was at the same valuation as the company's last fundraising round in February, which pegged its worth at $350 billion. However, some investors were unable to acquire as many shares as they had hoped due to limited availability from employees. The total value of the share sale, which concluded last week, has not been disclosed, but it fell short of the up to $6 billion that investors had anticipated. Both current and former employees were reluctant to part with their shares ahead of Anthropic's initial public offering (IPO), which is expected later this year. Some investors were able to secure their full allocation in the deal, while others could deploy only a portion of the capital earmarked for the tender offer.

One is positioning itself as the world's new front door to AI; the other is a lock-pick so effective its creators are hiding it. Meta reasserted itself in the artificial intelligence race Wednesday with the launch of its first new model since CEO Mark Zuckerberg poured billions into a major overhaul to catch rivals Google, OpenAI and Anthropic. The release came a day after Anthropic said it would severely limit the rollout of an advanced new model designed to excel at identifying software vulnerabilities because it would be a gift to the world's hackers. Meta's new Muse Spark, developed by a lavishly compensated unit Zuckerberg assembled under wunderkind Scale AI cofounder Alexandr Wang, now powers Meta's AI app and website in the US, with Facebook, Instagram, Messenger, WhatsApp and Meta smart glasses to follow in the coming weeks. Similar to Google's Gemini model, it was "purpose-built for Meta's products," enabling the company to leverage its mass social media user base to expedite adoption. Meta is deliberately restraining expectations, highlighting "competitive performance" against ChatGPT, Gemini and xAI's Grok. Meanwhile, it plans to put $115 billion to $135 billion toward capital investments including AI, nearly double last year's tally, signaling its determination to become a leader in the space. Meta soared 6.5% on Wednesday, beating the S&P 500's 2.5% bounce on positive news about the Iran war. The splashy launch contrasted with Anthropic's unusual announcement Tuesday that its advanced model, Claude Mythos Preview, would be distributed only to a few dozen companies, including Amazon, Apple, Microsoft, Nvidia and JPMorganChase, as part of its cybersecurity-oriented Project Glasswing.
That's undoubtedly for the best.

Partner Perks: When reports first circulated late last month about Mythos, some cybersecurity stocks came under pressure over fears Anthropic would become a competitor. But JPMorgan analysts said Wednesday that CrowdStrike and Palo Alto Networks, both part of Project Glasswing, could benefit because they're working closely with Anthropic on the new model. The bank affirmed its overweight ratings on both, with 12-month price targets of $475 for CrowdStrike and $200 for Palo Alto Networks, implying double-digit upside for each.

By Tyler Pager, Katie Rogers and Farnaz Fassihi President Trump sat behind the Resolute Desk as Tuesday evening approached, ruminating about what might unfold in the next few hours. He had vowed to wipe "a whole civilisation" off the map if his 8 pm deadline for Iran to reopen the Strait of Hormuz had passed. As a series of unrelated meetings unfolded, Mr. Trump would interject to list the number of bridges and power plants he was prepared to strike in Iran. He was briefed about Iranians gathering on those bridges and in front of those power plants. He watched the images of people gathering around the structures on television, and told aides it would be the Iranian government's fault if US forces struck and killed them. He called Iranian leaders "evil" for putting innocent people in harm's way. Then in the middle of the afternoon in Washington, an encouraging message about an agreement taking shape was vetted by the White House and posted on social media by Pakistan's prime minister. Shortly after, an agreement that was hastily brokered by a series of mediating governments, including Pakistan and China, reached a president who was looking for a way out of a deeply unpopular war. The victory lap started quickly: Defense Secretary Pete Hegseth and Gen. Dan Caine, the chairman of the Joint Chiefs of Staff, declared on Wednesday morning that all their military objectives had been achieved in what Mr. Hegseth called "a historic and overwhelming victory on the battlefield." But less than one day after Mr. Trump had logged on to social media to announce a truce, the fragile accord was showing signs of fraying, in large part because the two nations would not publicly agree to a shared set of goals for ending the war. After a tumultuous 36 hours spent careening from one diplomatic extreme to another, Mr.
Trump finds himself, in some ways, close to where he started. His efforts to circumvent the reality on the ground and move into a peace process have been hampered by an adversary that continues to hold leverage. The status of the Strait of Hormuz is unclear, even though it was the basis of Mr. Trump's apocalyptic ultimatum. And the fate of Iran's enriched uranium, which Mr. Trump had rosily suggested could be recovered by Americans with the help of Iranians, is unresolved. The fluctuations were emblematic of Mr. Trump's approach to diplomacy with Iran: scorched-earth threats, scrambled markets, alarmed allies and adversaries, widespread civilian panic and an eleventh-hour off-ramp that has both sides accusing the other of being disingenuous. Now Mr. Trump and his advisers are watching closely to see whether the strait remains open. If it does not, one senior official said, the deal will fall apart. This account is based on interviews with nearly a dozen people in the United States, Israel and Iran, most of whom spoke on condition of anonymity to discuss a swiftly moving conflict.

A threat sets off widespread panic

On Monday, the day before Mr. Trump sent a message threatening to wipe out the Iranian civilization, talks had privately been progressing and Iran's supreme leader had appeared to signal an approval to move forward with negotiations, according to multiple Iranian and Israeli officials. Pakistan continued trying to mediate talks between Iran and the United States in an effort to reach a cease-fire and buy time for extensive peace negotiations. But by Tuesday morning, the Americans were growing impatient. Mr. Trump issued his public threat to annihilate Iran, a message that Iran had also received in private via Pakistan, according to three Iranian officials familiar with the negotiations. Iranian leaders, already furious about Mr.
Trump's deadline to blow up power plants, and a wave of attacks on critical infrastructure such as railroads, bridges and industrial plants, decided to call it quits. They told Pakistan that Tehran would halt messaging with Washington and that plans for cease-fire negotiations would be on hold, the three officials said. Iranian officials, from the president to the vice president to commanders of the Islamic Revolutionary Guard Corps, posted messages of defiance on social media. The military leaders believed that Iran had the upper hand with its leverage over the strait and should double down, the officials said. "Iran has clearly won the war and will only accept an end game that solidifies its gains and creates a new security order in the region," Mahdi Mohammadi, an adviser to Mohammad Bagher Ghalibaf, a brigadier general who is the speaker of Iran's Parliament, said in a social media post. In Iran, panic among civilians set in as Mr Trump's deadline for attacking power plants approached. Iranian media started circulating guidelines on how to survive if power, gas and water went out. Residents of Tehran flocked to supermarkets to stock up on dry food and bottled water, cleaning out the aisles in many supermarkets by the evening. "We bought a cooler and blocks of ice, in case we lost power and the fridge stopped working," Nazy, a resident of Tehran who asked her last name not be published for fear of retribution, said in an interview. "I also bought lots of dried goods, candles and batteries for my mother, who is bedridden and can't evacuate." Tens of thousands of people fled for the shores of the Caspian Sea, creating such a heavy traffic jam that the police closed the mountain road to all traffic except for those making their way out of Tehran to the northern shores. In the United States, allies of Mr. Trump called for him to clarify his bellicose messaging, and others publicly expressed hope that the president was not actually going to carry out his threat. 
Senator Ron Johnson of Wisconsin, a close Republican ally of Mr. Trump's, left room for the possibility that Mr. Trump was posturing. "I hope and pray that President Trump is just using this as bluster," he said. Top Democrats quickly promised to force another vote on a resolution to rein in the use of the military in Iran.

Frantic negotiations unfold

With the Iranians threatening to pull out of talks, frantic diplomatic efforts stretching from West Asia to China quickly unfolded. Officials worked the phones to salvage a cease-fire plan and pull Iran and the United States from the brink of a bigger catastrophe, according to the three Iranian officials and a Pakistani official familiar with the efforts. Pakistan's prime minister and foreign minister worked the phones, speaking to both the Iranian president, Masoud Pezeshkian, and Foreign Minister Abbas Araghchi. Turkey, Egypt and Qatar also reached out to Iran, the officials said. But ultimately it was China, which has close economic ties to Iran, that broke the impasse, according to the Iranians and the Pakistani official. China maintains close commercial ties to Iran -- it is the biggest purchaser of Iranian oil -- and also cooperates with the Iranian military. Chinese officials told their Iranian counterparts to agree to the cease-fire now because it might be their only opportunity, the Iranian officials said. China also asked Iran to show more flexibility and open the Strait of Hormuz to maritime navigation for two weeks and consider the economic impact of the war on its allies, including China. Shortly after 5 pm, Pakistan's army chief, Field Marshal Syed Asim Munir, called Mr. Trump to discuss the contours of the cease-fire agreement. Mr. Munir told the president that the Iranians had agreed to Pakistan's proposal. If the Iranians agreed, Mr. Trump told Mr. Munir, then the Americans would.
The president then called Prime Minister Benjamin Netanyahu of Israel to tell him that the United States would enter into a two-week cease-fire.

A fragile deal begins to fray

At 6:32 pm, Mr. Trump announced on Truth Social that he had agreed to suspend the bombing campaign in Iran for two weeks to work out a peace agreement. But even some of Mr. Trump's advisers were skeptical the pause would hold. Disagreements over the scope of the deal emerged almost immediately. At 7:50 pm, Prime Minister Shehbaz Sharif of Pakistan announced the cease-fire agreement and said it applied "everywhere including Lebanon." But on Wednesday morning, the president told a PBS reporter that he viewed the conflict between Israel and Hezbollah, which is backed by Iran, as a "separate skirmish." On Wednesday, Israel launched its heaviest bombardment of Lebanon in more than a month of war with Hezbollah. Mr. Trump and his aides, meanwhile, said they would not publicly lay out the terms that they said they were negotiating over for bringing a lasting end to the war, but they disparaged a separate 10-point proposal that the Iranians made public on Wednesday. "It was literally thrown in the garbage by President Trump and his negotiating team," Karoline Leavitt, the White House press secretary, told reporters at the White House. Still, she announced that Vice President JD Vance, along with Steve Witkoff, the president's special envoy, and Jared Kushner, Mr. Trump's son-in-law, would travel to Pakistan to hold talks with the Iranians. It would be the highest-level meeting between US and Iranian officials since 1979. But shortly after Ms. Leavitt's announcement, top Iranian officials accused the United States of violating the agreement. Mr.
Ghalibaf, the speaker of Parliament, who is expected to attend the meeting in Pakistan, wrote in a statement that the truce and negotiations with the United States were "unreasonable" because Israel was attacking Lebanon, a hostile drone entered Iran's airspace and the United States continued to oppose Iranian nuclear enrichment. Asked about Mr. Ghalibaf's statement, Mr. Vance questioned his language comprehension. "I actually wonder how good he is at understanding English, because there are things that he said that frankly didn't make sense in the context of negotiations that we've had," he told reporters as he departed Hungary.

Washington (United States) (AFP) - A US appeals court on Wednesday denied Anthropic's request to put on hold a move by the Pentagon to label it a supply chain risk, but ordered the AI startup's legal battle with the Department of War to be put on a fast track. "On one side is relatively contained risk of financial harm to a single private company," the three-member appellate panel here reasoned. "On the other side is judicial management of how, and through whom, the Department of War secures vital AI technology during an active military conflict." The ruling stems from the Pentagon designating Anthropic, creator of the Claude AI model, as a national security supply chain risk -- a label typically reserved for organizations from unfriendly foreign countries. The AI startup sought a stay of the action in appellate court here and also sued the Department of War in federal court in Northern California. The appellate panel stated in its ruling that requiring the Department of War to prolong its use of Anthropic AI directly or through contractors "strikes us as a substantial judicial imposition on military operations." However, the appeals court agreed that Anthropic raised "substantial challenges" to the sanctions and ordered that proceedings in the underlying case be expedited. "We're grateful the court recognized these issues need to be resolved quickly and remain confident the courts will ultimately agree that these supply chain designations were unlawful," an Anthropic spokesperson told AFP. "While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI." In the suit filed in San Francisco, federal Judge Rita Lin temporarily froze the sanctions, reasoning that President Donald Trump's administration likely violated the law in blacklisting the AI powerhouse for expressing unease about the Pentagon's use of its technology. 
In her ruling, she said the government's designation of Anthropic as a supply chain risk was "likely both contrary to law and arbitrary and capricious." The dispute erupted in February after Anthropic infuriated Pentagon chief Pete Hegseth by insisting its technology should not be used for mass surveillance or fully autonomous weapons systems. The tech sector has largely supported Anthropic in the wake of the punitive measures.

Anthropic has taken the unusual step of holding back one of its most advanced artificial intelligence models, even as competition in the sector intensifies. The company says the decision is deliberate and necessary. Its unreleased model, Claude Mythos, is simply too powerful when it comes to identifying and exploiting software vulnerabilities. "AI models have reached a level of coding capability where they can surpass all but the most skilled humans at finding and exploiting software vulnerabilities," Anthropic said in a blog post. "The fallout - for economies, public safety, and national security - could be severe." That warning underscores a growing concern in the AI industry: that rapid advances in model capability may outpace the world's ability to secure itself against misuse.

Claude Mythos AI model: Why Anthropic is withholding public release

Unlike most AI launches, which are accompanied by public demos and developer access, Claude Mythos is being kept behind closed doors. "We have a new model that we're explicitly not releasing to the public," Mike Krieger of Anthropic Labs said at the HumanX AI conference in San Francisco. The model, part of the broader Claude family, has demonstrated an extraordinary ability to uncover weaknesses in widely used software. According to Anthropic, Mythos has already identified thousands of vulnerabilities, many of which had gone undetected for years. The oldest dates back 27 years. In one instance, the AI discovered a subtle flaw in video software that had been tested more than five million times by its developers without detection. These findings highlight how traditional testing methods may struggle to keep up with increasingly complex systems. Such capabilities have raised fears that, in the wrong hands, similar tools could be used to crack passwords, bypass encryption, or exploit critical infrastructure.
A recent leak of portions of Mythos's code only heightened these concerns, prompting Anthropic to publicly acknowledge the model's potential risks.

Project Glasswing and AI cybersecurity: How Anthropic is using Mythos defensively

Rather than releasing Mythos widely, Anthropic is deploying it as part of a coordinated cybersecurity effort. The company has launched an initiative called Project Glasswing, bringing together around 40 organisations involved in building and maintaining digital infrastructure. "This work is too important and too urgent to do alone," Anthony Grieco, Cisco's chief security and trust officer, said in a joint release about Glasswing. "AI capabilities have crossed a threshold that fundamentally changes the urgency required to protect critical infrastructure from cyber threats, and there is no going back." Through Glasswing, Anthropic is sharing controlled access to Mythos with cybersecurity firms and major technology companies, including CrowdStrike, Palo Alto Networks, Amazon, Apple and Microsoft. Networking giants Cisco and Broadcom are also participating, alongside the Linux Foundation. The goal is to use the model as a defensive tool, allowing experts to identify and patch vulnerabilities before malicious actors can exploit them. As Krieger explained, Anthropic is effectively "arming them ahead of time". To support the effort, the company is committing roughly $100 million in computing resources. Early results suggest that AI can dramatically accelerate the discovery and remediation of both software and hardware flaws, operating at a scale previously unattainable. Anthropic has also held discussions with the US government regarding Mythos, even as it navigates a legal challenge over a directive to terminate federal contracts with the company. For now, Claude Mythos represents both the promise and peril of next-generation AI: a tool capable of strengthening digital defences, but one that must be handled with extreme caution.
The Federal Communications Commission is set to vote this month on rule revisions that would relax decades-old power limits on satellite spectrum use, a move the agency says could sharply expand space-based broadband capacity and give services such as SpaceX's Starlink a major lift.

FCC Targets Faster Satellite Broadband Expansion

The FCC said on Wednesday that it will vote April 30 on a measure allowing "greater and more intensive use" of wireless spectrum for space activities, a change it estimates could generate about $2 billion in economic benefits through broader broadband use.

* Investor Michael Burry publicly reiterates a short position in Palantir Technologies, criticizing NasdaqGS:PLTR's premium AI valuation and highlighting Anthropic as a lower-cost competitor.
* Anthropic ramps up its own AI offerings, adding pressure to Palantir's claim to a premium position in enterprise and government AI software.
* UK regulators and ethics bodies increase scrutiny of Palantir's government and healthcare contracts, raising questions about data privacy and long-term contract risk.
* These developments coincide with a sharp pullback in NasdaqGS:PLTR, which closed at $140.76 and remains far above its level three years ago, alongside a 53.0% one-year return.

For investors watching NasdaqGS:PLTR, the tension is now between a strong long-term share price record and a sharper, more vocal bear case. The stock is down 3.9% over the past week, 10.0% over the past month, and 16.1% year to date, even after a 53.0% return over the past year and a very large gain over three years. That mix of strong trailing performance and recent pullback is now being reframed by critics who question how defensible Palantir's premium AI position really is. Looking ahead, the key questions are likely to center on how Palantir responds to Anthropic's push into large-scale AI tooling and how UK regulatory and ethical scrutiny shapes its government and healthcare pipeline. For you as an investor, the focus may be less on headline volatility and more on whether these pressures meaningfully change Palantir's contract profile, pricing power, and perceived moat over time.
The latest criticism from Michael Burry and Anthropic's product push lands at a sensitive time for Palantir, because UK regulators and ethics groups are already questioning how its AI platforms are used in healthcare and defense. For you, the key issue is not only whether Anthropic, OpenAI, Microsoft or Google offer cheaper or more flexible AI tooling, but also whether UK scrutiny leads to tighter rules, extra approval steps, or contract-specific restrictions around Palantir's NHS and government work. That kind of oversight can lengthen sales cycles, add compliance costs, or limit how data can be combined across agencies, which matters for a company that pitches itself as an operating system for sensitive data. At the same time, recently renewed deals with Stellantis, the expanded Bain & Company partnership and the Pentagon's Maven decision show that large institutions still choose Palantir for complex AI deployments, even as the public debate grows louder.
A group of new accounts on the prediction market Polymarket made highly specific, well-timed bets on whether the U.S. and Iran would reach a ceasefire on April 7, resulting in hundreds of thousands of dollars in profits for these new customers. These bets were made even though, in the hours before a two-week ceasefire was announced on Tuesday, President Donald Trump's rhetoric had escalated sharply and there were few signals that a ceasefire deal was imminent. Early in the day, Trump had issued a warning on social media that "a whole civilization will die tonight" if Iran did not meet his demand to open the Strait of Hormuz by his 8pm ET deadline. One of these wallets, created Tuesday around 10 am ET, placed roughly $US72,000 in bets at an average price of 8.8 cents. Contracts in each betting event trade between $US0 and $US1, with the price reflecting the probability, from 0 per cent to 100 per cent, that users assign to the outcome. This Polymarket user then cashed out for a profit of $US200,000. Another, which joined the platform on April 6 and traded on this exact event, shows a win of $US125,500. There is also the possibility that these individual Polymarket users placed their bets expecting Trump to back down, given his habit during his second term of making bold threats only to retreat -- a phenomenon his critics have derided as "Trump Always Chickens Out," or TACO. Public blockchain data cannot identify who controls the new wallets. Polymarket uses proxy smart contract wallets, meaning a single user can create multiple accounts. Only Polymarket has the internal data needed to determine whether these were new users or existing users opening additional accounts. Polymarket did not respond to a request for comment. Rep. Blake Moore, R-Utah, who has introduced legislation to regulate prediction markets, released a statement Wednesday saying: "It's highly unlikely that these are good-faith trades; it's much more likely that these are insiders with access to information ahead of the public.
Without some kind of restrictions, there is nothing stopping government or military officials from profiting from their positions." The trading pattern of newly created Polymarket accounts placing strategic, well-timed bets mirrors earlier episodes on the platform. Newly created accounts placed large wagers hours before the January capture of Venezuelan President Nicolás Maduro, and made hundreds of thousands of dollars in profit. Similar clusters of accounts have also repeatedly profited from well-timed bets on military actions involving Iran. Such bets have repeatedly raised questions from the public as well as members of Congress about whether some traders are using inside information to profit in these prediction markets. Bipartisan groups of senators as well as representatives have introduced legislation that would broaden the definition of insider trading to include prediction markets. Even the two biggest platforms in the industry, Kalshi and Polymarket, have said they see a need to broaden the definition of insider trading on their platforms. "This is why these markets need regulation," said Todd Philips, a professor at Georgia State University who has written on prediction markets and the industry's regulations. "We can't have people trading with inside information and expect other traders are going to be OK being in these markets."
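The contract mechanics described above can be sketched in a few lines. This is a simplified illustration using the figures reported in the article; fees, partial fills, and the trader's exact exit prices are ignored.

```python
# Sketch of binary prediction-market contract pricing, using the reported
# figures: a YES contract pays $1 if the event happens, so its price
# (between $0 and $1) is the market's implied probability of the event.

def implied_probability(price: float) -> float:
    """Price of a $1-payout YES contract read as a probability."""
    return price

def payout_if_resolved_yes(stake: float, avg_price: float) -> float:
    """Total payout if every contract bought at avg_price resolves YES."""
    contracts = stake / avg_price   # number of $1-payout contracts acquired
    return contracts * 1.00

stake = 72_000       # reported buy-in for one wallet
avg_price = 0.088    # reported average price (8.8 cents)

print(f"implied probability: {implied_probability(avg_price):.1%}")
print(f"contracts bought:    {stake / avg_price:,.0f}")
print(f"payout if YES:       ${payout_if_resolved_yes(stake, avg_price):,.0f}")
```

Held all the way to a YES resolution, that stake would have paid out roughly $US818,000; the reported $US200,000 profit is consistent with the position being sold before resolution at a higher price rather than held to the full $US1 payout.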

Elon Musk's SpaceX is seeking a $1.75 trillion valuation in its forthcoming initial public offering. How far into the stratosphere is that? Going by common Wall Street metrics, the answer is, way out there. SpaceX would immediately become the sixth most-valuable publicly listed U.S. firm, worth more than the likes of Meta Platforms, which has been publicly listed for more than a decade, and Berkshire Hathaway, a company older than SpaceX founder Elon Musk. And yet, there is no sign that investors will think twice about hitting the buy button once it goes public in an IPO that could raise $75 billion or more, which would be a record. The frenzy has grown so intense that some are pouring money into opaque secondary markets, accepting complex arrangements and murky ownership just for a shot at owning the shares. "It has almost no comparable listed peer to benchmark a valuation off of and would likely come at a significant premium to anything else that is listed in the space tech sector, given its size and market leadership," said Samuel Kerr, global head of equity capital markets at Mergermarket. SpaceX's valuation is grounded in its profitable, fast-growing Starlink satellite network, which has over 10 million subscribers, and a launch business that analysts and investors say has transformed access to orbit. The Falcon 9, which in December 2015 became the first large rocket to make a controlled recovery after delivering a payload into orbit, completed 165 launches in 2025, a new annual record. But analysts and portfolio managers are also pricing in considerably more. Musk's track record of building successful, industry-disrupting companies gives analysts and portfolio managers confidence that the unproven bets - Starship, xAI, and an ambitious push into data-center satellites - will eventually pay off too. 
"This is a set of proven juggernaut, mega-cap businesses," said Daniel Hanson, portfolio manager at Neuberger's Quality Equity Fund, an existing SpaceX investor with close to 10% of its $2.6 billion in assets allocated to the company. "The launch business and the Starlink business are proven, here and now. xAI is about optionality," he said, referring to businesses that could add value over time as they benefit from long-term shifts toward AI, data and global connectivity. Here's a quick look at the pros and cons ahead of the IPO.

LEADING THE SPACE RACE

SpaceX has a commanding lead in deploying the low-Earth orbit satellites that deliver internet and communications for its Starlink service from space. Starlink is profitable and accounts for roughly 50% to 80% of SpaceX's revenue. Many of the parent company's other ambitions are yet to be realized. These include the delayed Starship rocket program for Moon and Mars missions and plans to launch up to one million data-center satellites linked to its money-losing AI unit. To justify the valuation, "investors will need to keep strict tabs on the timing of Starship coming to market and on the ramp-up of Starlink service direct to cellphones," PitchBook analyst Franco Granda said in a note last month. Even so, SpaceX launches a rocket nearly every two days, faster than any space program or firm in history, giving it key capacity in a market where launch access has become a bottleneck for rivals like Amazon, which is building its own satellite networks. "It's a one-of-a-kind for a start," said Mark Boggett, CEO of venture capital fund Seraphim Space.

MULTIPLES ARE STRETCHED

SpaceX posted about $8 billion in profit and revenue of $15 billion to $16 billion in 2025, Reuters exclusively reported in January. The profit figure is based on EBITDA, or earnings before interest, taxes, depreciation and amortization, a standard measure of operating performance.
Revenue growth has ranged in recent years from 51% in 2024 to 100% in 2021. Unlike listed companies covered by analysts, no consensus projections exist for SpaceX's growth. Reuters made several assumptions in order to compare SpaceX's potential valuation with listed firms. Reuters assumed cash flow and revenue would double in 2026 from reported levels in 2025, an aggressive rate aimed at making the valuation multiples err on the low side. Using those assumptions, at a market capitalization of $1.75 trillion, SpaceX would carry a price-to-revenue multiple of 56 and a price-to-EBITDA multiple of 109 - eye-popping valuations for even the fastest-growing companies. Tesla, which Musk also leads, is valued at 12 times expected revenue and 79 times EBITDA, making it one of Wall Street's priciest stocks. Palantir is at 43 and 75 for those metrics, respectively, after its shares soared 500% in the past two years on optimism about its fast-expanding AI business. Generally speaking, the higher the multiple, the harder it is for a company's performance to meet expectations to keep its stock appreciating. "Starlink is the only reason this valuation is defensible," said Shay Boloor, chief market strategist at Futurum Equities. Its subscriber base "is just growing at crazy levels."

THE FOG OF PRIVATE COMPANY VALUATIONS

In its merger with Musk's artificial intelligence startup xAI in February, SpaceX was valued at $1 trillion and the Grok chatbot developer at $250 billion. That transaction gives analysts at least one recent anchor for the combined entity's value, and some investors argue it is too conservative. It is currently valued at $1.54 trillion on secondary trading venue Nasdaq Private Market. "SpaceX is consistently one of the most actively traded names on our platform because there's nothing else like it in the private markets today," said Greg Martin, co-founder at Rainmaker Securities, a trading platform for private pre-IPO shares.
"Demand has also almost always outpaced supply, and that's been true even during periods where broader secondary market activity has been more muted." Published on April 9, 2026
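The multiples quoted above can be reproduced from the stated assumptions. The figures are the article's; the doubling of 2025 revenue and EBITDA into 2026 is Reuters' own stated assumption, not a forecast.

```python
# Recomputing the valuation multiples from the assumptions stated in the
# article: ~$15.5B revenue (midpoint of $15B-$16B) and ~$8B EBITDA in 2025,
# both assumed to double in 2026.

market_cap = 1_750e9           # targeted IPO valuation, $1.75 trillion

revenue_2025 = 15.5e9          # midpoint of reported 2025 revenue
ebitda_2025 = 8e9              # reported EBITDA-based profit

revenue_2026 = revenue_2025 * 2    # Reuters' doubling assumption
ebitda_2026 = ebitda_2025 * 2

price_to_revenue = market_cap / revenue_2026
price_to_ebitda = market_cap / ebitda_2026

print(round(price_to_revenue), round(price_to_ebitda))  # prints: 56 109
```

Dividing the $1.75 trillion target by the assumed 2026 figures yields the 56x revenue and 109x EBITDA multiples cited; using 2025 figures instead would roughly double both multiples, which is why the article calls the doubling assumption conservative for this comparison.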
Anthropic PBC is limiting the release of its latest artificial intelligence model to a handful of major technology firms, warning that the system may be capable of powering cyberattacks if software makers don't have a chance to test it against their own defences first. Anthropic said Tuesday that it's forming an initiative called Project Glasswing with Amazon.com Inc, Apple Inc, Microsoft Corp, Cisco Systems Inc and other organisations. The companies will get access to the new Anthropic model known as Mythos so they can test it against their own products and hunt for vulnerabilities. The idea is that the group will collectively share findings with peers. The AI startup meanwhile has no plans yet to release Mythos to the general public. The company said it'll use the findings from Project Glasswing to inform what guardrails must be in place for the technology. The arrangement reflects growing concerns among tech firms that more sophisticated models will be misused by criminals and state-backed hackers to hunt for flaws in source code and bypass cyber defences. AI technology already is being used to help enable cyberattacks. In one case, a hacker used AI tools to facilitate a breach affecting the Mexican government. During Anthropic's testing, its in-house security team found that Mythos Preview was capable of identifying and then exploiting vulnerabilities "in every major operating system and every major web browser when directed by a user to do so," according to a blog post. The exploits weren't "run-of-the-mill" either, the team said. In one case, it wrote a web browser exploit that chained together four vulnerabilities. Anthropic rival OpenAI has also previously stressed the growing cyber capabilities of its models and introduced a pilot program meant to put its tools "in the hands of defenders first." "We think this isn't just an Anthropic problem.
This is an industry-wide problem that both private corporations but also governments need to be in a position to grapple with," said Newton Cheng, who leads the cyber effort within Anthropic's Frontier Red Team. "What we're trying to do with Glasswing is give defenders a head start." Anthropic said it has discussed Mythos's security-related capabilities with US officials, but declined to say which agencies. Cheng pointed to the company's existing work with the Cybersecurity and Infrastructure Security Agency and the National Institute of Standards and Technology. Mythos is a general-purpose AI model and was not specifically developed for cybersecurity purposes, Anthropic said. Yet, Mythos has already discovered a number of security issues, Cheng said, including a 27-year-old bug used in critical internet software. The AI system also found a 16-year-old vulnerability in a line of code for popular video software that automated testing tools had scanned five million times but never detected, Anthropic said. Dianne Penn, head of product management for research at Anthropic, said there are protections in place to ensure that members of Project Glasswing keep a tight grip on access to the Mythos model, but declined to share more detail for security reasons. The existence of Mythos was first revealed thanks to a leak late last month after a draft blog post was left available in a publicly searchable data repository. - Bloomberg

Anthropic's AI model Mythos is said to be so effective at finding and exploiting security vulnerabilities that it should only be used to secure IT infrastructure. Anthropic has introduced Mythos, a new AI model the company says is too dangerous to release publicly. Instead, Claude Mythos Preview is being made available exclusively to a range of companies working on IT security as part of an initiative called Project Glasswing. They are to use the AI technology to secure the "world's most critical software." Anthropic justifies this step by stating that the model has already identified thousands of high-risk zero-day vulnerabilities. Such vulnerabilities have been discovered in all major operating systems and every major internet browser, as well as in numerous other software products. Above all, Mythos Preview is significantly more capable of developing working exploits. As an example, Anthropic cites a vulnerability in OpenBSD that had been overlooked for 27 years and could allow attackers to crash a device remotely "just by connecting to it." It also mentions a 16-year-old vulnerability in FFmpeg that went undetected across five million automated scans with specialized search tools. Furthermore, the model was able to chain together a series of previously unknown vulnerabilities in the Linux kernel into an attack that would let an ordinary user gain complete control of a machine. These and other vulnerabilities have been reported to the responsible parties. Anthropic has published a blog post on the matter. The newly announced Project Glasswing initiative includes, among others, Amazon Web Services (AWS), Anthropic, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks. Some 40 other organizations responsible for software for critical infrastructure are also involved.
In total, Anthropic is providing usage credits worth up to 100 million US dollars for the new AI model, with four million US dollars going directly to maintainers of open-source software. This is intended to enable them all to search systems for vulnerabilities, which are to be closed before other AI models catch up to Mythos's capabilities. Anthropic is primarily known for its AI Claude, which competes with OpenAI's ChatGPT. However, the company recently made headlines due to a dispute with the Pentagon: Anthropic refused to allow its AI to be used in autonomous weapons or for mass surveillance in the USA and was consequently declared a supply chain risk. The company is now taking legal action against this. The AI company told the US publication Platformer that it could help the US government with the much-needed evaluation of Mythos; it is still unclear whether the government will accept the offer. It is also unclear whether the plan to find security vulnerabilities with increasingly powerful AI tools in time to preempt malicious attackers will succeed.

Anthropic has raised serious concerns within the technology and security communities after describing the potential risks of a powerful unreleased AI system known internally as "Claude Mythos." Company executives say the model's capabilities are so advanced that making it publicly available could lead to widespread misuse, including cyberattacks and acts of terrorism. According to reporting from multiple technology and AI policy outlets, Anthropic has deliberately chosen not to release the system, citing fears that its abilities could be exploited at scale. The company, which has positioned itself as a leader in AI safety, warned that the model's potential applications extend far beyond current consumer-facing tools. One executive described the situation in stark terms, saying the model "could dramatically lower the barrier for conducting sophisticated cyberattacks." Another added that releasing it without safeguards "would risk enabling a wave of harmful activity that existing systems are not prepared to handle." The concerns center on the model's ability to generate highly detailed technical instructions, automate complex tasks, and adapt quickly to user intent. While such capabilities could have legitimate uses in research or industry, experts worry they could also be repurposed for malicious ends, including hacking critical infrastructure or coordinating large-scale attacks. Anthropic has not disclosed full technical details about Claude Mythos, but sources familiar with the matter say internal testing revealed scenarios where the system could assist in identifying vulnerabilities and optimizing attack strategies. As one insider put it, "The issue isn't just what it knows, but how effectively it can apply that knowledge in real time." The company's decision highlights a growing divide in the AI industry between rapid deployment and cautious development. 
While some firms continue to release increasingly powerful models to the public, others are beginning to emphasize controlled access and staged rollouts. Anthropic said it is continuing to study the system under strict internal controls and is working with external experts to evaluate risks. The company also called for broader industry standards, noting that "this is not a challenge any single organization can solve alone."

All of the news around Anthropic's recent work doesn't really fit well into one short headline, so: essentially, the company is preparing to roll out 3.5 gigawatts of compute, nested in various data centers, using TPUs from Google and Broadcom, according to new reporting at TechCrunch (and some details from a recent SEC filing by Broadcom). "This reworking of Anthropic's compute deals comes as demand for its AI models continues to soar," writes veteran IT reporter Rebecca Szkutak on April 7. But there's also other context to this: namely, a plan by Anthropic to dot the U.S. landscape with data centers in order to support the kinds of operations it is seeking the new hardware for. Apparently, many of these facilities will be located in New York and Texas. Another TechCrunch reporter, Russell Brandom, covered this last November, noting a partnership with Fluidstack, a "neocloud" company, and estimating that Anthropic will spend around $50 billion on the effort. "We're getting closer to AI that can accelerate scientific discovery and help solve complex problems in ways that weren't possible before," said Anthropic co-founder Dario Amodei at the time. "Realizing that potential requires infrastructure that can support continued development at the frontier."

Why TPUs?

Some of you might be wondering why the purchase agreements in Anthropic's ledger are for Tensor Processing Units, or TPUs, and not the GPUs popularized by chip front-runner Nvidia. The answer, in a nutshell, is that TPUs are built for greater efficiency on specialized workloads, while GPUs are more practical for broader uses; so if a company like Anthropic has highly defined AI processes, a TPU might be a better match. However, the industry as a whole is very much geared toward the likes of the Grace Hopper and Vera Rubin lines of Nvidia chips, named after famed female scientists.

Anthropic's Ascendance

All of this also underscores the dramatic rise of Anthropic as a top U.S.
AI company. Famously, Dario Amodei co-founded the company with his sister just a few years ago, and now they're handling quite a lot of money. "Anthropic basically went from startup to money printer in 12 months," writes Dani at Humai.blog, citing estimates of new Anthropic funding. "$30 billion revenue run rate is the kind of number that makes OpenAI executives wake up in cold sweats, and Google just became their best friend with the biggest compute deal in AI history." Here's a new quote from Anthropic CFO Krishna Rao on the Google-Broadcom deal: "This groundbreaking partnership with Google and Broadcom is a continuation of our disciplined approach to scaling infrastructure: we are building the capacity necessary to serve the exponential growth we have seen in our customer base while also enabling Claude to define the frontier of AI development. We are making our most significant compute commitment to date to keep pace with our unprecedented growth."

Inner - and Outer - Conflict

Of course, amidst all of this, the big news about Anthropic a couple of months ago was its tiff with the Department of Defense over autonomous lethal weapons and mass spying on U.S. citizens, in which the AI company was on the "no" side. But there's also a critique of the company's internal motivations, where some argue that Anthropic's trajectory seems self-contradictory. "Anthropic is at odds with itself -- thinking deeply, even anxiously, about seemingly every decision," writes Matteo Wong at The Atlantic in an article titled, melodramatically, "Anthropic Is at War with Itself." The basic idea seems to be that, even though Amodei has publicly acknowledged the dangers of moving ahead quickly with AI, the company can't really seem to slow down. That's a piece of critical context for a big TPU purchase and a national data center plan.
