Rep. Seth Moulton (D-Mass.) ripped Polymarket on Friday after the popular prediction market platform allowed users to place bets on the fate of a missing American F-15 fighter pilot shot down over Iran.

The since-deleted market offered degenerates the opportunity to wager on what date the US would confirm that the downed airman had been found - with most (63%) predicting that they wouldn't be rescued until Saturday.

"This is DISGUSTING," Moulton fumed on X, sharing a screenshot of the betting market. Moulton noted that the page went up amid "an ongoing search and rescue operation."

"Their safety is unknown," the congressman wrote. "They could be your neighbor, a friend, a family member. And people are betting on whether or not they'll be saved."

Polymarket said the betting page should not have been allowed to go up and was removed. "We took this market down immediately as it does not meet our integrity standards," the company wrote on X. "It should not have been posted, and we are investigating how this slipped through our internal safeguards," it added.

Polymarket allows users to place bets on a wide range of Iran war topics, such as whether US ground forces will be used and when a cease-fire may be announced.

"Taking down this particular bet after I called it out can only be the first step," Moulton wrote in a subsequent post. "There are still 219 war bets active on your platform. Remove these immediately."

The company claims it does not make money or charge any fees on any geopolitical markets.

Moulton's criticism of Polymarket comes after the company took heat from Democratic lawmakers earlier this month, when six suspected insiders made $1.2 million on contracts tied to the strikes on Iran - including an alleged $550,000 windfall related to Supreme Leader Ayatollah Ali Khamenei's death.

Sen. Chris Murphy (D-Conn.) pledged to draw up legislation to ban bets tied to government actions in response to the gambling on the Iran war.

"This is American commercial immorality on steroids," Murphy told the Washington Post, arguing that prediction markets have created a more "dystopian world." "People shouldn't be rooting for people to die because they placed a bet," the senator said.

Aman Gupta is a Digital Content Producer at LiveMint with over 3.5 years of experience covering the technology landscape. He specializes in artificial intelligence and consumer technology, reporting on everything from the ethical debates around AI models to shifts in the smartphone market.

His reporting is grounded in first-hand testing, independent analysis, and a focus on how technology impacts everyday users. He holds a PG Diploma in Radio and Television Journalism from the Indian Institute of Mass Communication, Delhi (Class of 2022).

Outside the newsroom, he spends his time reading biographies, hunting for the perfect coffee beans, or planning his next trip.

You can find Aman on LinkedIn (https://www.linkedin.com/in/aman-gupta-894180214) and on X at @nobugsfound (https://x.com/nobugsfound), or reach him via email at [email protected].

Starlink's growth may justify optimism, but critics of Elon Musk say the numbers still require heroic assumptions.

Elon Musk's SpaceX is reportedly eyeing a June initial public offering (IPO), one that would value the company at $1.75 trillion (€1.52 trillion). SpaceX is part rocket company, part satellite internet provider (Starlink), alongside Musk's AI venture xAI and the social network X. It's an eclectic mix, to put it mildly.

The rocket business, once a cash sink, is now solidly profitable, but it is Starlink that carries the burden of a $1.75 trillion valuation - "the only reason this valuation is defensible", as one analyst told Reuters. But is it really defensible?

Media reports suggest SpaceX made about $8 billion in profit on $15 billion to $16 billion in revenues last year. By what alchemy does that translate into $1.75 trillion? There have been some heroic attempts at justification, most notably from PitchBook analyst Franco Granda, who last month described a price-sales ratio of "nearly 95" as "rich but not irrational". Since when was a price-sales ratio of 95 merely "rich but not irrational"? In fact, even that understates things: Granda was talking about a $1.5 trillion valuation, although at these altitudes another $250 billion barely registers.

Some perspective. The S&P 500 trades on about three times sales, the Nasdaq six. Meta, Apple, Alphabet, and Microsoft all trade for between seven and nine times sales. Musk's other trillion-dollar company, Tesla, is richer (14), as is Nvidia (19.5). However, the only large-cap company with a price-sales ratio even resembling SpaceX's is the famously expensive AI and defence technology stock Palantir (84), which until now had been in a league of its own.

You have to do a lot of mental work to make the sums work. PitchBook sees $150 billion in revenue by 2040.
Even under that heroic scenario where SpaceX grows its revenues tenfold, the company would still be valued at more than 10 times its 2040 sales - which is to say, still very, very rich. Of course, all this valuation talk will seem terribly old-fashioned to Elon Musk's devoted retail followers. They will be an important constituency, given Musk reportedly hopes to allocate up to 30 per cent of SpaceX's IPO to retail investors. For Musk devotees, valuation is a detail best ignored. More valuation‑minded observers, meanwhile, will surely squint at the maths and wonder what they're missing.
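The back-of-the-envelope arithmetic is easy to check. The short sketch below uses only the figures quoted in the article (reported numbers, not audited financials) to show why both the "nearly 95" ratio and the "more than 10 times 2040 sales" claim hold up:

```python
# Sanity-check of the price-sales ratios discussed above.
# All inputs are the article's reported figures, not audited financials.

VALUATION = 1.75e12                      # mooted IPO valuation, USD
GRANDA_VALUATION = 1.5e12                # valuation Granda was pricing
REV_LOW, REV_HIGH = 15e9, 16e9           # reported last-year revenue range
REV_2040 = 150e9                         # PitchBook's 2040 revenue forecast

ps_now = (VALUATION / REV_HIGH, VALUATION / REV_LOW)
ps_granda = GRANDA_VALUATION / REV_HIGH  # the "nearly 95" figure
ps_2040 = VALUATION / REV_2040           # if the valuation merely held flat

print(f"P/S at $1.75tn on current revenue: {ps_now[0]:.0f}-{ps_now[1]:.0f}")  # 109-117
print(f"P/S at Granda's $1.5tn: {ps_granda:.0f}")                             # 94
print(f"P/S at $1.75tn on 2040 revenue: {ps_2040:.1f}")                       # 11.7
```

Even taking the most generous revenue estimate, the implied multiple sits around 109 today, and a flat $1.75 trillion valuation against PitchBook's 2040 forecast still prices the company at nearly 12 times sales.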

Anthropic accidentally released a software package for Claude Code AI that included a large amount of internal, sensitive data. The leak quickly caught the attention of developers worldwide, but the response was particularly intense in China, where the AI tool is officially unavailable. Chinese engineers who access Anthropic's services through virtual private networks scrambled to download mirrored copies and began analyzing the exposed code in detail.

On social media platforms in China, discussions about the so-called "Claude Code source code leak incident" surged. Developers shared detailed breakdowns of the software's architecture, memory systems, and agent framework. Even though access to U.S. AI models is restricted on national security grounds, Chinese engineers have long been interested in frontier coding assistants that promise to automate software development workflows. The leak provided them with a rare technical window into the operational logic and orchestration layer that transforms a large language model into a usable product.

Industry experts note that the leaked code did not expose Claude's underlying model weights, which remain the most valuable component of any closed-source AI system. Still, the operational design and product decisions revealed are considered highly valuable. Beijing-based systems architect Zhang Ruiwang called the code batches a "treasure" because they show key engineering choices behind the product. For Chinese developers and rival AI labs, this information can accelerate internal product development and provide insights that are otherwise difficult to obtain.

Anthropic responded by removing the release and sending takedown notices to code-hosting platforms like GitHub. However, mirrored copies had already spread across multiple repositories, making containment difficult. Reports indicate that thousands of repositories were affected, increasing scrutiny over Anthropic's internal controls.
The incident comes at a sensitive time for the private company, which has built its reputation on AI safety and operational discipline, and marks a second recent exposure of sensitive information.

You can no longer use OpenClaw with Claude for free.

Anthropic has announced that it will no longer let users run third-party tools on Claude for free. Those who want to use tools such as OpenClaw on Claude will need to pay in addition to their existing Claude subscription.

Anthropic executive Boris Cherny confirmed the change on X, stating that it will go into effect from Saturday, April 4, 2026. He wrote, "Starting tomorrow at 12 pm PT (12:30 am IST), Claude subscriptions will no longer cover usage on third-party tools like OpenClaw."

OpenClaw is an open-source agentic AI platform. Users can run OpenClaw locally on their device with AI models such as Claude or ChatGPT, turning it into an AI agent that can then do tasks on its own. Anthropic has since worked on giving agentic AI features to Claude, such as Dispatch and computer use. Claude can now be controlled remotely via your phone, and the AI can do tasks such as building and testing an app in a single prompt. OpenClaw went viral a few weeks ago when OpenClaw AI bots started posting on Moltbook, an AI-only social media platform.

Peter Steinberger shared a post on X, claiming that he tried to change Anthropic's mind but to no avail. He wrote, "Both me and Dave Morin tried to talk sense into Anthropic, best we managed was delaying this for a week." Steinberger took a dig at the Dario Amodei-led startup over allegedly copying OpenClaw's features, adding, "Funny how timings match up, first they copy some popular features into their closed harness, then they lock out open source."

According to Cherny, Anthropic's computing capacity was not optimised for third-party tools like OpenClaw. He wrote, "We've been working hard to meet the increase in demand for Claude, and our subscriptions weren't built for the usage patterns of these third-party tools."
In the past week, many users have complained that they ran out of their Claude usage limits much quicker than before. Anthropic has stated that no user was charged unfairly, and instead gave some suggestions for reducing token use.

Boris Cherny confirmed that users can still run tools like OpenClaw, but at an additional cost that will be billed separately. He stated, "You can still use these tools with your Claude login via extra usage bundles (now available at a discount), or with a Claude API key."

Do note that the creator of OpenClaw, Peter Steinberger, joined OpenAI recently, though the agentic AI platform remains open source with no involvement from the Sam Altman-led startup.

On X, some users expressed their discontent with the change. One user wrote, "I love you guys, and Claude is amazing, but this low-key sucks this decision is actually going to actively hurt a lot of people." Another person commented, "The increasing demand is nothing to do with third party use. The amount of tokens is the same. Why do you treat us as stupid?" One user demanded a refund, stating, "That's not how business works. You have to refund or give more notice." Cherny confirmed that the company would issue refunds, replying, "We are issuing full refunds, as well as discounts for overages. If you're impacted, expect an email link tomorrow."

Street research is expected for three companies in the week ahead, and six lock-up periods will be expiring. One IPO priced this past week, joined by one SPAC, and one major deal joined the pipeline. The biggest news of the week came from the backlog, though: Elon Musk's SpaceX (SPACE) has confidentially filed for its IPO.

HMH raised $210 million at an $880 million valuation, priced below the midpoint, and ended its debut week down 6%. TMCR's $292 million direct listing is backed solely by a 2% royalty on an unpermitted Pacific Ocean mining project, with no current revenue. The Renaissance IPO Index is down 7.8% year-to-date, underperforming the S&P 500's 3.6% decline.

Mercor is one of the few companies that OpenAI, Anthropic, and other AI labs rely on for generating training data. The firm employs a vast network of human contractors to create custom, proprietary datasets for these labs. These datasets are usually kept under wraps as they play a crucial role in developing valuable AI models that power products like ChatGPT and Claude Code. The data generated by Mercor is highly sensitive as it can give competitors, including other US and Chinese AI labs, insights into their training methods. However, it remains unclear if the information leaked in Mercor's breach would significantly benefit a competitor. OpenAI is currently investigating the incident to determine how its proprietary training data may have been compromised but has assured that the incident does not affect user data.

Elon Musk has reportedly mandated that the banks and advisors involved in SpaceX's upcoming initial public offering (IPO) purchase subscriptions to the AI chatbot Grok. The New York Times, citing sources, reported on Friday that some banks have agreed to spend millions annually on Grok and have begun integrating it into their IT systems as a mandatory condition for SpaceX IPO roles. The company did not immediately respond to Benzinga's request for comment.

Wall Street's Biggest Names Line Up As Bookrunners

International banks, including the Royal Bank of Canada, Mizuho Financial Group and Macquarie Group, are also participating, focusing on share distribution in their respective markets.

A $75B Raise That Could Rewrite IPO History

SpaceX's IPO has been a topic of significant interest. The company recently filed confidentially for its IPO with the Securities and Exchange Commission. By filing confidentially, SpaceX could receive SEC feedback and make adjustments before its prospectus becomes public. Despite the high valuation, prediction markets have been optimistic about SpaceX's IPO.

Mercor works with major companies by providing training data for AI models. Meta has paused its work with Mercor and is investigating after a recent data breach at the AI training startup, a source familiar with the matter confirmed to Business Insider. Mercor, which was valued at $10 billion in a funding round in October, works with major tech companies like Meta to train AI models with the help of thousands of human contractors and experts. Wired first reported on Friday that Meta had paused all its work with the company. Meta declined to provide a comment. Mercor confirmed to Business Insider on Friday that the company had experienced a security breach. "The privacy and security of our customers and contractors is foundational to everything we do at Mercor. We recently identified that we were one of thousands of companies impacted by a supply chain attack involving LiteLLM," Mercor said in a statement, referring to the open source project LiteLLM. "Our security team moved promptly to contain and remediate the incident," the company added. "We are conducting a thorough investigation supported by leading third-party forensics experts."

Are you a subscriber to Anthropic's Claude Pro ($20 monthly) or Max ($100-$200 monthly) plans who uses its Claude AI models and products to power third-party AI agents like OpenClaw? If so, you're in for an unpleasant surprise.

Anthropic announced a few hours ago that starting tomorrow, Saturday, April 4, 2026, at 12 pm PT/3 pm ET, it will no longer be possible for those Claude subscribers to use their subscriptions to hook Anthropic's Claude models up to third-party agentic tools, citing the strain such usage was placing on Anthropic's compute and engineering resources and its desire to serve a large number of users reliably.

"We've been working hard to meet the increase in demand for Claude, and our subscriptions weren't built for the usage patterns of these third-party tools," wrote Boris Cherny, Head of Claude Code at Anthropic, in a post on X. "Capacity is a resource we manage thoughtfully and we are prioritizing our customers using our products and API."

The company also reportedly sent out an email to this effect to some subscribers. However, it's not certain whether subscribers to Claude Team and Enterprise plans will be similarly affected. We've reached out to Anthropic for further clarification and will update when we hear back.

To be clear, it will still be possible to use Claude models like Opus, Sonnet, and Haiku to power OpenClaw and similar external agents, but users will now need to opt into a pay-as-you-go "extra usage" billing system or use Anthropic's application programming interface (API), which charges for every token of usage rather than allowing open-ended usage up to certain limits, as the Pro and Max plans have allowed so far.

The technical reality, according to Anthropic, is that its first-party tools like Claude Code, its AI vibe coding harness, and Claude Cowork, its business app interfacing and control tool, are built to maximize "prompt cache hit rates" - reusing previously processed text to save on compute.
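Prompt caching of this kind generally works by reusing computation for the longest unchanged prefix of a request. The toy sketch below illustrates the general technique only (it is not Anthropic's actual cache design, and all names in it are made up): a harness that keeps its long system prompt stable gets near-total cache reuse, while one that injects per-turn state near the top of the prompt invalidates the cached prefix on every call.

```python
# Toy model of prefix-based prompt caching (illustrative only; not
# Anthropic's real implementation). A request can reuse cached compute
# for the longest prefix it shares with a previously processed request.

def cacheable_fraction(prev_tokens: list[str], new_tokens: list[str]) -> float:
    """Fraction of the new request covered by the shared prefix."""
    shared = 0
    for a, b in zip(prev_tokens, new_tokens):
        if a != b:
            break
        shared += 1
    return shared / len(new_tokens) if new_tokens else 0.0

# A first-party harness keeps the long system prompt stable and only
# appends new turns, so almost the whole request hits the cache.
stable = ["sys"] * 900 + ["turn1"]
stable_next = ["sys"] * 900 + ["turn1", "reply1", "turn2"]

# A harness that injects per-turn state (timestamps, tool output) near
# the top of the prompt breaks the shared prefix on every request.
mutated_next = ["sys-turn2-state"] + ["sys"] * 899 + ["turn1", "reply1", "turn2"]

print(cacheable_fraction(stable, stable_next))   # ~0.998: nearly all cached
print(cacheable_fraction(stable, mutated_next))  # 0.0: full recompute
```

Under this model, every fully uncached request costs the provider roughly as much compute as a brand-new conversation, which is consistent with Cherny's claim that third-party usage patterns are hard to serve sustainably at flat subscription prices.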
Third-party harnesses like OpenClaw often bypass these efficiencies. "Third party services are not optimized in this way, so it's really hard for us to do sustainably," Cherny explained further on X. He even revealed his own hands-on attempts to bridge the gap: "I did put up a few PRs to improve prompt cache hit rate for OpenClaw in particular, which should help for folks using it with Claude via API/overages."

Prior to the news, Anthropic had also begun imposing stricter Claude session limits for every 5 hours of usage during business hours (5am-11am PT/8am-2pm ET), meaning that the number of tokens you could send during those sessions dropped. This frustrated some power users who suddenly began reaching their limits far faster than before - a change Anthropic said was to help "manage growing demand for Claude" and would only affect up to 7% of users at any given time.

Anthropic is not banning third-party tools entirely, but it is moving them to a different ledger. The new "Extra Usage" bundles represent a middle ground between a flat-rate subscription and a full enterprise API account.

The response from the developer community has been a mixture of analytical acceptance and sharp frustration. Growth marketer Aakash Gupta observed on X that the "all-you-can-eat buffet just closed," noting that a single OpenClaw agent running for one day could burn $1,000 to $5,000 in API costs. "Anthropic was eating that difference on every user who routed through a third-party harness," Gupta wrote. "That's the pace of a company watching its margin evaporate in real time."

However, Peter Steinberger, the creator of OpenClaw who was recently hired by OpenAI, took a more skeptical view of the "capacity" argument. "Funny how timings match up," Steinberger posted on X. "First they copy some popular features into their closed harness, then they lock out open source."
Indeed, Anthropic recently added some of the same capabilities that helped OpenClaw catch on - such as the ability to message agents through external services like Discord and Telegram - to Claude Code. Steinberger claimed that he and fellow investor Dave Morin attempted to "talk sense" into Anthropic, but were only able to delay the enforcement by a single week.

User @ashen_one, founder of Telaga Charity, voiced a concern likely shared by other small-scale builders: "If I switch both [OpenClaw instances] to an API key or the extra usage you're recommending here, it's going to be far too expensive to make it worth using. I'll probably have to switch over to a different model at this point."

"I know it sucks," Cherny replied. "Fundamentally engineering is about tradeoffs, and one of the things we do to serve a lot of customers is optimize the way subscriptions work to serve as many people as possible with the best model."

The timing of the crackdown is particularly notable given the talent migration. When Steinberger joined OpenAI in February 2026, he brought the "OpenClaw" ethos with him. OpenAI appears to be positioning itself as a more "harness-friendly" alternative, potentially using this moment as a customer acquisition channel for disgruntled Claude power users.

By restricting subscription limits to their own "closed harness," Anthropic is asserting control over the UI/UX layer. This allows them to collect telemetry and manage rate limits more granularly, but it risks alienating the power-user community that built the "agentic" ecosystem in the first place.

Anthropic's decision is a cold calculation of margins versus growth. As Cherny noted, "Capacity is a resource we manage thoughtfully." In the 2026 AI landscape, the era of subsidized, unlimited compute for third-party automation is over. For the average user on Claude.ai, the experience remains unchanged; for the power users running autonomous offices, the bell has tolled.

After multiple plan changes, flight disruptions, and prolonged delays, Mumbai's cruise professionals have barely managed to return home in time for the Easter weekend. The closure of key airports in West Asia has severely disrupted travel, making it difficult for hospitality staff to reach ships operating in the Caribbean and Europe.

A Gorai resident with a decade of experience in the cruise industry said seafarers have not witnessed such uncertainty since the pandemic. "Many cruises have scaled down operations in Europe. Those working on Dubai routes are being sent home within months. Companies have also halted advance joinings," he said.

Industry professionals added that with changing airspace dynamics, European deployment on cruises has gradually declined. Frequent flight rescheduling has led to uncertain joining and sign-off dates, forcing crew members to extend their time onboard beyond planned contracts. "Since flights were cancelled, my reliever couldn't reach the ship, and I had to stay an extra month. With a four-year-old daughter and my wife alone at home, it was difficult," said Rakesh Zolar, a seafarer who is now back in Mumbai and fishing with his brothers until his next contract.

Many are still waiting for assignments, delayed due to limited flight availability. "Direct flights are too expensive, so companies avoid them. Now we are either asked to share the cost or wait till the situation stabilises," said Julian Collins, another Gorai-based crew member, who returned from Singapore in December.

Rossi D'Souza, a Gorai resident, said, "My cousin was to return to Mumbai on March 29, but his reliever couldn't reach the ship. He has now missed Easter with his family. While we worry about his safety, we hope things improve soon."

A Starship launch tower stands at the Starbase launch site at SpaceX's South Texas facility, February 6, 2026. /VCG

SpaceX CEO Elon Musk said on Friday that the company's next Starship test flight will take place in May and not April as originally scheduled. Musk posted on social media platform X that the next flight of Starship's V3 vehicle is four to six weeks away, or in the first two weeks of May. Earlier, he had said the first flight would take place in April.

SpaceX's debut of the V3 Starship iteration has been delayed for months as the company has packed dozens of upgrades into the vehicle to make it more reliable and suitable for NASA missions, including landing on the moon under the Artemis program. Starship, the company's next-generation rocket, is designed to be fully reusable and carry far larger payloads than its Falcon rocket. SpaceX's previous Starship test launch, its 11th, was in October.

The company has confidentially filed for a US initial public offering, setting the stage for what could become the largest stock market debut on record. The Starbase, Texas-headquartered firm is targeting a potential valuation of more than $1.75 trillion.

Global markets were thrown into turmoil after President Donald Trump delivered a prime-time address on the Iran war that failed to provide a clear path forward. Oil prices surged immediately following the speech, with Brent crude jumping nearly 5% to $105 per barrel as uncertainty gripped investors. At the same time, U.S. futures dropped sharply. The Dow fell 1%, the S&P 500 declined 1.1%, and the Nasdaq slid 1.4%, signaling broad concern across financial markets. International markets reacted just as strongly. Japan's Nikkei index dropped 1.9%, one of the first major indicators of global investor sentiment after U.S. trading hours.

The reaction was driven largely by what the speech did not include. Trump declared the war a success but acknowledged that U.S. involvement would continue for at least two to three more weeks. He did not outline a concrete exit strategy or provide clear assurances about stabilizing the situation. That lack of clarity sent shockwaves through markets already sensitive to the conflict.

Energy prices remain at the center of the issue. The national average for gas has climbed to $4.06 per gallon, up significantly from pre-war levels of around $2.90. Trump addressed the spike directly, blaming Iran for the increase. "This short-term increase has been entirely the result of the Iranian regime launching terror attacks," Trump said, pointing to strikes on oil tankers and regional infrastructure.

The Strait of Hormuz remains a critical flashpoint. Roughly 20% of the world's oil supply moves through the waterway, which has been heavily disrupted since the conflict began, per the Daily Mail. Despite its importance, Trump offered little reassurance about reopening the route. Instead, he shifted responsibility to other nations. "The countries of the world that receive oil through the Hormuz Strait must take care of that passage," he said. He went further, urging allies to take direct action. "Go to the Strait and just take it," Trump added, suggesting other countries should lead efforts to secure the route. That stance raised concerns among analysts, who expected clearer leadership from the United States.

The speech also avoided several key topics. There was no mention of deploying U.S. ground troops or a detailed plan involving NATO allies. That omission added to the uncertainty surrounding the administration's next steps. Meanwhile, military activity in the region continues to increase. Additional U.S. naval forces, including amphibious assault ships and thousands of troops, are being deployed to the Middle East. Allies in the region are also weighing their options. The United Arab Emirates has reportedly considered sending forces to help secure the Strait of Hormuz, while some European nations remain hesitant to escalate involvement.

The market response highlights the broader impact of the conflict. Investors are reacting not just to the war itself, but to the lack of a clear resolution.

Anthropic says subscriptions for Claude (including Claude Code) will stop covering usage on third-party tools like OpenClaw starting April 4 at 12pm PT. The company is framing the move as a capacity-management step, effectively limiting how far paid Claude plans can be consumed through external integrations. For teams relying on OpenClaw with a Claude Code subscription, this changes the cost and availability of everyday workflows. Instead of having subscription coverage extend into that third-party environment, usage will no longer be covered once the cutoff hits. Third-party agent tooling can create unpredictable demand spikes, since developers may run multiple tasks, long-running agent loops, or higher-volume workloads than a provider expects from native product interfaces. Anthropic's decision suggests it's prioritizing more predictable utilization of model capacity. It also underlines a broader pattern in AI platforms: vendors increasingly draw boundaries around what "subscription" includes, especially as usage shifts from direct API calls to agentic tools and marketplaces. After the cutoff, OpenClaw-based usage may require new billing arrangements or different tooling strategies, depending on what Anthropic and OpenClaw make available. The next practical question is how OpenClaw will handle the transition for existing users -- whether it offers alternative plans, changes routing to other supported models, or updates pricing for Claude-backed workloads. Users will likely want to verify how their current integration billings will behave after April 4 at 12pm PT.


The opinions expressed by columnists are their own and do not necessarily represent the views of Townhall.com.

Most film buffs know the trivia: the word "Mafia" was never uttered in "The Godfather," after real-life crime figures pressured filmmakers to avoid the term. Whatever their internal conflicts, mob leaders were united on one point: the Mafia didn't exist. That same instinct is now on display in Silicon Valley, as AI companies compete for government contracts while downplaying the ideological frameworks shaping their decisions.

In a recent Wired article, Anthropic President Daniela Amodei, sister of CEO Dario Amodei, sought to disclaim Anthropic's association with Effective Altruism (EA), saying, "I'm not the expert on effective altruism... My impression is it's a bit of an outdated term." So, there we have it. Ms. Amodei is no expert, or, in the parlance of the mob itself, "she don't know nothing."

Curiously, Ms. Amodei's self-exoneration fails to mention that she is married to Holden Karnofsky, one of the most publicly prolific apostles of EA and a top technical advisor for Anthropic. It doesn't take a prompt to Grok to understand why super-woke Anthropic aims to downplay its fealty to EA as it pursues government contracts from the anti-woke Trump Administration.

So far this is proving difficult. While Dario managed to delete his social media posts calling Trump a "feudal warlord," he soon returned to form by protesting the president's chip deal with China, asserting it was "like selling nuclear weapons to North Korea and then bragging." This is a fairly eye-opening broadside against a figure Anthropic publicly claims to support. If the leaders of Anthropic are driven to cloak their connection to a fringe movement that owes its current status to billions of dollars in donations from Anthropic's inner circle, how nervous should the rest of us be?
The answer to that depends on whether you believe, as EA does, that technology, including AI, should be controlled by a handful of elites for the "good of mankind" and to "protect global security, democratic institutions, and human welfare." This "good" includes going all-in for climate change, open borders, and vastly more spending on government social programs, while opposing the use of AI for specific applications related to national defense, criminal surveillance, and border security.

Claude got its foot in the door by practically giving itself away to the federal government for a mere $1, quickly embedding in national security workflows. But that access is now colliding with reality. Anthropic is in a direct standoff with the Pentagon after refusing to support key military and surveillance uses, triggering a government-wide cutoff attempt that has since spilled into a high-profile court fight. For the moment, the courts have sided with Anthropic: in late March, a federal judge blocked the administration's attempt to label the company a national security risk and cut off its use across federal agencies.

The ruling fits a familiar pattern. A Biden-appointed judge blocks a core national security decision by a duly elected president, not on narrow grounds, but by substituting her own judgment for his. That's not a procedural check. It's judicial overreach into decisions voters already made. Americans elected a president to run national defense and enforce the law, not to have those decisions second-guessed by judges and Silicon Valley ideologues. Yet that's exactly where we are. And it still dodges the real issue: Anthropic is trying to control how the government uses its own tools.
Given Anthropic's core opposition to the very specific promises President Trump made to the American people to ensure the nation's safety and prosperity, as well as to combat woke attempts to limit free speech, perhaps the time has come for the administration to bid adieu to Anthropic in exchange for AI partners who support the mandate Trump received from voters. Even if some users feel Claude is technologically superior to its competitors, whatever advantages it may offer are worthless if they don't advance the president's policies. By withholding its product based upon the current whims of a far-left ideology, Anthropic is able to exercise a veritable veto over the executive branch, an outrageous and chilling proposition.

The White House isn't oblivious to Anthropic's attempt to obfuscate its woke lineage and EA pedigree, nor to the restrictions it is placing on contractors to comport with EA's guiding principle of a purported "good" measured by progressive material standards. Former FCC policy advisor Nathan Leamer is correct in noting that EA is "a governing philosophy that is entirely built on godless progressive ideas." These are not principles that will make America great again, no matter by which name they are called.

Gerard Scimeca is chairman and general counsel for CASE, Consumer Action for a Strong Economy, a free-market consumer advocacy group he co-founded.

Tens of thousands of Irish holidaymakers have begun travelling to Spain for the first major weekend of the summer - but they may encounter strikes by ground staff that have already triggered significant chaos across 12 principal airports. Reports suggest waits of up to five hours at baggage reclaim areas this morning.

With Spain's three primary transport unions organising a sequence of industrial action throughout the Easter period, travellers (currently predominantly on domestic services) have been landing without their belongings at popular destinations including the Canaries and Malaga. The disruption is expected to extend throughout the weekend and into the following week as the unions select the busiest period of the year thus far to exert maximum pressure on Spain's largest ground handling firm.

Irish passengers could face a 'baggage lottery' at impacted airports including Barcelona-El Prat, Madrid-Barajas, Ibiza, Palma de Mallorca, Gran Canaria, Tenerife, Fuerteventura, Bilbao, Lanzarote, Alicante, Valencia, and Malaga. The Groundforce industrial action affected these airports for 24 hours on Good Friday (3 April), with additional walkouts scheduled over the Easter period. Strikes are timetabled for Monday, Wednesday, and Friday during three busy windows: 5am-7am, 11am-5pm, and 10pm-midnight, reports Cork Beo.

Nevertheless, there are methods to navigate the disruption - and the travel specialists at UK firm Send My Bag, who provide door-to-door baggage shipping from Ireland, have outlined four recommendations to ensure your luggage reaches the same destination, at the same time, as you do. Darren Johnston at Send My Bag states: "These strikes have hit at the worst possible time for families! When baggage handlers walk out, it creates a massive backlog.
Even if your flight lands on time, you could be looking at several hours of waiting at a standstill carousel, or worse, your bags being left behind entirely as ground crews struggle to keep up with the volume of holiday traffic." Darren recommends passengers implement these precautions:

Reports today indicate some travellers at the principal Spanish airports are experiencing delays of up to four to five hours to retrieve their luggage from carousels, with further accounts of baggage vanishing altogether.

Glen Anderson has been brokering trades in private company shares since 2010, back when the number of institutional investors focused on the late-stage private market could be counted on two hands. Today, he says, there are thousands.

As president of the investment bank Rainmaker Securities, which focuses solely on private securities markets and facilitates transactions in roughly 1,000 stocks, Anderson has a front-row seat to one of the largest, most nail-biting moments in the history of the secondary market. And right now, he suggests, the narrative has three main characters: Anthropic, OpenAI, and SpaceX. The upshot: the storyline is more complicated than the headlines suggest.

Anderson's read on Anthropic is consistent with what Bloomberg reported earlier this week: demand for the company's shares has become almost insatiable. Bloomberg quoted Ken Smythe, founder and CEO of Next Round Capital, saying that buyers had indicated to his outfit that they had $2 billion of cash ready to deploy into Anthropic, even as roughly $600 million in OpenAI shares that investors are trying to sell haven't found takers. Anderson sees something similar at Rainmaker. "The hardest stock to source in our marketplace is Anthropic," he told TechCrunch yesterday afternoon from his Miami home. "There's just no sellers."

Part of what turbocharged that demand, Anderson argues, was Anthropic's very public standoff with the Department of Defense -- a turn of events that initially seemed like bad news for the company but has wound up becoming a gift. "The app got more popular, people rallied around the company as kind of a hero, taking on big government," he said. "I think it amplified the story and made it even more differentiated from OpenAI." That distinction is becoming increasingly meaningful to investors navigating a market where, for years, the prevailing logic was to bet on everyone. Anderson no ...

That silence cracked when the Environmental Protection Agency and the Department of Health and Human Services announced they will finally monitor drinking water for microplastics and pharmaceutical contaminants. The move, another win for the "Make America Healthy Again" movement, places these substances on a draft of the agency's "Sixth Contaminant Candidate List," a procedural step that could eventually force water utilities to filter them out. But for a public already skeptical of federal assurances, the announcement raises a troubling question: Why did it take so long to admit what independent science has been saying for years? And what can the EPA actually do once pharmaceuticals and microplastics turn up in drinking water across the country?

A December 2024 study published in the Journal of Pharmaceutical Sciences found that active pharmaceutical ingredients (APIs) in drinking water are not a myth but a measurable reality worldwide. Researchers documented APIs in treated wastewater, groundwater and tap water, concluding that conventional treatment processes are simply not equipped to remove these compounds. Among the most concerning findings: pharmaceutical residues promote antibiotic-resistant bacteria, bio-accumulate in the food chain and disrupt endocrine systems. The study specifically names nanotechnology, microalgal treatment and reverse osmosis as promising alternatives, but notes these remain underutilized.

Meanwhile, a systematic review published in the Journal of Xenobiotics identified 39 different estrogenic compounds across water bodies in 59 countries. Concentrations ranged from 0.002 to more than 10 million nanograms per liter. Estrone, estradiol and ethinylestradiol, the synthetic hormone found in birth control pills, topped the list. These compounds were detected not just in wastewater effluent but in rivers, lakes, surface waters and drinking water sources.
"The presence of APIs in water resources poses a significant threat not only to aquatic organisms but also to human health," the pharmaceutical study authors wrote. That threat includes endocrine disruption, a condition where synthetic chemicals mimic or block natural hormones, confusing the body's regulatory systems. Endocrine disruption does not announce itself with a single symptom. It manifests as metabolic dysfunction, reproductive disorders, thyroid imbalances and neurodevelopmental issues. When estrogenic compounds enter the body through drinking water, even at low concentrations, they can bind to hormone receptors and alter gene expression.

The second mechanism, gut dysbiosis, receives less attention but may be equally damaging. Microplastics, which have been discovered inside human tissues and across the planet from ocean depths to Arctic ice, act as physical irritants and chemical sponges in the gastrointestinal tract. They alter microbial communities, damage intestinal lining and create chronic inflammatory states.

Together, these two pathways form a hidden engine of modern chronic disease. Hormone disruption impairs metabolic signaling. Gut dysbiosis undermines immune function and nutrient absorption. Drinking water, the one substance no human can avoid, becomes a delivery system for both.

EPA Administrator Lee Zeldin framed the announcement as a family safety issue. "I can't think of an issue that hits closer to home for American families than the safety of their drinking water," he said. But monitoring is not the same as regulating. And regulation without enforcement of advanced treatment standards leaves the underlying problem intact.

Seven governors from states including New Jersey and Michigan, along with 175 environmental and health groups, filed a legal petition late last year demanding action. Thursday's announcement responds to that pressure but stops short of mandating the filtration upgrades that independent research says are necessary.
