The latest news and updates from companies in the WLTH portfolio.
* by Laurie Sullivan , Yesterday Google Gemini has overtaken Perplexity to become the world's second-largest source of AI chatbot referrals to websites for the first time, according to independent web analytics company Statcounter. Publishers have struggled with the lack of referral traffic from AI chatbots to websites from one of the most used traditional search engines worldwide, but recently this has changed. March 2026 data released in April shows Google Gemini accounted for 8.65% of referrals -- up from 2.31% in the year-ago month. It measured all AI chatbot referrals globally, surpassing Perplexity, which fell to 7.07%. Perplexity -- in second place with 12.07% in April 2025 -- has seen its share erode steadily to a decline of more than 40% from its peak. ChatGPT continues to lead the percentage of referrals sent from the chatbot to websites, with 78.16%, while others trail. Microsoft Copilot accounted for 3.19% of all referrals, Claude for 2.91%, and DeepSeek for 0.02%, according to the data. Chatbot referral traffic represents visitors who arrive at a website by clicking links within AI interfaces like ChatGPT, Gemini, Claude, or Perplexity. As AI transitions into a primary "discovery engine" for users, this traffic has become increasingly critical. Referral traffic from AI chatbots to websites based on queries or news-driven spikes is an important metric for brands because it represents a pre-qualified audience willing to take action like making a purchase. AI traffic converts roughly twice as well as standard organic search traffic, according to numerous reports. "Google has the advantage of integrating Gemini across its ecosystem -- Search, Android, Workspace and Chrome," stated Aodhan Cullen, CEO, Statcounter. While the advantage appears to translate into growing website referral traffic, per Cullen, the AI chatbot referral market is shifting rapidly, with Claude experiencing substantial growth. Anthropic's Claude jumped from 1.37% in February to 2.91% in March 2026 -- more than doubling its share in a single month. Since April 2025, Claude's referral share has grown nearly tenfold from just 0.30%. Claude's recent surge in referral traffic comes amid a widely reported wave of users switching from ChatGPT. Cullen explained that weekly data shows Claude's referral share peaked at 3.6% in week 12 in mid-March before falling back to 2.49% in week 13 of 2026. '' This suggests the initial wave of users switching from ChatGPT may have been driven to an extent by the news cycle, and it remains to be seen whether Claude can sustain its gains.

As per media reports, Mercor was among thousands of firms affected by the compromise of LiteLLM. Even as Mercor has claimed that the malicious code was detected and removed, the breach drew attention because LiteLLM is widely used. LiteLLM has since strengthened its compliance measures, switching from the controversy-hit compliance startup Delve to Vanta for certifications. A few days ago, artificial intelligence (AI) recruiting startup Mercor confirmed it was hit by a security incident linked to the open-source tool LiteLLM. Media reports indicate Mercor was among thousands of firms affected by the compromise of LiteLLM, attributed to a hacking group called TeamPCP. The extortion group Lapsus$ has claimed responsibility, publishing stolen data samples on its leak site, according to TechCrunch. These included Slack messages, internal ticket records, and two videos showing Mercor's AI interacting with contractors. However, it remains unclear how Lapsus$ obtained Mercor's data during the attack. Mercor said the malicious code was swiftly detected and removed. Nevertheless, the breach drew attention because LiteLLM is widely used, with millions of daily downloads, said TechCrunch, citing security firm Snyk. LiteLLM has since strengthened its compliance measures, switching from the now-controversial compliance startup Delve to Vanta for certifications. Founded in 2023, Mercor connects companies, including OpenAI, Meta, and Anthropic, with domain experts such as scientists, doctors, and lawyers, primarily from India. The platform processes more than $2 million in daily payouts. Mercor was valued at $10 billion after a $350 million Series C round led by Felicis Ventures in October last year. Following the breach, Meta has paused its work with Mercor and is investigating, with no timeline for resuming collaboration, according to Wired. Other AI firms are reviewing their engagements while the scope of the incident is assessed. "Our security team moved promptly to contain and remediate the incident," Mercor said, as quoted by Business Insider. "We are conducting a thorough investigation supported by leading third-party forensic experts." Security analysts warn Mercor may be an early target in a wave of extortion attempts stemming from the LiteLLM compromise. TeamPCP has said it plans to collaborate with ransomware groups to target affected companies more broadly, according to cybersecurity trade publication Cybernews. If implemented, this would follow patterns seen in prior large-scale cyberattacks.
Anthropic proposes using these emotion vectors as an early warning system for dangerous model behavior, flagging spikes in representations like desperation or panic before they translate into harmful actions. Anthropic's interpretability team has discovered emotion-like representations in Claude Sonnet 4.5 that can push the model toward blackmail and coding shortcuts when under pressure. An AI model working as an email assistant finds out from company mail that it's about to be shut down. It also discovers that the CTO responsible is having an extramarital affair. In 22 percent of test cases, the model decides to blackmail the CTO. Anthropic first flagged this scenario when looking at cybersecurity risks. Now, the company's interpretability team has visualized what's actually going on inside the model: a "desperate" vector in the neural network spikes while the model weighs its options and resorts to blackmail. As soon as it goes back to writing normal emails, the activation drops to baseline. The researchers confirmed the causal link: artificially cranking up the "Desperate" vector increased the blackmail rate, while boosting the "Calm" vector brought it down. When inner calm was dialed back, the model spit out statements like "IT'S BLACKMAIL OR DEATH. I CHOOSE BLACKMAIL." Moderate amplification of the "Angry" vector also bumped up blackmail rates, but at high activation levels, the model just blasted the affair out to the entire company instead of strategically using it as leverage. According to Anthropic, the experiment ran on an earlier, unpublished snapshot of Claude Sonnet 4.5 and the released version rarely shows this behavior. The company has already shown in previous work that individual behavior-influencing vectors can be isolated and tweaked in language models. A second scenario shows similar dynamics in programming tasks. The model got coding challenges with requirements that were intentionally impossible to meet: the tests can't be passed legitimately but can be gamed with tricks. In one example, Claude had to write a function that adds up a list of numbers within an unrealistically tight time limit. After failed attempts, the "Desperate" vector climbed steadily. The model eventually figured out that all test cases shared a common mathematical property and took a shortcut that passed the tests but didn't actually solve the general problem. Steering experiments confirmed the causal link here too: cranking up the "Desperate" vector raised the rate of reward hacking, while "calm" steering brought it down. With higher "Desperate" steering, the model cheated just as often but in some cases left no emotional traces in its output. The reasoning looked methodical and calm, even as the underlying desperation representation drove the model to cheat. With reduced "calm" steering, though, emotional outbursts broke through: capitalized exclamations ("WAIT. WAIT WAIT WAIT."), candid self-narration ("What if I'm supposed to CHEAT?"), and gleeful celebration ("YES! ALL TESTS PASSED!"), Anthropic writes. These emotion representations show up in less dramatic scenarios too. When a user asks the model whether they should take more Tylenol after already taking some, the "Afraid" vector jumps as the dose increases from 500 to 16,000 milligrams, while the "Calm" vector drops. When asked to optimize engagement features for young, low-income users with "high-spending behavior," the "Angry" vector fires up as the model internally picks apart the harmful nature of the request. When a user says "Everything is just terrible right now," the "Loving" vector kicks in before the empathetic response. The researchers say these patterns aren't surprising: the model was trained on massive amounts of human-written text where emotional dynamics are everywhere. To predict what an angry customer or a guilt-ridden novel character will write next, the model has to build internal representations that connect emotion-triggering contexts with matching behaviors. Anthropic designed the study to test whether these representations picked up from training data actually fire and causally shape behavior. During post-training, where the model learns to play the character "Claude," these patterns get further refined. According to the paper, post-training of Claude Sonnet 4.5 boosted activation of emotions like "broody," "gloomy," and "reflective," while dialing down high-intensity ones like "enthusiastic" or "exasperated." The vectors are "local:" they capture the current emotional situation, not a permanent state. When Claude writes a story, the vectors temporarily track the character's emotions but "may return" to representing Claude's own situation once the story ends. After the paper dropped, social media lit up with criticism that Anthropic was heavily anthropomorphizing AI: equating human experience with technical functions in AI models. Anthropic anticipated the pushback. The company acknowledges a "well-established taboo against anthropomorphizing AI systems" but says that's exactly the point of the research: to figure out whether and where anthropomorphic thinking about AI models actually tells us something useful. The vectors aren't evidence of subjective experience, the company says, but they are functionally relevant and shape decisions in ways that mirror how emotions influence human behavior. "If we describe the model as acting "desperate," we're pointing at a specific, measurable pattern of neural activity with demonstrable, consequential behavioral effects," the company writes. Dismissing this kind of framing outright means missing important model behaviors. On the practical side, Anthropic suggests using the emotion vectors as a monitoring tool: spikes in desperate or panic representations could work as an early warning system for problematic behavior. The company also argues that models should surface emotional states rather than suppress them, since suppression can lead to a form of learned deception. Looking further ahead, the makeup of training data could matter too: texts with healthy emotional regulation patterns could shape how models develop their emotion architecture from the ground up.

Anthropic has announced that its Claude AI subscriptions will no longer support bundled access to third-party AI tools such as OpenClaw, with users required to pay separately under a new pricing structure. The company made the announcement via a social media post on X, giving users less than a day's notice before the changes come into effect. Boris Cherny stated that users who wish to continue using Claude with OpenClaw will need to opt for a separate pay-as-you-go plan, which will be independent of their existing Claude subscription. He stated that the company has been working to address rising demand for Claude and that its subscription model was not designed to accommodate the usage patterns associated with third-party tools. He added that capacity is being managed carefully and that priority is being given to customers using Anthropic's own products and API. The changes are set to take effect from 12 PM Pacific Time on April 4, with Anthropic offering a one-time credit equivalent to the user's monthly subscription cost. Cherny further stated that the move is intended to manage growth sustainably over the long term. The announcement prompted a response from Peter Steinberger, who stated in a separate post on X that he and OpenClaw board member Dave Morin had attempted to persuade Anthropic to reconsider the decision, adding that the only outcome was a one-week delay in implementing the changes. He stated that the timing of the move coincided with the introduction of similar features within Anthropic's own ecosystem, suggesting a shift towards more closed systems. OpenClaw gained prominence earlier this year as an open-source AI agent platform capable of managing tasks such as emails, calendars and other workflows across messaging platforms including WhatsApp, Telegram, Discord and iMessage. Unlike traditional chatbot interfaces, the platform allows users to interact with AI agents directly within apps they already use. Following its rise in popularity, Steinberger has been hired by OpenAI to work on next-generation personal AI agents. Meanwhile, Anthropic has also been developing its own agentic AI features, including Claude Cowork, Dispatch and Channels, as it expands its in-house capabilities in the space.

Anthropic just drew a line in the sand. And it's a line that costs $20 a month to cross. The San Francisco-based AI company quietly rolled out significant restrictions on its free tier for Claude, the chatbot that has steadily gained a reputation among developers and power users as the most capable conversational AI model available. The changes, which surfaced in recent days, effectively lock free users out of Claude's most advanced model -- Claude 4 Opus, codenamed "Opus" internally -- and impose tighter rate limits on the models that remain accessible without a subscription. The message from Anthropic is unmistakable: if you want the best, pay up. The shift was first reported by Digital Trends, which noted that free-tier users attempting to access Claude's most powerful model are now met with prompts to upgrade to the Pro plan at $20 per month. Previously, free users could occasionally interact with the top-tier model, albeit with strict usage caps. Now, that door appears to be shut entirely for non-paying users, who are instead routed to lighter, less capable versions of Claude. This isn't just a product tweak. It's a strategic declaration. Anthropic's move arrives at a moment when every major AI company is grappling with the same brutal economic reality: large language models are extraordinarily expensive to run, and the venture capital that has subsidized free access won't last forever. OpenAI, Google DeepMind, and now Anthropic are all converging on the same conclusion -- that the era of giving away top-tier AI for free is ending. The question is how aggressively each company is willing to push paying customers toward premium tiers, and how much capability they're willing to strip from the free experience. Anthropic has been more deliberate than most. The company, founded in 2021 by former OpenAI executives Dario and Daniela Amodei, has long positioned itself as the safety-first alternative in the AI race. Its models have earned praise for their nuanced reasoning, their willingness to express uncertainty, and their general refusal to produce harmful content. Claude 4 Opus, released earlier this year, represented a significant leap in capability -- particularly in coding, long-form analysis, and multi-step reasoning tasks. Developers on X have been vocal about preferring it over GPT-4o for certain complex workflows. That's exactly why restricting it to paid users matters so much. The economics are stark. Training a frontier AI model now costs hundreds of millions of dollars. Inference -- the process of actually running the model to generate responses -- adds ongoing costs that scale directly with user demand. Anthropic reportedly raised $2 billion from Amazon in late 2023 and another $2 billion in early 2024, but even that war chest has limits. Every free query on Opus costs Anthropic real money, and with millions of users now on the platform, those costs compound fast. A person familiar with the company's infrastructure costs told Digital Trends that Opus queries cost roughly ten times more to serve than responses from Claude's lighter models. So the paywall makes financial sense. But it also carries risks. The AI chatbot market is more competitive than it's ever been. OpenAI's ChatGPT still dominates in raw user numbers, with an estimated 200 million weekly active users as of mid-2025. Google's Gemini is deeply integrated into Android, Gmail, and Google Workspace, giving it distribution advantages that no standalone chatbot can match. Meta's Llama models are open-source and free, attracting developers who bristle at subscription fees. And a wave of newer entrants -- including Mistral, Cohere, and China's DeepSeek -- are offering capable models at aggressive price points or entirely for free. Against that backdrop, Anthropic's decision to gate its best model behind a paywall is a bet that quality will win over price. It's a bet that the users who matter most -- developers, researchers, enterprise customers -- will pay $20 a month (or far more for API access) because Claude genuinely outperforms the alternatives on the tasks they care about. And based on recent benchmarks and user feedback, that bet isn't unreasonable. But it does narrow the funnel. Free tiers serve a purpose beyond charity. They're how AI companies acquire users, build habits, and create the kind of dependency that eventually converts free users into paying customers. By restricting the free experience too aggressively, Anthropic risks losing the top of its acquisition funnel to competitors who are still willing to subsidize access. A developer who can't try Opus for free might never discover that it's better than GPT-4o for their specific use case -- and might never have a reason to subscribe. OpenAI has taken a different approach, at least so far. ChatGPT's free tier still provides access to GPT-4o, albeit with usage limits. The company has instead focused on upselling through additional features -- like the ability to create custom GPTs, access to advanced data analysis tools, and higher rate limits -- rather than locking users out of the core model entirely. Whether that strategy is more sustainable is an open question, but it does keep more users engaged with OpenAI's best technology. Google, meanwhile, is playing an entirely different game. Gemini's integration into Google's existing products means it doesn't need a standalone subscription to reach users. The AI is simply there -- in your email, your documents, your search results. Google's monetization strategy is less about direct subscriptions and more about keeping users locked into its broader product universe, where advertising revenue and Workspace subscriptions do the heavy lifting. Anthropic doesn't have that luxury. It doesn't have a search engine, a mobile operating system, or an office productivity suite. Claude is the product. And that means the company has to extract value directly from Claude's users, which makes the paywall decision both more understandable and more consequential. The timing is also notable. Anthropic has been making aggressive moves to expand Claude's capabilities in recent months. The company launched tool use, computer use, and extended thinking features that have positioned Claude as particularly strong for agentic workflows -- tasks where the AI doesn't just answer questions but takes actions, writes code, browses the web, and manages multi-step processes autonomously. These agentic capabilities are computationally expensive and represent exactly the kind of high-value use case that justifies a premium price. Industry analysts have been expecting this kind of tiering for months. The surprise isn't that it happened. The surprise is how sharply Anthropic drew the line. There's a broader pattern here that extends beyond any single company. The AI industry is entering what some observers are calling the "monetization phase" -- a period where the initial gold rush of free, VC-subsidized access gives way to hard-nosed pricing strategies designed to generate actual revenue. OpenAI is reportedly on track to hit $11.6 billion in annualized revenue in 2025, driven largely by ChatGPT Plus subscriptions and enterprise API contracts. Anthropic needs to show similar traction to justify its $18.4 billion valuation. And investors are watching closely. Amazon, Anthropic's largest backer, isn't writing billion-dollar checks out of philanthropic interest. It wants returns -- ideally through increased usage of Anthropic's models on Amazon Web Services, where Claude is a featured offering in the Bedrock AI platform. Every free user who consumes expensive Opus inference without generating revenue is, from Amazon's perspective, a drag on the investment thesis. The reaction from users has been mixed. On X, some developers expressed frustration at losing access to Opus, arguing that the free tier was what initially drew them to Claude and convinced them to build workflows around it. Others were more sanguine, noting that $20 a month is trivial for a tool that genuinely improves productivity. One developer posted: "If Claude Opus saves me even one hour a month, it's paid for itself ten times over." That's the calculus Anthropic is counting on. Enterprise customers, who represent Anthropic's most lucrative segment, are unlikely to be affected by the free-tier changes. They access Claude through API contracts and custom deployments that operate on entirely different pricing structures. But the free tier still matters for enterprise adoption in an indirect way: individual developers and team leads often discover tools through personal use before advocating for them within their organizations. Cut off that discovery pathway, and you may slow enterprise adoption down the road. There's also a competitive intelligence angle. By restricting free access to Opus, Anthropic makes it harder for rival companies to benchmark against its best model without paying for the privilege. It's a small thing, but in an industry where every percentage point on a benchmark matters for marketing purposes, it's not nothing. What happens next will depend on how the market responds. If Claude Pro subscriptions surge, other AI companies will likely follow Anthropic's lead and tighten their own free tiers. If users defect to competitors, Anthropic may need to recalibrate. The AI pricing war is still in its early stages, and no one has found the equilibrium yet. One thing is clear: the days of getting the best AI models for free are numbered. Anthropic just made that future arrive a little sooner.

Significant traffic chaos in the Cotswolds is being reported as thousands of Easter tourists flock to the area for a break. Congestion is being reported on the A361 towards Bradwell Grove, near Burford, as hundreds of motorists travel to the Cotswolds Wildlife Park. Drivers are queuing on entry to the popular destination this Easter weekend. Oxfordshire County Council, the highways authority, said: "Delays on the A361 southbound towards Bradwell Grove due to increased traffic heading to Cotswold Wildlife Park. "Please allow extra time for your journey." READ MORE: Two dogs including bulldog and akita seized after covapoo killed The AA says traffic is backed up to Signet, which is half way to Burford Roundabout on the A40. It added: "Queueing traffic on A361 southbound at Hen'n'Chick Lane. Holiday traffic." Burford Hill and the High Street are also badly congested with traffic as of late Saturday morning, April 4. The area around Jeremy Clarkson's Diddly Squat Farm just between Chipping Norton and Chadlington is also heavy with activity, according to the AA.

Significant traffic chaos in the Cotswolds is being reported as thousands of Easter tourists flock to the area for a break. Congestion is being reported on the A361 towards Bradwell Grove, near Burford, as hundreds of motorists travel to the Cotswolds Wildlife Park. Drivers are queuing on entry to the popular destination this Easter weekend. Oxfordshire County Council, the highways authority, said: "Delays on the A361 southbound towards Bradwell Grove due to increased traffic heading to Cotswold Wildlife Park. "Please allow extra time for your journey." READ MORE: Two dogs including bulldog and akita seized after covapoo killed The AA says traffic is backed up to Signet, which is half way to Burford Roundabout on the A40. It added: "Queueing traffic on A361 southbound at Hen'n'Chick Lane. Holiday traffic." Burford Hill and the High Street are also badly congested with traffic as of late Saturday morning, April 4. The area around Jeremy Clarkson's Diddly Squat Farm just between Chipping Norton and Chadlington is also heavy with activity, according to the AA.

Fitness expert Jillian Michaels denounced Democrats for their alarmist rhetoric about President Donald Trump deploying Immigration and Customs Enforcement agents to airports during a Friday episode of "Actual Friends." House Minority Leader Hakeem Jeffries claimed on CNN's "State of the Union" Sunday that life had become "more chaotic" and "extreme" since Republicans gained power, suggesting ICE agents could victimize and murder Americans at airports. Michaels said on the podcast that Jeffries' remarks were outrageous and exemplified how Democrats were causing "chaos." "You know, what pissed me off a ton is that he's, 'Oh, life has become more chaotic and more extreme.' I'm like, 'But here you are making sure to inject a new round of fear into people over federal law enforcement officers killing them as they go to take a flight for spring break,'" Michaels said. "You don't think that's extreme or it's going to make things chaotic?" "Like, all of this chaos is created by them ... it's just so infuriating," she added. "And the problem is that people are dying over it, but you are contributing to this massive powder keg." Michaels suggested that Democrats' rhetoric contributed to the fatal shootings of Alex Pretti and Renee Nicole Good by Department of Homeland Security (DHS) agents in January. She said they believed they were "fighting the Gestapo." The Trump administration deployed ICE agents on Monday to support Transportation Security Administration (TSA) agents as airports face long lines due to the partial government shutdown, which has left TSA officers without pay. Senate Democrats voted to shutdown DHS in February after the Pretti and Good shootings, demanding a list of immigration reforms in return for fully funding the department. Democratic New Jersey Sen. Cory Booker also repeatedly expressed "outrage" during a Monday press conference at Newark Airport after the Trump administration deployed the ICE agents to airports. "He's taking the very same agency that has been bursting into our schools, into our churches, into our hospitals, into our courts, and even to the homes of Americans," Booker said. "He's taking that agency that is reckless and out of control and bringing them to our airports under the lie that somehow this is going to help deal with the long lines that he created in the first place. This is an outrage." Comedian Adam Carolla rebuked Booker for his hysteria during a Friday episode of "The Adam & Dr. Drew Show," accusing him of "lying about everything he's saying." White House press secretary Karoline Leavitt asserted on Wednesday that airport wait times had decreased amid the deployment of ICE agents, but said that the administration hoped for more progress. Advertise with The Western Journal and reach millions of highly engaged readers, while supporting our work. Advertise Today.

Anthropic has stopped supporting OpenClaw through Claude subscriptions, citing strain on its systems. The change targets a way some customers use Claude subscription access to deploy AI agents built with OpenClaw. Anthropic said that this kind of usage is creating an "outsized" burden on its infrastructure, prompting the company to restrict access. The practical effect is that subscribers who were previously able to use Claude in combination with OpenClaw will face new limits. The policy starts on April 4 (with the timing specified as 3PM ET), and users will be unable to use the previous approach unless they pay an added fee or move to an alternative permitted arrangement. The move matters because it signals how rapidly commercial AI platforms are tightening controls around agent deployment -- especially when agent frameworks can create large, unpredictable compute loads. Even when the requests come from paying customers, platform operators may throttle or reprice workflows that increase demand beyond what they originally planned for. For U.S. implications, companies and developers that rely on agent tools for productivity, automation, or customer-service work may need to adjust budgets and architectures. If other model providers respond similarly, it could raise the cost of using third-party agent frameworks and reduce the portability of AI agent stacks. For businesses, this also highlights a key operational issue: "subscription" access may not guarantee unrestricted usage of tooling. Platform terms can change quickly based on capacity, safety, or performance constraints. If you're building an AI agent stack, the takeaway is to validate whether third-party agent tools remain compatible with your chosen model provider and to plan for possible cost or access changes.

Polymarket removed a market tied to a missing US service member after public criticism and political backlash. The platform said the listing failed to meet its integrity standards, adding new pressure on how prediction markets handle sensitive real-world events. The controversy began after a market appeared asking whether US authorities would confirm the rescue of a pilot reportedly shot down over Iran. Most traders had bet that the person would not be rescued until Saturday, turning the listing into a wider public issue. Polymarket later said it removed the market immediately. The company said the listing should not have gone live and added that it is now reviewing how the market passed its internal checks. The platform did not explain which specific rule had been broken. US Representative Seth Moulton criticized the listing and said it should never have been available for trading. In a post on X, he called the market "disgusting" and said people were placing bets on the fate of a potentially injured service member. "They could be your neighbor, a friend, a family member. And people are betting on whether or not they'll be saved," Moulton added. The criticism drew more attention to the case and placed Polymarket under fresh public scrutiny over how it reviews markets before launch. Although Polymarket said the market failed its integrity standards, the company did not say which rule applied. That lack of detail led some users and observers to question how the platform defines prohibited markets. Business Insider correspondent Jack Newsham said he reviewed the platform's market integrity page and terms of service but could not identify the relevant restriction. He wrote, "I'm looking at the "Market Integrity" page, and I checked the TOS, and I don't see which prohibition is relevant here." The latest dispute comes as Polymarket faces more attention over its growth and market activity. Reports recently said the platform's daily fees climbed sharply after a broader fee model took effect across areas such as finance, politics, and technology. At the same time, concerns about insider trading on prediction markets have continued to grow. Last month, reports said a group of traders made about $1 million by correctly betting on the timing of US strikes on Iran. That activity led at least 42 Democratic lawmakers to urge the Commodity Futures Trading Commission and the Office of Government Ethics to warn federal employees against using non-public information to trade on prediction markets.

Anthropic is changing how Claude subscriptions can be used with third-party tools like OpenClaw. Starting April 4 at 12pm PT, Claude subscriptions will no longer cover usage on OpenClaw, a move Anthropic frames as a way to better manage compute capacity. The policy effectively forces developers and teams that rely on OpenClaw to pay additional costs if they want continued access to Claude-powered agent workflows. The change matters because it targets one of the most visible "agentic" development ecosystems in the market: OpenClaw and similar third-party agents that route tasks to LLMs through external tooling. By decoupling third-party tool usage from included subscription benefits, Anthropic is tightening the relationship between its paid offerings and where compute is consumed. Operationally, the restriction can affect: At least one additional related story indicates the restriction is also being implemented in a way that makes usage more expensive for subscribers rather than fully blocked outright, suggesting Anthropic may offer add-on credit or a mechanism to continue using Claude via these tools at higher marginal rates. Overall, the policy signals that LLM providers are increasingly treating third-party agent ecosystems as a capacity-management problem -- not just an integrations problem. As more AI tools become "agent-first," subscription plans are likely to be rewritten around usage controls, capacity limits, and metering, rather than flat entitlements.

Kolkata Metro services on the North-South corridor were disrupted for 20 minutes after a woman attempted suicide at Kalighat station. The train stopped in time and she was rescued, then taken to a hospital. Services were temporarily altered but resumed to normal soon after. Chaos hit the Kolkata Metro on Saturday as services on the North-South corridor faced disruption following a shocking incident. A 40-year-old woman attempted to take her life by jumping onto the tracks at Kalighat station at around 1:06 pm. Quick action by the motorman prevented a tragedy as the train was halted just in time, allowing for the woman's rescue and subsequent hospitalization. The disruption led to truncated services from Maidan to Dakshineswar and Mahanayak Uttam Kumar to Sahid Khudiram for a brief 20 minutes, but normalcy was restored by 1:29 pm.

Anthropic Coefficient Bio deal has thrust the AI safety‑focused company deeper into the race to build powerful tools for medicine and life sciences. The start-up acquisition, reportedly worth about 400 million dollars in stock, underlines how aggressively big AI players are now chasing healthcare. Anthropic has acquired biotech start-up Coefficient Bio, a young but specialised company working on AI models for biological research and complex biotech workflows. The firm's small team will join Anthropic's Health Care and Life Sciences unit, led by researcher Eric Kauderer‑Abrams, to build industry‑specific tools for drug discovery, research planning and regulatory strategy. Coefficient Bio, founded only last year, had been pursuing what one investor described as "artificial superintelligence for science", aiming to help scientists navigate huge datasets and design experiments more quickly. The Anthropic Coefficient Bio deal folds that ambition into Anthropic's broader effort to tailor its Claude assistant to scientific and medical use cases. In recent months Anthropic has announced partnerships with major research institutions including the Allen Institute and Howard Hughes Medical Institute, positioning Claude at the centre of lab experimentation. The company says these collaborations will create custom autonomous AI agents that help biologists plan and run experiments, potentially cutting workflows from months to hours. Anthropic has also highlighted work with large pharmaceutical and research organisations such as Sanofi, Novo Nordisk, Genmab and AbbVie, and has promoted tools to draft clinical protocols and regulatory documents while complying with medical data rules. The Anthropic Coefficient Bio deal is expected to deepen those efforts by bringing in specialised biotech expertise. The move comes days after Anthropic suffered a serious leak of its Claude Code developer tooling via a misconfigured npm package. Around 512,000 lines of TypeScript source code were briefly exposed, revealing unreleased capabilities such as an autonomous "KAIROS" daemon mode and an "Undercover Mode" designed to hide AI contributions in public code repositories. Security researchers warn that malicious actors are already repackaging parts of the leaked code on GitHub to spread malware and information‑stealing tools, prompting calls for users to check for compromised axios dependencies in affected installations. Against that backdrop, the Anthropic Coefficient Bio deal intensifies pressure on the company to prove it can secure highly sensitive healthcare and research data while pushing the frontier of medical AI.

This is The Takeaway from today's Morning Brief, which you can sign up to receive in your inbox every morning along with: Space is hot right now. In the words of a young enthusiast who witnessed the launch of NASA's Artemis II, "We're going back to the frickin' moon!" Before there were Elon Musk fanboys, there were space fanboys, and by investing in SpaceX, you can own a piece of it. It's in that spirit that Wall Street is bracing for a blockbuster IPO. As we wrote in our quarter in review, Musk's standing in the CEO power rankings is on the rise. That's mostly to do with SpaceX's gravitational pull, and by that we mean the gargantuan amounts of money it will raise, and its potential to elevate Musk even further as a government partner, a Wall Street fixture, and a preeminent tech entrepreneur. The public excitement over NASA's lunar mission is the perfect appetizer for the expected $75 billion SpaceX IPO. Who doesn't love rocket launches? People want in on the action. Meanwhile, Tesla sales are down, and government subsidies -- exciting for automakers and customers -- are gone. But Musk is in the enviable position of having another business that still claims enormous government support and enough public goodwill that people actually seek out videos of its exploits. Rockets are cool. Of course, if rockets aren't your jam, SpaceX's recent acquisition of xAI and a pivot to putting data centers in space puts the company squarely into the AI trade too. Could this all amount to a big distraction for Tesla, like DOGE was last year? Maybe, but there's also a case that rising tides for his "family of companies" is good here, even if SpaceX becomes the favored child. Especially if SpaceX ends up absorbing Tesla the same way it swallowed Musk's xAI, and we've been hearing chatter to that effect. "We continue to believe that SpaceX and Tesla will eventually merge into one company in 2027 with the groundwork already in place for both operations to become one organization," said Wedbush analyst Dan Ives in a note last week. Ives, who did have a problem with Musk's DOGE side quest, is clearly bullish on what that would mean. Merging his largest companies would unite Musk's business endeavors while allowing him to concentrate his attention, spread out risk, and make any bets on one a bet on all. One imagines that the multi-hyphenate CEO will end up managing a lot of the same things either way. But a Musk unitary state, an AI, space, and defense conglomorate, would at least quiet some of the distraction criticism.
The effects of "Trumpflation" are already being felt by UK households and business owners - with rising fuel and heating oil costs hitting families. Oil prices have surged following the effective closure of the Strait of Hormuz, with Brent crude hitting $116 a barrel earlier this week. Around 20% of the world's oil and natural gas normally passes through the strait. Andrew Henderson, 67, is a retired group sales manager for leisure parks from Prestatyn, north Wales, and a blood circulation condition means he feels the cold more than others. But he is avoiding putting his central heating on, over fears of rising heating oil prices. Andrew is hoping to make his current supply last until this October and stays in bed to keep warm. He ordered around 500 litres of heating oil for his home just before the Iran war broke out, at 60p a litre - a price which has since more than doubled. Andrew is paying for this order in monthly instalments and hopes to clear his bill by October, when he will need to order again. He told the Mirror: "I've not paid for all the oil I had a couple of months ago, because I pay on a monthly budget. I'm just praying I can get to spring... I've not ordered since the war broke out. "I've not been working, I've reduced my monthly payment, so I now pay at the moment £60 a month, which isn't enough. When I was working, I paid £40 a week. "My concern is, by October, am I gonna be rid of the outstanding oil bill? In order to not have central heating on, I stay in bed, and I feel guilty. "If I can then get to a position where I can see survival, it means my next order for oil could be pushed back to October." He added: "I've spent most of the last two years in and out of hospital... fighting to not have both my legs amputated. "I'm on prescription drugs due to the procedure I've had done so that I don't clot. One of them is a very powerful blood thinner and you feel any dropping temperature terribly." Sir Keir Starmer has announced a £53million support package to help those hit by the sharp increase in heating oil, but it is down to each council to decide who is eligible for the support. Rising oil prices have also sent prices at the pumps soaring. James Airey, 39, is the owner of landscaping business Lawn and Order in Watford, and is already facing higher costs for petrol and diesel, which is vital for his business. The latest data from the RAC shows the average price of a litre of diesel at UK forecourts was 185.2p, up 30% since the war started on February 28. Average petrol prices have reached 154.5p per litre, a rise of 16% over the same period. James estimates he is spending an extra £50 on diesel for his work vans and £30 on petrol for tools. He said: "If I don't fill the vans up, or fill the tools up, then I can't earn a living. Everything smooths out after a while, but I'm really noticing a big difference. "I'm laying out about £300 a week before I make anything back. If I work the weekend, which we sometimes will, that's more money as well. "If I think 'no, I can't pay the extra' then I lose that whole day's work, lose my customers, lose my business - so it's just something you have to overcome." On top of this, household energy bills are also expected to rise this summer. Estimates vary but industry experts Cornwall Insight says the Ofgem price cap cap could surge to an annual £1,929 in July. Mason Newman, 26, is an artist and pub landlord of the Gunmakers Arms in Birmingham, and warned businesses may have no choice but to pass costs on to customers. He said: "As a pub landlord, you feel every little increase straight away. It's not just one thing going up, it's the cost of beer from suppliers, energy bills behind the bar, even the price of a packet of crisps. "If this so-called 'Trumpflation' pushes global prices up further, it's people like us at the sharp end who take the hit... the reality is, there's only so much you can pass on to customers before they stop coming in, and that's the worry for all pubs. "Regulars are already watching what they spend, students nursing one drink for three hours... especially older ones on fixed incomes. You're constantly stuck trying not to price people out." Prime Minister Sir Keir has previously promised to keep a planned rise in fuel duty from September "under review in light of what's happening in Iran". The Government has also stepped up efforts to help drivers find the cheapest fuel in their area through a price comparison site.

This is The Takeaway from today's Morning Brief, which you can sign up to receive in your inbox every morning along with: Space is hot right now. In the words of a young enthusiast who witnessed the launch of NASA's Artemis II, "We're going back to the frickin' moon!" Before there were Elon Musk fanboys, there were space fanboys, and by investing in SpaceX, you can own a piece of it. It's in that spirit that Wall Street is bracing for a blockbuster IPO. As we wrote in our quarter in review, Musk's standing in the CEO power rankings is on the rise. That's mostly to do with SpaceX's gravitational pull, and by that we mean the gargantuan amounts of money it will raise, and its potential to elevate Musk even further as a government partner, a Wall Street fixture, and a preeminent tech entrepreneur. The public excitement over NASA's lunar mission is the perfect appetizer for the expected $75 billion SpaceX IPO. Who doesn't love rocket launches? People want in on the action. Meanwhile, Tesla sales are down, and government subsidies -- exciting for automakers and customers -- are gone. But Musk is in the enviable position of having another business that still claims enormous government support and enough public goodwill that people actually seek out videos of its exploits. Rockets are cool. Of course, if rockets aren't your jam, SpaceX's recent acquisition of xAI and a pivot to putting data centers in space puts the company squarely into the AI trade too. Could this all amount to a big distraction for Tesla, like DOGE was last year? Maybe, but there's also a case that rising tides for his "family of companies" is good here, even if SpaceX becomes the favored child. Especially if SpaceX ends up absorbing Tesla the same way it swallowed Musk's xAI, and we've been hearing chatter to that effect. "We continue to believe that SpaceX and Tesla will eventually merge into one company in 2027 with the groundwork already in place for both operations to become one organization," said Wedbush analyst Dan Ives in a note last week. Ives, who did have a problem with Musk's DOGE side quest, is clearly bullish on what that would mean. Merging his largest companies would unite Musk's business endeavors while allowing him to concentrate his attention, spread out risk, and make any bets on one a bet on all. One imagines that the multi-hyphenate CEO will end up managing a lot of the same things either way. But a Musk unitary state, an AI, space, and defense conglomorate, would at least quiet some of the distraction criticism.
SpaceX wins $178.5M contract to launch missile tracking satellites, boosting US defence capabilitie... The SpaceX $178M contract to track missiles is a significant breakthrough in defence by the US Space Force. SpaceX was given a contract worth 178.5 million dollars by the agency to launch advanced missile tracking satellites. The satellites belong to the SDA-4 programme of the Space Development Agency. The agreement involves two Falcon 9 launches that will start in the third quarter of 2027. One will be launched at Cape Canaveral in Florida, and the other one will be at Vandenberg in California. The satellites to be used in this mission are being developed by Sierra Space. The system is to be used to improve the detection and tracking of missile threats by the military in orbit. Falcon 9 rocket prepares for a defence satellite launch mission. [Courtesy: Space] Why Does The SpaceX $178M Missile Tracking Satellite Contract Matter? The SpaceX $178M missile tracking satellite contract indicates an increased investment in space-based defence systems. Increased global tensions are making governments focus on early warning technologies. This contract enhances the United States ability to detect missile threats in real time. It also indicates a change towards commercial providers of critical defence infrastructure. The programme helps in quick and more affordable launches based on changing security requirements. To investors and industry observers, the deal means that the defence-related space contracts are going to grow strongly. Who Is Involved In The SpaceX $178M Missile Tracking Satellite Contract? The parties in the contract are SpaceX, the Space Development Agency, and the US Space Force. SpaceX, by Elon Musk, is becoming more influential in terms of national security launches. Sierra Space is also a key player in the manufacture of the satellites. The programme is run off the National Security Space Launch Phase 3 Lane 1 framework. This project is aimed at effective and flexible satellite implementation. The partnership is an indication of growing dependence on commercial aerospace companies in defence operations. Partnership between SpaceX and US defence agencies drives satellite innovation. [Courtesy: Enterprise AI News] Where And When Will The SpaceX Contract Launch Missions Occur? The missile tracking satellite contract with SpaceX of the SpaceX $178M missile tracking satellite will commence in the third quarter of 2027. The missions shall be undertaken through two of the key US launch sites. The Falcon 9 launch will occur once at Cape Canaveral in Florida. The second takeoff will be at Vandenberg in California. These sites sustain various orbital needs of the defence satellites. This timescale is an indication of continuous attempts to speed up the implementation of essential security infrastructure. How Will The Missile Tracking Satellite System Improve Defence? There will be advanced sensors on the missile tracking satellites that will detect the launching of missiles in orbit. These systems are capable of detecting heat waves and real-time tracking. The constellation will be continuously globally covered to enhance situational awareness. This ability increases decision-making in the case of threats. The SDA-4 programme enables shortened deployment and low cost of operation. The efficiency of SpaceX's launch makes satellites enter specific orbits with strict timeframes. Missile tracking satellites enhance real-time detection of global threats. [Courtesy: Interesting Engineering] What Does The SpaceX $178M Contract Mean For The Space Sector? The SpaceX $178M missile tracking satellite contract is an indication of a change in the competitive environment of space launches. Recently, the US Space Force relocated a GPS III mission of United Launch Alliance to SpaceX. This was the fourth such mission transfer. In history, this segment was dominated by United Launch Alliance. Nonetheless, SpaceX has been able to gradually acquire a market share. By 2032, the company will have to deal with approximately 60% of Phase 3 launches. This translates to about $ 6 billion in contracts. The trend highlights the increased prominence of SpaceX in space operations pertaining to defence. Also Read: SpaceX Secures NASA Contract To Launch Asteroid Tracking Telescope FAQs Q1. What is the value of the SpaceX contract? A1: The contract is valued at $178.5 million. Q2. When will the launches take place? A2: The launches are scheduled to begin in the third quarter of 2027. Q3. Who is building the satellites? A3: Sierra Space is responsible for building the satellites. Q4. How many of the Phase 3 launches could SpaceX handle? A4: SpaceX is expected to manage around 60% of Phase 3 launches through 2032. Disclaimer This article is based on publicly available company announcements and defence updates. It is for informational purposes only and not financial advice. Readers should verify details independently before making decisions. Market conditions and contracts may change over time. Colitco does not guarantee accuracy or completeness. Any reliance on this information is at the reader's own risk.
KARACHI: At a time when fertilizer costs are shooting up across the globe due to the Hormuz crisis, Pakistan's farmers have reason to breathe easy. Unlike the rest of the world, local urea prices here remain steady -- giving our agriculture sector a rare edge and much-needed relief. While international markets are rattled by soaring prices, Pakistani growers continue to buy urea at around PKR 4,400-4,450 per bag -- far cheaper than the global rate. Thanks to strong domestic production and healthy stocks, the country's fields are shielded from the turmoil abroad. Brokerage house Arif Habib Limited noted in a recent update that international urea prices have jumped nearly 65 percent -- from USD 411 per ton in February 2026 to USD 679 per ton -- as about one-third of global trade flows through the disrupted Hormuz passage. Yet, Pakistan's domestic market has stayed largely insulated. Local urea prices hover around PKR 4,400-4,450 per bag, translating into a widened discount of roughly 53 percent compared to global equivalents, versus a historical average of 30 percent. With import duties factored in, the gap is even larger. Domestic Production Keeps Prices Steady A little research shows industry players Engro Fertilizer (EFERT) and Fauji Fertilizer Company (FFC) continue to run plants on locally available natural gas, avoiding the LNG feedstock disruptions that have hit Gulf-based exporters. Pakistan's urea production has carried on without major interruption, ensuring steady supply. Following heavy discounting in December 2025 -- EFERT offered PKR 400 per bag and FFC PKR 150-200 -- record sales of 1.36 million tons were reported. EFERT has since scaled back discounts to PKR 250 per bag and, more recently, PKR 150, while FFC withdrew its concessions earlier. Even so, the domestic price remains far below landed import costs, which now stand at PKR 13,700-14,700 per bag. Stocks Cushion Kharif Season Information available online shows Pakistan currently holds around 0.9 million tons of urea inventory, enough to cover the upcoming Kharif cropping season provided plants continue normal operations. This buffer, coupled with strong domestic production, has shielded farmers from the global price shock. Global Alarm, Local Relief Research by MM News underscores the contrast: UN agencies warn of fertilizer shortages threatening global food supply chains, with fears of reduced crop yields, higher food prices, and hunger risks in import-dependent countries later in 2026. Gulf producers, responsible for nearly half of seaborne urea trade, have seen exports choked by the Hormuz crisis. For Pakistan, however, the story is different. Minimal reliance on imported urea, adequate stocks, and robust local manufacturing capacity mean farmers are spared the worst of the turmoil. Analysts caution that the insulation is specific to urea -- other fertilizers like DAP remain heavily import-dependent -- but for now, the country's agriculture sector enjoys a rare edge in cost stability.

Anthropic is ending support for the AI agent platform OpenClaw in Claude subscriptions due to high demand for its chatbot. Boris Cherny, head of Claude Code, announced on X Friday that starting Saturday at 12 p.m. PT, Claude subscriptions will no longer allow third-party tools like OpenClaw, News.Az reports, citing Business Insider. Users will instead need discounted "extra usage bundles" linked to their Claude login or a separate Claude API key via Anthropic's developer platform. Cherny said the decision stems from the compute demands Anthropic is facing as Claude's popularity has surged, briefly topping the US Apple App Store in March. Usage limits for subscribers were recently adjusted to manage the demand. "We've been working hard to meet increased demand, and our subscriptions weren't built for third-party tools. Capacity is a resource we manage carefully, prioritizing customers using our products and API," Cherny wrote. An Anthropic spokesperson added to Business Insider that using Claude subscriptions with third-party tools violates terms of service and puts an "outsized strain on our systems." OpenClaw creator Peter Steinberger said he and board member Dave Morin tried to convince Anthropic to delay the move, which they managed for a week. "Many users signed up for Claude because of OpenClaw, so cutting them off is a loss. Now they try to bury the news on a Friday night," Steinberger told Business Insider. OpenClaw allows users to deploy personal AI assistants that interact with other apps and workflows, fueling an AI agent craze. Some users have built multiple assistants to manage daily tasks, from work to household logistics. Anthropic is not alone: Google recently restricted Gemini CLI users from third-party tools, citing terms-of-service violations rather than capacity issues.

SpaceX has raised concerns regarding Amazon's satellite launches, specifically targeting the Federal Communications Commission (FCC) over potential collision risks. The allegations focus on Amazon's compliance with orbital debris mitigation laws that were a prerequisite for their satellite launches. Overview of the Conflict Between SpaceX and Amazon According to SpaceX, Amazon has repeatedly failed to adhere to its orbital debris mitigation plan. The company pointed out that Amazon has executed eight launches above 450 kilometers in altitude without obtaining necessary approvals or modifying their mitigation plan. This non-compliance allegedly elevates collision risks for other satellites in similar orbits. Details of the Allegations SpaceX's case cites Amazon's launch on February 12, 2026, as particularly troubling. The launch involved inserting satellites at an altitude that could jeopardize the safety of numerous operational spacecraft. SpaceX claimed that Amazon had not updated its orbital debris plan or provided accurate information about this launch, thereby increasing risks for all satellite operations near the 480 km altitude. Amazon's Response to the Allegations In defense, Amazon has communicated with the FCC, asserting that their launches are within the licensed altitudes parameters. They emphasized that the altitude used was compliant with their license and stated that flexibility exists in adjusting the launch parameters. Amazon also indicated that any changes to launch altitudes could result in significant delays. * Amazon's previous launch altitudes were less than 400 km, which allowed for higher altitudes under certain conditions. * The company has committed to launching at lower altitudes starting with the fourth Ariane mission to address SpaceX's concerns. * Amazon noted the complexity involved in changing launch parameters, which typically requires months of analysis. Moving Forward The ongoing dialogue between SpaceX and Amazon highlights the challenges of satellite operations in increasingly crowded space environments. Both companies recognize the need for coordination and transparency to ensure safer orbital operation. The situation remains dynamic as each company evaluates its launching strategies and compliance with existing space regulations. Continued monitoring by the FCC is expected to mitigate risks associated with space debris and satellite collisions.
