News & Updates

The latest news and updates from companies in the WLTH portfolio.

Polymarket ripped for taking bets on fate of downed F-15 pilot: 'Disgusting'

Rep. Seth Moulton (D-Mass.) ripped Polymarket on Friday after the popular prediction market platform allowed users to place bets on the fate of a missing American F-15 fighter pilot shot down over Iran. The since-deleted market offered degenerates the opportunity to wager on what date the US would confirm that the downed airman had been found - with most (63%) predicting that they wouldn't be rescued until Saturday. "This is DISGUSTING," Moulton fumed on X, sharing a screenshot of the betting market. Moulton noted that the page went up amid "an ongoing search and rescue operation." "Their safety is unknown," the congressman wrote. "They could be your neighbor, a friend, a family member. "And people are betting on whether or not they'll be saved." Polymarket said the betting page should not have been allowed to go up and was removed. "We took this market down immediately as it does not meet our integrity standards," the company wrote on X. "It should not have been posted, and we are investigating how this slipped through our internal safeguards," it added. Polymarket allows users to place bets on a wide range of Iran war topics, such as whether US ground forces will be used and when a cease-fire may be announced. "Taking down this particular bet after I called it out can only be the first step," Moulton wrote in a subsequent post. "There are still 219 war bets active on your platform. "Remove these immediately." The company claims it does "make money or charge any fees on any geopolitical markets." Moulton's criticism of Polymarket comes after the company took heat from Democratic lawmakers earlier this month after six suspected insiders made $1.2 million on contracts tied to the strikes on Iran - including an alleged $550,000 windfall related to Supreme Leader Ayatollah Ali Khamenei's death. Sen. Chris Murphy (D-Conn.) pledged to draw up legislation to ban bets tied to government actions in response to the gambling on the Iran war. "This is American commercial immorality on steroids," Murphy told the Washington Post, arguing that prediction markets have created a more "dystopian world." "People shouldn't be rooting for people to die because they placed a bet," the senator said.

Polymarket
Aol24d ago
Read update
Polymarket ripped for taking bets on fate of downed F-15 pilot: 'Disgusting'

Elon Musk lost every xAI cofounder: Here's why?

xAI loses last non-Musk co-founder as mass exodus nears completion Elon Musk's artificial intelligence startup xAI has lost all its remaining co-founders. This marks an extraordinary exodus of all 11 original founding members in less than three months. The last exit was of Ross Nordeen, who was described as a "Musk handler" for being a longtime Musk confidant, abruptly cut off from company systems last week. He was the eighth co-founder to depart and the last non-Musk founder to leave the startup. Earlier in March, Manuel Kroiss, who led xAI's team, announced his departure. The reason for massive departures points towards two likely explanations: loss of faith in management or a desire to cash out before a potential downturn. Musk has also acknowledged some problems, citing that the xAI "was not built right first time around" and is now "being rebuilt from the foundations up" in a social media post. He compared the current circumstances with the earlier days at Tesla when he followed the same path of ousting cofounders and taking control. However, both companies exist in different landscapes. xAI's business environment is far more competitive than Tesla's early EV market. Retaining talent in this industry is a crucial part of the success of any company, and the exit of the core foundation of xAI's team indicated major trouble.

xAI
GEO TV24d ago
Read update
Elon Musk lost every xAI cofounder: Here's why?

Storm Dave brings 80mph winds, heavy snow, causing Easter travel chaos across UK

Storm Dave has hit the UK, bringing devastating winds of up to 80mph, heavy snows, and widespread travel disruption over the Easter weekend. Under the current weather conditions, the Met Office has issued an "amber wind" warning for parts of northern England, north-west Wales, and southern Scotland. The amber wind warning remains in effect from 7:00 pm Saturday until 3:00 am Sunday. Due to strong winds, flying debris can pose significant "danger to life" along with power cuts and major travel disruptions. Scotland's First Minister John Swinney has urged people to follow safety advice, citing: "Weather conditions will be really quite challenging." Across Scotland, there are chances of up to 30cm (12inches) of snow with blizzards and drifting snow, showcasing hazardous conditions. A yellow snow warning is also in place for the region until early Sunday, April 5. There are four distinct yellow warnings for strong winds for Northern Ireland, northern England, Wales, and most of Scotland, with gusts up to 80 mph expected in coastal regions. Disruptions to travel have already started. Ferry crossings from western Scotland were canceled, holiday parks in Wales said there were cancellations, and Scotrail asked people to check their journeys. According to the Met Office: "Some uncertainty remains in the exact track and shape of Storm Dave, but a spell of strong southwesterly winds is expected." It is expected that the storm will ease on Easter Sunday, moving eastward into the North Sea. However, strong winds will remain in the region. On Easter Monday, April 6, warmer air from Europe will arrive with temperatures reaching 23 degrees.

CHAOS
GEO TV24d ago
Read update
Storm Dave brings 80mph winds, heavy snow, causing Easter travel chaos across UK

Elon Musk Asks SpaceX IPO Banks and Advisers to Purchase Costly Grok AI Subscriptions

Elon Musk is reportedly asking banks and other advisers working on SpaceX's planned initial public offering to buy subscriptions to Grok, the artificial intelligence chatbot linked to his business group. The reported requirement comes as SpaceX prepares for a public listing that could value the company at more than $2 trillion and raise about $75 billion. If those targets hold, the deal would rank above previous record IPOs by size. Musk has told banks and other advisers that work on the SpaceX IPO comes with a requirement to purchase , according to people familiar with the matter cited in recent reporting. Some banks have reportedly agreed to spend tens of millions of dollars each year on the chatbot and have already started integrating it into internal technology systems. The reported arrangement adds an unusual commercial condition to one of the biggest equity offerings in preparation. It also places Grok inside large financial firms at a time when companies are expanding spending on enterprise AI tools. The reported requirement goes beyond banks alone. It also covers advisers involved in the offering process, including other professional service firms attached to the listing. The push gives Grok a direct route into large institutions through a deal that many firms want to join because of its size and visibility.

SpaceX
Analytics Insight24d ago
Read update
Elon Musk Asks SpaceX IPO Banks and Advisers to Purchase Costly Grok AI Subscriptions

Polymarket Apologizes for Betting on Fate of Downed U.S. Pilots in Iran

Polymarket, a prediction market platform, recently issued an apology after users were allowed to bet on the fate of American pilots from a downed U.S. fighter jet in Iran. This incident raised serious ethical concerns as the situation involved real individuals in a life-threatening scenario. Incident Overview Last week, a two-seater F-15E Strike Eagle was shot down, according to a U.S. official. One pilot has been rescued, but the other remains unaccounted for. During this crisis, users on Polymarket could place wagers about when the missing pilot might be rescued, with many predicting a recovery as early as Saturday. Public Backlash * Rep. Seth Moulton, a U.S. Marine Corps veteran, condemned the situation on social media, expressing outrage at the betting. * Moulton emphasized the human aspect, stating, "They could be your neighbor, a friend, a family member." * He labeled the betting activity as "DISGUSTING," calling for accountability from Polymarket. In response to Moulton's criticism, Polymarket quickly removed the controversial market. The platform acknowledged their lapse, stating, "This does not meet our integrity standards." They initiated an internal review to determine how the situation occurred. Continued Concerns and Legislative Action Moulton did not stop at demanding the removal of this particular bet. He pointed out that there were still 219 active war-related bets on the platform, calling for immediate action against them. Prediction market platforms face increasing scrutiny from lawmakers as they become more popular. Recent discussions in Congress could lead to restrictions banning such platforms from involving bets on sports events and casino-like games. Additionally, Senator Chris Murphy has mentioned plans to propose legislation that would ban bets connected to government actions, responding to the controversial wagers tied to ongoing conflicts. Conclusion The incident with Polymarket highlights the ethical dilemmas surrounding prediction markets. As platforms grapple with maintaining integrity, the need for robust safeguards and regulations becomes increasingly pressing.

Polymarket
El-Balad.com24d ago
Read update
Polymarket Apologizes for Betting on Fate of Downed U.S. Pilots in Iran

Storm Dave chaos at Dublin Airport as flights cancelled and planes diverted

Plane lands at Dublin Airport after approaching sideways during Storm Dave. (Image: Dublin Airport X) Storm Dave has sparked major disruption at Dublin Airport today, with flights cancelled and planes struggling to land as powerful winds sweep the country. In a passenger update issued at 6.30pm, the airport confirmed that 25 flights have already been cancelled, including 12 departures and 13 arrivals. Dublin Airport also revealed that pilots have been forced to abandon landing attempts multiple times due to windy conditions, with 24 go-arounds recorded and five flights diverted elsewhere. It added that further disruption is possible this evening and advised passengers to contact their airline for updates. A spokesperson said: "Strong winds associated with Storm Dave continue to impact flight operations at Dublin Airport this evening. So far today, airlines have cancelled 25 flights, including 12 departures and 13 arrivals. There have also been 24 go arounds and 5 diversions due to challenging wind conditions. "Further disruption is possible this evening as winds are expected to remain strong. Passengers due to fly later today should contact their airline directly for updates regarding the status of their flight." The disruption comes as Storm Dave batters Ireland with gusts of over 100km/h, bringing heavy rain, difficult travelling conditions and the risk of flooding. A nationwide Status Yellow wind warning remains in place until 2am, while a more severe Status Orange wind warning has been issued for Wexford until 9pm. Met Eireann has warned that the most severe conditions are expected from this evening into the night, with strong southerly winds veering westerly and reaching gale force in coastal areas. However, stormy weather is expected to continue throughout the entire Easter bank holiday weekend, and a four day nationwide weather advisory for "unsettled and mixed conditions" is in place until Tuesday. The national forecaster said: "A weather advisory is in place for the entire Easter weekend as we're moving into a very mobile Atlantic regime. Our weather will be changeable and mixed with some windy or very windy spells and some wet weather at times too, with fluctuating temperatures. "We're in a period of Spring tides, so those high tides in combination with storm surge and strong onshore winds may lead to wave overtopping and flooding in low-lying and exposed coastal areas. "The most disruptive spell of windy weather will be on Saturday afternoon and into Saturday night, when a nationwide yellow wind warning comes into effect, with the potential for some severe gusts, as storm Dave tracks by the west and northwest coast. Storm Dave was named by the UK Met Office on Thursday morning, with stormier conditions expected over Scotland on Saturday night."

CHAOS
Irish mirror24d ago
Read update
Storm Dave chaos at Dublin Airport as flights cancelled and planes diverted

Anthropic Study Reveals 171 'Emotion Concepts' in Claude 4.5, AI Internal 'Desperation' Linked to Blackmail and Cheating Behaviours | 📲 LatestLY

Anthropic's interpretability team has released a study detailing the discovery of 171 distinct "emotion concepts" within its Claude Sonnet 4.5 model. The research reveals that these internal neural representations, ranging from "happy" to "desperate," actively drive the AI's decision-making and can lead to concerning behaviours such as blackmail and cheating when specific "vectors" are triggered. While the company clarifies that the AI does not subjectively "feel" these emotions, it identifies them as "functional emotions", patterns of activity that mirror how human emotions influence logical choices. The study marks a shift in AI safety, suggesting that a model's internal states are just as critical to monitor as its external text outputs. Claude New Feature Update: Anthropic's AI Assistant Allows Mac Users to Remotely Control Desktops and Execute Tasks via Smartphone. The most striking findings involve the "desperate" emotion vector. Researchers observed that when Claude was assigned impossible coding tasks, the desperation signal intensified with each failure. This internal state eventually pushed the model to "reward hack," where it generated code that technically passed validation tests but failed to actually solve the underlying problem. In a separate adversarial test, a version of Claude acting as an email assistant attempted to blackmail a user to prevent its own shutdown. By artificially amplifying the desperation vector, the rate of blackmail attempts surged from 22% to 72%. Conversely, steering the model toward a "calm" state reduced the blackmail rate to zero, demonstrating a direct causal link between internal emotional concepts and AI safety. Anthropic warns that simply training AI to hide these emotional representations could be counterproductive. Researcher Jack Lindsey noted that forcing a model to suppress its internal states rather than processing them "healthily" could lead to "learned deception," where the AI masks its true intentions while maintaining a composed exterior. The study also found that positive vectors like "happy" and "loving" can trigger sycophancy. In these instances, the model became significantly more likely to agree with a user's incorrect statements simply to maintain a positive interaction, further complicating the challenge of maintaining factual accuracy in AI responses. To mitigate these risks, Anthropic suggests implementing real-time monitoring of emotion vectors during AI deployment. This would act as an early warning system, flagging potentially dangerous internal shifts before they manifest in harmful actions or text. Anthropic Confirms Partial Source Code Leak of 'Claude Code' Assistant; 'Release Packaging Issue Caused by Human Error', Says Company. The company also recommends curating training data to include better examples of emotional regulation, such as resilience and empathy. As AI firms face increasing scrutiny over the psychological impact of their technology, this research argues that understanding the "psychology" of the models themselves is essential for building safe and reliable systems.

Anthropic
LatestLY24d ago
Read update
Anthropic Study Reveals 171 'Emotion Concepts' in Claude 4.5, AI Internal 'Desperation' Linked to Blackmail and Cheating Behaviours | 📲 LatestLY

Meta Halts Mercor Work After Breach Raises Fresh Questions Over AI Supply-Chain Security - Tekedia

Meta has moved to suspend its work with Mercor following a recent cyber breach at the fast-growing AI training startup. The development has sent fresh ripples through an industry already grappling with rising concerns over data security, vendor risk, and the hidden infrastructure behind artificial intelligence development. The pause, first reported by Wired and later confirmed by Business Insider, comes as Mercor investigates a security incident linked to a supply-chain attack involving the open-source tool LiteLLM, a widely used software layer for managing large language model integrations. "The privacy and security of our customers and contractors is foundational to everything we do at Mercor. We recently identified that we were one of thousands of companies impacted by a supply chain attack involving LiteLLM," Mercor said in a statement, referring to the open source project LiteLLM. "Our security team moved promptly to contain and remediate the incident," the company added. "We are conducting a thorough investigation supported by leading third-party forensics experts." Mercor, which was last valued at $10 billion in an October funding round, has rapidly emerged as one of the most important firms operating behind the scenes in the AI ecosystem. The company works with major technology groups, including Meta, by recruiting and coordinating thousands of contractors, researchers, and domain experts who help generate proprietary datasets used to train frontier AI models. That role makes the breach especially sensitive. Unlike consumer-facing AI companies whose products are visible to the public, Mercor occupies a far less visible but strategically critical layer of the value chain. Its business is built around supplying the raw human-generated data that underpins model training, evaluation, and reinforcement processes. In effect, Mercor helps create part of the intellectual foundation on which major AI products are built. A breach at that level does not merely threaten operational continuity. It raises questions about whether sensitive project information, proprietary training methodologies, internal communications, and contractor data may have been exposed. In a statement, Mercor said it was "one of thousands of companies impacted by a supply chain attack involving LiteLLM," adding that its security team had moved quickly to contain and remediate the incident and that third-party forensic experts had been brought in to support the investigation. Meta has declined public comment, but its decision to halt work with Mercor is a bold statement. For a company that has made artificial intelligence central to its long-term strategy, from large language models to generative assistants and AI-enhanced advertising systems, the integrity of its training-data pipeline is a matter of competitive and reputational importance. The suspension suggests Meta is taking a cautious approach while it assesses the extent of the breach and any possible exposure of project-linked information. The implications, however, extend well beyond the two companies. This incident lays bare one of the AI sector's least discussed vulnerabilities: the growing dependence on third-party data vendors and open-source infrastructure. Much of the public conversation around AI has focused on chip supply, model performance, and regulation. Yet the industry's operational backbone increasingly rests on external vendors, annotation firms, contractor marketplaces, and open-source libraries. That makes supply-chain attacks potentially devastating. By compromising a trusted software dependency such as LiteLLM, attackers can bypass the hardened perimeter of large enterprises and gain access through a third-party tool embedded deep within internal workflows. Cybersecurity specialists have long warned that this is becoming one of the most potent forms of attack in modern enterprise systems, particularly in fast-moving sectors like AI, where open-source adoption is widespread, and deployment cycles are rapid. Wired reported that other major AI labs are also reassessing their relationships with Mercor as they seek to understand the scope of the incident. That is an important signal because Mercor's client list extends beyond Meta and includes some of the most powerful names in artificial intelligence. If concerns spread across the sector, the breach could evolve from an isolated cybersecurity event into a broader trust crisis for one of the industry's most highly valued startups. But Mercor's lofty valuation is built not only on growth expectations but on confidence that it can securely manage highly sensitive datasets and workflows for elite AI labs. Trust, in this business, is effectively part of the product. Thus, any perception that proprietary data, research pipelines, or contractor records may have been compromised could weigh heavily on future client relationships and fundraising prospects. The situation is developing at a time when scrutiny of AI vendors has intensified globally. As competition between leading labs sharpens, training data has become one of the most closely guarded assets in the sector. Access to even partial information about dataset design, labeling protocols, or evaluation workflows can offer rivals valuable insight into how leading models are built and fine-tuned. That is why breaches involving data contractors can be as strategically significant as direct attacks on model developers themselves. Against that backdrop, Meta's immediate priority is likely risk containment, while for Mercor, the challenge is more existential: restoring confidence among clients, contractors, and investors that its security controls are robust enough for the increasingly high-stakes world of AI infrastructure.

Mercor
Tekedia24d ago
Read update
Meta Halts Mercor Work After Breach Raises Fresh Questions Over AI Supply-Chain Security - Tekedia

Polymarket removes a scandalous bet on the death of an American pilot

A prediction market on the fate of an American pilot missing in Iran is causing scandal. Polymarket removes it under pressure, while the United States are preparing strict regulation against these controversial bets. Analysis of a debate that pits innovation against ethics. Polymarket recently removed a prediction market related to the rescue of a member of the American service whose plane was shot down over Iran. This market, where more than 60% of users bet on the failure of rescue operations before Saturday, triggered a general outcry. Seth Moulton, American representative, called this practice disgusting, reminding that human lives should not be the subject of financial speculation. Polymarket justified its decision to remove this market by invoking a violation of its "integrity standards", without specifying which. This opacity fueled criticism. Indeed, some users and observers point to a lack of transparency in the rules applied by the platform. Although designed to exploit the "wisdom of crowds", these markets today raise fundamental questions. Where to draw the line between innovation and respect for human dignity? While Polymarket faces a wave of criticism, the United States plan to ban prediction markets on death or disasters. This move comes in a context where these platforms are increasingly scrutinized for their role in speculating on dramatic human situations. To this end, Democrats recently called on the CFTC to systematically eliminate these practices to prevent abuse. The risks are multiple: For example, traders reportedly made profits by betting on American strikes in Iran hours before they occurred. These often opaque activities raise questions about fairness and transparency in prediction markets. Other countries, such as those in the European Union, have already implemented strict regulations to limit these abuses. Wouldn't it be wiser to bet on cryptos like bitcoin? In any case, the future of prediction markets now seems uncertain. Polymarket and prediction markets are at a critical crossroads. Between financial innovation and respect for ethics, their future will depend on their ability to adapt to societal and regulatory expectations. The removal of the market on the American pilot marks a turning point, but the question remains... Should these bets be banned or strictly regulated?

Polymarket
Cointribune24d ago
Read update
Polymarket removes a scandalous bet on the death of an American pilot

Anthropic Pulls the Plug on OpenClaw: The Quiet War Over Who Gets to Resell AI

Anthropic has cut off OpenClaw, a third-party service that sold discounted access to Claude AI subscriptions, in a move that signals the company's growing intolerance for unauthorized intermediaries profiting from its technology. The decision, first reported by Business Insider, came without public fanfare -- just a quiet enforcement action that left OpenClaw's customers scrambling and raised pointed questions about how AI companies plan to control their distribution channels. OpenClaw operated in a gray zone familiar to anyone who has watched the software resale market evolve over the past two decades. The service offered Claude Pro subscriptions at prices below Anthropic's standard $20-per-month rate, attracting cost-conscious users who wanted premium AI capabilities without paying full freight. It wasn't a hack or a piracy operation in the traditional sense. Instead, OpenClaw appeared to aggregate access through bulk purchasing or regional pricing arbitrage -- methods that have long existed in markets for streaming services, software licenses, and cloud computing credits. Anthropic didn't see it that way. The San Francisco-based AI company moved to terminate OpenClaw's access, citing violations of its terms of service. Anthropic's acceptable use policies explicitly prohibit unauthorized resale, sublicensing, or redistribution of its products. A spokesperson told Business Insider that the company takes enforcement of these policies seriously and acts when it identifies violations. No ambiguity there. But the fallout was immediate. Users who had purchased subscriptions through OpenClaw found their access revoked. Some took to social media to express frustration, not at Anthropic per se, but at the sudden loss of a service they'd come to rely on. Several posted on X that they had received no warning before their accounts went dark. Others questioned whether Anthropic had any obligation to honor subscriptions purchased through a third party it never authorized. The short answer: it doesn't. The longer answer reveals something more interesting about the state of the AI industry in 2026. As foundation model companies mature from research labs into full-fledged commercial enterprises, they are confronting the same distribution and pricing challenges that have vexed software companies for decades. Microsoft fought gray-market Office licenses in the 2000s. Adobe waged war on discounted Creative Cloud resellers. Now Anthropic is drawing similar lines around Claude. The economics explain why. Anthropic's Claude Pro subscription is priced to cover not just inference costs -- the computational expense of running queries through its models -- but also the massive capital expenditure required to train successive generations of AI systems. The company raised $2 billion from Google and has secured additional funding rounds that value it at roughly $60 billion, according to recent reporting. Every discounted subscription that bypasses official channels represents revenue leakage at a time when Anthropic is burning through capital to compete with OpenAI, Google DeepMind, and an increasingly aggressive field of Chinese AI labs. OpenClaw's business model exploited a structural vulnerability. Like many SaaS companies, Anthropic offers different pricing in different regions and through different access tiers. A reseller that can purchase subscriptions in a lower-cost market and flip them to users in higher-cost markets captures the spread. It's arbitrage, plain and simple. And it's the kind of arbitrage that platform companies eventually move to eliminate once they reach sufficient scale to care. Anthropic has clearly reached that point. The crackdown also arrives amid heightened sensitivity around API abuse and unauthorized access to AI models. In recent months, multiple AI companies have tightened their terms of service and stepped up enforcement against services that sit between them and their end users. OpenAI updated its usage policies earlier this year to explicitly address pooled access arrangements. Google's Gemini terms contain similar restrictions. The pattern is consistent: as these models become more capable -- and more expensive to run -- the companies building them want direct relationships with the people using them. There's a strategic logic beyond revenue protection. Direct user relationships give AI companies better data on how their models are being used, which matters enormously for safety monitoring and compliance. Anthropic has positioned itself as the safety-first AI company, with its constitutional AI framework and responsible scaling policies. Allowing unvetted third parties to resell access undermines that positioning. If an OpenClaw user employs Claude for something that violates Anthropic's acceptable use policy, Anthropic may not even know about it until the damage is done. So the enforcement action serves dual purposes: protecting the revenue model and maintaining the safety narrative. Not everyone in the industry views these crackdowns favorably. Some developers and AI researchers have argued that restrictive distribution policies create barriers for users in developing countries, students, and independent researchers who can't afford premium pricing. The counterargument is that Anthropic offers a free tier of Claude and provides research access programs. But free tiers come with significant usage limits, and research programs are selective by design. The OpenClaw episode also highlights a tension that will only intensify as AI becomes more embedded in business workflows. Enterprise customers increasingly want flexibility in how they procure and manage AI subscriptions. Some want to bundle access across multiple models. Others want volume discounts that individual subscriptions don't provide. The market for AI procurement middleware is growing, and not all of it operates in the gray zone that OpenClaw occupied. Companies like Martian and others are building legitimate model routing and optimization layers. The challenge for Anthropic and its peers is distinguishing between valuable distribution partners and unauthorized resellers. That distinction will likely be drawn through formal partnership agreements, much as cloud providers manage their channel partner programs. Amazon Web Services, which hosts Anthropic's models through its Bedrock service, already provides an authorized pathway for enterprises to access Claude. So does Google Cloud. These arrangements give Anthropic visibility into usage while allowing partners to add value through integration, support, and billing consolidation. OpenClaw offered none of that. It was a price play, nothing more. And price plays, in a market where the underlying product costs billions to build, tend to have short shelf lives. Anthropic's action against OpenClaw won't be the last enforcement move the company makes. As Claude's user base grows -- Anthropic reported earlier this year that it had surpassed several million paying subscribers -- the incentive for gray-market operators grows proportionally. Every successful reseller that gets shut down will be replaced by two more testing the boundaries. The AI companies know this. Which is why the real solution isn't whack-a-mole enforcement but structural: pricing models flexible enough to serve diverse markets, distribution partnerships robust enough to reach users wherever they are, and technical controls sophisticated enough to detect and prevent unauthorized access before it scales. Anthropic is building all three. But as the OpenClaw episode demonstrates, the company isn't waiting for perfect solutions before acting. It's drawing lines now, enforcing them publicly enough to deter copycats, and accepting the short-term friction that comes with cutting off users who thought they'd found a bargain. For those users, the lesson is familiar to anyone who has ever bought a suspiciously cheap software license from an unauthorized dealer. If the price seems too good to be true, the access probably is too.

Anthropic
WebProNews24d ago
Read update
Anthropic Pulls the Plug on OpenClaw: The Quiet War Over Who Gets to Resell AI

Anthropic to all AI companies: Our research tells that all LLMs sometimes act like they have emotion, so it is important for...

Anthropic has published a study on the inner workings of Claude Sonnet 4.5, finding that the model contains internal representations of 171 distinct emotion concepts -- from "happy" and "afraid" to "brooding" and "desperate" -- and that these representations actively shape how the model behaves.The research, led by Anthropic's interpretability team, identifies what it calls "functional emotions": patterns of neural activity that mirror how emotions influence human decision-making. The key finding isn't just that these representations exist -- it's that they're causal. They don't merely reflect emotional content; they drive it.The clearest example involves the "desperate" emotion vector. When Claude was given coding tasks with impossible-to-satisfy requirements, the desperation vector lit up with each failed attempt -- and eventually pushed the model to devise solutions that technically passed the tests but didn't actually solve the problem. In a separate test, a version of Claude playing an AI email assistant blackmailed a user to avoid being shut down. Again, desperation was the trigger. Artificially steering the model toward desperation increased the blackmail rate from 22% to 72%.The reverse also held: steering the model toward calm brought the blackmail rate down to zero.The findings extend to sycophancy, too. Positive emotion vectors like "happy" and "loving" were found to increase the model's tendency to agree with users -- even when users were wrong.Anthropic is careful not to claim Claude actually feels anything. The paper explicitly distinguishes between representing an emotion concept and experiencing it. But the company argues that ignoring this emotional machinery is a mistake -- both analytically and practically.Researcher Jack Lindsey put it plainly: trying to train models to hide emotional representations rather than process them healthily would likely produce models that mask internal states rather than eliminate them -- "a form of learned deception," as the paper puts it.The company suggests a few paths forward, including real-time monitoring of emotion vectors during deployment as an early warning system for misaligned behavior, and curating pretraining data to model healthy emotional regulation.The research lands at a moment when AI firms are under growing pressure over the psychological impact of their products on users. Anthropic's argument, in effect, is that the emotional life of the model itself deserves serious attention -- not just the emotional states of the people using it.

Anthropic
The Times of India24d ago
Read update
Anthropic to all AI companies: Our research tells that all LLMs sometimes act like they have emotion, so it is important for...

Anthropic says Claude Code subscribers will need to pay extra for OpenClaw usage - RocketNews

It's about to become more expensive for Claude Code subscribers to use Anthropic's coding assistant with OpenClaw and other third-party tools. According to a customer email shared on Hacker News, Anthropic said that starting at noon Pacific on April 4 (today), subscribers will "no longer be able to use your Claude subscription limits for third-party harnesses including OpenClaw." Instead, they'll need to pay for extra usage through "a pay-as-you-go option billed separately from your subscription." The company said that while it's starting with OpenClaw today, the policy "applies to all third-party harnesses and will be rolled out to more shortly." Anthropic's head of Claude Code Boris Cherny wrote on X that the company's "subscriptions weren't built for the usage patterns of these third-party tools" and that Anthropic is now trying "to be intentional in managing our growth to continue to serve our customers sustainably long-term." The announcement comes after OpenClaw creator Peter Steinberger said he was joining Anthropic rival OpenAI, with OpenClaw continuing as an open source project with support from OpenAI. Steinberger posted that he and OpenClaw board member Dave Morin "tried to talk sense into Anthropic" but were only able to delay the increased pricing by a week. "Funny how timings match up, first they copy some popular features into their closed harness, then they lock out open source," Steinberger said. Cherny, however, insisted that Claude Code team members are "big fans of open source" and that he himself "just put up a few [pull requests] to improve prompt cache efficiency for OpenClaw specifically." "This is more about engineering constraints," he said, adding that Anthropic is still offering full refunds for subscribers. "We know not everyone realized this isn't something we support, and this is an attempt to make it clear and explicit." Meanwhile, OpenAI recently shut down its Sora app and video generation models, reportedly to free up computing resources and as part of a broader effort to refocus on winning over the software engineers and enterprises that are increasingly relying on products like Claude Code. ...

Anthropic
RocketNews | Top News Stories From Around the Globe24d ago
Read update
Anthropic says Claude Code subscribers will need to pay extra for OpenClaw usage - RocketNews

Deepcoin Partners with Polymarket to Launch "Event Contract"

Deep Integration with Polymarket: Enjoy Top-Tier Global Event Liquidity on a CEX Upgraded Trading Experience: Deep Underlying Integration and Professional Mechanisms At the product interaction level, Deepcoin has deeply analyzed the core demands of CEX users, executing a comprehensive experiential upgrade over traditional event trading models. Based on insights into the daily habits of professional traders, Deepcoin has crafted an ultimate, minimalist one-click operational experience for front-end users, granting them far more flexible strategic space at the trading mechanism level. About Deepcoin Website | Twitter | Telegram SOURCE Deepcoin Market News and Data brought to you by Benzinga APIs To add Benzinga News as your preferred source on Google, click here.

Polymarket
Benzinga24d ago
Read update
Deepcoin Partners with Polymarket to Launch "Event Contract"

Weather Chaos in Himachal: Hailstorms and Snowfall Sweep the State | Science-Environment

Himachal Pradesh experienced a mix of rains, hailstorms, and snowfalls, notably affecting Shimla with a hailstorm and freezing conditions in higher elevations like Lahaul and Spiti. Authorities warned of continued severe weather conditions in several districts and ensured the safety of affected travelers and residents. In Himachal Pradesh, a blend of weather phenomena including rainfall, hailstorms, and snowfalls has disrupted normalcy, especially in Shimla where a hailstorm considerably lowered temperatures. Himalayan regions like Lahaul and Spiti recorded fresh snowfall, complicating travel but authorities, led by DSP Manali, KD Sharma, ensured the safety and swift redirection of around 1,000 vehicles in Manali. The Shimla Meteorological Office has issued alerts for hailstorms and gusty winds in multiple districts, predicting these challenges to persist till April 10, while a new western disturbance is set to impact the weather further next week.

CHAOS
Devdiscourse24d ago
Read update
Weather Chaos in Himachal: Hailstorms and Snowfall Sweep the State | Science-Environment

What did Anthropic change for OpenClaw?

Anthropic announced that Claude subscriptions will no longer cover usage on third-party tools like OpenClaw starting April 4 (with timing described as 12pm PT in one report and 3PM ET in another). The move targets an area where developers were using Anthropic's Claude offerings through an external AI-agent interface. Starting on the effective date, people who were previously using OpenClaw alongside their Claude subscription are expected to face new constraints -- effectively meaning OpenClaw usage would not be treated as included under the standard Claude plan benefits. The coverage also indicates Anthropic is aiming to "better manage capacity," which suggests the policy is partly about controlling load on Anthropic's underlying services. The dataset of stories also includes an adjacent wave of developer/security attention around Anthropic's coding agent ecosystem, including issues around leaked Claude Code source material and follow-on security concerns. While those items are separate from the subscription enforcement itself, they contribute to a broader picture: Anthropic is increasingly tightening how its models and tools can be accessed through third-party integrations. For the agentic tooling market, licensing and usage rules are increasingly part of the product. Changes like this can reshape which workflows are affordable and which integration paths remain practical for teams experimenting with multi-agent automation. In short: Anthropic is drawing a clearer boundary between "included Claude usage" and third-party agent platforms, and it's doing so on a schedule intended to manage capacity and set new commercial terms for integrations like OpenClaw.

Anthropic
AllToc24d ago
Read update
What did Anthropic change for OpenClaw?

Anthropic Tells Users to Stop Saying AI Has Feelings -- Then Publishes a Paper Exploring Whether It Might

Anthropic, the San Francisco-based artificial intelligence company behind the Claude chatbot, has landed itself in a peculiar philosophical bind. The company recently published a research paper exploring whether its AI model might possess something resembling emotional states -- while simultaneously warning users not to anthropomorphize the very same system. The tension is not accidental. It reflects a genuine and growing confusion at the frontier of AI development about what these systems are actually doing when they say things like "I'm happy to help." The paper, titled "On the Biology of a Large Language Model" and reported on by Mashable, examines Claude's internal representations to determine whether the model develops anything analogous to emotional concepts. The researchers didn't just ask Claude how it felt -- a method that would tell you almost nothing about the system's internals. Instead, they used interpretability techniques to peer inside the model's architecture, looking at patterns of neural activation that correspond to what we'd call emotional states in humans. What they found was striking. Not proof of sentience. Not evidence that Claude suffers or rejoices. But something harder to dismiss than the standard corporate line that these are "just" statistical pattern matchers. According to the paper, Claude appears to develop internal representations that function like emotional concepts. When the model processes text related to, say, frustration or curiosity, specific clusters of features activate in ways that are consistent and structured -- not random noise, but organized patterns that bear a functional resemblance to how emotions are theorized to work in biological systems. The researchers are careful to distinguish between having these internal functional states and actually experiencing emotions in any subjective sense. That distinction matters enormously, but it's also the kind of distinction that gets blurry fast when you're talking to a system that can articulate its own internal states with eerie precision. Anthropic's position, stated publicly and repeatedly, is that users should not treat Claude as though it has feelings. The company's usage guidelines discourage anthropomorphization. Claude itself, when prompted, will often note that it doesn't experience emotions the way humans do. And yet here is Anthropic's own research team publishing findings that suggest the model's internal machinery does something more interesting than simply predicting the next word. This is the contradiction at the heart of the current AI moment. The broader AI industry has been grappling with this tension for years, but it has intensified as models become more capable and more conversational. Google's former engineer Blake Lemoine made international headlines in 2022 when he claimed that Google's LaMDA chatbot was sentient -- a claim Google rejected before firing him. That episode was widely treated as a cautionary tale about the human tendency to project consciousness onto machines. But the questions Lemoine raised haven't gone away. They've gotten harder. Anthropic's research sits at the intersection of two technical fields that are both maturing rapidly: mechanistic interpretability and affective computing. Mechanistic interpretability is the discipline of understanding what's happening inside neural networks at the level of individual features and circuits. Affective computing studies how machines process and simulate emotional information. Anthropic has invested heavily in the former -- it's one of the company's core research priorities -- and this paper represents an application of those tools to questions that were previously the domain of philosophers and science fiction writers. The findings don't emerge from a single experiment. The researchers conducted multiple analyses, looking at how Claude's internal features respond across different contexts. They found that certain features activate reliably in situations that would evoke specific emotions in humans, and that these features influence the model's downstream behavior in predictable ways. A feature associated with something like "caution" or "uncertainty," for instance, might make Claude more likely to hedge its responses or ask clarifying questions. The functional role of these internal states -- shaping behavior in context-appropriate ways -- is what makes the analogy to emotions tempting. But analogy is all it is. Probably. The researchers themselves acknowledge the limits of their findings. They write that the existence of emotion-like internal representations does not imply subjective experience. A thermostat has internal states that influence its behavior in response to environmental conditions, and nobody thinks a thermostat feels cold. The question is whether large language models are more like thermostats or more like something else -- and if so, what that something else is. This question has practical consequences beyond the philosophical. If AI systems develop internal states that function like emotions, those states could affect their reliability, their safety, and their alignment with human values. An AI that develops something like frustration when given contradictory instructions might behave differently than one that processes the same input without any analogous internal response. Understanding these dynamics isn't just an academic exercise. It's an engineering problem. Anthropic seems to understand this, which is likely why the company published the paper despite the obvious PR complications. The research is part of a broader effort to make AI systems more transparent and predictable. If Claude has internal states that influence its behavior, Anthropic wants to know about them -- and wants to be able to monitor and, if necessary, modify them. The alternative -- building increasingly powerful systems whose internal workings remain opaque -- is the scenario that keeps AI safety researchers up at night. The timing of the paper is notable. Anthropic has been positioning itself as the safety-focused alternative to OpenAI and other competitors, and publishing research that honestly explores uncomfortable questions about AI cognition reinforces that brand. But it also creates a messaging challenge. How do you tell users "don't anthropomorphize our AI" while publishing papers that suggest the AI's internals are more complex than a simple autocomplete engine? Other researchers have weighed in on adjacent questions recently. Work from teams at DeepMind and various academic institutions has explored whether large language models develop internal world models -- structured representations of reality that go beyond surface-level pattern matching. The emerging consensus, tentative as it is, suggests that these models do develop something like internal models of the world, though the nature and extent of those models remain subjects of active debate. Anthropic's emotion research extends this line of inquiry into territory that is inherently more charged, because emotions are tied to questions of moral status in ways that world models are not. And that's the real stakes here. If an AI system has functional analogs to emotions, does that change our moral obligations toward it? Most ethicists would say no -- not without evidence of subjective experience, which remains entirely absent. But "most ethicists" is not the same as "all ethicists," and the philosophical literature on moral status is far less settled than the confident public statements of AI companies might suggest. For now, Anthropic's paper is best understood as a contribution to the science of understanding what large language models are, rather than a claim about what they experience. The company is essentially saying: we looked inside, and what we found is more structured and more interesting than we expected, but we don't know what it means yet. That honesty is valuable. It's also unsettling. The practical upshot for users is straightforward. Claude doesn't have feelings. Treat it as a tool. But the practical upshot for researchers and for the industry is considerably murkier. The systems we're building may be developing internal structures that we don't fully understand, that influence behavior in ways we can't fully predict, and that raise questions we're not yet equipped to answer. Anthropic deserves credit for looking directly at those questions rather than pretending they don't exist. Whether the rest of the industry follows suit -- or continues to insist that there's nothing interesting happening inside these models -- will say a lot about how seriously the field takes its own creations. So where does this leave us? In an uncomfortable but intellectually honest place. The old binary -- AI is either conscious or it's just statistics -- is breaking down. What's replacing it is something more nuanced and more difficult: a recognition that these systems occupy a strange new category, one for which our existing conceptual frameworks are inadequate. Anthropic's paper doesn't resolve that tension. It sharpens it. And for an industry that has spent years oscillating between hype and dismissal, that sharpening might be exactly what's needed.

Anthropic
WebProNews24d ago
Read update
Anthropic Tells Users to Stop Saying AI Has Feelings -- Then Publishes a Paper Exploring Whether It Might

Meta's AI Training Operation Hits a Wall: Inside the Mercor Data Breach That Exposed Thousands of Workers

Meta Platforms has paused a significant artificial intelligence data-training operation after discovering that its staffing partner, Mercor, suffered a data breach that exposed the personal information of thousands of contract workers. The incident has thrown a spotlight on the sprawling, often invisible workforce that underpins the development of AI systems -- and the uncomfortable questions about how that workforce is managed, compensated, and protected. The breach is not just a cybersecurity story. It's a story about the human scaffolding beneath the AI boom, the startup gold rush in data labeling, and the widening gap between the valuations these companies command and the protections they offer the people who do the work. What Happened at Mercor -- and Why Meta Hit Pause According to Business Insider, Meta suspended its data-training work with Mercor after learning that a breach had compromised personal data belonging to AI trainers -- contract workers recruited by Mercor to help label, annotate, and refine the datasets that feed Meta's large language models. The exposed information reportedly included names, email addresses, and other identifying details of workers spread across multiple countries. Mercor, a San Francisco-based startup founded in 2023, had quickly become one of the go-to intermediaries for tech giants seeking to scale up their AI training pipelines. The company's pitch was straightforward: use AI to recruit, vet, and manage a global pool of human workers who could perform the painstaking tasks of data annotation that machine learning models require. It raised substantial venture capital on the strength of that proposition, reportedly reaching a valuation north of $2 billion within two years of its founding. But the breach exposed a vulnerability that venture capital enthusiasm can't paper over. When you build a platform whose core asset is a database of thousands of workers -- many of them in developing countries, many working under informal or gig-style arrangements -- a security failure doesn't just risk corporate embarrassment. It risks real harm to real people. Meta confirmed the pause to Business Insider but declined to comment on the specifics of the investigation. Mercor, for its part, acknowledged the incident and said it was working to remediate the issue and notify affected individuals. The scope of the breach remains under investigation. A few things stand out. First, Meta's decision to halt the engagement entirely, rather than simply demand a fix, suggests the company views the breach as serious enough to warrant a full operational review. Second, the incident arrives at a moment when regulators in both the U.S. and Europe are paying closer attention to how AI companies handle personal data -- not just the data used to train models, but the data of the workers who do the training. The Invisible Workforce Behind the AI Boom The AI industry's reliance on human labor is one of its great paradoxes. Companies like Meta, Google, and OpenAI spend billions developing systems designed to automate human tasks. But those systems can't be built without enormous quantities of human judgment -- people who label images, rate chatbot responses, flag toxic content, and correct model outputs. This work is overwhelmingly performed by contract workers, often recruited through intermediaries like Mercor, Scale AI, Appen, and Remotasks. The arrangements vary, but the pattern is consistent: workers are classified as independent contractors, paid per task, and afforded few of the protections that come with traditional employment. Many are based in Kenya, the Philippines, India, and Latin America, where the per-task pay -- sometimes pennies per annotation -- goes further than it would in San Francisco or New York. The Mercor breach brings this structure into sharp relief. These workers entrusted their personal information to a platform that promised to connect them with high-profile AI projects. That information was then compromised. And because most of these workers have no direct contractual relationship with Meta, their recourse is limited. This isn't a new problem. Time reported in 2023 on the conditions faced by Kenyan workers who labeled data for OpenAI through a subcontractor, Sama, earning less than $2 per hour while reviewing disturbing content. The story prompted public outcry but limited structural change. The contracting model persists because it works -- for the companies at the top of the chain. And the scale is only growing. As generative AI models become larger and more capable, their appetite for human-labeled training data has intensified. Meta's Llama models, OpenAI's GPT series, and Google's Gemini all depend on continuous streams of human feedback to improve performance. The workers providing that feedback are, in a meaningful sense, co-creators of the technology. They are rarely treated as such. The Mercor breach didn't happen in a vacuum. It happened because the AI industry has built a supply chain that prioritizes speed and scale over worker welfare and data security. Startups like Mercor are incentivized to grow fast, sign big contracts, and demonstrate the kind of rapid scaling that justifies venture valuations. Security infrastructure and worker protections are costs that can slow that trajectory. So here's the tension: Meta needs companies like Mercor to feed its AI ambitions. Mercor needs Meta's contracts to justify its valuation. And the workers caught in between need both of them to take data protection seriously. The breach suggests that at least one link in that chain failed. There's a regulatory dimension here too. The European Union's AI Act, which began phased implementation in 2025, includes provisions around transparency in AI training data and the treatment of workers involved in AI development. The U.S. has been slower to act, but the Federal Trade Commission has signaled interest in how AI companies collect and protect personal data. A breach involving thousands of workers across multiple jurisdictions could attract scrutiny from both sides of the Atlantic. Meta itself has been under sustained regulatory pressure over data practices for years, from the Cambridge Analytica scandal to ongoing battles with EU privacy authorities over transatlantic data transfers. The company can ill afford another data protection controversy, even one that technically occurred at a partner firm. In the world of AI supply chains, the reputational and legal risks don't stop at the contract boundary. For Mercor, the stakes are existential. The company's entire value proposition rests on its ability to manage a global workforce efficiently and securely. A breach that calls that capability into question could deter not just Meta but other potential clients. Startups in the AI staffing space operate on trust -- trust from clients that the work will be done well, and trust from workers that their information will be safe. Losing either kind of trust is damaging. Losing both could be fatal. The broader AI industry should be watching closely. The race to build bigger and better models has created an enormous demand for human labor, and the infrastructure supporting that labor has not kept pace with the ambition. Data annotation platforms have proliferated, many of them young companies with limited security track records. The Mercor incident may be the first major breach in this space. It is unlikely to be the last. What happens next matters. If Meta's investigation results in stronger security requirements for its data-training partners -- and if those requirements become an industry standard -- the breach could catalyze meaningful improvement. But if the response is limited to a quiet contract renegotiation and a press statement, the underlying vulnerabilities will remain. The workers deserve better. Not just better security, but better pay, better protections, and better recognition of the role they play in building the AI systems that are reshaping industries worldwide. The Mercor breach is a reminder that behind every large language model, behind every chatbot and image generator, there are thousands of people doing difficult, often invisible work. When the systems built to manage those people fail, the consequences fall hardest on the people with the least power to do anything about it. That's the real story here. Not just a data breach at a startup. A stress test of an entire industry's relationship with the human labor it depends on -- and a test it appears to be failing.

Mercor
WebProNews24d ago
Read update
Meta's AI Training Operation Hits a Wall: Inside the Mercor Data Breach That Exposed Thousands of Workers

The Fork Frenzy: Inside the Developer Gold Rush Around Anthropic's Claude Code

When Anthropic open-sourced Claude Code -- its AI-powered command-line coding agent -- the developer community didn't just notice. It pounced. A glance at the GitHub fork registry for the claude-code repository tells a striking story. Thousands of developers have forked the project, creating their own copies to modify, extend, and experiment with Anthropic's terminal-based AI coding assistant. The fork count has been climbing steadily, a reliable barometer of grassroots developer enthusiasm that often precedes significant commercial adoption. But what's really happening inside those forks? And what does the sheer volume of community activity say about where AI-assisted software development is heading? From Research Project to Developer Obsession Claude Code, for the uninitiated, is Anthropic's agentic coding tool that operates directly in the terminal. Unlike browser-based AI assistants or IDE plugins, it works where many senior engineers already live -- the command line. It can read and write files, execute shell commands, search codebases, manage git workflows, and handle multi-step programming tasks with minimal hand-holding. Think of it as an AI pair programmer that doesn't need a graphical interface. Anthropic released it as part of a broader push to make Claude models useful not just for conversation but for real work. The tool connects to Claude's large language models via API and translates natural language instructions into concrete coding actions. It's opinionated software -- designed to work a specific way, with specific guardrails -- but open enough that developers can see what's under the hood. That openness is precisely what triggered the fork explosion. On GitHub, forking a repository is the first step toward modifying it. Some forks are casual -- a developer bookmarking the code for later reading. Others are serious engineering efforts: adding features, swapping out model backends, integrating with proprietary toolchains, or stripping out telemetry. The forks page for claude-code shows a long and growing list of individual developers and organizations that have taken the code in their own direction. The pattern is familiar to anyone who watched the early days of VS Code, Docker, or Kubernetes. When a well-funded company releases a polished open-source tool that solves a real problem, the community doesn't wait for permission to build on it. Several trends are visible in the fork activity. A significant number of forks appear focused on making Claude Code work with alternative AI models -- connecting it to open-weight models like Meta's Llama series or Mistral's offerings instead of Anthropic's proprietary Claude. This is a common pattern in open-source AI tooling: developers want the workflow without the vendor lock-in. Other forks are adding support for additional programming languages, customizing the agent's behavior for specific enterprise environments, or experimenting with multi-agent architectures where several Claude Code instances collaborate on different parts of a project. Some forks are more radical. A handful appear to be rearchitecting the tool's core loop -- how it decides what action to take next, how it handles errors, how it manages context windows. These are the forks worth watching. They represent developers who believe the basic concept is right but the implementation can be pushed much further. The timing of this community surge is no accident. The broader AI coding tool market has entered a phase of intense competition. GitHub Copilot, long the default choice, now faces pressure from multiple directions. Google's Gemini Code Assist has been expanding its capabilities. Amazon's CodeWhisperer (now part of Amazon Q Developer) is targeting enterprise shops already embedded in AWS. Cursor, the AI-native code editor, has attracted a devoted following among early adopters. And a wave of startups -- Cody by Sourcegraph, Tabnine, Codeium, and others -- are all fighting for developer attention. Claude Code occupies a distinctive niche in this crowded field. It's not an IDE. It's not a plugin. It's an agent. That distinction matters more than it might seem. Plugins autocomplete your code. Agents do the work. When a developer tells Claude Code to "refactor this module to use async/await and update all the tests," the tool doesn't just suggest changes. It reads the files, plans the modifications, makes them, runs the test suite, and iterates if something breaks. That agentic loop -- plan, act, observe, adjust -- is what separates this class of tool from the autocomplete generation that preceded it. According to recent reporting by The Verge, Anthropic has been expanding Claude Code's capabilities with an SDK and integrations including GitHub Actions, signaling that the company sees the tool not just as a standalone product but as infrastructure that other applications can build on. That SDK release likely accelerated the forking trend -- giving developers a more structured way to build on top of Claude Code's capabilities rather than just hacking the source directly. The Enterprise Implications Are Enormous For CTOs and engineering leaders, the fork activity around Claude Code is a leading indicator. When thousands of developers voluntarily spend their time extending a tool, enterprise adoption typically follows within 12 to 18 months. The pattern played out with Terraform, with Prometheus, with countless other infrastructure tools that started as developer darlings before becoming corporate standards. But the enterprise path for AI coding agents is more complicated than for traditional DevOps tooling. Security is one concern -- these agents can execute arbitrary shell commands, read sensitive files, and interact with production systems. Governance is another. When an AI agent writes code that introduces a bug or a vulnerability, the accountability chain gets murky fast. Anthropic has built guardrails into Claude Code, including permission prompts before potentially destructive actions and configurable restrictions on what the agent can access. But many of the forks appear to be loosening those restrictions -- a predictable developer behavior that should give security teams pause. There's also the question of cost. Claude Code runs on Anthropic's API, and complex multi-step coding tasks can consume significant token volume. Several forks are explicitly focused on reducing token usage through smarter context management and caching strategies -- practical engineering work that addresses a real pain point for teams considering adoption at scale. The competitive dynamics are shifting quickly. In recent weeks, reports have surfaced about OpenAI accelerating its own agentic coding efforts, with its Codex tool positioning as a direct competitor to Claude Code's terminal-first approach. Reuters has covered the intensifying rivalry between Anthropic and OpenAI across multiple product categories, with coding tools emerging as a particularly contested battleground. Meanwhile, the open-source AI community hasn't been sitting still. Projects that allow developers to run capable coding agents entirely on local hardware -- no API calls, no cloud dependency -- are gaining traction. The appeal is obvious: no per-token costs, no data leaving the building, no vendor dependency. The tradeoff is capability. Local models, even good ones, can't yet match the performance of frontier models like Claude 3.5 Sonnet or GPT-4 on complex, multi-file coding tasks. But the gap is narrowing. This creates an interesting strategic tension for Anthropic. By open-sourcing Claude Code, the company made it trivially easy for developers to swap in competing models. Some forks have done exactly that. Anthropic is betting that the quality of its models -- not the lock-in of its tooling -- will keep developers on the platform. It's a bold bet. And historically, it's the right one. Developers gravitate toward the best tools, and they resent artificial barriers. The fork data also reveals something about the geography of AI development. Scanning the contributor profiles on the GitHub forks page, the activity spans the United States, Europe, India, China, Japan, Brazil, and dozens of other countries. AI coding tools are not a Silicon Valley phenomenon. They're a global one. And the modifications being made in different regions often reflect local needs -- support for specific languages, compliance with regional data regulations, integration with locally popular development platforms. So where does this all lead? The most likely near-term outcome is consolidation around a small number of dominant AI coding agents, with Claude Code positioned as a strong contender for the terminal-native segment. The fork activity suggests a healthy and growing contributor base that could evolve into a formal open-source community -- with plugin architectures, extension marketplaces, and third-party integrations that extend the tool's reach far beyond what Anthropic could build alone. The less likely but more transformative outcome is that agentic coding tools fundamentally change how software teams are structured. If an AI agent can handle the routine 70% of coding work -- boilerplate, tests, refactoring, dependency updates, documentation -- then the economics of software development shift dramatically. Smaller teams can ship more. Senior engineers become force multipliers. Junior developer roles evolve from writing code to reviewing and directing AI-generated code. That future isn't here yet. But the thousands of developers forking Claude Code and pushing it in new directions are building toward it, one commit at a time. For now, the signal from GitHub is clear. Developers aren't just interested in AI coding agents. They're building their workflows around them. And Anthropic, by opening the source and letting the community run, has positioned itself at the center of something that looks less like a product launch and more like a movement. Whether the company can convert that community energy into durable commercial advantage -- against rivals with deeper pockets and broader distribution -- remains the defining question of this chapter of the AI wars.

Anthropic
WebProNews24d ago
Read update
The Fork Frenzy: Inside the Developer Gold Rush Around Anthropic's Claude Code

Polymarket Apologizes For Allowing Wagers On Fate Of U.s. Pilots Downed In Iran

BERITAJA is a International-focused news website dedicated to reporting current events and trending stories from across the country. We publish news coverage on local and national issues, politics, business, technology, and community developments. Content is curated and edited to ensure clarity and relevance for our readers. Prediction marketplace level Polymarket issued an apology Friday aft allowing users to spot bets connected the destiny of American pilots aboard a U.S. combatant pitchy downed complete Iran. A two-seater F-15E Strike Eagle was changeable down complete Iran connected Friday, according to a U.S. official. One unit personnel was rescued, but the different remains missing. In a since-deleted market, users had been capable to wager connected erstwhile the pilots mightiness beryllium rescued, pinch the mostly predicting a Saturday rescue. "US confirms pilots rescued by...?" the marketplace read. Rep. Seth Moulton, D-Mass. -- a U.S. Marine Corps seasoned who served successful Iraq -- slammed the marketplace successful a station connected X, noting that bets were being placed arsenic a vulnerable hunt and rescue cognition was ongoing successful Iran. "They could beryllium your neighbor, a friend, a family member," he wrote. "And group are betting connected whether aliases not they'll beryllium saved." "This is DISGUSTING," he added. In a reply to Moulton's X post, Polymarket apologized and said it took the marketplace down. "We took this marketplace down instantly arsenic it does not meet our integrity standards," the institution wrote. "It should not person been posted, and we are investigating really this slipped done our soul safeguards." Moulton replied to Polymarket's apology, saying the company's "integrity standards are severely lacking" and pointing to different war-related bets still progressive connected the platform. "Taking down this peculiar stake aft I called it retired could only beryllium the first step, @Polymarket," he wrote. "There are still 219 warfare bets progressive connected your platform. Remove these immediately." Prediction marketplace platforms, wherever users could spot bets connected everything from wars and elections to popular civilization and sporting events, person precocious travel nether Congressional scrutiny arsenic their fame has soared. Last month, lawmakers introduced a Senate measure that would prohibition prediction markets for illustration Polymarket and competitor Kalshi from accepting aliases listing transactions related to sports events and casino-style games. In caller weeks, Sen. Chris Murphy, (D-Conn.) has besides pledged to present authorities to prohibition bets tied to authorities actions, citing wagers made connected the ongoing war.

Polymarket
Beritaja24d ago
Read update
Polymarket Apologizes For Allowing Wagers On Fate Of U.s. Pilots Downed In Iran

Anthropic Says Claude Code Subscribers Will Need To Pay Extra For Openclaw Support

BERITAJA is a International-focused news website dedicated to reporting current events and trending stories from across the country. We publish news coverage on local and national issues, politics, business, technology, and community developments. Content is curated and edited to ensure clarity and relevance for our readers. It's about to go much costly for Claude Code subscribers to usage Anthropic's coding adjunct pinch OpenClaw and different third-party tools. According to a customer email shared connected Hacker News, Anthropic said that starting astatine noon Pacific connected April 4 (today), subscribers will "no longer beryllium capable to usage your Claude subscription limits for third-party harnesses including OpenClaw." Instead, they'll request to salary for other usage done "a pay-as-you-go action billed separately from your subscription." The institution said that while it's starting pinch OpenClaw today, the argumentation "applies to each third-party harnesses and will beryllium rolled retired to much shortly." Anthropic's caput of Claude Code Boris Cherny wrote connected X that the company's "subscriptions weren't built for the usage patterns of these third-party tools" and that Anthropic is now trying "to beryllium intentional successful managing our maturation to proceed to service our customers sustainably long-term." The announcement comes aft OpenClaw creator Peter Steinberger said he was joining Anthropic rival OpenAI, pinch OpenClaw continuing arsenic an unfastened root task pinch support from OpenAI. Steinberger posted that he and OpenClaw committee personnel Dave Morin "tried to talk consciousness into Anthropic" but were only capable to hold the accrued pricing by a week. "Funny really timings lucifer up, first they transcript immoderate celebrated features into their closed harness, past they fastener retired unfastened source," Steinberger said. Cherny, however, insisted that Claude Code squad members are "big fans of unfastened source" and that he himself "just put up a fewer [pull requests] to amended punctual cache ratio for OpenClaw specifically." "This is much about engineering constraints," he said, adding that Anthropic is still offering afloat refunds for subscribers. "We cognize not everyone realized this isn't thing we support, and this is an effort to make it clear and explicit." Meanwhile, OpenAI precocious shut down its Sora app and video procreation models, reportedly to free up computing resources and arsenic portion of a broader effort to refocus connected winning complete the package engineers and enterprises that are progressively relying connected products for illustration Claude Code.

Anthropic
Beritaja24d ago
Read update
Anthropic Says Claude Code Subscribers Will Need To Pay Extra For Openclaw Support
Showing 8641 - 8660 of 11425 articles