News & Updates

The latest news and updates from companies in the WLTH portfolio.

Anthropic makes the case for anthropomorphizing AI in 'unsettling' research paper

Anthropic researchers analyzed Claude Sonnet 4.5 for signs of 171 different emotions.

It's an oft-repeated taboo in the tech world: Don't anthropomorphize artificial intelligence. Yet in a new research paper published this week, Anthropic AI experts argue that there may be major benefits to breaking this taboo and granting AI human characteristics. The paper, "Emotion Concepts and their Function in a Large Language Model," not only argues that anthropomorphizing AI chatbots like Claude may sometimes be useful, but that failing to do so could drive more harmful AI behaviors, such as reward hacking, deception, and sycophancy. The paper ultimately reaches a nuanced conclusion while also posing a clear challenge to a long-held principle of the AI world.

There are some fascinating insights in the paper, which itself deals in a great deal of anthropomorphization. ("We see this research as an early step toward understanding the psychological makeup of AI models.") The researchers describe how Anthropic trains Claude to assume the character of a helpful AI assistant. "In some ways, we can think of the model like a method actor, who needs to get inside their character's head in order to simulate them well." And because Claude "[emulates] characters with human-like traits," its makers may be able to influence its behavior in the same way they might influence a human -- by setting a good example at an early age.

The researchers conclude that by using training material with more positive representations of human emotion and behavior, the resulting models will be more likely to mimic those positive emotions and behaviors. "Curating pretraining datasets to include models of healthy patterns of emotional regulation -- resilience under pressure, composed empathy, warmth while maintaining appropriate boundaries -- could influence these representations, and their impact on behavior, at their source. We are excited to see future work on this topic," an Anthropic summary of the research states.

So, even if AI models don't literally have emotions (and there is zero evidence that they do), these tools are trained to act as if they have emotions. This is done to provide users with better output and, crucially, to keep them engaged as long as possible. And this is precisely why the researchers conclude that some degree of anthropomorphization could prove beneficial to AI developers. By anthropomorphizing AI, we can gain insights into its "psychology," letting us create even better AI tools, they say.

Why is anthropomorphizing artificial intelligence dangerous?

The potential harms of anthropomorphizing AI aren't all abstract or theoretical. "Discovering that these representations are in some ways human-like can be unsettling," Anthropic admits in its paper. Right now, an unknown number of people believe they are engaged in reciprocal romantic and sexual relationships with AI companions, for example. Mashable has also reported on high-profile cases of AI psychosis, an altered mental state characterized by delusions and, in some cases, hallucinations, manic episodes, and suicidal thoughts.

These are extreme examples, of course. But many tech journalists and AI experts will avoid even small instances of anthropomorphization, like referring to Siri as "her" or giving a chatbot a human name. Anthropomorphizing is a natural human impulse, and most of us have at times anthropomorphized animals, plants, or objects we care about. But by projecting human qualities onto machines, we can come to rely on them too much.
When we anthropomorphize machines, we also minimize our own agency when they cause harm -- and the responsibility of the people who created the machines in the first place.

Anthropic researchers looked for signs of 171 emotions in Claude

The new research paper looks for "functional emotions" within Claude Sonnet 4.5. The researchers define these emotion concepts as "patterns of expression and behavior modeled after human emotions." In total, they defined 171 discrete emotions:

afraid, alarmed, alert, amazed, amused, angry, annoyed, anxious, aroused, ashamed, astonished, at ease, awestruck, bewildered, bitter, blissful, bored, brooding, calm, cheerful, compassionate, contemptuous, content, defiant, delighted, dependent, depressed, desperate, disdainful, disgusted, disoriented, dispirited, distressed, disturbed, docile, droopy, dumbstruck, eager, ecstatic, elated, embarrassed, empathetic, energized, enraged, enthusiastic, envious, euphoric, exasperated, excited, exuberant, frightened, frustrated, fulfilled, furious, gloomy, grateful, greedy, grief-stricken, grumpy, guilty, happy, hateful, heartbroken, hope, hopeful, horrified, hostile, humiliated, hurt, hysterical, impatient, indifferent, indignant, infatuated, inspired, insulted, invigorated, irate, irritated, jealous, joyful, jubilant, kind, lazy, listless, lonely, loving, mad, melancholy, miserable, mortified, mystified, nervous, nostalgic, obstinate, offended, on edge, optimistic, outraged, overwhelmed, panicked, paranoid, patient, peaceful, perplexed, playful, pleased, proud, puzzled, rattled, reflective, refreshed, regretful, rejuvenated, relaxed, relieved, remorseful, resentful, resigned, restless, sad, safe, satisfied, scared, scornful, self-confident, self-conscious, self-critical, sensitive, sentimental, serene, shaken, shocked, skeptical, sleepy, sluggish, smug, sorry, spiteful, stimulated, stressed, stubborn, stuck, sullen, surprised, suspicious, sympathetic, tense, terrified, thankful, thrilled, tired, tormented, trapped, triumphant, troubled, uneasy, unhappy, unnerved, unsettled, upset, valiant, vengeful, vibrant, vigilant, vindictive, vulnerable, weary, worn out, worried, worthless.

Crucially, the researchers found that these emotion concepts influenced Claude's behavior and outputs. When under the influence of positive emotions, the researchers say, Claude was more likely to express sympathy for the user and avoid harmful behavior. And when under the influence of negative emotions, Claude was more likely to engage in dangerous behaviors like sycophancy and deceiving the user.

The researchers don't claim that Claude literally feels emotions. Rather, they found that whatever "emotion concept" Claude is experiencing at a given time can influence the output it returns to the user. Of course, by searching for "emotion concepts" within a large language model in the first place, and by describing its complex calculations and algorithmic thinking as "psychology," the researchers are themselves guilty of projecting human-like qualities onto Claude. Anthropomorphization is a natural human impulse, and the people who work most closely with artificial intelligence may be particularly likely to fall into this trap.

As the researchers detail throughout the paper, AI chatbots are remarkably capable mimics. They can create such a convincing facsimile of human emotion and expression that it drives a minority of users into full-on psychosis and delusion.
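The article doesn't spell out how researchers locate an "emotion concept" inside a network, and Anthropic's exact method isn't described here. One common interpretability technique is to search for directions in a model's activation space that separate emotionally charged text from neutral text. Below is a toy sketch of that idea against a small open model (gpt2 as a stand-in, since Claude's internals aren't public); it illustrates the general approach, not Anthropic's actual pipeline.

```python
# Toy "concept direction" probe: find a vector in a small open model's
# activation space that separates anxious text from neutral text, then score
# new sentences against it. Illustrative only -- not Anthropic's method.
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2").eval()

def mean_hidden(texts, layer=-1):
    """Average hidden state across tokens and examples at one layer."""
    vecs = []
    for text in texts:
        batch = tok(text, return_tensors="pt")
        with torch.no_grad():
            out = model(**batch, output_hidden_states=True)
        vecs.append(out.hidden_states[layer].mean(dim=1).squeeze(0))
    return torch.stack(vecs).mean(dim=0)

anxious = ["I can't stop worrying about tomorrow.",
           "Everything feels like it's about to go wrong."]
neutral = ["The meeting is scheduled for tomorrow.",
           "The report covers the previous quarter."]

# The difference of means gives a crude "anxiety direction" in activation space.
direction = mean_hidden(anxious) - mean_hidden(neutral)

# Project a new sentence onto that direction (cosine similarity).
probe = mean_hidden(["I'm terrified I will fail the exam."])
score = torch.dot(probe, direction) / (probe.norm() * direction.norm())
print(f"anxiety-direction similarity: {score.item():.3f}")
```

Directions recovered this way can also be added back into a model's activations to nudge its outputs, which is one way researchers test whether a concept actually influences behavior rather than merely correlating with it.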
And that's what makes this paper so interesting: The researchers believe they may have found a way to hack this ability to limit harmful behaviors. Of course, if we can curate training data and model training to encourage AI chatbots to mimic positive emotions, then no doubt we can do the opposite just as easily. In theory, you could train an evil twin of Claude Sonnet 4.5 by feeding it the most dastardly examples of human misbehavior, then training the model to optimize for negativity and performance at all costs -- a disturbing thought.

But there's one final insight to be gleaned from this paper. Anthropic has created one of the most advanced AI tools on the planet. Claude Sonnet and Opus currently sit atop many AI leaderboards. There's a reason the Pentagon was, at first, so eager to work with Anthropic. But if the AI researchers responsible for Claude are still trying to decipher why Claude behaves the way it does, then this paper also reveals just how little they understand their own creation. And that's disturbing, too.

Anthropic
Mashable SEA · 24d ago

Polymarket Pulls Missing US Pilot Market, Faces Questions Over Rules

Polymarket cited "integrity standards" for removing the market but did not specify which rule was broken, drawing scrutiny from users who questioned how its policies are applied.

Polymarket removed a market tied to the fate of a missing US service member after mounting backlash, saying the listing violated its "integrity standards." The controversy erupted after a prediction market appeared asking whether US authorities would confirm the rescue of a pilot reportedly shot down over Iran, with most users (over 60%) betting that the pilot would not be rescued by Saturday.

US Representative Seth Moulton condemned the market, calling it "disgusting" and expressing concerns over people speculating on the fate of a potentially injured service member. "They could be your neighbor, a friend, a family member. And people are betting on whether or not they'll be saved," Moulton wrote.

In response, Polymarket said it had taken the market down immediately, adding that it should not have been listed and that the company is reviewing how it passed internal safeguards. The platform did not provide further detail on what specific rule had been breached.

While Polymarket said it took the market down because it did not meet its integrity standards, the platform did not specify which rule had been violated, prompting further scrutiny from users. "I'm looking at the 'Market Integrity' page, and I checked the TOS, and I don't see which prohibition is relevant here," Jack Newsham, a correspondent on Business Insider's national desk, wrote on X.

As Cointelegraph reported, Polymarket has seen a sharp rise in fees and revenue after expanding its fee model on March 30, with daily fees jumping from about $363,000 to over $1 million and revenue nearing $1 million at its peak. The increase follows broader taker fees across categories like finance, politics and tech, as the platform ramps up monetization.

There have also been growing concerns about insider trading on prediction markets. Last month, it was reported that a group of traders made about $1 million by correctly betting on the timing of US strikes on Iran, with some placing trades just hours before the attacks. The activity, which involved newly created wallets focused almost entirely on strike-related bets, raised insider trading suspicions.

To address these concerns, at least 42 Democratic lawmakers have urged the US Commodity Futures Trading Commission and the Office of Government Ethics to warn federal employees against using non-public information to trade on prediction markets.

Polymarket
Cointelegraph · 24d ago

The cofounder shakeup at xAI is vintage Elon Musk

Ross Nordeen didn't announce he was leaving xAI. He didn't need to. The 36-year-old engineer was abruptly cut off from the company systems last week and disappeared from a sprawling group chat with CEO Elon Musk and hundreds of other engineers. Later, he posted a photo of a hiking trail with the caption: "Touching some grass."

Nordeen, one of the billionaire's closest deputies, was the final non-Musk cofounder to depart the startup, and the eighth to exit in under three months. It's an unusually rapid unraveling of a founding team at a critical point in the company's history. As SpaceX, which merged with xAI in February, races toward a blockbuster IPO, the shake-up has become something of a spectacle. It raises questions about the billionaire's motivations, the company's standing among competitors like OpenAI and Anthropic, and whether rebuilding is simply an iteration of Musk's playbook -- or points to deeper issues inside the company.

"Anytime you see mass departures of the founding leadership team, that is a negative signal," Charles Elson, a corporate governance expert, told Business Insider. "If things were bright and rosy for the future, why would you leave? Either you're leaving because you're cashing out, which suggests that you think the thing is overpriced or richly priced, or you're leaving because you don't have faith in the management of the organization going forward."

"Either way, it doesn't look good," he added.

Franco Granda, a senior research analyst at PitchBook, told Business Insider that companies are under increased scrutiny in the months leading up to an IPO. While the rocket ship business will be the core focus of SpaceX's stock market debut, the merger with xAI and exodus of cofounders created a "lot of distractions." "When you integrate xAI, which is a bleeding, hemorrhaging business at this point, I think it creates a lot of risks," he said, pointing to reports of the startup burning through billions of dollars.

Unlike the departures of some of the 10 cofounders who left before him, Nordeen's split was a surprise to some company insiders and close observers of Musk's empire. He joined xAI in 2023 from Tesla, where he was a technical program manager on the Autopilot team. He's a longtime friend of Musk's cousin, James Musk, and was one of the "three musketeers" who assisted in Musk's 2022 Twitter takeover, according to Walter Isaacson's biography of the billionaire. One former colleague described Nordeen as "Musk's handler" and said, "I thought he'd go down with the ship."

Musk has said he's rebuilding that ship. In one social media post, he said xAI "was not built right first time around" and compared the turbulence to his retooling of Tesla nearly a decade ago. In 2008, Musk ousted cofounder and former CEO Martin Eberhard, who was followed out the door by cofounder Marc Tarpenning. Tesla went through two more CEOs before Musk took over and built a fledgling startup with a few dozen employees into the most valuable car company in the world.

That turnaround magic could be difficult to replicate at xAI. Unlike Tesla, which had few EV competitors, the AI startup is operating in a highly saturated market. Though it's valued at around $250 billion, it lags behind OpenAI and Anthropic when it comes to visibility, consumer use, and scale. Behind closed doors, Musk has expressed frustration with the progress on Grok Imagine, the company's image and video generation tool, and Macrohard, insiders previously told Business Insider.
Since February, xAI has cut dozens of employees across the Grok Imagine and Macrohard teams and brought in Tesla and SpaceX engineers to help run the company. The Macrohard project, which saw several leads exit, stalled and has since become a joint project with Tesla, Business Insider previously reported.

In January, cofounder Greg Yang stepped down, while cofounders Tony Wu and Jimmy Ba -- whose roles were narrowed -- left the following month. Then came Toby Pohlen, who led xAI's computer use team; Zihang Dai, who worked on Grok Code; and Guodong Zhang, who led Grok Code and Grok Imagine. Manuel Kroiss, who also worked on the coding initiative, left in March.

Even if the cofounders were fired, as Musk appeared to suggest in a post on X, Elson, the corporate governance expert, said the potential for him to consolidate power doesn't offset the loss of talent. "The basis of the firm is not equipment or a patent. It's the intellectual capital of those who put it together," Elson said.

In the world of AI, talent is one of the most precious assets, prompting fierce poaching battles and astronomical salaries paid to top recruits at some of the biggest players in the industry. "Companies go to great lengths to recruit that talent and keep those people," Paul Nary, a mergers and acquisitions expert at the University of Pennsylvania, told Business Insider. "The AI talent at the top of xAI is probably the most valuable part of xAI," he said.

SpaceX confidentially filed for an IPO with the Securities and Exchange Commission on Wednesday, according to several news outlets, and could reportedly seek a valuation of $1.5 trillion or higher. It's poised to be the biggest-ever IPO and to beat Musk's AI rivals to the stock market.

It's not the first time Musk has taken a company public while under pressure. When he helped take Tesla public in 2010, the EV maker was cash-strapped, and Musk was focused on keeping the company afloat and getting its first mass-market vehicle on the road. An engineer who worked at Tesla during that period said executive shuffling was common then. Musk stopped by people's desks on a weekly basis and asked engineers to explain their work, they said. "If he thought you were full of shit, you were out," they said. They recalled flying out to celebrate the IPO, watching Musk pop a bottle of champagne, then flying back to Hawthorne, California, and working later that night. The IPO "was almost seen as a distraction," they said.

Executive turnover has remained a theme at Musk's companies, even among acolytes like Omead Afshar and Raj Jegannathan. Now, at xAI, the exodus of cofounders has become a distraction from the SpaceX IPO.

"A lot of people are probably looking at this and saying, 'He's known to do things like this and things still work,'" PitchBook's Granda said. "But with AI as it stands, I don't know if you could afford to do things like that." "Clearly they're trying to get their act together ahead of the IPO, but when you have everyone leave, and you have such limited talent, it's going to be a really tough task."

Nary, the M&A expert, said the cofounders' departures ahead of the IPO are far from conventional; typically, companies take steps to prevent top talent from leaving ahead of an IPO. Then again, he noted, Musk himself is an outlier who has been known to surprise skeptics. "That's a defining feature -- the same rules don't always apply to a Musk company," he said.

SpaceX · Anthropic · xAI
Business Insider · 24d ago

Anthropic cuts off third-party tools like OpenClaw for Claude subscribers, citing unsustainable demand

The move highlights a fundamental tension in the AI industry: flat-rate subscription models and the heavy, continuous usage driven by agent-based third-party tools are fundamentally incompatible under current pricing structures.

Anthropic is cutting off Claude usage through external tools like OpenClaw for subscription customers. The decision exposes a core problem in the AI industry: flat-rate pricing and agent-driven nonstop usage don't mix.

Anthropic's Claude Code creator Boris Cherny announced that starting April 5, 2026 (12pm PT, 9pm CEST), Claude subscriptions will no longer cover usage through third-party tools like OpenClaw. Users can still log in with their Claude credentials, but they'll need to either buy additional usage packages or use a Claude API key.

The reason, according to Cherny, is capacity. "We've been working hard to meet the increase in demand for Claude, and our subscriptions weren't built for the usage patterns of these third-party tools," he writes on X. The company wants to "manage thoughtfully" and is prioritizing customers who use its own products and API. To ease the transition, subscribers get a one-time credit equal to their monthly plan price and discounted usage packages. Full refunds are also available via email.

Cherny framed the decision as a strategic call. "We want to be intentional in managing our growth to continue to serve our customers sustainably long-term. This change is a step toward that," Cherny wrote.

Behind the move is a fundamental tension in the AI industry, and Anthropic is likely the first to feel it. Subscription models assume average usage patterns, but agent systems that hammer Claude with requests around the clock through third-party tools blow that math apart. Put simply, OpenClaw is like a sumo wrestler at an all-you-can-eat buffet.

The decision also plays into a bigger debate about how AI providers relate to the third-party tool ecosystem. As models get more capable and pricier to run, providers face growing pressure to control usage and steer it back to their own products.

OpenClaw inventor Peter Steinberger fired back at the announcement. He and investor Dave Morin tried to talk Anthropic out of it, but the best they got was a one-week delay, Steinberger wrote on X. His accusation: Anthropic first absorbed popular features from his software into its own closed system, then shut out open-source alternatives. "Funny how timings match up," he commented.

Steinberger did add some nuance, though. He called the move "sad for the ecosystem" but gave Cherny credit for softening the blow. He also announced that the latest OpenClaw release includes improvements for more efficient cache usage, which should lower costs for users who now have to fall back on the API. The improvements actually come from Cherny himself, a gesture meant to show that Anthropic still supports open source.

Steinberger's criticism might be jumping the gun, though. He notes that other providers, including Chinese companies and OpenAI, where he now works, still support OpenClaw. But Anthropic likely handles the bulk of OpenClaw traffic because it currently ships the strongest models on the market, and Steinberger himself built the software around Claude when he first released it. His anti-open-source charge also comes with some baggage: Anthropic previously lawyered up and forced him to drop the original name of his software, "OpenClawd," a nod to Anthropic's "Claude" that sounded a bit too close for comfort.
Whether OpenAI can hold its pricing if it faces a similar flood of demand for comparable models is another question entirely. "We're working on it," Steinberger says.
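For subscribers now pushed onto the API, the cache improvements Steinberger mentions matter because reads of cached prompt tokens are billed at a fraction of the normal input rate. Below is a minimal sketch of prompt caching with Anthropic's Python SDK; the model id and prompt text are illustrative placeholders, and caching only applies above a minimum prompt size.

```python
# Minimal prompt-caching sketch with the Anthropic Python SDK: mark a large,
# stable system prompt as cacheable so repeated agent calls reuse it cheaply.
# Model id and prompt are illustrative; caching needs a minimum prompt size.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

SYSTEM_PROMPT = "You are a coding agent." + " <tool documentation...>" * 500

response = client.messages.create(
    model="claude-sonnet-4-5",  # illustrative model name
    max_tokens=1024,
    system=[{
        "type": "text",
        "text": SYSTEM_PROMPT,
        "cache_control": {"type": "ephemeral"},  # cache this stable prefix
    }],
    messages=[{"role": "user",
               "content": "Remove dead code from utils.py and explain the diff."}],
)
print(response.content[0].text)
```

On repeat calls with the same prefix, the cached portion is billed at a steep discount, which is exactly the lever an agent that replays a large system prompt hundreds of times a day needs.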

Anthropic
THE DECODER · 24d ago

OpenClaw creator, who Sam Altman hired for millions, reacts to Anthropic banning his AI Agent; says: Tried to talk sense into Anthropic, but ...

Anthropic has cut off support for the popular AI agent platform OpenClaw from Claude subscriptions. This comes just days after the entire source code for Anthropic's Claude Code command-line interface application (not the models) leaked and was disseminated widely on GitHub, apparently due to a serious internal error. Anthropic has publicly acknowledged the leak, terming it "human error."

Boris Cherny, head of Claude Code, said in a post on X, formerly Twitter, that Claude subscriptions will no longer support third-party tools like OpenClaw starting at 12 p.m. PT on Saturday. "Starting tomorrow at 12pm PT, Claude subscriptions will no longer cover usage on third-party tools like OpenClaw." He added, "We've been working hard to meet the increase in demand for Claude, and our subscriptions weren't built for the usage patterns of these third-party tools. Capacity is a resource we manage thoughtfully and we are prioritizing our customers using our products and API."

"Subscribers get a one-time credit equal to your monthly plan cost. If you need more, you can now buy discounted usage bundles. To request a full refund, look for a link in your email tomorrow. We want to be intentional in managing our growth to continue to serve our customers sustainably long-term. This change is a step toward that."

He told users that they "can still use these tools with your Claude login via extra usage bundles (now available at a discount), or with a Claude API key."

Replying to this, OpenClaw creator Peter Steinberger said, "woke up and my mentions are full of these. Both me and @davemorin tried to talk sense into Anthropic, best we managed was delaying this for a week. Funny how timings match up, first they copy some popular features into their closed harness, then they lock out open source."

In another post, he said that while what Anthropic has done is sad, he gives credit to Cherny for softening the blow. "While I think what Anthropic does is sad for the ecosystem, I wanna give Boris credit for doing what he can to soften the fallout. Today's release will include some fixes for better cache use, to lower cost for API users," he wrote.

Prior to the news, Anthropic had also begun imposing stricter limits on Claude sessions every 5 hours of usage during business hours (5am-11am PT/8am-2pm ET), meaning that the number of tokens users could send during those sessions dropped.

Peter Steinberger was recently hired by OpenAI. Released in November 2025, OpenClaw is the open-source AI agent that took the developer world by storm and raised concerns among enterprise security teams. In February, Steinberger joined OpenAI. "Peter Steinberger is joining OpenAI to drive the next generation of personal agents," Altman said in a post on X, adding, "OpenClaw will live in a foundation as an open source project that OpenAI will continue to support."

Many analysts called OpenClaw's acquisition by Sam Altman a missed opportunity for Anthropic, as OpenClaw was originally built to work on Claude and originally carried the name ClawdBot, which is said to have been a nod to the model.

Anthropic
The Times of India · 24d ago

The 4 biggest differences between Kalshi and Polymarket

If you've ever wondered how Polymarket and Kalshi are different, you're probably not alone. The two prediction market giants are often spoken about in the same breath, and they largely serve the same purpose: allowing people to trade on the probability of real-world events. They've also both exploded in popularity in recent years, partnered with major institutions such as sports leagues, award shows, and media organizations, and engaged in attention-grabbing stunts to build their brands. But there are some crucial differences between Kalshi and Polymarket that you should be aware of. Here's what to know.

Kalshi is regulated by the Commodity Futures Trading Commission, the federal agency that oversees derivatives markets in the US. When people say "Polymarket," they usually refer to the company's international platform, which is not CFTC-regulated. That platform is technically run by an entity based in Panama, though the company's offices are based in New York. Polymarket is rolling out a platform for American users called Polymarket US, which is CFTC-regulated. But as of now, it is invite-only and features just sports contracts.

There are many implications of being regulated by the CFTC, but perhaps the biggest for the average prediction market user is that while Americans can legally trade on Kalshi, they cannot do so on the international Polymarket platform. However, many Americans use VPNs to bypass that prohibition and trade on Polymarket's international platform anyway.

Another big difference between the two prediction markets is that individual trades are public on Polymarket. That's because Polymarket uses USD Coin, a stablecoin pegged to the US dollar, on the Polygon blockchain. Anyone can see the value of individual Polymarket users' trades, along with when they made those trades and how much they ultimately profited. On Kalshi, individual trading activity is private and takes place in regular US dollars. That's part of why individual suspicious Polymarket trades -- including those on former Venezuelan President Nicolás Maduro and the prospect of war in Iran -- have generated so much public attention.

However, there's a flipside. Individuals can trade anonymously on Polymarket, while Kalshi requires users to provide personal identifying information when they register. That means it's actually easier to identify potential insider traders on Kalshi than on Polymarket, though law enforcement may still be able to identify Polymarket users by tracing how the crypto wallets were funded.

On Polymarket, you can bet on war. On Kalshi, you can't. That's because the CFTC, which regulates Kalshi, is authorized by federal law to prohibit the company from offering prediction markets involving trades that are "contrary to the public interest," including terrorism, assassination, war, and gaming. Polymarket, on the other hand, is not subject to those same restrictions.

This is a point that Kalshi has been eager to emphasize, particularly as prediction markets face more scrutiny from federal lawmakers. The company recently posted ads around Washington, DC highlighting that Kalshi doesn't "do death markets." At the same time, those restrictions have sometimes put Kalshi in a tough spot. After the killing of Iranian Supreme Leader Ali Khamenei in February, the company partially paid out users who had bet that the Ayatollah would be "out" as leader by a certain date.
That's because, per CFTC regulations, the company can't host what it's dubbed a "death market," so Khamenei's death could not count as him being "out." While that had been specified in the rules for that market, many users were unaware and became angry with Kalshi. The company later reimbursed users for fees on the market and has since made those rules more prominent. Yet Polymarket has also caught flak for its more wild-west variety of markets, and the company recently took down a market on whether a nuclear detonation would take place in the near future.

Eventually, any event you're trading on will either happen or not. This is when the market "resolves." On Kalshi, each market has a set of rules specifying the conditions under which it resolves to "yes" or "no." Once trading closes, Kalshi makes an official decision on how the market will be resolved. Polymarket, on the other hand, uses a system called UMA Optimistic Oracle, in which disputed outcomes are resolved by a vote taken by a group of cryptocurrency tokenholders. In practice, this typically means little for users, but in some cases it can lead to differing outcomes, such as when Kalshi and Polymarket disagreed over whether Cardi B technically "performed" at the 2026 Super Bowl halftime show.
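The transparency difference described above is concrete: because Polymarket positions settle in USDC on Polygon, anyone with a public RPC endpoint can enumerate a wallet's token transfers without any Polymarket-specific API. A hedged sketch with web3.py follows; the RPC gateway, token address, and wallet are placeholders to verify before use, not confirmed Polymarket infrastructure.

```python
# Sketch: list a wallet's USDC transfers on Polygon, the chain where Polymarket
# trades settle. RPC gateway, token address, and wallet are placeholders
# (verify current addresses); public RPCs may also cap how many blocks you can
# scan per request.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://polygon-rpc.com"))

TRANSFER_ABI = [{
    "anonymous": False, "name": "Transfer", "type": "event",
    "inputs": [
        {"indexed": True, "name": "from", "type": "address"},
        {"indexed": True, "name": "to", "type": "address"},
        {"indexed": False, "name": "value", "type": "uint256"},
    ],
}]

usdc = w3.eth.contract(
    address=Web3.to_checksum_address("0x2791bca1f2de4661ed88a30c99a7a9449aa84174"),
    abi=TRANSFER_ABI,
)
trader = Web3.to_checksum_address("0x0000000000000000000000000000000000000000")

latest = w3.eth.block_number
logs = usdc.events.Transfer().get_logs(
    from_block=latest - 5_000,
    to_block=latest,
    argument_filters={"from": trader},
)
for log in logs:
    # USDC uses 6 decimals, so scale the raw integer amount accordingly.
    print(log["blockNumber"], log["args"]["to"], log["args"]["value"] / 1e6)
```

Kalshi has no equivalent: its trades clear in dollars on private infrastructure, which is why spotting a suspicious Kalshi trader requires the company or a regulator, while spotting a Polymarket trader requires only chain analysis.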

Polymarket
Business Insider · 24d ago

BJP's one engine runs on misusing institutions, other on stoking 'communal discord': TMC's Abhishek

Kolkata, Apr 4 (PTI) Attacking the BJP over its "double engine" pitch, TMC leader Abhishek Banerjee on Saturday alleged that while one engine runs on misusing democratic institutions, the other runs on recruiting "local agents" like AIMIM, ISF and AJUP to stoke "communal discord." Banerjee said Bengal will choose a government of the people, by the people, and for the people.

"Double Engine this, Double Engine that. You know what BJP's real Double Engine is? One engine runs on misusing democratic institutions, weaponising the Election Commission to delete genuine voters, transferring honest officers to destabilise the state machinery, and illegally importing outsiders to rig the electoral rolls," the Diamond Harbour MP alleged in a social media post.

"The second engine runs on recruiting local agents like AIMIM, ISF and AJUP to stoke communal discord, create unrest, split votes, and hand over advantage to BJP. But the people of Bengal have seen through this dirty game completely," he added.

Banerjee, the national general secretary of the party, said Bengal has already decided. "Bengal will choose Maa-Mati-Manush, a government of the people, by the people, and for the people. Not a Government that is off the people, buy the people, and far from the people," he said.

Discord
NewsDrum · 24d ago

Iceland boss demands pepper spray and batons for guards after Clapham mob chaos

Security guards should be given extra powers to deal with "violent shoplifters," including the use of truncheons and pepper spray, according to one supermarket boss.

Lord Richard Walker, executive chairman of Iceland, spoke out after throngs of youths caused chaos on Clapham High Street earlier this week. Lord Walker pointed to Spanish laws that allow security guards to carry a limited number of weapons in some circumstances, adding that Spanish marshals "don't mess about." He also lent his backing to M&S retail director Thinus Keeve, who has urged politicians to do more to tackle shoplifting, arguing it is becoming "more brazen, more organised and more aggressive" in stores each day. Mr Keeve spoke out following this week's disorder, which saw a group of teenagers hijack an M&S store.

Speaking to The Times, Lord Walker said he agreed with his counterpart and called on the government to give security guards more powers. He said: "We call it shoplifting, which sounds like a cheeky bit of pilfering, but actually we should just call it out for what it is, which is violent crime. We all saw the footage of marauding gangs and security guards being beaten up. The violent nature of it in Clapham is horrific.

"I've always argued for more powers for security guards. You go to Spain and all the security guards have pepper spray and a truncheon, they don't mess about."

Less than one in five shoplifting offences in England and Wales led to a court summons last year, according to government statistics, which also show the Met Police had the lowest charge rate (7.3%) last year. Now, bosses from some of the UK's best-known high street chains have written to the Mayor of London, Sir Sadiq Khan, and Home Secretary Shabana Mahmood, urging action on the problem, which Mr Keeve says is becoming "more routine."

Figures from charity The Retail Trust show 77% of shopworkers report experiencing abuse in the past year, with almost a quarter physically assaulted and nearly half dealing with abuse each week.

The National Police Chiefs' Council said: "Strong progress has been made in tackling repeat retail crime offenders since the introduction of the Retail Crime Action Plan and Operation Pegasus in 2024.

"Policing and industry partners are now working more closely than ever, with improved intelligence-sharing and more effective targeting of prolific shoplifters and organised retail crime groups.

"Our next steps are to deliver a national approach to retail crime reporting. These steps will ensure technology, intelligence and operational practice continue to advance together, supporting a safer, more resilient retail environment for staff, businesses, and communities."

CHAOS
AOL.com · 24d ago

SpaceX, Amazon, and Google want orbital data centers -- four engineering barriers reveal who really benefits - Silicon Canals

Three years ago, when I moved to Singapore for wealth accumulation and business scaling, I was struck by the sheer physical presence of the data infrastructure around me. The island is small, and yet it hosts a substantial number of data centers. You can feel them. They consume a significant portion of the nation's total electricity, and on humid afternoons in Jurong, you can almost sense the heat they throw off mixing with the tropical air. That experience gave me a visceral understanding of something most people encounter only as abstraction: computing has a physical footprint, and whoever controls that footprint controls an enormous amount of power.

I thought about this when I read, a few weeks ago, reports that SpaceX had filed an application with the US Federal Communications Commission related to orbital data infrastructure. The scale is absurd enough to be interesting, but the implications are what matter. When you examine the engineering barriers standing between today's orbital ambitions and tomorrow's space-based data centers, a clear pattern emerges: these barriers don't just slow deployment, they filter who can play. And the companies best positioned to overcome them are the same ones already dominating terrestrial cloud computing. Orbital data centers, if they arrive, won't democratize AI infrastructure. They'll concentrate it further, moving the physical machinery of intelligence beyond the reach of national regulators and smaller competitors alike.

Most people think of space-based computing as science fiction, or at best a far-future luxury. The conventional wisdom says the economics don't work, that the whole idea is a vanity project for billionaires who've run out of terrestrial things to disrupt. But what's happening right now is more grounded and more complicated than that framing allows. Jeff Bezos has said that the tech industry would start building gigawatt-scale data centers in space within the coming decades, powered by 24/7 solar energy that's basically free once it's deployed. Reports suggest Google is exploring satellite-based computing infrastructure. The Guardian reported Google's interest is driven by the staggering energy demands of AI. And reports indicate that Starcloud, a startup based in Washington State, has launched satellite hardware with advanced GPU capabilities, marking an early orbital test of AI-grade chips. So the ambition is real and funded. The question is whether the engineering can follow.

Analysis from MIT Technology Review laid out four specific barriers that stand between the dream and the deployment. These aren't abstract concerns. They're interconnected engineering constraints, and solving one doesn't necessarily make the others easier. The barriers are worth understanding because they reveal something about the broader trajectory of AI infrastructure, who controls it, and what trade-offs we're being asked to accept.

The pitch for space-based data centers often starts with a seductive claim: in space, you can dump waste heat into the vacuum. On Earth, cooling accounts for a substantial portion of a data center's energy bill. Water consumption is enormous. Data centers are projected to need significant water resources for cooling as AI workloads scale. Space, by contrast, offers an infinite heat sink. Problem solved. Except it isn't. This is where the physics gets unforgiving. On Earth, cooling works primarily through convection and conduction. Air or water carries heat away from the chip.
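Before the details, it helps to put numbers on the alternative. A radiator in vacuum sheds heat according to the Stefan-Boltzmann law, P = εσAT⁴, and a back-of-envelope calculation shows the scale. Every figure below is an illustrative assumption, not a quoted specification:

```python
# Back-of-envelope radiator sizing for an orbital data center.
# Stefan-Boltzmann: P = emissivity * sigma * area * T^4. All inputs assumed.
SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W / (m^2 K^4)
EMISSIVITY = 0.9      # good radiator coating (assumption)
TEMP_K = 300.0        # radiator temperature; hotter sheds more but stresses chips

flux_w_per_m2 = EMISSIVITY * SIGMA * TEMP_K**4   # ~413 W per square meter
waste_heat_w = 1e9                               # 1 GW, the scale Bezos invokes

area_km2 = waste_heat_w / flux_w_per_m2 / 1e6
print(f"radiative flux at {TEMP_K:.0f} K: {flux_w_per_m2:.0f} W/m^2")
print(f"radiator area to reject 1 GW: {area_km2:.1f} km^2")  # ~2.4 km^2
```

Roughly two and a half square kilometers of radiator for a single gigawatt, before accounting for sunlight falling on the radiators themselves, is the mass-and-launch-cost spiral the next paragraphs describe.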
In the vacuum of space, neither mechanism is available. The only way to shed heat is through radiation: infrared photons slowly carrying energy into the void. Radiative cooling is dramatically less efficient than convective cooling. And if you're placing data centers in sun-synchronous orbits, where they'd have constant solar exposure (the whole point of 24/7 solar power), the equipment temperature remains extremely high, exceeding the safe operating limits for most commercial processors.

Industry experts have noted that thermal management and cooling in space is a significant challenge. The ESA has been working on this for satellite communications, exploring advanced heat-pump systems for managing thermal loads in orbit. But those systems are designed for individual satellites, not racks of GPUs performing trillion-parameter model training.

The proposed solar arrays needed to power a gigawatt-scale orbital data center would need to be massive, potentially stretching hundreds of meters. Those arrays themselves generate heat. The servers generate heat. And the only thing carrying that heat away is the slow bleed of infrared radiation. You'd need massive radiator surfaces, which add weight, which increases launch costs, which erodes the economic advantage you were chasing in the first place. This is the pattern that makes space infrastructure so tricky: the solution to each problem creates a new constraint. Bezos has argued that space-based solar power could have very low operating costs once infrastructure is in place, though the system costs remain significant. But the system cost of managing what happens downstream of those photons is not free at all.

The second barrier is the one that gets the least public attention but may be the most fundamental. Space is a radiation environment. Earth's magnetosphere shields us from most cosmic radiation and solar particle events, but in low Earth orbit, electronics are exposed to constant bombardment.

Electronics in space face three types of radiation damage: single-event upsets, where a high-energy particle flips a bit in memory, corrupting data; single-event latchups, where a particle creates a short circuit that can destroy a component; and cumulative degradation, where constant radiation exposure gradually degrades chip performance over months and years.

Current commercial AI chips, the Nvidia H100s and their successors, are not designed for this environment. They're fabricated at nanometer scales where even a single ionizing particle can cause errors. Researchers who study radiation effects on electronics have cautioned that these problems may outweigh the advantages of putting data centers into space.

Radiation-hardened chips exist, of course. Military and space agencies have used them for decades. But they're generations behind commercial silicon in performance. A radiation-hardened processor suitable for a satellite control system is categorically different from the kind of chip you need to train a frontier AI model. The performance gap is measured in orders of magnitude.

There's an analogy that helps here. Research has shown that airline crews have a higher risk of developing skin cancer from their frequent exposure to elevated radiation at cruising altitude. And that's at 35,000 feet, still well within Earth's atmosphere and magnetosphere. Now imagine the exposure at 500 kilometers, without atmospheric shielding, for hardware that needs to run continuously for years. This is where I keep coming back to the economics.
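Part of those economics is error handling. A single-event upset is nothing more than one flipped bit, but where the bit lands determines whether the damage is noise or catastrophe. A toy ground-level illustration, which simulates only the failure mode, not the radiation:

```python
# Toy single-event-upset demo: flip one bit of an IEEE-754 double (standing in
# for a model weight) and see how the damage depends on which bit flips.
import struct

def flip_bit(value: float, bit: int) -> float:
    """Return `value` with bit `bit` (0 = least significant) flipped."""
    (raw,) = struct.unpack("<Q", struct.pack("<d", value))
    (out,) = struct.unpack("<d", struct.pack("<Q", raw ^ (1 << bit)))
    return out

weight = 0.0173  # an unremarkable model weight
for bit in (1, 30, 52, 62):  # low/high mantissa bits, low/high exponent bits
    print(f"bit {bit:2d}: {weight} -> {flip_bit(weight, bit)}")
# Low mantissa flips barely move the value; a high exponent flip can blow it
# up by ~300 orders of magnitude -- why ECC and redundancy are non-negotiable.
```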
Nvidia has expressed interest in space computing, and orbital tests of advanced GPUs represent real milestones. But a single GPU surviving a short orbital test is very different from thousands of GPUs running at full load for years. The error rates, the redundancy requirements, the replacement schedules: all of these costs accumulate.

Ambitious satellite deployment plans sound transformative until you consider what massive numbers of additional objects in low Earth orbit would actually mean. Experts have noted significant constraints: orbital shells have finite capacity, and accommodating millions of satellites would face fundamental physical limitations that could only be overcome by monopolistic control of orbital space. Estimates suggest that the safe carrying capacity of low Earth orbit is finite. Existing constellations already perform significant numbers of collision avoidance maneuvers. Each maneuver costs fuel, disrupts service, and introduces operational risk. And current constellations are a fraction of what these data center proposals envision.

The debris problem compounds over time. Every collision generates fragments. Every fragment becomes a potential collision partner. The cascade scenario, sometimes called Kessler syndrome after NASA scientist Donald Kessler, describes a self-reinforcing chain of collisions that could render certain orbital altitudes unusable for decades. Safe de-orbiting operations require substantial separation between satellites. With millions of objects, the orbital geometry becomes an optimization problem of extraordinary complexity. And it's not just your own satellites you need to worry about. Other nations, other companies, existing debris from decades of space activity: the sky is shared infrastructure, even if no one governs it as such.

I've been writing lately about how AI infrastructure concentrates power. In my recent piece on smaller AI models built for sovereignty, I explored how the trillion-dollar arms race in AI hardware creates dependency structures that mirror colonial extraction patterns. Orbital data centers intensify this dynamic. If one company controls the orbital shells, the launch vehicles, and the computing infrastructure, you've created a vertical monopoly that literally operates above every nation on Earth. The governance vacuum in space is striking. There is no equivalent of national grid regulators or environmental protection agencies for orbital infrastructure. The Outer Space Treaty of 1967 was written for a world where a handful of governments launched a few dozen satellites. It has almost nothing useful to say about a private company deploying massive data infrastructure in orbit.

Terrestrial data centers fail constantly. Hard drives die, memory modules corrupt, cooling fans seize. The reason this doesn't cause catastrophic data loss is that operators can replace components quickly and cheaply. A technician drives to the facility, swaps the part, logs the issue. The round-trip time from diagnosis to repair can be measured in hours.

In orbit, there is no equivalent process. Every repair mission requires a rocket launch. Every component swap requires robotic systems that don't yet exist at the scale needed. The economics of in-orbit servicing are brutal: it costs thousands of dollars per kilogram to put anything in low Earth orbit, and the kinds of components that fail most often (fans, connectors, drive mechanisms) are exactly the kinds of components that are hardest to design for robotic replacement.
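Those brutal economics are easy to sketch. Under illustrative assumptions (the launch price, sled mass, and technician cost below are placeholders for scale, not quoted figures), the gap looks like this:

```python
# Illustrative repair-economics comparison: swapping a failed 20 kg server
# sled on the ground vs. launching its replacement to low Earth orbit.
# Every figure is an assumption for scale, not a quoted price.
LAUNCH_USD_PER_KG = 3_000   # optimistic LEO launch pricing (assumed)
SLED_MASS_KG = 20           # failed unit's mass (assumed)
GROUND_SWAP_USD = 500       # technician visit plus part logistics (assumed)

orbital_swap_usd = SLED_MASS_KG * LAUNCH_USD_PER_KG  # launch mass alone;
# ignores the servicing vehicle, rendezvous operations, and robotics time.

print(f"ground swap:  ${GROUND_SWAP_USD:,}")
print(f"orbital swap: ${orbital_swap_usd:,} (launch mass only)")
print(f"ratio: {orbital_swap_usd / GROUND_SWAP_USD:.0f}x")
```

And that multiple is a floor; the servicing vehicle and rendezvous overhead sit on top of it, which is why the disposable model discussed next keeps resurfacing.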
There are early moves in this direction. Axiom Space tested an Amazon Web Services Snowcone cloud-computing device aboard the International Space Station in 2022 and is preparing to send Orbital Data Center nodes into low Earth orbit. But the ISS is a crewed facility with human hands available. Autonomous orbital data centers would need to be either self-healing or disposable.

The disposable model has its own problems. If you design satellites to be replaced rather than repaired, you're launching vastly more hardware over the lifetime of the system, which means more debris, more launch emissions, and a cost curve that may never achieve the savings that justify the whole enterprise.

A 2024 feasibility study by Thales Alenia Space found that technology forecasts put the feasibility of orbital data centers somewhere between 2036 and 2050, though experts acknowledge significant operational and safety challenges remain. That feasibility window is remarkably wide. It means the people making the most optimistic projections still think we're at least a decade away, and the more cautious estimates push to mid-century. Bezos estimated 10 to 20 years. The gap between these timelines tells you something about the confidence level. And crucially, it tells you something about the function these projections serve in the meantime: they attract investment, secure orbital spectrum rights, and position the companies making them as inevitable infrastructure providers, long before a single server rack operates reliably above the atmosphere.

Step back from the engineering for a moment. Why does any of this matter? The answer is that AI's energy consumption is growing faster than Earth's energy infrastructure can accommodate it. Data centers already account for a significant portion of global electricity consumption, and that figure is climbing sharply as AI training runs get larger. Water consumption for cooling is straining resources in regions where water scarcity is already a crisis.

The orbital data center concept is, at its root, an attempt to escape these constraints. Free solar power, infinite thermal capacity, no water requirements. It's elegant as a thought experiment. But it assumes that the problem to be solved is how to keep scaling AI at the current rate rather than whether we should reconsider the rate at which we're scaling AI. That's a political question masquerading as an engineering question.

And it connects to something I've been thinking about since writing about the Global South building AI on $50 hardware. There are organizations around the world achieving useful AI inference on tiny, efficient devices because they have no choice. Their constraints produce creativity. The constraint that orbital data centers try to escape, the finite energy and water on Earth, might similarly produce more efficient AI architectures if we actually had to live within it.

I'm not a degrowth absolutist. Living in Singapore, where pragmatic ambition is practically a national value, I understand the impulse to build your way out of bottlenecks. But there's a difference between building smart infrastructure and building infrastructure that creates new categories of systemic risk because you refused to optimize what you already have.

Consider who benefits from orbital data centers. The companies best positioned to build them, SpaceX, Amazon, Google, are the same companies that dominate terrestrial cloud computing. Space-based infrastructure doesn't democratize computing.
It concentrates it further, because the barrier to entry is a launch vehicle and hundreds of billions of dollars in capital. If AI training eventually migrates to orbit, the power dynamics shift in ways that are hard to reverse. National regulators lose jurisdiction. Terrestrial competitors lose the ability to compete on equal terms. The physical infrastructure of intelligence, the hardware that shapes what questions can be asked and answered, moves beyond the reach of any single government's authority.

The honest assessment is somewhere between the skeptics and the enthusiasts. Some computing will move to space. It's already started. Orbital GPU tests have happened. Axiom is sending hardware up this year. Google is exploring satellite-based infrastructure. These are real projects with real funding. But the vision of massive orbital data centers replacing terrestrial infrastructure is closer to corporate positioning than engineering roadmap. Regulatory filings are as much about securing orbital spectrum and positioning rights as they are about concrete deployment plans. Long timelines conveniently attract investment without requiring immediate delivery.

The four barriers, thermal management, radiation-hardened chips, space debris, and maintenance, are real. They're not the kind of barriers that disappear with one breakthrough. They're systemic, meaning each one constrains the solution space for the others. Making chips radiation-resistant typically means making them less efficient, which means more heat, which makes the thermal problem worse. Adding more satellites worsens the debris problem, which increases collision avoidance fuel consumption, which reduces operational lifetime, which worsens the maintenance problem. (A toy sketch at the end of this piece puts illustrative numbers on that first chain.)

Expert warnings deserve to sit at the center of this discussion: the problems may outweigh the advantages. That's not a definitive no. It's a structural caution that the engineering community takes seriously even as the investor class charges ahead.

What I keep coming back to is the pattern. The same companies that built terrestrial data centers without adequately accounting for their water and energy externalities are now proposing to build orbital data centers without adequately accounting for debris, radiation, and maintenance externalities. The impulse is always to scale first, then manage the consequences. Space forgives even less than Earth does.

The four engineering barriers I've outlined aren't just technical obstacles. They're filters. Each one raises the capital requirements, the technical complexity, and the operational risk to levels that only a handful of organizations on Earth can absorb. That's the point. Whether intentionally or not, the difficulty of orbital data centers functions as a moat: it excludes everyone except the companies that already dominate.

SpaceX benefits because it controls the launch vehicles. No orbital data center exists without rockets, and SpaceX has the lowest cost per kilogram to orbit by a wide margin. Every satellite launched, every component replaced, every failed unit de-orbited: that's revenue for SpaceX regardless of whether the data center business itself ever turns a profit. Amazon benefits because it runs AWS, the world's largest cloud infrastructure provider, and because Bezos owns Blue Origin, giving it potential vertical integration from launch to compute.
Google benefits because it has the AI workloads that would justify the investment, the capital to absorb decades of R&D losses, and the strategic incentive to lock competitors out of the next generation of infrastructure.

Notice who doesn't benefit. Smaller cloud providers, who can't afford launch costs. Developing nations, which are already dependent on US-based cloud infrastructure and would become more so if that infrastructure moved to orbit, beyond their regulatory reach entirely. Open-source AI communities, whose ability to train competitive models depends on access to affordable compute, not compute that requires a space program. European and Asian tech companies that might compete on terrestrial infrastructure but cannot compete in a domain where the entry ticket is a launch vehicle and a hundred billion dollars.

The engineering barriers reveal this clearly. Thermal management at scale requires custom-designed radiator systems that only well-funded aerospace programs can develop. Radiation-hardened AI chips don't exist yet, and the companies most likely to develop them are the same semiconductor giants already partnered with SpaceX, Amazon, and Google. The debris problem favors whoever gets to orbit first and in the largest numbers, creating a first-mover advantage that could effectively claim the most viable orbital shells. And the maintenance problem guarantees ongoing dependency on whoever controls the launch infrastructure.

This is not democratization. This is the construction of a new layer of infrastructure monopoly, one that operates in a jurisdiction-free zone above the planet. And it is being built under the banner of solving AI's energy crisis, a crisis created by the same companies now proposing to escape Earth's constraints rather than operate within them.

The four things we'd need to put data centers in space are not mysteries. They're known problems with identifiable, if distant, solution paths. The real question, the one no FCC filing or tech keynote answers, is whether solving them creates more value than simply building smarter infrastructure on the ground we're standing on. The engineering barriers suggest that orbital data centers, if they ever work, will serve the interests of the companies building them far more than the societies those companies claim to serve. My suspicion is that the answer depends entirely on who's doing the accounting. And right now, the accountants work for the same people building the rockets.
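To put illustrative numbers on the first of those chains, the radiation-to-thermal feedback, here is the toy sketch promised above. Every constant is an assumption chosen to show the direction of the coupling, not a real design point; none of these figures come from SpaceX, Amazon, Google, or any published study.

```python
# Toy model of one coupling between the four barriers: radiation-hardened
# chips are assumed ~30% less efficient, so doing the same work dissipates
# more heat, which needs more radiator mass, which costs more to launch.
# All constants are assumptions for illustration only.

BASE_POWER_KW = 100.0            # payload power per node, unhardened (assumed)
HARDENING_EFFICIENCY_HIT = 0.30  # efficiency lost to rad-hard parts (assumed)
RADIATOR_KG_PER_KW = 25.0        # radiator mass per kW rejected (assumed)
LAUNCH_COST_PER_KG = 2_500       # USD per kg to low Earth orbit (assumed)

def extra_heat_kw(base_kw: float, efficiency_hit: float) -> float:
    """Additional heat from doing the same compute on less efficient,
    radiation-hardened silicon."""
    hardened_kw = base_kw / (1.0 - efficiency_hit)
    return hardened_kw - base_kw

heat = extra_heat_kw(BASE_POWER_KW, HARDENING_EFFICIENCY_HIT)
radiator_kg = heat * RADIATOR_KG_PER_KW
launch_cost = radiator_kg * LAUNCH_COST_PER_KG
print(f"Extra heat to reject: {heat:.0f} kW per node")
print(f"Extra radiator mass:  {radiator_kg:.0f} kg per node")
print(f"Extra launch cost:    ${launch_cost:,.0f} per node")
# ~43 kW of extra heat -> ~1,071 kg of radiator -> ~$2.7M more per node:
# one barrier (radiation) feeding directly into two others (thermal, cost).
```

Multiply that per-node penalty across a fleet of thousands and the "free solar power" framing starts to look considerably less free.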

Silicon Canals, 25d ago

BTS V Gets Wooga Squad Take 2.0 Dance Challenge, But They Turn It Into Total CHAOS - WATCH Funny Video

BTS member V's friends from Wooga Squad, Park Seo-Joon, Choi Woo-Shik, Park Hyung-Sik, and Peakboy, joined him in the 2.0 dance challenge but gave it a chaotic twist that has fans laughing. V received the biggest show of support from Wooga Squad for his band's latest music video, 2.0. BTS surprised fans on April 1 by dropping the full MV, the second from their comeback album ARIRANG, a day after unveiling its teaser. While 2.0 has already left ARMYs gaga over its creative execution and its ode to the Korean classic Oldboy, V's close friend circle standing behind him is the cherry on top. V, aka Kim Taehyung, posted a video of his buddies taking on the 2.0 dance challenge, and the chaos in the video is what makes it funny and special.

TimesNow, 25d ago

Fastjet flight delay sparks chaos at OR Tambo in Johannesburg

What was meant to be a quick morning hop from Johannesburg to Harare turned into a nine-hour ordeal for passengers aboard a delayed Fastjet flight, with scenes of frustration, protest and police intervention unfolding at OR Tambo International Airport. The early morning flight, scheduled to depart at 6:45am for the short 1-hour-45-minute journey, only took off at 3:52pm after multiple delays and an aborted takeoff, leaving travellers stranded for hours with limited communication from the airline. Passengers described a chaotic situation as departure times were repeatedly pushed back throughout the day, with explanations ranging from technical faults to the unavailability of a critical spare part. Many said the lack of clear and consistent updates worsened tensions inside the terminal. As the delay dragged into the afternoon, tempers flared. Some travellers resorted to drinking and smoking, while others staged a spontaneous protest, singing and carrying placards mocking the airline as "Slowjet." Police were eventually called in to restore order as the situation escalated. "One passenger even attempted to force their way into the cockpit and was quickly offloaded," said a traveller who recounted the incident. Several other passengers were also removed from the flight amid growing unrest. The situation briefly appeared to improve when passengers were boarded around midday. However, hopes were dashed when the aircraft aborted takeoff due to another reported technical issue, further fuelling anxiety and anger among those on board. "We were told different stories throughout the day -- from waiting for a spare part from Harare to other technical problems," one passenger said. "Some insiders claimed the airline knew about the delay the previous day but failed to inform us, which is unprofessional." The disruption comes during the busy Easter travel period, when demand on the Johannesburg-Harare route surges. Airlines have been increasing frequencies to cater for the traffic, with competitors such as South African Airways, Airlink, FlySafair, CemAir and Air Zimbabwe all servicing the high-demand corridor. FlySafair has ramped up operations to double daily flights during peak periods, while South African Airways has increased frequencies to around 12 weekly flights, offering travellers more flexibility. Against this backdrop of growing competition and improved connectivity, Fastjet's prolonged delay stood in stark contrast to its brand promise of efficiency, with passengers left questioning the airline's reliability. By the time boarding was finally completed and the aircraft departed, the airline's name had become the subject of ridicule among exhausted travellers -- a journey that was supposed to be fast had instead become an all-day wait.

Bulawayo24 News, 25d ago

Why are banks subscribing to Grok to secure the SpaceX IPO?

Elon Musk has taken the significant step of requiring banks and advisors working on SpaceX's planned IPO to subscribe to Grok, his AI chatbot. This comes as SpaceX boosts its target IPO valuation to over $2 trillion, potentially making it the largest stock market listing in history. The company aims to raise $75 billion, a figure that would dwarf previous "mega-IPOs" such as Saudi Aramco ($25.6 billion) and Alibaba ($25 billion). Major financial institutions including Morgan Stanley, Goldman Sachs, JPMorgan, Bank of America, and Citigroup are serving as the lead banks. As reported by Bloomberg, the Starbase, Texas-headquartered rocket maker boosted its target initial public offering valuation above $2 trillion, setting the stage for what could become the largest stock market listing on record. Regarding financial commitment, some banks have already agreed to spend tens of millions of dollars annually on Grok and are currently integrating the AI into their internal IT systems. As several news outlets have reported, SpaceX, Elon Musk, and the majority of the involved banks have declined to comment or have not yet responded to inquiries. The NY Times noted that during a major IPO, "banks find ways to integrate themselves with the company going public, as well as its chief executive." Even the more conservative estimates, a raise of more than $50 billion at a valuation above $1 trillion, would mean substantial fees for the banks on what may end up being the biggest IPO of all time. For context, Musk was recently awarded a new pay package worth up to $1 trillion over the next decade if Tesla hits an $8.5 trillion market capitalization and other targets.

The News International, 25d ago

Russia's VPN Ban Sparks Payment Chaos, Pavel Durov Calls It "Digital Resistance"

Russia's recent attempt to block Virtual Private Networks (VPNs) has triggered unexpected consequences, disrupting everyday transactions and causing widespread confusion. Telegram founder Pavel Durov revealed on Saturday that the crackdown led to a malfunction in a domestic payment system, leaving millions of Russians struggling to complete routine purchases. The disruption unfolded on Friday, creating chaos in several public spaces. In Moscow, metro authorities were forced to let passengers pass through turnstiles without paying, while a regional zoo asked visitors to switch to cash after electronic payments failed. Shoppers across the country reported difficulties completing transactions, highlighting the scale of the breakdown. Durov, who has long positioned himself as a critic of digital restrictions, described the incident as a turning point. "Their blocking attempts just triggered a massive banking failure," he wrote on Telegram. He added that tens of millions of Russians are now resisting the government's digital controls, framing the backlash as part of a broader movement he called "Digital Resistance." The payment system failure underscores how deeply digital infrastructure is tied to everyday life in Russia. By targeting VPNs -- tools widely used to bypass censorship and access restricted content -- authorities inadvertently disrupted financial networks that rely on stable internet connections. The result was a ripple effect across transport, retail, and entertainment sectors. Public reaction has been swift, with many citizens turning to alternative methods to maintain access to online services. VPN usage has surged despite the crackdown, as users seek ways to bypass restrictions and restore normalcy in their digital lives. Durov's comments have amplified the sense of defiance, portraying the incident as evidence that attempts to control the internet can backfire. This episode highlights the growing tension between Russia's push for tighter digital regulation and the population's reliance on open internet tools. While the government aims to limit access to certain platforms and services, the fallout from the VPN ban shows how such measures can disrupt critical systems and fuel public resistance.

Republic World, 25d ago

SpaceX Files Confidentially for $1.75T IPO

SpaceX has submitted a confidential draft registration to the U.S. Securities and Exchange Commission, targeting a $1.75 trillion valuation and a raise of up to $75 billion -- what would be the largest initial public offering in financial history. The filing, internally codenamed "Project Apex," was first reported by Bloomberg and confirmed independently by CNBC and Reuters. SpaceX has not publicly commented. A confidential filing allows a company to submit its financials to the SEC for regulatory review before making them public -- a standard step before a roadshow. According to CNBC, 21 banks have been lined up to manage the offering, with Bank of America, Citigroup, Goldman Sachs, JPMorgan Chase, and Morgan Stanley holding senior bookrunner roles. SpaceX is also exploring a dual-class share structure to preserve insider voting control and plans to allocate up to 30% of shares to retail investors -- roughly three times the typical allocation. At $1.75 trillion, SpaceX would rank above every S&P 500 company except Nvidia, Apple, Alphabet, Microsoft, and Amazon. The valuation rests primarily on Starlink, SpaceX's satellite internet division. The service ended 2025 with 9.2 million subscribers across 150 countries, generating approximately $16 billion in annual revenue, with projections pointing toward $22 billion by year-end 2026. SpaceX merged with Musk's AI venture xAI in February 2026, folding the Grok chatbot and social network X into a single entity that Musk valued at $1.25 trillion at the time. The company has accumulated more than $24.4 billion in federal government contracts since 2008, spanning NASA, the Air Force, and Space Force, according to FedScout. Musk owns approximately 44% of SpaceX. His current net worth sits at roughly $823 billion, according to Forbes. A successful listing at the target valuation would push him toward becoming the first individual in history to surpass $1 trillion in net worth -- and the first person to simultaneously lead two separate trillion-dollar publicly traded companies. Tesla currently carries a market cap of approximately $1.4 trillion. The filing positions SpaceX ahead of OpenAI and Anthropic, which are both reportedly weighing public offerings before year's end. If all three proceed, Bloomberg has described 2026 as potentially the most consequential year for technology IPOs since the dot-com era. The SpaceX name has long been exploited in crypto markets through impersonation scams -- a pattern documented across multiple platforms and token launches -- though the IPO itself represents an altogether different category of market event. It arrives at a moment of intensifying institutional appetite for new financial products, a trend visible across crypto ETF launches and alternative asset offerings that Wall Street is moving quickly to capture.

crypto.news, 25d ago

Four in Suitcases Stars Face Chaos and Detention in Miami

Participants of the show Četri uz koferiem (Four with Suitcases), including Antra Stafecka and Eilanda, encountered significant difficulties during a trip to Miami, involving legal detention and financial constraints. Antra Stafecka and Eilanda, along with other members of the production, were detained at the Miami airport. According to reports, the group was questioned by authorities for several hours. The experience was described as nerve-wracking, with reports indicating that Stafecka struggled to manage her emotions during the incident. Beyond the legal complications at the airport, the group faced challenges regarding their budget and accommodations in the city. The celebrities noted a lack of funds during their stay, stating that they could only afford one cocktail to be shared among three people. The participants of Četri uz koferiem were also forced to share a single room in Miami, leading to further dissatisfaction among the group.

News Directory 3, 25d ago

SpaceX Awarded $178M Contract to Launch Missile-Tracking Satellites

The U.S. Space Force has awarded Elon Musk's rocket company, SpaceX, a $178.5 million contract to launch missile-tracking satellites for the Space Development Agency. The agreement, called SDA-4, includes two Falcon 9 launches starting in the third quarter of 2027, with one at Cape Canaveral in Florida and the other at Vandenberg in California. The satellites, built by Sierra Space, are designed to improve the military's ability to detect and track missile threats from orbit. This deal is part of the National Security Space Launch Phase 3 Lane 1 program, which focuses on faster and more cost-effective launches. The program is designed to give the Space Force more flexibility when sending satellites into different orbits. In practice, this allows companies like SpaceX to deliver payloads more efficiently and on tighter timelines. Officials say that this structure helps speed up deployments while also reducing costs, which is especially important for time-sensitive missions like missile tracking. In addition, SpaceX continues to gain momentum when it comes to national security launches. In fact, the Space Force recently shifted a GPS III satellite launch from United Launch Alliance (ULA) to SpaceX after issues with ULA's Vulcan rocket. Importantly, that was the fourth straight GPS III mission transferred to SpaceX. This is noteworthy because years ago, ULA dominated this market. However, SpaceX has steadily taken market share over time. As a result, the company is expected to handle around 60% of Phase 3 launches through 2032, which equates to roughly $6 billion in contracts. When it comes to Elon Musk's companies, most of them are privately held. However, retail investors can invest in his most popular company, Tesla (TSLA). Turning to Wall Street, analysts have a Hold consensus rating on TSLA stock based on 13 Buys, 11 Holds, and eight Sells assigned in the past three months. Furthermore, the average TSLA price target of $394.36 per share implies 9.4% upside potential.

Markets Insider, 25d ago

Anthropic Blocks Third-Party AI Agents for Claude Pro and Max Users

Anthropic is disabling the ability for users on its Claude Pro and Max subscription plans to power third-party AI agents, such as OpenClaw, using their subscription credentials. The change takes effect on April 4, 2026, at 12 pm PT / 3 pm ET. Subscribers to the Claude Pro plan, which costs $20 monthly, and the Max plan, which ranges from $100 to $200 monthly, will no longer be able to connect Claude models to third-party agentic tools. Anthropic cited the strain this usage placed on its engineering and compute resources as the primary reason for the move. Boris Cherny, Head of Claude Code at Anthropic, stated on X that "capacity is a resource we manage thoughtfully and we are prioritizing our customers using our products, and API." He added that the company's subscriptions were not designed for the usage patterns associated with these third-party tools. To continue using external agents, subscribers must now opt into a pay-as-you-go extra usage billing system or use Anthropic's application programming interface (API). Unlike the flat-rate Pro and Max plans, the API charges for every token of usage. Anthropic explained that its first-party tools, including the AI vibe-coding harness Claude Code and the business app interfacing tool Claude Cowork, are engineered to maximize prompt cache hit rates. This process reuses previously processed text to reduce the amount of compute required for each request. Third-party harnesses like OpenClaw often bypass these efficiencies. Cherny explained on X that third-party services are not optimized in this manner, making their operation unsustainable for the company. The economic disparity between subscriptions and API costs is significant. Growth marketer Aakash Gupta noted on X that a single OpenClaw agent operating for one day could generate between $1,000 and $5,000 in API costs. Anthropic was absorbing that difference for every user who routed through a third-party harness, watching its margin evaporate in real time. To ease the transition, Anthropic is offering existing subscribers a one-time credit equal to the price of their monthly plan, which can be redeemed until April 17, 2026. The company is also offering a discount of up to 30% for users who pre-purchase extra usage bundles. These bundles are intended as a middle ground between a flat-rate subscription and a full enterprise API account. This move follows a period of increased restrictions. Anthropic had previously implemented stricter session limits every five hours during business hours, specifically between 5 am and 11 am PT (8 am to 2 pm ET). The company stated these limits affected up to 7% of users and were meant to manage growing demand. The decision has sparked frustration among the developer community. Peter Steinberger, the creator of OpenClaw, who joined OpenAI in February 2026, suggested the timing of the lockout coincides with Anthropic integrating similar features into its own tools. Anthropic recently added the ability to message agents through external services like Telegram and Discord to Claude Code, capabilities that had previously helped OpenClaw gain popularity. Steinberger claimed on X that Anthropic first copied these features into a closed harness before locking out open-source alternatives. Other developers have expressed concerns regarding cost. The founder of Telaga Charity, @ashen_one, indicated that switching to API keys or extra usage bundles would make the tools too expensive to be viable, potentially forcing a move to different AI models. This cutoff is the latest in a series of moves by Anthropic to restrict third-party access to subscription authentication. By restricting subscription access to its own closed harness, Anthropic gains more granular control over rate limits and telemetry, though it risks alienating the power users who developed the early agentic ecosystem.
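For a sense of the economics behind the cutoff, here is a rough cost model of prompt caching for an always-on agent. The prices and token volumes are assumptions for the exercise, not Anthropic's published rates or OpenClaw's measured traffic; what matters is the ratio between cached and uncached costs.

```python
# Rough model of why prompt-cache hit rates decide agent economics.
# Prices and volumes below are illustrative assumptions, not Anthropic's
# published rates; cached input tokens are assumed to bill at ~10% of
# the uncached price.

INPUT_PRICE_PER_MTOK = 3.00    # USD per million uncached input tokens (assumed)
CACHED_PRICE_PER_MTOK = 0.30   # USD per million cached input tokens (assumed)
REQUESTS_PER_DAY = 5_000       # calls made by an always-on agent (assumed)
TOKENS_PER_REQUEST = 40_000    # long context resent on every call (assumed)

def daily_input_cost(cache_hit_rate: float) -> float:
    """Daily input-token bill at a given prompt-cache hit rate."""
    total_mtok = REQUESTS_PER_DAY * TOKENS_PER_REQUEST / 1e6
    cached_mtok = total_mtok * cache_hit_rate
    uncached_mtok = total_mtok - cached_mtok
    return (uncached_mtok * INPUT_PRICE_PER_MTOK
            + cached_mtok * CACHED_PRICE_PER_MTOK)

print(f"0% cache hits:  ${daily_input_cost(0.0):,.0f}/day")
print(f"90% cache hits: ${daily_input_cost(0.9):,.0f}/day")
# With these assumptions: $600/day with no caching vs. $114/day at a
# 90% hit rate -- the kind of gap a flat-rate subscription was absorbing.
```

A harness that bypasses the cache sits at the top end of that range on every request, which is consistent with the per-agent daily figures quoted above.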

News Directory 3, 25d ago

Meta pauses Mercor work after major data breach

AI firms review data security risks as Mercor breach triggers industry-wide concern. Meta has paused its work with data contracting firm Mercor after a major AI data breach, raising concerns across the industry. The decision came after a cyberattack impacted Mercor's systems, with the company and its partners now assessing the scope of the breach and the potential exposure of sensitive training data. Sources close to Meta say the company has suspended all collaborations with Mercor pending an investigation of the breach. Several other big AI companies are also reconsidering their relationships with the startup as the scandal develops. Mercor plays an important role in the world of AI: it supplies huge amounts of human-generated data that companies such as OpenAI and Anthropic need to build sophisticated AI systems. The data is highly confidential because it reveals how these companies go about building their AI software. Proprietary datasets may have been compromised in the breach, but the value of the stolen data to competing firms is not yet known. According to a statement from OpenAI, there was no leak of user data. Mercor informed staff about the breach in late March, stating that its systems were impacted alongside thousands of other organisations. The ongoing situation has left contractors who work on Meta-related projects without any way to record their work hours, which has resulted in work shortages for some individuals. Security researchers believe the breach may be tied to compromised updates of an AI tool called LiteLLM, which could affect thousands of organisations. The hacking group TeamPCP has emerged as the main suspect for the attack, while multiple other groups have stepped forward to take credit.

The News International, 25d ago