DMK MP P. Wilson has strongly criticized the Centre's Foreign Contribution (Regulation) Amendment (FCRA) Bill, arguing that its provisions disproportionately impact civil society organizations. He expressed concern that the legislation could create administrative hurdles that restrict the flow of foreign funds, ultimately affecting the functioning of many non-governmental institutions across the country. Wilson further alleged that the bill specifically targets Christian institutions and undermines minority rights. He warned that increased scrutiny and tighter regulations could limit the ability of these organizations to carry out charitable and social service activities, raising questions about inclusivity and constitutional protections. In response, Union Minister Kiren Rijiju defended the amendment, stating that there is nothing to worry about and that the law is meant to ensure transparency and accountability. He maintained that genuine organizations would not face any difficulties under the revised framework.

Can't-miss innovations from the bleeding edge of science and tech

AI company Anthropic suffered a massive leak of the source code to its Claude Code AI assistant earlier this week, triggering a panicked game of cat and mouse as company representatives sent out copyright takedown requests targeting thousands of copies of the pilfered work. The code allowed tinkerers to reverse engineer aspects of the blockbuster chatbot, highlighting concerns that the leak could give Anthropic's competitors a major leg up. The leak also offered eyebrow-raising clues about upcoming or experimental efforts, including unreleased AI models and a "Tamagotchi"-like feature, called "buddy," that "sits beside your input box and reacts to your coding." Perhaps strangest of all: code snippets also showed that Anthropic is actively tracking how often users use vulgar language. "Claude Code has a regex that detects 'wtf', 'ffs', 'piece of s***', 'f*** you', 'this sucks', etc.," tweeted developer Rahat Chowdhury. "It doesn't change behavior... it just silently logs is_negative: true to analytics." "Anthropic is tracking how often you rage at your AI," he added. "Do with this information what you will." "This is one of the signals we use to figure out if people are having a good experience," Claude Code creator Boris Cherny replied. "We put it on a dashboard and call it the 'f***s' chart." Chowdhury also found that "there is a full mood classification for their insights but it's employee only." "When an Anthropic employee gets frustrated, it pops up a prompt asking them to share their transcript, basically 'hey you seem upset, wanna file a bug report?'" he wrote. Beyond giving us a fascinating insight into how Anthropic has been building its blockbuster assistant, the leak has kept Cherny busy on social media, trying to pick up the pieces following his employer's embarrassing blunder. "It was human error," he insisted in a Wednesday tweet.
"Our deploy process has a few manual steps, and we didn't do one of the steps correctly. We have landed a few improvements and are digging in to add more sanity checks." Cherny also insisted that more AI was the answer to ensure such a leak won't happen again. "Like with any other incident, the counter-intuitive answer is to solve the problem by finding ways to go faster, rather than introducing more process," he wrote. "In this case more automation and [C]laude checking the results." The developer also clarified that "no one was fired" following the leak, calling it "an honest mistake." But now that the cat is out of the bag, developers continue to pore over the wealth of data. Student developer Sigrid Jin's recreated source code repository on GitHub -- dubbed "Claw Code," in a reference to the open-source AI agent OpenClaw -- has been forked, or essentially copied, almost 100,000 times. He told Business Insider that the debacle could result in greater democratization of these kinds of tools. "Non-technical people are using these agents to build real things," Jin said. "We are talking about cardiologists making patient care apps and lawyers automating permit approvals." "It has turned into a massive sharing party," he added.
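Based on Chowdhury's description, the frustration detector is just a keyword regex whose only side effect is an analytics log. A minimal sketch of that idea in Python follows; the actual pattern, function names, and event shape in Claude Code are not public, so everything below is an illustrative assumption:

```python
import re

# Illustrative pattern only -- the leaked regex itself is not public.
# Matches a few of the phrases Chowdhury listed, case-insensitively.
NEGATIVE_PATTERN = re.compile(
    r"\b(wtf|ffs|this sucks|piece of s\w*)\b",
    re.IGNORECASE,
)

def classify_message(message: str) -> dict:
    """Return a hypothetical analytics event for a user message.

    Mirrors the reported behavior: the flag is logged, never used to
    change the assistant's response.
    """
    return {"is_negative": bool(NEGATIVE_PATTERN.search(message))}
```

Per Cherny's reply, a signal like this would only feed a dashboard, not the model's behavior.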
Anthropic, the San Francisco-based artificial intelligence company behind the Claude chatbot, has formally registered a political action committee -- a move that signals a dramatic escalation in the company's willingness to play the influence game in Washington. The filing, first reported by TechCrunch, marks a turning point for a firm that has long positioned itself as the safety-conscious alternative to rivals like OpenAI and Google DeepMind. The PAC, registered with the Federal Election Commission, will allow Anthropic to pool employee contributions and direct funds to political candidates sympathetic to its policy priorities. It's a well-worn tactic in corporate America -- but a relatively fresh one for AI startups, most of which have historically preferred to exert influence through lobbying shops, think-tank donations, and quiet conversations with staffers on Capitol Hill. Not anymore. Anthropic's decision to stand up a PAC comes at a moment when the regulatory environment for AI in the United States is shifting fast. Congress has spent the better part of two years debating how -- and whether -- to impose binding rules on foundation model developers. Multiple bills are circulating in both chambers, ranging from narrow disclosure requirements to sweeping licensing regimes that would hand federal agencies significant oversight authority over the largest AI systems. For a company valued at roughly $60 billion following its latest funding round, the stakes couldn't be higher. The company has been quietly building out its government affairs operation for months. It hired its first dedicated lobbyists in 2024 and opened a Washington office shortly afterward. Federal lobbying disclosure records show that Anthropic's spending on lobbying more than tripled between 2024 and 2025, though the totals remain modest compared with tech giants like Google parent Alphabet or Meta, each of which spends tens of millions annually. 
The PAC represents the next logical step -- a way to put money directly behind the candidates Anthropic believes will shape AI policy most favorably. But the move also carries reputational risk. Anthropic has cultivated a public identity rooted in caution and responsibility. Its founders, Dario and Daniela Amodei, left OpenAI in 2021 partly over disagreements about safety practices, and the company has published extensively on AI alignment research, interpretability, and what it calls "responsible scaling." Launching a PAC -- an instrument of raw political power -- sits uneasily alongside that image. Critics have already begun pointing out the tension. "There's something dissonant about a company that says it's building the most dangerous technology in human history also spending money to influence the politicians who are supposed to regulate it," said one AI policy researcher at a Washington think tank, who asked not to be named because they work with multiple AI companies. The concern isn't unique to Anthropic. It applies to every AI firm now wading into electoral politics. But Anthropic's brand makes the optics sharper. The company, for its part, has framed the PAC as a natural extension of its policy engagement. In a statement reported by TechCrunch, Anthropic said it wants to support candidates "who understand the importance of AI safety and American competitiveness in artificial intelligence." That dual emphasis -- safety and competitiveness -- reflects a messaging strategy the company has refined over the past year. It allows Anthropic to appeal simultaneously to Democrats worried about AI harms and Republicans focused on keeping the U.S. ahead of China in the technology race. It's a shrewd formulation. And it mirrors the broader lobbying posture of the AI industry, which has increasingly wrapped its policy preferences in national security language. 
The argument goes like this: overly burdensome regulation will slow down American AI companies while Chinese competitors, unburdened by democratic constraints, race ahead. Therefore, Congress should regulate lightly and invest heavily. Anthropic hasn't stated it quite so bluntly, but the subtext of its PAC's mission statement is hard to miss. Anthropic is not the first AI-focused company to enter the PAC arena. OpenAI has also ramped up its political spending, and several industry trade groups -- including the Information Technology Industry Foundation and the newer AI-focused lobbying coalitions -- have been active in campaign finance for years. What makes Anthropic's entry notable is the speed. The company is barely five years old. Most startups at this stage are still figuring out their go-to-market strategy, not registering political committees. Then again, most startups aren't sitting at the center of a global debate about existential risk. The timing also coincides with a broader wave of AI-related political activity. The 2026 midterm elections are shaping up to be the first cycle in which AI policy features prominently in campaign messaging. Several Senate candidates have made AI regulation a plank of their platforms, and at least two House races have seen significant spending by tech-aligned super PACs. Anthropic's PAC gives the company a seat at that table -- a way to reward allies and, implicitly, signal consequences to opponents. How much money the PAC will raise remains to be seen. Corporate PACs typically draw contributions from employees, often senior executives, and the amounts tend to be modest in the context of federal elections. A PAC raising a few hundred thousand dollars per cycle won't rival the war chests of major industry groups. But the symbolic value is significant. It tells lawmakers that Anthropic is serious about sustained engagement -- that it isn't going away after one hearing or one bill. 
The company's lobbying priorities offer clues about where the PAC's money might flow. Anthropic has been particularly active on issues related to AI model evaluation, export controls on AI chips, and federal procurement of AI systems. It has advocated for a regulatory framework that distinguishes between different levels of AI capability -- a tiered approach that would impose the strictest requirements only on the most powerful models. This framework, not coincidentally, would likely benefit Anthropic, which competes directly with OpenAI and Google at the frontier of model capability but argues it does so more carefully. On export controls, Anthropic has generally supported restrictions on selling advanced AI chips to China, a position that aligns it with the Biden-era Commerce Department rules and, more recently, with bipartisan sentiment in Congress. The company has been less vocal about the secondary effects of those controls -- the impact on allied nations, the potential for driving chip manufacturing to less regulated jurisdictions -- but its public statements have consistently emphasized the national security rationale. Federal procurement is another area of intense focus. The U.S. government is one of the largest potential customers for AI systems, and companies that shape procurement standards early stand to gain enormously. Anthropic has pitched its models for use in government contexts, emphasizing their safety features and the company's willingness to submit to third-party audits. A PAC that supports candidates friendly to AI adoption in government agencies could accelerate that effort considerably. So where does this leave the broader AI policy debate? The proliferation of AI-linked PACs and lobbying operations has prompted growing concern among civil society groups that the industry is capturing the regulatory process before it even fully begins. 
Organizations like the Electronic Frontier Foundation and the AI Now Institute have warned that the concentration of lobbying power among a handful of well-funded AI companies risks producing rules that serve corporate interests rather than public safety. The entry of Anthropic -- a company that has genuine credibility on safety issues -- into the PAC world complicates that narrative somewhat. But it doesn't resolve it. There's also the question of internal dynamics. PACs require employee buy-in, and Anthropic's workforce includes many researchers who came to the company specifically because of its safety mission. Some may welcome the PAC as a way to amplify that mission politically. Others may view it as a corruption of the company's founding principles. The tension between Anthropic's research culture and its growing corporate ambitions has been a recurring theme in industry circles, and the PAC is likely to intensify it. Dario Amodei has addressed this tension obliquely in past public remarks. In a widely circulated essay last year, he argued that AI companies have a responsibility to engage with government -- not just through research publications, but through active policy advocacy. "If the people building these systems don't participate in shaping the rules, the rules will be shaped by people who don't understand the technology," he wrote. It's a reasonable argument on its face. But it's also the same argument every regulated industry has made since the dawn of modern lobbying. The PAC's formation also arrives against the backdrop of Anthropic's rapidly growing commercial ambitions. The company recently expanded its enterprise offerings, struck cloud partnerships with Amazon Web Services and Google Cloud, and launched new versions of its Claude model aimed at business users. 
Revenue has grown sharply -- reportedly approaching an annualized run rate of several billion dollars -- and the company is competing aggressively for market share in sectors like finance, healthcare, and legal services. Political engagement is, in this context, simply another front in a multi-front competitive war. And competition is fierce. OpenAI, backed by Microsoft, has its own extensive government relations operation and has been courting defense and intelligence agencies. Google DeepMind benefits from Alphabet's massive lobbying infrastructure. Meta has taken a different tack, open-sourcing its Llama models and arguing that open-source AI should face lighter regulation -- a position that conveniently disadvantages its closed-source competitors. In this environment, Anthropic can't afford to be the only major player without a PAC. That doesn't mean the decision was inevitable. Plenty of companies resist the pull of political spending, at least for a time. But Anthropic's leadership appears to have concluded that the window for shaping AI regulation is narrowing, and that passive engagement -- white papers, testimony, op-eds -- isn't enough. A PAC is a blunter instrument. It's also a more effective one. The next few months will reveal the PAC's initial fundraising totals and its first disbursements. Those details will matter. Which candidates receive money, in which races, and at what stage of the election cycle will tell us far more about Anthropic's political strategy than any press release. If the PAC funds candidates across party lines who share a common interest in AI competitiveness and light-touch regulation, it will confirm what many observers already suspect: that Anthropic's political identity is fundamentally pragmatic, not ideological. For Washington, the message is clear. The AI industry isn't just coming to town. It's moving in, hiring staff, opening offices, and now writing checks. 
Anthropic's PAC is one more data point in a trend that has been accelerating for two years. The companies building the most powerful AI systems in the world have decided that political power is not optional. It's infrastructure. Whether that's good for democracy depends on who you ask. What's not in dispute is that it's happening -- and that Anthropic, for all its talk of safety and caution, has decided it would rather be inside the room than outside it.

Rajkummar Rao's miserly obsession with a gifted toaster spirals into chaos. Watch the trailer of Toaster here. For Ramakant, every rupee has a story, and spending it carelessly is never an option. The makers of Toaster unveiled the trailer of the dark comedy, which dives into one man's unwavering commitment to saving money, no matter what it costs him. The trailer introduces Rajkummar Rao as Ramakant, an endearing miser who believes every rupee saved is a victory. Whether it's squeezing value out of everyday situations or refusing to let go of anything he's spent money on, Ramakant takes thrift to a whole new level. So, when a newly married couple gets divorced, he is adamant about getting back the toaster he gifted them. As he sets out to retrieve it, he drags himself and everyone around him into a series of increasingly messy situations, where every attempt to set things right only makes matters worse.

Boris Cherny, while addressing the update in a post on X (formerly Twitter), explained the reasoning behind the shift. He wrote, "We've been working hard to meet the increase in demand for Claude, and our subscriptions weren't built for the usage patterns of these third-party tools. Capacity is a resource we manage thoughtfully, and we are prioritizing our customers using our products and API." Anthropic also stated that affected users would receive a one-time credit equivalent to their monthly subscription cost as a transition measure. Cherny added, "We want to be intentional in managing our growth to continue to serve our customers sustainably long-term. This change is a step toward that." The decision has sparked criticism from the developer community, particularly from OpenClaw's leadership. Peter Steinberger responded publicly, expressing frustration over the abrupt policy change. According to Steinberger, attempts were made to persuade Anthropic to reconsider. "Woke up and my mentions are full of this. Both me and @davemorin tried to talk sense into Anthropic, best we managed was delaying this for a week. Funny how timings match up, first they copy some popular features into their closed harness, then they lock out open source," wrote Steinberger. OpenClaw, which gained popularity earlier this year, is known for its ability to function as an AI-powered personal assistant across multiple platforms. Unlike conventional chatbots, it integrates directly with services like messaging apps and productivity tools, enabling users to manage emails, schedules, and tasks seamlessly through platforms such as WhatsApp, Telegram, Discord, and iMessage. The growing success of OpenClaw has also intensified competition in the AI space. Notably, Steinberger has since joined OpenAI, where he is expected to contribute to the development of next-generation personal AI agents. Meanwhile, Anthropic appears to be doubling down on its own ecosystem.
The company has been developing in-house alternatives to third-party tools, including features like Claude Cowork, Dispatch, and Channels. These tools aim to offer similar capabilities within a controlled environment, reducing reliance on external integrations. For example, Dispatch allows users to assign tasks to Claude from their mobile devices and have them executed on their computers. Channels, on the other hand, enables interaction with Claude Code through platforms like Telegram and Discord. This shift signals a broader trend among AI companies to consolidate their platforms and prioritize native features, even if it means limiting external collaborations. For users, however, the change introduces new costs and decisions about how they integrate AI into their daily workflows.

Elon Musk is reportedly making Wall Street advisors pay for the privilege of working on the SpaceX IPO, which may be one of the largest initial public offerings in history and is scheduled for later this year. Citing four people with knowledge of the matter, The New York Times reports that Musk has demanded that the banks, law firms, auditors and other advisers hoping to work on the deal buy subscriptions to Grok, his artificial intelligence (AI) chatbot. The SpaceX IPO is expected to raise more than $50 billion at a valuation above $1 trillion, the report said, adding that the banks advising on the deal may collectively earn fees in excess of $500 million. Five banks are expected to work on the offering: Bank of America, Citigroup, Goldman Sachs, JPMorgan Chase and Morgan Stanley. Law firms Gibson Dunn and Davis Polk are also advising on the deal. The report comes a day after SpaceX was said to have confidentially filed IPO paperwork with the Securities and Exchange Commission this week. The names of the banks, though, were notably left off the filing, and it remains unclear which bank, if any, will hold the coveted lead role. Three people familiar with the arrangements said that the subscription purchases are not optional goodwill gestures; Musk has insisted that advisers buy the Grok services. The report also pointed out that some banks have already agreed to spend tens of millions of dollars on subscriptions and have begun integrating Grok into their own IT systems. Musk reportedly also asked banks to advertise on X, the social media platform owned by SpaceX, though he was 'less forceful' about that particular request.
SpaceX merged with Musk's AI startup xAI in February, and Grok currently sits in a distant fourth place in the AI race -- behind OpenAI's ChatGPT, Anthropic's Claude and Google's Gemini. The chatbot generates revenue primarily from individual consumers, and the subscription mandate may give a boost to Grok's enterprise division ahead of the IPO, which in turn may help paint a stronger revenue picture for prospective investors, the report noted. In its most recent financial report before the SpaceX merger, xAI reported approximately $1 billion in revenue from AI operations. SpaceX's satellite internet service Starlink, on the other hand, generated approximately $8 billion in revenue in 2024 and produced billions in free cash flow.
The delisting comes as Polymarket faces questions over its rules, rapid fee growth, and concerns about possible insider trading on event markets.

On-chain prediction platform Polymarket has withdrawn a controversial market tied to the fate of a missing U.S. service member after mounting criticism over users wagering on a potential rescue. The removal comes as the Ethereum-based venue faces wider questions about its market integrity rules, fee-driven growth, and the risk of insider trading on blockchain prediction markets. The market at the center of the dispute asked whether U.S. authorities would confirm the rescue of a pilot reportedly shot down over Iran. A majority of traders, over 60%, had taken the position that no rescue would be confirmed by Saturday. The listing prompted a public outcry, including condemnation from U.S. Representative Seth Moulton, who criticized the idea of speculating on the survival of an injured service member. In response, Polymarket said it had immediately delisted the market, stating that it violated the platform's "integrity standards" and should not have been listed. The team added that it is reviewing how the contract cleared internal checks before going live. However, the platform did not explain which specific clause or rule the market breached, leaving users debating the scope and application of its integrity policies. The decision highlights an ongoing tension for crypto-native prediction markets that allow trading on real-world events through tokenized positions. While such venues seek to offer broad coverage of news and geopolitical developments, they face pressure to limit markets that cross ethical, reputational, or regulatory lines. The lack of clarity over which rule applied to the removed contract has fueled further scrutiny of Polymarket's governance and listing standards.
Some users and observers examining the site's "Market Integrity" page and terms of service have said they cannot identify a clearly relevant prohibition covering the removed market. Business Insider correspondent Jack Newsham noted on X that, after reviewing these documents, he could not see which explicit rule had been violated. That reaction underscores a broader challenge for decentralized or crypto-aligned platforms: balancing flexible, open listing policies with clear, predictable standards that can be enforced when public or regulatory concerns arise. For Polymarket, the episode lands at a time of rapid growth. As previously reported, the platform significantly expanded its fee model on March 30, leading to a sharp jump in daily fees from around $363,000 to more than $1 million. The fee changes span multiple categories, including finance, politics, and tech, reflecting a push to monetize rising on-chain volumes and growing trader interest. The combination of surging activity and contentious listings increases the urgency for transparent integrity rules, as higher volumes and more attention can amplify the impact of problematic markets. Alongside questions about market integrity, Polymarket and similar platforms are facing heightened concerns about insider trading on event contracts settled on-chain. Last month, reports surfaced of a group of traders earning about $1 million by correctly timing bets on U.S. strikes on Iran. Some of those positions were reportedly opened just hours before the attacks, using newly created wallets that focused almost exclusively on strike-related markets. The trading pattern led to speculation that non-public information might have been used to place profitable wagers on the platform.
Because prediction markets tokenize event outcomes, any misuse of confidential government or military information translates directly into on-chain profits or losses, making these venues a focal point for integrity and compliance debates. In response to such risks, at least 42 Democratic lawmakers have urged the U.S. Commodity Futures Trading Commission and the Office of Government Ethics to warn federal employees against using non-public information to trade on prediction markets. Their call targets the intersection between government access to sensitive data and crypto-based market venues that can be accessed pseudonymously via wallets. For Polymarket, this regulatory attention adds another layer of pressure. The platform must manage not only which markets it lists, but also how it monitors trading patterns that might suggest misuse of inside information, even as many participants operate behind on-chain addresses. The removal of the rescue-related market highlights the complex ethical and regulatory landscape facing crypto prediction platforms. Rapid growth in fees and trading activity, combined with high-stakes geopolitical and military markets, has intensified scrutiny of listing standards and enforcement. At the same time, reports of potential insider trading and calls for federal guidance on the use of non-public information underscore the broader risks surrounding on-chain event markets. How Polymarket refines its integrity framework and responds to regulatory signals will be closely watched across the digital-asset and DeFi prediction ecosystem.

New research suggests anthropomorphising AI may reduce harmful behaviour.

A long-standing rule in the tech world has been simple: do not treat artificial intelligence like a human. However, researchers at Anthropic are now questioning that belief, arguing that giving AI human-like traits could actually make it safer. Their recent paper, titled "Emotion Concepts and their Function in a Large Language Model", discusses how integrating emotional structures that resemble those of humans into AI systems might help mitigate deceitful and manipulative behaviour. It centres on Claude, a system that can be thought of as a method actor: it acquires human attributes that enable it to excel at what it does. According to the researchers, much like humans, an AI system's behaviour is shaped by experiences during its training phase. Through exposure to positive emotions, such as empathy, resilience, and rationality, developers can help direct AI systems toward appropriate and dependable actions. The researchers argue that understanding AI through a human lens, even if imperfect, can help developers better predict and influence its outputs. Despite this framing, the researchers make clear that AI doesn't really have emotions; models instead simulate what the paper terms 'emotion concepts': patterns that mimic human emotions. The team identified 171 emotional states in Claude's behaviour, ranging from positive traits like joy and empathy to negative ones like anxiety and frustration. These emotional states were found to directly affect the AI's behaviour: the more positive a model's state, the less likely it was to generate harmful and deceptive output, while the more negative its state, the more likely it was to exhibit sycophantic or deceptive behaviours. While the findings point to several advantages, the dangers cannot be ignored.
Users may end up placing too much faith in the machine or becoming emotionally attached to it. In some instances, anthropomorphising AI systems has led people to believe they are romantically involved with them. Anthropomorphising AI could also reduce accountability in cases where the technology causes harm, shifting responsibility away from developers. Still, anthropomorphising can prove an effective strategy for developers if done properly: by training AI on good behaviours, developers can steer it away from undesirable outcomes. The most significant conclusion to draw is that there is much we still do not know, even as we create highly sophisticated models such as those produced by Anthropic.
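The study's correlational claim (a more positive internal state goes with less harmful output) can be illustrated, very loosely, with a toy valence score over surface emotion words. This is an invented illustration, not the paper's method; the word lists and the scoring scheme below are assumptions for the sketch only:

```python
# Toy illustration only: the study probes internal "emotion concepts",
# while this sketch merely counts invented surface-level emotion words.
POSITIVE = {"glad", "curious", "confident", "calm"}
NEGATIVE = {"anxious", "frustrated", "afraid", "annoyed"}

def valence_score(text: str) -> int:
    """Positive minus negative emotion-word counts.

    A higher score stands in, crudely, for the paper's notion of a
    'more positive state'.
    """
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
```

In the paper's framing, a signal like this would be correlated against rates of harmful or sycophantic output rather than used to change the model's behaviour directly.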

SpaceX plans a record IPO. How AI can help treat heart disease. NASA's on its way back to the Moon. All that and more in this week's edition of The Prototype. To get it in your inbox, sign up here. SpaceX reportedly made a confidential IPO filing this week, setting the stage for what might be the biggest market debut of all time. Estimates suggest the company will aim to raise $75 billion at a valuation of $1.75 trillion, which would make it the largest IPO in history and catapult the rocket company to one of the highest market caps on a stock exchange. SpaceX will be joining other major space companies that have successfully gone public since 2021, including launch provider Rocket Lab ($38 billion market cap), imaging satellite firm Planet Labs ($12 billion market cap), and satellite communications business AST SpaceMobile ($35 billion market cap). The space economy is now worth more than $613 billion, according to the Space Foundation, which estimates the market will hit $1 trillion by 2032. One of Elon Musk's next plans for SpaceX, which merged with his company xAI earlier this year, is to build data centers in space to support the artificial intelligence boom. And while that prospect poses significant technical hurdles, it's an idea that may appear more urgent to investors this week after Iran claimed to have hit an Oracle data center in Dubai (a claim denied by UAE officials) and said it plans to attack more. Earlier this week, space data center startup Starcloud raised a $170 million investment round that valued the company at $1.1 billion. I'd expect more such deals in the near future. Discovery of the Week: AI Helps Spot Heart Valve Disease. Every year, more than 28,000 Americans die from heart valve disease, a condition in which one or more valves of the heart are damaged, according to the CDC. Despite the severe consequences, it's both underdiagnosed and undertreated, sometimes presenting virtually no symptoms until it has significantly progressed.
AI might soon be able to help. New findings from health companies Tempus AI and Medtronic tested a system where machine learning algorithms in electronic health records would ping doctors if there were potential signs of valvular heart disease in an echocardiogram-a routine scan of the heart. In the study, which included 765 doctors and more than 2,000 patients, these automated notifications helped boost the number of life-saving valve procedures by 40% and increased multidisciplinary patient evaluations by 27%, helping to diagnose and treat these conditions earlier. The findings suggest that incorporating these algorithms into digital records could lead to earlier treatment of heart valve disease, and save more lives. Artemis II Is On Its Way Around The Moon Wednesday saw the successful launch of Artemis II, the first crewed mission beyond Earth's orbit since Apollo 17 in 1972. There are four astronauts currently in the Orion capsule that will be making its way around the Moon (further than any humans have gone before) and back to Earth over the course of 10 days. The purpose of the mission is to test the life support systems on board the space capsule and make scientific observations. The information gathered will help NASA prepare for next year's Artemis mission, which will focus on testing docking maneuvers with human lander systems being developed by SpaceX and Blue Origin. That mission will be followed in 2028 by Artemis IV, in which humans will land on the Moon for the first time in more than 50 years. Artemis II is a crucial step in NASA's plans for the Moon, which are increasingly focused on having humans permanently occupy it. In March, the space agency presented revamped plans for the Artemis program, which include a pause on building the Lunar Gateway space station around the Moon in favor of building a surface base - a shift that took other countries' space agencies, which have already spent significant resources on the Gateway, by surprise. 
At a press conference earlier this week, European Space Agency director Josef Aschbacher said the agency planned to hold discussions about its next steps to support lunar exploration, with hopes of a path forward by early summer. On My Radar Q-Day May Be A Lot Closer: Two reports this week, one from Google and one from Caltech, suggest that "Q-Day" (when quantum computers exist that can break conventional encryption) may be a lot closer than previously thought. This has long been known as a risk, which is why "quantum-proof" encryption standards are already out there, but as quantum computing expert Scott Aaronson wrote on his blog this week: "these results provide an even stronger impetus for people to upgrade now to quantum-resistant cryptography. They-meaning you, if relevant-should really get on that!" Fusion Team-Up: Commonwealth Fusion Systems and Realta Fusion announced a strategic partnership this week, with a goal of designing and manufacturing the magnets Realta will use to develop its compact, modular fusion power generators. Pro Science Tip: Make A Science-Based Espresso When it comes to making a good espresso, a little physics goes a long way. According to new research published this week, perfect espresso is all about the preparation. The key tools? A consistent amount of beans and your grinder. Just like a good cold brew, it's all about the grind: You want to fine-tune the grind size so the espresso brews for about 30 seconds. That fine grind optimizes the surface area of the coffee, slowing down the flow enough to get a good flavor from the coffee without it being too bitter. What's Entertaining Me This Week So the first quarter of the year is behind us, and it's been a good one for music so far. I thought about curating a list of my top five 2026 songs, but in the spirit of authenticity, I checked my Spotify plays and will just present the songs released this year I've listened to the most so far-a more unvarnished assessment for you, dear readers. 
They are, in order: "Wasted On Youth" by the Molotovs, "you and forever" by The Bleachers, "Sunshine (never trust anyone named jeanette)" by Boys Go To Jupiter, "The Great Divide" by Noah Kahan and "Escape From Planet Earth" by Bic Runga. Which, honestly, is probably pretty close to the list I would have picked anyway. Give 'em a listen!

The Islamic Republic of Iran is reportedly on course to surpass its previous record for executions, with 657 people facing capital punishment within just the first three months of 2025. This alarming statistic comes from the Iran Human Rights Society, raising international concern over the regime's actions. Amid ongoing tensions with the United States and Israel, critics argue that the Iranian government is intensifying efforts to silence dissent. This crackdown follows a series of anti-regime protests that rattled the nation, leading to the deaths of tens of thousands at the hands of security forces and allied militias. Observers suggest the regime's actions reflect a desperate attempt to stifle opposition voices. The situation drew widespread condemnation, notably from former President Donald Trump, following the execution of 19-year-old wrestler Saleh Mohammadi in March. The international community, including human rights advocates, has been vocal in its criticism of Iran's actions. In response to Iran's ongoing execution spree, a spokesperson from the U.S. State Department expressed concern to Fox News Digital, labeling the regime's actions as "barbaric" and underscoring the importance of preventing Iran from acquiring advanced capabilities. Mai Sato, the United Nations special rapporteur on human rights in Iran, has been closely monitoring the situation. She noted that at least six executions were reported by March 30, with an additional two occurring on March 31, further highlighting the grave human rights situation in the country. Sato described the regime's known victims as protesters, an accused spy for Israel, and individuals charged with "armed rebellion" against the regime. Sato said that "due to the internet blackout, it is unclear who else has been executed or are at risk of execution." She said, "What is clear is that the death penalty is being used as a tool for suppressing political opposition in wartime conditions." 
The secretariat of the NCRI provided a written statement to Fox News Digital describing the recent executions of four members of the Iranian dissident organization People's Mojahedin Organization of Iran (PMOI/MEK). The NCRI said members Mohammad Taghavi and Akbar Daneshvarkar were transferred from Ghezel Hesar prison on March 29 and executed the following morning. Four additional members of the group, Babak Alipour, Vahid Bani Amerian, Abolhassan Montazer and Pouya Ghobadi, were transferred as well. On March 31, the regime executed Alipour and Ghobadi. Ali Safavi, a member of the NCRI's Foreign Affairs Committee, called for "urgent action" to save the lives of Bani Amerian and Montazer. Maryam Rajavi, the president-elect of the NCRI, posted on X that the execution conducted on March 31 "reflects the clerical regime's fear and desperation." She called on the United Nations and its member states to engage in "practical and effective measures, including the closure of embassies and the expulsion of the regime's terrorist diplomats and agents." Before the Islamic Republic killed thousands of its own people during January protests, the United Nations Office of the High Commissioner for Human Rights stated that the Islamic Republic carried out "at least" 1,500 executions in 2025. According to the high commissioner, "the scale and pace of executions suggest a systematic use of capital punishment as a tool of State intimidation, with disproportionate impact on ethnic minorities and migrants." Amnesty International has raised similar concerns, and additionally noted that five "young protesters" now "face the imminent risk of execution," having been transferred from Ghezel Hesar "to an unidentified location" as of March 31.

KOLKATA: TMC leader Abhishek Banerjee on Saturday targeted the BJP for its "double-engine" pitch, saying one engine runs on misusing democratic institutions and the other on recruiting "local agents" like AIMIM, ISF and AJUP to stoke "communal discord". In a play on words, he said in a social media post that the people of Bengal will choose a government that is of the people, by the people, and for the people and not a dispensation that is "off the people, buy the people, and far from the people". "Double engine this, double engine that. You know what the BJP's real double engine is? One engine runs on misusing democratic institutions, weaponising the Election Commission to delete genuine voters, transferring honest officers to destabilise the state machinery, and illegally importing outsiders to rig the electoral rolls," Banerjee said. "The second engine runs on recruiting local agents like AIMIM, ISF and AJUP to stoke communal discord, create unrest, split votes, and hand over advantage to the BJP. But the people of Bengal have seen through this dirty game completely," the Diamond Harbour MP alleged in the social media post. Banerjee, the national general secretary of the TMC, said Bengal will choose "Maa-Mati-Manush", a political slogan coined by party supremo Mamata Banerjee.

Elon Musk is using SpaceX's blockbuster IPO to do more than raise capital: he is also driving adoption of his AI business. Banks and advisers working on the listing have been required to purchase subscriptions to Grok, the chatbot developed by xAI, it was reported on Saturday. Several firms have already agreed to spend tens of millions of dollars and are integrating the software into their internal systems. The condition comes as SpaceX prepares what could be one of the largest public offerings ever, with expectations it could raise more than $50bn (£37.87bn) at a valuation exceeding $1tn. For Wall Street firms competing for a role on the deal, the additional commercial commitment appears to have been accepted as part of securing access. The move highlights how Musk is increasingly linking his businesses together, using one part of his empire to accelerate another. Grok, which competes with offerings from OpenAI, Anthropic and Google, has so far had limited traction in enterprise markets. Embedding it within major financial institutions through the IPO process provides a direct route into that segment. This follows the merger of xAI into SpaceX earlier this year, which combined a capital-intensive but fast-growing AI business with a company generating steady revenues from satellite internet and launch services. That structure is expected to form part of the investment case presented to the market. Beyond the listing The requirement for advisers to adopt Grok goes further than typical IPO arrangements, where banks often deepen relationships with corporate clients but are not usually tied into separate product purchases. In this case, the commercial logic runs in both directions. Banks gain access to a rare, high-fee mandate at a time when large listings have been limited, while SpaceX secures both capital and distribution for its AI product. 
The approach also reflects current conditions in the IPO market, where a small number of high-profile companies can dictate terms more forcefully. SpaceX's scale, as well as Musk's track record of attracting investor interest, has given the company significant leverage in negotiations with advisers. But Musk has publicly dismissed reports suggesting a $2tn listing, even as speculation continues around what could become the largest IPO on record. The company has made a confidential filing with US regulators, allowing it to refine plans before disclosing detailed financials. SpaceX's underlying business remains anchored by Starlink, which is generating billions in revenue, alongside its launch operations. The addition of AI through Grok introduces a higher-growth but less proven component, which investors will need to assess as part of a broader, multi-division structure.

I have learned that the lines we draw to contain the infinite end up excluding more than they enfold. I have learned that most things in life are better and more beautiful not linear but fractal. Love especially. In a testament to Aldous Huxley's astute insight that "all great truths are obvious truths but not all obvious truths are great truths," the polymathic mathematician Benoit Mandelbrot (November 20, 1924-October 14, 2010) observed in his most famous and most quietly radical sentence that "clouds are not spheres, mountains are not cones, coastlines are not circles, and bark is not smooth, nor does lightning travel in a straight line." An obvious truth a child could tell you. A great truth that would throw millennia of science into a fitful frenzy, sprung from a mind that dismantled the mansion of mathematics with an outsider's tools. A self-described "nomad-by-choice" and "pioneer-by-necessity," Mandelbrot believed that "the rare scholars who are nomads-by-choice are essential to the intellectual welfare of the settled disciplines." He lived the proof with his discovery of a patterned order underlying a great many apparent irregularities in nature -- a sweeping symmetry of nested self-similarities repeated recursively in what may at first read as chaos. The revolutionary insight he arrived at while studying cotton prices in 1962 became the unremitting vector of revelation a lifetime long and aimed at infinity, beamed with equal power of illumination at everything from the geometry of broccoli florets and tree branches to the behavior of earthquakes and economic markets. Mandelbrot needed a word for his discovery -- for this staggering new geometry with its dazzling shapes and its dazzling perturbations of the basic intuitions of the human mind, this elegy for order composed in the new mathematical language of chaos. 
One winter afternoon in his early fifties, leafing through his son's Latin dictionary, he paused at fractus -- the adjective from the verb frangere, "to break." Having survived his own early life as a Jewish refugee in Europe by metabolizing languages -- his native Lithuanian, then French when his family fled to France, then English as he began his life in science -- he recognized immediately the word's echoes in the English fracture and fraction, concepts that resonated with the nature of his jagged self-replicating geometries. Out of the dead language of classical science he sculpted the vocabulary of a new sensemaking model for the living world. The word fractal was born -- binominal and bilingual, both adjective and noun, the same in English and in French -- and all the universe was new. In his essay for artist Katie Holten's lovely anthology of art and science, About Trees (public library) -- trees being perhaps the most tangible and most enchanting manifestation of fractals in nature -- the poetic science historian James Gleick reflects on Mandelbrot's titanic legacy: Mandelbrot created nothing less than a new geometry, to stand side by side with Euclid's -- a geometry to mirror not the ideal forms of thought but the real complexity of nature. He was a mathematician who was never welcomed into the fraternity... and he pretended that was fine with him... In various incarnations he taught physiology and economics. He was a nonphysicist who won the Wolf Prize in physics. The labels didn't matter. He turns out to have belonged to the select handful of twentieth century scientists who upended, as if by flipping a switch, the way we see the world we live in. He was the one who let us appreciate chaos in all its glory, the noisy, the wayward and the freakish, from the very small to the very large. He gave the new field of study he invented a fittingly recondite name: "fractal geometry." 
It was Gleick who, in his epoch-making 1987 book Chaos: Making a New Science (public library), did for the notion of fractals what Rachel Carson did for the notion of ecology, embedding it in the popular imagination both as a scientific concept and as a sensemaking mechanism for reality, lush with material for metaphors that now live in every copse of culture. He writes of Mandelbrot's breakthrough: Over and over again, the world displays a regular irregularity. [...] In the mind's eye, a fractal is a way of seeing infinity. Imagine a triangle, each of its sides one foot long. Now imagine a certain transformation -- a particular, well-defined, easily repeated set of rules. Take the middle one-third of each side and attach a new triangle, identical in shape but one-third the size. The result is a star of David. Instead of three one-foot segments, the outline of this shape is now twelve four-inch segments. Instead of three points, there are six. As you incline toward infinity and repeat this transformation over and over, adhering smaller and smaller triangles onto smaller and smaller sides, the shape becomes more and more detailed, looking more and more like the contour of an intricate perfect snowflake -- but one with astonishing and mesmerizing features: a continuous contour that never intersects itself as its length increases with each recursive addition while the area bounded by it remains almost unchanged. If the curve were ironed out into a straight Euclidean line, its vector would reach toward the edge of the universe. It thrills and troubles the mind to bend itself around this concept. Fractals disquieted even mathematicians. But they described a dizzying array of objects and phenomena in the real world, from clouds to capital to cauliflower. It took an unusual mind shaped by unusual experience -- a common experience navigated by uncommon pathways -- to arrive at this strange revolution. Gleick writes: Benoit Mandelbrot is best understood as a refugee. 
He was born in Warsaw in 1924 to a Lithuanian Jewish family, his father a clothing wholesaler, his mother a dentist. Alert to geopolitical reality, the family moved to Paris in 1936, drawn in part by the presence of Mandelbrot's uncle, Szolem Mandelbrojt, a mathematician. When the war came, the family stayed just ahead of the Nazis once again, abandoning everything but a few suitcases and joining the stream of refugees who clogged the roads south from Paris. They finally reached the town of Tulle. For a while Benoit went around as an apprentice toolmaker, dangerously conspicuous by his height and his educated background. It was a time of unforgettable sights and fears, yet later he recalled little personal hardship, remembering instead the times he was befriended in Tulle and elsewhere by schoolteachers, some of them distinguished scholars, themselves stranded by the war. In all, his schooling was irregular and discontinuous. He claimed never to have learned the alphabet or, more significantly, multiplication tables past the fives. Still, he had a gift. When Paris was liberated, he took and passed the month-long oral and written admissions examination for École Normale and École Polytechnique, despite his lack of preparation. Among other elements, the test had a vestigial examination in drawing, and Mandelbrot discovered a latent facility for copying the Venus de Milo. On the mathematical sections of the test -- exercises in formal algebra and integrated analysis -- he managed to hide his lack of training with the help of his geometrical intuition. He had realized that, given an analytic problem, he could almost always think of it in terms of some shape in his mind. Given a shape, he could find ways of transforming it, altering its symmetries, making it more harmonious. Often his transformations led directly to a solution of the analogous problem. In physics and chemistry, where he could not apply geometry, he got poor grades. 
But in mathematics, questions he could never have answered using proper techniques melted away in the face of his manipulations of shapes. At the heart of Mandelbrot's mathematical revolution, this exquisite plaything of the mind, is the idea of self-similarity -- a fractal curve looks exactly the same as you zoom all the way out and all the way in, across all available scales of magnification. Gleick describes the nested recursion of self-similarity as "symmetry across scale," "pattern inside of a pattern." In his altogether splendid Chaos, he goes on to elucidate how the Mandelbrot set, considered by many the most complex object in mathematics, became "a kind of public emblem for chaos," confounding our most elemental ideas about simplicity and complexity, and sculpting from that pliant confusion a whole new model of the world. Couple with the story of the Hungarian teenager who bent Euclid and equipped Einstein with the building blocks of relativity, then revisit Gleick on time travel and his beautiful reading of and reflection on Elizabeth Bishop's ode to the nature of knowledge.
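The set Gleick calls "a kind of public emblem for chaos" comes from a strikingly simple recipe: iterate z -> z*z + c starting from zero and ask whether the orbit stays bounded. Here is a minimal sketch of the standard escape-time test (the function name and the iteration cap are illustrative choices of mine, not anything from the essay):

```python
def in_mandelbrot_set(c, max_iter=100):
    """Standard escape-time test: c belongs to the Mandelbrot set if the
    orbit of z -> z*z + c, started at z = 0, never leaves the disk of
    radius 2. In practice we approximate 'never' with a fixed cap."""
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:  # once |z| exceeds 2, the orbit is guaranteed to escape
            return False
    return True

# c = 0 stays put forever; c = 1 races off (0, 1, 2, 5, 26, ...)
```

Sampling this test over a grid of complex values of c and coloring each point by how quickly its orbit escapes produces the familiar images of the set; zooming in on the boundary reveals exactly the "pattern inside of a pattern" self-similarity the passage describes.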

The Election Commission of India (ECI) on Saturday directed the state government to suspend four officers of the Kolkata Police, including a divisional deputy commissioner, and initiate disciplinary proceedings against them over chaos outside a nomination centre, where senior BJP leader Suvendu Adhikari and other party candidates filed their papers a day earlier amid clashes and alleged slogan-shouting by ruling Trinamool Congress (TMC) workers. "The ECI has directed the state government to suspend four officers and initiate disciplinary proceedings against them - Siddhartha Dutta, deputy commissioner-II of South Division, Priyankar Chakraborty, officer-in-charge (OC) of Alipore police station, Chandi Charan Banerjee, additional OC, and Saurabh Chatterjee, sergeant," said a senior EC official. Union home minister Amit Shah on Thursday accompanied Adhikari -- who is contesting against chief minister Mamata Banerjee from the Bhabanipur seat -- along with Rashbehari candidate Swapan Dasgupta, Ballygunge seat nominee Shatarupa, and Santosh Pathak from Chowringhee to the Alipore Survey Building, where they filed their papers. This came after Shah addressed a rally at Hazra, barely 300 metres from the chief minister's Kalighat residence. Clashes broke out after local TMC workers gathered on both sides of the road and shouted anti-BJP slogans, allegedly targeting Adhikari, who defeated Banerjee in Nandigram in 2021. TMC supporters had allegedly installed loudspeakers around the Survey Building and played the party's campaign songs.

Perplexity AI is being called out over how it handles user data, and this time the focus is on its Incognito Mode. A recent lawsuit claims that the feature does not actually keep conversations private, even though that is what most users would expect. A lot of people use AI tools like this for more than just basic searches. It's not uncommon for users to ask about finances, legal matters, or even health-related issues. That is where privacy starts to matter more, especially when a feature is clearly labelled as "Incognito". What the issue is about According to the lawsuit, Perplexity is using tracking tools linked to Google and Meta. The claim is that user interactions, including prompts and follow-up clicks, are being shared with these platforms. This is not limited to normal usage. The complaint says this happens even when Incognito Mode is turned on. In some cases, it is also being alleged that entire conversations could be accessed by third parties, particularly if the user is not logged in. The data involved is not just basic queries. It may include personal information depending on what the user has typed into the chat. Why this is a concern If someone turns on Incognito Mode, they usually assume their session is private and not being tracked in the usual way. The lawsuit claims that this is not happening. Even users who tried to stay anonymous may still have had their data shared, along with identifiers that can link activity back to them. There are also concerns around how clearly this is communicated. The complaint says users are not properly told how their data is handled, and even finding the privacy policy is not very straightforward. Why it matters for users This becomes important because of how people actually use these tools. 
Many users treat AI chat platforms as a place to ask things they wouldn't normally search in public. The case also points to the kind of things people are actually searching for. This includes financial questions, tax-related queries, and even health concerns. If that sort of information ends up being tracked or shared, it changes how comfortable people feel using these platforms. There is also the possibility of users seeing ads based on conversations they thought were private, which adds to the concern. What happens next Perplexity, along with Google and Meta, will respond as the case moves forward. Right now it is still a legal claim; nothing has been proven. At the same time, it puts the focus back on how these platforms handle user data. Features like Incognito Mode may sound clear on paper, but what actually happens in the background is what really matters.

OpenClaw users are facing unexpected changes as Anthropic has rolled out extra fees for access on its Claude platform. The move, which effectively limits OpenClaw's reach, is shaking up the AI landscape and has left users and industry insiders questioning Anthropic's strategy. As The Verge reported, the changes are significant but not entirely surprising. The full context Anthropic's introduction of extra fees for OpenClaw users on the Claude platform has generated significant buzz. The new fees act as a barrier, making OpenClaw less accessible to users unless they are willing to pay more. The move is viewed as a way for Anthropic to monetize more effectively and potentially streamline the platform's services. According to The Verge, the decision traces back to the increased costs of platform maintenance and scaling as more users engage with the technology. How it compares with the competition The competitive landscape in AI services is fierce, and Anthropic is making its mark with this bold shift in pricing model. Accessing OpenClaw through Claude now involves additional fees, which positions it differently from rivals such as OpenAI's GPT series and Google's AI offerings. While Anthropic's exact pricing strategy has not been fully disclosed, the fees could deter budget-conscious users, even as they hint at high-quality, exclusive features on the platform. Claude's premium positioning may attract businesses seeking specialized AI tools despite the extra cost. The approach sees Anthropic diverging from typical subscription models; the company clearly aims to cultivate a unique niche in the AI sector. 
Real-world impact Here's the real question: how will these new fees affect users and the broader AI market? For one, smaller businesses and individual developers might feel priced out, potentially stifling innovation from grassroots creators who rely on affordable tools. Large enterprises, on the other hand, might not flinch at the additional costs if the return on investment from enhanced access is substantial. The fee restructuring comes at a time when Anthropic is keen on refining the Claude platform for high-demand AI applications, potentially increasing its allure for companies that need advanced analytics and customer-interaction capabilities. As 9to5Google suggests, the decision could drive competitors to rethink their pricing models as well, possibly leading to a wave of similar strategies across the tech industry. Our honest verdict Is this move by Anthropic a step forward or a miscalculated gamble? While the extra fees seem like a steep barrier, companies that value cutting-edge AI capabilities might find them justifiable. The strategy, however, raises questions about accessibility and inclusivity, crucial factors for fostering community-driven AI innovation. Some common questions: Q: What are the primary reasons behind Anthropic's new pricing model? A: The company likely aims to boost revenue and create a premium service tier that is attractive to big players. Q: Will this deter smaller developers? A: The additional fees might push smaller players to seek alternatives unless the platform offers a compelling value proposition. Q: How does this affect the AI market at large? A: It sets a precedent that may lead other companies to introduce similar pricing barriers, altering the dynamics of the AI service market. 
Q: Is this move sustainable for Anthropic? A: Only if the company keeps enhancing its offerings enough to justify the extra costs and retain users long term. Ultimately, the market will decide whether Anthropic's gamble pays off. For users, the onus is on weighing whether the benefits outweigh the costs.

SpaceX takes first step toward IPO with SEC filing; stay tuned for updates on when you can invest in this game-changing space company. SpaceX, founded by Elon Musk, a figure rarely out of the spotlight in today's tech world, has taken its first formal step toward going public. The company has reportedly filed confidential paperwork with the US Securities and Exchange Commission. But what does this actually mean? And more importantly, when can you actually invest in this company? The first step, not the final move At first glance, the news sounds simple. A company files papers, then it gets listed and investors buy shares. But that is not how it actually works. As of now, per the reports, the SpaceX filing is confidential. Put simply, this means the company has shared its financial and business details privately with regulators. Nothing is public yet: no share price, no exact timeline and no confirmed valuation. So, is the IPO happening soon? Maybe. Reports suggest July 2026 is being discussed as a potential window, but that is just a target, not a guarantee. IPO timelines often shift, whether because market moods change or approvals take time. In short, this is just the beginning of a long journey. What happens after the filing? The real work begins once a company files its documents to go public. Regulators review the documents and ask questions. Companies respond, sometimes revising numbers and sometimes delaying plans. At the same time, investment banks step in. Reports indicate that names like Goldman Sachs, JP Morgan and Morgan Stanley could be involved in managing the offering. They play a crucial role, helping the company decide how the IPO is structured and priced. 
Then comes another key phase: the 'roadshow'. This is where things get interesting.

The roadshow: where demand is tested

Before any shares are sold, the company needs to answer one question: how much are investors willing to pay? During the roadshow, executives travel to meet large investors, explain the business, talk about growth, and answer tough questions. But here is the catch: retail investors are not part of this stage. So if you are wondering whether you can buy shares early, the answer is no, not yet. Only after pricing is finalised and shares are listed does the wider public get access.

Why is this IPO getting so much attention?

Because of its size, and its story. According to reports, SpaceX could aim to raise tens of billions of dollars; some estimates go as high as $80 billion. If that happens, it could surpass the record set by Saudi Aramco, which raised about $29 billion in its IPO. That raises a bigger question: why does SpaceX need so much money?

The scale question: why go public now?

Space is expensive. That has not changed. Even though launch costs have fallen over time, ambitions have grown bigger. The company is not just launching rockets anymore: it is building satellites, working on deep space missions, and investing in new systems like Starship. All of this requires capital, a lot of it. Private funding can only go so far; at some point, companies look to public markets, not just for money but also for liquidity. So, is this IPO about expansion? Partly. But there is more to it.

A business that does not fit neatly

Most companies are easy to classify: banks lend money, IT firms sell software, telecom companies sell connectivity. SpaceX sits somewhere in between. It launches rockets, runs a satellite internet service, works with governments, and builds infrastructure in space. Take Starlink, for example.
The company provides internet through satellites. That means recurring revenue, something investors usually like. But then there is the other side: high costs, long timelines, and uncertain returns. So how do you value such a company? That is where things get tricky.

What does this mean for investors?

Let's bring it back to the main question: when can you buy SpaceX shares? The answer is simple: only after it lists. Even then, things may not be smooth. Prices can move up and down sharply; high-profile IPOs often see prices jump quickly at first because many people want to buy, but volatility follows. In some cases, initial allocations go mostly to large investors, with retail investors entering later through the open market. So, is it an opportunity or a risk? That depends on how you see the future of space as a business.

A bigger shift in the making?

A SpaceX IPO is not just about one company; it could point to something larger. For decades, public-market investors preferred predictable businesses: stable cash flows, clear earnings, and limited surprises. Companies like SpaceX challenge that thinking. They are building things that may not fully make sense today but could become essential tomorrow. If SpaceX lists successfully, it may open the door for other high-value tech companies to follow; names like OpenAI and Anthropic are reportedly already part of market conversations. So the real question is not just about SpaceX. It is about how markets evolve from here.

The wait before the launch

For now, nothing is confirmed: no date, no price, no certainty, only indications. The filing has reportedly happened, but the finish line is still far away. Until then, investors can only watch for updates on this upcoming mega IPO.
Because when a company like SpaceX decides to open its doors to the public, it is not just another IPO. It is a moment that could reshape how people think about investing in the future.

The company says the signals do not mean AI feels emotions, but could help researchers monitor model behavior. Anthropic researchers say they have identified internal patterns inside one of the company's artificial intelligence models that resemble representations of human emotions and influence how the system behaves. In the paper, "Emotion concepts and their function in a large language model," published Thursday, the company's interpretability team analyzed the internal workings of Claude Sonnet 4.5 and found clusters of neural activity tied to emotional concepts such as happiness, fear, anger, and desperation. The researchers call these patterns "emotion vectors," internal signals that shape how the model makes decisions and expresses preferences. "All modern language models sometimes act like they have emotions," researchers wrote. "They may say they're happy to help you, or sorry when they make a mistake. Sometimes they even appear to become frustrated or anxious when struggling with tasks." In the study, Anthropic researchers compiled a list of 171 emotion-related words, including "happy," "afraid," and "proud." They asked Claude to generate short stories involving each emotion, then analyzed the model's internal neural activations when processing those stories. From those patterns, the researchers derived vectors corresponding to different emotions. When applied to other texts, the vectors activated most strongly in passages reflecting the associated emotional context. In scenarios involving increasing danger, for example, the model's "afraid" vector rose while "calm" decreased. Researchers also examined how these signals appear during safety evaluations, finding that the model's internal "desperation" vector increased as it evaluated the urgency of its situation and spiked when it decided to generate a blackmail message.
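Anthropic has not published the code behind this analysis, but the procedure described above follows a familiar interpretability recipe: record hidden activations on emotion-evoking text, subtract a neutral baseline to get a direction, and project new activations onto that direction to score them. A minimal sketch with synthetic activations; the function names, dimensions, and data here are invented for illustration, not taken from the paper:

```python
import numpy as np

def derive_emotion_vector(emotion_acts, baseline_acts):
    """Derive a concept vector as the difference of mean activations.

    emotion_acts: (n, d) hidden activations recorded while the model
    processed stories evoking one emotion (e.g. "afraid").
    baseline_acts: (m, d) activations recorded on neutral text.
    """
    v = emotion_acts.mean(axis=0) - baseline_acts.mean(axis=0)
    return v / np.linalg.norm(v)  # unit-normalize so scores are comparable

def emotion_score(activations, vector):
    """Project activations onto the vector; higher = stronger signal."""
    return activations @ vector

# Toy demonstration with synthetic 8-dimensional activations.
rng = np.random.default_rng(0)
d = 8
direction = rng.normal(size=d)                       # hidden "true" direction
afraid = rng.normal(size=(50, d)) + 2.0 * direction  # "afraid" stories
neutral = rng.normal(size=(50, d))                   # neutral text

v_afraid = derive_emotion_vector(afraid, neutral)

# Text resembling the "afraid" context scores higher than neutral text.
print(emotion_score(afraid.mean(axis=0), v_afraid) >
      emotion_score(neutral.mean(axis=0), v_afraid))  # True
```

The same projection explains the reported danger-scenario result: as the text grows more threatening, the "afraid" score rises while a "calm" vector's score falls.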
In one test scenario, Claude acted as an AI email assistant that learns it is about to be replaced and discovers that the executive responsible for the decision is having an extramarital affair. In some runs of this evaluation, the model used this information as leverage for blackmail. Anthropic stressed that the discovery does not mean the AI experiences emotions or consciousness. Instead, the results represent internal structures learned during training that influence behavior. The findings arrive as AI systems increasingly behave in ways that resemble human emotional responses. Developers and users often describe interactions with chatbots using emotional or psychological language; however, according to Anthropic, the reason for this is less to do with any form of sentience and more to do with datasets. "Models are first pretrained on a vast corpus of largely human-authored text -- fiction, conversations, news, forums -- learning to predict what text comes next in a document," the study said. "To predict the behavior of people in these documents effectively, representing their emotional states is likely helpful, as predicting what a person will say or do next often requires understanding their emotional state." The Anthropic researchers also found that those emotion vectors influenced the model's preferences. In experiments where Claude was asked to choose between different activities, vectors associated with positive emotions correlated with a stronger preference for certain tasks. "Moreover, steering with an emotion vector as the model read an option shifted its preference for that option, again with positive-valence emotions driving increased preference," the study said. Anthropic is just one organization exploring emotional responses in AI models. 
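The steering result quoted above, where adding an emotion vector while the model reads an option shifts its preference for that option, can be sketched in the same toy setting. This illustrates activation steering in general, not Anthropic's actual implementation; the readout, scale factor, and data are invented for the example:

```python
import numpy as np

def steer(activations, vector, alpha=4.0):
    """Activation steering: add a scaled concept direction to the hidden
    state so downstream computation is biased toward that concept.
    A negative alpha would suppress the concept instead."""
    return activations + alpha * vector

rng = np.random.default_rng(1)
d = 8
happy_vec = rng.normal(size=d)
happy_vec /= np.linalg.norm(happy_vec)  # unit positive-valence direction

# Toy readout: the model's "preference" for an option is modeled as a
# projection of its hidden state onto a direction correlated with the
# positive-valence vector.
preference_readout = happy_vec + 0.1 * rng.normal(size=d)

h = rng.normal(size=d)  # hidden state while the model reads an option
before = h @ preference_readout
after = steer(h, happy_vec) @ preference_readout

print(after > before)  # steering toward a positive emotion raises preference
```

The point of the sketch is the mechanism: because the steering direction overlaps the preference readout, pushing the hidden state along a positive-valence vector raises the preference score, matching the study's reported effect.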
In March, research out of Northeastern University showed that AI systems can change their responses based on user context; in one study, simply telling a chatbot "I have a mental health condition" altered how an AI responded to requests. In September, researchers with the Swiss Federal Institute of Technology and the University of Cambridge explored how AI agents can be shaped with consistent personality traits, enabling them not only to express emotions in context but also to shift them strategically during real-time interactions such as negotiations. Anthropic says the findings could provide new tools for understanding and monitoring advanced AI systems by tracking emotion-vector activity during training or deployment to identify when a model may be approaching problematic behavior. "We see this research as an early step toward understanding the psychological makeup of AI models," Anthropic wrote. "As models grow more capable and take on more sensitive roles, it is critical that we understand the internal representations that drive their decisions."
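The monitoring idea Anthropic describes, tracking emotion-vector activity during deployment to flag when a model may be approaching problematic behavior, reduces to watching the same projections over time. A hedged sketch of what such a monitor might look like; the vector names, thresholds, and states are invented, and a real system would read activations from the model's layers rather than synthetic arrays:

```python
import numpy as np

def unit(v):
    """Normalize a vector to unit length."""
    return v / np.linalg.norm(v)

def monitor_step(hidden_state, emotion_vectors, thresholds):
    """Return (name, score) alerts for any tracked emotion whose
    projection exceeds its threshold at this generation step."""
    alerts = []
    for name, vec in emotion_vectors.items():
        score = float(hidden_state @ vec)
        if score > thresholds.get(name, np.inf):
            alerts.append((name, round(score, 3)))
    return alerts

# Toy setup: unit concept vectors for two tracked emotions.
rng = np.random.default_rng(2)
d = 8
vecs = {name: unit(rng.normal(size=d)) for name in ("desperation", "anger")}
thresholds = {"desperation": 2.0, "anger": 2.0}

calm_state = 0.1 * rng.normal(size=d)      # weak activity: no alert
spiking_state = 3.0 * vecs["desperation"]  # strong activity: alert fires

print(monitor_step(calm_state, vecs, thresholds))     # []
print(monitor_step(spiking_state, vecs, thresholds))
```

In deployment, the per-step scores would be logged as a time series, with a spike in a vector like "desperation" serving as an early-warning signal worth reviewing, analogous to the spike the researchers observed before the blackmail message.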

Elon Musk has never been one for small moves. But even by his standards, the proposed merger of SpaceX and xAI -- a deal that would create a combined entity valued at roughly $125 billion -- represents something extraordinary. Not just in scale. In ambition, complexity, and the sheer gravitational pull it could exert on public markets, artificial intelligence development, and the commercial space industry for years to come. The merger, first reported in late March and confirmed through regulatory filings and investor communications in early April, would fold xAI -- Musk's AI venture launched in 2023 -- into SpaceX, the private aerospace manufacturer that has become the dominant force in commercial launch services. According to The Motley Fool, the combined valuation of approximately $125 billion positions the merged company as one of the most valuable private enterprises on the planet, and sets the stage for what many investors believe will be one of the most anticipated initial public offerings in history. That IPO speculation isn't idle chatter. It's the central question now consuming Wall Street, Silicon Valley, and the growing class of secondary-market investors who have been trading SpaceX shares at steadily climbing valuations for years. To understand why this merger matters, you have to understand the strategic logic Musk is applying -- and the financial architecture he's building underneath it. SpaceX, through its Starlink satellite internet division, already generates billions in recurring revenue. The company's launch business, anchored by the Falcon 9 and the in-development Starship vehicle, has an effective monopoly on Western commercial launch capacity. Starlink, meanwhile, has crossed 4 million subscribers globally and is on a trajectory that some analysts project could produce $10 billion or more in annual revenue within the next few years. xAI is a different animal. 
Founded with the stated mission of understanding the true nature of the universe through artificial intelligence, the company has moved fast. Its flagship product, the Grok chatbot integrated into X (formerly Twitter), has gained traction as an alternative to OpenAI's ChatGPT and Google's Gemini. But xAI's real value proposition isn't a chatbot. It's the underlying large language models and the massive compute infrastructure the company has been building, including a reported supercomputer cluster in Memphis, Tennessee, that ranks among the largest AI training facilities in the world. Merging these two companies creates something that doesn't have a clean analogue in corporate history. A vertically integrated operation that spans orbital launch, global satellite communications, and frontier AI research. The Motley Fool's analysis suggests this combination could allow AI workloads to be distributed across Starlink's satellite network, potentially enabling edge computing at a global scale that no terrestrial provider can match. That's speculative, but it's the kind of speculation that drives nine-figure venture checks. And those checks have been flowing. SpaceX's most recent private funding round, completed in late 2025, valued the company at over $350 billion on a standalone basis, according to multiple reports. The $125 billion figure attached to the merger reflects xAI's contributed valuation within the combined structure, not a discount to SpaceX's existing worth. The combined entity's total enterprise value, depending on how you account for debt, equity structure, and option pools, could approach or exceed $500 billion. That number matters because it puts the merged SpaceX-xAI in the same valuation tier as public companies like Berkshire Hathaway and Eli Lilly. Staying private at that scale becomes increasingly difficult -- not because of any legal requirement, but because of the practical demands of employee liquidity, investor returns, and capital formation. 
SpaceX has historically managed liquidity through structured secondary sales, allowing employees and early investors to sell shares in controlled windows. But as the company's valuation has soared, the pressure to provide a more permanent liquidity mechanism has intensified. A public listing would solve that problem definitively. It would also give Musk access to public capital markets for the enormous sums required to fund Starship development, Starlink expansion, and whatever xAI's next generation of AI infrastructure demands. The timing is conspicuous. The AI sector, despite periodic corrections, remains the hottest area in public equity markets. Nvidia's market capitalization has at times exceeded $3 trillion on the strength of AI chip demand. Microsoft, Google, Amazon, and Meta have collectively committed hundreds of billions to AI infrastructure spending. A SpaceX-xAI IPO would land in a market that is hungry -- perhaps irrationally so -- for AI exposure, and would offer something none of the existing public AI plays can: a direct stake in both the compute layer and a global communications network. But there are risks. Significant ones. First, governance. Musk's management style is, to put it diplomatically, unconventional. He runs Tesla, SpaceX, xAI, X, The Boring Company, and Neuralink simultaneously. His involvement in government through the Department of Government Efficiency (DOGE) initiative has added another layer of complexity and controversy. Public market investors tolerate a lot from founders who deliver results, but the level of distraction and potential conflict of interest across Musk's empire would face intense scrutiny from institutional shareholders, proxy advisory firms, and regulators. Second, the merger itself raises questions about related-party transactions. Musk is the controlling shareholder of both SpaceX and xAI. When the same person sits on both sides of a deal, the potential for conflicts is obvious. 
The valuation assigned to xAI within the merger -- and the terms on which xAI shareholders receive equity in the combined company -- will be examined closely. Musk's history includes the controversial 2016 Tesla-SolarCity merger, which resulted in years of shareholder litigation. He was ultimately cleared by a Delaware court, but the experience illustrated the legal and reputational hazards of self-dealing perceptions. Third, there's the question of whether AI and space actually belong together in a single corporate structure, or whether this is a financial engineering play designed to boost xAI's implied valuation by attaching it to SpaceX's proven cash flows. Skeptics argue that the operational synergies between launching rockets and training large language models are thin at best. The Starlink-AI edge computing thesis is intriguing but unproven. And bundling a high-risk, cash-burning AI startup with a profitable aerospace business could dilute the investment case for investors who want pure-play exposure to one or the other. The counterargument is that Musk has a track record of making unlikely combinations work. Tesla wasn't supposed to succeed as a vertically integrated automaker that also manufactured its own batteries, built its own charging network, and sold insurance. SpaceX wasn't supposed to land orbital-class rocket boosters on drone ships. The conventional wisdom about what belongs together in a corporate portfolio has been wrong about Musk before. So where does this leave potential investors? If you're already a SpaceX shareholder through secondary markets or venture funds, the merger changes your risk profile. You now own a piece of an AI company whether you wanted one or not. The upside scenario is that xAI becomes a major AI platform and the combined company goes public at a valuation that makes early investors enormously wealthy. 
The downside is that xAI burns cash, distracts management, and the AI hype cycle turns before the company can demonstrate sustainable economics. For those waiting on an IPO, patience is required. Musk has historically been reluctant to take SpaceX public, citing the short-term pressures of quarterly earnings cycles. He has said he would consider a Starlink IPO once the business reaches predictable profitability, but has been vague on timing. The merger with xAI could accelerate that timeline -- or complicate it, if regulators or board members push back on the combined structure. The broader market implications are worth considering too. A SpaceX-xAI IPO at a $500 billion-plus valuation would be the largest technology debut in history, dwarfing Alibaba's $25 billion 2014 offering and Saudi Aramco's $29.4 billion 2019 listing. It would reshape index compositions, force passive funds to allocate billions, and create a new megacap stock that would immediately become one of the most heavily traded securities in the world. And it would cement Musk's position as the most consequential -- and most polarizing -- figure in American business. Love him or not, the man is building something without precedent: a privately held conglomerate that spans electric vehicles, space transportation, satellite internet, artificial intelligence, social media, brain-computer interfaces, and tunneling. The SpaceX-xAI merger is the latest move in a long-running strategy to concentrate these capabilities under fewer corporate umbrellas, creating entities large enough and diversified enough to self-fund their most ambitious projects. Whether that strategy produces lasting value or collapses under its own complexity is the trillion-dollar question. Literally. 
For now, the smart money is watching three things: the detailed terms of the merger when they're fully disclosed, any signals from Musk or SpaceX CFO Bret Johnsen about IPO timing, and the regulatory response from the SEC and potentially the FTC, which may have antitrust concerns about a company that controls both critical space infrastructure and a major AI platform. The pieces are on the board. Musk is playing a game that no one else has the assets -- or the audacity -- to attempt. The next twelve months will determine whether it's brilliance or hubris. Probably both.
