The latest news and updates from companies in the WLTH portfolio.
Churachandpur: Unidentified youths shut down the executive officer's town office under the chief executive officer (CEO)/ADC in Churachandpur between 2pm and 3pm on Tuesday and later set fire to the office furniture. According to reports, the youths asked all staff and others inside the office to vacate before locking it. The incident comes a day after three student organisations -- Kuki Students' Organisation (KSO), Zomi Students' Federation (ZSF), and Hmar Students' Association (HSA) -- issued a statement opposing the transfer order of Lalthajam MCS, CEO/ADC Churachandpur, issued by the Manipur govt on March 28. The groups argued that the transfer was made without consulting stakeholders and at a time when several pressing issues in the district remain unresolved. The student bodies urged CM Yumnam Khemchand Singh to retain Lalthajam MCS as CEO/ADC Churachandpur. They questioned why Churachandpur's CEO was singled out for transfer while no other district CEOs were affected, alleging that the move reflects disregard for the wishes of the people and aligns with MLAs opposed to forming a popular govt. The newly appointed CEO/ADC Churachandpur, Shokhongam Baite MCS, assumed charge on Tuesday.

IndiGo named Willie Walsh its chief executive officer, hoping the former British Airways leader can help India's biggest airline recover after mass flight cancellations in December triggered one of the nation's worst aviation crises. Walsh, currently director general of the International Air Transport Association, is expected to join no later than Aug. 3, according to [...]

Rapid Spread: An unofficial GitHub mirror of the leaked code surpassed 1,100 stars and 1,900 forks within hours of the disclosure. A packaging error revealed Anthropic's entire Claude Code codebase, spanning nearly 1,900 TypeScript files and over 512,000 lines of code, after a source map file shipped in the tool's public npm package. Security researcher Chaofan Shou reported the finding on X on March 31, 2026. With Claude Code's full source now circulating online, Anthropic faces its third accidental source map shipment in npm packages. According to Anthropic's February 2026 financial disclosures, Claude Code generates over $2.5 billion in annualized revenue and is used by companies including Uber, Netflix, Spotify, Salesforce, and Snowflake. Repeated build pipeline failures for such a commercially vital product raise questions about the company's release controls. Anthropic has not issued a formal public statement about the incident. Source map files are standard JavaScript development artifacts that map minified code back to original source. Build systems routinely generate them during compilation, but they should not be included in production packages. In this case, a .map file shipped inside the @anthropic-ai/claude-code npm package contained a link to an R2 storage bucket hosting the complete original TypeScript source. As an npm-distributed package, Claude Code is accessible to any developer with a Node.js 18+ environment. Build artifacts like source maps, if not explicitly excluded via .npmignore or package.json configuration, ship directly to end users when published to the registry. Unlike compiled binaries, npm packages are plain compressed archives of the selected files, so any misconfiguration in the exclusion rules is immediately visible to anyone who installs the package. Developer and security analyst Gabriel Anhaia, who analyzed the leaked code in detail on DEV Community, identified a packaging misconfiguration as the root cause.
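The failure mode described above comes down to npm's publish-time file selection. As a rough illustration (this is not Anthropic's actual tooling, and the file names and rules below are hypothetical), a prepublish guard could scan the to-be-published file list, such as the output of `npm pack --dry-run`, for debug artifacts:

```typescript
// Hypothetical prepublish guard, sketched for illustration only -- not
// Anthropic's pipeline. It flags files that should never ship to npm.
const FORBIDDEN = [/\.map$/, /\.ts$/]; // source maps and raw TypeScript
const ALLOWED = [/\.d\.ts$/];          // type declarations are expected to ship

export function leakedArtifacts(packedFiles: string[]): string[] {
  return packedFiles.filter(
    (f) =>
      FORBIDDEN.some((re) => re.test(f)) && !ALLOWED.some((re) => re.test(f)),
  );
}

// Example against a file list like the one `npm pack --dry-run` reports:
const report = leakedArtifacts([
  "cli.js",
  "cli.js.map",
  "types.d.ts",
  "src/query.ts",
]);
console.log(report); // ["cli.js.map", "src/query.ts"] -- fail the release if non-empty
```

In practice the same goal is usually reached declaratively, via an allowlist in the `files` field of package.json or `*.map` entries in .npmignore, with a dry-run audit as a final CI check.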
"A single misconfigured .npmignore or files field in package.json can expose an entire proprietary codebase to the public," Anhaia wrote. After the disclosure, Anthropic removed the source map and unpublished affected versions from the npm registry. However, cached copies had already been downloaded and redistributed across multiple platforms, including unofficial GitHub mirrors, limiting the effectiveness of the takedown effort. Moreover, npm's publication model places the burden of source exclusion entirely on the developer, with no automated checks for accidentally included debugging artifacts. For a company distributing proprietary code through a public registry, a single misconfigured exclusion rule becomes a single point of failure. Anhaia's analysis provides an unusually detailed look at the architecture of a commercially dominant AI coding tool. Claude Code runs on Bun, the JavaScript runtime Anthropic acquired in December 2025, rather than Node.js. It uses React with Ink for terminal UI rendering and Zod v4 for schema validation. By choosing Bun over Node.js, Anthropic optimized for startup speed and lower memory consumption, while the React/Ink terminal rendering layer provides a component-based UI model unusual for command-line tools. Zod v4 for runtime schema validation suggests a defense-in-depth approach to data integrity across the tool's integrations. Furthermore, approximately 40 built-in tools, each permission-gated, form the core of Claude Code's capabilities, with the base tool definition spanning 29,000 lines of TypeScript. A separate query engine at 46,000 lines handles all large language model (LLM) API calls, streaming, caching, and sophisticated orchestration. Combined, these two subsystems account for roughly 75,000 lines of the total 512,000-line codebase. Several unreleased features stand out among the discoveries. Claude Code's source reveals a multi-agent orchestration system with sub-agents called "swarms" for complex parallelizable tasks.
A bidirectional IDE bridge connects VS Code and JetBrains extensions via JWT-authenticated channels, enabling operation across terminal and editor environments simultaneously. In particular, JWT-based authentication for IDE connections points to a zero-trust security model between Claude Code's terminal process and editor extensions, a design choice that separates it architecturally from competitors like GitHub Copilot that rely on tighter editor integration. References to codenames BUDDY (an AI pet companion), KAIROS (a persistent assistant), and ULTRAPLAN (cloud-based planning) suggest features in active development that have not been publicly announced. KAIROS in particular suggests Anthropic is building toward a model where Claude Code retains context across sessions rather than starting fresh each time, addressing a common developer complaint about AI coding assistants losing context between interactions. Meanwhile, ULTRAPLAN appears designed to offload complex planning tasks to cloud infrastructure, potentially allowing Claude Code to handle larger-scale refactoring and architectural analysis that would exceed local compute constraints. If shipped, these features would position Claude Code as a persistent, multi-modal development environment rather than a session-based coding assistant, marking a significant strategic shift for Anthropic's developer tooling roadmap. Anthropic has shipped source maps in its npm packages before. Earlier versions, including v0.2.8 and v0.2.28, released in 2025, also included full source maps. Anthropic removed those versions from the registry after the issues were flagged, but cached copies remained accessible through npm's mirror infrastructure and local developer caches. The current leak therefore represents the third known occurrence of the same class of build pipeline failure, according to Anhaia's DEV Community analysis. 
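The JWT-authenticated IDE bridge described above can be sketched in miniature. The claims, secret handling, and channel details below are assumptions for illustration; the leaked code's actual token format has not been published. Using HMAC-SHA256 from Node's standard library:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Illustrative HS256 JWT for a local terminal-to-editor bridge. Claim names
// and the shared-secret scheme are hypothetical, not Claude Code's design.
const b64url = (buf: Buffer) => buf.toString("base64url");

export function sign(payload: object, secret: string): string {
  const head = b64url(Buffer.from(JSON.stringify({ alg: "HS256", typ: "JWT" })));
  const body = b64url(Buffer.from(JSON.stringify(payload)));
  const mac = createHmac("sha256", secret).update(`${head}.${body}`).digest();
  return `${head}.${body}.${b64url(mac)}`;
}

export function verify(token: string, secret: string): object | null {
  const [head, body, sig] = token.split(".");
  if (!head || !body || !sig) return null;
  const mac = createHmac("sha256", secret).update(`${head}.${body}`).digest();
  const given = Buffer.from(sig, "base64url");
  // Constant-time comparison avoids leaking the MAC byte-by-byte.
  if (given.length !== mac.length || !timingSafeEqual(given, mac)) return null;
  return JSON.parse(Buffer.from(body, "base64url").toString());
}

// The terminal process issues a token; the editor extension verifies it.
const token = sign({ sub: "vscode-extension", scope: "ide-bridge" }, "local-secret");
console.log(verify(token, "local-secret")); // the signed payload
console.log(verify(token, "wrong-secret")); // null (signature mismatch)
```

The appeal of this zero-trust pattern is that neither side has to assume the other process is legitimate merely because it is running on the same machine.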
Anthropic's aggressive stance on protecting Claude Code's intellectual property makes this recurring pattern particularly notable. In April 2025, the company issued a takedown notice against a developer who reverse-engineered Claude Code, which is distributed under a restrictive non-open-source license. Accidentally exposing the very codebase it has actively defended drew pointed commentary from the developer community, and within hours of the disclosure the story was being widely discussed on Hacker News. Beyond the source map incidents, Claude Code has faced a broader pattern of security concerns. In October 2025, researcher Johann Rehberger reported a Files API exfiltration vulnerability to Anthropic, demonstrating that malicious actors could use the tool to steal sensitive data from developer environments. The vulnerability was publicly disclosed in January 2026. That same month, a security flaw in Anthropic's Claude Cowork tool resurfaced just days after its launch, raising questions about Anthropic's pre-release security review process. Separately, Anthropic implemented strict technical safeguards to prevent third-party applications from spoofing Claude Code and moved to block unauthorized Claude harnesses, signaling awareness of the growing attack surface around its developer tooling. According to Anthropic's financial reports, Claude Code's annual recurring revenue more than doubled between January and February 2026. For a product scaling at that pace, a third source map leak suggests build pipeline controls have not kept pace with commercial growth. Exposed architecture details and unreleased feature references give competitors and security researchers a detailed view of Claude Code's internals, lowering the barrier to studying its tool system design, multi-agent orchestration approach, and IDE integration architecture.
While the code was already partially reconstructable through reverse engineering, having the full annotated TypeScript source with original variable names, comments, and module structure represents a qualitatively different level of exposure. For enterprise customers that rely on Claude Code as part of their daily development infrastructure, the repeated exposure raises immediate questions about supply chain trust. Anthropic has yet to publicly detail what specific remediation steps it will take beyond unpublishing the affected npm versions or whether its CI/CD pipeline will be updated with automated source map detection to prevent a fourth occurrence.

It's been a particularly tough month to travel by air. The partial government shutdown, which started on February 14, left TSA workers without pay for nearly two months, leading to staffing shortages and hourslong security lines at some US airports. While TSA workers are starting to be paid again and lines are beginning to subside, the travel chaos caused by the shutdown has already affected millions of Americans' travel plans, including my own. On a recent trip on March 24, I made the last-minute decision to avoid the airport drama and take the long route instead. I canceled my $100 one-way flight from Chicago to New York -- a route with an average flight time of 2 hours -- and booked a $200 coach ticket aboard Amtrak's Lake Shore Limited train, which would get me between the two cities in 20 hours. While I could've reached my destination 10 times faster by air, I would've also missed out on stunning views, hours of relaxation, and other perks that made me reconsider adding more train trips to my travel schedule. Here's what the experience was really like, along with all the amenities that came with it.
Elon Musk's SpaceX is preparing for what could be the largest IPO in history, and sources say Morgan Stanley's E*Trade platform is set to handle most of the retail allocation. The move sidelines popular retail brokerages Robinhood and SoFi, leaving investors to watch the company's next move closely. According to Reuters, Morgan Stanley, which serves as the main underwriter for the SpaceX initial public offering, plans to distribute most of the retail allocation through E*Trade. SpaceX will allocate up to 30% of its shares for retail investors, according to industry sources, although a significant portion will go to private wealth and high-net-worth clients. SpaceX has established itself as a leader in aerospace innovation and private space travel. The upcoming IPO, expected later this year, will let retail investors buy into the company's space exploration and technology development efforts.

Anthropic inadvertently published parts of the source code for its AI coding tool, Claude Code. Developers discovered more than 500,000 lines of source code and over 1,000 related files on NPM, a public repository where developers share JavaScript software packages. When publishing Claude Code as an NPM package, Anthropic accidentally included far more internal files than intended, including details about how the tool works and references to unreleased models and features. Anthropic says the leak was caused "by human error," not a security vulnerability, and that no customer data was affected. The company is working on measures to prevent similar incidents. This is Anthropic's second leak in just days, coming right on the heels of internal blog posts about its new Mythos AI model accidentally slipping out.

LOS ANGELES, California: Anthropic got some relief, albeit temporary, on March 26 in its case against the U.S. government when a federal judge blocked the Pentagon's blacklisting of the AI company. This was the latest turn in the Claude maker's high-stakes confrontation with the U.S. military over AI safety on the battlefield. Anthropic filed a lawsuit in a California federal court, alleging that Defense Secretary Pete Hegseth exceeded his authority by labeling the company a national security supply-chain risk. This label is usually used for companies that could expose military systems to hacking or sabotage. Anthropic said the government punished it for its views on AI safety, violating its First Amendment right to free speech. It also said it was not given a chance to challenge the decision, which violated its Fifth Amendment right to due process. U.S. District Judge Rita Lin agreed with Anthropic in a detailed ruling but delayed the effect of her decision for seven days so the government can appeal. The dispute began after Anthropic refused to let the military use its AI chatbot, Claude, for surveillance or autonomous weapons. Because of this, the company was blocked from some military contracts, which it says could cost billions of dollars and damage its reputation. Anthropic argued that AI was not reliable enough for use in weapons and opposes domestic surveillance on rights grounds. However, the Pentagon said private companies should not limit military decisions. In her ruling, Judge Lin said the government's actions seemed more like punishment than a move to protect national security. She wrote that Anthropic appeared to be targeted for publicly criticizing the government, calling it illegal retaliation against free speech.
An Anthropic spokesperson said the company was pleased with the decision and remained committed to working constructively with the government to promote safe and reliable AI. This is the first time a U.S. company has been publicly labeled a supply-chain risk under a little-known law meant to protect military systems from foreign threats. Anthropic's lawsuit, filed on March 9, said the decision was unlawful, not based on facts, and went against the military's earlier praise of Claude. The Justice Department argued that Anthropic's refusal to change its restrictions could create confusion for the Pentagon and even risk disrupting military systems during operations. The government said the decision was about contract terms, not the company's views. Anthropic is also fighting a second case in Washington, D.C., over a similar designation that could stop it from getting civilian government contracts.

Is Claude down, and when will Anthropic's AI chatbot be back up? Thousands of users reported problems accessing Claude Chat, according to Downdetector, even as Anthropic's official status page showed systems operational. The mismatch created confusion for users in various regions who rely on Claude AI for daily tasks, development work, and enterprise services, and raised questions about service reliability, user impact, and recovery timelines. Data from Downdetector showed a surge in problem reports, mainly related to chat access, while Anthropic did not immediately confirm a disruption. Many users reported chat loading failures, login problems, and interruptions mid-conversation, suggesting a temporary disruption of the chatbot interface. Such incidents often stem from system load, infrastructure issues, or regional connectivity problems that affect access for a portion of users. "Claude down" became a trending search after the spike: according to Downdetector, more than 2,400 users had reported problems with the service as of 8:30 a.m. PT, most of them concerning Claude Chat access.
Downdetector aggregates user-submitted reports and system signals to identify service disruptions; a sudden increase in reports often indicates a possible outage, and the rise in complaints suggested the issue affected many users at the same time. Some users said the chatbot was not responding or loading correctly, others that conversations stopped mid-session or that they could not access the platform at all. The situation remained unclear because the official status checker continued to show "All Systems Operational." This gap between user reports and official status data raised questions about the cause of the issue and highlighted how outages can surface in user reports before providers confirm them. The outage drew attention because the service is widely used: Anthropic's platform offers multiple AI models, including Opus, Sonnet, and Haiku, along with an API for developers and enterprise users. Businesses use Claude AI for automation, writing, coding, research, and customer support, so the disruption affected both individual users and companies that rely on the chatbot for daily tasks. In a growing market for AI assistants, interruptions can slow workflows and affect productivity for users worldwide.
When many users report problems at the same time, Downdetector's data shows a spike, which helps identify outages quickly before official confirmation. There are several reasons an official status page may lag behind user reports, which means a service can appear operational while users still face problems. The outage also affected more than casual users: many developers embed the Claude API in apps and services, so when AI tools stop working, dependent businesses can see their workflows stall. As for when the chatbot will be back up, no official timeline was provided at the time of the outage reports. Restoration time depends on the nature of the technical fix, system checks, and confirmation that services have returned to stable operation; temporary outages are often resolved within hours, but official updates are required to confirm full recovery. During disruptions, users can check Anthropic's status page and official channels for updates. The episode reflects a wider trend: AI tools are now part of daily work for many users, outages show how dependent users have become on them, and as usage grows, companies are investing in infrastructure, monitoring tools, and status pages to reduce downtime and keep users informed. Q1: What caused the sudden spike in Claude outage reports? A large number of Claude users submitted problem reports within a short time, which triggered outage detection systems. Such spikes often indicate service disruptions affecting login, chat responses, or platform access. Q2: How do monitoring platforms detect Claude service disruptions?
Claude outage trackers collect user submissions, network signals, and performance data. When reports increase sharply in a specific period, the system flags a possible disruption before official confirmation from the service provider.
It would be ideal if satellites in a massive communications constellation didn't just spontaneously explode, but here we are. SpaceX announced that one of its Starlink satellites "experienced an anomaly on-orbit" on Sunday, which is a gentle way of saying that it blew to smithereens. This isn't the first time an Elon Musk internet box has detonated in low Earth orbit, with a similar incident in December. I would be surprised if Sunday's explosion was the last. SpaceX claimed that the loss of Starlink satellite 34343 poses no new risk to the International Space Station or NASA's planned launch of Artemis II this week. According to The Verge, the incident created a debris field of "tens of objects." The debris should burn up in the atmosphere in a few weeks. Starlink satellites are already designed to die and completely disintegrate at the end of their service life. Hopefully, the debris doesn't cause any chaos in orbit before re-entry.

Anthropic has accidentally leaked the source code for its popular coding tool Claude Code. The leak comes just days after Fortune reported that the company had inadvertently made close to 3,000 files publicly available, including a draft blog post that detailed a powerful upcoming model that presents unprecedented cybersecurity risks. The model is known internally as both "Mythos" and "Capybara," according to the leaked blog post obtained by Fortune. The source code leak exposed around 500,000 lines of code across roughly 1,900 files. When reached for comment, Anthropic confirmed that "some internal source code" had been leaked within a "Claude Code release." A spokesperson said: "No sensitive customer data or credentials were involved or exposed. This was a release packaging issue caused by human error, not a security breach. We're rolling out measures to prevent this from happening again." The latest data leak is potentially more damaging to Anthropic than the earlier accidental exposure of the company's draft blog post about its forthcoming model. While the latest security lapse did not expose the weights of the Claude model itself, it did allow people with technical knowledge to extract additional internal information from the company's codebase, according to a cybersecurity professional Fortune asked to review the leak. Claude Code is perhaps Anthropic's most popular product and has seen soaring adoption rates from large enterprises. At least some of Claude Code's capabilities come not from the underlying large language model that powers the product but from the software 'harness' that sits around the underlying AI model and instructs it how to use other software tools and provides important guardrails and instructions that govern its behavior. It is the source code for this agentic harness that has now leaked online. 
The leak potentially allows a competitor to reverse-engineer how Claude Code's agentic harness works and use that knowledge to improve their own products. Some developers may also seek to create open-source versions of Claude Code's agentic harness based on the leaked code. The leaked code also provided further evidence that Anthropic has a new model with the internal name "Capybara" that the company is actively preparing to launch, according to Roy Paz, a senior AI security researcher at LayerX Security. It revealed that the company has a "fast" and "slow" version of the new model and that it will likely be a replacement for Opus, Anthropic's most advanced model on the market. Currently, Anthropic markets each of its models in three different sizes. The largest and most capable model versions are branded Opus; while slightly faster and cheaper, but less capable, versions are branded Sonnet; and the smallest, cheapest, and fastest are called Haiku. In the draft blog post obtained by Fortune last week, Anthropic describes "Capybara" as a new tier of model that is even larger and more capable than Opus, but also more expensive. The newest leak, first made public in an X post, appears to have happened after Anthropic uploaded all of Claude Code's original code to NPM, a platform developers use to share and update software, instead of only the finished version that computers actually run. The mistake looks like a "human error" after someone took a shortcut that bypassed normal release safeguards, Paz said. "Usually, large companies have strict processes and multiple checks before code reaches production, like a vault requiring several keys to open," he told Fortune. "At Anthropic, it seems that the process wasn't in place and a single misconfiguration or misclick suddenly exposed the full source code." Paz also raised concerns about how the tool connects to Anthropic's internal systems. 
Even without special encrypted access keys that would normally be required to access such systems, it appears possible to access internal services that should be restricted, Paz said. He warned this could give malicious actors, including nation-states, new opportunities to exploit Anthropic's models to build more powerful cyberattack tools and bypass the safeguards meant to constrain them. Anthropic's current most powerful model, Claude 4.6 Opus, is already classed by the company as a dangerous model when it comes to cybersecurity risks. Anthropic has said its current Opus models are capable of autonomously identifying zero-day vulnerabilities in software. While these capabilities are intended to help companies detect and fix flaws, they could also be weaponized by hackers, including nation-states, to find and exploit vulnerabilities. This isn't the first time Anthropic has inadvertently leaked details about its popular Claude Code tool. In February 2025, an early version of Claude Code accidentally exposed its original code in a similar breach. The exposure showed how the tool worked behind the scenes as well as how it connected to Anthropic's internal systems. Anthropic later removed the software and took the public code down.

A recent results update from EnQuest highlights Kraken upside, Magnus drilling plans and decommissioning milestones. EnQuest aims to mature the Kraken Enhanced Oil Recovery (EOR) project in the northern UK North Sea, the company said in a results review. The company sees potential to increase the field's recoverable resource by more than 40MMbbl. Following initial polymer testing, the current focus is on ensuring the compatibility of reservoir chemicals with the Armada Kraken FPSO's topside process equipment. EnQuest's team is also working on a fuel gas import project that would involve a subsea tie-back of a gas well on the undeveloped Bressay field to the Armada Kraken FPSO. This would provide an alternative to the diesel currently used to power operations, potentially delivering a marked reduction in the FPSO's emissions and operating costs. The Bressay gas well could form part of an expanded well program, comprising a resumption of drilling at Kraken and P&A of subsea wells. In addition, EnQuest has awarded EnerMech a five-year contract to provide the FPSO operator Bumi Armada with crane management and lifting services on the Armada Kraken. Elsewhere in the UK northern North Sea, EnQuest has completed all P&A activities at the Heather and Thistle fields. Last year, Allseas' Pioneering Spirit vessel removed the Heather topsides from the field, with the jacket set for removal in 2027. Preparations continued for Thistle's removal under the next phase of heavy-lift operations. Teams from EnQuest and Saipem have been collaborating since last April on engineering and planning for the pre-disembarkation preparation phase. Subsea campaigns included the use of a specially engineered conductor drill and pinning tool. Final disembarkation from the platform should take place before mid-year. The Alba field in the central UK North Sea, originally developed by Chevron in the 1990s, should reach cessation of production this summer.
EnQuest has also progressed planning and engineering work on the Kittiwake platform wells and subsea wells in the same sector and at the Magnus field in the East Shetland Basin. In May, a new six-well infill drilling campaign should begin at Magnus, continuing into 2027. The line-up includes well targets in the Lower Kimmeridge Clay Formation (LKCF) reservoir, thought to hold around 325 MMbbl of oil in place: EnQuest aims to access 10MMbbl of production from the next phase LKCF program. Thereafter, it sees potential to deliver a further 28MMboe via low-cost drilling and well intervention opportunities. Following storm damage earlier this year at the third-party operated Ninian Central Platform ('NCP'), which led to a five-week outage for all tied-in fields, including Magnus, EnQuest is seeking to reduce future risks by implementing a bypass alternative to the NCP during 2027.

Within hours of the leak, the massive 512,000-line TypeScript codebase was copied across GitHub and studied by thousands of developers. For Anthropic, this is not just a small mistake. With a reported $19 billion annualized revenue run-rate as of March 2026, the leak is seen as a major loss of valuable intellectual property. Developers found a three-layer memory system described as a "Self-Healing Memory" system. The system also treats its own memory as a "hint," meaning it verifies information instead of blindly trusting it. The leak also revealed a feature called KAIROS which allows Claude Code to run as a background agent. Through a process called autoDream, the system improves and organizes its memory while the user is inactive, making it more efficient when work resumes. Internal model details were also exposed, including codenames like Capybara, Fennec and Numbat. The data shows that even advanced models still face challenges, with some versions having a higher false claims rate than earlier ones. Another feature, "Undercover Mode," suggests the AI can contribute to public projects without revealing its identity. The system includes instructions such as, "You are operating UNDERCOVER... Your commit messages... MUST NOT contain ANY Anthropic-internal information. Do not blow your cover."
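The "memory as a hint" behavior described above maps onto a simple verify-before-trust pattern. A minimal sketch, assuming a revalidation callback; all names here are hypothetical illustrations, not taken from the leaked code:

```typescript
// Verify-before-trust memory: cached values are treated as hints and
// revalidated against ground truth before use.
type Verifier = (key: string, hint: string | undefined) => string;

export function recall(
  memory: Map<string, string>,
  key: string,
  verify: Verifier, // consults the authoritative source (filesystem, API, ...)
): string {
  const hint = memory.get(key);
  const fresh = verify(key, hint);
  if (hint !== fresh) memory.set(key, fresh); // "self-heal" a stale entry
  return fresh;
}

// A stale entry is corrected on first use rather than trusted blindly.
const memory = new Map([["entry_point", "src/main.ts"]]);
const checkOnDisk: Verifier = () => "src/cli.ts"; // pretend the file moved
recall(memory, "entry_point", checkOnDisk);
console.log(memory.get("entry_point")); // "src/cli.ts"
```

The payoff is that a wrong memory costs one extra verification rather than a wrong action, which is presumably why the leaked design calls the result "self-healing."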

Microsoft is using Anthropic's Claude to grade OpenAI's GPT homework inside Copilot. On March 30, the company announced Copilot Cowork through its Frontier early access program, alongside a new Critique feature that pits the two AI models against each other to improve research quality. Two parallel developments underpin the launch: Copilot Cowork, which delegates long-running, multi-step tasks using Anthropic's Claude, and a Critique feature where Claude reviews GPT-generated research before it reaches users. According to Microsoft's January 2026 earnings disclosure, only 15 million paid Copilot seats exist across 450 million commercial Microsoft 365 users, a 3.3% adoption rate that underscores the pressure to demonstrate tangible value from AI tools. Inside Copilot's Researcher agent, the new Critique feature separates generation from evaluation. GPT drafts responses to research queries, and Claude then reviews them for accuracy, completeness, and citation quality before delivery. Rather than relying on a single model to both produce and assess its own output, Microsoft applies the same principle as academic peer review: an independent second opinion from a fundamentally different system. Critique will become the default experience in Researcher when users select Auto in the model picker, embedding multi-model review into the standard workflow. Copilot Cowork itself operates as an orchestrator for long-running workflows within Microsoft 365. Users can initiate multiple tasks simultaneously and manage them through a new dashboard, handling everything from monthly budget reviews to calendar management and meeting preparation.
Microsoft has described Cowork as Claude Code for knowledge workers, running in a sandboxed cloud environment that keeps enterprise data within Microsoft's security boundaries. Unlike a traditional chatbot interaction, Cowork can execute tasks that unfold over hours or days, checking back with users at key decision points rather than requiring constant input. Nicole Herskowitz, Corporate Vice President for Microsoft 365, noted that having multiple AI vendors in Copilot is only the starting point. Making the models collaborate rather than simply offering users a choice between them, she said, represents the real differentiator for the platform. In contrast, a single-model approach leaves evaluation to the same system that produced the output. Separating the roles of drafter and critic creates a structural check that catches errors one model might consistently miss, positioning multi-model review as a quality floor rather than an optional enhancement. According to Microsoft, the multi-model approach delivers measurable gains on research quality. Researcher with Critique turned on scores 57.4 on the DRACO benchmark, an industry standard for deep research quality based on 100 complex tasks across 10 domains. Microsoft's internal testing places that score above Claude Opus 4.6 at 42.7 and Perplexity Deep Research at 50.4. "It is this multi-model advantage that makes Copilot different," Charles Lamanna, President of Business Applications and Agents at Microsoft, said at Cowork's initial announcement. According to Microsoft's blog post, the Critique approach yields a 13.8% improvement over the previous single-model configuration. The largest gains fall in breadth and depth of analysis, followed by presentation quality and factual accuracy. However, no independent third party has verified these results. 
DRACO evaluations were scored using GPT-5.2 as an automated judge model across five independent runs per question, raising questions about whether an OpenAI-built evaluator judging a system built partly on OpenAI technology introduces systematic bias. Furthermore, Microsoft reported statistically meaningful improvements in eight of ten DRACO domains, with a paired t-test yielding p-values below 0.0001. Until independent researchers replicate these results using neutral evaluation models, the benchmark numbers remain a marketing claim rather than an industry-validated finding. Separately, the company is rolling out a Model Council feature that runs Anthropic and OpenAI models simultaneously, producing standalone reports with an automated judge evaluating where the responses agree and diverge. Council gives users direct control over model comparison, letting them see how different systems approach the same research query before choosing which output to use. Only 3.3% of Microsoft's commercial user base pays for Copilot, and the company needs features compelling enough to justify the $99 per user per month E7 AI subscription tier. Copilot Cowork is currently available as an opt-in experimental feature through the Frontier program before broader rollout. Previously limited to a small group of users in Research Preview, Cowork has now expanded to the wider Frontier audience. Microsoft 365 customers with eligible licenses can opt in through their admin portal to test Cowork and the new Critique capabilities ahead of general availability. Capital Group, one of the world's largest investment management firms, is among the early enterprise adopters currently testing Cowork in a regulated financial services environment. "This isn't about generating content or answers. It's about taking real action, connecting steps, coordinating tasks, and following through across everyday workflows. 
Because Cowork operates on our enterprise data and within our security and risk boundaries, we can experiment, learn, and scale with confidence. That allows us to move faster and focus AI in places where it actually delivers value." Warner's emphasis on security boundaries reflects a broader enterprise concern: AI tools that operate outside an organization's data governance create compliance risks that outweigh productivity gains. By running Cowork within Microsoft's tenant architecture, the company positions it as a safer alternative to standalone AI agents that require local system access. For regulated industries like financial services, this containment model could prove more persuasive than raw capability benchmarks. Jared Spataro, Chief Marketing Officer for Microsoft AI at Work, characterized the launch as a shift from Copilot as an assistant to Copilot as an autonomous agent capable of executing multi-step workflows independently. As a result, for enterprise buyers weighing the E7 tier, the central question is whether agentic task delegation and multi-model research verification deliver enough measurable productivity gains to justify the premium subscription cost at scale across large global organizations. Copilot Cowork is built on technology from Anthropic's Claude Cowork, which Anthropic launched for mainstream users in January 2026. Microsoft first added Claude as an OpenAI alternative in Microsoft 365 Copilot in September 2025, then deepened the partnership through a multi-billion-dollar alliance with Nvidia and Anthropic to scale Claude on Azure two months later. Copilot Cowork represents the deepest integration yet between the two companies, embedding Claude's reasoning engine directly into core productivity workflows rather than offering it as merely an alternative model choice within the existing interface. However, Copilot Cowork does not yet match the standalone Claude Cowork's full capabilities. 
It lacks local computer use, cannot interact directly with local files or applications, and has no native integrations with third-party tools outside Microsoft 365. For organizations that rely on non-Microsoft productivity tools, those gaps limit Cowork's ability to serve as a true end-to-end workflow agent. Microsoft has acknowledged the constraints and says it expects the Critique process to eventually run in both directions, giving users the option of having Claude draft and GPT critique. Broader availability beyond the invite-only Frontier program has no firm public timeline. Whether expanded access arrives fast enough to move the adoption needle beyond 3.3% will ultimately determine whether the multi-model strategy becomes a lasting competitive advantage or remains an expensive experiment.
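The generate-then-critique loop Microsoft describes, where one model drafts and a structurally independent model reviews before release, can be sketched as follows. This is an illustrative stub only: draft_with_gpt and critique_with_claude are hypothetical stand-ins for real model API calls, and the review criteria are simplified to a single check.

```python
# Illustrative generate-then-critique pipeline: a drafting model produces an
# answer, a different reviewing model evaluates it, and the draft is revised
# until the critic approves or a round limit is hit. Model calls are stubbed.
def draft_with_gpt(query: str) -> str:
    return f"Draft answer to: {query}"        # stand-in for the drafting model

def critique_with_claude(draft: str) -> dict:
    # Stand-in for the reviewing model; a real critic would check accuracy,
    # completeness, and citation quality. Here we only check for citations.
    issues = [] if "citation" in draft else ["missing citations"]
    return {"approved": not issues, "issues": issues}

def research(query: str, max_rounds: int = 2) -> str:
    draft = draft_with_gpt(query)
    for _ in range(max_rounds):
        review = critique_with_claude(draft)
        if review["approved"]:
            return draft
        # Feed the critique back to the drafter as revision instructions.
        draft += " [revised: " + "; ".join(review["issues"]) + " fixed, citation added]"
    return draft

print(research("Q1 revenue trends"))
```

The design choice worth noting is the separation of roles: because the critic never drafted the text, it has no stake in approving its own mistakes, which is the structural check the article attributes to Critique.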

Decentralized repos made the leak effectively permanent and uncontrollable. Anthropic didn't mean to open-source Claude Code. But on Tuesday, the company effectively did -- and not even an army of lawyers can put that toothpaste back in the tube. It started with a single file. Claude Code version 2.1.88, pushed to the npm registry in the early hours of Tuesday morning, shipped with a 59.8MB JavaScript source map -- a debug file that can reconstruct the original code from its compressed form. These files are generated automatically and are supposed to stay private. But a single line in the ignore settings let it go out with the release. Intern and researcher Chaofan Shou, who appears to be among the first to spot the file, posted a download link to X around 4:23 a.m. ET, and watched 16 million people descend on the thread. Anthropic yanked the npm package, but the internet had already archived 512,000 lines of code across 1,900 different files that make up a major part of the project. "Earlier today, a Claude Code release included some internal source code. No sensitive customer data or credentials were involved or exposed," an Anthropic spokesperson told Decrypt. "This was a release packaging issue caused by human error, not a security breach. We're rolling out measures to prevent this from happening again." The leak exposed the full internal architecture of what is arguably the most sophisticated AI coding agent on the market: LLM API orchestration, multi-agent coordination, permission logic, OAuth flows, and 44 hidden feature flags covering unreleased functionality. Among the finds: Kairos, an always-on background daemon that stores memory logs and performs nightly "dreaming" to consolidate knowledge. And Buddy, a Tamagotchi-style AI pet with 18 species, rarity tiers, and stats including debugging, patience, chaos, and wisdom. There's a teaser rollout for this "Buddy" apparently planned for April 1-7.
Then there's the detail that made everyone on Hacker News cackle. Per leaker Kuberwastaken, buried inside the code was "Undercover Mode" -- a whole subsystem designed to prevent the AI from accidentally leaking Anthropic's internal codenames and project names when contributing to open-source repositories. The system prompt injected into Claude's context literally says: "Do not blow your cover." Apparently, Anthropic began issuing DMCA takedowns against GitHub mirrors. That's when things got interesting. A Korean developer named Sigrid Jin -- featured in the Wall Street Journal earlier this month for having consumed 25 billion Claude Code tokens -- woke up at 4 a.m. to the news. He sat down, ported the core architecture to Python from scratch using an AI orchestration tool called oh-my-codex, and pushed claw-code before sunrise. The repo hit 30,000 GitHub stars faster than any repository in history. It's basically a translation of all the code from the original language to Python, so technically not the same thing, right? We'll leave that to lawyers and tech philosophers. The legal logic here is sharp. Gergely Orosz, founder of The Pragmatic Engineer newsletter, argued in a post on X: "This is either brilliant or scary: Anthropic accidentally leaked the TS source code of Claude Code. Repos sharing the source are taken down with DMCA. BUT this repo rewrote the code using Python, and so it violates no copyright & cannot be taken down!" The pitch is a clean-room-style rewrite: a new creative work, DMCA-proof by design -- though a port made with the original source in hand stretches the usual definition of clean-room. The copyright angle gets thornier still given the legal status of AI-generated work: U.S. courts have held that works generated entirely by AI, without a human author, carry no automatic copyright. The DC Circuit upheld that position in March 2025, and the Supreme Court declined to hear the challenge.
If significant chunks of Claude Code were written by Claude itself -- which Anthropic's own CEO has implied -- then the legal standing of any copyright claim gets murkier by the day. Decentralization adds another layer of permanence. The account @gitlawb mirrored the original code to Gitlawb, a decentralized git platform, with a simple message: "Will never be taken down." The original remains accessible there. A separate repository has compiled all of Claude's internal system prompts, which prompt engineers and jailbreakers will appreciate for the insight it gives into how Anthropic conditions its models. This matters beyond the drama. DMCA takedowns work against centralized platforms. GitHub complies because it has to. Decentralized infrastructure -- which powers Gitlawb, torrents, and cryptocurrency itself -- doesn't have the same single point of failure. When a company tries to pull something back from the internet, the only question is how many mirrors exist and on what kind of infrastructure. The answer here, within hours, was: enough.
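Mechanically, the failure mode behind the leak is easy to guard against: inspect what will ship before publishing. A minimal sketch of such a pre-publish check, assuming a conventional dist/ build directory -- the paths and function names are illustrative, not Anthropic's actual pipeline:

```python
# Minimal pre-publish guard: fail the release if any source map files are
# present in the directory that will be packed and published. Assumes a
# conventional layout with compiled output under dist/; adapt the path.
from pathlib import Path

def find_source_maps(package_dir: str) -> list[str]:
    """Return relative paths of .map files that would ship with the package."""
    root = Path(package_dir)
    return sorted(str(p.relative_to(root)) for p in root.rglob("*.map"))

def check_release(package_dir: str) -> None:
    leaks = find_source_maps(package_dir)
    if leaks:
        raise SystemExit(f"Refusing to publish: source maps found: {leaks}")

# Example: a dist/ tree containing a stray .js.map aborts the publish step.
import tempfile
with tempfile.TemporaryDirectory() as dist:
    Path(dist, "cli.js").write_text("// minified bundle")
    Path(dist, "cli.js.map").write_text("{}")
    print(find_source_maps(dist))  # ['cli.js.map']
```

In practice the same inspection is available natively: `npm pack --dry-run` lists exactly which files would be uploaded to the registry, and an explicit `files` whitelist in package.json is generally safer than relying on .npmignore exclusions, since a whitelist fails closed when a new artifact type appears.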

A fuel crisis in Bangladesh, triggered by the West Asia conflict, has led to chaotic queues and hoarding. Authorities have recovered hoarded fuel, while India has supplied 15,000 tons of diesel to ease the shortage. The government is seeking new import sources. The escalating security situation in West Asia and the Gulf region has affected energy supplies across parts of the world, with Bangladeshis calling on the government to ensure protection against chaos and confusion. Syed Sajjadul Karim Kabul, Convenor of the Bangladesh Petrol Pump Owners' Association, told ANI, "The demand and buyer of the fuel are abnormal... I have requested the government to give enough support in terms of what to say regarding the protection against any chaos and confusion. The government has specifically given specialised officers in each and every filling station to monitor the sale and distribution..." He added, "We have suggested to give a timeframe so that we can deliver properly and opportunists should not hoard the product." Crackdown on Fuel Hoarding Earlier, queues formed outside petrol pumps across Bangladesh. Dhaka Tribune reported on Tuesday, citing officials, that district administrations across 64 districts recovered 87,700 liters of illegally hoarded fuel in 24 hours. As per Dhaka Tribune, during the drives, 191 cases were filed and fines totalling Tk 935,070 were imposed. The details were shared by Monir Hossain Chowdhury, spokesperson and joint secretary of the Energy and Mineral Resources Division, at a press briefing at the Secretariat. He said 391 drives were conducted during the period based on reports from district administrations. It was also reported that seven individuals were sentenced during the operations. One person in Satkhira received a two-month jail term, one in Chandpur was sentenced to one year, and one in Gazipur received one month.
According to Dhaka Tribune, of the recovered fuel, 67,400 liters were diesel, 6,444 liters octane, and 13,856 liters petrol. India Extends Support Amid the energy crisis in Bangladesh caused by the conflict in West Asia, India has supplied an additional 5,000 tons of diesel, a senior government official said on Friday night. "An additional 5,000 tons of diesel have arrived in Bangladesh from India. With this, Bangladesh has now received a total of 15,000 tons of diesel from India in recent times," Md. Murshed Hossain Azad, General Manager (Commercial), Bangladesh Petroleum Corporation (BPC), told ANI over the phone. "In the coming month of April, India has proposed to supply 40,000 tons of diesel to Bangladesh. We have officially accepted this proposal," Azad said, without elaborating. Diversification Efforts and Price Hikes Bangladesh imports diesel primarily from India, Singapore, and the Middle East. Meanwhile, the Daily Star reported that the government is moving to diversify its fuel imports. It has reached out to Singapore, Malaysia, Nigeria, Azerbaijan, Kazakhstan, Angola, Australia and the US for potential fuel and gas supplies. The country is also expecting two additional shipments of around 6,000 tonnes from Indonesia. Also in March, the price of aviation fuel (jet fuel) was increased for the second time in a month in Bangladesh. Due to the ongoing conflict in West Asia, the fuel crisis in Bangladesh has taken on a severe form, especially at fuel stations, where there are long queues and chaotic conditions, with petrol pump owners also expressing serious concerns. (Except for the headline, this story has not been edited by Asianet Newsable English staff and is published from a syndicated feed.)

Two satellite anomalies in three weeks. That's the uncomfortable reality now confronting SpaceX as its Starlink mega-constellation -- the largest satellite network ever assembled -- shows signs of strain that could complicate the company's ambitious plans for global broadband dominance and its increasingly critical role in national security. On July 14, SpaceX disclosed that a Starlink satellite launched just days earlier had experienced an anomaly that left it unable to maintain its orbit. The satellite, part of a batch deployed from a Falcon 9 rocket on July 9, will reenter Earth's atmosphere and burn up, the company confirmed in a post on X. This followed a strikingly similar incident less than three weeks prior, when another newly launched Starlink satellite suffered what SpaceX described as an onboard anomaly shortly after deployment, as first reported by Futurism. SpaceX, characteristically, framed both events as manageable. The company emphasized that the affected satellites were designed to safely deorbit and that the constellation's overall performance remained unaffected. "The satellite will reenter Earth's atmosphere and fully demise," SpaceX wrote on its official Starlink account, a formulation that has become boilerplate for these disclosures. But two anomalies in rapid succession raise questions that a press release can't easily answer. The timing matters. SpaceX has been launching Starlink satellites at a blistering pace -- sometimes multiple missions per week -- as it races to complete its second-generation constellation and stay ahead of emerging competitors like Amazon's Project Kuiper, which plans its first large-scale deployments later this year. The company now has more than 6,000 Starlink satellites in orbit, making it the operator of roughly 60% of all active spacecraft. At that scale, occasional failures are statistically inevitable.
SpaceX has previously acknowledged losing batches of satellites to geomagnetic storms and individual units to manufacturing defects. What's different now is the cadence. And the context. SpaceX's Starlink network has become far more than a commercial broadband service. It is a backbone of U.S. military communications, with the Department of Defense relying on a classified variant called Starshield for sensitive operations. The network proved its strategic value during the early months of Russia's invasion of Ukraine, providing connectivity when terrestrial infrastructure was destroyed. Any pattern of hardware failures -- even minor ones -- carries implications that extend well beyond subscriber internet speeds. The company hasn't disclosed root causes for either recent anomaly, and it's unclear whether the two incidents share a common technical thread. SpaceX's general approach has been to iterate rapidly, treating satellite losses as an acceptable cost of its high-volume manufacturing and launch model. Each Starlink satellite costs a fraction of traditional communications satellites, and the constellation is designed with enough redundancy that losing individual units doesn't degrade service. This philosophy -- build cheap, launch often, replace failures quickly -- has been central to SpaceX's ability to deploy at a pace no competitor can match. Still, the math gets harder as the constellation grows. SpaceX has authorization from the Federal Communications Commission to deploy up to 12,000 satellites in its first-generation constellation, and has applied for permission to operate as many as 42,000. Managing a fleet of that size requires extraordinary quality control on the production line and precise coordination in orbit. Satellites must be actively deorbited at end of life to avoid contributing to the growing problem of space debris -- a concern that regulators, rival operators, and astronomers have raised with increasing urgency. 
The European Space Agency has flagged Starlink satellites as a leading source of conjunction alerts -- close approaches that force other spacecraft to maneuver out of the way. Each uncontrolled reentry, even when a satellite burns up harmlessly, represents a unit that couldn't be guided to a targeted disposal. Two such events in three weeks, both involving brand-new hardware, suggest something went wrong early in the satellites' operational lives. SpaceX's competitors are watching closely. Amazon's Project Kuiper, backed by Jeff Bezos's deep pockets, has positioned itself as a more methodical alternative, though it has yet to prove it can manufacture and deploy satellites at anything approaching SpaceX's volume. Telesat, OneWeb (now owned by Eutelsat), and several Chinese state-backed ventures are also building or planning low-Earth-orbit constellations. For all of them, SpaceX's stumbles -- however minor -- offer a reminder that operating thousands of satellites simultaneously is an engineering challenge no one has fully mastered. Investors, too, have reason to pay attention. SpaceX is privately held and doesn't report quarterly earnings, but the company's valuation -- north of $350 billion as of its most recent funding round -- is built substantially on Starlink's projected revenue. Morgan Stanley has estimated that Starlink could eventually generate more than $30 billion annually. That projection depends on continuous service quality, regulatory goodwill, and a production pipeline that doesn't develop systemic flaws. So far, there's no evidence of a systemic problem. Two satellites out of thousands is a rounding error in pure numerical terms. But perception matters in an industry where government contracts and spectrum licenses hinge on demonstrated reliability.
The FCC's conditions for SpaceX's constellation licenses include requirements for orbital debris mitigation, and repeated anomalies could invite closer regulatory scrutiny -- something Elon Musk's companies have not always welcomed gracefully. SpaceX's transparency on these events has been limited to terse social media posts. No detailed failure reports. No press conferences. The company's communication style mirrors its engineering philosophy: move fast, fix problems in the next iteration, don't dwell. That approach works until it doesn't -- until a pattern emerges that demands a more thorough public accounting. For now, the Starlink constellation continues to function, serving more than four million subscribers across dozens of countries. The two lost satellites will burn up in the atmosphere, leaving no debris. SpaceX will almost certainly launch replacements within weeks, if not days. The machine keeps running. But the question hanging over Hawthorne and Boca Chica is whether these anomalies are isolated hiccups or early indicators of a quality-control challenge that will only intensify as production scales toward tens of thousands of spacecraft. At the pace SpaceX is moving, the answer won't take long to reveal itself.

SpaceX is the most valuable private company on the planet, and almost nobody outside a narrow circle of insiders can buy shares in it. That tension -- between a company whose rockets light up public skies and a stock that remains stubbornly private -- has defined one of the strangest investment stories of the past decade. Now, as speculation about an eventual initial public offering builds again, ordinary investors are asking a simple question: Will the door ever open? The short answer is complicated. The longer answer involves the peculiar mechanics of pre-IPO share sales, the evolving regulatory treatment of private securities, and Elon Musk's own stated reluctance to take SpaceX public anytime soon. Understanding all of it matters, because the outcome will say something not just about one company but about who gets to participate in the wealth created by America's most ambitious enterprises. As The Motley Fool recently reported, individual investors face an uphill battle when it comes to participating in a SpaceX IPO -- if one ever happens. The company's valuation has soared past $350 billion in private tender offers, making it worth more than most publicly traded companies in the S&P 500. But that valuation has been set by a small group of institutional investors, sovereign wealth funds, and venture capital firms trading shares among themselves. Retail investors, the people who buy stocks through Fidelity or Schwab accounts, have been almost entirely excluded from the process. This isn't unusual for private companies. It is, however, unusually consequential. SpaceX isn't just any startup. It operates the world's dominant launch service through Falcon 9, is building the most powerful rocket ever constructed in Starship, and runs Starlink, a satellite internet constellation that already serves millions of customers in more than 70 countries. 
The company generates real revenue -- billions of dollars annually from launch contracts and Starlink subscriptions -- and its growth trajectory has made early investors fabulously wealthy. According to reporting from CNBC, SpaceX's valuation in recent secondary transactions has placed it as the second most valuable company in the United States by some measures, trailing only Apple and occasionally jockeying with Nvidia and Microsoft depending on the day. The gains have been staggering. Investors who bought into SpaceX's early funding rounds have seen returns exceeding 100x their initial capital. But those investors were, almost without exception, institutions or ultra-high-net-worth individuals who qualified as accredited investors under SEC rules. The wealth creation happened behind closed doors. So what happens when SpaceX does eventually go public? The Motley Fool's analysis suggests that even in an IPO scenario, individual investors may find themselves at a disadvantage. Here's why. In a typical IPO, shares are allocated first to institutional clients of the underwriting banks. Large mutual funds, pension funds, and hedge funds get first access at the offering price. Retail investors usually can only buy shares once trading begins on the open market -- often at a significant premium to the IPO price. The phenomenon is well-documented: IPO "pops" of 20%, 40%, even 100% on the first day of trading mean that ordinary buyers are paying far more than the insiders who got in at the offering price. And SpaceX would likely be one of the most oversubscribed IPOs in history. Musk himself has sent mixed signals about timing. He has repeatedly said he won't take SpaceX public until Starship is flying regularly to Mars, a milestone that could be years away. But he has also indicated that Starlink, the satellite broadband division, could be spun off as a separate public company sooner. 
That possibility has generated enormous interest on Wall Street and among retail investors who see Starlink as a more accessible entry point into the SpaceX story. The speculation isn't idle. In early 2025, reports from Bloomberg indicated that SpaceX had begun internal discussions about the structure a Starlink IPO might take, though no formal filing had been made. The company reportedly explored both a traditional IPO and a direct listing, each carrying different implications for retail investor access. A direct listing, the method used by Spotify and Coinbase, would theoretically allow individual investors to buy shares from the first moment of trading without the institutional allocation process. But direct listings come with their own complications, including potentially higher volatility on the first trading day and no price stabilization from underwriters. For now, the only way most individual investors can get exposure to SpaceX is through indirect means. Several closed-end funds and private market platforms have offered fractional interests in SpaceX shares purchased on the secondary market. Platforms like Forge Global and EquityZen have facilitated transactions in SpaceX stock, though minimum investments are typically $50,000 or more and the shares come with significant restrictions on resale. The Motley Fool noted that some publicly traded investment vehicles, including certain venture-focused ETFs and Cathie Wood's ARK Investment Management funds, hold small allocations to SpaceX acquired through secondary purchases. But these positions are diluted across broader portfolios, giving investors only marginal exposure to SpaceX's performance. The situation highlights a structural issue in American capital markets that has been growing for two decades. Companies are staying private longer. Much longer. In the 1990s, the median age of a technology company at IPO was about four years. Today it's closer to twelve. 
The result is that the most explosive period of value creation -- the years when a company goes from promising startup to dominant market player -- increasingly happens while shares are held exclusively by private investors. By the time a company goes public, much of the upside has already been captured. This trend has not gone unnoticed by regulators. The SEC under various administrations has debated expanding the definition of accredited investor to allow more individuals to participate in private markets. Currently, you must have a net worth exceeding $1 million (excluding your primary residence) or annual income above $200,000 to qualify. Some reform proposals would add criteria based on financial sophistication, professional certifications, or demonstrated investment knowledge. But progress has been slow, and the fundamental gatekeeping mechanism remains intact. SpaceX sits at the center of this debate because it represents the most extreme version of the trend. A company worth more than $350 billion that has never sold a single share to the general public. The contrast with an earlier era is stark. When companies like Microsoft, Amazon, and Google went public, they were worth a fraction of their current valuations, and ordinary investors who bought shares in those IPOs -- or even in the first years of trading -- generated life-changing returns. SpaceX's investors have already generated those returns. In private. There's another wrinkle that the Motley Fool piece touches on: even if SpaceX does IPO, the company's dual-class share structure would almost certainly give Musk and existing insiders overwhelming voting control. This is now standard practice among founder-led technology companies, but it means that public shareholders would have little say in corporate governance. You'd own a piece of the economics. Not the steering wheel. Recent developments have added new dimensions to the story. 
SpaceX completed another massive tender offer in late 2025, allowing employees and early investors to sell shares at a valuation that represented a roughly 25% premium over the previous round just months earlier. The pace of these tender offers has accelerated, with SpaceX now conducting them roughly every six months. Each one generates a fresh wave of headlines and a fresh wave of frustration from retail investors who can't participate. Meanwhile, Starlink's business continues to grow at a pace that makes the IPO question increasingly urgent. The service surpassed 5 million subscribers in early 2026, according to disclosures in SpaceX's investor materials reported by Reuters. Revenue from Starlink alone is projected to exceed $10 billion annually, making it one of the fastest-growing telecommunications businesses in the world. At some point, the pressure to provide liquidity to employees and early investors through a public offering may override Musk's preference for staying private. Employee stock options have expiration dates. Early venture investors have fund lifecycle constraints. The clock is ticking, even if Musk controls the alarm. For individual investors watching from the outside, the strategic calculus comes down to patience and positioning. If a Starlink IPO does materialize, the demand will be extraordinary. Getting an allocation at the offering price through a brokerage account is possible but unlikely for most retail clients -- those shares will go to the biggest institutional customers of the lead underwriters. Buying on the first day of trading is possible but risky, given the likelihood of a massive first-day premium. And buying after the initial frenzy subsides requires discipline and a willingness to wait for the stock to find its footing, which could take weeks or months. Some financial advisors have suggested a different approach entirely. 
Rather than chasing a SpaceX or Starlink IPO, investors can build exposure to the broader commercial space industry through publicly traded companies that are SpaceX suppliers, partners, or competitors. Rocket Lab, which trades on the Nasdaq, builds launch vehicles and spacecraft components. L3Harris Technologies provides satellite and defense systems. Amazon's Project Kuiper, while not separately traded, adds space-related optionality to Amazon's stock. None of these are substitutes for owning SpaceX directly, but they offer a way to participate in the same secular growth trends without the access constraints of private markets. But let's be honest. People don't want a substitute. They want SpaceX. The company occupies a unique position in the American imagination -- part defense contractor, part technology startup, part embodiment of the frontier spirit that Musk markets so effectively. It builds things that fly into space and land themselves on drone ships in the ocean. It is connecting rural communities in Montana and fishing villages in Indonesia to high-speed internet for the first time. It is, by any reasonable measure, one of the most consequential companies of the 21st century. And the fact that ordinary people can watch its rockets launch on YouTube but can't buy its stock on their phones feels, to many, like a fundamental unfairness baked into the system. Whether that changes -- and when -- depends on forces that no individual investor can control. Musk's timeline. The SEC's appetite for reform. The internal financial pressures building inside SpaceX as employee equity packages mature. What individual investors can control is their readiness. Having a brokerage account with a major underwriter. Understanding how IPO allocations work. Setting realistic expectations about first-day pricing. And recognizing that the best investment opportunities often require the most patience. SpaceX will go public eventually. Everything does. 
The question isn't if but when, and at what price, and who gets to participate on terms that still leave room for meaningful returns. For the millions of individual investors who have watched this company's ascent from the outside, that question isn't academic. It's personal. And the gate, for now, remains locked.

SpaceX has confirmed that one of its Starlink internet satellites suffered an unidentified "anomaly" in low-Earth orbit, resulting in a loss of communication. The company, owned by Elon Musk, reported that Starlink satellite 34343 experienced a problem at an altitude of approximately 560 kilometers. In a statement posted to X (formerly Twitter), SpaceX reassured the public and the scientific community that the crippled satellite poses no danger to human life in space: "On Sunday, March 29, Starlink satellite 34343 experienced an anomaly on-orbit, resulting in loss of communications with the satellite at ~560 km above Earth. Latest analysis shows the event poses no new risk to the @Space_Station, its crew, or to the upcoming launch of NASA's Artemis II mission. We will continue to monitor the satellite along with any trackable debris and coordinate with @NASA and the @USSpaceForce. The event also posed no new risk to this morning's Transporter-16 mission, which was designed to avoid Starlink with payload deploys well above or well below the constellation. The SpaceX and Starlink teams are actively working to determine root cause and will rapidly implement any necessary corrective actions."

While SpaceX has not yet identified the exact cause of the failure, outside experts are weighing in. LeoLabs, a firm that monitors orbital traffic using radar, suggested that the incident was likely caused by an "internal energetic source" rather than a collision with another object. "LeoLabs detected a fragment creation event involving SpaceX Starlink 34343 on 29 March 2026. LeoLabs Global Radar Network immediately detected tens of objects in the vicinity of the satellite after the event, with a first pass over our radar site in the Azores, Portugal. Additional fragments may have been produced," it said. "We've characterized this event as likely caused by an internal energetic source rather than a collision with space debris or another object. Due to the low altitude of the event, fragments from this anomaly will likely de-orbit within a few weeks," LeoLabs added. "Our analysis indicates this event is similar to a previous event involving Starlink 35956 on 17 December 2025. These events illustrate the need for rapid characterization of anomalous events to enable clarity of the operating environment," it noted. In December, SpaceX said that Starlink satellite 35956 had experienced a malfunction at an altitude of approximately 418 kilometers. While that satellite remains largely intact, it is currently "tumbling" and is expected to fall back through the atmosphere and burn up within a few weeks. SpaceX confirmed that the anomaly led to the "venting of the propulsion tank," which caused the satellite to drop about 4 kilometers in altitude almost immediately.
Anthropic accidentally exposed the full source code of its AI tool Claude Code due to a packaging error. While no user data or core AI systems were affected, the leak revealed the tool's internal workings, raising concerns over security practices. Anthropic, the San Francisco-based artificial intelligence (AI) company, on Tuesday (local time) inadvertently exposed the entire source code of Claude Code, its AI coding tool, NDTV reported. The source code was exposed through a basic packaging oversight that, according to security researchers, should never happen in a finished software product. Security researcher Chaofan Shou on Tuesday found that Claude Code, the AI company's flagship command-line coding tool, had exposed its full source code. The issue stemmed from a 60MB source map file (cli.js.map) bundled within its npm package, which made it possible to recreate the original TypeScript code from the compiled version, the report added. The npm registry, where the package was hosted, is the largest public repository for software packages and is widely used by developers to distribute and access tools. According to BlockBeats, the leak affects only the Claude Code tool itself and does not include user data or the AI's core systems, so it doesn't pose a direct risk to regular users. In simple terms, your personal information and chats are safe. However, because the full code is now visible, anyone can see how the tool is built, how it works behind the scenes, and how it handles things like usage tracking and security. A source map is an additional file used in development that links a program's compressed, production-ready code back to its original, human-readable version. It helps developers debug and troubleshoot issues more efficiently. However, such files are not meant to be included in public releases, as they can effectively expose the entire underlying codebase.
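To make the failure mode concrete: a source map is plain JSON, and when the bundler embeds a `sourcesContent` array (a common default in many build tools), the map carries every original file verbatim, so "recreating" the source requires no reverse engineering at all. A minimal sketch in Node, using a made-up two-file map rather than Anthropic's actual artifact:

```javascript
// Sketch: why a shipped .map file can expose an entire codebase.
// The map below is invented for illustration; a real one from a
// large project would list thousands of entries in `sources`.
const sampleMap = {
  version: 3,
  file: "cli.js",
  sources: ["src/index.ts", "src/telemetry.ts"],
  sourcesContent: [
    "export const main = () => console.log('hello');",
    "export function track(event: string) { /* ... */ }",
  ],
  mappings: "AAAA", // positional data; irrelevant for source recovery
};

// Recovering the original files is a two-line loop over the JSON:
const recovered = Object.fromEntries(
  sampleMap.sources.map((p, i) => [p, sampleMap.sourcesContent[i]])
);

console.log(Object.keys(recovered).length); // → 2 (one entry per original file)
```

In the Claude Code case the map pointed at externally hosted source rather than relying solely on inlined content, but the effect is the same: anyone holding the .map file can reconstruct the original TypeScript tree.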
According to BlockBeats, the latest version of Claude Code (v2.1.88), released on 31 March, still included this file. It reportedly contained the full code for 1,906 proprietary source files, detailing elements such as internal API structures, telemetry systems, encryption mechanisms, and inter-process communication protocols.

Tesla, Inc. (NASDAQ:TSLA) is one of the Goldman Sachs AI Stocks: Top 12 Stocks to Buy. On March 23, 2026, Reuters reported that CEO Elon Musk said the day before that Tesla, Inc. (NASDAQ:TSLA) and SpaceX aim to establish two advanced chip factories in Austin, Texas, as part of the "Terafab" project. The complex will include two fabs, each dedicated to a single chip design: one for the company's automobiles and Optimus humanoid robots, and the other for AI data centers in space. Musk said the businesses must build the facility to meet future chip demand, estimating that current world supply will cover only approximately 3% of their needs. The Terafab venture is a collaboration between SpaceX, Tesla, Inc. (NASDAQ:TSLA), and xAI, with no planned completion date revealed. Musk stated that the facility could produce one terawatt of computing capacity per year, compared to around half a terawatt now generated in the United States. He acknowledged the company's reliance on suppliers such as Samsung, TSMC, and Micron, while expecting that internal demand will soon exceed global chip supply. Tesla, Inc. (NASDAQ:TSLA) is a developer, manufacturer, designer, lessor, and seller of electric vehicles and energy generation and storage systems. The company operates across China, the United States, and globally through its Automotive and Energy Generation and Storage segments. While we acknowledge the potential of TSLA as an investment, we believe certain AI stocks offer greater upside potential and carry less downside risk.