The latest news and updates from companies in the WLTH portfolio.
New Delhi: Finance Minister Nirmala Sitharaman on Thursday met heads of banks on risks related to Artificial Intelligence (AI), following global concerns that Anthropic's Mythos model threatens the data security of financial systems. The meeting assumes significance in view of the development of the Claude Mythos AI model by Anthropic, which claims to have found vulnerabilities in many major operating systems. According to sources, the risks posed by AI and the measures needed to deal with them were discussed at the meeting. The meeting, chaired by the Finance Minister, deliberated on the various risks AI poses to the financial sector, sources said, adding that banks have been urged to take preemptive measures to secure their systems, data and customers' money. The meeting was attended by top officials of banks, the Reserve Bank of India, and the Ministry of Electronics and Information Technology. According to a senior finance ministry official, the ministry and the RBI are studying the extent of the risks the Indian financial sector faces from this breach.

Anthropic's valuation has reportedly hit $1 trillion in secondary markets, reflecting strong investor demand for AI firms, according to Business Insider Africa.
Anthropic has reportedly reached a $1 trillion valuation in secondary markets, surpassing other leading AI firms, according to a report by Business Insider Africa. The report said the valuation comes from recent secondary share transactions, where existing investors traded stakes at significantly higher prices, rather than from a new primary funding round. This makes Anthropic one of the most valuable private companies globally based on secondary market activity. According to Business Insider Africa, strong investor demand for exposure to artificial intelligence companies has pushed up valuations in private markets. Anthropic, which develops the Claude family of AI models, has benefited from this surge as enterprises continue to adopt generative AI tools. The report noted that secondary market trades often reflect investor expectations of future growth, especially for companies that are not yet publicly listed but are seen as leaders in high-growth sectors like AI. The reported valuation puts Anthropic ahead of several competitors in the AI space in terms of secondary market pricing. While these figures are not official valuations set during fundraising rounds, they provide a signal of how investors are pricing the company relative to its peers. Business Insider Africa reported that this surge highlights the increasing willingness of investors to pay a premium for stakes in companies developing large-scale AI models and infrastructure. The report also pointed out that secondary markets are becoming a key avenue for investors to gain access to high-demand private companies, particularly as IPO timelines remain uncertain. These markets allow early investors and employees to liquidate shares while offering new investors entry points.
However, such valuations can be volatile and may not always align with formal funding round valuations. The development, as reported by Business Insider Africa, underscores the continued surge in AI-driven valuations, with secondary markets emerging as an early indicator of investor sentiment toward leading AI companies.

SACRAMENTO - Cursor, the fast-growing AI-first coding editor built on Visual Studio Code that uses AI as its core feature rather than a simple add-on, was co-founded by Pakistani entrepreneur Sualeh. The startup is making waves online, and the world's wealthiest person, Elon Musk, is reportedly interested in acquiring it through his company SpaceX. The AI startup helps developers write, edit, debug, and refactor code using natural language, while also understanding entire codebases and handling complex multi-step programming tasks through agent-like workflows. Sualeh is now in discussions with SpaceX over a staggering $60 billion acquisition. If completed, the deal would instantly become one of the largest tech acquisitions in history. SpaceX is not just expanding, it is transforming. As the company prepares for what could be a historic IPO potentially valuing it above $1 trillion, it is aggressively moving beyond space exploration into artificial intelligence, signaling a dramatic strategic shift. Sualeh's startup has already surpassed $1 billion in annual recurring revenue and is used daily by more than one million developers worldwide. Even more striking, it has reportedly been integrated into workflows across 67% of Fortune 500 companies, making it one of the most deeply embedded AI tools in enterprise software. If finalized, the deal would be a turning point for SpaceX, evolving it from a spaceflight giant into a full-scale artificial intelligence powerhouse competing at the highest level of the global AI race. Analysts point to a deeper issue highlighted by this moment: the widening gap between talent-rich countries like Pakistan and the infrastructure available in Western tech ecosystems. Many argue this gap is now one of the most expensive lost opportunities in the global economy. Sualeh's rise has become symbolic.
For many young Pakistanis, it represents both pride and possibility, the belief that world-changing companies can emerge from anywhere. But it also underscores a hard truth: talent alone is not enough. With stronger policy support, investment, and institutional backing, experts say Pakistan could unlock a wave of innovation capable of producing its own multi-billion, or even trillion-dollar, success stories. The next era of global tech dominance may not just be about space or software, but about the explosive fusion of both, led by AI.

A $10 billion AI startup that supplies training data to companies like OpenAI, Anthropic, and Meta is facing a wave of lawsuits over how it collects and handles sensitive worker data. According to a report by The Wall Street Journal, at least seven class-action lawsuits have been filed against Mercor in recent weeks following a third-party data breach that allegedly exposed contractor information. The lawsuits claim the breach included highly sensitive material, ranging from recorded job interviews to facial biometric data and screenshots of workers' computers. One class-action suit filed in Northern California claims Mercor collected extensive applicant data -- including background checks -- and shared it with partners in violation of federal regulations. Plaintiffs allege that Mercor's practices go beyond standard hiring processes. According to the suit, the company monitored contractors' computers, used recorded candidate interviews to train AI models, and may have trained client systems on materials owned by other companies. In one account cited in the lawsuit, a plaintiff alleged that workers were encouraged to use real company data in tasks, provided it was slightly altered or anonymised. When the plaintiff tried to avoid including sensitive details, reviewers reportedly pushed back, criticising the work as too vague. Another contractor alleged he encountered financial models and prompts that appeared to contain proprietary information, including what the lawsuit describes as "pre-project metadata, hidden defined names, institutional data-terminal markers, real lender or counterparty names, irregular numeric precision, and other features that raised serious provenance questions." Mercor has denied the allegations. "We strongly dispute the speculative claims in these lawsuits and look forward to presenting the facts at the appropriate time and place," the company said in a statement. 
It added that it "take[s] the privacy of our customers, contractors, employees and those we interview very seriously" and that it complies with relevant laws and regulations. The company also said it acted quickly to address the breach, noting that "we are conducting a thorough investigation with leading third-party forensics experts and are communicating directly with affected stakeholder groups as we have findings." The case is drawing attention to how AI companies source the data used to train models. The Journal reports that Mercor previously attempted to buy work materials from individuals on LinkedIn, including documents those individuals did not necessarily own the rights to. Online postings also suggested the company offered payments for personal-finance files and even Google Maps histories. Workers also described a system of continuous monitoring. Contractors are required to install tracking software called Insightful, which captures screenshots of their computers during work sessions. One lawsuit alleges that this software recorded activity across hundreds of applications, including personal accounts, and that workers were not "clearly informed" of the extent of the monitoring. Mercor said it informs workers that screenshots may be taken during billing hours and instructs them to use only work-related applications while the software is active. The fallout is already affecting Mercor's relationships. Meta has paused its work with the company and is investigating the incident, according to a spokesperson cited by the Journal. Anthropic declined to comment, while OpenAI did not respond to requests. The situation highlights growing tension in the AI industry, where companies are under pressure to secure large volumes of high-quality data to train increasingly advanced models.

Flight delays could hit Stansted Airport during the first May bank holiday weekend. Strikes will go ahead at the airport after a pay offer workers deemed substandard was rejected. Around 100 ABM workers, who support passengers with disabilities, will walk out from May 3 to May 6. Unite has warned that the action will cause delays to flights, with longer boarding times expected. Sharon Graham, Unite general secretary, said: "ABM staff do a vital job for passengers at the airport, yet they are struggling with low pay while their employer makes huge profits. "This situation is unacceptable and workers at ABM continue to have Unite's full support." Many of the workers are paid below the London living wage of £14.80. They also claim that workloads have increased along with passenger numbers. In January, more than 1.89 million passengers passed through the airport, a two per cent rise on the same month last year. ABM, a global services company, reported $2.2 billion in revenue in March, up 6.1 per cent on the previous year. Steve Edwards, Unite regional officer, said: "Workers at ABM are increasingly given bigger workloads and deserve pay that reflects this. "Their employer can afford to come back with an offer workers would accept and could end this dispute easily by doing so. "But until then, Unite members will strike until their voices are heard." A previous strike planned for April 17 to 20 was postponed to allow workers to vote on a last-minute pay offer, which came after a strong strike ballot - reported by Unite as a 97 per cent vote for action. Workers said the offer failed to tackle low pay. ABM provides special assistance services at Stansted - staff who help passengers with reduced mobility, such as wheelchair users and those needing extra boarding help. More than 100 Unite members at ABM rejected a pay offer that would have increased wages by 1p an hour in year one and 2-3p an hour in year two.
TOKYO, Apr 23, 2026 - (JCN Newswire) - NEC Corporation (NEC; TSE: 6701) today announced a strategic collaboration with Anthropic PBC (Anthropic, *1) to accelerate the utilization of AI in the Japanese enterprise sector. Through this collaboration, NEC becomes the first Japan-based global partner of Anthropic. Both companies will begin joint development of secure industry-specific AI solutions for the Japanese market, leveraging "Claude Cowork" (*2), an AI agent for desktop use. As a first phase, initiatives for the financial, manufacturing, and local government sectors will include the development of solutions that combine the expertise of customers in their respective industries and operations. In addition, the partnership further enhances NEC's next-generation cybersecurity service (*3). NEC will advance the utilization of Claude within "NEC BluStellar Scenario" (*4, *5), which underpins NEC's value creation model "NEC BluStellar." The deployment of Claude will also be promoted across the NEC Group globally, aiming to build one of Japan's largest AI-native engineer teams, comprising approximately 30,000 members worldwide. Through these initiatives, both companies aim to accelerate the social implementation of safe and reliable AI technology, contributing to business transformation and enhanced competitiveness for Japanese companies and public administration.
Background
In recent years, AI and AI agent technologies have advanced rapidly, finding broad applications in businesses and public administration, including automating tasks, supporting decision-making, and improving customer service. However, many organizations face hurdles such as a shortage of IT talent, insufficient accumulation of operational know-how, stringent security requirements, and compliance with unique laws and regulations.
Especially in highly trusted domains like finance, public administration and cybersecurity, establishing a secure and transparent AI foundation and introducing AI agents tailored to on-site operations are key to accelerating digital transformation (DX). In recent years, NEC has treated itself as its own first client through its "Client Zero" initiative. In this endeavor, NEC has primarily utilized AI agents in its internal development processes, from design to testing, to advance its operations and revolutionize productivity. This collaboration further accelerates these efforts and supports the full-scale adoption and implementation of AI in the Japanese market.
Key Collaboration Details and Plans
1. Joint Development of Industry-Specific AI Solutions for the Japanese Market: Jointly develop secure industry-specific AI solutions for customers in demanding sectors such as finance, manufacturing, and local government, which call for strict requirements, including high security, compliance with unique laws, and high quality. Through joint development that integrates customer and on-site expertise, both companies will promote the rapid deployment and implementation of these solutions. Furthermore, in the field of cybersecurity, NEC is leveraging Anthropic's cutting-edge AI technology in its Security Operations Center (SOC) services to protect the digital infrastructure of companies operating both in Japan and globally against increasingly sophisticated cyber threats. Going forward, NEC will utilize the technology and expertise gained through this collaboration to further enhance its next-generation cybersecurity service and deliver it to customers.
2. Utilization of Claude in NEC BluStellar Scenario: Utilize Claude (Claude Opus 4.7)/Claude Code within NEC BluStellar Scenario to accelerate customer transformation.
Specifically, NEC will begin by utilizing Claude with two scenarios from the BluStellar Scenario suite -- "Scenarios for Data-Driven Management" and "Scenarios for Customer Experience Transformation" -- and will gradually expand its application to other scenarios.
"We are deeply honored to collaborate with NEC, one of Japan's leading technology companies. Since its founding, Anthropic has advanced its research guided by the conviction that building trustworthy AI is the path to building truly great AI. We are deeply grateful for the trust extended to us by our customers, partners, and government stakeholders across Japan, and we regard this collaboration as a meaningful step in our long-term commitment to shaping the future of AI in Japan together. By bringing together the strengths of both companies, we are dedicated to delivering safe and secure AI agents that Japanese enterprises can adopt with full confidence."
Comment from Toshifumi Yoshizaki, Executive Officer and COO of NEC Corporation
"This long-term partnership with Anthropic enables NEC to maximize the potential of AI in the Japanese market and further strengthen our capabilities in AI and AI agent implementation through large-scale deployments and collaboration. By bringing together the technology and expertise of both companies, we aim to jointly create solutions that meet the high safety, reliability, and quality standards demanded by companies and public administration, and play a central role in supporting the transformation of our customers through AI utilization."
(*1) Anthropic PBC: https://www.anthropic.com/
(*2) Claude (Claude Opus 4.7)/Claude Code/Claude Cowork: Claude is Anthropic's general-purpose AI assistant (this collaboration utilizes the latest model, "Claude Opus 4.7"). Claude Code is a coding agent for developers, and Claude Cowork is a desktop application for business users.
(*3) NEC's cybersecurity: https://www.nec.com/en/global/solutions/cybersecurity/index.html
(*4) "NEC BluStellar" is a value creation model that leads customers into a brighter future by realizing business model innovation and solving social issues and customer management issues. This is accomplished through advanced cross-industry knowledge backed by proven results and NEC's cutting-edge technology honed through years of development and operation. https://www.nec.com/en/global/necblustellar/index.html
(*5) "NEC BluStellar Scenario" is a value-creation framework designed to solve our customers' challenges. By combining consulting, products and services, offerings, and integration, we create value for our customers.
About NEC
The NEC Group leverages technology to create social value and promote a more sustainable world where everyone has the chance to reach their full potential. NEC Corporation was established in 1899. Today, the NEC Group's approximately 110,000 employees utilize world-leading AI, security, and communications technologies to solve the most pressing needs of customers and society.

Rawalpindi Shutdown Sparks Chaos as Security Measures Disrupt Daily Life and Economy
A sweeping shutdown across Rawalpindi over five consecutive days has thrown normal life into chaos, with authorities reportedly citing security arrangements tied to Iran-US negotiations as justification. However, residents and businesses alike have borne the brunt of these restrictive measures, as reported by The Express Tribune. According to The Express Tribune, public life in the city came to a grinding halt as transport hubs, wholesale markets, commercial districts, hotels, and even wedding venues were forced to close. The suspension of routine activity disrupted not only trade but also education and judicial proceedings, leaving citizens struggling to manage essential commitments. The prolonged restrictions have created widespread uncertainty, with daily wage earners and small traders among the worst affected as economic activity remains frozen.
Transport Crisis Deepens Amid Shutdown and Rising Costs
Travel has emerged as a major concern. With public transport services suspended, people have been compelled to rely on privately hired vehicles at inflated costs. Families dealing with urgent situations, including funerals, were left with no choice but to rent entire vehicles, often at nearly twice the normal fare. This unusual surge in demand has ironically boosted business for car dealers and showroom operators. The city, which hosts around 1,470 registered car showrooms, reportedly saw all small vehicles booked out at premium rates, especially for travel to destinations such as Lahore, Sialkot, Faisalabad, and others. Meanwhile, the closure of 34 transport terminals has left hundreds of workers jobless, compounding the economic distress. Although authorities verbally allowed transport services to resume on Tuesday evening, public fear and low passenger turnout prevented a meaningful restart. Transport operators remained hesitant to resume operations without clear assurances.
Strict security enforcement continued across major roads, including Murree Road and Rawal Road, with heavy police deployment extending even to areas near the airport, as cited by The Express Tribune. Residents within a three-kilometre radius reportedly faced severe restrictions, including limited access to rooftops, while nearby markets remained sealed. Transport Federation leader Haji Zahoor Arain has called for a clearer, more balanced policy. He suggested controlled transport operations instead of a blanket shutdown, proposing alternative routes and locations to keep essential mobility intact while maintaining security, as reported by The Express Tribune.

NEW YORK: Over the last quarter century, Elon Musk revived space travel, turning cosmic exploration into thriving businesses. For its next act, Musk's SpaceX is eyeing an even bigger opportunity in something more prosaic: building artificial intelligence (AI) for the enterprise. SpaceX estimates that its total addressable market (TAM) - a closely watched metric - could be as much as US$28.5 trillion, according to an S-1 filing reviewed by Reuters. TAM is the maximum revenue a company could generate if it captured every customer in a particular market. The S-1 regulatory filing, in which companies disclose their financials and key risks before going public, shows that SpaceX expects more than 90% of that market, or US$26.5 trillion, could stem from the AI sector. The vast majority of that, US$22.7 trillion, could come from AI for businesses. The company is moving ahead with an IPO expected this summer targeting a valuation of roughly US$1.75 trillion and seeking to raise about US$75 billion, which would make it the largest initial public offering in history. "We believe we have identified the largest actionable TAM in human history," according to the filing. The new information about where SpaceX sees its biggest market opportunity stands in stark contrast to how the company currently makes its money. SpaceX did not reply to a request for comment. Although a company's TAM is neither a forecast nor a valuation, it is an important indicator for investors evaluating a high-growth company's potential. These figures are often vast and rarely questioned. When Uber went public in 2019, it claimed a US$5.7 trillion market opportunity for its ride-sharing business alone. The eye-popping opportunity identified by SpaceX, tucked into more than 300 pages detailing its finances, underscores Musk's long-held desire to occupy a central role in the advancement of AI technology.
The AI for enterprise market is currently dominated by Anthropic and OpenAI, AI industry leaders locked in intense competition, both of which have indicated intentions to go public as early as this year. In February, SpaceX acquired xAI, an AI research company founded by Musk in early 2023. The filing seen by Reuters shows that xAI remains a nascent and deeply loss-making operation. The AI unit posted an operating loss of US$6.4 billion in 2025, sharply wider than the US$1.6 billion loss a year earlier. Those losses eclipsed the US$4.4 billion in operating profit generated by Starlink, SpaceX's satellite internet business and its largest revenue engine, which brought in US$11.4 billion of its US$18.7 billion total revenue last year. Overall, SpaceX lost US$4.9 billion. SpaceX's AI unit is also resource hungry. In 2025, SpaceX's total capex surged to US$20.7 billion, with AI accounting for US$12.7 billion - more than it spent on its space and connectivity businesses combined. The company said it could capitalise on some of xAI's preexisting tools, such as Grok Enterprise and an agentic, or autonomous, platform it is developing with Tesla called Macrohard. In the filing, the company warned prospective investors of its big spending plans to develop AI and other technologies, including manufacturing graphics processing units, or GPUs - the chips key to powering artificial intelligence. SpaceX also said it would assemble a specialised salesforce and send employees known as forward deployed engineers to embed directly with customers to help their workforces embrace AI. "We believe that our enterprise strategy, which is focused on serving the digital needs of the world's largest industries with AI solutions, positions us competitively to pursue this rapidly growing opportunity," SpaceX said in the filing. One source familiar with the financials of the company was not convinced.
"If you decide I'm going to be really sober about this and only value the businesses that I can actually see, you're not going to be in the ballpark of what the market will almost certainly set the valuation to be," the source said.
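The headline figures in the filing can be cross-checked with quick arithmetic. The following sketch simply reproduces the article's reported numbers as calculations; it is not a model of SpaceX's finances, and the derived percentages are rounded readings of the figures cited above.

```python
# Sanity-check of the TAM and revenue figures reported in the S-1 filing,
# as cited in the article above (US$ billions).
TAM_TOTAL = 28_500          # total addressable market: US$28.5 trillion
TAM_AI = 26_500             # portion attributed to the AI sector
TAM_AI_ENTERPRISE = 22_700  # portion attributed to AI for businesses

# AI share of the total TAM: consistent with "more than 90%".
ai_share = TAM_AI / TAM_TOTAL
print(f"AI share of TAM: {ai_share:.1%}")  # ~93.0%

# Enterprise AI as a share of the AI opportunity: "the vast majority".
enterprise_share = TAM_AI_ENTERPRISE / TAM_AI
print(f"Enterprise share of AI TAM: {enterprise_share:.1%}")  # ~85.7%

# Starlink's share of current revenue, showing the contrast with
# where the company says its future market lies.
STARLINK_REVENUE = 11.4
TOTAL_REVENUE = 18.7
print(f"Starlink share of revenue: {STARLINK_REVENUE / TOTAL_REVENUE:.0%}")  # ~61%
```

The contrast the article draws is visible in these ratios: roughly 93% of the claimed opportunity sits in AI, while about 61% of today's revenue still comes from satellite internet.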

Secondary share trading platforms are pricing Anthropic at approximately $1 trillion, just three months after its primary fundraising round valued it at $380 billion. OpenAI is trading at $880 billion on the same platforms, a meaningful reversal of the previous order. The valuations are not primary rounds and carry no guarantee of liquidity. Anthropic, the San Francisco-based AI company founded in 2021 by Dario and Daniela Amodei, has reached an implied valuation of approximately $1 trillion on secondary share trading platforms, according to Business Insider, overtaking OpenAI which is trading at approximately $880 billion on the same venues. Kelly Rodriques, CEO of Forge Global, one of the leading private-company share trading platforms, told Business Insider the valuation was "hovering around the $1 trillion mark." The development arrives without a primary fundraising round or press release: it reflects what buyers on secondary markets are willing to pay for existing shares from current or former employees and early investors. The speed of the appreciation is striking. Anthropic closed a $30 billion Series G funding round, led by GIC and Coatue, in February 2026 at a primary valuation of $380 billion. Three months later, secondary markets are pricing the company at more than two and a half times that figure. The driver, according to market participants cited by Business Insider, is a combination of revenue acceleration and a supply-demand imbalance in Anthropic's shares. Glen Anderson of Rainmaker Securities told Business Insider he had just been offered the opportunity to buy Anthropic shares at a valuation of $960 billion and that a month earlier such a figure would have been "unthinkable", but that the shares were being snapped up by competing buyers within hours. One of Anthropic's shareholders had expressed willingness to sell at an implied valuation of $1.15 trillion, according to Ken Sawyer, co-founder of Saints Capital. 
The revenue trajectory behind the valuations is extraordinary on its own terms. Anthropic's annualised revenue run rate was approximately $9 billion at the end of 2025; by March 2026 it had reached $30 billion. That is a 233% increase in one quarter, driven primarily by enterprise adoption of Claude Code and the company's broader API and enterprise products. Reuters confirmed Anthropic had not yet agreed a new primary round, having reportedly resisted overtures from venture capital investors for a new fundraise, according to Bloomberg. The demand is therefore being channelled into the secondary market, where buyers must acquire shares from existing holders rather than from the company directly. The OpenAI comparison is the other side of the story. On Forge Global, OpenAI trades at approximately $880 billion, just 3% above the $852 billion primary valuation from its early-2026 fundraising round. According to Caplight, an analytics platform that tracks private-market share activity, the ratio of sellers to buyers in OpenAI shares was five-to-one in Q1 2026, a reversal from the end of 2025, when buyers were dominating sellers. Glen Anderson described interest in OpenAI shares as "tepid" this year, with bids often below the company's last primary valuation. Jesse Leimgruber, founder of AI startup OpenHome and a holder of secondary Anthropic shares, told Business Insider that one "very prominent growth fund" had offered to buy Anthropic shares at a $1.05 trillion implied valuation. Secondary market valuations are categorically different from primary fundraising rounds. They reflect what a buyer is willing to pay for illiquid, minority shares with no guarantee of liquidity, no board rights, and no ability to force a sale or IPO. A secondary price of $1 trillion does not mean Anthropic could raise $1 trillion in a primary round, nor that a future IPO would be priced at that level. 
The Business Insider article itself noted reports of "feverish demand" and buyers offering homes as collateral for Anthropic shares, language that describes speculative intensity rather than fundamental valuation. Anthropic is reportedly exploring an IPO as early as late 2026.
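The growth and valuation multiples quoted in this piece follow directly from the reported figures. The sketch below just restates that arithmetic; the inputs are the article's own numbers, not independent data.

```python
# Reproducing the multiples cited above (US$ billions, as reported).
RUN_RATE_END_2025 = 9     # annualised revenue run rate, end of 2025
RUN_RATE_MAR_2026 = 30    # annualised revenue run rate, March 2026

# Quarter-over-quarter run-rate growth: (30 - 9) / 9.
increase = (RUN_RATE_MAR_2026 - RUN_RATE_END_2025) / RUN_RATE_END_2025
print(f"Run-rate increase in one quarter: {increase:.0%}")  # 233%

PRIMARY_VALUATION = 380      # Series G primary round, February 2026
SECONDARY_VALUATION = 1_000  # implied secondary-market valuation

# How far secondary pricing has moved above the last primary round.
multiple = SECONDARY_VALUATION / PRIMARY_VALUATION
print(f"Secondary vs. primary multiple: {multiple:.1f}x")  # ~2.6x
```

The 2.6x gap between secondary pricing and the last primary round is the core of the article's caution: secondary trades reflect what marginal buyers will pay for illiquid shares, not a valuation the company has set or could necessarily raise at.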

Geopolitical tensions in Eastern Europe underscore the urgency of addressing the climate and radiological consequences of a regional nuclear conflict. Even a small-scale nuclear conflict at the Ukraine-Russia border could cause years of severe global climate disruption and radioactive fallout across much of the world, new research suggests. In the study, published in npj Clean Air, researchers at the University of Exeter used the UK Earth System Model to simulate a hypothetical regional nuclear conflict at the Ukraine-Russia border. The results show that the soot emitted after a nuclear detonation would rapidly spread through the atmosphere, block sunlight and disrupt climate across the Northern Hemisphere. In the first year after the conflict, the Northern Hemisphere cools by about 1°C on average, with much larger regional drops of around 5°C in Russia and 4°C in the United States. Surface sunlight declines sharply, and precipitation falls substantially across key mid-latitude agricultural regions. The researchers also found that the climate effects would not be short-lived, lasting for approximately six years. Stratospheric warming caused by the soot alters major atmospheric circulation patterns, including the jet streams and the Intertropical Convergence Zone. Alongside the climate impacts, the study examined the long-term dispersion of radioactive material attached to the black carbon particles. The results suggest that long-lived radionuclides could be transported globally, with around 40% eventually depositing in the Southern Hemisphere. This means the consequences of a regional nuclear conflict would not remain confined to the war zone but would instead become a global humanitarian and environmental issue. Lead author Dr Ananth Ranjithkumar, Post-Doctoral Research Fellow at the University of Exeter, said: "Even a small-scale regional nuclear conflict would not remain a regional catastrophe for long.
Our simulations show that its effects could reverberate across the planet for years, disrupting climate systems and spreading radioactive fallout far beyond the detonation zone, turning a regional war into a global crisis." Co-author Professor Jim Haywood, also of the University of Exeter, added: "This study confirms the global impact of regional nuclear conflicts upon climate, and emphasises that the New Strategic Arms Reduction Treaty that ended February 5, 2026 urgently needs to be extended." Co-author Professor Nathan Mayne, also from the University of Exeter, said: "This is an excellent example of how our studies of other planets can contribute to understanding Earth's climate. "From planet-wide dust storms on Mars, to kilometre-per-second winds in the atmospheres of extremely hot gas giant planets, our adaptations lead to improvements in how we capture climate and weather phenomena for Earth itself, both in 'normal' and, in this case, extreme situations."

ServiceNow has positioned its AI as a means to bring structure and control to fragmented enterprise environments by combining data context, workflow execution, and governance within a single platform. In its Q1 2026 earnings call, the company said its differentiation comes down to "context," with CEO Bill McDermott arguing that the platform can "convert chaos to control" across complex enterprise systems. With the platform having processed over 95 billion workflows and more than 7 trillion transactions, ServiceNow says it is continuously improving decision-making across its system. McDermott argues that ServiceNow's AI advantage enables it to deliver real outcomes rather than just recommendations: "When people ask, what's the difference between ServiceNow AI and the foundation models, you can boil it down to one word, context." "We're not bolting intelligence onto disconnected systems. We're combining context with execution on a single platform. It's not recommendations, it's outcomes that matter." AI Trained on Live Enterprise Workflows These scale metrics reflect ServiceNow's deep integration into enterprise operations: its AI is not trained in isolation but is continuously refined through real business activity. Each workflow, approval chain, and transaction feeds into the platform's underlying data layer, allowing it to learn patterns tied to assets, identities, vendors, and business rules. This creates a compounding effect in which every additional workflow strengthens the system's accuracy and decision-making capability, enabling an AI layer to operate with live enterprise context. This places ServiceNow in a strategic position for AI, as its models are grounded in operational data and are directly embedded into the systems where enterprise work is executed. 
This foundation supports more reliable automation, consistent decision-making, and scalable deployment across complex organizations, laying the groundwork for how the company positions its advantage. The "Why We Win" Framework ServiceNow's framework combines context, execution, and governance to position its AI as a system of record for decision-making. This combination enables the platform to operate across fragmented enterprise environments while maintaining consistency and auditability, ensuring the company can convert complexity into coordinated workflows. This underpins both customer outcomes and its ability to scale AI adoption commercially. ServiceNow positions context as its primary differentiator: the platform's knowledge graph captures relationships across assets, approvals, identities, vendors, and business rules, and is continuously updated through live workflows. This allows AI outputs to be grounded in real enterprise data rather than generic training sets, enabling the system to evaluate which approval chains apply, which dependencies matter, and how prior decisions inform next steps, improving with every transaction. Execution is the second pillar: ServiceNow embeds AI directly into workflows across IT, HR, CRM, and security, so the system is responsible for completing tasks rather than offering recommendations. As a result, the company's internal IT department is currently resolving 90% of employee IT requests autonomously, with specialized AI agents resolving assigned cases 99% faster than human agents. By embedding the AI into operational workflows, it can act on decisions immediately, reducing latency between insight and action, and making automation measurable in terms of time saved and tasks completed. 
Governance ties the system together: every decision is auditable, and the AI control tower provides visibility across agents, models, and workflows in real time, ensuring that every action follows enterprise policies, permissions, and compliance requirements. "This architecture is a big reason why we recently announced the entire ServiceNow portfolio is AI native," McDermott continued. "AI, data, security and governance are now built into every product and package, not a separate purchase. This is a deliberate break from sidecar AI." Embedding governance at the foundation rather than as an add-on supports scaling AI across large organizations without losing control. Customer Results Drive Repeatable AI Adoption Rather than positioning AI as experimental, ServiceNow's AI adoption strategy centers on de-risking enterprise AI through measurable outcomes and repeatable use cases, grounding AI in operational deployments where automation is already delivering time savings, cost reduction, and higher throughput. In one example, enterprise customer Robinhood was able to deflect 70% of employee requests with ServiceNow AI before they reached human agents, eliminating roughly 2,200 hours of manual effort each month. A leading online travel company used ServiceNow's agentic AI capabilities to deliver 11 million autonomous AI resolutions annually for HR and IT alone, resulting in over 230% ROI, generating millions in annual savings, and giving 45,000 hours back to its employees. A British engineering and technology enterprise used autonomous workflows to deflect 38,000 tickets, reducing the average resolution time by two full days. These repeatable customer outcomes strengthen the platform's AI value proposition and support broader enterprise rollout decisions. 
Ahead of the earnings call on Wednesday, Rebecca Wettemann, CEO of Valoir, argued that ServiceNow's AI growth depends on reducing customer hesitation by demonstrating real-world success and leveraging a strong partner ecosystem to drive adoption. "As companies move from FOMO to FOMU (fear of messing up) with AI, the way to convince them to trust and adopt AI is to show them how others have been successful with it," she said. "The ecosystem is critical. ServiceNow knows it needs more than just its feet on the street to drive AI adoption." Early adopters validate outcomes, and those proofs are then reused across similar enterprise environments, lowering perceived implementation risk for new customers and accelerating adoption cycles. $1M+ Contracts Grow as AI Adoption Accelerates Monetization is increasingly being driven by AI adoption, with ServiceNow beginning to see this translate directly into both larger deal sizes and a shift in how revenue is structured, as AI-related workflows become embedded in larger, multi-product platform agreements rather than isolated point solutions. This includes continued strength in high-value enterprise contracts, with deals above $1 million in annual contract value rising 130% year over year. McDermott links this directly to accelerating AI demand across the customer base, highlighting the scale of early commitments to the company's AI portfolio. "We had a goal to be $1 billion on our AI commit this year," he explained. "And I think we might have understated that a little bit. We're already talking about $1.5 billion now, and it's on a run." With AI commitments building faster than originally expected, this demand is also reshaping ServiceNow's revenue model, as around half of net new business now comes from non-seat-based pricing, including usage-based and consumption-driven structures. 
ServiceNow's hybrid approach links revenue more directly to customer activity on the platform, particularly as AI agents and workflows scale, allowing monetization to expand with the volume of automation, transactions, and AI-driven tasks running through the system. ServiceNow's Key Earnings Results In its quarterly earnings report, ServiceNow saw strong results in customer expansion and large deals.

Unsanctioned users have allegedly accessed Anthropic's controversial Claude Mythos Preview AI frontier model, even though the company restricts which businesses can use it. The group, which has yet to be named, had apparently made many attempts to access Mythos since it debuted earlier this month. They finally gained access via a third-party vendor. The users who accessed Mythos on the day it was announced are members of a [...]

Users of Grok, the artificial intelligence chatbot developed by Elon Musk's xAI, have repeatedly complained about service disruptions throughout 2026, with frequent "high demand" errors, temporary unavailability and slower response times prompting questions about infrastructure strain and rapid scaling challenges. The issues have become a recurring topic on X, Reddit and Downdetector, where spikes in outage reports often coincide with major model updates, heavy traffic from new features or shared infrastructure problems with the X platform. While xAI's official status page shows generally high uptime and no major ongoing incidents as of late April, many subscribers -- especially on free and lower-tier plans -- report intermittent problems that disrupt conversations and image or video generation. The most common complaint centers on the "high demand" message that appears when servers become overloaded. This error has surfaced repeatedly during peak usage hours, after high-profile announcements or when xAI rolls out new capabilities such as Grok 4.3 beta features or integrations with tools like Cursor for coding model training. In several documented cases, including incidents in January, March and early April 2026, thousands of users simultaneously encountered login failures, delayed responses or complete unavailability lasting from 30 minutes to several hours. xAI has not publicly detailed every outage, but company statements and status updates point to a combination of factors. Rapid user growth has placed enormous pressure on the underlying compute resources. Grok relies on xAI's massive Colossus supercomputer cluster and additional cloud capacity, which must handle not only regular chatbot queries but also compute-intensive tasks such as image generation, real-time reasoning and training improvements. 
When external projects or internal model rollouts pull significant GPU resources, free and SuperGrok Lite users often experience throttling or temporary degradation. A notable example occurred in mid-January 2026 when a broader outage affected both X and Grok, with reports peaking at tens of thousands on Downdetector. The disruption was linked to issues with shared infrastructure and the X API, highlighting the tight integration between the social platform and the AI service. Similar events in March involved authentication problems that logged users out and prevented Grok from loading properly. Another recurring trigger involves planned or unplanned maintenance during model upgrades. After a server downtime on January 23, 2026, users noticed changes in behavior, including stricter content moderation and reduced capabilities in certain creative tasks. Some speculated the outage allowed xAI to implement safety filters or efficiency tweaks, though the company has not confirmed specifics. High demand during viral moments also plays a role. When Grok generates buzz -- whether through humorous responses, timely commentary or new features -- traffic surges can overwhelm available capacity. Free-tier users are particularly affected, as xAI prioritizes paid subscribers and enterprise workloads during constrained periods. This tiered approach has drawn criticism from users who feel the service becomes unreliable precisely when it gains the most attention. xAI's aggressive expansion adds another layer. The company continues to train increasingly powerful models while supporting Musk's other ventures, including potential compute sharing for projects like Cursor. Such demands can temporarily reduce resources available for standard Grok interactions. Additionally, the service's deep integration with X means any platform-wide issues -- from API changes to authentication glitches -- can cascade directly to Grok users. 
From a technical standpoint, running a large language model at scale involves complex distributed systems. Even brief spikes in concurrent users can strain inference servers, especially when queries involve multimodal tasks like image analysis or video generation. xAI has invested heavily in Colossus and other clusters, but matching compute supply perfectly with unpredictable demand remains challenging for any AI provider in 2026. Comparisons with competitors such as OpenAI's ChatGPT and Anthropic's Claude show that occasional outages are common across the industry during periods of rapid growth. However, Grok's close ties to X and its real-time data access sometimes amplify visibility of disruptions, as users expect constant availability for timely information and conversation. xAI has taken steps to improve reliability. The official status page at status.x.ai provides live metrics on inference and non-inference endpoints, and the company has gradually increased capacity. Planned maintenance windows are now better communicated, and some features include fallback modes during peak load. Still, users on social media frequently express frustration, with comments ranging from mild annoyance to accusations of poor planning. For subscribers, the disruptions have practical consequences. Professionals relying on Grok for research, coding assistance or content creation report lost productivity during outages. Casual users encounter broken conversations or failed image generations at inconvenient moments. Some have turned to alternative AI tools during repeated issues, though many remain loyal due to Grok's unique personality and real-time X integration. Looking ahead, xAI faces the classic scaling dilemma of fast-growing tech companies. Continued user growth, more powerful model releases and new features will likely keep pressure on infrastructure. 
Musk has signaled ambitious plans for Grok, including deeper multimodal capabilities and broader availability, which will require even more robust systems. Industry analysts suggest that as xAI matures, outages may become shorter and less frequent, similar to how other AI services stabilized after initial growing pains. Investments in dedicated hardware, smarter load balancing and geographic distribution of servers could help mitigate future problems. In the meantime, users are advised to check the official status page, try during off-peak hours or upgrade to higher-tier plans for better reliability. The repeated service hiccups in 2026 reflect both the immense popularity of Grok and the inherent difficulties of operating cutting-edge AI at global scale. While xAI works behind the scenes to expand capacity, many users hope for fewer interruptions as the company balances innovation with stability. For now, the question of why Grok experiences more frequent disruptions than some expect boils down to explosive demand outpacing infrastructure in a hyper-competitive AI landscape. As xAI pushes the boundaries of what conversational AI can do, maintaining consistent uptime remains one of its most visible challenges in 2026.

OSLO, April 23 (Reuters) - Norway's $2.2 trillion sovereign wealth fund, the world's largest, is assessing whether to invest in SpaceX, the fund's deputy CEO told Reuters on Thursday. The rocket and satellite company controlled by the world's richest man, Elon Musk, is expected to launch a $1.75 trillion initial public offering, possibly the largest ever, this summer. Asked whether the fund had been approached to be part of SpaceX as an investor, Trond Grande said in an interview: "We have dialogue with companies, right? So, we also have dialogue with SpaceX." When asked whether the fund was assessing whether this could be interesting for the fund, Grande said: "That is what we are doing." He declined to give further details. Grande was speaking after the fund reported on Thursday a first-quarter loss of 636 billion crowns ($68.44 billion) as the war in the Middle East weighed on global stocks. (Reporting by Gwladys Fouche in Oslo, editing by Terje Solsvik)
Q1 EPS beat at $0.41, though warranty and tariff refunds lifted margins. Tesla left its 11,509-coin Bitcoin (BTC) position untouched through the first quarter of 2026, even as the electric vehicle maker funneled $2 billion of fresh capital into SpaceX. The stance held through a quarter in which a drop from roughly $90,000 to $68,000 knocked the carrying value of Tesla's stack down 22% to about $786 million, forcing a $173 million fair value loss. Tesla Sticks With Bitcoin as $2 Billion Flows to SpaceX CEO Elon Musk's company has now kept its Bitcoin position unchanged for more than three years, extending a HODL posture first adopted after Tesla unloaded three-quarters of its original 43,200 BTC stake in mid-2022. Tesla disclosed in its Q1 filing that it neither bought nor sold BTC during the quarter, matching the unchanged position it maintained last year. The sharper story sits in Tesla's $2 billion SpaceX investment. The infusion, which cleared regulatory review in March after SpaceX absorbed xAI, turns Tesla's earlier $2 billion xAI stake into a sub-1% position in the private rocket company. It offset Q1 free cash flow of $1.4 billion and arrives alongside $1.2 billion in fresh debt, signaling that Tesla's balance-sheet priorities sit firmly with AI compute and chip supply rather than with broader digital-asset accumulation. The capital flow also deepens the financial link between Tesla and SpaceX, whose own Bitcoin treasury activity has drawn market attention in recent months. Earnings Beat Masks a Heavier AI and Robotaxi Spend Tesla posted Q1 EPS of $0.41 versus a $0.36 consensus and revenue of $22.38 billion. Automotive gross margins excluding credits reached 19.2%. Results beat expectations and lifted shares 4% to 5% after hours. Warranty reserve releases, tariff refunds, and delayed supplier payments supported the margin. Management used the call to push an AI-forward story. 
The Cortex 2 training cluster at Giga Texas now runs roughly 230,000 H100-equivalent GPUs. Dojo 3 has been repositioned around space-based AI compute after an earlier shutdown. Tesla confirmed its AI5 chip was taped out on April 15, and reiterated that the Terafab venture with SpaceX, xAI, and Intel will secure long-term silicon supply. The silicon supports Cybercab, Optimus, and Full Self-Driving. Cybercab production remains targeted for Q2 2026. Full Self-Driving (FSD) Tops 1.28 Million Subscribers FSD subscriptions hit a record 1.28 million during the quarter, with unsupervised autonomy trials expanding across additional U.S. cities. Musk separately acknowledged that Hardware 3 vehicles lack the compute for future autonomy features, a concession that drew pushback from long-time owners despite the earnings beat. The capital-allocation contrast with peers such as Strategy and Metaplanet, which continue stacking Bitcoin aggressively, leaves Tesla's hold-but-don't-add posture looking increasingly passive within the public-company treasury set. Investors must now weigh the AI capex pivot against BTC stasis, and consider whether treasury peers will read Tesla's silence as a quiet signal.
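Under fair-value accounting for crypto assets, a corporate Bitcoin position is simply marked to the market price each quarter, with the change flowing through earnings. A minimal sketch of that arithmetic, using the article's coin count and round-number prices for illustration (the reported carrying value and $173 million loss depend on Tesla's exact quarter-end marks, so these figures are approximations, not a reconciliation of the filing):

```python
def carrying_value(coins: float, spot_price: float) -> float:
    """Quarter-end carrying value: the full stack marked at the spot price."""
    return coins * spot_price

def fair_value_change(coins: float, start_price: float, end_price: float) -> float:
    """Unrealized gain (positive) or loss (negative) over the period."""
    return coins * (end_price - start_price)

# Tesla's disclosed position, with round-number prices from the article.
COINS = 11_509
value_q1_end = carrying_value(COINS, 68_000)          # roughly $0.78 billion
swing = fair_value_change(COINS, 90_000, 68_000)      # a nine-figure unrealized loss
```

The key design point of the fair-value regime is symmetry: unlike the older impairment-only model, a rebound in price would flow back through earnings as a gain in the same way.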
The Rundown: Access to Anthropic's Mythos model reportedly leaked into a Discord group within days of launch, after users reportedly guessed the company's deployment URL and naming conventions using patterns leaked in the recent Mercor breach. Why it matters: The first alleged unauthorized use of the AI model that had the White House and others calling emergency meetings didn't come from China, Russia, or another rival nation -- it came from a random Discord group. Not a great start, and the problem only compounds as partner access grows and the models get more dangerous.

Yesterday, we wrote about how SpaceX secured a $60 billion option to acquire Cursor later this year, or pay $10 billion to continue their AI collaboration. Now, a new report from CNBC adds a twist to the story -- and it points to what was happening behind the scenes before that deal came together. As it turns out, Microsoft had been exploring its own move into the AI coding startup space. For a moment, it looked like Microsoft might step in and take control of one of the fastest-growing tools in AI-assisted software development. That didn't happen. The company chose not to move forward, CNBC reported, citing two people familiar with the discussions. Days later, SpaceX surfaced with a deal that now values Cursor at $60 billion. The talks didn't go far, one of the sources said; both asked not to be named since the conversations were private. The timing is striking. Demand for AI coding tools has surged as developers look for faster ways to build software, automate workflows, and ship products with fewer resources. Cursor has emerged as a frontrunner in that shift, alongside rivals backed by Anthropic and OpenAI. Microsoft is hardly absent from the race. Its GitHub Copilot product has gained traction with developers, and the company has poured billions into partnerships with OpenAI and Anthropic, both of which rely heavily on Microsoft's Azure cloud. Still, its role has leaned more toward infrastructure and investment than outright ownership of leading tools. That backdrop makes its decision to pass on Cursor more notable. Venture firms had already lined up funding for Cursor at a $50 billion valuation, as CNBC reported earlier this month. The number reflected how quickly these tools are becoming central to modern software development. 
Cursor's ability to help users assemble apps and websites with minimal friction has turned it into one of the most sought-after platforms in the category. Then came SpaceX. The company, controlled by Elon Musk, said in a post on X that it secured the right to acquire Cursor for $60 billion by the end of the year. If the deal doesn't go through, SpaceX will pay $10 billion instead. "SpaceXAI and @cursor_ai are now working closely together to create the world's best coding and knowledge work AI," the company said in the post. Cursor CEO Michael Truell added on X that he's "excited to partner with the SpaceX team to scale up Composer," referring to the company's AI model. The agreement came together late in Cursor's fundraising process, catching some prospective investors off guard, a source said. In the weeks leading up to the announcement, SpaceX had already begun offering Cursor access to compute, a key advantage in a market where access to infrastructure often shapes outcomes. Musk has been tightening his grip on the AI stack. In February, he merged SpaceX with his AI startup xAI in a deal valued at $1.25 trillion. The combined company is now moving toward what could become one of the largest public offerings on record. Meanwhile, Microsoft faces a different set of pressures. Its stock has dropped 10% this year, trailing peers in the hyperscaler group. CEO Satya Nadella told analysts in January that GitHub Copilot reached 4.7 million paying subscribers, up 75% year over year -- a strong signal of demand, though competition is intensifying. OpenAI continues to push forward with its Codex programming app. CEO Sam Altman said on X that Codex has reached 4 million active users, less than two weeks after crossing 3 million. Anthropic's Claude Code service has gained momentum as well, helping the company reach $30 billion in annualized revenue this month. Cursor now sits at the center of that fight. 
What started as another AI coding tool has quickly become a strategic asset in a race spanning software, infrastructure, and capital. Microsoft had a chance to own a bigger piece of it. SpaceX decided not to wait.


Governments and researchers have warned of both defensive and offensive risks involving Mythos. OpenAI CEO Sam Altman pushed back against growing alarm over rival Anthropic's powerful new AI model Claude Mythos, suggesting the company is using "fear" to market the product. Speaking on the Core Memory podcast hosted by tech journalist Ashlee Vance, Altman argued that the use of "fear-based marketing" was geared towards keeping AI in the hands of a "smaller group of people." "You can justify that in a lot of different ways, and some of it's real, like there are going to be legitimate safety concerns," Altman said. "But if what you want is like 'we need control of AI, just us, because we're the trustworthy people', I think fear-based marketing is probably the most effective way to justify that." Altman added that while there are valid concerns about AI safety, "it is clearly incredible marketing to say: 'We have built a bomb. We are about to drop it on your head. We will sell you a bomb shelter for $100 million. You need it to run across all your stuff, but only if we pick you as a customer.'" He noted that it was "not always easy" to balance AI's new capabilities with OpenAI's belief that the technology should be accessible. Anthropic's Claude Mythos model, revealed last month, has drawn intense attention from researchers, governments and the cybersecurity industry, particularly after testing suggested it can autonomously identify software vulnerabilities and execute complex cyber operations. The model is being distributed only to a limited set of organizations through a restricted program. The rollout reflects a broader divide in the AI industry over how powerful systems should be deployed, with some companies emphasizing controlled access and others arguing for wider distribution to accelerate innovation and understanding of the technology. Mythos has become a focal point in that debate. 
The model's capabilities have been framed by Anthropic as both a defensive breakthrough -- allowing faster detection of critical software flaws -- and a potential offensive risk if misused. Early this month, it identified hundreds of vulnerabilities in Mozilla's Firefox browser during testing and has also demonstrated the ability to carry out multi-stage cyberattack simulations. Anthropic has restricted access to the system via Project Glasswing, granting select companies including Amazon, Apple and Microsoft the ability to test its capabilities. The company has also committed significant resources to supporting open-source security efforts, arguing that defenders should benefit from the technology before it becomes more widely available. The model has also exposed limitations in existing AI evaluation systems, with Anthropic acknowledging that many current cybersecurity benchmarks are no longer sufficient to measure the capabilities of its latest system. That said, a group of researchers claimed last week they were able to reproduce Mythos' findings using publicly available models. Despite calls within parts of the U.S. government to halt use of the technology over concerns about its potential applications in warfare and surveillance, the National Security Agency has reportedly begun testing a preview version of the model on classified networks. On prediction market Myriad, owned by Decrypt's parent company Dastan, users put a 49% chance on Claude Mythos being released to the wider public by June 30. Altman suggested that rhetoric around highly dangerous AI systems may increase as capabilities improve, but argued that not all such claims should be taken at face value. "There will be a lot more rhetoric about models that are too dangerous to release. There will also be very dangerous models that will have to be released in different ways," he said. 
"I'm sure Mythos is a great model for cybersecurity but I think we have a plan we feel good about for how we put this kind of capability out into the world." Altman also dismissed suggestions that OpenAI is scaling back its infrastructure spending, saying the company would continue expanding its computing capacity despite shifting narratives. "I don't know where that's coming from... people really want to write the story of pulling back," he said. "But very soon it will be again, like, 'OpenAI is so reckless. How can they be spending this crazy amount?'"

Following the Mythos announcement on April 7, shares of major Chinese cybersecurity firms rose for several consecutive days In the second of a three-part series on Anthropic's powerful Mythos artificial intelligence model, we examine the effect it has had on China's cybersecurity and finance industries. US start-up Anthropic's new AI model, Claude Mythos Preview, has drawn global attention for its ability to autonomously identify and exploit cybersecurity vulnerabilities at a level that appears to surpass conventional tools used in enterprise and financial systems. The model has not been made publicly available, with Anthropic instead granting limited access to selected organisations for controlled testing. While the company's services remain unavailable in China, its release has nonetheless sparked discussion about its potential impact on China's cybersecurity and finance industries. Why are Chinese cybersecurity firms rallying? In China, official reactions and public discussions around Mythos have been relatively muted compared with the rest of the world - the exception being China's cybersecurity industry. Following Anthropic's Mythos announcement on April 7, shares of major Chinese cybersecurity firms including Qi An Xin, Sangfor Technologies and 360 Security Technology rose for several consecutive days, reflecting growing expectations that demand for AI-driven cybersecurity and compliance solutions could accelerate. Austin Zhao, a senior research manager at IDC China, said the strong market reaction highlighted the sector's close attention to Mythos, adding that while its capabilities had been anticipated, the model's actual performance still surprised many. The technology could drive up cybersecurity costs as firms increase spending on personnel, infrastructure and advanced protection systems, while also creating new opportunities for AI-based security services, said James Gong, legal director at the law firm Bird & Bird. 
Recent developments in China also point to growing capabilities in AI-driven vulnerability discovery. For example, 360 Security Technology claimed it had developed an AI-powered "vulnerability discovery agent" that identified hundreds of previously unknown flaws, including in widely used software such as Microsoft Office, according to a report published on Wednesday by Chinese cybersecurity research group Natto Thoughts.
