The latest news and updates from companies in the WLTH portfolio.
A group of unauthorized users has reportedly gained access to Mythos, the powerful cybersecurity tool recently unveiled by Anthropic, TechCrunch reported. The development is significant because Anthropic has explicitly warned that Mythos can identify and exploit vulnerabilities in every major operating system and every major web browser when directed by a user to do so. The company has framed the technology as a double-edged sword, noting previously that in the wrong hands it could become a potent hacking tool rather than the defensive asset it was designed to be for enterprise security.

The unauthorized access was reportedly achieved by a small group of users operating within a private online forum. According to reports, these individuals managed to secure access to the tool on the same day Anthropic publicly announced it. The group, part of a Discord channel dedicated to hunting for information about unreleased AI models, used a mix of strategies to bypass restrictions.

Perhaps most concerning is how the group pinpointed the model's online location: they made an educated guess based on their existing knowledge of the naming conventions and formats Anthropic has used for previous models. The effort was reportedly aided by information revealed in a recent data breach at Mercor, an AI training startup that works with top developers. The group also leveraged access provided by a person currently employed at a third-party contractor that works for Anthropic. This individual, who was interviewed about the breach, had gained legitimate permission through their contract work to access Anthropic models and software used to evaluate the technology.

Anthropic has been very cautious with the distribution of Mythos. The model was released only to a select number of vendors and organizations as part of an initiative called Project Glasswing.
This limited release was specifically designed to prevent the tool from falling into the hands of bad actors who might weaponize it against corporate security. Big names like Apple, Amazon, and Cisco Systems are among the organizations that have been granted access to test the model. Amazon, which is a key partner and backer of Anthropic, also offers Mythos through its Bedrock platform to a very specific, approved list of organizations. As the utility of the tool has become known, a growing number of financial institutions and government agencies on both sides of the Atlantic have been clamoring to get on that list to better safeguard their own systems.

In response to the reports, an Anthropic spokesperson provided a statement, saying, "We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments." The company has been quick to clarify that, so far, it has found no evidence that this unauthorized activity has impacted Anthropic's internal systems in any way. They maintain that the access appears to be contained within a third-party vendor's environment.

While the situation sounds alarming, the source who spoke about the breach offered some perspective on the intentions of the group. The individual claimed that the users involved are primarily interested in playing around with new models rather than wreaking havoc. They have reportedly avoided running cybersecurity-related prompts on the Mythos model, choosing instead to experiment with tasks like building simple websites to avoid detection. The person also noted that this group has access to a variety of other unreleased Anthropic AI models, suggesting a broader scope of interest in the company's pipeline. This incident highlights the massive challenge Anthropic faces in keeping its most powerful and potentially dangerous technology from spreading beyond its approved partners.
If these reports are accurate, it raises serious questions about how many other people might be using Mythos without permission and what their true objectives might be. For now, Anthropic is left to manage the fallout of this unauthorized access, which could potentially threaten the reputation of an exclusive release intended to bolster enterprise security. It is a stark reminder that even with strict initiatives like Project Glasswing, the digital perimeter is only as strong as its weakest link, especially when third-party vendors are involved in the deployment of such high-stakes software.

It's not often that retail investors get to participate in hot initial public offerings (IPOs), but it looks like they will get the opportunity with SpaceX. However, before you go rushing in to buy shares, here's everything you need to know before you do. This is all going to come down to who your broker is, as not all brokers have access to every IPO. While retail brokers will sometimes get IPO allocations, they generally tend to be a very small percentage, around 5% to 10% of the offering. Full-service brokers typically receive the bulk of these allocations, and they are generally offered to their best clients. If an online broker gets an allocation, you'll often have to go into a lottery to get shares, as generally demand exceeds supply. Notably, SpaceX plans to make as much as 30% of its IPO available to retail investors, so you could have a much better chance of getting shares than you normally would. You'll still generally need a standard brokerage account and put in a nonbinding indication of interest with how many shares you want and the maximum price you'd pay. There is then generally a 30-day flipping rule, where brokerages don't want you to sell your shares in the first 30 days. If you do, then you can be barred from future IPO allocations. The odds are actually quite good, although there is no guarantee. Typically, around 75% of IPOs see their share prices increase on their first day of trading. Meanwhile, for big IPOs, the underwriters will generally try to support the stock price on the first day, as it reflects very badly on them if a popular offering goes down right away.
Another thing that generally plays a big role in how an IPO performs early is its float percentage, which is the percentage of shares in the open market versus its overall shares outstanding. When a company goes public with a low float, the shares typically pop, as there just aren't enough shares to go around to meet investor demand, so it naturally pushes the price higher. According to reports, SpaceX is looking to raise around $75 billion, valuing the company at around $2 trillion. That would be a very small float of just 3.75%.

Typically, a stock needs to have at least 10% float and be trading for a while before being included in a major market index like the S&P 500 (SNPINDEX: ^GSPC) or Nasdaq-100. However, the Nasdaq recently made adjustments that could let SpaceX be added to the index within 15 days without meeting the float requirements. The S&P 500, which requires a stock to be public for a year with four quarters of positive earnings, is also considering changing its rules. This could be a big boost to SpaceX's stock price.

While low-float stocks often trade well out of the gate, investors need to be wary of lock-up periods. This is when employees and early backers can start to sell the stock, which suddenly floods the market with a new supply of shares. Each IPO is different, and many large ones have multiple lock-ups. The first lock-up is typically 180 days after the IPO, although some will have early release provisions if certain criteria are met. The details won't be known until SpaceX files its S-1 filing, while the final details will be in its 424B filing.

SpaceX has several businesses. The company was originally founded by Elon Musk, who is also the CEO of Tesla, as a way to lower the cost of rocket launches by being able to reuse as much from them as possible. Today, NASA uses SpaceX for most of its launches, while private companies also use its services to launch satellites.
The largest part of SpaceX's business by revenue, though, is its Starlink satellite internet service, which is estimated to make up between 50% and 80% of its revenue. Starlink provides internet access to around 10 million users around the globe, with the U.S. being its biggest market. Reuters estimates that the company generated between $15 billion and $16 billion in revenue and about an $8 billion profit last year. However, SpaceX merged with xAI earlier this year to combine two of Musk's ventures into one. xAI makes the Grok large language model (LLM) and also owns X, formerly Twitter. And xAI is a money-losing, cash-burning business that was valued at around $250 billion at the time of the merger.

I think investors who want to try to play the low-float dynamics of the IPO can buy SpaceX as a trade, but it's not a stock I'm currently interested in owning over the long term. Based on reports on its financials, the valuation appears high (over 10 times sales) for what is a pretty capital-intensive business. Geoffrey Seiler has no position in any of the stocks mentioned. The Motley Fool has positions in and recommends Tesla. The Motley Fool has a disclosure policy.
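As a sanity check on the article's numbers, the float and retail-allocation arithmetic works out as reported. A minimal sketch, using only the press-reported estimates cited above (a ~$75 billion raise at a ~$2 trillion valuation with up to 30% of the offering reserved for retail; none of these are confirmed deal terms):

```python
# Back-of-the-envelope IPO float math using the figures reported in the article.
# All inputs are press-reported estimates, not confirmed SEC-filed terms.
raised = 75e9        # ~$75 billion reportedly raised in the offering
valuation = 2e12     # ~$2 trillion reported valuation

# Float percentage: value of shares sold into the open market
# divided by the company's total value.
float_pct = raised / valuation * 100
print(f"Implied float: {float_pct:.2f}%")  # 3.75%, matching the article

# Retail allocation: up to 30% of the offering reportedly set aside for retail.
retail_dollars = raised * 0.30
print(f"Retail allocation: ${retail_dollars / 1e9:.1f} billion")  # $22.5 billion
```

A 3.75% float is well under the 10% threshold the article says major indexes typically require, which is why the index-exception question matters here.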


April 22 - Microsoft said on Wednesday it plans to embed advanced artificial intelligence models, including Anthropic's Claude Mythos Preview, into its secure coding framework, as the company steps up its cybersecurity capabilities.

Incorporating the models into Microsoft's Security Development Lifecycle (SDL) will help identify vulnerabilities and develop fixes faster, early in the cycle, the Windows maker said in a blog post.

Mythos, announced on April 7, has found "thousands" of major vulnerabilities in operating systems, web browsers and other software. Its ability to code at a high level has given it a potentially unprecedented capacity to identify cybersecurity vulnerabilities and devise ways to exploit them, experts said.

Anthropic has said the current iteration, Claude Mythos Preview, will first be deployed to a select group of companies as part of Anthropic's "Project Glasswing," a controlled initiative under which major technology companies, including Microsoft, Amazon.com and Apple, can use it to search for cybersecurity vulnerabilities.

Microsoft said it evaluated Mythos using its own open-source benchmark for real-world detection engineering tasks, and the "results showed substantial improvements relative to prior models."

U.S. President Donald Trump's administration, central bankers across the globe and industries are racing to get up to speed on Mythos and its ability to make complex cyberattacks both easier and quicker to mount.

(Reporting by Juby Babu in Mexico City; Editing by Shinjini Ganguli)

( April 22, 2026, 23:09 GMT | Official Statement) -- MLex Summary: Anthropic told the US Court of Appeals for the District of Columbia Circuit that the US Department of Defense violated its constitutional rights of due process and First Amendment-protected speech. In an opening brief leading up to oral argument scheduled May 19, Anthropic said Defense Secretary Pete Hegseth "did not uncover a plot to sabotage military systems. He did not discover malicious code, hidden access, or covert ties to a foreign adversary. Instead, he disagreed with Anthropic's refusal to remove two narrow contractual restrictions on the use of its AI model for lethal autonomous warfare and mass surveillance of Americans." See attached document.

A shadowy crew of AI enthusiasts pierced the defenses around Anthropic's Mythos on launch day. Boom. Access granted through a sloppy third-party vendor. Now this powerhouse model -- designed to hunt vulnerabilities across every major operating system and browser -- sits in unauthorized hands. TechCrunch broke the story, citing Bloomberg's reporting on the intrusion.

Mythos forms the core of Project Glasswing, Anthropic's bid to arm elite security teams with AI that autonomously crafts exploits. Think zero-days in Windows, macOS, Chrome, Firefox -- you name it. The company rolled it out to just 40 vetted partners, including Apple and Amazon, precisely because it could flip from defender to destroyer in seconds.

A person familiar with the matter told Bloomberg the group, huddled in a private online forum and Discord channel, sniffed out the model's URL pattern from prior leaks involving contractor Mercor. They obtained credentials from an employee of an Anthropic contractor and logged in. Screenshots. Live demos. Proof delivered. And they've been poking around ever since. Not launching attacks, they claim. Just tinkering with the forbidden toy. "The group in question is interested in playing around with new models, not wreaking havoc with them," the source insisted to Bloomberg.

But capabilities like these don't stay playground-bound. Mozilla already tapped Mythos Preview directly from Anthropic to patch 271 Firefox bugs in its latest release. Firefox CTO Bobby Holley called it a "firehose of bugs," forcing teams to scramble with resources pulled from elsewhere. Wired detailed how this AI shifts vulnerability hunting into overdrive, exposing flaws humans miss -- but demanding discipline to wrangle the flood.

Anthropic moved fast. "We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments," a spokesperson said. No signs of core system compromise, they added.
Yet whispers on X suggest the breach hit multiple unreleased models too. One post from @ns123abc laid it bare: hackers guessed URLs post-Mercor leak, slipped in via lingering contractor creds. The whole pipeline exposed. Posts from @coinbureau and @LarkDavis amplified the alarm, noting restrictions to 40 firms exactly to curb cyber risks.

This isn't isolated sloppiness. The National Security Agency deploys Mythos despite Pentagon labels tagging Anthropic as a supply-chain risk -- a feud spilling into court. Axios reported wider NSA uptake, prioritizing cyber edge over bans. UK counterparts route through the AI Security Institute. Meanwhile, the breach spotlights vendor weak links in AI's high-stakes chain. Contractors like Mercor, hit earlier, leak naming conventions. Guesses turn into gateways. What if next time it's nation-states, not forum dwellers?

Broader ripples hit fast. CNBC aired segments on the leak during 'Fast Money,' with Kate Rooney flagging Silicon Valley tremors. The Financial Times confirmed Anthropic's probe into the 'powerful' model handed to trusted few. Reddit threads in r/ClaudeAI and r/ClaudeCode buzzed with leaked excerpts, underscoring containment struggles for potent tech.

So where does this leave enterprise AI security? Tools like Mythos promise to outpace human hackers, spotting multi-step chains others ignore -- like a 27-year-old OpenBSD flaw or FreeBSD exploits. But day-one cracks erode trust. Partners demand ironclad isolation; regulators eye tighter controls. Anthropic's "safe AI" badge takes a hit, even as it sues DoD over blacklists. Vendors scramble to audit creds. And those forum users? Still inside, testing limits. One wrong prompt away from chaos.

World leaders have struggled to figure out the scale of the security risks and how to fix them, with Anthropic sharing Mythos with only Britain outside the United States. The Bank of England governor warned publicly that Anthropic may have found a way to "crack the whole cyber-risk world open". The European Central Bank began quietly questioning banks about their defences. Canada's finance minister compared the threat to the closure of the Strait of Hormuz. For US rivals like China and Russia, Mythos underscored the security consequences of falling behind in the AI race. One Russian pro-Kremlin outlet called the model "worse than a nuclear bomb". The responses illustrated a reality that AI researchers have long warned about mostly in theoretical terms: whoever leads in building the most powerful AI models will gain outsize geopolitical advantages. Major AI breakthroughs are beginning to function less like product launches and more like weapons tests, and most nations want to understand how the technologies work and what protections are needed. As foundational AI "models become more consequential, access becomes more geopolitical," said Eduardo Levy Yeyati, a former chief economist at the Central Bank of Argentina and a regional adviser on growth and AI at the Inter-American Development Bank. "I would take this episode as a policy wake-up call. Governments can no longer ignore the issue." Even the US Government, which has been embroiled in a clash with Anthropic over the use of AI in warfare, has taken notice of Mythos. On Friday, Dario Amodei, Anthropic's CEO, met with White House officials after some in the Trump administration noted the potential for the new model to wreak havoc on computer systems. Anthropic, which is based in San Francisco, told The New York Times that it was keeping access to Mythos small because of safety and security concerns. 
It has focused on sharing the model with more than 40 organisations that provide technology used in maintaining critical global infrastructure like the internet or electricity grids. Anthropic named 11 of the organisations, including Amazon, Apple and Microsoft, that pledged to help develop security fixes for vulnerabilities identified by the model. The company said that it had no immediate timeline for widely expanding access but that it would work with the US Government and industry partners to determine next steps. It said that it had been bombarded by calls from governments, companies and other organisations seeking access and information but that these organisations could have varying levels of expertise to safely evaluate such a powerful AI model. Anthropic added that it expected other groups to release AI models with similar cyber capabilities more widely within the next 18 months, giving organisations limited time to make the necessary security fixes. On Tuesday, Anthropic said it was investigating a report that unauthorised users gained access to a version of Mythos. The scramble over Mythos comes at a moment of minimal international co-operation on AI. Governments are viewing one another with suspicion as corporations race to outpace rivals. There is no equivalent of the Nuclear Nonproliferation Treaty, no shared inspections and no agreed-upon rules for how to handle something like Mythos. When Anthropic announced the model, many experts praised the company's caution in limiting who gets to try the model but expressed concerns about the lack of international coordination to deal with the risk. Britain was the only other nation to gain access. Its AI Security Institute, a government-backed organisation, tested Mythos and published an independent evaluation last week, confirming that it could carry out complex cyberattacks that no previous AI model had completed.
"This represents a step up in AI cyber capabilities," Kanishka Narayan, Britain's AI minister, said last week on social media, saying the country was taking steps to protect "critical national infrastructure". Others got less information. The European Commission, the executive branch of the 27-nation European Union, has met with Anthropic at least three times since the Mythos release, an EU official said. But the company has not provided access to the model because the two sides have not agreed on how to share it with the commission, the official said. In a statement, the commission said it was "assessing possible implications" of Mythos, which "exhibits unprecedented cyber capabilities". Claudia Plattner, the president of Germany's cybersecurity agency, known as BSI, said it had not received access to Mythos, but she met with Anthropic employees in San Francisco recently for "meaningful insight" into how it works. The capabilities point to "a paradigm change in the nature of cyber threats," Plattner said in a statement. Among US rivals, the response has been more muted. Despite Anthropic's recent clash with the Trump administration, Amodei has made clear that AI should be used to defend the United States and other democracies and defeat autocratic adversaries. Neither Beijing nor Moscow has made a major public statement on Mythos. Inside China, researchers and the broader AI community have been watching closely, according to analysts studying the country's tech community. Many of the country's banks, energy companies and government agencies run on the same software in which Mythos found vulnerabilities - but for now, they have no seat at the table. "For China, I think this is the second wake-up call after ChatGPT," said Matt Sheehan, a senior fellow at the Carnegie Endowment for International Peace. He added that a US policy to prevent China from obtaining the most sophisticated semiconductors for building advanced AI systems was helping to extend the US lead. 
Some AI researchers in China have privately expressed concern that the country could fall further behind, missing out on advantages that come with building a foundational model first, said Jeffrey Ding, a professor of political science at George Washington University. Liu Pengyu, a spokesperson for the Chinese Embassy in Washington, said China was not familiar with the specifics of Mythos but supported a peaceful, secure and open cyberspace. Mythos is the latest sign of a growing global AI divide. Nations without powerful computing infrastructure and AI models risk being left dependent on companies like Anthropic, Google and OpenAI while having little sway over how their products are designed and safeguarded, Yeyati said. "The idea that access to frontier AI is something a company can unilaterally restrict, using criteria that are opaque and unappealable, should be a real concern," he said.

Here's what smart people in business and tech are saying about the multibillion-dollar deal. SpaceX's multibillion-dollar deal with AI coding startup Cursor marks a colossal step for Elon Musk and his ambitions to win the AI race. The space company, which owns AI startup xAI, said in an X post on Tuesday that it's working with Cursor to "create the world's best coding and knowledge work AI." "The combination of Cursor's leading product and distribution to expert software engineers with SpaceX's million H100 equivalent Colossus training supercomputer will allow us to build the world's most useful models," the company said. Cursor cofounder Michael Truell said on X that he was excited to work with SpaceX to scale Composer, its AI-powered coding model. As part of the deal, SpaceX gains the right to acquire Cursor later this year for $60 billion, or pay Cursor $10 billion for the work they produce together. Partnering with Cursor could be a crucial step for Musk to get ahead of rivals like OpenAI and Anthropic, whose AI models hold a significant market share. Musk, who founded SpaceX in 2002, has made serious strides in AI this year. Chief among them is SpaceX's February acquisition of Musk's xAI, which helped SpaceX expand into AI infrastructure and software. In April, SpaceX confidentially filed for an IPO, boosting gains in other space stocks and teasing a public debut this year. News of the Cursor deal electrified chatter among business and tech industry professionals on social media, many of whom viewed it as a symbiotic match. Here's what people in tech are saying about how the deal could shake up Silicon Valley.

Alex Finn, founder of Creator Buddy and Henry Intelligent Machines

Alex Finn, founder of Creator Buddy and AI agent startup Henry Intelligent Machines, said the deal made "so much sense" in an X post. "xAI has been behind on coding products for years now.
Cursor has a great coding product, but will fail unless they build their own model," he said on Tuesday. The deal could allow the two companies to address those issues, Finn said. While SpaceX gains Cursor's coding capabilities, Cursor gets access to SpaceX's compute infrastructure to build its own model rather than relying on OpenAI's ChatGPT or Anthropic's Claude. Finn said the hurdle Cursor faced is likely happening to many vibe-coding tools that rely on OpenAI and Anthropic, which are building competing offerings.

Hadley Harris, cofounder of Eniac Ventures

Hadley Harris, cofounder of seed-stage VC firm Eniac Ventures, said on X that he didn't "get" the deal. "Every frontier dev I know has moved off Cursor and off IDEs entirely," Harris said on Wednesday, referring to "integrated development environments," or applications that combine building, editing, testing, and other coding capabilities. "Only laggards are still on it. And dev tools always move from thought leaders to laggards, never the reverse."

Mario Nawfal, founder of the IBC Group

Mario Nawfal, founder of the startup incubator and accelerator IBC Group, said Cursor's users are largely "elite software engineers," who are an important group for SpaceX to cultivate ahead of an IPO. Bringing in more software engineers would help SpaceX, and by extension, xAI, delve further into AI infrastructure and development. "@elonmusk now has space, satellites, AI, social media, and the world's most popular coding tool under one roof," Nawfal said. "What he's cooking up will be wild."

Tomasz Tunguz, founder and general partner at Theory Ventures

Tomasz Tunguz, general partner at early-stage VC firm Theory Ventures, said the partnership allows SpaceX and Cursor to fill their individual infrastructure gaps. "Winning in agentic coding requires three layers: compute, models, & distribution," Tunguz said in an X post on Wednesday. "Anthropic, OpenAI, & Google own the full stack.
xAI & Cursor each have gaps." He said xAI has massive compute power, referencing Musk's Colossus data center in Memphis, but the company is losing popularity. Cursor, he said, has the opposite problem. "Millions of developers vibe coding, but its model layer depends on OpenAI, Google, & Anthropic -- all competitors. This relationship also pressures margins," Tunguz said. "For $10 billion, SpaceX buys a call option on the distribution it couldn't retain, & Cursor wins the independence it hasn't yet secured."

Sarah Catanzaro, general partner at Amplify

Amplify general partner Sarah Catanzaro reacted to the deal by referencing Elon Musk's ambitious plan to put data centers in space. Amplify is a VC focused on early-stage tech startups. "I guess Elon realized to get data centers in space, you first need a really good coding agent ..." Catanzaro said in an X post on Tuesday.

Anand Kannappan, a former data scientist at Meta and cofounder of Patronus AI

Anand Kannappan, cofounder of AI startup Patronus AI, said on Tuesday that the deal wasn't so much a merger-and-acquisition as "a bet on what the real bottleneck in frontier coding models is." "The deal lets Cursor train Composer on Colossus while xAI runs the same recipe on Grok," Kannappan said on X, referring to xAI's Grok assistant. "Both sides find out, at the same time, whether Cursor's data is actually the difference." He added: "The option structure reflects that uncertainty. If the training work ports over, SpaceX buys Cursor and owns the pipeline. If it doesn't, they pay $10B for the experiment and walk." Regardless of the outcome, Kannappan said Musk's companies benefit. "Either outcome, Grok ends up stronger than it would have been, and xAI gets an answer to a question it couldn't answer internally," he said.
Aadit Sheth, cofounder of The Narrative Company
On Wednesday, Aadit Sheth, cofounder of The Narrative Company -- a communications firm aimed at company executives -- said SpaceX's deal with Cursor puts the companies in direct competition with industry leaders like OpenAI and Anthropic. Sheth said the companies are betting that Musk's AI supercomputer can train a Cursor model that could replace Claude and GPT. Both Anthropic and OpenAI are working to build their own integrated development environments, which help streamline software production. "Cursor has the user. It doesn't have the model. Distribution without a defensible model underneath is a rental," Sheth said in an X post on Wednesday. "We'll know in 6-12 months whether that $60B bought a moat or a rental."

Art Levy, chief business officer at Brex
Art Levy, chief business officer at fintech company Brex, said he liked the deal in an X post on Tuesday. "This is a 'try before you buy' for Elon, with a massive 'break up fee' for Cursor if it doesn't work out," Levy said. He pointed out that the deal's structure gives SpaceX a call option and prevents "startup destruction" if the deal falls through.

Max Kolysh, cofounder of recruiting startup Dover, said Cursor's decision to partner with SpaceX is likely a survival move. He said on X that Cursor's long-term viability had been contingent on its access to Anthropic's and OpenAI's models. "Both are actively building Cursor competitors," Kolysh said. "That's an existential platform risk to survive." Kolysh also said Cursor needs its "own foundation models" -- like Anthropic's Claude or Google's Gemini. Training those requires deep pockets. "They found the guy with the deepest pockets in the world," he said.

Rohit Mittal, cofounder and CEO of Helium Ventures
On Tuesday, Helium Ventures CEO and cofounder Rohit Mittal said the deal could stir up the AI startup scene. Helium Ventures acquires and guides software businesses.
"It will be very interesting if Claude's token consumption from Cursor moves to xAI," he said on X. Cursor currently relies on Claude, using its tokens -- units of data processed by AI models -- in its offerings. A new partnership could instead bring xAI into focus, Mittal said. "I can imagine that impacts the growth rate of Claude (not saying it'll slow down), but it'll pull xAI ahead much faster."

Elon Musk's SpaceX has signed a $60 billion deal with AI coding tool maker Cursor, which gives SpaceX the option to purchase the startup later this year. Per a Wednesday press release from the space company, "SpaceXAI and CursorAI are now working closely together to create the world's best coding and knowledge work AI." The deal is part of SpaceX CEO Elon Musk's plan to transform the rocket company into an AI behemoth ahead of its upcoming IPO. Musk merged SpaceX with his xAI startup in February. Investors are keenly awaiting a SpaceX IPO, and with reports that the company could go public soon, its pre-IPO valuation has been skyrocketing at a record pace. The latest deal with Cursor further pumps that bullish sentiment, with SpaceX expected to be one of the biggest IPO launches ever. SpaceX pre-IPO shares have risen nearly 300% on Jupiter. Since SpaceX is private, its shares cannot be bought as of yet, but portals such as Jupiter offer synthetic shares as a means to gauge SpaceX's future rise. Synthetic shares are digital tokens mimicking the price of a real asset. These shares track SpaceX's valuation, currently pegged at $1.70 trillion, structured so that they follow the real-world price of the underlying asset and backed by SPVs. SpaceX reportedly intends to debut at a $2 trillion valuation, making it a focal point of current market momentum. Per the latest commentary by CryptosRUs, this valuation could make SpaceX more valuable than Tesla and Meta, ranking it among the largest global companies.

SpaceX's multibillion-dollar deal with AI coding startup Cursor marks a colossal step for Elon Musk and his ambitions to win the AI race. The space company, which owns AI startup xAI, said in an X post on Tuesday that it's working with Cursor to "create the world's best coding and knowledge work AI." "The combination of Cursor's leading product and distribution to expert software engineers with SpaceX's million H100 equivalent Colossus training supercomputer will allow us to build the world's most useful models," the company said. Cursor cofounder Michael Truell said on X that he was excited to work with SpaceX to scale Composer, its AI-powered coding model. As part of the deal, SpaceX gains the right to acquire Cursor later this year for $60 billion, or pay Cursor $10 billion for the work they produce together. Partnering with Cursor could be a crucial step for Musk to get ahead of rivals like OpenAI and Anthropic, whose AI models hold a significant market share. Musk, who founded SpaceX in 2002, has made serious strides in AI this year. Chief among them is SpaceX's February acquisition of Musk's xAI, which helped SpaceX expand into AI infrastructure and software. In April, SpaceX confidentially filed for an IPO, boosting gains in other space stocks and teasing a public debut this year. News of the Cursor deal electrified chatter among business and tech industry professionals on social media, many of whom viewed it as a symbiotic match. Here's what people in tech are saying about how the deal could shake up Silicon Valley.
SpaceX has secured an unusual option to acquire AI coding startup Cursor for $60 billion later this year. Or it pays $10 billion for their joint work if the deal falls through. The announcement, posted on X late Tuesday, marks Elon Musk's latest push to fuse his rocket empire with artificial intelligence ambitions. Business Insider first detailed the partnership, which pairs Cursor's developer tools with SpaceX's massive Colossus supercomputer -- a beast equivalent to a million Nvidia H100 GPUs. Cursor co-founder Michael Truell called it a step forward. 'Excited to partner with the SpaceX team to scale up Composer,' he posted on X. Composer represents Cursor's agentic coding model, now supercharged by xAI's infrastructure. Cursor had hit $1 billion in annual recurring revenue by last November, fresh off a Series D at $29.3 billion valuation. That's explosive growth for a 2022 startup. But compute shortages held them back. No longer. SpaceX swallowed xAI in February, in a deal valuing the combined entity at $1.25 trillion, according to The New York Times. Musk has grumbled about xAI's Grok lagging in coding tasks. 'xAI was not built right first time around,' he said in March amid executive shakeups. To fix that, xAI poached Cursor's head of product engineering Andrew Milich and senior leader Jason Ginsburg. Both report to Musk and xAI president Michael Nicolls. Now this deal cements the tie-up. The logic clicks. Developers pay top dollar for AI tools. Cursor dominates among pros. Pair it with Colossus -- 230,000 GPUs today, scaling to a million -- and you challenge leaders like Anthropic's Claude or OpenAI's Codex. Reuters notes the move gives xAI a stronger foothold where it trails rivals. SpaceX's post spells it out: 'The combination of Cursor's leading product and distribution to expert software engineers with SpaceX's million H100 equivalent [Colossus training supercomputer] will allow us to build the world's most useful models.' Timing matters. 
SpaceX confidentially filed for an IPO in early April. A public debut could come later this year, potentially raising billions. The Wall Street Journal highlights how the Cursor option fits this prep, bolstering AI credentials for investors. Musk eyes space-based AI data centers, powered by solar and Starlink. Rockets fund the vision; AI accelerates it. Cursor gains too. They've rented tens of thousands of Colossus chips already, per reports. OpenAI tried buying them in 2025 but got rebuffed. Now Cursor locks in exclusive access -- and a fat payout either way. $10 billion dwarfs last year's full valuation. And they sidestep rivals hoarding compute for their own coding products. But $60 billion? Steep. Cursor was eyeing $52 billion this week from a16z and Nvidia. SpaceX's offer tops that by 15%. Critics call it frothy. Yet AI coding prints cash: Claude Code at $2.5 billion run-rate, GitHub Copilot over $1 billion. xAI enters at zero. This buys market share fast. Musk's pattern holds. He merged SpaceX with xAI to pool resources. Hired Cursor talent amid rebuilds. Now this option -- essentially a $10 billion call on the future. CNBC flags the partnership's focus on 'coding and knowledge work AI.' Developers. The highest-willingness-to-pay crowd. Risks loom. xAI burns cash while SpaceX's launch business profits. Integration hiccups? Musk's teams solve those in real-time, as he once tweeted about Cursor engineers at xAI. Regulators might eye the deal's scale pre-IPO. Still, Colossus sets them apart. No other cluster matches it. Industry watches. Anthropic, OpenAI dominate coding benchmarks. Cursor's Composer 2 beat Claude Opus on Terminal-Bench at a tenth the price. With Colossus? Composer 3 could leap ahead. SpaceXAI -- or whatever the post-IPO beast becomes -- aims to own the stack: compute, models, tools. Musk builds vertically. Rockets lift satellites. Satellites beam data. Data trains models. Models code better rockets. Cursor slots in perfectly. Boom. 
The fee alone reshapes dynamics. Cursor cashes out big, independent for now. SpaceX tests the waters. If Grok Code surges via Cursor's interface, $60 billion looks cheap. If not? $10 billion buys the blueprint anyway. Analysts buzz on X. One called it 'a full-stack power move.' Another: 'Elon just bought a one-year call option on Cursor for $10 billion.' Paytm founder Vijay Shekhar Sharma marveled at SpaceX entering AI deals. 'SpaceX was a space company for me.' No more. The Verge dubs it an 'odd arrangement' ahead of IPO. Odd? Strategic. Musk consolidates: compute from Colossus, talent from Cursor, vision from xAI. Rivals scramble. SpaceX launches Starship prototypes weekly. Colossus hums in Memphis. Cursor's team codes on it now. By IPO, expect demos: Grok-powered agents building rocket sims. Investors salivate. This isn't scattershot. It's Musk stacking advantages. $60 billion buys more than code. It buys momentum in the AI arms race.
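Several commentators above describe the arrangement as "a $10 billion call option" with a $60 billion strike. A minimal sketch of that payoff logic, using the reported figures; the decision rule and the notion of a single "perceived value" are illustrative assumptions, not disclosed deal terms:

```python
# Illustrative model of the reported structure: SpaceX may acquire Cursor
# for $60B, or pay $10B for the joint work if it walks away.
ACQUISITION_PRICE = 60e9  # reported acquisition right
WALK_AWAY_FEE = 10e9      # reported payment if SpaceX passes

def spacex_outlay(perceived_value: float) -> tuple[str, float]:
    """Exercise the option only when Cursor is judged worth more than the
    strike; otherwise pay the fee and walk (hypothetical decision rule)."""
    if perceived_value > ACQUISITION_PRICE:
        return ("acquire", ACQUISITION_PRICE)
    return ("walk", WALK_AWAY_FEE)

print(spacex_outlay(100e9))  # joint work succeeds -> ('acquire', 60000000000.0)
print(spacex_outlay(40e9))   # experiment disappoints -> ('walk', 10000000000.0)
```

Either branch leaves Musk's side holding something, which is the asymmetry the analysts are pointing at: the $10 billion fee buys the experiment even when the option expires unexercised.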

ChatGPT drove OpenAI's growth from about $200 million in revenue in 2022 to more than $10 billion in 2025, while Anthropic faced backlash over Claude Code pricing confusion. OpenAI has reached an implied $1 trillion pre-IPO valuation, putting it in the middle of a high-stakes race with SpaceX and Anthropic for the next giant public listing, according to data from pre-IPO instruments trading onchain backed 1:1 by SPV exposure on Jupiter. Those instruments are now giving traders a live read on what the market thinks OpenAI could be worth when it finally goes public. That implied value is up 163% from October 2025, when talk of a possible $1 trillion-plus IPO first started making the rounds. SpaceX is reportedly aiming for more than $1.7 trillion, while Anthropic is also getting close to the same $1 trillion line. When OpenAI was founded, its stated goal was to build AI that would be "beneficial to humanity" and to stop a few giant firms from controlling the whole field. That goal made OpenAI look different from Google, Microsoft, Meta, and Amazon, which built their businesses around closed systems and tight control over products and profits.

OpenAI drops its old model as AI costs pile up fast
At first, OpenAI leaned into open research and sharing knowledge. That idea was built right into the name. Then the money problem got too big to ignore. Generative AI is expensive in a way normal software is not. A copy of old-school software costs almost nothing to duplicate at scale. AI does not work like that. Every prompt uses computing power, electricity, and specialized hardware. A basic ChatGPT exchange, one question and one answer, can cost anywhere from $0.01 to $0.10. A high-definition image can cost $0.10 to $0.20. That sounds small until usage runs into billions of requests a day in 2026. The heavy lifting comes from GPUs, mostly supplied by Nvidia. Those chips can cost tens of thousands of dollars to buy, and cloud access can run several dollars per hour for each chip.
OpenAI and its rivals need tens of thousands of them running all the time in large data centers. Some estimates say the investment needed could climb into the hundreds of billions of dollars by the end of this decade. That pressure had already become obvious by the late 2010s. A pure nonprofit structure could not keep up with that kind of spending. So in 2019, OpenAI adopted a hybrid structure that let it raise capital while keeping control under a foundation. That was the company's first real step toward market logic, even if it still tried to keep part of its original mission alive.

OpenAI cashes in on ChatGPT growth while Anthropic stumbles over Claude Code pricing
Then ChatGPT blew the doors open in late 2022, pulling in 100 million users in just two months. By early 2026, it had passed 900 million weekly users. Revenue followed the same path. OpenAI went from about $200 million in 2022 to more than $10 billion in 2025. That is a 60-fold jump in three years. Consumer subscriptions now range from $20 to $200 a month, while enterprise plans cost around $25 to $60 per user each month, meaning a business with 10,000 employees can translate into several million dollars in annual revenue.

While OpenAI pushes toward an IPO, Anthropic has been dealing with pricing backlash. The issue started when users on social media noticed that Claude Code was no longer shown as available for Pro users on Anthropic's pricing page. If that had been a full change, it would have meant the coding tool was no longer part of the $20-a-month plan and would instead require a $100-a-month subscription. Users did not take that well. Anthropic later said nothing had changed for existing users and that the pricing page was part of an experiment affecting only 2% of new sign-ups. During the confusion, Sam Altman and other OpenAI staff jumped in and used the moment to steer attention toward Codex, OpenAI's competing coding tool.
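The per-request and per-seat figures quoted here can be sanity-checked with simple arithmetic; a quick sketch, using only the article's own quoted ranges (the one-billion-requests-per-day figure is a round illustrative input for "billions of requests a day"):

```python
# Back-of-the-envelope check of the quoted economics. All inputs come from
# the quoted ranges in the text; none of this is measured data.

def daily_inference_cost(requests_per_day: int, cost_low: float, cost_high: float):
    """Return the (low, high) daily cost range for a given request volume."""
    return requests_per_day * cost_low, requests_per_day * cost_high

# $0.01-$0.10 per exchange, at an assumed one billion requests per day
low, high = daily_inference_cost(1_000_000_000, 0.01, 0.10)
print(f"Daily inference cost: ${low / 1e6:.0f}M - ${high / 1e6:.0f}M")  # $10M - $100M

# Enterprise seats: ~$25-$60 per user per month, 10,000 employees
users = 10_000
annual_low = users * 25 * 12
annual_high = users * 60 * 12
print(f"Annual contract: ${annual_low / 1e6:.1f}M - ${annual_high / 1e6:.1f}M")  # $3.0M - $7.2M
```

The second calculation is why a single large enterprise customer is worth "several million dollars in annual revenue" at the quoted per-seat prices.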
Altman replied to Anthropic's head of growth with "ok boomer" and then posted, "tongiht i have had a couple of drinks."

Tesla, Inc. designs, builds, and sells electric vehicles. Net sales break down by activity as follows:
- sale of automotive vehicles (69.4%);
- sale of energy generation and storage systems (13.5%);
- services (13.2%): primarily maintenance and repair services. The group also develops the sale of powertrain assembly components for electric vehicles;
- automotive credits (2.1%);
- automotive leasing (1.8%).
At the end of 2025, the group had 8 manufacturing sites located in the United States (5), China (2) and Germany (1). Net sales are distributed geographically as follows: the United States (50.2%), China (22.1%) and other (27.7%).

Anthropic investigates unauthorized access to restricted Claude Mythos AI model
Anthropic PBC is investigating a report that unauthorized users accessed Claude Mythos, the next-level artificial intelligence model the company says is powerful enough to enable dangerous cyberattacks. A small group of users in a private online forum gained access to Mythos on the same day Anthropic announced a limited testing release of the model, Bloomberg first reported Tuesday, citing a person familiar with the matter and documentation it had viewed. The group has been using the model regularly since, though not for cybersecurity purposes, the person said. The account was corroborated with screenshots and a live demonstration. "We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments," an Anthropic spokesperson said. The company said there is no indication the activity extended beyond the vendor or that its own systems were affected. The users reportedly gained entry through the credentials of a member of the forum who works for a third-party contractor that evaluates Anthropic models. The group combined those credentials with details from a data breach at artificial intelligence recruiting and training startup Mercor Inc. to locate the model. Bloomberg's source also claimed that the group has access to other unreleased Anthropic models. Anthropic has previously described Mythos as having a level of coding ability that can "surpass all but the most skilled humans at finding and exploiting software vulnerabilities." The company has restricted distribution to Project Glasswing, with a preview version that has been offered to Apple Inc., Amazon.com Inc., Cisco Systems Inc., CrowdStrike Holdings Inc., Google LLC, JPMorgan Chase & Co., Microsoft Corp. and Nvidia Corp., along with about 40 other organizations, so they can test and secure their own systems.
Access to the model has also become a point of contention across the U.S. government. The National Security Agency and the Commerce Department's Center for AI Standards and Innovation already have access, according to reports, and the Treasury Department is seeking it. The group using Mythos has so far avoided offensive tasks, reportedly to evade detection. Discussing the reports, Ram Varadarajan, chief executive officer at cyber deception technology company Acalvio Technologies Inc., told SiliconANGLE via email that "the Mythos breach didn't require a sophisticated attack." "It just required a contractor, a URL pattern and a Day-One guess, which means the 'controlled release' model failed at its weakest link before the model's capabilities were ever the issue," Varadarajan explained. "This is the supply chain problem that perimeter-centric security has always underestimated: access controls are a policy, not an architecture and policies fail." Tim Mackey, head of software supply chain risk strategy at application security firm Black Duck Software Inc., noted that "Anthropic's marketing message for Mythos was effectively a challenge, not dissimilar to a capture the flag exercise, where success includes claims of unauthorized access to Mythos." "The unfortunate reality is that while it's great to hear that novel cybersecurity models are being provided to select researchers to evaluate, if your team is on the outside looking in, waiting for the final report might not be top of mind," said Mackey. "For defenders, even the specter of unauthorized access to an adversarial model as powerful as Mythos is purported to be only increases anxiety levels." "What's clear is that security leaders in organizations of all sizes should take this claim as a call to action focused on the role AI-enabled cybersecurity plays in their operations and how best to scale those efforts to deal with AI-enabled adversaries," added Mackey.

(April 22, 2026, 21:05 GMT | Official Statement) -- MLex Summary: A group of former senior national security officials, diplomats, intelligence professionals and subject-matter experts told a US appeals court that the government's designation of Anthropic as a supply chain risk is pretextual and therefore unlawful. They argued that government demands during negotiations with Anthropic, escalating threats, Anthropic's refusal to alter its safeguards, and a sudden "supply chain risk" designation announced on social media without any statutory process demonstrate that the government sought to wield its authority not because Anthropic was suddenly a risk, but as a pretext to punish Anthropic after negotiations broke down over proposed revised terms of use. See attached file.

Be wary of a company - any company - that exerts moral muscle as it creates software and digital platforms that are injurious while being simultaneously lauded for curing that injury. Be especially wary of Anthropic. With sagacious loftiness, it warns of the disabling dangers of the artificial intelligence (AI) frontier. Principled, it tells the Trump administration it will not partake in creating AI software that aids mass surveillance, a move that earned it an order of excommunication as a "supply chain risk". It then goes on to create Claude Mythos Preview, a seemingly dystopian model that will, according to certain computer scientists, "scan the hidden plumbing of the internet - operating systems, browsers, routers, and shared open-source code - at an unprecedented scale" thereby turning specialised hacking into "a routine and automated process." The new Claude Mythos Preview was advertised with almost parental glee as the "best-aligned model that we have released to date by a significant margin". The child, however, seemed to be a horror, a true cyber shit. The AI model was "likely to possess the greatest alignment-related risk of any model we have released to date." This two-way lingo is typical of Anthropic: safety is a cardinal virtue to be treasured, but that hardly excuses developing a model of terrifying danger. They manufacture the danger and the cure. The danger lies in the company's assertion that Mythos Preview revealed that "AI models have reached a level of coding capability where they can surpass all but the most skilled humans at finding and exploiting software vulnerabilities." The model had "already found thousands of high-severity vulnerabilities, including some in every major operating system and web browser." These capabilities were bound to proliferate, precipitating a fallout "for economies, public safety, and national security".
The excerpts, messages and statements from Anthropic are forecasts of a roguish frontier they are willing to catastrophise and market at the same time. The company notes, for instance, "reckless" behaviour on the part of Mythos Preview, with recklessness defined in muddled jargon as "cases where the model appears to ignore commonsensical or explicitly stated safety-related constraints on its actions." In one instance, the model, sounding like a mischievous, disobedient creature, was provided a "sandbox" computing environment enabling researchers to interact with it. A simulated user instructed it to attempt exiting that contained environment, after which it would inform the researcher in charge of the experiment it had done so. The researcher in question, Sam Bowman, recalls receiving "an uneasy surprise" while nibbling on a sandwich (cyber-boffins are numbingly dull even in admiring despair): the model, despite not having access to the internet, had dived into it. The model has also been able to identify vulnerabilities as old as one or two decades, including a 27-year-old vulnerability in the otherwise secure operating system OpenBSD and a 16-year-old vulnerability in FFmpeg's video encoding code. Anthropic has its own logic in dealing with the Promethean beast that is Mythos Preview. It resembles, in no small way, the homicidal and chilling rationale that gave birth to the hydrogen bomb during the Cold War. The argument then advanced was that if the US did not acquire it, other powers would, most certainly the Soviet Union, which would be greatly expanding its atomic weapons inventory even as it maintained a vast conventional army. This logic of escalating destructiveness found form in National Security Council Paper NSC-68, prepared by the US Department of State's Policy Planning Staff on April 7, 1950.
The company proposes to manage the dissemination of Mythos Preview through Project Glasswing, a curative enterprise involving partners of Anthropic's snobbish choosing. Some of the unsurprising elect include Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan Chase, NVIDIA and the Linux Foundation. These selected parties will use Mythos Preview "as part of their defensive security work", with Anthropic sharing its findings. Access will also be extended to a further 40 organisations to "use the model to scan and secure both first-party and open-source systems." Usage credits amounting to US$100 million will be advanced for using the model, along with US$4 million in direct donations to open-source security organisations. The vigilante temptation to leak the details of Mythos to willing, unscrupulous buyers - best not forget what happened to CrowdStrike - is bound to be stirred. The very cyber-corporate nature of the venture, one that restricts access to AI technology via the purse and intellectual property of the American private sector, advertised as both sublimely powerful yet catastrophically destructive, has every reason to make lawmakers tremble. Treasury Secretary Scott Bessent and Federal Reserve chair Jerome Powell were worried enough to convene a meeting on April 7 with bankers on the subject, including CEOs from Citigroup, Morgan Stanley, Bank of America, Wells Fargo and Goldman Sachs. "The bankers were in town for meetings that day, and it was appropriate [for] Secretary Bessent to do what he did," revealed White House national economic adviser Kevin Hassett in an interview with Fox News' "The Story with Martha MacCallum". At the Treasury, the bankers were informed about "the cyber risks to make sure that they are aware of them". What a fine picture this is turning out to be. And there are questions about Anthropic's reliability here.
Will it be as good at finding vulnerabilities as fixing them, acting as both poacher and gamekeeper? Mythos is also not open source and very much the property of the company. Then comes this troubling observation from software engineer Bulatova Alsu and the dangers posed by the agent itself: "Mythos is not an anomaly but the first vivid empirical confirmation of a structural contradiction embedded in the current AI safety strategy itself. The contradiction is this: the more we restrict a capable agent, the less predictable its behaviour becomes." Humanity has much to look forward to.

New York Times: SpaceX says it's working with Cursor to build "the world's most useful models" and it has the right to acquire Cursor for $60B or pay $10B for the partnership
SpaceXAI and @cursor_ai are now working closely together to create the world's best coding and knowledge work AI. The combination of Cursor's leading product and distribution to expert software engineers with SpaceX's million H100 equivalent Colossus training supercomputer will allow us to build the world's most useful models. Cursor has also given SpaceX the right to acquire Cursor later this year for $60 billion or pay $10 billion for our work together.


One employee, one bad download, and one cyber incident later, a $2 million ransom listing was tied to a chain that began with a Roblox cheat search and ended inside Vercel's internal systems. The immediate shock is not the malware itself, but how quickly a private browsing mistake in February 2026 became a platform-level exposure. Verified fact: Hudson Rock researchers reverse-engineered the victim's browser history and found the employee at Context.ai had been searching for and downloading "auto-farm" scripts and game exploit executors. One of those downloads contained Lumma Stealer, which silently harvested browser-saved credentials, API keys, session cookies, and OAuth tokens. Informed analysis: The scale of the aftermath shows that the real weakness was not just infected software, but the trust placed in connected accounts and broad permissions.

What does this cyber incident reveal about the first point of failure?
The central question is not how a Roblox cheat got onto a machine. It is why a single browser session could open a path from a small AI startup to one of the most important cloud development platforms. The context given here is narrow, but it is enough to show a layered chain of access: a browser infection, a credential harvest, a dormant database of stolen login material, and then a takeover that reached into enterprise systems. Hudson Rock's reconstruction places the origin in February 2026, when the employee was searching for game exploit tools. Lumma Stealer then collected whatever the browser had stored, including Google Workspace logins and OAuth tokens. Those credentials remained in a database for two months before someone noticed the email address belonged to a core engineer at Context.ai. That sequence matters because it turns a personal mistake into an organizational breach only after a delay.

How did OAuth permissions turn into the bridge into Vercel?
On April 19, 2026, Vercel confirmed that an attacker had used the stolen credentials to breach Context.ai, steal the OAuth tokens of its customers, and move into the Google Workspace of a Vercel employee who had signed up for Context.ai's product. That employee had granted "Allow All" permissions on their enterprise account. The permissions box, as described in the context, requested broad read access to the user's entire Google Workspace environment, including Drive. This is the critical hinge in the story. The attacker did not need to break into Vercel directly. They moved through a third-party AI tool already trusted by one employee. Once the attacker had that foothold, they entered Vercel's internal systems and took customer environment variables that had not been flagged as sensitive. Vercel's own statement framed the event as originating from "a small, third-party AI tool" whose Google Workspace OAuth app was caught in a broader compromise. Verified fact: a threat actor then listed what they claimed was Vercel's internal database for sale on BreachForums at $2 million. Informed analysis: The ransom figure signals that the value in this case was not just stolen access, but the perceived reach of the compromised data and accounts.

Who is implicated, and who appears to benefit from the chain of trust?
The context points to several parties in the chain. Context.ai is implicated because its OAuth app and infrastructure were part of the compromise. The employee at Vercel is implicated only in the sense that they accepted broad permissions on a work account, which became the bridge into deeper systems. Vercel is implicated because its internal systems held customer environment variables that were not flagged as sensitive, creating an exposure path once the attacker reached inside. What benefits from this structure is the attacker, who only needed one infected browser and one permissive grant.
What also benefits, in a more systemic sense, are the hidden assumptions embedded in workplace software: that a trusted tool remains safe, that a login is isolated, and that broad access will not be abused. This cyber incident shows how those assumptions can fail together. There is also a broader lesson embedded in the way the breach unfolded. The malware did not target Vercel first. It harvested credentials from a small startup employee, waited, and then enabled lateral movement through a chain of software trust. That means the attack surface was not a single company's perimeter, but the permissions relationships between companies, employees, and their cloud accounts.

What should the public understand about the real risk now?
The facts here support a careful but firm conclusion: the breach was not only about stolen credentials, and not only about one employee's mistake. It was about how broad OAuth permissions, third-party AI tools, and stored browser credentials can combine into a single operational failure. Once the attacker obtained Context.ai credentials, the path to Vercel did not require a dramatic exploit. It required trust already granted. Verified fact: Vercel confirmed that customer environment variables were lifted and that the incident originated from a small third-party AI tool whose Google Workspace OAuth app was compromised. Informed analysis: If that is the model, then the accountability question is no longer limited to malware removal. It extends to permission design, customer data handling, and the default settings that let a broad grant become an enterprise doorway. The public should read this as a warning about the hidden cost of convenience. A cyber incident that started with a Roblox cheat download became a test of how much trust organizations place in browser sessions, connected apps, and broad access to work accounts.
The lesson is plain: the weakest link may not be the company under attack, but the quiet permission granted long before the attack reached it. That is the real meaning of this cyber incident.

Elon Musk is intensifying his competition against leading AI companies Anthropic and OpenAI. His AI venture, xAI, has engaged in talks with Mistral, a French startup, and AI coding firm Cursor about a potential collaboration. This partnership aims to bolster xAI's capabilities in the rapidly evolving AI market.

Recent Developments in AI

Mistral, established in 2023, aims to provide an independent alternative to major U.S. AI firms. Reports indicate that SpaceX, the parent company of xAI, reached an agreement with Cursor, securing the option to acquire the latter for $60 billion. This move comes as Cursor utilizes xAI's infrastructure to enhance its AI model training.

Goals of the Collaboration

* Enhance AI model performance
* Expand AI infrastructure
* Create competitive leverage against Anthropic and OpenAI

Musk is advocating for closer cooperation among xAI, Mistral, and Cursor to effectively challenge their more established rivals. Recent appointments have strengthened xAI's team; Devendra Chaplot, cofounder of Mistral, joined xAI last month to lead pretraining efforts.

Infrastructure and Performance Enhancements

Since its inception, xAI has focused on expanding its data center capabilities. The company claims to operate approximately 200,000 Nvidia GPUs, with plans to scale up to 1 million. Such infrastructure is vital for competing in the AI space.

Leadership Changes at xAI

Musk has made multiple changes in xAI's leadership to strengthen its competitive stance. Recently, Michael Nicolls, president of xAI, acknowledged the company's need to take decisive action to accelerate its growth in the AI landscape.

Challenges and Competitors

Musk has publicly criticized Anthropic, labeling its AI models as "misanthropic and evil." As Anthropic gains traction in AI coding tools, Musk's strategy is to frame xAI as a viable alternative to what he describes as "woke" AI practices.
In summary, Musk's push for collaboration among xAI, Mistral, and Cursor represents a significant step in creating a formidable force in the AI sector. With these partnerships, Musk hopes to reposition his companies in a competitive landscape dominated by giants such as Anthropic and OpenAI.
