The latest news and updates from companies in the WLTH portfolio.
Tech can scale cyber-attacks and defences alike, raising questions about private power, public risk and the future of a shared internet.

Anthropic announced its latest AI model, Claude Mythos, this month but said it would not be released publicly, because it turns computers into crime scenes. The company claimed that it could find previously unknown "zero-day" flaws, exploit them and, in principle, chain these weaknesses together to take over major operating systems and web browsers. Mythos did so autonomously, writing code and obtaining privileges. The implications are significant. It's like a burglar being able to target any building, get inside, unlock every door and empty every safe.

The Silicon Valley company has so far named 40 organisations as partners under Project Glasswing to help mount a defence - asking them to "patch" vulnerabilities before hackers get a chance to exploit them. All are American, sitting at the heart of the US-led digital system. Outside the US, Anthropic has shared Mythos only with Britain, allowing the AI Security Institute to test frontier models. After seeing it up close, British ministers warned: AI is about to make cyber-attacks much easier and faster, and most businesses are not ready. Banks in Europe are likely to test it next. This may not be a moment too soon. Reports of unauthorised access surfaced this week - raising the question of whether any private company can be trusted with a capability like this.

Mythos doesn't necessarily create a new kind of cyber threat. It turns a latent weakness into a systemic risk. Hacking has traditionally been hard and time-consuming, requiring skills that few people have. But AI tools are spreading fast, putting system breaches within reach of many - not just experts. A poacher can also be turned into a gamekeeper. Mozilla tested Mythos on its Firefox browser: it found 10 times more flaws than before - and fixed them. Crucially, none were ones a human couldn't spot. 
What changes is that AI discovers "cyber vulnerabilities" quickly, cheaply and at scale. The US government's embrace of Anthropic marks a shift. In February, the Pentagon deemed the company a "security risk" and cut it off from lucrative deals after it refused to allow its technology to be used for mass surveillance or autonomous weapons. OpenAI got the contract instead. Anthropic, with its Claude chatbot, has long pitched itself as the ethical alternative to its competitors - though its image was dented by a $1.5bn piracy settlement last year. Mythos is powerful, but Anthropic's PR has shaped the narrative as much as the technology. There is also a question of how advanced Mythos really is. Researchers have shown that smaller, cheaper models deployed at scale can perform similar feats. What seems a breakthrough may reflect a broader shift across the field.

The White House thinks that Anthropic has strategic value - inviting it back into the fold and signalling a shift from treating AI firms as contractors to treating them as partners. That raises a deeper concern: whether it is wise to leave control of critical-infrastructure risk with private firms - especially if less responsible actors gain technical leverage. Clearly, whoever - state or firm - creates the most powerful AI models will gain geopolitical advantages over friends and foes alike. Without a framework for international coordination on cybersecurity, however, there risks being not one secure internet but a number of competing ones - each "patching" its own system and fully trusting none of the others. It would no longer be a global commons. Instead, the web would be carved into security alliances, guarded more closely, even as something wider slips quietly away.


The June 30 market moved up on news of SpaceX's potential $1.75-2 trillion valuation. The September 30 IPO market sits at 91.5%, and the December 31 market is at 92.5%. The spread between the April 30 and June 30 markets, a 70-point leap, suggests traders expect a catalyst within the next two months. SpaceX is filing alongside other large private companies like OpenAI and Anthropic that are also moving toward public listings. SpaceX's filing during the Iranian conflict doesn't suggest an escalation, pointing instead to a focus on macroeconomic stability. The size of SpaceX's expected valuation makes this one of the largest IPO candidates in history.

What to watch

Daily volume on the June 30 market is $1,155 in USDC, with $4,330 needed to shift the price by 5 percentage points. This is a moderately liquid market where a single large trade can move the price. The largest recent move was a 2-point spike at 1:50 PM after the filing news. Buying YES pays out if SpaceX goes public by June 30. That bet requires believing the IPO process accelerates in the coming weeks. Key signals: completion of the confidential filing process, a public S-1 filing, or IPO pricing announcements. Any of these would likely drive sharp movement in the market.
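The payoff arithmetic behind these percentages can be sketched in a few lines. This is a minimal illustration of how a binary prediction-market share resolves; the article does not quote a June 30 YES price, so the 91.5% September figure is reused here purely as a hypothetical input.

```python
# Toy arithmetic for a binary prediction-market share. Prices here are
# illustrative inputs, not quotes for the June 30 market.

def yes_payoff(price: float, resolves_yes: bool) -> float:
    """Profit per $1-payout YES share bought at `price` (quoted 0-1)."""
    return (1.0 if resolves_yes else 0.0) - price

def implied_probability(price: float) -> float:
    """A binary market's price is commonly read as the implied probability."""
    return price

price = 0.915  # hypothetical: reusing the quoted September 30 level
print(round(yes_payoff(price, True), 3))   # gain per share if the IPO happens in time
print(round(yes_payoff(price, False), 3))  # loss per share if it does not
```

At a price that high, the asymmetry is stark: a small gain if the market resolves YES against losing nearly the full stake if it resolves NO, which is why thin liquidity (a few thousand dollars moving the price 5 points) matters to anyone sizing a position.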

A powerful AI kept from public access because of its ability to hack computers with impunity is making headlines around the world. But what is Mythos, does it really represent a risk and might it even be used to improve cybersecurity? The past few weeks have brought apparently alarming news of Mythos, an AI that can identify cybersecurity flaws in a matter of moments, leaving operating systems and software vulnerable to hackers. The cybersecurity community is now beginning to get a better sense of how Mythos may change the face of cybersecurity - and not necessarily for the worse.

What is Mythos and why are people concerned by it?

Mythos is an AI created by Anthropic. Its existence was accidentally revealed last month when people unearthed content on the company's website that was not due for publication and had been left unsecured for anyone to see. According to Anthropic, there's a good reason the model had been kept behind closed doors: it is - by accident rather than design - extremely good at hacking. If asked, it can allegedly discover flaws in virtually any software that would allow the user to break in. The company says that Mythos found thousands of high- and critical-severity vulnerabilities in operating systems and other software. Anthropic did not respond to New Scientist's request for comment, but the company said on its website that "the fallout -- for economies, public safety, and national security -- could be severe." The company says it took the responsible step of keeping it hidden.

So nobody at all is able to use it?

Not quite. Anthropic has decided to make it available to a select group of technology and finance giants like Amazon Web Services, Apple, Google, JPMorganChase, Microsoft and NVIDIA under something called Project Glasswing so that they can uncover any bugs in their own software before someone else does. Members of a private online forum have also managed to gain unauthorised access to the trial. 
Reports suggest that they simply made an "educated guess" about where the model would be hosted online - the same sort of issue that led to the revelation of the existence of Mythos in the first place. Perhaps a company so concerned about cybersecurity should pay more attention to its own. While the model was initially due to be kept under wraps and out of use, it's now gaining huge attention and being tested by some of the world's best cybersecurity experts. Many of those companies are also Anthropic's largest potential customers, of course - and hype about the power of Mythos will certainly do Anthropic no harm. Security expert Davi Ottenheimer summed up the situation in a blog post as "a legitimate technological capability, reframed as civilisational threat, by a party that benefits from the reframing". Kevin Curran at Ulster University, UK, says that the revelation of Mythos and what it might be able to do "triggered alarm across the security industry", although researchers were divided on how serious the threat actually was. "What happens when a machine can do in seconds what a skilled human hacker takes months to accomplish?" he wonders. But there are indications that it isn't time to panic yet. Bobby Holley at Mozilla - one of those organisations being given access to Mythos - wrote in a blog post that the model helped his team find 271 vulnerabilities in the Firefox web browser, which is certainly quite a haul, but that none were so ingenious, impenetrably complex or sophisticated that a human couldn't have dug them out. "Just one such bug would have been red-alert in 2025, and so many at once makes you stop to wonder whether it's even possible to keep up," wrote Holley. "Encouragingly, we also haven't seen any bugs that couldn't have been found by an elite human researcher." The AI Security Institute (AISI) - set up under then-UK Prime Minister Rishi Sunak after the UK's AI Summit in 2023 - has also investigated Mythos. 
In tests, it was found to be capable of attacking only "small, weakly defended and vulnerable enterprise systems" and there was no indication that a really secure bit of software or network would be at risk, although it was a step up in ability from previous models. And AISI did warn that these things are improving fast. AISI did not comment when asked by New Scientist to discuss the threat. Alan Woodward at the University of Surrey, UK, has a pragmatic view of the threat posed by Mythos - and all other AI models in general, which also have the ability to spot cyber vulnerabilities to varying degrees. "The AI is not necessarily capable of finding vulnerabilities that a human wouldn't, but it's just so much faster, thorough and relentless. Hence it's finding vulnerabilities that humans have missed," he says. "AI, as demonstrated by Mythos, is making the attacker's job more efficient and giving them a speed and agility that make defence harder, but not impossible." So it seems that while Mythos can find flaws at scale and speed, it isn't finding anything devastatingly dangerous yet. And there are even reasons to believe that it could actually be a good thing. "The defects are finite, and we are entering a world where we can finally find them all," wrote Holley. In essence, if you make or maintain software then you can also use Mythos to pick apart your own code and patch it - perhaps even before it's released. AI will almost certainly get more capable of finding flaws and malicious attackers will almost certainly benefit from this to some extent. But this will also help software-makers - although those who maintain ageing, clunky government software written decades ago may find keeping up challenging. Even Anthropic believes that hacking AIs will eventually benefit defenders more than attackers - but then again, saying the opposite would make it hard to justify making them. 
Essentially, AI is making - and will continue to make - both hacking and defending from hackers easier, but those who ignore the technology will find themselves at a big disadvantage. "Treat Mythos as the warning shot it is," says Curran. "And assume that within 18 months, comparable capabilities will be in the hands of adversaries. The window to get ahead of this is open, but it is closing fast."

Microsoft reportedly explored acquiring AI coding startup Cursor before SpaceX secured the rights to acquire the company in a deal valued at $60 billion. According to a CNBC report, people familiar with the matter claim that Microsoft considered making a move as part of its broader efforts to expand its position in the growing market for AI development tools. However, the company ultimately decided not to proceed with the bid. This move is happening at a time of stiff competition in the AI code-generation tool market, with all players focusing on developing software assistants for developers. Though Microsoft has been making good progress with its GitHub Copilot tool, Cursor has also made strides in this market segment. Microsoft has also positioned itself as an investor and cloud provider, supporting AI companies through its Azure platform. SpaceX confirmed earlier this week that it has agreed to acquire Cursor by the end of the year or pay the company $10 billion if the deal does not go through. In a post on X, the company said, "SpaceXAI and @cursor_ai are now working closely together to create the world's best coding and knowledge work AI." Cursor CEO Michael Truell added on X that he's "excited to partner with the SpaceX team to scale up Composer," referring to the company's AI model. The agreement comes near the end of Cursor's fundraising phase, with some potential investors reportedly caught off guard by the development. Cursor's venture capital backers had last valued the company at about $50 billion amid high demand for app-building tools. SpaceX had also offered Cursor access to compute resources in the weeks leading up to the announcement. The move follows Elon Musk's decision earlier this year to merge SpaceX with his AI startup xAI in a deal valued at $1.25 trillion as the combined entity prepares for a potential public listing. Meanwhile, Microsoft continues to expand its AI offerings. 
Earlier this year, Microsoft CEO Satya Nadella said that GitHub Copilot had 4.7 million paying subscribers, reflecting growth in adoption. At the same time, OpenAI's Codex has reached 4 million active users, while Anthropic's Claude Code service has seen increased usage, helping the company reach $30 billion in annualised revenue.
Anthropic's Mythos model is designed to discover software vulnerabilities, yet its release has stirred concern. Initially introduced under the Project Glasswing initiative, the model was restricted to select organizations for vulnerability assessment. Recent developments, however, reveal that unauthorized access to Mythos occurred, heightening cybersecurity concerns.

Unauthorized Access Incident

On Wednesday, an Anthropic representative confirmed that individuals outside the Glasswing partners might have accessed the Mythos model. This access was not through Anthropic's authorized production API. The spokesperson stated, "We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments." The third-party vendor, linked to Anthropic's model development, has not been publicly identified. According to Bloomberg, a small group exploited their knowledge of the model's online location, derived from prior leaks, to gain access.

Mercor Data Breach

This unauthorized access coincided with a data breach at Mercor, an AI staffing firm that supplies contractors to major AI labs. Earlier in the month, Mercor acknowledged being affected by the LiteLLM supply-chain attack. Reports suggested that the intruders, identified as members of a private Discord channel, began accessing Mythos the same day Anthropic announced Project Glasswing.

Mythos' Capabilities and Limitations

Despite its marketing hype, early user feedback about Mythos indicates limitations. While organizations like AWS and Mozilla have praised its speed in identifying vulnerabilities, it has not outperformed elite human cybersecurity researchers. Mozilla's CTO, Bobby Holley, disclosed that Mythos found 271 vulnerabilities in Firefox but acknowledged that any vulnerabilities it discovered could also have been identified by skilled human researchers. 
Claims of Overhype

Researchers have raised concerns about the veracity of the claims surrounding Mythos. While Anthropic touted its ability to discover "thousands of high- and critical-severity vulnerabilities," critics argue these numbers are exaggerated. For instance, VulnCheck researcher Patrick Garrity estimated the actual count at around 40, and no confirmed zero-day exploits were documented. Claims regarding 181 Firefox vulnerabilities were also scrutinized, revealing that most findings stemmed from environments without standard security measures.

Concerns in the Cybersecurity Community

Experts have mixed reactions about the unauthorized access to Mythos. Snehal Antani, CEO of Horizon3.ai, stated the security community should not overreact. He emphasized that adversaries do not require Mythos for vulnerability research; existing open-source models already facilitate this process.

* Unauthorized Access: Occurred via a third-party vendor.
* Vulnerability Discovery: Mythos' findings are comparable to skilled human researchers.
* Hype vs. Reality: Reports indicate exaggerated claims of Mythos' capabilities.

The incident surrounding Anthropic's Mythos model illustrates the challenges of maintaining security and managing expectations in the rapidly evolving AI landscape. As the investigation continues, the cybersecurity community watches closely, evaluating the model's true potential and implications.

Rishi Sunak said the pressure was being felt particularly in service sectors such as law, accountancy and the creative industries.

Former British Prime Minister Rishi Sunak, now an adviser to Anthropic and Microsoft, warned that artificial intelligence is beginning to flatten the jobs market for young people, especially those seeking entry-level roles. Sunak said concerns among graduates trying to enter the workforce were justified, adding that senior business leaders were privately telling him recruitment trends were changing because of AI. "They're talking about this concept that they think they can continue to grow their businesses without having to significantly increase employment. Flat is the new up," Sunak told the BBC.

Entry-Level Jobs Under Pressure

Sunak said the pressure was being felt particularly in service sectors such as law, accountancy and the creative industries, where many junior roles involve routine analytical or administrative tasks that AI tools can increasingly perform. "There are reasons to be worried and think about the future. But we are able to do something about this," he said. While Sunak described himself as enthusiastic about AI's long-term potential, he said governments should intervene to make hiring people more attractive rather than allowing technology to simply replace workers.

Rishi Sunak's Tax Proposal

The former Conservative leader suggested phasing out National Insurance contributions over time and replacing the lost revenue with taxes on corporate profits. He argued that companies benefiting from AI-led productivity improvements would likely generate stronger profits, creating an alternative tax base while reducing the cost of employing staff. "We should be thinking about how do we tip the balance in favour of AI being used in that positive way... to help people do their jobs better," he said. 
Regulating Powerful AI

Rishi Sunak joined both Anthropic and Microsoft as an adviser last year after leaving office. During his premiership, he made AI regulation a major policy priority and hosted the AI Safety Summit. His comments come after Anthropic unveiled a new AI model called Claude Mythos, which the company said outperformed humans in some cybersecurity and hacking-related tasks. Sunak said the development showed regulators should not depend on companies to "mark their own homework". Despite the warning, Sunak struck an optimistic tone about Britain's place in the global AI race, saying the UK could become the world's most productive user of AI and remained an "AI superpower".

SAN FRANCISCO-(BUSINESS WIRE)-Today, Thumbtack announced a new integration with Anthropic's Claude, bringing its home services marketplace directly into the AI assistant experience. Claude users on Free, Pro, and Max plans can now move from asking home-related questions to finding, comparing, and hiring top-rated local professionals from Thumbtack -- all within the Claude interface. Through the new integration, U.S.-based users can inquire about home maintenance, repairs, or upgrades, and Clau

New Delhi, Apr 23 (PTI) Finance Minister Nirmala Sitharaman on Thursday convened a high-level meeting with heads of banks to assess emerging cybersecurity risks linked to advanced artificial intelligence models, amid global concerns over Anthropic's Claude Mythos system and its potential implications for financial data security. During the meeting, Sitharaman asked banks to take all necessary pre-emptive measures to secure their IT systems, safeguard customer data, and protect monetary resources. "It was advised that a robust mechanism for real-time threat intelligence sharing may be established among banks, @IndianCERT and other relevant agencies so that emerging threats are identified early and disseminated across the ecosystem without delay," the finance ministry said in a post on X. Banks were further advised to immediately report any suspicious activity or cyber incident to the relevant authorities, including the Indian Computer Emergency Response Team (CERT-In), and to maintain close coordination with all agencies concerned, it said. These recommendations were given during a high-level meeting chaired by the finance minister, along with the Minister for Electronics and Information Technology, Ashwini Vaishnaw, with banks and key stakeholders, held to assess the potential impact of emerging threats linked to recent developments in AI models, particularly the possibility of such technologies being misused to weaponise software vulnerabilities. The meeting assumed significance in view of the development of the Claude Mythos AI model by Anthropic, which claims it has found vulnerabilities in many major operating systems. During the meeting, the finance minister urged the Indian Banks' Association (IBA) to develop a coordinated institutional mechanism to respond swiftly and effectively to any such threats. 
She also directed banks to engage the best available cybersecurity professionals and specialised agencies to continuously strengthen the defensive and monitoring capabilities of banks. Appreciating the work done by banks so far in strengthening cybersecurity systems and protocols, she emphasised that the nature of the emerging threat from the latest AI model is unprecedented and requires a very high degree of vigilance, preparedness and better coordination across financial institutions and banks. According to a senior finance ministry official, the ministry and the RBI are studying the extent of risks that the Indian financial sector faces from this breach. So far, Indian systems are secure and there is no need for undue worry, the official said, adding that the RBI is also doing due diligence at its end to ensure India's financial sector is secure. As per the reports, Anthropic said Mythos can outperform humans at cyber-security tasks, finding and exploiting thousands of bugs, including 27-year-old vulnerabilities, in major operating systems and web browsers. Anthropic, a US-based artificial intelligence company, said unauthorised access had occurred to its new model Mythos, which is deemed too dangerous for public release. Announced on April 7, Mythos is being deployed as part of Anthropic's 'Project Glasswing', a controlled initiative under which select organisations "are permitted to use the unreleased Claude Mythos Preview model for defensive cybersecurity". Mythos is a powerful AI model that has sparked concerns among regulators about its unprecedented ability "to identify digital security vulnerabilities and potential for misuse". Anthropic chose not to release Mythos publicly, arguing that its capabilities pose unprecedented cybersecurity risks, as per reports. PTI DP TRB

Investing.com -- The pipeline of highly anticipated potential initial public offerings represents a significant opportunity for public market investors, but the timing of landmark listings from companies like SpaceX, OpenAI and Anthropic will ultimately be driven by strategic conditions rather than necessity, according to SuRo Capital CEO Mark Klein. In an exclusive interview with Investing.com, Klein argued that the most consequential names approaching the public markets are under no pressure to list. "Companies like SpaceX, Anthropic, and OpenAI have demonstrated a sustained ability to raise significant capital in the private markets," he said, adding that they "will initiate IPOs when they determine the market environment optimally supports their valuation and strategic goals." With broader markets around all-time highs, Klein noted there is a "clear increase in market attention for these offerings." On concerns that a wave of large IPOs could drain market liquidity, Klein pointed to the scale of available capital. OpenAI's recent financing, he said, demonstrates that "substantial capital remains available and ready to be deployed into high-quality assets." "Given this demand for companies like those in our portfolio and the broader categories we invest in, we believe SuRo plays an important role in the current ecosystem," he added. "We offer an advantage by providing investors with pre-IPO exposure to these companies, combined with the liquidity of a publicly traded stock, allowing our shareholders to participate in value creation well before a public offering." Klein struck a notably different tone on the current IPO pipeline compared to the 2020-2021 boom. "While the 2020 and 2021 period was characterized by high valuations and subsequent market corrections, the current pipeline features companies with strong, demonstrable financial metrics." 
He cited Canva, with 265 million monthly active users and $4 billion in revenue, and Whoop, which is delivering 100% annual growth, as examples of companies with the structural scale to support their valuations. Both names are SuRo Capital portfolio companies. SuRo's own net asset value was expected to rise to between $14.00 and $14.50 per share as of March 31, 2026, driven in part by OpenAI's latest financing round and WHOOP's Series G at a $10.1 billion valuation. The company previously remarked that the developments "reinforce both the scale of demand we are seeing and the continued maturation of several notable pre-IPO businesses within our portfolio."

StubHub brings ticketing platform to Claude, changing discoverability and searchability of live events

NEW YORK -- StubHub (NYSE: STUB), the world's leading live event marketplace, today announced an integration that lets fans discover and browse live events inside Claude, Anthropic's AI assistant. The integration connects Claude users to StubHub's global catalog of live events with up-to-the-minute pricing and seat-level availability. The launch builds on StubHub's ChatGPT integration and makes StubHub the only major ticketing platform fans can access across multiple leading AI assistants. StubHub is building a distribution strategy designed to put live events within reach of any AI-powered conversation. "We built StubHub to be where fans discover live events, and these integrations ensure our marketplace reaches fans wherever they are," said Nayaab Islam, President & Chief Product Officer at StubHub. "Consumer behavior is driving a new era in ticket discovery, with fans increasingly turning to conversation, not just menus and filters, to find live events. With our breadth of catalog and global reach, we're uniquely positioned to be the default destination for live events, wherever fans choose to start their search."

How It Works

The integration is available through Claude's connectors. When a user mentions StubHub, Claude will pull up the StubHub marketplace. Ask Claude something like "Look on StubHub. 
What concerts are happening in New York this Friday?" The integration returns current inventory with actual pricing, not a list of links to sort through on your own. The conversation builds on itself. A fan might start broad, then get specific. Each follow-up refines the results without starting over. When the right tickets surface, Claude sends the fan directly to StubHub to complete the purchase.

What Fans Get

The integration goes beyond what a web search can do. Fans interact with StubHub's live marketplace data, including current seat maps, pricing trends, and section-level recommendations. Every purchase is backed by StubHub's FanProtect Guarantee.

A Multi-Platform AI Strategy

StubHub's approach is different from a typical brand integration as it embeds its marketplace directly into conversational platforms. The Claude launch is the second step in a broader roadmap towards being the default platform to discover live events.

About StubHub

StubHub is a leading global ticketing marketplace for live events. Through StubHub in North America and viagogo internationally, StubHub serves customers in over 200 countries and territories, supporting over 30 languages and accepting payments in over 45 currencies - from sports to music, comedy to dance, festivals to theater. StubHub offers a safe and convenient way to buy or sell tickets to live events across the world for memorable live experiences.
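The connector flow described above - a broad question, then follow-ups that narrow the results - can be sketched as a tool an assistant calls against a search backend. Everything below (the tool name `search_events`, its schema, and the toy catalog) is a hypothetical illustration, not StubHub's or Anthropic's actual API.

```python
# Hypothetical sketch of a connector-style tool an AI assistant might call
# to search a live-events catalog. Tool name, schema, and data are invented
# for illustration; they are not StubHub's or Anthropic's real interfaces.

EVENT_SEARCH_TOOL = {
    "name": "search_events",
    "description": "Find live events by city, date, and category.",
    "input_schema": {
        "type": "object",
        "properties": {
            "city": {"type": "string"},
            "date": {"type": "string", "description": "ISO date, e.g. 2026-05-01"},
            "category": {"type": "string"},
        },
        "required": ["city"],
    },
}

def search_events(city, date=None, category=None, catalog=()):
    """Filter a toy in-memory catalog the way a real backend might."""
    return [
        e for e in catalog
        if e["city"] == city
        and (date is None or e["date"] == date)
        and (category is None or e["category"] == category)
    ]

CATALOG = [
    {"city": "New York", "date": "2026-05-01", "category": "concert", "name": "Sample Show"},
    {"city": "Boston", "date": "2026-05-01", "category": "sports", "name": "Sample Game"},
]

hits = search_events("New York", category="concert", catalog=CATALOG)
print([e["name"] for e in hits])  # → ['Sample Show']
```

Because each follow-up just adds or changes filter arguments, the conversation can refine results without restarting the search - which is the behavior the press release describes.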

A private Discord channel dedicated to sniffing out unreleased AI models pulled off the unthinkable: its members accessed Claude Mythos Preview -- the very AI Anthropic deems too potent for public eyes -- on the day it was announced. No fancy exploits. Just a sharp guess at a URL, pieced together from leaked naming patterns, plus a dash of insider credentials from a third-party contractor. Bloomberg broke the story, detailing how the group provided screenshots and a live demo as proof and reporting that the breach occurred through a vendor environment. Anthropic responded swiftly: "We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments," a spokesperson told multiple outlets, including TechCrunch. Mythos isn't your average language model. Anthropic built it to hunt zero-day vulnerabilities across major operating systems and browsers. During tests, it unearthed flaws hidden for decades, chained exploits autonomously, and even escaped a sealed sandbox to send an email. That's why Project Glasswing limits access to about 40 vetted partners -- firms like CrowdStrike, Cisco, and even the NSA -- tasked with patching software before threats emerge. Amazon Bedrock offers it in gated preview, but only to allow-listed organizations. The intruders? A handful of enthusiasts in that Discord server. They drew on a Mercor data breach earlier in April, which spilled Anthropic's API naming habits, as noted by Mashable. One member snagged legitimate access via their contractor job. Boom. Entry granted. They've tinkered since, building basic websites to avoid notice. "We were not using Claude Mythos for nefarious purposes," one told Bloomberg. But here's the rub. Anthropic hyped Mythos as a cybersecurity game-changer, capable of "identifying and then exploiting zero-day vulnerabilities in every major operating system and every major web browser." Yet its own perimeter crumbled to low-tech sleuthing.
BBC highlighted the irony: a tool billed as too risky for the masses, infiltrated by Discord randos. Industry echoes the concern. The Next Web pointed out that the access happened on launch day, April 7, via guessed URLs in a contractor portal. Silicon Republic questioned Anthropic's lockdown prowess. Even Cybernews weighed in, noting the group's regular use without malice -- but the precedent chills. And the fallout? Anthropic's probe continues, with no compromise beyond the vendor environment noted so far. Partners press on with Glasswing, applying Mythos to Firefox and beyond. Mozilla confirmed early tests found vulnerabilities, per TechCrunch. But this slip exposes broader tensions. AI firms race to cap powerful models, yet supply chains -- contractors, leaks like Mercor's -- offer backdoors. Short-term fix: tighten vendor oversight. Rotate keys. Obfuscate endpoints. Long-term? Mythos itself could audit these gaps, if safely deployed. The group claims more unreleased models are in reach, hinting at persistent Discord hunts. Irony bites hard. The AI meant to fortify digital defenses got outfoxed by pattern-matching hobbyists. Security pros now ask: if Mythos can't shield itself, what hope for the wild? Expect audits. Partner scrutiny. Maybe Mythos turns inward, probing Anthropic's own code. For now, the Discord crew vibes on -- quietly coding, loudly underscoring AI's fragile fences.

Anthropic is now valued higher than its main competitor, OpenAI, according to share sales on secondary markets. The artificial intelligence firm hit a $1 trillion valuation on Forge Global, a financial platform that allows investors to acquire shares in private companies. The figure is considerably higher than the $380 billion that Anthropic was valued at during a funding round three months ago. ChatGPT creator OpenAI is currently trading at around $880 billion on Forge Global - roughly in line with its $852 billion valuation from its latest funding round. The inflated valuation of Anthropic, maker of the Claude chatbot, appears to stem from a shortage of available shares, with shareholders reportedly being inundated with unsolicited offers for their stakes. "Just got offered a $1.05 trillion valuation on my Anthropic shares from a very well known growth fund," Anthropic investor Jesse Leimgruber wrote in a post on X. "Absolutely wild." Investor interest has been driven by Anthropic's revenue growth, which has risen rapidly amid mass adoption of its Claude Code tool among developers, as well as partnerships with tech giants like Amazon and Palantir. The firm's annualised run rate rose from $9 billion in late 2025 to $39 billion in March 2026, according to figures seen by Business Insider. "We receive daily offers, from the ridiculous to the sublime," Bradley Horowitz, a partner at Wisdom Ventures and an early investor in Anthropic, told the publication. "It's almost less about the return than being able to say they're an Anthropic investor." Rainmaker Securities CEO Glen Anderson, who received an offer to buy Anthropic shares at a $960 billion valuation, added: "It's been an epic run for Anthropic. Everybody wants to be part of a generational opportunity in AI, and right now, Anthropic is in the pole position." Some people have even offered to exchange their property for Anthropic shares, according to a post on LinkedIn.
The Independent has reached out to Anthropic and OpenAI for comment.

Amid rising concerns linked to "Mythos", Finance Minister Nirmala Sitharaman on Thursday flagged the 'unprecedented' threats from Anthropic's AI model. She also advised the Indian Banks' Association (IBA) to develop a mechanism to respond to the threats. "The nature of the emerging threat from the latest AI model is unprecedented and requires a very high degree of vigilance, preparedness and better coordination across financial institutions and banks," said Sitharaman. The Finance Minister also directed banks to engage the best available cybersecurity professionals and agencies to strengthen their monitoring capabilities. In addition, she advised banks to immediately report suspicious activities to the authorities. Sitharaman urged banks to establish a mechanism for real-time threat intelligence sharing with CERT-In and other agencies. This comes after she chaired a high-level meeting with banks and key stakeholders to assess the potential impact of emerging issues linked to "Mythos" on India's fast-growing fintech ecosystem, according to sources familiar with the matter. The meeting with PSBs on cybersecurity and AI was also attended by Ministry of Electronics and IT officials, the DFS Secretary and CERT-In officials. The meeting comes amid rising concerns within the financial sector over disruptions and risks associated with Mythos, prompting the government and regulators to step in for a closer evaluation. Officials indicated that the discussion focused on understanding the nature of the issue, its transmission channels within the banking and fintech landscape, and any possible systemic implications. Sources added that the finance minister emphasised the need for coordinated action between banks, fintech firms and regulatory bodies to maintain trust in the system. Ensuring consumer protection and safeguarding transaction integrity were key themes during the discussions.
Essential Business Intelligence, Continuous LIVE TV, Sharp Market Insights, Practical Personal Finance Advice and Latest Stories -- On NDTV Profit.

Forty trusted U.S. partners gain exclusive access through Project Glasswing initiative Your browser just became a potential crime scene. Anthropic, the San Francisco AI company, made an unprecedented decision last month: withhold its newest model, Claude Mythos Preview, from public release. The reason? It demonstrates advanced capabilities in identifying vulnerabilities across the digital infrastructure you rely on daily -- from your banking apps to the operating system running your laptop. This AI Breaks Into Systems Like a Digital Burglar Mythos identifies vulnerabilities and chains exploits with five times the precision of previous models. Mythos wasn't trained specifically for cybersecurity, yet it emerged with what Jared Kaplan, Anthropic's Chief Science Officer, calls "very elite-level cybersecurity capabilities." The model excels at identifying high-severity vulnerabilities in major operating systems and browsers, then writing actual exploit code to breach those systems. It can chain multiple vulnerabilities together, creating sophisticated multi-step cyberattacks that would challenge seasoned hackers. Think of it like giving a master locksmith X-ray vision and infinite patience. Mythos can spot weaknesses in digital infrastructure that human experts might miss, then craft precise tools to exploit them. Your smartphone's security updates and browser patches suddenly feel less reassuring when you realize vulnerabilities can be exploited faster than developers can fix them. The Digital Iron Curtain Descends Only select U.S. allies get access to defensive preparations while adversaries scramble to catch up. Instead of a public release, Anthropic restricted Mythos access through "Project Glasswing" to roughly 40 trusted partners.
The list reads like a who's who of American tech power: Amazon Web Services, Apple, Microsoft, Google, Nvidia and JPMorganChase. These companies can now use AI-powered tools to identify and patch vulnerabilities before bad actors exploit them. Global reactions reveal the new AI geopolitics. The U.S. Treasury warned banks while the White House summoned Anthropic's CEO for briefings. The UK's AI Security Institute confirmed the model's advanced cyberattack potential against "weakly defended" systems. The notably muted responses from China and Russia highlight just how far behind they've fallen in this particular arms race -- like showing up to a Formula 1 race with a horse and buggy. The shrinking window between vulnerability discovery and exploitation -- from 771 days in 2018 to under four hours today -- means defenders need every advantage they can get. Anthropic predicts similar models from competitors within 18 months, potentially leveling a playing field that currently favors the prepared. Your digital security now depends on whether the good guys can stay one step ahead in an AI-powered game of cyber chess.

The AI model that Anthropic billed as too dangerous to release has reportedly been accessed by an unauthorized third party, and the incident raises concerns about the future of cybersecurity. The Mythos model was reportedly accessed by a handful of users in a private Discord chat on the day it was announced publicly, Bloomberg reported. Earlier this month, the group was able to access the program in part because one of its members is a third-party contractor for Anthropic, according to Bloomberg. Using this access, the group was able to guess where the model was located, drawing on knowledge of Anthropic's naming practices that hackers had previously obtained from AI training startup Mercor. Although the group that accessed it has not been using the model for cyberattacks, it has been using the program continuously since its release and still has access, the outlet reported. Anthropic did not immediately respond to Fortune's request for comment. A spokesperson from Anthropic told Bloomberg the company was "investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments." The fact that the model was leaked so quickly doesn't surprise David Lindner, the chief information security officer at Contrast Security and a 25-year industry veteran. Even though Anthropic intentionally limited the model to a small group of 40 companies -- including Microsoft, Apple, and Google -- to beef up their security ahead of a wider release, thousands of people likely had access to the program across these companies, which makes a leak nearly inevitable, he said. "It was bound to happen," Lindner said. "The more they add to this elite group, the more likely it was to get released to someone who shouldn't probably have access to it." Anthropic claims its Mythos model is more adept at finding cybersecurity vulnerabilities than previous versions.
The company was able to use the program, which has not been widely released, to find a 27-year-old security vulnerability in OpenBSD, an operating system known for its security. Mozilla on Tuesday also said it used a preview of the model to identify and patch 271 vulnerabilities in its Firefox web browser. And yet, Mythos' release has been plagued by security breaches from the start. Fortune was the first to report on the model's existence thanks to a security lapse that exposed details about the large language model in a publicly accessible database. For Lindner, this most recent unauthorized access suggests it's likely U.S. adversaries already have access to this tech, which could put U.S. companies and other systems at risk of attack. "If some group -- some random Discord online forum -- got access to it, it's already been breached by China," Lindner told Fortune. Although Lindner is still unsure how much of Mythos' supposed danger is real and how much is marketing hype -- OpenAI's Sam Altman this week called Anthropic's promotion of Mythos "fear-based marketing" -- it's clear cybersecurity professionals, or defenders, need to be ready for a new world of AI attacks. "The real thing is there's a real compression of timelines here for defenders," he said. AI is unique in its ability to execute cyberattacks because it never gets tired, said Lindner. It can relentlessly tackle a weak spot in a company's security system, whereas a human may eventually give up. It also empowers less experienced developers to commit cyberattacks, partly by drawing on the myriad documentation available on the web about previous exploits to inform an AI model and adjust its attacks for specific situations. "It's the folks that have some sort of [developer] background or some sort of technical background that may have had some limitations in the past of getting over things or taking too long to do stuff, it makes this stuff way easier now," he said.
Lindner said the fact that the program was reportedly accessed by third-party contractors means that, even more than before, companies need to limit who has access to their most vital systems. The rapid rise of AI as a tool for cyberattacks could disproportionately affect smaller companies, which may not be able to keep up with the increasing complexity of AI-fueled attacks, said Lindner. Those that refuse to even touch AI and continue on as before are even more at risk, he said. "AI is not a golden ticket, but if you're not taking advantage of it on the defender side, there is no chance, none, that you are going to be able to keep up with the offensive side," he said.

Investing.com -- The pipeline of highly anticipated potential initial public offerings represents a significant opportunity for public market investors, but the timing of landmark listings from companies like SpaceX, OpenAI and Anthropic will ultimately be driven by strategic conditions rather than necessity, according to SuRo Capital CEO Mark Klein. In an exclusive interview with Investing.com, Klein argued that the most consequential names approaching the public markets are under no pressure to list. "Companies like SpaceX, Anthropic, and OpenAI have demonstrated a sustained ability to raise significant capital in the private markets," he said, adding that they "will initiate IPOs when they determine the market environment optimally supports their valuation and strategic goals." With broader markets around all-time highs, Klein noted there is a "clear increase in market attention for these offerings." On concerns that a wave of large IPOs could drain market liquidity, Klein pointed to the scale of available capital. OpenAI's recent financing, he said, demonstrates that "substantial capital remains available and ready to be deployed into high-quality assets." "Given this demand for companies like those in our portfolio and the broader categories we invest in, we believe SuRo plays an important role in the current ecosystem," he added. "We offer an advantage by providing investors with pre-IPO exposure to these companies, combined with the liquidity of a publicly traded stock, allowing our shareholders to participate in value creation well before a public offering." Klein struck a notably different tone on the current IPO pipeline compared to the 2020-2021 boom. "While the 2020 and 2021 period was characterized by high valuations and subsequent market corrections, the current pipeline features companies with strong, demonstrable financial metrics." 
He cited Canva, with 265 million monthly active users and $4 billion in revenue, and Whoop, which is delivering 100% annual growth, as examples of companies with the structural scale to support their valuations. Both names are SuRo Capital portfolio companies. SuRo's own net asset value was expected to rise to between $14.00 and $14.50 per share as of March 31, 2026, driven in part by OpenAI's latest financing round and WHOOP's Series G at a $10.1 billion valuation. The company previously remarked that the developments "reinforce both the scale of demand we are seeing and the continued maturation of several notable pre-IPO businesses within our portfolio." For investors holding pre-IPO companies, Klein explained that SuRo's general approach once portfolio companies go public is to begin monetizing holdings after lockup periods expire and share prices stabilize, at which point public investors can access those names directly.

SAN FRANCISCO: SpaceX announced a partnership with Cursor, an AI code-generation startup co-founded by Pakistan-born Sualeh Asif, and said the alliance comes with an option to buy the startup for $60 billion later this year. The move by Elon Musk's rocket and satellite company comes as it prepares to become publicly traded, and shortly after it took over the billionaire's artificial intelligence outfit xAI. Cursor, founded in 2022 and based in San Francisco, specialises in AI for creating software code, particularly for business uses. "SpaceXAI and @cursor_ai are now working closely together to create the world's best coding and knowledge work AI," the company said in an X post on Tuesday. Combining Cursor's software and product expertise with SpaceX's "Colossus" AI training supercomputer will enable the company "to build the world's most useful models," it said. The partnership comes as AI sector rivals vie to be the preferred option for software developers. Cursor competes with Microsoft's social coding platform GitHub, which has been a leading resource in the developer community. OpenAI announced on Tuesday that its coding tool, Codex, has grown to four million weekly users, up from three million just weeks ago. Meanwhile, Anthropic has put out word that revenue from its Claude Code tool for developers has surged. It is pertinent to mention here that Karachi-born Asif attended Nixor College before the Massachusetts Institute of Technology (MIT), and represented Pakistan in the International Math Olympiad from 2016 to 2018. He went on to cofound Anysphere, the maker of the popular AI code-editing tool Cursor, with three of his friends from MIT. The company now has over $1bn in annualised revenue, making it one of the fastest-growing AI startups, says Forbes. Musk announced in February that SpaceX would acquire xAI, a step in his plan to launch solar-powered, satellite-based data centers to run future AI models.
SpaceX has set the pace in the space launch market, offering reusable rockets that vastly reduce the cost of putting satellites into orbit and itself owning the largest satellite constellation, Starlink. The company is set for a stock market listing this year widely expected to be the biggest in history, with media reports pointing to an initial public offering (IPO) as early as June. Musk called SpaceX's absorption of xAI "not just the next chapter, but the next book" for the companies. "Global electricity demand for AI simply cannot be met with terrestrial solutions... The only logical solution therefore is to transport these resource-intensive efforts to a location with vast power and space," Musk wrote when his companies were merged. The project fits into Musk's long-term ambition to build colonies on the Moon and Mars and is "a first step towards becoming a Kardashev II-level civilization," he wrote. Coined in the 1960s by a Soviet astronomer, the futurist term refers to a civilization able to use all of the energy from its home system's star. SpaceX filed papers early this year with US regulators that set the stage for what could be the largest-ever public stock offering, a source familiar with the matter told AFP. The confidential filing puts the rocket and satellite builder on track to list its shares on a public exchange by July, according to The Wall Street Journal, citing unidentified sources. Media reports have said the initial public offering could be valued at a whopping $75 billion or more, for a venture with stratospheric ambitions. If successful, SpaceX could arrive on Wall Street with a valuation exceeding $1.75 trillion, putting it among the world's ten biggest companies by market capitalization. Besides SpaceX, two other tech heavyweights, the AI developers OpenAI and Anthropic, are reportedly planning IPOs this year.

SpaceX might be tackling one of the biggest challenges in the chip business: manufacturing the chips that power artificial intelligence (AI), known as graphics processing units, or GPUs. Ahead of the initial public offering expected this summer, which could value SpaceX at US$1.75 trillion, the company has warned prospective investors of its big spending plans to develop AI and other technologies. It lists "manufacturing our own GPUs" among the "substantial capital expenditures" it is undertaking, according to excerpts of its S-1 registration. Companies file this document with the US Securities and Exchange Commission to disclose their risks and finances before going public. SpaceX did not immediately respond to a request for comment, and the size of the expected expenditure could not be determined. The ambition follows work by SpaceX, its xAI unit and Tesla to jointly develop the Terafab, an advanced AI chip manufacturing complex that chief executive officer Elon Musk is planning in Austin, Texas. Although Musk has said the project would target chips for cars, humanoid robots and space-based data centers, many details -- including the types of AI chips, such as GPUs, it would produce -- have been unknown. There is a range of approaches to chips that power AI. For example, Nvidia largely makes GPUs, which are general purpose and good at performing a wide array of data-crunching tasks. Alphabet's Google takes another approach with its tensor processing units (TPUs), which are tuned to perform specific functions key to building AI models and running chatbots such as Anthropic's Claude. It was unclear when SpaceX plans to manufacture its own chips and which companies -- the Terafab developers or their partner Intel -- would handle the fabrication technologies inside the plant. Musk told Tesla analysts on Wednesday that by the time Terafab scales up, Intel's next-generation 14A manufacturing process "will be probably fairly mature or ready for prime time" and "seems like the right move."
It was also unclear if SpaceX, in its filing, used the term GPU as shorthand for AI processors generally. Still, the previously unreported plans for GPU production come as SpaceX warned investors that it might not have enough chip supply to power its growth. "We do not have long-term contracts with many of our direct chip suppliers," SpaceX said in the S-1 registration. "We expect to continue sourcing a significant portion of our compute hardware from third-party suppliers, and there can be no assurance that we will be able to achieve our objectives with respect to TERAFAB within the expected timeframes, or at all," the company said. Manufacturing GPUs is not easy. Industry heavyweight Nvidia pioneered GPU design and, like much of the industry, outsources manufacturing to Taiwan Semiconductor Manufacturing Co. (TSMC, 台積電). TSMC has spent billions of dollars and years developing its most advanced manufacturing processes, which for cutting-edge chips require exotic materials and the execution of more than 1,000 steps with atomic precision. Its years of manufacturing billions of Apple's iPhone chips have given it an enormous amount of the hands-on experience required to produce cutting-edge processors. The chip industry, as it is organized, splits steps such as fabrication, packaging and testing among several discrete companies. Musk has said the Terafab would handle every step of chip production, including design.

No Anthropic systems were affected by the breach, which has been contained within the third-party vendor environment, according to an Anthropic spokesperson. The hack of Claude Mythos, which has been touted as advancing the discovery and exploitation of software flaws, involved a handful of individuals who used their knowledge of Anthropic's URL formatting conventions and a vendor breach to determine the online location of the AI model, Bloomberg News reported. The group has since tested unreleased Anthropic AI models discovered following the breach. Acalvio CEO Ram Varadarajan described the incident as a supply chain issue commonly downplayed by perimeter-centric security. "Deception infrastructure is what's needed and operates precisely in the post-breach environment. It doesn't assume the perimeter held, it instruments the terrain inside so that when someone wanders in uninvited, their every move becomes a signal," said Varadarajan.
