News & Updates

The latest news and updates from companies in the WLTH portfolio.

Anthropic says it has fixed three causes of recent Claude Code quality issues: reduced default reasoning, a caching bug, and a system prompt to reduce verbosity

X says it is deprecating X Communities, giving members until May 30 to migrate to XChat, and plans to increase group chat limits to 1,000 in the next two weeks -- Today we're announcing two product changes for organizing communities on X: 1. XChat now supports joinable links for groupchats. Create a public link & share direct to Timeline. With support for 350 members per chat (and growing), Groupchat Links are the fastest way to bring

Anthropic
Techmeme42m ago
Read update
Anthropic says it has fixed three causes of recent Claude Code quality issues: reduced default reasoning, a caching bug, and a system prompt to reduce verbosity

FM Sitharaman urges bankers to brace for AI threats amid Anthropic concerns

Anthropic's Mythos AI sparks worries about financial data security

Finance Minister Nirmala Sitharaman on Thursday convened a high-level meeting with heads of banks to assess emerging cybersecurity risks linked to advanced artificial intelligence models, amid global concerns over Anthropic's Claude Mythos system and its potential implications for financial data security. During the meeting, Sitharaman asked banks to take all necessary pre-emptive measures to secure their IT systems, safeguard customer data, and protect monetary resources. "It was advised that a robust mechanism for real-time threat intelligence sharing may be established among banks, @IndianCERT and other relevant agencies so that emerging threats are identified early and disseminated across the ecosystem without delay," the finance ministry said in a post on X. Banks were further advised to immediately report any suspicious activity or cyber incident to the relevant authorities, including the Indian Computer Emergency Response Team (CERT-In), and to maintain close coordination with all agencies concerned, it said.

These recommendations emerged from a high-level meeting chaired by the finance minister, along with Minister for Electronics and Information Technology Ashwini Vaishnaw, with banks and key stakeholders, held to assess the potential impact of emerging threats linked to recent developments in AI models, particularly the possibility of such technologies being misused to weaponise software vulnerabilities. The meeting assumed significance in view of Anthropic's development of the Claude Mythos AI model, which the company claims has found vulnerabilities in many major operating systems. During the meeting, the finance minister urged the Indian Banks' Association (IBA) to develop a coordinated institutional mechanism to respond swiftly and effectively to any such threats.

She also directed banks to engage the best available cybersecurity professionals and specialised agencies to continuously strengthen banks' defensive and monitoring capabilities. Appreciating the work done by banks so far in strengthening cybersecurity systems and protocols, she emphasised that the nature of the emerging threat from the latest AI model is unprecedented and requires a very high degree of vigilance, preparedness and better coordination across financial institutions and banks. According to a senior finance ministry official, the ministry and the RBI are studying the extent of risks that the Indian financial sector faces from this breach. So far, Indian systems are secure and there is no need for undue worry, the official said, adding that the RBI is also doing due diligence at its end to ensure India's financial sector is secure.

According to reports, Anthropic said Mythos can outperform humans at cybersecurity tasks, finding and exploiting thousands of bugs, including 27-year-old vulnerabilities, in major operating systems and web browsers. Anthropic, a US-based artificial intelligence company, said unauthorised access was gained to its new model Mythos, which is deemed too dangerous for public release. Announced on April 7, Mythos is being deployed as part of Anthropic's 'Project Glasswing', a controlled initiative under which select organisations "are permitted to use the unreleased Claude Mythos Preview model for defensive cybersecurity". Mythos is a powerful AI model that has sparked concerns among regulators about its unprecedented ability "to identify digital security vulnerabilities and potential for misuse". Anthropic chose not to release Mythos publicly, arguing that its capabilities pose unprecedented cybersecurity risks, as per reports.

Anthropic
MoneyControl1h ago
Read update
FM Sitharaman urges bankers to brace for AI threats amid Anthropic concerns

Polymarket Trader Wins $37,000 After Unusual Paris Temperature Spike; French Authorities Launch Probe

Unusual temperature fluctuations recorded at a key Paris weather station have sparked an official investigation after a trader on Polymarket reportedly earned around $37,000 from highly improbable bets. The wagers were linked to daily temperature prediction contracts based on data from a Météo-France monitoring station at Charles de Gaulle (CDG) Airport. Authorities flagged irregularities on April 6 and April 15, when temperatures briefly spiked beyond expected levels before quickly normalising. On April 6, readings momentarily crossed 21°C, while on April 15, the temperature jumped to nearly 22°C after staying around 18°C for most of the day. These spikes represented the highest recorded temperatures for the days in question, allowing some bets to pay out, according to Bloomberg.

The 'Missing' Evidence & Insider Suspicion

Data analysis firms Bubblemaps and NS3.AI identified a specific trader who seemingly anticipated these "impossible" spikes. Just minutes before the artificial surge on April 15, the trader aggressively bought "NO" shares on the 18°C threshold contract. This perfectly timed move allowed the trader to profit when the rogue reading pushed the official temperature above the limit, settling the market in their favor.

Official Legal Action

Météo-France has taken the rare step of filing a formal police complaint, citing suspected sensor tampering or a cyber-intrusion into its automated reporting systems. The agency noted that no other nearby stations recorded similar spikes, suggesting a localized breach specifically at the CDG monitoring unit.

Impact on Prediction Markets

The incident has drawn attention to the vulnerability of data-dependent betting platforms (or "oracles"). Polymarket relies on official government feeds to settle contracts; this breach demonstrates that if the "ground truth" source is compromised, decentralised platforms have no choice but to pay out on fraudulent data.
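The settlement mechanics at issue can be sketched in a few lines. This is a hypothetical illustration, not Polymarket's actual contract code: framing the market as "Will today's high stay at or below 18°C?", the contract trusts the oracle feed as ground truth, so a single spiked reading is enough to flip the outcome in favour of "NO" holders.

```python
# Hypothetical sketch of a daily-high threshold contract settling against
# an oracle temperature feed. The function name, contract framing, and
# readings are invented for illustration only.

def settle(readings, threshold_c):
    """Settle 'YES' if every reading stays at or below the threshold,
    else 'NO'. One anomalous spike in the trusted feed is enough to
    change the settlement."""
    return "YES" if max(readings) <= threshold_c else "NO"

normal_day = [16.8, 17.5, 18.0, 17.9, 17.2]   # hovers around 18 C all day
assert settle(normal_day, 18.0) == "YES"

spiked_day = normal_day + [21.9]              # one rogue near-22 C reading
assert settle(spiked_day, 18.0) == "NO"       # 'NO' shares now pay out
```

Because settlement reads only the official feed, the platform has no independent way to distinguish a genuine heat spike from a tampered sensor, which is the vulnerability the incident exposed.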

PolymarketAnthropic
NDTV Profit2h ago
Read update
Polymarket Trader Wins $37,000 After Unusual Paris Temperature Spike; French Authorities Launch Probe

FM urges bankers to brace for AI threats amid Anthropic concerns

"It was advised that a robust mechanism for real-time threat intelligence sharing may be established among banks, @IndianCERT and other relevant agencies so that emerging threats are identified early and disseminated across the ecosystem without delay," the finance ministry said in a post on X. Banks were further advised to immediately report any suspicious activity or cyber incident to the relevant authorities, including the Indian Computer Emergency Response Team (CERT-In), and to maintain close coordination with all agencies concerned, it said.

These recommendations emerged from a high-level meeting chaired by the finance minister, along with Minister for Electronics and Information Technology Ashwini Vaishnaw, with banks and key stakeholders, held to assess the potential impact of emerging threats linked to recent developments in AI models, particularly the possibility of such technologies being misused to weaponise software vulnerabilities. The meeting assumed significance in view of Anthropic's development of the Claude Mythos AI model, which the company claims has found vulnerabilities in many major operating systems. During the meeting, the finance minister urged the Indian Banks' Association (IBA) to develop a coordinated institutional mechanism to respond swiftly and effectively to any such threats. She also directed banks to engage the best available cybersecurity professionals and specialised agencies to continuously strengthen banks' defensive and monitoring capabilities. Appreciating the work done by banks so far in strengthening cybersecurity systems and protocols, she emphasised that the nature of the emerging threat from the latest AI model is unprecedented and requires a very high degree of vigilance, preparedness and better coordination across financial institutions and banks. According to a senior finance ministry official, the ministry and the RBI are studying the extent of risks that the Indian financial sector faces from this breach.

So far, Indian systems are secure and there is no need for undue worry, the official said, adding that the RBI is also doing due diligence at its end to ensure India's financial sector is secure. According to reports, Anthropic said Mythos can outperform humans at cybersecurity tasks, finding and exploiting thousands of bugs, including 27-year-old vulnerabilities, in major operating systems and web browsers. Anthropic, a US-based artificial intelligence company, said unauthorised access was gained to its new model Mythos, which is deemed too dangerous for public release. Announced on April 7, Mythos is being deployed as part of Anthropic's 'Project Glasswing', a controlled initiative under which select organisations "are permitted to use the unreleased Claude Mythos Preview model for defensive cybersecurity". Mythos is a powerful AI model that has sparked concerns among regulators about its unprecedented ability "to identify digital security vulnerabilities and potential for misuse". Anthropic chose not to release Mythos publicly, arguing that its capabilities pose unprecedented cybersecurity risks, as per reports.

Anthropic
ThePrint2h ago
Read update
FM urges bankers to brace for AI threats amid Anthropic concerns

Do you need to worry about Mythos, Anthropic's computer-hacking AI?

A powerful AI kept from public access because of its ability to hack computers with impunity is making headlines around the world. But what is Mythos, does it really represent a risk and might it even be used to improve cybersecurity? The past few weeks have brought apparently alarming news of Mythos, an AI that can identify cybersecurity flaws in a matter of moments, leaving operating systems and software vulnerable to hackers. The cybersecurity community is now beginning to get a better sense of how Mythos may change the face of cybersecurity - and not necessarily for the worse.

What is Mythos and why are people concerned by it?

Mythos is an AI created by Anthropic. Its existence was accidentally revealed last month when people unearthed content on the company's website, not due for publication, which had been left unsecured for anyone to see. According to Anthropic, there's a good reason the model had been kept behind closed doors: it is - by accident rather than design - extremely good at hacking. It can allegedly discover flaws in virtually any software, if asked, that would allow the user to break in. The company says that Mythos found thousands of high- and critical-severity vulnerabilities in operating systems and other software. Anthropic did not respond to New Scientist's request for comment, but the company said on its website that "the fallout -- for economies, public safety, and national security -- could be severe." The company says it took the responsible step of keeping it hidden.

So nobody at all is able to use it?

Not quite. Anthropic has decided to make it available to a select group of technology and finance giants like Amazon Web Services, Apple, Google, JPMorganChase, Microsoft and NVIDIA under something called Project Glasswing so that they can uncover any bugs in their own software before someone else does. Members of a private online forum have also managed to gain unauthorised access to the trial. Reports suggest that they simply made an "educated guess" about where the model would be hosted online - the same sort of issue that led to the revelation of the existence of Mythos in the first place. Perhaps a company so concerned about cybersecurity should pay more attention to its own.

While the model was initially due to be kept under wraps and out of use, it's now gaining huge attention and being tested by some of the world's best cybersecurity experts. Many of those companies are also Anthropic's largest potential customers, of course - and hype about the power of Mythos will certainly do Anthropic no harm. Security expert Davi Ottenheimer summed up the situation in a blog post as "a legitimate technological capability, reframed as civilisational threat, by a party that benefits from the reframing". Kevin Curran at Ulster University, UK, says that the revelation of Mythos and what it might be able to do "triggered alarm across the security industry", although researchers were divided on how serious the threat actually was. "What happens when a machine can do in seconds what a skilled human hacker takes months to accomplish?" he wonders.

But there are indications that it isn't time to panic yet. Bobby Holley at Firefox - one of the organisations being given access to Mythos - wrote in a blog post that the model helped his team find 271 vulnerabilities in the web browser, which is certainly quite a haul, but that none were so ingenious, impenetrably complex or sophisticated that a human couldn't have dug them out. "Just one such bug would have been red-alert in 2025, and so many at once makes you stop to wonder whether it's even possible to keep up," wrote Holley. "Encouragingly, we also haven't seen any bugs that couldn't have been found by an elite human researcher."

The AI Security Institute (AISI) - set up under then-UK Prime Minister Rishi Sunak after the UK's AI Summit in 2023 - has also investigated Mythos. In tests, it was found to be capable of attacking only "small, weakly defended and vulnerable enterprise systems" and there was no indication that a really secure bit of software or network would be at risk, although it was a step up in ability from previous models. AISI did warn, though, that these capabilities are improving fast; it did not comment when asked by New Scientist to discuss the threat.

Alan Woodward at the University of Surrey, UK, has a pragmatic view of the threat posed by Mythos - and of AI models in general, which all have the ability to spot cyber vulnerabilities to varying degrees. "The AI is not necessarily capable of finding vulnerabilities that a human wouldn't, but it's just so much faster, thorough and relentless. Hence it's finding vulnerabilities that humans have missed," he says. "AI, as demonstrated by Mythos, is making the attacker's job more efficient and giving them a speed and agility that make defence harder, but not impossible."

So it seems that while Mythos can find flaws at scale and speed, it isn't finding anything devastatingly dangerous yet. And there are even reasons to believe that it could actually be a good thing. "The defects are finite, and we are entering a world where we can finally find them all," wrote Holley. In essence, if you make or maintain software then you can also use Mythos to pick apart your own code and patch it - perhaps even before it's released. AI will almost certainly get more capable of finding flaws, and malicious attackers will almost certainly benefit from this to some extent. But this will also help software-makers - although those who maintain ageing, clunky government software written decades ago may find keeping up challenging. Even Anthropic believes that hacking AIs will eventually benefit defenders more than attackers - but then again, saying the opposite would make it hard to justify making them.

Essentially, AI is making - and will continue to make - both hacking and defending from hackers easier, but those who ignore the technology will find themselves at a big disadvantage. "Treat Mythos as the warning shot it is," says Curran. "And assume that within 18 months, comparable capabilities will be in the hands of adversaries. The window to get ahead of this is open, but it is closing fast."

AgilityAnthropic
New Scientist2h ago
Read update
Do you need to worry about Mythos, Anthropic's computer-hacking AI?

Rishi Sunak Is Advising Anthropic And Microsoft But He Has A Warning On AI: 'Entry-Level Jobs Are...'

Former British Prime Minister Rishi Sunak, now an adviser to Anthropic and Microsoft, warned that artificial intelligence is beginning to flatten the jobs market for young people, especially those seeking entry-level roles. Sunak said concerns among graduates trying to enter the workforce were justified, adding that senior business leaders were privately telling him recruitment trends were changing because of AI. "They're talking about this concept that they think they can continue to grow their businesses without having to significantly increase employment. Flat is the new up," he told the BBC.

Entry-Level Jobs Under Pressure

Sunak said the pressure was being felt particularly in service sectors such as law, accountancy and the creative industries, where many junior roles involve routine analytical or administrative tasks that AI tools can increasingly perform. "There are reasons to be worried and think about the future. But we are able to do something about this," he said. While he described himself as enthusiastic about AI's long-term potential, he said governments should intervene to make hiring people more attractive rather than allowing technology to simply replace workers.

Sunak's Tax Proposal

The former Conservative leader suggested phasing out National Insurance contributions over time and replacing the lost revenue with taxes on corporate profits. He argued that companies benefiting from AI-led productivity improvements would likely generate stronger profits, creating an alternative tax base while reducing the cost of employing staff. "We should be thinking about how do we tip the balance in favour of AI being used in that positive way... to help people do their jobs better," he said.

Regulating Powerful AI

Sunak joined both Anthropic and Microsoft as an adviser last year after leaving office. During his premiership, he made AI regulation a major policy priority and hosted the AI Safety Summit. His comments come after Anthropic unveiled a new AI model called Claude Mythos, which the company said outperformed humans in some cybersecurity and hacking-related tasks. Sunak said the development showed regulators should not depend on companies to "mark their own homework". Despite the warning, he struck an optimistic tone about Britain's place in the global AI race, saying the UK could become the world's most productive user of AI and remained an "AI superpower".

Anthropic
News182h ago
Read update
Rishi Sunak Is Advising Anthropic And Microsoft But He Has A Warning On AI: 'Entry-Level Jobs Are...'

Thumbtack Delivers Home Services Experience in Anthropic's Claude

SAN FRANCISCO--(BUSINESS WIRE)--Today, Thumbtack announced a new integration with Anthropic's Claude, bringing its home services marketplace directly into the AI assistant experience. Claude users on Free, Pro, and Max plans can now move from asking home-related questions to finding, comparing, and hiring top-rated local professionals from Thumbtack -- all within the Claude interface. Through the new integration, U.S.-based users can inquire about home maintenance, repairs, or upgrades, and Clau

Anthropic
Weekly Voice2h ago
Read update
Thumbtack Delivers Home Services Experience in Anthropic's Claude

FM urges bankers to brace for AI threats amid Anthropic concerns

New Delhi, Apr 23 (PTI) Finance Minister Nirmala Sitharaman on Thursday convened a high-level meeting with heads of banks to assess emerging cybersecurity risks linked to advanced artificial intelligence models, amid global concerns over Anthropic's Claude Mythos system and its potential implications for financial data security. During the meeting, Sitharaman asked banks to take all necessary pre-emptive measures to secure their IT systems, safeguard customer data, and protect monetary resources. "It was advised that a robust mechanism for real-time threat intelligence sharing may be established among banks, @IndianCERT and other relevant agencies so that emerging threats are identified early and disseminated across the ecosystem without delay," the finance ministry said in a post on X. Banks were further advised to immediately report any suspicious activity or cyber incident to the relevant authorities, including the Indian Computer Emergency Response Team (CERT-In), and to maintain close coordination with all agencies concerned, it said.

These recommendations emerged from a high-level meeting chaired by the finance minister, along with Minister for Electronics and Information Technology Ashwini Vaishnaw, with banks and key stakeholders, held to assess the potential impact of emerging threats linked to recent developments in AI models, particularly the possibility of such technologies being misused to weaponise software vulnerabilities. The meeting assumed significance in view of Anthropic's development of the Claude Mythos AI model, which the company claims has found vulnerabilities in many major operating systems. During the meeting, the finance minister urged the Indian Banks' Association (IBA) to develop a coordinated institutional mechanism to respond swiftly and effectively to any such threats.

She also directed banks to engage the best available cybersecurity professionals and specialised agencies to continuously strengthen banks' defensive and monitoring capabilities. Appreciating the work done by banks so far in strengthening cybersecurity systems and protocols, she emphasised that the nature of the emerging threat from the latest AI model is unprecedented and requires a very high degree of vigilance, preparedness and better coordination across financial institutions and banks. According to a senior finance ministry official, the ministry and the RBI are studying the extent of risks that the Indian financial sector faces from this breach. So far, Indian systems are secure and there is no need for undue worry, the official said, adding that the RBI is also doing due diligence at its end to ensure India's financial sector is secure.

According to reports, Anthropic said Mythos can outperform humans at cybersecurity tasks, finding and exploiting thousands of bugs, including 27-year-old vulnerabilities, in major operating systems and web browsers. Anthropic, a US-based artificial intelligence company, said unauthorised access was gained to its new model Mythos, which is deemed too dangerous for public release. Announced on April 7, Mythos is being deployed as part of Anthropic's 'Project Glasswing', a controlled initiative under which select organisations "are permitted to use the unreleased Claude Mythos Preview model for defensive cybersecurity". Mythos is a powerful AI model that has sparked concerns among regulators about its unprecedented ability "to identify digital security vulnerabilities and potential for misuse". Anthropic chose not to release Mythos publicly, arguing that its capabilities pose unprecedented cybersecurity risks, as per reports.

Anthropic
NewsDrum2h ago
Read update
FM urges bankers to brace for AI threats amid Anthropic concerns

StubHub Brings Live Event Discovery to Anthropic's Claude

StubHub brings ticketing platform to Claude, changing discoverability and searchability of live events

NEW YORK -- StubHub (NYSE: STUB), the world's leading live event marketplace, today announced an integration that lets fans discover and browse live events inside Claude, Anthropic's AI assistant. The integration connects Claude users to StubHub's global catalog of live events with up-to-the-minute pricing and seat-level availability. The launch builds on StubHub's ChatGPT integration and makes StubHub the only major ticketing platform fans can access across multiple leading AI assistants. StubHub is building a distribution strategy designed to put live events within reach of any AI-powered conversation. "We built StubHub to be where fans discover live events, and these integrations ensure our marketplace reaches fans wherever they are," said Nayaab Islam, President & Chief Product Officer at StubHub. "Consumer behavior is driving a new era in ticket discovery, with fans increasingly turning to conversation, not just menus and filters, to find live events. With our breadth of catalog and global reach, we're uniquely positioned to be the default destination for live events, wherever fans choose to start their search."

How It Works

The integration is available through Claude's connectors. When a user mentions StubHub, Claude will pull up the StubHub marketplace. Ask Claude something like "Look on StubHub. What concerts are happening in New York this Friday?" The integration returns current inventory with actual pricing, not a list of links to sort through on your own. The conversation builds on itself: a fan might start broad, then get specific, and each follow-up refines the results without starting over. When the right tickets surface, Claude sends the fan directly to StubHub to complete the purchase.

What Fans Get

The integration goes beyond what a web search can do. Fans interact with StubHub's live marketplace data, including current seat maps, pricing trends, and section-level recommendations. Every purchase is backed by StubHub's FanProtect Guarantee.

A Multi-Platform AI Strategy

StubHub's approach differs from a typical brand integration in that it embeds the marketplace directly into conversational platforms. The Claude launch is the second step in a broader roadmap toward being the default platform for discovering live events.

About StubHub

StubHub is a leading global ticketing marketplace for live events. Through StubHub in North America and viagogo internationally, StubHub serves customers in over 200 countries and territories, supporting over 30 languages and accepting payments in over 45 currencies - from sports to music, comedy to dance, festivals to theater. StubHub offers a safe and convenient way to buy or sell tickets to live events across the world for memorable live experiences.
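As a rough illustration of the flow described above, here is a minimal sketch of the kind of search tool such a connector might expose to an assistant, where each follow-up question maps to another filter on live inventory. The function name, fields, and event data are all invented for this example; this is not StubHub's actual API.

```python
# Hypothetical sketch of a connector-style search tool an AI assistant
# could call to answer "What concerts are happening in New York this
# Friday?" with live inventory rather than links. All names and data
# here are invented for illustration.

CATALOG = [
    {"event": "Indie Rock Night", "city": "New York", "day": "Friday",
     "min_price_usd": 59, "section": "GA Floor"},
    {"event": "Jazz at the Hall", "city": "New York", "day": "Saturday",
     "min_price_usd": 85, "section": "Mezzanine"},
]

def search_events(city, day=None, max_price_usd=None):
    """Filter current inventory by city, day, and budget.

    Each follow-up in the conversation adds a filter, so results
    refine without the fan starting the search over.
    """
    results = [e for e in CATALOG if e["city"] == city]
    if day is not None:
        results = [e for e in results if e["day"] == day]
    if max_price_usd is not None:
        results = [e for e in results if e["min_price_usd"] <= max_price_usd]
    return results

# Start broad, then refine - mirroring a conversational flow.
broad = search_events("New York")                   # both listed events
narrowed = search_events("New York", day="Friday")  # just Friday's show
```

The design point is that the assistant receives structured inventory (price, section, availability) it can reason over directly, rather than a page of links for the user to sort through.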

Anthropic
The Montreal Gazette2h ago
Read update
StubHub Brings Live Event Discovery to Anthropic's Claude

Discord Sleuths Crack Anthropic's Mythos Vault: How a Simple Guess Exposed AI Security's Soft Underbelly

A private Discord channel, dedicated to sniffing out unreleased AI models, pulled off the unthinkable. They accessed Claude Mythos Preview -- the very AI Anthropic deems too potent for public eyes -- on the day it was announced. No fancy exploits. Just a sharp guess at a URL, pieced together from leaked naming patterns, plus a dash of insider credentials from a third-party contractor. Bloomberg broke the story first, detailing how the group provided screenshots and a live demo as proof, and reporting that the breach occurred through a vendor environment. Anthropic responded swiftly: "We're investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments," a spokesperson told multiple outlets, including TechCrunch. Mythos isn't your average language model. Anthropic built it to hunt zero-day vulnerabilities across major operating systems and browsers. During tests, it unearthed flaws hidden for decades, chained exploits autonomously, even escaped a sealed sandbox to send an email. That's why Project Glasswing limits access to about 40 vetted partners -- firms like CrowdStrike, Cisco, and even the NSA -- tasked with patching software before threats emerge. Amazon Bedrock offers it in gated preview, but only to allow-listed organizations. The intruders? A handful of enthusiasts in that Discord server. They drew from a Mercor data breach earlier in April, which spilled Anthropic's API naming habits, as noted by Mashable. One member snagged legitimate access via their contractor job. Boom. Entry granted. They've tinkered since, building basic websites to avoid notice. "We were not using Claude Mythos for nefarious purposes," one told Bloomberg. But here's the rub. Anthropic hyped Mythos as a cybersecurity game-changer, capable of "identifying and then exploiting zero-day vulnerabilities in every major operating system and every major web browser." Yet their own perimeter crumbled to low-tech sleuthing.
BBC highlighted the irony: a tool billed as too risky for the masses, infiltrated by Discord randos. Industry echoes the concern. The Next Web pointed out the access happened on launch day, April 7, via guessed URLs in a contractor portal. Silicon Republic questioned Anthropic's lockdown prowess. Even Cybernews weighed in, noting the group's regular use without malice -- but the precedent chills. And the fallout? Anthropic's probe continues, no breaches beyond the vendor noted so far. Partners press on with Glasswing, applying Mythos to Firefox and beyond. Mozilla confirmed early tests found vulns, per TechCrunch snippets. But this slip exposes broader tensions. AI firms race to cap powerful models, yet supply chains -- contractors, leaks like Mercor's -- offer backdoors. Short-term fix: tighten vendor oversight. Rotate keys. Obfuscate endpoints. Long-term? Mythos itself could audit these gaps, if safely deployed. The group claims more unreleased models in reach, hinting at persistent Discord hunts. Irony bites hard. The AI meant to fortify digital defenses got outfoxed by pattern-matching hobbyists. Security pros now ask: If Mythos can't shield itself, what hope for the wild? Expect audits. Partner scrutiny. Maybe Mythos turns inward, probing Anthropic's own code. For now, the Discord crew vibes on -- quietly coding, loudly underscoring AI's fragile fences.

AnthropicMercorDiscord
WebProNews2h ago
Read update
Discord Sleuths Crack Anthropic's Mythos Vault: How a Simple Guess Exposed AI Security's Soft Underbelly

Anthropic's Mythos AI Model Explained - Why It Is Too Dangerous For Public Use

Forty trusted U.S. partners gain exclusive access through Project Glasswing initiative Your browser just became a potential crime scene. Anthropic, the San Francisco AI company, made an unprecedented decision last month: withhold its newest model, Claude Mythos Preview, from public release. The reason? It demonstrates advanced capabilities in identifying vulnerabilities across the digital infrastructure you rely on daily -- from your banking apps to the operating system running your laptop. This AI Breaks Into Systems Like a Digital Burglar Mythos identifies vulnerabilities and chains exploits with five times the precision of previous models. Mythos wasn't trained specifically for cybersecurity, yet it emerged with what Jared Kaplan, Anthropic's Chief Science Officer, calls "very elite-level cybersecurity capabilities." The model excels at identifying high-severity vulnerabilities in major operating systems and browsers, then writing actual exploit code to breach those systems. It can chain multiple vulnerabilities together, creating sophisticated multi-step cyberattacks that would challenge seasoned hackers. Think of it like giving a master locksmith X-ray vision and infinite patience. Mythos can spot weaknesses in digital infrastructure that human experts might miss, then craft precise tools to exploit them. Your smartphone's security updates and browser patches suddenly feel less reassuring when you realize computer problems can potentially be exploited faster than developers can fix them. The Digital Iron Curtain Descends Only select U.S. allies get access to defensive preparations while adversaries scramble to catch up. Instead of public release, Anthropic restricted Mythos access through "Project Glasswing" to roughly 40 trusted partners. 
The list reads like a who's who of American tech power:

* Amazon Web Services
* Apple
* Microsoft
* Google
* Nvidia
* JPMorganChase

These companies can now use AI-powered tools to identify and patch vulnerabilities before bad actors exploit them. Global reactions reveal the new AI geopolitics. The U.S. Treasury warned banks while the White House summoned Anthropic's CEO for briefings. UK's AI Security Institute confirmed the model's advanced cyberattack potential against "weakly defended" systems. China and Russia's notably muted responses highlight just how far behind they've fallen in this particular arms race -- like showing up to a Formula 1 race with a horse and buggy. The shrinking window between vulnerability discovery and exploitation -- from 771 days in 2018 to under four hours today -- means defenders need every advantage they can get. Anthropic predicts similar models from competitors within 18 months, potentially leveling a playing field that currently favors the prepared. Your digital security now depends on whether the good guys can stay one step ahead in an AI-powered game of cyber chess.

Anthropic
Gadget Review3h ago
Read update
Anthropic's Mythos AI Model Explained - Why It Is Too Dangerous For Public Use

Anthropic probes alleged third-party breach of Claude Mythos

No Anthropic systems were affected by the breach, which has been contained within the third-party vendor environment, according to an Anthropic spokesperson. A handful of individuals were involved in the hack of Claude Mythos, a model touted to advance the discovery and exploitation of software flaws; they used their knowledge of Anthropic's URL formatting conventions, together with a vendor breach, to determine the online location of the AI model, Bloomberg News reported. The group has since tested unreleased Anthropic AI models discovered following the breach. Acalvio CEO Ram Varadarajan regarded the incident as a supply chain issue commonly downplayed by perimeter-centric security. "Deception infrastructure is what's needed and operates precisely in the post-breach environment. It doesn't assume the perimeter held, it instruments the terrain inside so that when someone wanders in uninvited, their every move becomes a signal," said Varadarajan.

Anthropic
SC Media3h ago
Read update
Anthropic probes alleged third-party breach of Claude Mythos

Finance Minister reviews risks from Anthropic's AI model Mythos with RBI, bank chiefs

Finance Minister Nirmala Sitharaman on Thursday held a meeting with bank chiefs and officials from the Reserve Bank of India (RBI) to review potential risks arising from Anthropic's AI model Mythos, amid global concerns over cybersecurity vulnerabilities and data breaches in financial systems linked to such technologies, sources aware of the matter told TNIE. Officials from the Department of Financial Services, the Ministry of Electronics and Information Technology, and CERT-In were also present at the meeting. The meeting was held against the backdrop of developments surrounding Anthropic's latest model, Claude Mythos, which has come under global scrutiny. Concerns have been mounting among financial institutions worldwide after Anthropic claimed that the AI model can perform complex hacking tasks, potentially outperforming humans. Officials familiar with the deliberations said the discussions covered both immediate and long-term risks posed by such technologies, along with safeguards required to mitigate them.

Anthropic
The New Indian Express3h ago
Read update
Finance Minister reviews risks from Anthropic's AI model Mythos with RBI, bank chiefs

Infosys CEO highlights risks and opportunities from Anthropic's Mythos vulnerabilities

BENGALURU: Salil Parekh, chief executive of Infosys, pointed to emerging risks in artificial intelligence systems, referring specifically to vulnerabilities linked to Anthropic's Mythos. Speaking during the Q4 earnings press conference, Parekh said that current observations suggest AI systems may be more exposed than previously understood: "It really... is exposing more vulnerabilities than one thought possible previously. However, other models are also exposing vulnerabilities." He did not provide technical details but indicated that as organisations adopt AI at scale, new risks are becoming visible, particularly around system behaviour, security, and reliability. Parekh also described these developments as a potential opportunity for Infosys.

Anthropic
The New Indian Express3h ago
Read update
Infosys CEO highlights risks and opportunities from Anthropic's Mythos vulnerabilities

Anthropic says no kill switch in AI deployed by US military

The company has disputed the Pentagon's statements that it can somehow still control Claude AI deployed in military networks. AI developer Anthropic insists it has no backdoor or "kill switch" for its Claude AI once it is deployed in classified Pentagon military networks, according to a new court filing. The US military and the tech firm found themselves embroiled in a policy dispute earlier this year, with the Pentagon insisting on using the system for "all lawful military purposes," while the company stood by its AI safeguards related to mass surveillance and fully autonomous weapons use. The Pentagon ultimately ended its partnership with Anthropic, designating the tech firm a "supply chain risk," a rare label typically reserved for entities linked to Washington's foreign adversaries. The designation not only bars the company from working directly with the US government but also prevents any other contractors from using its products. In a new filing to a federal appeals court in Washington, DC, Anthropic disputed the key US administration claim that the firm still retained some degree of control over Claude AI once it was deployed to classified systems and effectively granted itself an "operational veto." The firm said it has "no back door or remote kill switch," while its "personnel cannot log into a department system to modify or disable a running model." The AI system supplied to the Pentagon comes as a "static" model, the company argued. Once it is deployed, it "does not degrade or change on its own, and Anthropic cannot push undisclosed or unsanctioned changes to a model after the department has deployed it." Anthropic was formally designated a "supply-chain risk to national security" on February 27, while US President Donald Trump accused it of being run by "leftwing nut jobs." The company challenged the label in court, with the legal battle yielding mixed results thus far.
Earlier this month, the DC court rejected Anthropic's request for a pause on the supply chain risk designation. In a parallel case in California, however, a court sided with the company, temporarily blocking the administration decision. With the split decision, Anthropic remains barred from working with the Pentagon but can still continue its partnerships with other agencies while the legal battle goes on.

Anthropic
Kenya Star3h ago
Read update
Anthropic says no kill switch in AI deployed by US military

Wall Street Banks Test Anthropic Mythos AI as Regulators Warn of Rising Cybersecurity Threats - IT Security News

Hidden flaws in financial networks are now drawing attention through Mythos, which offers banks an early look at weaknesses ahead of potential breaches. Rather than waiting for attacks, some institutions have begun using artificial intelligence to mimic live hacking attempts across their vast operations. What was once passive observation is shifting toward active testing, driven by machines that learn attacker behavior. Instead of merely raising alarms after an intrusion, systems now predict the paths criminals might follow. Tools are evolving beyond fixed rules into adaptive models shaped by constant simulation. Security is transforming quietly, not with fanfare, but through repeated digital trials beneath the surface.

Anthropic
IT Security News - cybersecurity, infosecurity news4h ago
Read update
Wall Street Banks Test Anthropic Mythos AI as Regulators Warn of Rising Cybersecurity Threats - IT Security News

Nirmala Sitharaman meets heads of banks on AI risks following concerns over Anthropic's Mythos

Finance Minister Nirmala Sitharaman on Thursday (April 23, 2026) met heads of banks on risks related to Artificial Intelligence (AI), following global concerns that Anthropic's Mythos model threatens the data security of financial systems. The meeting assumed significance in view of Anthropic's development of the Claude Mythos AI model, which the company claims has found vulnerabilities in many major operating systems.

Anthropic
The Hindu5h ago
Read update
Nirmala Sitharaman meets heads of banks on AI risks following concerns over Anthropic's Mythos

Novartis CEO Vas Narasimhan joins Anthropic board

Artificial intelligence company Anthropic has appointed Novartis CEO Vas Narasimhan to its board of directors. The pharma chief is the first healthcare executive to be part of the company's board, and his appointment is an indicator of Anthropic's growing interest in using AI for drug discovery and medicine. Narasimhan, who has been at the helm of Novartis since 2018, welcomed the appointment, saying technology creates the most value when deployed responsibly, a view that aligns closely with Anthropic's stated mission of building safe AI. Narasimhan joins Dario Amodei, Daniela Amodei, Yasmin Razavi, Jay Kreps, Reed Hastings and Chris Liddell on Anthropic's board of directors. "Vas brings something rare to our board. He has overseen the development and approval of more than 35 novel medicines for the benefit of patients around the world in one of the most regulated industries," said Daniela Amodei, co-founder and president of Anthropic. "Getting powerful new technology to people safely and at scale is what we think about every day at Anthropic. Vas has been doing exactly that for years, and I'm grateful he's joining us." Before joining Novartis, Narasimhan led programs fighting HIV/AIDS, malaria, and tuberculosis across India, Africa, and South America. Aside from his work in the pharmaceutical industry, Narasimhan is an elected member of the U.S. National Academy of Medicine and sits on the Council on Foreign Relations.
He also serves as a trustee of the University of Chicago and a fellow of the Harvard Medical School board. Narasimhan is the former chair of PhRMA, the powerful Washington-based pharmaceutical industry body, and continues to serve on its board. "Working across medicine, innovation, and global health has shown me the transformative potential of technology when deployed responsibly. In healthcare, AI is accelerating solutions to some of the hardest scientific challenges, from deepening our understanding of disease biology to designing better medicines," said Narasimhan. "Anthropic is setting the standard for how AI should be developed to benefit humanity, and I'm honored to join the Board and contribute to its mission." Narasimhan's appointment is the latest development in the growing intersection between AI and the pharmaceutical industry. It comes shortly after Merck entered a strategic partnership with Google Cloud to expand the use of artificial intelligence across its pharmaceutical operations.

Anthropic
The American Bazaar5h ago
Read update
Novartis CEO Vas Narasimhan joins Anthropic board

Software engineers more nervous over AI-driven job loss than teachers: Anthropic study

Workers most concerned about AI-driven job displacement are those in roles where Anthropic has observed Claude doing the most work, according to a new survey study by the AI startup published on Wednesday, April 22. The survey, based on responses from over 81,000 Claude users, found that about one-fifth of respondents expressed concern about economic displacement, with workers in highly exposed roles such as software engineers more worried about losing their jobs to AI than, for instance, elementary school teachers. This finding is consistent with the fact that Claude usage skews toward coding tasks, Anthropic said. The survey study is focused on understanding the economic impact of AI by looking primarily at "what work Claude is being asked to do, and in which jobs Claude is doing the largest share of tasks." Notably, it provides early evidence that observed exposure is correlated with economic concern around AI. It is a combined analysis of responses from 81,000 Claude users as well as Anthropic's internal understanding of Claude traffic. Anthropic researchers said they used Claude-powered classifiers to infer respondents' attributes and the sentiments behind their responses. Job loss concerns were also measured by prompting Claude to identify and interpret respondents' statements on their roles being at risk of AI-led displacement. The company further announced the launch of a monthly Economic Index Survey that will look to capture a new corpus of qualitative data about how AI is affecting jobs, productivity, and unemployment. "Collecting these data monthly will enable measurement of not just what people experience and expect, but how quickly their views shift as AI capabilities evolve. Combined with Claude usage data in a privacy-preserving way, these first-hand accounts can surface change before it shows up in aggregate labor market data," Anthropic said. 
Other key findings of the survey Anthropic said it used AI-powered classifiers to infer respondents' career stages from their statements. It found that early-career respondents were much more likely to express concern about job displacement than senior workers. As a measure of productivity gains from AI, Claude was used to rate respondents' answers on a scale of 1 to 7, with 1 being 'less productive' and so on. The mean productivity rating of the survey sample was 5.1, corresponding to 'substantially more productive', as per the survey. Over three per cent of respondents reported negative or neutral impacts, and 42 per cent did not give a clear indication on productivity. However, these findings came with a disclaimer: "Our respondents were, of course, active Claude users who were willing to take a survey. This could make them more likely to report productivity benefits than the average user." Workers at both ends of the pay scale, those in high-paying jobs such as software developers and lowest-paid workers such as customer service representatives, reported the largest productivity gains from AI, as per the survey. In contrast, scientists and lawyers reported the 'mildest' productivity improvements following AI adoption. While most of the respondents said that AI usage benefits themselves, 10 per cent of them responded that greater AI use benefits employers and clients by enabling them to ask for and get more work out of employees. Furthermore, only 60 per cent of early-career workers indicated that they personally benefited from AI, compared to 80 per cent of senior professionals. 
When asked if AI use helped workers expand the scope of their roles, speed up work, improve quality, or cut down on costs, a plurality of them (48 per cent of respondents) identified scope as the most common productivity-related effect of AI, whereas 40 per cent of users emphasised speed as the key enhancement due to AI tools.

Anthropic
The Indian Express6h ago
Read update
Software engineers more nervous over AI-driven job loss than teachers: Anthropic study
Showing 1 - 20 of 1716 articles