News & Updates

The latest news and updates from companies in the WLTH portfolio.

"2026 Just Got Crazy": Internet Erupts After Anthropic Source Code Leak Shakes AI Industry

The leak of Claude Code's source code from Anthropic has sparked intense and varied reactions across the tech ecosystem. What makes it particularly notable is that Anthropic has built a reputation around strong security practices and strict controls, yet the leak stemmed from a basic packaging oversight that security researchers say should never occur in a finished software product.

Developers, meanwhile, have reacted with enthusiasm, sharing and analysing the code across forums and repositories and calling it a valuable learning resource rather than a crisis. Cybersecurity professionals, however, have criticised the lapse, saying that even leading AI firms may be lagging in operational security and raising concerns about future risks as AI systems become more autonomous. The leak is seen as a blow to Anthropic's operational reputation, especially as it reportedly prepares for a $380 billion IPO.

On the internet, the leak has triggered sharp reactions, with many users criticising and mocking Anthropic's operational security practices and pointing out the obvious irony. Shakthi Vadakkepat, an Enterprise AI Architect, called the lapse "the mothership of all code leaks," noting that it stemmed from something as basic as shipping a map file within an npm package. "The big deal is that Anthropic is a company that prides itself on the level of security and controls they have in place, and then they ship a map file in their npm. The other thing is that they'll have a tough time suing the guy who created the repo on GitHub because he has essentially ported the code to Python, hence making the DMCA inapplicable here. And the logical argument would be that nothing was 'hacked' per se; Anthropic essentially shipped the map file themselves," he wrote on X.
To make the technical lapse easier to understand, another user compared it to a homeowner investing heavily in security, locking doors, installing surveillance systems, and hiring guards, only to accidentally publish the detailed layout of the house online for anyone to access. "This is the same company that told Congress AI is an existential threat... the same company that spent $8 billion building 'the most safety-focused lab on earth'... the same company the Pentagon blacklisted as a 'supply chain risk' because they were supposedly TOO principled... and they got exposed by a config file that any mid-level engineer would've caught in a code review," the user added.

"The company telling the world how dangerous AI is... couldn't protect its own code from a rookie mistake. These are the people advising governments on regulation. Testifying about existential risk. Asking to be the ones trusted with the most powerful technology ever built. And they just shipped their own blueprints to the public by accident," another user commented.

Security researcher Chaofan Shou discovered the leak on March 31, when he found that Claude Code's entire source code had been exposed via a 60MB source-map file (cli.js.map) in its npm package. This file allowed anyone to reconstruct the full TypeScript codebase, essentially exposing the underlying architecture of Claude Code. The exposed code includes the CLI implementation, agent architecture, unreleased features, and internal tooling, but not the model weights or user data. Anthropic confirmed the leak was due to human error and not a security breach.
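The mechanism behind the leak is worth spelling out. A JavaScript source map is a JSON file that, when generated with the `sourcesContent` option, embeds the original pre-compilation sources verbatim; shipping one alongside a bundled `cli.js` hands readers the TypeScript back. A minimal sketch of how anyone could recover files from such a map (the file names and contents below are invented for illustration, not from the actual leak):

```python
import json

def extract_sources(source_map_text):
    """Recover original source files from a JavaScript source map.

    Maps built with `sourcesContent` carry the full original sources
    alongside the compiled output, so anyone holding the .map file can
    reconstruct the pre-bundled codebase.
    """
    source_map = json.loads(source_map_text)
    sources = source_map.get("sources", [])
    contents = source_map.get("sourcesContent") or []
    # Pair each source path with its embedded original text, if present.
    return {path: text for path, text in zip(sources, contents) if text is not None}

# Hypothetical example: a tiny map in the same shape as the leaked cli.js.map.
example_map = json.dumps({
    "version": 3,
    "file": "cli.js",
    "sources": ["src/agent.ts", "src/tools.ts"],
    "sourcesContent": ["export const agent = 1;", "export const tools = 2;"],
    "mappings": "AAAA",
})

recovered = extract_sources(example_map)
print(len(recovered))            # number of recoverable source files
print(recovered["src/agent.ts"])  # the original TypeScript, verbatim
```

This is why the standard guidance is either to disable `sourcesContent` for published packages or to exclude `.map` files from the npm publish list entirely.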

Anthropic
NDTV · 27d ago

Anthropic's Amodei seeks copyright deal that 'works for everyone'

Anthropic chief executive Dario Amodei says his rare visit to Australia is not about convincing the Albanese government to give artificial intelligence labs unpaid access to copyrighted material, but about finding a path forward that benefits both rights holders and AI companies. On Wednesday, Amodei met Prime Minister Anthony Albanese and Industry Minister Tim Ayres to sign a memorandum of understanding between Anthropic and the Australian government.

Anthropic
Australian Financial Review · 27d ago

'Looks like human error': Anthropic accidentally releases 3,000 files of its own AI coding tool's source code

Anthropic has come under fresh scrutiny after a major code leak revealed internal details of its AI tool, Claude Code. Just days earlier, Fortune reported that the company had mistakenly exposed around 3,000 files, including a draft post about an upcoming model. Known internally as "Mythos" or "Capybara," the model was said to be highly advanced and potentially risky from a cybersecurity standpoint.

The latest incident exposed nearly 500,000 lines of code across 1,900 files. Anthropic confirmed that "some internal source code" became public during a "Claude Code release," adding that no sensitive data was involved and the issue was due to human error.

Even though the core AI model was not leaked, experts say the exposed code could still reveal important internal workings. A cybersecurity expert told Fortune that developers may now be able to extract useful insights from the codebase. Used by large companies, Claude Code depends not only on AI models but also on a system that guides how the AI works. This "harness" controls the AI's behaviour and connects it to tools, and it is this system that has been leaked. Experts warn the leak could allow rivals to study the system and create competing tools, and it may also lead to open-source versions based on the exposed code.

According to researcher Roy Paz, the leak also suggests that Anthropic is working on a more advanced model, likely stronger than its current offerings such as Opus, Sonnet and Haiku. According to reports, the leak happened when incorrect files were uploaded to npm.
"A single misconfiguration or misclick suddenly exposed the full source code," said Paz. He also warned that such leaks could reveal internal systems and APIs, making it easier for attackers to understand and exploit the technology. This is not the first such incident: in February 2025, Anthropic accidentally exposed Claude Code's source code in a similar error, raising concerns about its safeguards.

The incident has caught the internet's attention, with users sharing strong reactions. One user said, "Was it purposeful to leave the 3000 documents buried where someone would find them?" Another wrote, "The 'just sitting open' part hits different when you realise most breaches aren't sophisticated attacks, they're someone forgetting to flip a private toggle."

Anthropic
MoneyControl · 27d ago

Ban on Russian grain imports causes chaos on Kazakhstan market

The Kazakhstan grain market has found itself in a zone of turbulence: sudden import restrictions, subsequent adjustments to those decisions, and an unstable macroeconomic situation have driven prices up and heightened uncertainty among industry participants, according to Evgeny Karabanov, head of the analytics committee of the Grain Union of Kazakhstan.

CHAOS
newsline.kz · 27d ago

OpenAI joins Anthropic in courting private equity firms for enterprise AI venture

OpenAI is in advanced talks with private equity firms including TPG, Advent International, Bain Capital and Brookfield Asset Management to form a joint venture that would distribute its enterprise products across the firms' portfolio companies and beyond, four people familiar with the matter said. The proposed deal has a pre-money valuation of about $10 billion, two of the people said, and could give OpenAI a faster route into corporate adoption while providing the PE firms with a potential lifeline for companies in their portfolios that are exposed to AI disruption.

Both OpenAI and Anthropic are aggressively courting private equity firms because they control enterprise companies and influence how businesses budget for software and AI, three of the people said -- a race growing more urgent as both companies vie to go public as soon as this year. OpenAI declined to comment on the joint venture plans. Advent, TPG and Brookfield declined to comment. Bain did not respond to requests for comment.

Under the proposed arrangement, the private equity investors would commit about $4 billion and receive equity stakes in the venture, along with influence over how OpenAI's technology is deployed across their portfolio companies, two of the people said. TPG would serve as the anchor investor, committing the most capital, while Advent, Bain, and Brookfield would participate as co-founding investors. All four firms would secure board seats in the joint venture, according to people familiar with the matter, cautioning that no final decision has been taken and the plans are subject to change.

The arrangement would also give the PE firms early access to OpenAI's enterprise tools and the potential to benefit when adoption expands beyond their portfolios, two people familiar with the talks said. Sources requested anonymity because the discussions are private.
Anthropic is also in discussions with private equity firms, including Blackstone, Permira, and Hellman & Friedman, to form a joint venture that would sell its Claude AI technology to companies backed by those firms, according to one of the people familiar with the matter. As part of the deal, the PE firms would take an equity stake of approximately $1 billion, the person said, cautioning that the plans -- including the figures -- are subject to change and no final agreement has been reached. The Information first reported last week that the Claude maker has been in discussions with Blackstone and Hellman & Friedman to form a joint venture. Blackstone, Hellman & Friedman, and Permira declined to comment, while Anthropic did not respond to a Reuters request for comment.

OpenAI is offering "preferred equity" in the venture -- a senior class of ownership that gives investors priority returns over common shareholders and limits their downside, three of the people said. In contrast, Anthropic is offering common equity, which does not come with those protections, one of the people said.

The potential deals come as AI upends the calculus of private equity investing. The rapid advance of AI has rattled valuations across the software sector, made it harder for buyout firms to underwrite deals with confidence, and raised uncomfortable questions about the long-term viability of business models that automation could render obsolete. In the enterprise AI market, Anthropic is widely seen as ahead of OpenAI, with stronger adoption among corporate clients. As of the end of last month, OpenAI's enterprise business generated $10 billion out of a total annualized revenue of $25 billion, one of the people said. The deal could also help distribute OpenAI's enterprise offering, Frontier, one of the people said.
Launched last month, the platform anchors a program called Frontier Alliances -- through which OpenAI pairs its forward-deployed engineers with consulting giants BCG, McKinsey, Accenture and Capgemini to help companies integrate AI agents into core business processes, Reuters reported last month. "As demand for AI continues to skyrocket, we want to help our customers deploy these technologies in all the ways that help them create impact," Fidji Simo, CEO of Applications at OpenAI, said in an emailed statement to Reuters. "That's why we recently announced Frontier Alliances to leverage our ecosystem of partners, and that's why we're also building a deployment arm that works directly with enterprises and partners to deeply embed AI throughout their organizations. We'll have more to share when details are finalized," Simo said.

Anthropic
VCCircle · 27d ago

Mercor AI Confirms Data Breach Following Lapsus$ Claims of 4TB Data Theft

Mercor AI has officially confirmed a severe data breach following claims by the notorious Lapsus$ hacking group that it stole 4 terabytes of sensitive company data. The incident, stemming from a recent supply chain attack on the open-source LiteLLM project, has exposed proprietary source code, internal databases, and massive amounts of user-verification data.

Lapsus$ has listed Mercor's platform data for a live auction on the dark web, prompting interested buyers to "make an offer". The threat actors claim to have exfiltrated the entire 4-terabyte dataset by breaching the company's Tailscale VPN. The stolen cache, described in extensive detail, reportedly includes 939GB of platform source code, a 211GB user database, and 3TB of storage buckets containing video interviews and identity-verification passports.

In response to the extortion attempts, Mercor AI released a public statement emphasizing that the privacy and security of its customers and contractors remain its foundational priority. The company clarified that the breach was the direct result of a widespread supply chain attack involving the open-source routing library LiteLLM. Mercor's security team promptly contained the incident and is conducting a comprehensive investigation alongside leading third-party forensics experts.

The root cause of the breach traces back to late March 2026, when a threat actor known as TeamPCP compromised the PyPI publishing credentials for the LiteLLM library. TeamPCP injected a three-stage malicious backdoor into versions 1.82.7 and 1.82.8, designed to harvest credentials and establish persistent system access. Because LiteLLM is widely integrated into AI applications, the malware executed immediately upon installation and impacted thousands of unsuspecting organizations.
Founded in 2023, Mercor operates a highly successful AI recruitment platform that claims over $500 million in revenue and connects specialized domain experts with major AI firms like OpenAI and Anthropic. The startup facilitates over $2 million in daily payouts and now faces significant operational risks due to the exposure of its contractors' personal information. The leak of internal AI source code and sensitive KYC materials poses severe security implications for both the $10 billion platform and its extensive user base. Lapsus$ is a well-known cybercrime syndicate with a history of targeting high-profile technology companies using aggressive extortion tactics. The group frequently uses public data leaks and dark web auctions to pressure victims into paying ransoms after initial private negotiations fail. Their involvement in the Mercor AI breach highlights a continuing trend of threat actors exploiting upstream supply chain vulnerabilities to access massive downstream corporate datasets.
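Compromises of this kind are usually caught downstream by auditing pinned dependencies against advisories. A minimal sketch of scanning a requirements file for the two compromised LiteLLM releases named in the report (the simple `name==version` lockfile format and the helper function are illustrative assumptions, not Mercor's tooling):

```python
# Versions come from the report above; everything else is illustrative.
COMPROMISED = {"litellm": {"1.82.7", "1.82.8"}}

def flag_compromised(requirements_text):
    """Return (name, version) pairs pinned to a known-bad release."""
    hits = []
    for line in requirements_text.splitlines():
        line = line.strip()
        # Skip blanks, comments, and unpinned specifiers.
        if not line or line.startswith("#") or "==" not in line:
            continue
        name, _, version = line.partition("==")
        if version.strip() in COMPROMISED.get(name.strip().lower(), set()):
            hits.append((name.strip(), version.strip()))
    return hits

reqs = """\
requests==2.31.0
litellm==1.82.7
"""
print(flag_compromised(reqs))  # flags the backdoored pin
```

In practice this role is filled by tools such as `pip-audit`, which query vulnerability databases rather than a hand-maintained list.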

Mercor, Anthropic
Cyber Security News · 27d ago

OpenAI sweetens private equity pitch amid enterprise turf war with Anthropic

ChatGPT maker OpenAI is offering private-equity firms a sweeter deal than rival Anthropic as both artificial intelligence companies court buyout firms to form joint ventures aimed at raising fresh capital and accelerating adoption of enterprise AI products, according to people familiar with the talks. OpenAI is offering private-equity firms a guaranteed minimum return of 17.5%, significantly higher than typical preferred instruments, two people familiar said. It is also offering early access to its newest AI models as it seeks to enlist investors like TPG and Advent for its joint venture, three sources said. The company has recently doubled down on enterprise, an area where Anthropic has historically been stronger. By comparison, Anthropic's enterprise-focused private-equity deal offered no such returns, the sources added.

OpenAI and Anthropic are competing for partnerships with buyout firms that would allow them to quickly roll out their AI tools to potentially hundreds of private, established companies owned by buyout firms. This would boost adoption of their models and encourage customer stickiness at scale. The two companies are battling for more lucrative business customers to use AI as they race to position themselves for potential public listings as early as this year.

The joint venture structure could absorb high upfront costs associated with deploying engineers to customize models for clients, easing cost pressures on OpenAI and Anthropic ahead of going public, and providing clearer segment reporting that can support the IPO narrative, two of the people familiar with the discussions said. OpenAI and Anthropic are racing to snap up similar types of partnerships with PE firms, a strategy that is new to the AI sector.
"There's a big race to lock in as much enterprise, as many desks as possible," said Matt Kropp at Boston Consulting Group's AI unit, adding that once a company has a customized AI model integrated into its systems, it becomes much harder to switch to a competitor. "I can see that there's a huge amount of scalability there." OpenAI, TPG and Advent declined to comment. Anthropic did not respond to a request for comment.

NOT FOR EVERYONE

At least two private-equity firms decided not to participate in either of the two joint ventures, citing concerns about the economics, flexibility and profit profile of the partnerships, two people said. Thoma Bravo, one of the world's largest software-focused buyout firms, decided not to participate after internal discussions led by managing partner Orlando Bravo, a person familiar with the decision said. Bravo raised questions about the long-term profit profile of joint ventures with OpenAI and Anthropic, adding that many of its portfolio companies are already deploying AI tools, the person said. Thoma Bravo declined to comment.

Some private-equity investors questioned the partnerships, arguing that large private-equity firms already have direct access to OpenAI and Anthropic without committing capital. These people said the partnerships also reflect pressure on buyout firms from their own investors to demonstrate a clearer strategy around AI. They noted that with technology valuations down, such joint ventures may not materially change access to AI tools or generate additional revenue. Any meaningful upside, they added, would likely depend on securing board seats, equity stakes or other economic terms -- only available to lead partners. Other private-equity firms are in talks with OpenAI and Anthropic about participating in the joint ventures, though many are expected to take smaller stakes without board seats or lead roles, four of the people said.
SWEETENERS

The investment also includes seniority over other joint venture partners and downside protection, the sources said, with more private-equity firms in discussions to invest smaller amounts in the joint venture. Reuters previously reported that OpenAI is in advanced talks with firms including TPG, Bain Capital, Advent International and Brookfield Asset Management to raise about $4 billion at a pre-money valuation of roughly $10 billion. Anthropic, which has gained traction among businesses, is pursuing a similar strategy and has been courting private equity firms including Blackstone, Hellman & Friedman and Permira for its own enterprise-focused venture, Reuters previously reported.

Anthropic
VCCircle · 27d ago

New Fee Structure Takes Effect, PolyMarket's Daily Revenue Rises to 7th Among Crypto Protocols

On April 1st, data from DefiLlama showed that PolyMarket -- a prediction market -- ranked 7th among crypto protocols in daily revenue, totaling $550,000. As previously reported by BlockBeats, PolyMarket began charging taker fees for the first time on March 30th across nearly all trading categories. The new fee structure uses variable rates: crypto-related contracts have a peak rate of up to 1.8%, with actual fees adjusting dynamically based on share prices and market conditions. Sports, finance, politics, culture, weather, and general categories carry lower tiered fees, while certain other categories, including peak fees for some economic forecasts, are higher at around 1.5%.
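To make "fees adjusting dynamically based on share prices" concrete, here is a hypothetical taker-fee curve: it takes the 1.8% peak rate from the report and scales it with min(p, 1 - p), so fees peak at 50c and shrink toward zero near 0 or 1. The curve shape is an illustrative assumption, not PolyMarket's published formula:

```python
def taker_fee(shares, price, peak_rate=0.018):
    """Hypothetical price-dependent taker fee for a prediction market.

    peak_rate is the 1.8% crypto-category ceiling cited above; the
    min(p, 1 - p) scaling is an assumption for illustration only.
    """
    if not 0.0 < price < 1.0:
        raise ValueError("share price must be strictly between 0 and 1")
    # Effective rate hits peak_rate at p = 0.5 and falls off toward the extremes.
    effective_rate = peak_rate * min(price, 1.0 - price) / 0.5
    return shares * price * effective_rate

print(round(taker_fee(1000, 0.50), 2))  # fee at the 50c peak
print(round(taker_fee(1000, 0.90), 2))  # cheaper near-certain contracts
```

Under this sketch, 1,000 shares at 50c pay the full 1.8% of notional, while the same position at 90c pays an effective rate of only 0.36%.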

Polymarket
Lookonchain · 27d ago

Anthropic is having a month

Anthropic has built its public identity around the idea that it's the careful AI company. It publishes detailed work on AI risk, employs some of the best researchers in the field, and has been vocal about the responsibilities that come with building such powerful technology -- so vocal, in fact, that it is currently battling it out with the Department of Defense. On Tuesday, alas, someone there forgot to check a box. It is, notably, the second such incident in a week.

Last Thursday, Fortune reported that Anthropic had accidentally made nearly 3,000 internal files publicly available, including a draft blog post describing a powerful new model the company had not yet announced. Here's what happened on Tuesday: when Anthropic pushed out version 2.1.88 of its Claude Code software package, it accidentally included a file that exposed nearly 2,000 source code files and more than 512,000 lines of code -- essentially the full architectural blueprint for one of its most important products. A security researcher named Chaofan Shou noticed almost immediately and posted about it on X. Anthropic's statement to multiple outlets was nonchalant as these things go: "This was a release packaging issue caused by human error, not a security breach." (Internally, we'd guess things were less measured.)

Claude Code isn't a minor product. It's a command-line tool that lets developers use Anthropic's AI to write and edit code, and it has become formidable enough to unsettle rivals. According to the WSJ, OpenAI pulled the plug on its video generation product Sora just six months after launching it to the public to refocus its efforts on developers and enterprises -- partly in response to Claude Code's growing momentum. What leaked was not the AI model itself but the software scaffolding around it -- the instructions that tell the model how to behave, what tools to use, and where its limits are.
Developers began publishing detailed analyses almost immediately, with one describing the product as "a production-grade develope ...

Anthropic
RocketNews · 27d ago

Starlink satellite experiences on-orbit anomaly, SpaceX confirms

US aerospace company SpaceX confirmed on Monday that a Starlink satellite experienced an on-orbit anomaly on Sunday, resulting in a loss of communications and the generation of debris. Starlink satellite 34343 lost contact while operating at an altitude of approximately 560 km above Earth, according to a release via Starlink's official X account.

The latest analysis indicates that the anomaly poses no new risk to the International Space Station and its crew, nor to NASA's upcoming Artemis II crewed lunar mission, according to the release. SpaceX said it will continue to monitor the satellite along with any trackable debris and coordinate with NASA and the US Space Force. SpaceX also noted that it posed no new risk to the Transporter-16 rideshare mission launched earlier on Monday, as payload deployments for that mission were conducted well above or below the Starlink constellation to avoid potential collisions.

Meanwhile, LeoLabs, a US space technology company specializing in the tracking of satellites and space debris in low Earth orbit, said on Monday it detected a fragment generation event associated with Starlink 34343 on Sunday. Its analysis suggests similarities with a previous anomaly involving Starlink satellite 35956 on December 17, 2025. Such events illustrate the need for rapid characterization of anomalous events to enable clarity of the operating environment, the company said.

SpaceX
news.cgtn.com · 27d ago

Anthropic Confirms Partial Source Code Leak of 'Claude Code' Assistant; 'Release Packaging Issue Caused by Human Error', Says Company

Mumbai, April 1: AI startup Anthropic confirmed on Tuesday that internal source code for its popular developer tool, Claude Code, was inadvertently exposed online. The company attributed the incident to a "release packaging issue" caused by human error rather than a malicious security breach. While the leak provides a rare look into the architecture of one of the industry's most successful coding assistants, Anthropic stated that no sensitive customer data or credentials were compromised during the exposure.

The leak gained significant traction early Tuesday morning after a post on X (formerly Twitter) containing a link to the code was shared at 4:23 AM ET. The post has since amassed more than 21 million views, highlighting the intense industry interest in Anthropic's proprietary technology. "This was a release packaging issue caused by human error, not a security breach," an Anthropic spokesperson said in a statement. The company noted it is rolling out new internal measures to prevent similar packaging mistakes in future updates.

The exposure of source code is a notable setback for the San Francisco-based startup. Claude Code, released to the general public in May 2025, has become a cornerstone of Anthropic's commercial success; as of February 2026, the tool's run-rate revenue had reportedly swelled to more than USD 2.5 billion. Access to this code could offer competitors -- including OpenAI, Google, and xAI -- valuable insights into how Anthropic optimised the tool for building features, fixing bugs, and automating complex software development tasks. All three major rivals have recently increased resource allocation to develop competing AI coding environments. This incident marks the second significant data oversight for Anthropic in less than a week.
On Thursday, a report from Fortune revealed that descriptions of an upcoming AI model and other internal documents were discovered in a publicly accessible data cache. These back-to-back incidents have raised questions about the internal data handling protocols at the company, which was founded in 2021 by former OpenAI executives with a core mission centred on AI safety and reliability.

Anthropic is best known for its "Claude" family of large language models. Claude Code was designed to sit directly within a developer's terminal, allowing for a more integrated and "agentic" coding experience than standard chat interfaces. Its rapid adoption over the last year has made it a vital asset for the company as it seeks to maintain its position as a top-tier AI lab. Despite the leak, Anthropic maintains that the core functionality of the service remains secure for its enterprise and individual users.

xAI, Anthropic
LatestLY, 27d ago
Anthropic Confirms Partial Source Code Leak of 'Claude Code' Assistant; 'Release Packaging Issue Caused by Human Error', Says Company

The Australian government held an 'announceable' with Anthropic AI - so we asked its chatbot Claude if it matters. Here's what it said.

Anthropic AI and the Aussie government ink a deal with AI credits and aspirational pledges

OpenAI and Grok rival Anthropic AI is on a charm offensive in Australia this week, led by CEO Dario Amodei and senior execs. While Amodei and his $500 billion private company are currently at war with the Trump administration, the Down Under sojourn featured a meeting with prime minister Anthony Albanese this morning to sign a Memorandum of Understanding (MOU) supporting Australia's national AI plan, before a chinwag with treasurer Jim Chalmers in the afternoon, and offers of 5-minute one-on-ones with any politicians keen to hear more as Anthropic follows in OpenAI's footsteps and opens an office in Sydney.

The MOU includes $3 million in Claude API credits for four unnamed institutions working in clinical genomics, precision medicine, paediatric research and computing education, which in AI terms is like the boss slapping $50 on the pub counter for Christmas drinks. A Claude Max account costs $170 a month, so that's three years of access for fewer than 500 people. And this week, plenty of people have taken to social media to complain about how quickly they're suddenly burning through their Claude credits.

So aside from giving the government an "announceable", what does it all mean? Who better to ask than Anthropic's chatbot, Claude? We fed Anthropic's media release in and asked Claude to analyse it "and tell me if it says anything important that Australian people should know about AI". Here's what Claude said:

Bottom line: This is primarily a business and government partnership announcement, but it has real implications -- particularly the AI safety cooperation (which affects how the government understands and regulates AI), the medical research investments (which could affect healthcare outcomes for Australian families), and the workforce tracking (which will shape policy responses to AI-driven economic change).
It's a credible and substantive announcement rather than purely promotional.

Then we fed the government's announcement to Claude, asking it for the same analysis. The result is quite telling, and the final line suggests AI has a pretty good understanding of how staged announcements between governments and large tech companies generally play out. Here's what you need to know, according to AI.

Here's what this government release adds to the picture: What's different about this version? This is the Australian government's own framing of the same deal, and the language is notably more cautious and conditional than Anthropic's release. A few things stand out:

1. "On terms that work for Australians" is doing a lot of work. The headline framing -- welcoming investment on Australia's terms -- signals that the government is aware of public sensitivity around foreign tech companies. They're positioning this as Australia setting the conditions, not just receiving investment. Whether that's substantively true or mostly political messaging is worth watching.

2. Data centre expectations are front and centre. The government specifically highlights Anthropic's commitment to align with Australia's data centre and AI infrastructure expectations (released just days before, on 23 March 2026). This is notable -- it suggests energy use, data sovereignty, and supply chain security are active concerns the government is trying to get ahead of.

3. The language is notably tentative. Where Anthropic's release speaks confidently about what will happen, the government release uses softer language: "outlines options for future collaboration," "intent to support," "exploring investments." Australians should read this as a framework agreement, not a done deal -- the concrete commitments are still to be negotiated.

4. The political framing is aimed at workers and communities. References to supporting "Australian workers, startups and small businesses" and delivering "better services faster, for all Australians" reflect clear political messaging ahead of what is expected to be a federal election period. The government wants credit for managing AI in the public interest.

5. Supply chain security is mentioned. This is the most geopolitically significant line in the government release and absent from Anthropic's. Flagging "supply chain security" suggests the government is thinking about where AI infrastructure sits in the context of broader strategic competition -- particularly relevant given Australia's position in the Indo-Pacific.

Overall read: Both releases describe the same agreement but with different emphases. Anthropic focuses on the science, the research partnerships, and the commercial opportunity. The government focuses on sovereignty, safety, and political accountability. For Australians, the most important question to watch over time is whether the "options for future collaboration" translate into concrete, enforceable commitments -- or remain aspirational.

Anthropic
Startup Daily, 27d ago

Anthropic AI coding tool leaks its own source code again

Source code is a human-readable set of instructions that programmers write to build software. When companies release software, they usually compile or bundle that code into a compressed format to protect their intellectual property and internal systems. A source map is a supplementary file used during development, bridging the gap between the compressed version of a program and its original source code. The source map is an essential tool for developers when testing and fixing bugs. However, it has no place in a finished product released to the public, as it effectively hands anyone the complete original recipe. The latest version of Claude Code, v2.1.88, released on March 31, still contained this file, which held the complete code of 1,906 proprietary Claude Code source files covering internal API design, telemetry analysis systems, encryption tools and inter-process communication protocols.
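The mechanics are easy to demonstrate. Source maps follow the widely used Source Map v3 format: a plain JSON file whose optional "sourcesContent" array embeds the full original text of every source file that went into the bundle. The sketch below uses a minimal hypothetical map (the file names and contents are invented for illustration, not taken from the leak) to show why shipping a .map file alongside a bundled product exposes the original code directly:

```python
import json

# A hypothetical, minimal Source Map v3 file of the kind that might ship
# next to a bundled cli.js when the .map is not stripped before release.
source_map = json.loads("""
{
  "version": 3,
  "file": "cli.js",
  "sources": ["src/api/client.ts", "src/telemetry/events.ts"],
  "sourcesContent": ["export const API = 'internal';", "export function track() {}"],
  "mappings": "AAAA"
}
""")

# "sourcesContent" embeds the complete original source of every listed file,
# so anyone holding the .map can simply write the originals back to disk.
for path, content in zip(source_map["sources"], source_map["sourcesContent"]):
    print(f"{path}: {len(content)} bytes of original source recovered")
```

In other words, no reverse engineering is required: recovering the originals is a matter of reading fields out of a JSON file, which is why researchers describe shipping a map file in a public npm package as handing over the recipe itself.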

Anthropic
NewsBytes, 27d ago

SpaceX lines up 21 banks for mega IPO, code-named project Apex

The size of the syndicate underscores the scale and complexity of the planned offering

SpaceX is working with at least 21 banks on its blockbuster initial public offering, people familiar with the matter said on Tuesday, one of the largest underwriting syndicates assembled in recent years. The listing, internally codenamed Project Apex, is expected to be among the most closely watched stock market debuts on Wall Street. The public offering, expected in June, is estimated to value the rocket company controlled by founder and CEO Elon Musk at $1.75 trillion.

Morgan Stanley, Goldman Sachs, JPMorgan Chase, Bank of America and Citigroup are serving as active bookrunners, or the lead banks managing the deal, the people said, asking not to be identified because the process is not public. A further 16 banks have signed on in smaller roles, they added. About half of the banks' names have not previously been reported.

Banks in addition to the active bookrunners include:

* Allen & Co
* Barclays
* Brazil's BTG Pactual
* Deutsche Bank
* The Netherlands' ING Groep
* Macquarie
* Mizuho
* Needham & Co
* Raymond James
* Royal Bank of Canada
* Societe Generale
* Banco Santander
* Stifel
* UBS
* Wells Fargo
* William Blair

The banks are expected to take on roles in institutional, high-net-worth and retail investor channels as well as in different geographic regions, Reuters previously reported. The plan is subject to change and additional banks could still be added, the sources said. Texas-based SpaceX did not immediately respond to a request for comment. Bank of America, Barclays, Deutsche Bank, Goldman Sachs, JPMorgan, Mizuho, Santander and Wells Fargo declined to comment. The other banks did not immediately respond to requests for comment.

Large IPO syndicates have become more common for mega deals in recent years. Chip designer ARM Holdings worked with close to 30 banks on its 2023 listing, while Alibaba Group assembled a similarly large group of underwriters for its record-breaking 2014 debut. (Reporting by Echo Wang in New York; Editing by Dawn Kopecki and Cynthia Osterman)

SpaceX
Zawya.com, 27d ago

Perplexity AI sued over alleged user data sharing with Meta and Google

Perplexity AI has been hit with a proposed class-action lawsuit in the United States, accusing the company of covertly sharing users' personal data with tech giants Meta and Google without proper consent. According to a report by Bloomberg, the complaint was filed on Tuesday in a federal court in San Francisco. It alleges that the AI search platform deployed hidden tracking mechanisms that allowed third parties to access sensitive user interactions, potentially breaching California privacy laws.

Perplexity AI data sharing allegations raise privacy concerns

The lawsuit claims that as soon as users access Perplexity's homepage, tracking tools are automatically installed on their devices. These trackers allegedly enable Meta and Google to monitor conversations between users and the AI-powered search engine. This is a developing story…

Perplexity
Firstpost, 27d ago

Mercor says it was hit by cyberattack tied to compromise of open-source LiteLLM project - RocketNews

Mercor, a popular AI recruiting startup, has confirmed a security incident linked to a supply chain attack involving the open-source project LiteLLM. The AI startup told TechCrunch on Tuesday that it was "one of thousands of companies" affected by a recent compromise of LiteLLM's project, which was linked to a hacking group called TeamPCP. Confirmation of the incident comes as extortion hacking group Lapsus$ claimed it had targeted Mercor and gained access to its data. It's not immediately clear how the Lapsus$ gang obtained the stolen data from Mercor as part of TeamPCP's cyberattack. Founded in 2023, Mercor works with companies including OpenAI and Anthropic to train AI models by contracting specialized domain experts such as scientists, doctors, and lawyers from markets including India. The startup says it facilitates more than $2 million in daily payouts and was valued at $10 billion following a $350 million Series C round led by Felicis Ventures in October 2025. Mercor spokesperson Heidi Hagberg confirmed to TechCrunch that the company had "moved promptly" to contain and remediate the security incident. "We are conducting a thorough investigation supported by leading third-party forensics experts," said Hagberg. "We will continue to communicate with our customers and contractors directly as appropriate and devote the resources necessary to resolving the matter as soon as possible." Earlier, Lapsus$ claimed responsibility for the apparent data breach on its leak site and shared a sample of data allegedly taken from Mercor, which TechCrunch reviewed. The sample included material referencing Slack data and what appeared to be ticketing data, as well as two videos purportedly showing conversations between Mercor's AI systems and contractors on its platform. Hagberg declined to answer follow-up questions on whether the incident was connected to claims by Lapsus$, or whether any customer or contractor data had been accessed, exf ...

MercorAnthropic
RocketNews | Top News Stories From Around the Globe, 27d ago

Perplexity AI machine accused of sharing data with Meta, Google

As soon as users log into Perplexity's home page, trackers are downloaded onto their devices, giving Meta and Google full access to the conversations between them and Perplexity's AI search engine, according to the proposed class-action complaint filed Tuesday in federal court in San Francisco. This allows Meta and Google "to exploit this sensitive data for their own benefit, including targeting individuals with advertising and reselling their sensitive data to additional third parties," according to the complaint. Users' personal data is shared even when they sign up for Perplexity's "Incognito" mode, according to the complaint.

The suit was filed on behalf of a Utah man, identified only as John Doe, who seeks to represent a class of Perplexity users. According to the suit, the man shared information about his family's finances, his tax obligations, his investment portfolio and strategies with Perplexity's chatbot. Perplexity embedded "undetectable" tracking software into the search engine's code that automatically transmits users' conversations to Meta, Google and other third parties, according to the complaint.

The lawsuit also targets Meta and Google, accusing them of violating federal and state computer privacy and fraud laws. A Meta spokesperson pointed to a Facebook help page which says it's against the tech giant's rules for advertisers to send the company sensitive information. "We have not been served any lawsuit that matches this description so we are unable to verify its existence or claims," said Jesse Dwyer, a Perplexity spokesperson. Representatives of Google didn't immediately respond to a request for comment. The case is Doe v. Perplexity AI Inc., 3:26-cv-02803, US District Court, Northern District of California (San Francisco).

Perplexity
Hindustan Times, 27d ago