News & Updates

The latest news and updates from companies in the WLTH portfolio.

Perplexity AI accused of sharing users' personal data with Meta, Google

According to the complaint filed, trackers are downloaded onto users' devices as soon as they log into Perplexity's home page. Perplexity AI was accused in a lawsuit of surreptitiously sharing the personal information of its users with Meta Platforms and Alphabet's Google in violation of California privacy laws. As soon as users log into Perplexity's home page, trackers are downloaded onto their devices, giving Meta and Google full access to the conversations between them and Perplexity's AI search engine, according to the proposed class-action complaint filed on March 31 in federal court in San Francisco. This allows Meta and Google "to exploit this sensitive data for their own benefit, including targeting individuals with advertising and reselling their sensitive data to additional third parties," according to the complaint. Users' personal data is shared even when they sign up for Perplexity's "Incognito" mode, according to the complaint. The suit was filed on behalf of a Utah man, identified only as John Doe, who seeks to represent a class of Perplexity users. According to the suit, the man shared information about his family's finances, his tax obligations, his investment portfolio and strategies with Perplexity's chatbot. Perplexity embedded "undetectable" tracking software into the search engine's code that automatically transmits users' conversations to Meta, Google and other third parties, according to the complaint. The lawsuit also targets Meta and Google, accusing them of violating federal and state computer privacy and fraud laws. A Meta spokesperson pointed to a Facebook help page which says it's against the tech giant's rules for advertisers to send the company sensitive information. "We have not been served any lawsuit that matches this description, so we are unable to verify its existence or claims," said Perplexity spokesperson Jesse Dwyer. Representatives of Google did not immediately respond to a request for comment. The case is Doe v.
Perplexity AI Inc., 3:26-cv-02803, US District Court, Northern District of California (San Francisco). BLOOMBERG

Perplexity
The Straits Times · 27d ago

Anthropic accidentally releases source code for Claude AI agent

Anthropic accidentally released source code for its Claude AI agent. This release has developers examining the code for insights into future platform evolution. Experts are also raising concerns about potential security vulnerabilities. This incident follows a previous accidental public release of thousands of files, including details about a powerful new model. Anthropic inadvertently released source code for its popular Claude AI agent, raising questions about its operational security and sending developers on a search for clues about the startup's plans. "Earlier today, a Claude Code release included some internal source code. No sensitive customer data or credentials were involved or exposed," an Anthropic spokesperson said in an emailed statement. "This was a release packaging issue caused by human error, not a security breach. We're rolling out measures to prevent this from happening again." The leak of basic source code - the second slip-up in just a week - triggered a discussion in the community around new revelations of how Anthropic's popular coding agent works. Developers said on X that they were poring through the details to try and figure out how the startup intended to evolve the platform. Several experts also raised concerns about potential security vulnerabilities in light of the unintended exposure.

Anthropic
ETCIO.com · 27d ago

Anthropic accidentally releases source code for its popular Claude AI agent

The leak comes days after Fortune reported that the company accidentally made thousands of files publicly available. By Shirin Ghaffary and Mark Anderson. Anthropic PBC inadvertently released source code for its popular Claude AI agent, raising questions about its operational security and sending developers on a search for clues about the startup's plans. "Earlier today, a Claude Code release included some internal source code. No sensitive customer data or credentials were involved or exposed," an Anthropic spokesperson said in an emailed statement. "This was a release packaging issue caused by human error, not a security breach. We're rolling out measures to prevent this from happening again." The leak of basic source code -- the second slip-up in just a week -- triggered a discussion in the community around new revelations of how Anthropic's popular coding agent works. Developers said on X they were poring through the details to try and figure out how the startup intended to evolve the platform. Several experts also raised concerns about potential security vulnerabilities in light of the unintended exposure. The leak comes days after Fortune reported that the company accidentally made thousands of files publicly available, including a draft blog post that detailed a powerful upcoming model known internally as both "Mythos" and "Capybara" that presents cybersecurity risks.

Anthropic
Business Standard · 27d ago

AI giant Anthropic says 'exploring' Australia data centre investments

SYDNEY -- Artificial intelligence (AI) giant Anthropic is eyeing data centre investments in Australia, saying Wednesday the nation was a "natural partner" for work in the booming sector. With immense renewable energy potential and vast stretches of uninhabited land, Australia has touted itself as a prime location for the power-hungry data centres needed to power AI. US-based Anthropic said it was "exploring investments in data centre infrastructure and energy throughout the country" after signing a memorandum of understanding with the Australian government. "The visit to Australia marks the beginning of long-term collaboration and investment into the Asia-Pacific region," the technology company said in a statement. "Australia's investment in AI safety makes it a natural partner for responsible AI development." The agreement, signed by Anthropic chief executive Dario Amodei in capital Canberra, said the firm would abide by local laws to "maintain strong social licence for investment". Australia's arts sector has accused Anthropic and other AI companies of pushing to loosen copyright laws so chatbots can be trained on local songs and books. Anthropic said it had also agreed to share AI research and safety information with Australian regulators, mirroring similar agreements in Japan and Britain. Industry Minister Tim Ayres said Australia and Anthropic would "harness AI responsibly". New data centres -- warehouse facilities that store files and power AI tools -- are springing up worldwide. But there are increasing fears about the environmental impact of hulking data hubs. Singapore halted data centre developments between 2019 and 2022 over energy, water and land use worries. Australia last week adopted new rules governing the operation of data centres. Tech companies must show how they will source renewable energy and minimise their emissions. 
"As demand for AI grows, continued expansion of data centre infrastructure must reflect Australian values and be environmentally and socially sustainable," the guidelines state. Anthropic's Claude is the Pentagon's most widely-deployed frontier AI model and the only such model currently operating on its classified systems. But the company is locked in a dispute with the US government, after saying it would refuse to let its systems be used for mass surveillance. Washington has since described Anthropic's tools as an "unacceptable risk to national security". The United States has not only blocked use of the company's technology by the Pentagon, but also requires all defense contractors to certify that they do not use Anthropic's models.

Anthropic
Bangkok Post · 27d ago

Anthropic Accidentally Leaks Claude Code AI Secrets: What It Means for Users

In a dramatic turn of events, AI giant Anthropic has accidentally revealed key details of Claude Code, its flagship agentic coding tool. The disclosure stemmed from a debug file mistakenly shipped in a public software package, which offered an unusually deep look into how one of the leading agentic AI systems operates. The issue began with version 2.1.88 of the @anthropic-ai/claude-code package on npm, which included a 59.8MB debug file meant only for internal developer use. It allowed anyone to access a major chunk of the tool's source code, leading to the exposure of around 512,000 lines of TypeScript.
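The culprit described above is a debug artifact (a source map) bundled into a published npm package. As a minimal, hedged sketch of how one might audit a downloaded npm tarball for such files before trusting a release: the function name and the `.map` suffix heuristic below are illustrative assumptions, not part of any official tooling.

```python
import tarfile

def find_sourcemap_members(tgz_path: str) -> list[str]:
    """List .map (source map / debug) files bundled inside an npm tarball.

    An npm release tarball is a gzipped tar whose members live under the
    'package/' prefix; source maps should normally be stripped before publish.
    """
    with tarfile.open(tgz_path, "r:gz") as tf:
        return [m.name for m in tf.getmembers() if m.name.endswith(".map")]
```

Run against a tarball fetched with `npm pack <package>` (hypothetical usage), this would flag any bundled debug maps before the package's code is ever executed.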

Anthropic
TimesNow · 27d ago

Anthropic accidentally exposed part of Claude Code's internal source code

Anthropic recently experienced a surge in downloads following its split from the Pentagon. Rival developers just gained insight into how Anthropic built its popular AI-powered coding assistant tool, Claude Code. The company accidentally leaked part of the internal source code for Claude Code during a release, a spokesperson confirmed to Business Insider on Tuesday. "Earlier today, a Claude Code release included some internal source code. No sensitive customer data or credentials were involved or exposed," an Anthropic spokesperson said in a statement. "This was a release packaging issue caused by human error, not a security breach. We're rolling out measures to prevent this from happening again." An X post with a link and a screenshot of what appeared to be internal source code for Claude Code had racked up 26 million views as of Tuesday evening. The exposed code was related to Claude Code itself, not the underlying AI models. The leak could give a leg up to Anthropic's rivals by offering an inside look into one of its most popular products. It also raises security questions about a company that has positioned itself as focused on AI safety. The leak comes after a period of growth for Anthropic, fueled by a very public breakup with the Pentagon in February. After CEO Dario Amodei refused to back down in a dispute over how its AI could be used, the Defense Department instead struck a deal with OpenAI. Last week, US District Judge Rita Lin granted a temporary injunction blocking the supply chain risk designation. Following the dispute, Anthropic's Claude chatbot saw a surge of downloads over the past month, briefly rising to No. 1 in the US Apple App Store.

Anthropic
DNyuz · 27d ago

Anthropic leak revealed part of Claude Code's internal source code due to 'human error'

Rival developers just gained insight into how Anthropic built its popular AI-powered coding assistant tool, Claude Code. The company accidentally leaked part of the internal source code for Claude Code during a release, a spokesperson confirmed to Business Insider on Tuesday. "Earlier today, a Claude Code release included some internal source code. No sensitive customer data or credentials were involved or exposed," an Anthropic spokesperson said in a statement. "This was a release packaging issue caused by human error, not a security breach. We're rolling out measures to prevent this from happening again." An X post with a link and a screenshot of what appeared to be internal source code for Claude Code had racked up 26 million views as of Tuesday evening. The exposed code was related to Claude Code itself, not the underlying AI models. The leak could give a leg up to Anthropic's rivals by offering an inside look into one of its most popular products. It also raises security questions about a company that has positioned itself as focused on AI safety. The leak comes after a period of growth for Anthropic, fueled by a very public breakup with the Pentagon in February. After CEO Dario Amodei refused to back down in a dispute over how its AI could be used, the Defense Department instead struck a deal with OpenAI. Last week, US District Judge Rita Lin granted a temporary injunction blocking the supply chain risk designation. Following the dispute, Anthropic's Claude chatbot saw a surge of downloads over the past month, briefly rising to No. 1 in the US Apple App Store.

Anthropic
Business Insider · 27d ago

Anthropic accidentally releases source code for Claude AI agent - The Economic Times

Anthropic said a Claude Code release accidentally included internal source code due to human error, not a breach. No customer data was exposed. The slip-up, following another recent issue, sparked developer interest and security concerns, as people examined details and clues about plans, including references to a powerful upcoming model. Anthropic inadvertently released source code for its popular Claude AI agent, raising questions about its operational security and sending developers on a search for clues about the startup's plans. "Earlier today, a Claude Code release included some internal source code. No sensitive customer data or credentials were involved or exposed," an Anthropic spokesperson said in an emailed statement. "This was a release packaging issue caused by human error, not a security breach. We're rolling out measures to prevent this from happening again." The leak of basic source code -- the second slip-up in just a week -- triggered a discussion in the community around new revelations of how Anthropic's popular coding agent works. Developers said on X that they were poring through the details to try and figure out how the startup intended to evolve the platform. Several experts also raised concerns about potential security vulnerabilities in light of the unintended exposure. The leak comes days after Fortune reported that the company accidentally made thousands of files publicly available, including a draft blog post that detailed a powerful upcoming model known internally as both "Mythos" and "Capybara" that presents cybersecurity risks.

Anthropic
Economic Times · 27d ago

Anthropic accidentally releases source code for Claude AI agent

San Francisco - Anthropic inadvertently released source code for its popular Claude AI agent, raising questions about its operational security and sending developers on a search for clues about the start-up's plans. "Earlier today, a Claude Code release included some internal source code. No sensitive customer data or credentials were involved or exposed," an Anthropic spokesperson said in an emailed statement. "This was a release packaging issue caused by human error, not a security breach. We're rolling out measures to prevent this from happening again." The leak of basic source code - the second slip-up in just a week - triggered a discussion in the community around new revelations of how Anthropic's popular coding agent works. Developers said on X they were poring through the details to try and figure out how the start-up intended to evolve the platform. Several experts also raised concerns about potential security vulnerabilities in light of the unintended exposure. The leak comes days after Fortune reported that the company accidentally made thousands of files publicly available, including a draft blog post that detailed a powerful upcoming model known internally as both "Mythos" and "Capybara" that presents cybersecurity risks. BLOOMBERG

Anthropic
The Straits Times · 27d ago

Anthropic Accidentally Releases Source Code for Claude AI Agent

Anthropic PBC inadvertently released source code for its popular Claude AI agent, raising questions about its operational security and sending developers on a search for clues about the startup's plans. "Earlier today, a Claude Code release included some internal source code. No sensitive customer data or credentials were involved or exposed," an Anthropic spokesperson said in an emailed statement. "This was a release packaging issue caused by human error, not a security breach. We're rolling out measures to prevent this from happening again." The leak of basic source code -- the second slip-up in just a week -- triggered a discussion in the community around new revelations of how Anthropic's popular coding agent works. Developers said on X they were poring through the details to try and figure out how the startup intended to evolve the platform. Several experts also raised concerns about potential security vulnerabilities in light of the unintended exposure. The leak comes days after Fortune reported that the company accidentally made thousands of files publicly available, including a draft blog post that detailed a powerful upcoming model known internally as both "Mythos" and "Capybara" that presents cybersecurity risks.

Anthropic
Bloomberg Business · 27d ago

NCIS: Sydney Season 3 Episode 13 Review: A Thrilling Conclusion Deepens Bonds Amid Chaos

We can let out a collective sigh of relief that Trigger has survived. There were a few times during NCIS: Sydney Season 3 Episode 13 when it felt like his number was up. However, we didn't escape the hour without a casualty. It just made many on the team really appreciate one another and the dynamics they have. Lone Wolf Part Two is just as strong as NCIS: Sydney Season 3 Episode 12, and once again, I found Claude Jabbour's performance as Trigger captivating. There are so many layers to Trigger. Jabbour truly sells the pain in his eyes, as well as the difficulty of grasping that he has connected with this team, especially Evie. The intensity of his work on her bomb collar got to me. It was easy to see that if it were just him, he'd probably embrace his fate, but because Evie was there, he wasn't going to let her die. Evie bringing him back to the land of the conscious as seconds ticked away added that extra layer of fear. Suddenly, we have these two people with an inexplicable bond that largely formed offscreen during one of the most intense, life-and-death moments ever. Being in the thick of it together would bond them. They forged a bond by sharing trauma and getting through the darkness and out the other side. Ironically, Kyle forged a bond with his friend through similar experiences. In an effort to strip Trigger of everything he loves, Kyle gives Trigger the same thing: a unique bond with someone who has stood on the edges of darkness with him and breached all his defenses. The dynamic between Trigger and Evie is fascinating. It's still not something I buy into romantically, but it works if we don't have to be so reductive. I tend to find that the most compelling relationships are those that reside in the gray, indefinable spaces and cracks. My only wish is that they gave us more of them in some capacity leading up to this, so that when we got to a point where they were making pacts, and Evie became Trigger's next of kin, I could feel the weight of that more. 
It's like we missed the exact moment when something clicked for him, specifically where a closed-off guy like him realized that Evie could be his person. Whatever that actually means to him. After this traumatic experience, we at least have that to build on a bit more. And the series may very well intend to keep doing that now, which is fine. What does work for this duo is the two different types of darkness they hide behind. Trigger is intense and enigmatic, usually. He's been a bit of a raw nerve now because all of his dirty laundry and his history are out there. Evie has so much darkness, too. She hides behind and deflects with humor. But the self-awareness she has, to the point of expressing it openly now, makes things interesting. Through that, their connection has some roots, even though it still feels like it came out of nowhere. But they are interesting to watch. And they may both offer each other something that those around them can't necessarily deliver. Because when you have that darkness, sometimes, you're more cognizant of not wanting to dim the light in your world (Trigger with Blue, perhaps, and Evie with DeShawn). It was upsetting to learn that Trigger shot an unarmed man. He just didn't know it because Rory covered for him. Rory's intentions were all well and good. He didn't want to ruin the career of a young, up-and-coming Trigger because he did his job. Someone yelled "Gun," and Trigger shot. On paper, that's a clean shoot for him. He did what he was trained to do. But Rory's efforts to protect Trigger turned Kyle into a monster. One's actions will always come back around to haunt a person. Kyle basically had his entire life ruined in the split second it took for a cop to mistake an inhaler for a gun. There was no end to Kyle's revenge. It was like he couldn't rest until Trigger was dead. Trigger survived the bomb collar, so Kyle attacked him at the hospital. 
And when Trigger survived that, somehow (because the logistics aren't sensible), Kyle arranged for two bombs at Rory's funeral. He never got his wish, so is there an end to Kyle? The guy is relentless. The hour does the absolute most sometimes with the bomb content, though. Waiting until mere seconds remained for no real reason other than "dramatic effect" got grating. If they had to hold off, the one time the tactic earned its place was Trigger grappling with his desire to be with Charlotte again versus calling in Evie with mere seconds remaining, because he's learned to trust her. Nevertheless, the stakes were high, and we didn't escape the hour without a bomb taking someone out. I didn't trust Rory, so it was a relief to learn that he didn't have ill will and was genuinely being shady because he was protecting Trigger. He broke protocol and did something unethical, but that doesn't make him a bad person. I've given Jabbour his kudos, but Todd Lasance wrecked me, particularly in the back half after Rory's death. It's hard not to obsess over the partnership between JD and Mackey. We've seen how they blossomed from a working dynamic into a partnership, and into something far deeper and more intimate than either of them will ever articulate. Oddly enough, trusting someone with your life in that gig is a bit easier, simpler. The real intimacy that makes them work and feels electric in those quiet moments is knowing that they trust each other with their vulnerability. JD fully leaned in and on Mackey as he processed Rory's death. There wasn't any holding back for either of them. These are two people fully displaying their emotions and care. It seems like such a small thing, but it's so important. Even Mackey panicked when she figured out why the dogs were barking. We got two "Jims" out of her rather than just calling JD. She was fully terrified that he'd get blown up in the blast, and she was beside herself, cradling his head and trying to make sure he was still alive. 
Her panic in the moment when he wasn't breathing was palpable. You could sense that she felt horrible that he lost someone, but she was also just so damn relieved that she didn't lose her someone. The weight of her reminding him that his friend is still a good person, and still his friend, even though he did a bad thing, hangs heavy, as we still have to wait to see how this conspiracy plot will play out. But she did finally tell him about the missing photo of her son. And I'm so relieved that she's flying Trey out. He'll be a bit safer with her having eyes on him, but I'm still stressed out about this.

Stakeout Sessions:
* DeShawn telling Evie that Doris, the bagel lady, is his next of kin in Australia, felt like him pulling an Evie on Evie. Seriously, it's Evie, isn't it?!
* Blue taking so long to figure out that Doc and the doctor are probably doing fun adult things and not just working another case made me laugh out loud.
* Most of the season has played up Evie's humor as a defense mechanism. I'm so glad the past two episodes have shown her depth.
* I'm really missing some quality Blue content. We haven't had much since the beginning of the season.

Over to you, NCIS: Sydney Fanatics. Was this a great two-parter? How are you feeling about the Evie/Trigger dynamic? Are you bummed by Rory's death? Let's hear all of your thoughts in the comments below. Or share this with a fellow NCIS: Sydney fan! Thanks for all of your support!

CHAOS
TV Fanatic · 27d ago

AI giant Anthropic says 'exploring' Australia data centre investments

SYDNEY, Australia -- Artificial intelligence giant Anthropic is eyeing data centre investments in Australia, saying Wednesday the nation was a "natural partner" for work in the booming sector. With immense renewable energy potential and vast stretches of uninhabited land, Australia has touted itself as a prime location for the power-hungry data centres needed to power AI. US-based Anthropic said it was "exploring investments in data centre infrastructure and energy throughout the country" after signing a memorandum of understanding with the Australian government. "The visit to Australia marks the beginning of long-term collaboration and investment into the Asia-Pacific region," the technology company said in a statement. "Australia's investment in AI safety makes it a natural partner for responsible AI development." The agreement, signed by Anthropic chief executive Dario Amodei in capital Canberra, said the firm would abide by local laws to "maintain strong social licence for investment". Australia's arts sector has accused Anthropic and other AI companies of pushing to loosen copyright laws so chatbots can be trained on local songs and books. Anthropic said it had also agreed to share AI research and safety information with Australian regulators, mirroring similar agreements in Japan and Britain. Industry Minister Tim Ayres said Australia and Anthropic would "harness AI responsibly". - Energy-intensive - New data centres -- warehouse facilities that store files and power AI tools -- are springing up worldwide. But there are increasing fears about the environmental impact of hulking data hubs. Singapore halted data centre developments between 2019 and 2022 over energy, water and land use worries. 
Australia last week adopted new rules governing the operation of data centres. Tech companies must show how they will source renewable energy and minimise their emissions. "As demand for AI grows, continued expansion of data centre infrastructure must reflect Australian values and be environmentally and socially sustainable," the guidelines state. Anthropic's Claude is the Pentagon's most widely-deployed frontier AI model and the only such model currently operating on its classified systems. But the company is locked in a dispute with the US government, after saying it would refuse to let its systems be used for mass surveillance. Washington has since described Anthropic's tools as an "unacceptable risk to national security". The United States has not only blocked use of the company's technology by the Pentagon, but also requires all defense contractors to certify that they do not use Anthropic's models.

Anthropic
The Manila Times · 27d ago

Mercor says it is "one of thousands of companies" hit by the recent LiteLLM attack

The LiteLLM attack, one of the largest supply chain attacks in the AI industry, affected Mercor, whose data was stolen and posted on dark web forums. A couple of days ago, a group of hackers known as TeamPCP pulled off a brazen supply chain attack against a core piece of AI infrastructure. They targeted LiteLLM, a hugely popular open-source API gateway that lets developers talk to over 100 different large language models from providers like OpenAI and Anthropic. The hackers got in by first compromising the Trivy vulnerability scanner through a misconfigured GitHub Actions workflow. With access, they snatched the PyPI publishing token for LiteLLM and pushed two malicious versions, 1.82.7 and 1.82.8, directly to the public registry. Once installed, the malware would automatically run and steal SSH keys, .env files, cloud provider credentials, cryptocurrency wallets, and AI API keys. Now, Mercor, an AI recruiting and training-data startup, has confirmed it was "one of thousands of companies" hit by the attack. In a statement to TechCrunch, Mercor spokesperson Heidi Hagberg said: "We are conducting a thorough investigation supported by leading third-party forensics experts. We will continue to communicate with our customers and contractors directly as appropriate and devote the resources necessary to resolving the matter as soon as possible." The attackers were only caught because of a small bug in their code that caused a massive memory leak. Callum McMahon, an engineer at FutureSearch, who was testing an AI plugin that happened to use LiteLLM, noticed his machine kept crashing. He traced the problem back to a malicious file that was recursively spawning new processes, accidentally creating a "fork bomb" that exhausted all the system's RAM. If you think you have been infected, the first (obvious) step is to rotate all keys and secrets. After that, you need to upgrade LiteLLM. The last safe pre-attack version is 1.82.6, and the first patched post-attack version is 1.83.0. 
It seems there has been an increase in security incidents affecting developer tools. Shortly after the LiteLLM attack, the popular JavaScript library Axios was also compromised on NPM. In that case, hackers stole the credentials of a lead maintainer, changed the account email to an anonymous ProtonMail address, and published two poisoned versions. The malware even tries to be clever about it, self-destructing its own malicious scripts after execution and replacing its package.json with a decoy to hide its tracks.
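Going by the version numbers given above (1.82.7 and 1.82.8 malicious, 1.82.6 the last safe pre-attack release, 1.83.0 the first patched release), a tiny triage helper might look like the sketch below. The function and its messages are illustrative, not an official remediation tool, and it assumes no other 1.82.x releases exist.

```python
def classify_litellm_version(v: str) -> str:
    """Triage an installed litellm version against the versions in the report.

    1.82.7 and 1.82.8 were the malicious releases; 1.82.6 was the last safe
    pre-attack version; 1.83.0 was the first patched version.
    """
    if v in {"1.82.7", "1.82.8"}:
        return "compromised: rotate every key and secret, then upgrade"
    if tuple(int(x) for x in v.split(".")) >= (1, 83, 0):
        return "patched"
    return "pre-attack: not known-bad, but upgrade to >= 1.83.0"
```

Even on a "pre-attack" result, rotating credentials is cheap insurance if the machine ever ran one of the poisoned versions.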

Anthropic · Mercor
Neowin · 27d ago

AI giant Anthropic says 'exploring' Australia data centre investments

Artificial intelligence giant Anthropic is eyeing data centre investments in Australia, saying Wednesday the nation was a "natural partner" for work in the booming sector.

With immense renewable energy potential and vast stretches of uninhabited land, Australia has touted itself as a prime location for the power-hungry data centres needed to run AI.

US-based Anthropic said it was "exploring investments in data centre infrastructure and energy throughout the country" after signing a memorandum of understanding with the Australian government. "The visit to Australia marks the beginning of long-term collaboration and investment into the Asia-Pacific region," the technology company said in a statement. "Australia's investment in AI safety makes it a natural partner for responsible AI development."

The agreement, signed by Anthropic chief executive Dario Amodei in the capital, Canberra, said the firm would abide by local laws to "maintain strong social licence for investment". Australia's arts sector has accused Anthropic and other AI companies of pushing to loosen copyright laws so chatbots can be trained on local songs and books.

Anthropic said it had also agreed to share AI research and safety information with Australian regulators, mirroring similar agreements in Japan and Britain. Industry Minister Tim Ayres said Australia and Anthropic would "harness AI responsibly".

Energy-intensive

New data centres -- warehouse facilities that store files and power AI tools -- are springing up worldwide. But there are increasing fears about the environmental impact of hulking data hubs. Singapore halted data centre developments between 2019 and 2022 over energy, water and land use worries.

Australia last week adopted new rules governing the operation of data centres. Tech companies must show how they will source renewable energy and minimise their emissions. "As demand for AI grows, continued expansion of data centre infrastructure must reflect Australian values and be environmentally and socially sustainable," the guidelines state.

Anthropic's Claude is the Pentagon's most widely deployed frontier AI model and the only such model currently operating on its classified systems. But the company is locked in a dispute with the US government after saying it would refuse to let its systems be used for mass surveillance. Washington has since described Anthropic's tools as an "unacceptable risk to national security", and the United States has not only blocked use of the company's technology by the Pentagon but also requires all defense contractors to certify that they do not use Anthropic's models.

Anthropic
Daily Mail Online · 27d ago

$340 billion Anthropic that wiped trillions from stock market worldwide has source code of its most-important tool leaked on internet

Anthropic, the AI company whose product updates have repeatedly sent global stock markets into a spin, is now dealing with an embarrassing leak of its own making. The full source code of Claude Code -- its flagship AI coding agent -- accidentally made its way onto the public internet via an npm package that shipped with a source map file it shouldn't have. The leak exposed roughly 2,200 files and 30MB of TypeScript. It reportedly wasn't the first time, either: according to engineers who dug through the code, this is at least the third time Anthropic has made this exact mistake.

Developers who got their hands on the dump found more than just clean engineering. Buried inside were several unreleased features that Anthropic had been quietly building behind compile-time feature flags. One, codenamed Kairos, appears to be an always-on background agent with memory consolidation -- essentially a version of Claude that never fully switches off. Another is a full companion pet system called Buddy, complete with 18 species, rarity tiers, shiny variants, and stat distributions. There is also an Undercover Mode, described as auto-activating for Anthropic employees on public repos, which strips AI attribution from commits with no visible off switch. Coordinator Mode turns Claude into an orchestrator managing parallel worker agents, and Auto Mode uses an AI classifier to silently approve tool permissions, removing the usual confirmation prompts.

Beyond the hidden features, the leak gave outsiders a rare look at how a well-funded AI product actually gets built under pressure. The findings were mixed. The main user interface is a single React component -- 5,005 lines long -- containing 68 state hooks, 43 effects, and JSX nesting that goes 22 levels deep. Engineers reading it noted a TODO comment sitting next to a disabled lint rule on line 4114. The entry point file, main.tsx, runs to 4,683 lines and handles everything from OAuth login to mobile device management. Sixty-one separate files contain explicit comments about circular dependency workarounds. A type name used over 1,000 times across the codebase reads: AnalyticsMetadata_I_VERIFIED_THIS_IS_NOT_CODE_OR_FILEPATHS.

One standout detail: the word "duck" is encoded in hexadecimal -- String.fromCharCode(0x64, 0x75, 0x63, 0x6b) -- because the string apparently collides with an internal model codename that Anthropic's CI pipeline scans for. Rather than add a regex exception, every animal species in the pet system got hex-encoded.

This latest incident is not isolated. Fortune reported that a separate, earlier leak this week exposed nearly 3,000 files, including a draft blog post revealing a powerful upcoming model referred to internally as both "Mythos" and "Capybara." Security researchers who reviewed the Claude Code leak also warned that it potentially allows competitors to reverse-engineer its agentic harness and that, even without proper access keys, certain internal Anthropic systems may remain reachable -- raising concerns about nation-state exploitation of the company's most capable models.

Anthropic confirmed the incident but sought to limit the damage. A company spokesperson told Fortune no sensitive customer data or credentials were exposed, describing the incident as a release packaging issue caused by human error rather than a security breach, and adding that the company is rolling out measures to prevent a recurrence.

The timing is uncomfortable. Bloomberg reported this week that Anthropic is in early discussions with Goldman Sachs, JPMorgan, and Morgan Stanley about a potential October IPO, with a valuation hovering around $380 billion. The company has already rattled markets this year -- its Cowork and Claude Code Security updates wiped billions from software and cybersecurity stocks in a matter of weeks. Leaking your own source code, for the third time, is not ideal pre-IPO optics.
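The hex-encoding trick described in the leak is easy to reproduce: String.fromCharCode turns a list of character codes into a string at runtime, so a CI scanner grepping source files for a literal word never sees it. A minimal sketch (the hexEncode helper is illustrative, not from the leaked code):

```javascript
// Encode a word as hex character codes so the literal never appears in source.
function hexEncode(word) {
  return [...word].map(ch => "0x" + ch.charCodeAt(0).toString(16)).join(",");
}

// Decode at runtime: a scanner looking for the literal "duck" finds nothing,
// because only the numeric codes 0x64, 0x75, 0x63, 0x6b live in the source.
const species = String.fromCharCode(0x64, 0x75, 0x63, 0x6b);
console.log(species);            // "duck"
console.log(hexEncode("duck"));  // "0x64,0x75,0x63,0x6b"
```

The obvious trade-off, as the reporting notes, is that a one-line regex exception in the scanner would have avoided obfuscating every species name.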

Anthropic
The Times of India · 27d ago


Anthropic confirms Claude code source leak, says no sensitive data exposed

In the second such incident within a week, Anthropic has confirmed a Claude Code source code leak amid rising AI competition. The company insists that no user data was exposed.

Anthropic has confirmed that part of the internal source code for its popular coding assistant, Claude Code, was accidentally leaked, in what the company described as a human error rather than a security breach. The incident, which surfaced on Tuesday, drew significant attention across the developer community after a post on X sharing access to the code garnered more than 21 million views within hours. While the exposure has raised concerns about competitive risks, the company has sought to reassure users that no sensitive information was compromised. "No sensitive customer data or credentials were involved or exposed," an Anthropic spokesperson said in a statement, CNBC reports. "This was a release packaging issue caused by human error, not a security breach. We're rolling out measures to prevent this from happening again."

Leak raises competitive concerns

Although Anthropic has downplayed the security implications, the leak could still prove consequential. Source code disclosures can provide valuable insight into how a system is designed, potentially giving rival firms and independent developers a closer look at the architecture behind Claude Code, one of the company's fastest-growing AI tools. The assistant has gained traction for its coding capabilities, making the inadvertent exposure particularly sensitive from a competitive standpoint. Analysts note that even partial access to internal systems can reveal optimisation strategies, workflows, or proprietary approaches that companies typically guard closely. The timing of the leak has also intensified scrutiny, as it comes amid fierce competition in the artificial intelligence sector, where companies are racing to build increasingly capable coding assistants and enterprise tools.

Second incident in a week

The source code leak marks Anthropic's second high-profile data mishap in less than a week. According to a report by Fortune, unpublished documents detailing the company's upcoming AI model were recently discovered in a publicly accessible data cache. The exposed material reportedly revealed early information about a next-generation system referred to as "Claude Mythos", internally codenamed "Capybara". The documents suggest that the model represents a significant upgrade over Anthropic's current offerings, surpassing its top-tier Opus models in both scale and capability. Anthropic currently categorises its models into three tiers: Opus, Sonnet and Haiku. The leaked documents indicate that the new system would sit above Opus, positioning it as the company's most advanced and resource-intensive model to date.

The draft materials reviewed by Fortune also highlight potential concerns around misuse, including the system's possible application in cyberattacks. The company has reportedly completed training and is proceeding with limited external testing, signalling a cautious rollout strategy. In addition to model-related information, the data cache reportedly exposed details about an invite-only CEO summit in Europe, part of Anthropic's broader enterprise push. Nearly 3,000 unpublished assets were said to have been accessible before the issue was resolved. Anthropic has acknowledged that incident as well, attributing it to a configuration error in its content management system.

Together, the two episodes underscore the operational risks facing fast-growing AI firms as they scale infrastructure and accelerate product development. While neither incident involved confirmed breaches of user data, they highlight the challenge of maintaining strict internal controls in a rapidly evolving industry.

Anthropic
Firstpost · 27d ago

Anthropic Accidentally Exposes Source Code for Claude Code

Artificial intelligence firm Anthropic has accidentally revealed the source code for its popular coding tool Claude Code. The leak occurred Tuesday morning when the company published version 2.1.88 of Claude Code to the public npm registry and inadvertently included a source map file, exposing more than 500,000 lines of code and nearly 2,000 files. A link to an archive containing the files was posted to X by security researcher Chaofan Shou, attracting more than 27 million views.

Claude AI is a versatile AI tool capable of answering questions, generating creative content like stories and poems, translating languages, transcribing and analyzing images, writing code, summarizing text and engaging people in natural, interactive conversations.

An Anthropic spokesperson confirmed the leak, saying it was caused by human error and that the company is taking steps to prevent a recurrence. "Earlier today, a Claude Code release included some internal source code," the spokesperson said. "No sensitive customer data or credentials were involved or exposed."

While Claude Code has reportedly been reverse-engineered before, the leak gives developers a rare look at the roadmap for perhaps Anthropic's most popular product, allowing competitors to gain insight into the coding tool's underpinnings. The tool has seen a surge in popularity in recent months, with the Claude Code app experiencing a viral moment over the holidays as people discovered its vibe coding capabilities. Anthropic launched a Super Bowl ad campaign attacking rival OpenAI for its decision to put ads in its free and low-cost ChatGPT plans.
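The mechanism behind this kind of leak is worth spelling out: a JavaScript source map is plain JSON, and the v3 format's optional sourcesContent array can embed the complete original source of every file that went into a bundle. Shipping the .map file therefore effectively ships the pre-compiled code. A minimal sketch with a made-up map (the file names and contents here are illustrative, not from the actual leak):

```javascript
// A source map is plain JSON; "sourcesContent" (optional in the v3 format)
// carries the full original source of each input file, in the same order
// as the "sources" array.
const sourceMap = {
  version: 3,
  file: "cli.js",
  sources: ["src/main.ts", "src/agent.ts"],
  sourcesContent: [
    "export const main = () => console.log('hello');",
    "export class Agent { /* internal logic */ }",
  ],
  mappings: "AAAA",
};

// Anyone holding the .map file can recover every embedded original file.
function recoverSources(map) {
  return Object.fromEntries(
    map.sources.map((name, i) => [name, map.sourcesContent?.[i] ?? null])
  );
}

const recovered = recoverSources(sourceMap);
console.log(Object.keys(recovered)); // ["src/main.ts", "src/agent.ts"]
```

This is why publishing checklists commonly exclude .map files (or strip sourcesContent) from production artifacts: the compiled bundle alone reveals far less than the map that accompanies it.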

Anthropic
CNET · 27d ago

Anthropic Source Code Leaked For Second Time In A Week: Here's What It Means

Artificial intelligence startup Anthropic is facing a crisis yet again, after accidentally leaking the source code for its Claude Code platform for the second time in a single week. The security lapse took place on Tuesday during a software update, when a debugging file containing over 500,000 lines of proprietary code was bundled into the release. The error was first identified by security researcher Chaofan Shou, who shared the discovery on X. Within hours, the post reached 21 million views, leading to a massive surge in unauthorised downloads and mirrors across GitHub.

The incident has sparked a high-stakes digital pursuit as Anthropic's legal team issued a barrage of DMCA takedown notices. However, the leak took a turn when Sigrid Jin, a prominent developer and top user of Claude's API, reportedly rewrote the entire codebase in Python from scratch. Jin's version, titled "claw-code", reached 30,000 stars on GitHub in record time.

What's Next For Anthropic?

Because a total rewrite constitutes a new creative work, industry experts suggest it may bypass standard copyright takedown efforts. The timing of the leak is particularly damaging for the San Francisco-based firm: this marks the second time in a seven-day window that Anthropic has inadvertently exposed internal intellectual property. The repeated failures come despite the company's implementation of "Undercover Mode", a system specifically designed to prevent Claude from disclosing sensitive internal secrets. While Anthropic has attempted to scrub the code from centralised repositories, mirrors have already moved to decentralised platforms, rendering the leak effectively permanent. Anthropic has not yet commented on the status of its internal investigation or potential disciplinary actions regarding the deployment error.

Anthropic
NDTV Profit · 27d ago

Perplexity AI Machine Accused of Sharing Data With Meta, Google

Perplexity AI Inc. was accused in a lawsuit of surreptitiously sharing the personal information of its users with Meta Platforms Inc. and Alphabet Inc.'s Google in violation of California privacy laws.

As soon as users log into Perplexity's home page, trackers are downloaded onto their devices, giving Meta and Google full access to the conversations between them and Perplexity's AI Machine search engine, according to the proposed class-action complaint filed Tuesday in federal court in San Francisco. This allows Meta and Google "to exploit this sensitive data for their own benefit, including targeting individuals with advertising and reselling their sensitive data to additional third parties," according to the complaint. Users' personal data is shared even when they sign up for Perplexity's "Incognito" mode, the complaint says.

The suit was filed on behalf of a Utah man, identified only as John Doe, who seeks to represent a class of Perplexity users. According to the suit, the man shared information about his family's finances, his tax obligations, and his investment portfolio and strategies with Perplexity's chatbot. Perplexity embedded "undetectable" tracking software into the search engine's code that automatically transmits users' conversations to Meta, Google and other third parties, according to the complaint.

The lawsuit also targets Meta and Google, accusing them of violating federal and state computer privacy and fraud laws. A Meta spokesperson pointed to a Facebook help page which says it's against the tech giant's rules for advertisers to send the company sensitive information. "We have not been served any lawsuit that matches this description so we are unable to verify its existence or claims," said Jesse Dwyer, a Perplexity spokesperson. Representatives of Google didn't immediately respond to a request for comment.

The case is Doe v. Perplexity AI Inc., 3:26-cv-02803, US District Court, Northern District of California (San Francisco).

Perplexity
Bloomberg Business · 27d ago