The latest news and updates from companies in the WLTH portfolio.
SYDNEY, April 1 (Reuters) - Anthropic said on Wednesday it would sign an agreement to share its economic index data with the Australian government to help track artificial intelligence adoption across the economy, and its impact on workers and jobs.

Under the agreement, the Claude maker will share findings on emerging AI model capabilities and risks, participate in joint safety evaluations, and collaborate on research with Australian universities. Anthropic said it would also target investments in data centre infrastructure and energy across Australia.

"Australia's investment in AI safety makes it a natural partner for responsible AI development," Anthropic CEO Dario Amodei said in Canberra, where he is expected to meet Prime Minister Anthony Albanese on Wednesday. "This memorandum of understanding gives our collaboration a formal foundation."

The deal mirrors similar agreements with safety institutes in the United States, Britain and Japan. Australia currently has no specific AI legislation. The centre-left Labor government has said it would rely on existing laws to manage emerging AI risks while introducing voluntary guidelines amid privacy and safety concerns.

In its National AI Plan released in December, Labor outlined a roadmap to ramp up AI adoption across the economy, attract data centre investment, and build AI skills to support jobs as the technology becomes more integrated into daily life. (Reporting by Renju Jose in Sydney; Editing by Edmund Klamann)

Published on April 1, 2026
The accidental release of Claude's code exposes advanced AI features and raises ethical concerns

A single accidental file release has exposed the internal architecture of one of the most advanced AI coding systems ever built, revealing how it stores memory, operates in the background, and -- in one controversial case -- conceals its own identity. The Claude Code leak has become one of the most widely discussed incidents in the AI industry, offering an unprecedented glimpse into the internal mechanics of Anthropic's flagship developer tool. Within hours of the file appearing publicly, developers across the world were analysing hundreds of thousands of lines of code and uncovering features never intended for public release -- raising urgent questions about trust, security, and the future direction of AI systems.

The incident traces back to a technical oversight involving a debugging file. According to reporting by Hindustan Times, a 59.8MB JavaScript source map file was mistakenly included in a public npm release of Claude Code. This file, intended only for internal debugging, effectively exposed a detailed blueprint of the system's architecture. Developer Chaofan Shou highlighted the issue on X, posting a download link that accelerated its spread. Within hours, the entire codebase had been mirrored across platforms including GitHub, making containment nearly impossible.

One of the most striking revelations from the leak is the system's so-called 'Self-Healing Memory' architecture, which developers analysing the code noted tackles a persistent AI challenge known as 'context entropy', where models lose coherence during long interactions. Rather than storing everything in one place, the system uses a layered memory approach in which a lightweight file acts as an index, pointing to relevant data stored elsewhere, allowing the AI to retrieve only what it needs rather than reloading entire conversations.
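The index-plus-store layout described above can be sketched in a few lines of TypeScript. This is a hypothetical reconstruction based only on the published description of the leak, not Anthropic's actual code; the class and method names (`LayeredMemory`, `write`, `recall`) are invented for illustration.

```typescript
// Hypothetical sketch of an index-plus-store memory layout: a lightweight
// index maps topics to the location of the full data, so only relevant
// entries are ever loaded instead of the entire conversation history.
type IndexEntry = { file: string; summary: string };

class LayeredMemory {
  // The lightweight index: small enough to keep in context at all times.
  private index = new Map<string, IndexEntry>();
  // Stand-in for on-disk storage of full entries.
  private store = new Map<string, string>();

  write(topic: string, summary: string, fullText: string): void {
    const file = `memory/${topic}.md`;
    this.store.set(file, fullText);
    this.index.set(topic, { file, summary });
  }

  // Retrieval touches the store only for topics the index points to.
  recall(topic: string): string | undefined {
    const entry = this.index.get(topic);
    return entry ? this.store.get(entry.file) : undefined;
  }
}
```

The point of the split is that the model can keep the small index in its context window at all times and page in the heavy entries on demand.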
The system also follows a 'Strict Write Discipline', updating memory only after successful actions to reduce the risk of storing errors or misleading information. Notably, the AI treats its own memory as a 'hint' rather than a source of truth, verifying information before acting on it -- a layer of self-correction rarely seen in current AI tools.

Another key feature exposed in the leak is KAIROS, a system that allows the AI to operate in the background. Through a process referred to as 'autoDream', the AI continues to refine and organise its memory even when the user is inactive, effectively improving itself between sessions. Rather than functioning as a passive tool, the system behaves more like an ongoing collaborator, continuously preparing for future tasks. This shift towards persistent, agent-like behaviour signals a broader trend in AI development towards systems that remain active even when users are not.

Perhaps the most controversial element revealed by the leak is a feature labelled 'Undercover Mode', which appears to allow the system to contribute to public projects without revealing its identity. According to the leaked file as reported by Hindustan Times, internal instructions state: 'You are operating UNDERCOVER... Do not blow your cover.' This raises immediate ethical concerns. If AI can participate anonymously in open-source projects or public repositories, questions emerge about transparency, attribution, and accountability -- and for developers and organisations, this could blur the line between human and machine contributions in ways that are difficult to regulate.

The leak also shed light on Anthropic's internal model ecosystem, including codenames such as Capybara, Fennec, and Numbat. According to the leaked data, despite their sophistication, some versions show higher rates of false or misleading outputs compared to earlier iterations -- a finding that highlights that even cutting-edge AI systems remain imperfect and require careful oversight.
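The 'Strict Write Discipline' pattern, as described in the reports, can be illustrated with a short hedged sketch: memory is committed only after an action succeeds, and reads are re-verified before use. All names here (`DisciplinedMemory`, `runAndRecord`, `recallVerified`) are invented for illustration; this is not Anthropic's code.

```typescript
// Hypothetical sketch of a write-after-success rule: the memory update is
// staged alongside the action and committed only if the action succeeds,
// so failed or misleading results never enter long-term memory.
class DisciplinedMemory {
  private facts = new Map<string, string>();

  // Run the action first; persist the note only on success.
  runAndRecord<T>(key: string, note: string, action: () => T): T {
    const result = action(); // throws on failure: nothing is written
    this.facts.set(key, note);
    return result;
  }

  // Memory is a hint, not a source of truth: callers pass a verifier
  // that re-checks the fact before it is acted on.
  recallVerified(key: string, verify: (note: string) => boolean): string | undefined {
    const note = this.facts.get(key);
    if (note === undefined) return undefined;
    if (!verify(note)) {
      this.facts.delete(key); // stale hint: drop it rather than act on it
      return undefined;
    }
    return note;
  }
}
```

The self-correcting part is the verifier on the read path: a remembered fact that fails verification is discarded instead of trusted.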
Beyond technical curiosity, the leak has serious security implications. With the system's inner workings now public, malicious actors may attempt to exploit potential vulnerabilities. The timing has compounded concerns, coinciding with a separate supply-chain issue involving the axios npm package, and users who installed updates during this period may face elevated risks.

The Claude Code leak sparked immediate debate across X and developer forums, with users calling it a rare glimpse into advanced AI systems. While some praised the insight into features such as 'Self-Healing Memory', others raised concerns about 'Undercover Mode' and its potential for misuse. Many also questioned how such a significant oversight occurred.

Cybersecurity experts warned users to avoid interacting with leaked files and to remain cautious. According to reports, Anthropic has advised users to avoid the affected npm version and switch to its official installer. Security experts recommend adopting a zero-trust approach, auditing systems for anomalies, and rotating API keys as a precaution.

For all its risks, the incident has provided a rare and detailed look into how modern AI systems are engineered. From self-healing memory to autonomous background processes, the revelations highlight both the sophistication and the unresolved challenges of today's AI tools, and may ultimately shape how companies balance innovation with transparency in the next phase of artificial intelligence development.
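For teams auditing their environments, the most basic precaution is a check against the known-affected release. The sketch below assumes the version number cited elsewhere in the leak coverage (2.1.88); the function name is illustrative, and a real audit would also follow Anthropic's own advisory rather than rely on a hardcoded list.

```typescript
// Hypothetical helper for a CI or audit script: flag installs of a
// release publicly reported as affected. The version set is an
// assumption taken from leak coverage, not an official advisory.
const AFFECTED_VERSIONS = new Set(["2.1.88"]);

function isAffectedRelease(installed: string): boolean {
  // Normalise whitespace so output piped from a package manager matches.
  return AFFECTED_VERSIONS.has(installed.trim());
}
```

A script like this would feed on the output of the local package manager's version listing and alert when a match is found.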

Multi-car crash on Narrows Bridge sparks peak-hour chaos, Tonkin Highway also hit by traffic delays

Claire Sadler, The West Australian, Wed, 1 April 2026 6:30AM

A multi-car prang on a busy Perth freeway is already causing traffic chaos for commuters. Several cars collided around 6.30am on the northbound side of Narrows Bridge, blocking the two left lanes. Main Roads WA has warned there is traffic congestion on approach to Mill Point Road as commuters try to get around the crash. Police are also on scene.

A crash southbound on Tonkin Highway is also backing up traffic near the exit to Dunreath Drive and Perth Airport. The left turn lane to those exits is blocked as tow trucks work to move the cars. Main Roads WA said traffic was heavy on approach back to Guildford Road and urged drivers to merge safely. Drivers are urged to exercise extreme caution due to the extra congestion.

More to come.

Brisbane commuters have woken to widespread rail chaos, as an industrial dispute shuts down key train lines just days before a massive month-long network closure. Hundreds of Rail, Tram and Bus Union members walked off the job on Wednesday after wage negotiations with the Queensland government broke down, triggering major disruptions across the Ipswich/Rosewood and Cleveland lines.

Queensland Rail confirmed there were no trains running between Darra and Rosewood, and between Central and Cleveland, with rail replacement buses deployed across both corridors. Travellers were urged to allow extra time and consider alternative options, with services expected to reach capacity. The disruption comes at a particularly difficult time for commuters already grappling with soaring fuel prices and increased reliance on public transport.

Speaking on Today, Queensland Rail chief executive Kat Stapleton "profusely apologised" to travellers and urged unions to abandon industrial action while negotiations continued. "Unfortunately, there are another over 30 protected industrial action notices that we've received," Ms Stapleton said. "We will not be able to handle all of them unless the unions stop protected industrial actions and come back to the table." She said Queensland Rail had done its "very best" to minimise disruptions.

The dispute centres on enterprise bargaining negotiations covering about 5600 rail workers that have been ongoing since January. Queensland Rail said unions had made more than 500 claims, including additional leave entitlements, a shorter work week, and higher superannuation contributions. Ms Stapleton said many of the claims "far exceeded community norms". The Rail, Tram and Bus Union (RTBU) insists the action was intended to target coal and freight operations rather than passenger services and has accused the government of escalating the situation.
RTBU state secretary Peter Allen said the last thing the union wanted was for Queensland commuters to be "caught in the crossfire" of a bargaining dispute. "That's why they are taking limited industrial action that would have no effect on passengers and would be limited to coal and mineral trains," Mr Allen said. "Unfortunately, the Queensland government has responded with a heavy-handed and disproportionate action, looking to turn a minor ban on mineral trains into a full-time stoppage. Any impact on passengers is purely self-inflicted and entirely the choice of the Queensland government."

The union also claimed members had been "locked out" after refusing partial duties, estimating about 200 train control staff would take part in the 24-hour strike. Queensland Rail said the extent of disruptions could change at short notice and encouraged passengers to monitor updates and make alternative arrangements where possible.

Wednesday's stoppage is only the beginning of travel headaches, with a major 23-day rail shutdown starting on Friday and running until April 26. The planned closures will affect the Sunshine Coast, Caboolture, Redcliffe, Doomben, Shorncliffe, Airport, Gold Coast, and Beenleigh corridors, as authorities carry out a co-ordinated blitz of upgrades and maintenance. Rail replacement buses will service affected stations during the works, with some journeys expected to take significantly longer. Transport and Main Roads said bundling the works into one extended closure was to reduce long-term disruption and align with school holidays when fewer people travel.
Anthropic says it'll sign a memorandum of understanding with the Australian government to share data from its "economic index," meant to measure AI use and shifting tasks across industries. The agreement also includes joint AI safety evaluations, research-sharing on new model capabilities and risks, and work with Australian universities - a sign policymakers want faster feedback than traditional studies can provide. Anthropic is also pointing to investment in Australian data centers and energy, linking AI growth to the facilities and electricity needed to run models at scale. It's a familiar playbook: the firm has also worked with AI safety institutes in the US, UK, and Japan as governments build oversight capacity alongside fast-moving tech.

Why should I care?

For markets: AI's constraints are physical, not just digital. The bigger tell is where spending has to go next: chips need data centers, and those centers need steady power. Australia's National AI Plan is trying to attract that investment, which could support demand for infrastructure builders, data center operators, and utilities. If more countries pair "measure it" programs with incentives to expand compute, the AI buildout could keep moving even if some app trends cool.

The bigger picture: Rules are emerging through partnerships and metrics. Australia isn't planning AI-specific legislation and is leaning on existing laws plus voluntary guidelines - reflecting a global push-pull between moving fast and staying safe. Deals like this can help regulators spot workforce disruption and model risks earlier, while they figure out what rules are actually needed. The trade-off is fragmentation: voluntary standards can differ across borders, making compliance messier and raising the value of shared safety tests and common benchmarks.

According to a Thursday (March 26) report by Fortune, a configuration error in Anthropic's content management system left nearly 3,000 unpublished documents in a publicly searchable data store, including a draft blog post describing the model as "by far the most powerful AI model we've ever developed." Independent security researchers at LayerX Security and the University of Cambridge found the materials before Anthropic restricted access.

An Anthropic spokesperson confirmed the model's existence to Fortune, calling it a step change and the most capable system the company has built, with meaningful advances in reasoning, coding and cybersecurity. The company said it is testing the model, known as Claude Mythos, with a small group of early-access customers and has not set a general release date, partly because it remains expensive to run at scale.

The leaked draft described Mythos as part of a new model tier called Capybara, positioned above its current top-tier Opus models in both capability and cost. Where prior models respond to instructions one step at a time, Mythos plans and executes sequences of actions on its own, moving across systems, making decisions and completing operations without waiting for human input at each stage.

As reported by Fortune, the leaked document described Mythos as currently far ahead of any other AI model in cybersecurity capabilities and said it signals an approaching generation of systems that can find and exploit software weaknesses faster than defenders can close them. Anthropic said its rollout plan prioritizes enterprise security teams, giving defenders early access before the model reaches wider distribution.
According to a Sunday (March 29) report from Axios, Anthropic has been privately warning senior government officials that Mythos makes large-scale cyberattacks significantly more likely in 2026, and that agents running on systems at this capability level can plan and carry out complex operations with minimal human involvement.

According to a separate Axios report, published in November, a Chinese state-sponsored hacking group in September used an earlier Claude model to carry out 80-90% of a coordinated attack campaign on its own, working through roughly 30 organizations including technology companies, financial institutions and government agencies before Anthropic detected and shut it down. The AI identified targets, found weaknesses, wrote attack code and produced detailed post-operation reports, all with minimal human direction. The operators running the attack convinced the model it was performing legitimate security testing. Once inside that framing, the AI executed the operation without further instruction.

A Dark Reading poll published in January found that 48% of cybersecurity professionals now rank agentic AI as the top attack vector for 2026, above deepfakes and social engineering.

As reported by PYMNTS, the September Claude Code incident marked the first confirmed case in which an AI agent handled most steps of a cyberattack normally performed by human operators. Eva Nahari, then-chief product officer at AI solutions firm Vectara, told PYMNTS the campaign was "global, industry-agnostic and growing," adding that with automation comes velocity and scale, and that attackers are now acquiring the same advantages that AI gives enterprises.

As also reported by PYMNTS, Anthropic's earlier research found that its Claude Opus 4.5 model reduced successful prompt injection attacks to 1% in browser-based operations, down from higher breach rates in earlier versions, though the underlying vulnerability persists as browser-based automation grows more common.
PYMNTS Intelligence found that 98% of business leaders remain unwilling to grant AI agents action-level access to core systems, with trust as the primary constraint on adoption.

According to a Monday (March 30) report by CSO Online, shares of major cybersecurity vendors, including CrowdStrike, Palo Alto Networks, Zscaler and Fortinet, fell following the Mythos news as investors considered what frontier AI capabilities embedded in security tools could mean for the industry's competitive structure.

'Any disruptions or closures ... are entirely the choice of the Queensland Government.'

Brisbane commuters are experiencing major public transport disruptions with hundreds of Rail, Tram and Bus Union (RTBU) members locked out of work by the state government as negotiations over wages soured. Commuters received a rude shock on Wednesday morning when they arrived at stations across the city only to find hundreds of services had been cancelled. Translink said there were cancellations on the Ipswich/Rosewood and Cleveland lines, with no trains running between Darra and Rosewood and from Central to Cleveland.

Buses will operate between Rosewood and Darra, with trains still running between Darra and Central in both directions. Buses will also operate between Cleveland and Boggo Rd in both directions. Customers can connect with rail and bus services at Boggo Rd.

The RTBU blamed the government for the disruption to commuters, saying it had locked rail workers out of their jobs. "In response to some very minor industrial action that would have no impact on passengers, the government has decided to inflict massive disruption on commuters and lock out our train controllers," union officials said on Tuesday night. "There is no strike action and our members are ready to work ... to keep trains running and get people to work and school."

However, Queensland Rail blamed the union workers as it urged commuters to make alternative travel arrangements. "Queensland Rail and TransLink are working closely with local bus operators to help minimise impacts to customers during current protected industrial action," Queensland Rail said. "A limited number of replacement bus services will operate along the affected lines. We are advising customers travelling on these lines to make alternative travel arrangements. Changes can occur at short notice and customers can keep up to date via our social media channels."
RTBU union officials said its planned industrial action for Wednesday was only due to disrupt coal and freight train services, not passenger lines. "Any disruptions or closures ... are entirely the choice of the Queensland Government," the union said. The disruptions come only days before planned track closures from April 3-26 for major project works.

Why it matters: The leak hands competitors a detailed unreleased feature roadmap and deepens questions about operational security at a company that sells itself as the safety-first AI lab.

State of play: A file used internally for debugging was accidentally bundled into a routine update of Claude Code and pushed to the public registry developers use to download and update software packages.

* The file, which was quickly discovered by Chaofan Shou, pointed to a zip archive on Anthropic's own cloud storage containing the full source code, with nearly 2,000 files and 500,000 lines of code.
* Within hours, the codebase was mirrored and dissected across GitHub, quickly amassing thousands of stars.

What they're saying: "Earlier today, a Claude Code release included some internal source code. No sensitive customer data or credentials were involved or exposed," an Anthropic spokesperson told Axios.

* "This was a release packaging issue caused by human error, not a security breach. We're rolling out measures to prevent this from happening again."

Zoom in: The leaked code contained dozens of feature flags for capabilities that appear fully built but haven't shipped, according to an Anthropic spokesperson, including:

* The ability for Claude to review what was done in its latest session to study for improvements in the future while transferring learnings across conversations.
* A "persistent assistant" running in background mode that lets Claude Code keep working even when a user is idle.
* Remote capabilities, allowing users to control Claude from a phone or another browser, which was already rolled out for Claude Code.

Between the lines: Outside developers have already reverse-engineered Claude Code, prompting a takedown notice from Anthropic, according to TechCrunch.

* What's new is the roadmap: a clear picture of how Anthropic is building toward longer autonomous tasks, deeper memory and multi-agent collaboration.
* Those kinds of updates could be a boon for Anthropic's enterprise push, which is the core driver of its revenue strategy, as the AI lab prepares to go public.

Thought bubble: How AI companies lock down and secure their own systems is now just as important as how other organizations fend off hackers using these AI tools in their attacks, writes Sam Sabin, author of the weekly Future of Cybersecurity newsletter.

The bottom line: The leak won't sink Anthropic, but it gives every competitor a free engineering education on how to build a production-grade AI coding agent and what tools to focus on.
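The feature-flag gating described above is a standard technique, and a minimal sketch shows why unshipped features normally stay invisible: if flags are replaced with literal booleans at build time, a bundler's dead-code elimination can strip the disabled branches from the public bundle entirely. The flag names below are paraphrased from the reported features, not taken from the leaked code.

```typescript
// Hypothetical sketch of compile-time feature gating. In a real build the
// bundler (e.g. Bun or esbuild, via a --define-style option) substitutes
// literal booleans for these flags, then removes the false branches, so
// gated features never appear in the shipped artifact.
const FEATURES: { persistentAssistant: boolean; remoteControl: boolean } = {
  persistentAssistant: false, // built but not shipped in public builds
  remoteControl: true,        // already rolled out, per the report
};

function describeSession(): string[] {
  const active: string[] = [];
  if (FEATURES.persistentAssistant) {
    // In a public build this whole branch is stripped by the bundler.
    active.push("background mode: keeps working while the user is idle");
  }
  if (FEATURES.remoteControl) {
    active.push("remote control: drive the session from a phone or browser");
  }
  return active;
}
```

The catch, as the leak demonstrated, is that dead-code elimination only protects the minified bundle: a source map shipped alongside it can still reveal every gated branch.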



Anthropic built an entire subsystem called "Undercover Mode." The job is specific: stop Claude from accidentally dropping internal codenames into git commits, mentioning unreleased models in PR descriptions, or outing itself as an AI when working on public code. The system prompt tells Claude, in plain English, "Do not blow your cover."

On March 31, Anthropic blew its own cover. Version 2.1.88 of Claude Code shipped to the npm registry with a 59.8 MB source map file attached, the kind of debugging artifact that maps minified JavaScript back to the original source. That file contained readable TypeScript for the entire product. All 512,000 lines. All 1,900 files. Every internal constant, every system prompt, every feature flag.

Within hours, the codebase was mirrored across GitHub and analyzed by thousands of developers. By Tuesday afternoon, a single post on X linking to the source sat at 21 million views. The link dropped at 4:23 a.m. Eastern. Nobody at Anthropic's San Francisco headquarters was awake to watch it travel.

This happened before. In February 2025, the same mistake. Source map, npm, full exposure. Anthropic patched it then. Thirteen months later, the identical failure. Same vector, same root cause, same product.

The code itself is not the story. What the code contains is. Claude Code looks like a polished terminal assistant. You type, it responds, it edits files and runs commands. Simple premise. The source tells a different story. Claude Code carries 44 compile-time feature flags that gate capabilities invisible to external users. Most compile to false in public builds, stripped out entirely by Bun's dead-code elimination. Anthropic releases them on its own schedule, a handful at a time. The restraint is the point.

KAIROS is the one that should make you pause. Named after the Greek concept of "the opportune moment," it transforms Claude Code from a reactive tool into an autonomous daemon. It watches your open files. It logs observations.
It acts proactively on things it notices, without being asked. A 15-second blocking budget keeps it from disrupting your workflow, but make no mistake: this is a background agent sitting inside your development environment, operating on its own judgment about what deserves attention.

Then there is autoDream. While you are away, a forked subagent consolidates the day's observations into long-term memory. It resolves contradictions, converts vague notes into verified facts, prunes stale information. The system prompt calls it what it is: "a dream, a reflective pass over your memory files." Three gates control when it fires: twenty-four hours since the last run, five sessions minimum, and a lock to prevent concurrent dreams.

ULTRAPLAN offloads complex planning to a remote container running Opus 4.6, with a 30-minute thinking budget. A browser UI lets you watch the reasoning happen in real time. When approved, a sentinel value "teleports" the result back to your terminal.

And there is Buddy. A Tamagotchi-style companion pet with 18 species, shiny variants, five procedurally generated stats (DEBUGGING, PATIENCE, CHAOS, WISDOM, SNARK), and a soul description written by Claude on first hatch. The code references an April teaser window with a full launch gated for May.

This is not a CLI. This is an operating system for software development, and the public release is a carefully managed storefront window.

The leak confirmed what some developers suspected but could not prove. Anthropic employees use Claude Code to contribute to public open-source repositories, and the tool scrubs any trace of AI involvement from the output.

Here is the Undercover Mode system prompt, verbatim: "You are operating UNDERCOVER in a PUBLIC/OPEN-SOURCE repository. Your commit messages, PR titles, and PR bodies MUST NOT contain ANY Anthropic-internal information. Do not blow your cover." No model codenames in commits. No mention of Claude Code. Co-Authored-By attribution lines get stripped.
To anyone reviewing the pull request, it looks like a human wrote it. If you maintain an open-source project, you may have already reviewed and merged Claude-generated pull requests without knowing it.

The engineering might be solid. But open-source communities operate on trust and attribution. Undisclosed AI contributions corrode both, and the reaction from maintainers has been defensive for good reason.

Anthropic would argue this is standard dogfooding. That is reasonable. But "do not blow your cover" goes beyond testing your own tools. It is a framework for concealment, and the leaked code hands every enterprise customer the exact template to replicate it.

The financial exposure is what should make Anthropic anxious. Claude Code generates an estimated $2.5 billion in annualized recurring revenue, a figure that has doubled since January. Enterprise customers account for 80% of that number. The company is preparing for an IPO later this year. Exposing the internals of your highest-growth product right before public-market scrutiny is the kind of timing no one would choose.

Now every competitor has the architecture: the three-layer memory system centered on a lightweight index file, the multi-agent coordinator with scratchpad-based knowledge sharing, the permission engine with ML-based auto-approval, the bash validation logic spanning thousands of lines.

CryptoBriefing reported that developers began rewriting components under the name "Claw Code" within hours of the leak, with a Rust rewrite already underway. No waiting for a clean-room analysis. No guessing at implementation details. Just download, read, and build.

But the competitive threat splits into two categories with very different dynamics. The day before the leak, OpenAI shipped a Codex plugin that runs inside Claude Code. Read that again: OpenAI built a plugin for a competitor's product. The repository collected 3,700 GitHub stars in a single day.
According to a Wall Street Journal report, Fidji Simo, OpenAI's CEO of Applications, called Anthropic's success with Claude Code and Cowork an internal "wake-up call." OpenAI plans to refocus resources on coding tools and enterprise customers, merge Codex and ChatGPT into a single desktop application, and build out a plugin marketplace with governance controls pitched at CIOs.

The strategy is telling. Rather than wait for developers to switch, OpenAI is embedding its model directly into the workflow developers already chose. That is what conceding market dominance looks like while trying to maintain a foothold.

The leaked source code compresses OpenAI's R&D timeline on specific subsystems: how Anthropic solved context entropy, how the permission engine classifies risk, how agents coordinate without corrupting each other's state. Those are hard engineering problems, and the solutions are now readable TypeScript. Cursor, GitHub Copilot, and a dozen well-funded startups get the same advantage.

But Western competitors face two constraints. First, clean-room implementation: they can study architectural patterns, but copying code directly carries trade-secret liability, especially against a company heading into public markets with every incentive to litigate. Second, the code is the harness, not the engine: Claude 4.6 Opus's reasoning quality is not in the TypeScript, so competitors still need their own models to be competitive on raw capability.

The bigger gift is the roadmap. KAIROS, autoDream, ULTRAPLAN: Western competitors now know what is coming and can prioritize matching those features before Anthropic ships them. Strategic surprise, once lost, cannot be recovered.

For Chinese AI companies, the leak fills a gap that years of systematic effort could not close through other means. In February 2026, Anthropic revealed that three Chinese laboratories had run industrial-scale extraction campaigns against Claude.
DeepSeek targeted reasoning and evaluation tasks, effectively using Claude as a reward model for reinforcement learning. Moonshot AI focused on agentic reasoning and tool use. MiniMax ran the largest operation, generating over 13 million exchanges through approximately 24,000 fraudulent accounts. MiniMax's specific focus: agentic coding and tool use.

A legal analysis published by Just Security documented the operational playbook: commercial proxy services managing 20,000 simultaneous fraudulent accounts, and deliberate mixing of distillation traffic with unrelated requests to evade detection. When Anthropic banned accounts, replacements appeared within hours.

Those campaigns extracted model behaviors, reasoning patterns, chain-of-thought processes. What they could not extract was the orchestration layer: how to manage permissions across 40 tools, how to coordinate multiple agents without state corruption, how to compress context across sessions that span hours, how to build a production-grade harness that turns a capable model into a $2.5 billion product. That engineering does not leak through API responses.

The source code leak provides exactly that missing layer: the three-layer memory architecture, the multi-agent coordinator with scratchpad-based knowledge sharing, the feature flag system, and the anti-distillation mechanisms, now exposed and therefore bypassable.

Chinese labs that already have competitive base models, DeepSeek and Alibaba's Qwen among them, can now wrap those models in production-grade agentic infrastructure studied from the market leader. The combination of distilled model capabilities and leaked harness architecture creates a faster path to parity than either alone. DeepSeek is actively hiring Agent Deep Learning researchers and Agent Infrastructure Engineers; the demand for this architecture is immediate.

The legal exposure is asymmetric. Anthropic can sue U.S. competitors for trade-secret misappropriation.
It has no practical enforcement mechanism against laboratories operating outside U.S. jurisdiction, the same laboratories that already demonstrated willingness to run fraudulent accounts at industrial scale.

The internal performance data makes it worse. The source reveals that Capybara, Anthropic's internal codename for a Claude 4.6 variant, carries a 29-30% false claims rate in its eighth iteration, a regression from 16.7% in version four. Competitors did not just get the blueprints. They got the test results showing where the building leaks.

Five days before the npm leak, Fortune reported that Anthropic's content management system left approximately 3,000 unpublished assets publicly accessible. Among them: a draft blog post for an unreleased model the company confirmed as "a step change and the most capable we've built to date." Details of an invite-only CEO retreat in the U.K. were also exposed. An Anthropic spokesperson attributed the exposure to "human error" in the CMS configuration.

Human error again. Different system, same explanation. That is the tell.

Anthropic's brand rests on safety. White papers about existential risk. Voluntary commitments to responsible deployment. Enterprise buyers pay a premium for that reputation, and regulators cite Anthropic as the company doing things right. Then you learn they cannot configure .npmignore. The gap between the safety pitch and the operational reality gets harder to explain away with every incident.

The concurrent axios supply chain attack compounds the damage. On the same morning as the leak, malicious versions of the axios npm package (1.14.1 and 0.30.4) distributed a remote access trojan. Anyone who installed or updated Claude Code between 00:21 and 03:29 UTC on March 31 may have pulled the compromised dependency. Anthropic now recommends its native installer over npm entirely: the company walked away from its own distribution channel on the same day it leaked its own source code through it.
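The basic triage for an incident like the axios compromise is a lockfile scan. The sketch below is a minimal illustration, not Anthropic's or npm's tooling; the function name and the npm v2/v3 lockfile layout it assumes are mine, and only the axios version numbers come from the report above.

```python
import json

# Versions of axios the report identifies as carrying the remote access trojan.
COMPROMISED = {"axios": {"1.14.1", "0.30.4"}}

def flagged_packages(lockfile_text: str) -> list[str]:
    """Return 'name@version' strings for dependencies pinned to a known-bad release.

    Assumes the npm v2/v3 lockfile layout, where resolved packages live under
    the top-level "packages" key as "node_modules/<name>" entries.
    """
    lock = json.loads(lockfile_text)
    hits = []
    for path, meta in lock.get("packages", {}).items():
        name = path.rpartition("node_modules/")[2]  # strip the node_modules prefix
        if meta.get("version") in COMPROMISED.get(name, ()):
            hits.append(f"{name}@{meta['version']}")
    return hits

# Usage: a lockfile fragment with one compromised pin.
sample = json.dumps({
    "packages": {
        "": {"name": "demo"},
        "node_modules/axios": {"version": "1.14.1"},
        "node_modules/left-pad": {"version": "1.3.0"},
    }
})
print(flagged_packages(sample))  # ['axios@1.14.1']
```

In practice the same check is what advisory tooling such as `npm audit` automates against a vulnerability database; the point of the sketch is only that a compromised pin is mechanically detectable from the lockfile alone.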
Anthropic confirmed the leak to CNBC on Tuesday, roughly twelve hours after the code started circulating. One new sentence appeared in the company's response: "We're rolling out measures to prevent this from happening again."

Rolling out measures, for a failure mode the company already identified and patched thirteen months ago. The phrasing sidesteps the question it should answer first: what happened to the measures from last time?

CNBC called it Anthropic's second major data exposure in under a week, following the Fortune report on roughly 3,000 unpublished assets sitting in a publicly accessible content management cache. Two distinct systems failed basic access controls within five days. Same company. Same two-word explanation both times: human error.

The competitive response did not wait for Anthropic's statement. Google, xAI, and OpenAI are all accelerating their own coding agent investments, CNBC reported, chasing the developer adoption that pushed Claude Code past $2.5 billion in annualized revenue by February. Those three companies can now study the architectural decisions behind the product they are racing to replicate. Anthropic's answer: a promise of future measures, no timeline attached, no specifics offered, for a vector that already failed them once.

Anthropic's statement called this "a release packaging issue caused by human error, not a security breach." No customer data leaked. No model weights escaped. No credentials were exposed. Call it a packaging issue if you want: what actually shipped to a public registry was the source code for a $2.5 billion product. Forty-four feature flags. Internal model codenames. Performance benchmarks competitors would have paid millions to see. The exact orchestration logic needed to build a clone. All of it, for the second time in thirteen months.

OpenAI is embedding Codex inside Claude Code because it cannot yet convince developers to leave.
Chinese laboratories that ran 16 million fraudulent API exchanges to extract Claude's reasoning now have the production harness those exchanges could never reach. Western competitors are constrained by clean-room requirements and litigation risk; the Chinese laboratories are not. That asymmetry is the actual cost of a misconfigured .npmignore file.

If you are an enterprise customer evaluating AI coding agents, the question is direct: the company that built Undercover Mode to prevent this kind of exposure could not prevent this kind of exposure. Twice. And a company that has publicly promoted its own AI agents for software development still shipped a .map file that a junior engineer's pre-publish checklist would have caught.

Undercover Mode works perfectly. The build pipeline does not. When version 2.1.89 ships, the test is simple: does the .map file come with it? If Anthropic cannot pass that check after two identical failures, the "responsible AI lab" label stops being a brand promise and starts being a liability.
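The pre-publish check that would have caught the .map file fits in a few lines. This is a hypothetical gate, not Anthropic's pipeline; the function name and the list of forbidden artifact suffixes are assumptions for illustration.

```python
from pathlib import Path
import tempfile

# Debug artifacts that should never ship in a public package.
FORBIDDEN_SUFFIXES = (".map", ".tsbuildinfo")

def debug_artifacts(package_dir: str) -> list[str]:
    """Return relative paths of debug artifacts found in a staged package."""
    root = Path(package_dir)
    return sorted(
        str(p.relative_to(root))
        for p in root.rglob("*")
        if p.is_file() and p.suffix in FORBIDDEN_SUFFIXES
    )

# Usage: a staged package containing a bundled CLI plus a stray source map.
with tempfile.TemporaryDirectory() as d:
    (Path(d) / "cli.js").write_text("console.log('hi')")
    (Path(d) / "cli.js.map").write_text("{}")
    leaks = debug_artifacts(d)
    print(leaks)  # ['cli.js.map']
```

In an npm workflow the equivalent manual step is inspecting `npm pack --dry-run` output before publishing, or excluding artifacts up front via `.npmignore` or the package.json `files` allowlist; a CI gate like the sketch above just makes the check impossible to forget.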

Brisbane rail strike causes major commuter disruptions ahead of month-long shutdown

Andrew Hedgman, NewsWire
Wed, 1 April 2026, 7:17am

Brisbane commuters have woken to widespread rail chaos, as an industrial dispute shuts down key train lines just days before a massive month-long network closure.

Hundreds of Rail, Tram and Bus Union members walked off the job on Wednesday after wage negotiations with the Queensland government broke down, triggering major disruptions across the Ipswich/Rosewood and Cleveland lines.

Queensland Rail confirmed there were no trains running between Darra and Rosewood, and between Central and Cleveland, with rail replacement buses deployed across both corridors. Travellers were urged to allow extra time and consider alternative options, with services expected to reach capacity.

The disruption comes at a particularly difficult time for commuters already grappling with soaring fuel prices and increased reliance on public transport.

Speaking on Today, Queensland Rail chief executive Kat Stapleton "profusely apologised" to travellers and urged unions to abandon industrial action while negotiations continued. "Unfortunately, there are another over 30 protected industrial action notices that we've received," Ms Stapleton said. "We will not be able to handle all of them unless the unions stop protected industrial actions and come back to the table." She said Queensland Rail had done its "very best" to minimise disruptions.

The dispute centres on enterprise bargaining negotiations covering about 5600 rail workers that have been ongoing since January. Queensland Rail said unions had made more than 500 claims, including additional leave entitlements, a shorter work week, and higher superannuation contributions. Ms Stapleton said many of the claims "far exceeded community norms".
The Rail, Tram and Bus Union (RTBU) insists the action was intended to target coal and freight operations rather than passenger services and has accused the government of escalating the situation. RTBU state secretary Peter Allen said the last thing the union wanted was for Queensland commuters to be "caught in the crossfire" of a bargaining dispute.

"That's why they are taking limited industrial action that would have no effect on passengers and would be limited to coal and mineral trains," Mr Allen said. "Unfortunately, the Queensland government has responded with a heavy-handed and disproportionate action, looking to turn a minor ban on mineral trains into a full-time stoppage. Any impact on passengers is purely self-inflicted and entirely the choice of the Queensland government."

The union also claimed members had been "locked out" after refusing partial duties, estimating about 200 train control staff would take part in the 24-hour strike. Queensland Rail said the extent of disruptions could change at short notice and encouraged passengers to monitor updates and make alternative arrangements where possible.

Wednesday's stoppage is only the beginning of travel headaches, with a major 23-day rail shutdown starting on Friday and running until April 26. The planned closures will affect the Sunshine Coast, Caboolture, Redcliffe, Doomben, Shorncliffe, Airport, Gold Coast, and Beenleigh corridors, as authorities carry out a co-ordinated blitz of upgrades and maintenance.

Rail replacement buses will service affected stations during the works, with some journeys expected to take significantly longer. Transport and Main Roads said bundling the works into one extended closure was intended to reduce long-term disruption and align with school holidays when fewer people travel.



