The latest news and updates from companies in the WLTH portfolio.
Symphony for Medical Coding delivers production-ready medical coding automation across the US and Europe, based on learnings from the largest study of its kind.

NEW YORK and COPENHAGEN, Denmark, April 1, 2026 /PRNewswire/ -- Corti, the frontier lab for clinical-grade AI, today released Symphony for Medical Coding, an agentic model that outperforms OpenAI and Anthropic - as well as Amazon, Oracle, and Google - in medical coding by more than 25% in clinical accuracy benchmarks. It is now available via Corti's API to any team building AI-powered healthcare software.

The cost of getting it wrong

Medical coding converts clinical reality into structured data, powering reimbursement, reporting, and public health decisions. Coding errors are expensive, but the human cost goes much further. One example shows the scale of what is missed: in a recent study of Danish patient data, Corti identified three times as many suicide attempts as had been coded. The cases were all there - recorded in clinical notes, flagged in medication records - but coders, working under time pressure, had missed them. When cases go uncounted, health systems can't monitor trends, allocate resources, or design interventions. Policy fails before it starts.

Defined by frontier research

Medical coding is fundamentally a reasoning task, not a prediction problem. It demands interpretation, real judgment, and justification across thousands of codes. The American coding system alone, ICD-10-CM, has 70,000 diagnosis codes. Worse, coding is governed by guidelines that constantly evolve, making models trained only on historical data inadequate.
Corti started addressing this by conducting the largest study of its kind (5.8 million patient encounters), leading to Code Like Humans, a multi-agent framework accepted to EMNLP 2025, one of machine learning's top conferences. This framework mirrors professional coders' steps: identifying evidence, reasoning through hierarchies, validating against guidelines, and reconciling ambiguity. Symphony for Medical Coding builds on this foundation to perform work like expert coders, delivering higher quality than other models at a fraction of the cost.

"Most AI systems fall short in medical coding because they treat it as labeling, not reasoning. Correct coding depends on evidence, context, hierarchy, and guideline interpretation. We built Symphony for Medical Coding to follow the same decision process expert coders use, and that is why the performance gap is so meaningful," said Lars Maaløe, PhD, CTO and co-founder of Corti.

"The methodology behind Code Like Humans is the most promising approach to medical coding we've seen. We've been co-developing with Corti because we believe specialized AI infrastructure is how this problem gets solved - and we're excited to see it move into production," added Steve West, Managing Director, Healthliant Ventures and Tanner Health.

Accuracy that can be audited

In medical coding, accuracy requires traceability, defensibility, and ease of review. Symphony for Medical Coding links each assigned code to its clinical evidence and highlights ambiguities, giving teams, compliance leaders, and auditors a clear record of every coding decision.

"Medical coding has been treated as a back-office cost center for decades. It isn't - it's the data layer that healthcare runs on. Getting it right changes what health systems can see, decide, and do," said Andreas Cleve, CEO and co-founder of Corti.

Available across the US and Europe, as one system

Coding systems vary widely, and most AI products require local fine-tuning.
Symphony for Medical Coding does not. It is the first coding system designed to operate across both US diagnosis coding (ICD-10-CM) and procedure coding (ICD-10-PCS, CPT) and European coding environments without the need for local retraining. Support for ICD-10, the WHO-maintained international standard, is currently available in beta as Corti expands across priority European markets, including the UK, Germany, France, and Denmark. Symphony for Medical Coding is available now through the Corti Console, integrates directly with the Corti Agentic Framework, and supports both A2A and MCP standards. Enterprise and sovereign cloud deployments are available through Corti.

About Corti

Corti is healthcare's frontier lab for clinical-grade AI. Symphony, its flagship clinical-grade AI model, powers clinical and administrative applications for EHR vendors, virtual care platforms, practice management systems, and life sciences organizations worldwide. Corti serves over 100 million patients annually across health systems including the NHS. The company is headquartered in Copenhagen with offices in New York and London. For more information, visit corti.ai.

View original content: https://www.prnewswire.co.uk/news-releases/corti-ships-symphony-for-medical-coding-with-more-than-25-accuracy-edge-over-openai-and-anthropic-302730432.html

Anthropic shipped Claude Code's 512,000-line source to npm for the second time in thirteen months. Days earlier, its CMS left 3,000 files publicly accessible. The leaked code revealed genuine engineering ambition, but innovation without operational discipline is carelessness with better branding. With an IPO and a Pentagon standoff over safety guardrails, the gap between rhetoric and operations keeps widening.

I cannot stand preachy commentary. It is pretentious, it never ages well, and it belongs to an era of opinion writing the profession should have outgrown. I need to get that out of the way first, because this piece is heading in precisely that direction. You have probably guessed as much. An advance apology, then, especially since the subject is a company for which I have great admiration.

On March 31, version 2.1.88 of Claude Code arrived on the npm public registry with a 59.8-megabyte debugging artifact attached. That file contained the complete TypeScript source for Anthropic's highest-revenue product. All 512,000 lines. Every feature flag, system prompt, and internal codename. A post on X linking to the exposed code collected 21 million views before Anthropic issued a statement. Thirteen months earlier, the identical failure hit the identical registry through the identical vector. Call it a "release packaging issue caused by human error." Better: call it a pattern.

The admiration is real. The leaked source revealed genuine engineering ambition: an autonomous daemon called KAIROS, a memory consolidation engine, a planning system that offloads thirty-minute reasoning sessions to remote containers. Claude Code is closer to an operating system for software development than a terminal assistant, and the architecture reflects engineers who think in original, ambitious terms. That creativity deserves respect. Innovation without operational discipline, though, is carelessness with better branding. And so, reluctantly, the sermon.
Anthropic's identity rests on a single proposition: it is the responsible AI company. White papers on existential risk. Voluntary deployment commitments. A public posture of caution so conspicuous that regulators cite the firm as the standard. Enterprise customers pay a premium for that reputation. The company is preparing for an IPO on the strength of it.

Five days before the npm leak, Anthropic's own content management system left roughly 3,000 unpublished files accessible to anyone with a browser, including details of an unreleased model and an invite-only CEO retreat. The explanation: human error in CMS configuration. Two distinct systems failed basic access controls within a single week. Same company. Same one-word excuse both times.

In 1986, NASA launched Challenger knowing the O-ring seals were compromised. Engineers had raised the alarm. Management flew the shuttle anyway. Diane Vaughan spent years studying why, and the answer she found in The Challenger Launch Decision fits Anthropic uncomfortably well. Her term was "normalization of deviance." Something goes wrong. Nothing blows up. So the anomaly gets reclassified as tolerable, then repeated, until the day it stops being tolerable at all.

Apply this to Anthropic and the fit is precise. In February 2025, source maps leaked Claude Code's internals to npm. Engineers patched the issue. No customer data escaped. No model weights surfaced. The incident produced no visible cost, and the anomaly was absorbed. Thirteen months later, the identical configuration error reached the identical public registry. The deviance had been normalized.

Vaughan's analysis yields three lessons that apply without modification. Nobody votes to accept catastrophic risk in a conference room. It happens gradually. A small exception here. A shortcut that worked fine last time. Before long the threshold has moved, and nobody can say exactly when. Fix the .npmignore file, sure.
But if nobody asks how a 59.8-megabyte source map sailed through every gate in the release pipeline, the fix treats the symptom. And the gap between an institution's safety rhetoric and its operational behavior widens precisely when no one is measuring it.

Context sharpens the damage. Anthropic is locked in a tense standoff with the Department of Defense over Claude's safety guardrails, a confrontation in which the company's credibility as a disciplined, security-conscious operator is the core asset under negotiation. Pentagon officials must now reconcile two versions of Anthropic: the company that lectures the Defense Department on responsible AI deployment, and the company that cannot prevent its build pipeline from publishing proprietary source code to a public registry. Twice.

Not a peripheral embarrassment. A strategic liability. A company seeking defense contracts must demonstrate it can secure its own infrastructure before credibly promising to secure anyone else's. The distance between Anthropic's safety pitch and its operational reality does not shrink with repetition. It compounds.

What makes this episode troubling is not the mistake itself. Mistakes happen. Organizations recover. The concern is the response. Anthropic told CNBC it is "rolling out measures to prevent this from happening again." No specifics. No timeline. No acknowledgment that identical measures apparently failed thirteen months earlier. For a company whose entire value proposition is rigor, vagueness functions as a confession.

Chinese laboratories that ran 16 million fraudulent API exchanges to extract Claude's reasoning patterns now possess the production harness those API calls could never reach. Western competitors face litigation risk if they study the code too closely. Beijing's labs face no such constraint. The asymmetry is the actual cost of a misconfigured .npmignore file. I admire what Anthropic has built.
The engineering in that leaked code is genuinely impressive, and I mean that. Most of Anthropic's competitors ship thinner products and talk louder about them. None of that makes the pattern forgivable. A CMS left open to the public internet. Source code shipped to a public registry, twice, through the same vector. A response that promises future measures for a failure mode already patched once before.

An organization that normalizes deviance does not correct itself through good intentions. It corrects itself through structural change, the kind that costs money, slows releases, and inconveniences engineers. If Anthropic's next npm publish still contains a .map file, no white paper on existential risk will matter. The Pentagon will notice. So will the investors pricing Anthropic's IPO.

Safety is daily discipline. When it curdles into brand strategy, someone ships a .map file to a public registry. Twice.
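The structural fix the column calls for can be made concrete. A minimal sketch of a pre-publish gate that refuses to ship debug artifacts such as source maps; the patterns, function name, and CI workflow here are illustrative assumptions, not Anthropic's actual pipeline:

```python
from pathlib import Path

# Illustrative patterns for debug artifacts that should never reach a
# public registry (source maps, TypeScript build metadata).
FORBIDDEN_PATTERNS = ("*.map", "*.tsbuildinfo")

def find_debug_artifacts(pkg_dir):
    """Return every file under pkg_dir matching a forbidden pattern."""
    root = Path(pkg_dir)
    hits = []
    for pattern in FORBIDDEN_PATTERNS:
        hits.extend(sorted(root.rglob(pattern)))
    return hits

# A release step would run this against the staged package directory
# (e.g. the output of `npm pack --dry-run`) and fail the build if the
# returned list is non-empty, instead of trusting .npmignore alone.
```

The point of a gate like this is that it measures the invariant directly: even if .npmignore is misconfigured again, the release fails loudly rather than publishing the artifact.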


Anthropic has experienced two separate data exposure incidents within the same week

On Tuesday, Anthropic publicly acknowledged an unintended disclosure of proprietary source code belonging to Claude Code, its artificial intelligence-driven development tool. Company representatives characterized the incident as stemming from a "release packaging issue caused by human error, not a security breach." Cybersecurity professionals analyzing the exposure reported that nearly 1,900 individual files totaling approximately 512,000 lines of code became accessible. Given that Claude Code operates within developer environments where it interacts with confidential information, security specialists expressed significant apprehension about the implications.

A social media message on X containing a direct link to the exposed code rapidly gained traction online. Within hours of appearing early Tuesday morning, the post had accumulated more than 30 million impressions. Software engineers immediately began analyzing the released code to gain insights into Claude Code's operational mechanics and Anthropic's future development roadmap. Meanwhile, security professionals highlighted potential exploitation vectors that malicious actors might leverage with access to this information.

AI-focused cybersecurity company Straiker published analysis suggesting that threat actors could now examine Claude Code's internal data processing architecture. Their assessment indicated this knowledge could enable adversaries to engineer persistent exploits that maintain presence throughout extended sessions, essentially creating backdoor access.

This incident represents the second problematic disclosure for Anthropic in under seven days. Fortune previously disclosed that the organization had mistakenly granted public access to thousands of internal documents.
Among those documents was an unpublished blog entry detailing a forthcoming AI system referenced internally under the codenames "Mythos" and "Capybara." The draft reportedly acknowledged potential cybersecurity vulnerabilities associated with the model. Anthropic announced plans to implement additional safeguards against similar future occurrences. The organization emphasized that neither incident involved exposure of customer information or authentication credentials.

Anthropic made Claude Code publicly available in May of the previous year. The platform assists programmers with feature development, debugging, and workflow automation. Adoption has accelerated substantially. By February, the product had achieved an annualized revenue run rate exceeding $2.5 billion. This commercial success has intensified competitive pressure across the sector. OpenAI, Google, and xAI have all committed significant resources toward developing comparable coding assistance platforms to challenge Claude Code's market position.

Founded in 2021 by former OpenAI leadership and research personnel, Anthropic has built its reputation primarily around its Claude AI model series. A company representative confirmed that Anthropic is implementing procedural changes designed to prevent recurrence of such disclosure events.

In the fast-moving world of artificial intelligence, even the biggest players can have an off day. That's exactly what happened to Anthropic, the company behind the popular Claude AI models, when a routine software update turned into one of the most talked-about tech stories of the week.

One developer broke down how it all started at 4 AM on March 31. According to that account, Anthropic pushed out a fresh version of their "Claude Code" tool - an AI-powered helper that developers use to write and manage code more efficiently - to the npm registry, a popular platform where coders share and download software packages. But buried inside this update was a massive 60 MB debugging file, known as a .map file. What was meant to be a simple support tool accidentally included something far bigger: the complete source code of Claude Code itself. That's over 512,000 lines of code, covering everything from how the tool works behind the scenes to its plugins and features.

Within minutes, a researcher named Chaofan Shou spotted the unusual file while checking the update. He downloaded it, zipped it up, and shared the link on X (formerly Twitter). His post quickly caught fire. By the time most people in the US were waking up, the news had spread like wildfire. The leaked code was downloaded thousands of times and forked, copied, and hosted on GitHub more than 41,000 times. Anthropic's team scrambled to issue takedown notices under copyright law, but it was too late. The damage was done.

What happened next was even more remarkable. Sigrid Jin, a developer from Korea known as one of the heaviest users of Claude Code (reports say he racked up a whopping 25 billion tokens of usage last year alone), woke up to a flood of notifications. Worried about legal trouble just for having the code on his computer, he decided to take action.
In just eight hours, he rewrote the entire tool from scratch in Python, creating a new version called "claw-code." His GitHub repository shot up to 30,000 stars - faster than any project in the platform's history, according to observers. Not stopping there, Jin then rebuilt it again, this time in the faster Rust programming language. That version has already crossed 49,000 stars.

Meanwhile, someone else took the original leaked code and mirrored it on a decentralized storage platform, adding a simple note: "will never be taken down." The code is now out there for good, beyond any single company's control.

The story has sparked a wave of reactions online, with many pointing out the delicious irony. Anthropic had actually built a special feature called "Undercover Mode" into their products - designed specifically to stop their AI from accidentally spilling internal secrets. Yet here they were, leaking their own codebase through a basic packaging mistake. As one user put it in the comments, "They shipped an entire anti-leak system... then leaked their own source code in a .map file. The irony is beautiful."

Others were amazed at the speed of the global developer community. "The real story is the speed here," wrote one commenter. "Not that code leaked, but that the community had it forked, ported to Python, then Rust, and running before Anthropic's PR team finished their coffee." Another highlighted how this shows a bigger shift: once something is out, it's instantly copied, understood, and re-built - no going back. Some developers dug into the code out of curiosity, while a few questioned the rewrite timelines or wondered if the leak was truly accidental. But the overwhelming buzz on X revolves around one thing: how quickly closed-source tech can become public knowledge in today's connected world.
Claude Code isn't the AI model itself but the command-line interface that helps users interact with it for coding tasks. For everyday Indians following the AI boom, from young engineers in Bengaluru to students in smaller cities dreaming of tech careers, this episode is a reminder of how rapidly innovation moves. One packaging error at 4 AM, and suddenly proprietary secrets are in the hands of thousands. As the dust settles, the code remains widely available, and Anthropic's anti-leak efforts couldn't stop the spread.
Users of Claude Code, Anthropic's AI-powered coding assistant, are experiencing high token usage and early quota exhaustion, disrupting their work. Anthropic has acknowledged the issue, stating that "people are hitting usage limits in Claude Code way faster than expected. We're actively investigating... it's the top priority for the team."

A user on the Claude Pro subscription ($200 annually) said on the company's Discord forum that "it's maxed out every Monday and resets at Saturday and it's been like that for a couple of weeks... out of 30 days I get to use Claude 12." The Anthropic forum on Reddit is buzzing with complaints. "I used up Max 5 in 1 hour of working, before I could work 8 hours," said one developer today. The Max 5 plan costs $100 per month.

There are several possible factors in the change. Last week, Anthropic said it was reducing quotas during peak hours, a change that engineer Thariq Shihipar said would affect around 7 percent of users, while also claiming that "we've landed a lot of efficiency wins to offset this." March 28 was also the last day of a Claude promotion that doubled usage limits outside a six-hour peak window. A third factor is that Claude Code may have bugs that increase token usage. A user claimed that after reverse engineering the Claude Code binary, they "found two independent bugs that cause prompt cache to break, silently inflating costs by 10-20x." Some users confirmed that downgrading to an older version helped. "Downgrading to 2.1.34 made a very noticeable difference," said one.

The documentation on prompt caching says that the cache "significantly reduces processing time and costs for repetitive tasks or prompts with consistent elements." That said, the cache has only a five-minute lifetime, which means stopping for a short break, or not using Claude Code for a few minutes, results in higher costs on resumption.
Developers can upgrade the cache lifetime to one hour, but "1-hour cache write tokens are 2 times the base input tokens price," the documentation states. A cache read token is 0.1 times the base price, so this is a key area for optimization.

Anthropic does not state the exact usage limits for its plans. For example, the Pro plan promises only "at least five times the usage per session compared to our free service." The Standard Team plan promises "1.25x more usage per session than the Pro plan." This makes it hard for developers to know what their usage limits are, other than by examining the dashboard showing how much quota they have consumed. Problems like this are not unusual. Earlier this month, users of Google Antigravity were protesting about similar issues.

Bugs aside, what we are seeing is an implicit negotiation between users and providers over what is an acceptable pricing and usage model for AI development. Users want to control costs and providers need to make a profit. There is also a disconnect between vendor marketing that urges developers to insert AI into every process, including in some cases automated workflows, and a quota system that can cause AI tools to stop responding. "For folks running Claude Code in automated workflows: rate-limit errors need to be caught explicitly - they look like generic failures and will silently trigger retries. One session in a loop can drain your daily budget in minutes," observed one user.
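That last observation can be made concrete. Below is a minimal sketch of a retry wrapper that treats rate-limit errors as terminal instead of retryable; `RateLimitError` and `call_with_retries` are illustrative stand-ins (a real client library raises its own exception type on HTTP 429), not any vendor's documented API:

```python
import time

class RateLimitError(Exception):
    """Stand-in for whatever a client library raises on HTTP 429 / quota exhaustion."""

def call_with_retries(fn, max_attempts=3, backoff_s=1.0):
    """Retry transient failures, but re-raise rate-limit errors
    immediately so a looping workflow cannot silently drain its quota."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except RateLimitError:
            raise  # quota is gone: retrying only burns more budget
        except Exception:
            if attempt == max_attempts:
                raise
            time.sleep(backoff_s * attempt)  # simple linear backoff
```

The design point is the asymmetry: a timeout or transient server error is worth retrying, while a rate-limit response means every retry is a guaranteed further charge against the same exhausted quota.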
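The caching multipliers quoted in this article (2x the base input price for a 1-hour cache write, 0.1x for a cache read) also lend themselves to a back-of-envelope cost model. Under those assumed multipliers, caching a long prefix pays for itself from the third turn onward; the token counts and per-token price below are illustrative, not real plan pricing:

```python
def cost_without_cache(prefix_tokens, turns, base_price):
    # Every turn re-sends the full prefix at the base input price.
    return prefix_tokens * turns * base_price

def cost_with_cache(prefix_tokens, turns, base_price,
                    write_mult=2.0, read_mult=0.1):
    # One cache write at write_mult, then (turns - 1) cheap cache reads.
    write = prefix_tokens * base_price * write_mult
    reads = prefix_tokens * (turns - 1) * base_price * read_mult
    return write + reads

# Example: a 50,000-token prefix reused over 10 turns at an assumed
# $3 per million input tokens costs $1.50 uncached vs $0.435 cached.
```

Solving write_mult + (turns - 1) * read_mult < turns with these multipliers gives turns > 2.1, which is why a broken cache (or a five-minute coffee break that expires it) inflates costs so sharply: every resumption pays full price for the whole prefix again.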

Symphony for Medical Coding delivers production-ready medical coding automation across the US and Europe, based on findings from the largest study of its kind.

NEW YORK and COPENHAGEN, Denmark, April 1, 2026 /PRNewswire/ -- Corti, the frontier lab for clinical-grade AI, today released Symphony for Medical Coding, an agentic model that outperforms OpenAI and Anthropic - as well as Amazon, Oracle, and Google - in medical coding by more than 25% in clinical accuracy benchmarks. It is now available via Corti's API to any team building AI-powered healthcare software.

The cost of getting it wrong

Medical coding converts clinical reality into structured data, powering reimbursement, reporting, and public health decisions. Coding errors are expensive, but the human cost goes much further. One example shows the scale of what is missed: in a recent study of Danish patient data, Corti identified three times as many suicide attempts as had been coded. The cases were all there - recorded in clinical notes, flagged in medication records - but coders, working under time pressure, had missed them. When cases go uncounted, health systems can't monitor trends, allocate resources, or design interventions. Policy fails before it starts.

Defined by frontier research

Medical coding is fundamentally a reasoning task, not a prediction problem. It demands interpretation, real judgment, and justification across thousands of codes. The American coding system alone, ICD-10-CM, has 70,000 diagnosis codes. Worse, coding follows guidelines that constantly evolve, making models trained only on historical data inadequate. Corti began addressing this by conducting the largest study of its kind (5.8 million patient encounters), leading to Code Like Humans, a multi-agent framework accepted to EMNLP 2025, one of machine learning's top conferences. This framework mirrors the steps professional coders take: identifying evidence, reasoning through hierarchies, validating against guidelines, and reconciling ambiguity. Symphony for Medical Coding builds on this foundation to work like an expert coder, delivering higher quality than other models at a fraction of the cost.

"Most AI systems fall short in medical coding because they treat it as labeling, not reasoning. Correct coding depends on evidence, context, hierarchy, and guideline interpretation. We built Symphony for Medical Coding to follow the same decision process expert coders use, and that is why the performance gap is so meaningful," said Lars Maaløe, PhD, CTO and co-founder of Corti.

"The methodology behind Code Like Humans is the most promising approach to medical coding we've seen. We've been co-developing with Corti because we believe specialized AI infrastructure is how this problem gets solved - and we're excited to see it move into production," added Steve West, Managing Director, Healthliant Ventures and Tanner Health.

Accuracy that can be audited

In medical coding, accuracy requires traceability, defensibility, and ease of review. Symphony for Medical Coding links each assigned code to its supporting clinical evidence and highlights ambiguities, giving teams, compliance leaders, and auditors a clear record of every code assignment.

"Medical coding has been treated as a back-office cost center for decades. It isn't - it's the data layer that healthcare runs on. Getting it right changes what health systems can see, decide, and do," said Andreas Cleve, CEO and co-founder of Corti.

Available across the US and Europe, as one system

Coding systems vary widely, and most AI products require local fine-tuning. Symphony for Medical Coding does not. It is the first coding system designed to operate across both US coding (ICD-10-CM for diagnoses; ICD-10-PCS and CPT for procedures) and European coding environments without the need for local retraining. European ICD-10 support, based on the WHO-maintained classification, is currently in beta as Corti expands across priority European markets, including the UK, Germany, France, and Denmark. Symphony for Medical Coding is available now through the Corti Console, integrates directly with the Corti Agentic Framework, and supports both A2A and MCP standards. Enterprise and sovereign cloud deployments are available through Corti.

About Corti

Corti is healthcare's frontier lab for clinical-grade AI. Symphony, its flagship clinical-grade AI model, powers clinical and administrative applications for EHR vendors, virtual care platforms, practice management systems, and life sciences organizations worldwide. Corti serves over 100 million patients annually across health systems including the NHS. The company is headquartered in Copenhagen with offices in New York and London. For more information, visit corti.ai.

Media Contact: [email protected] corti.ai/newsroom
Logo - https://mma.prnewswire.com/media/2947055/Corti_Logo.jpg
View original content: https://www.prnewswire.co.uk/news-releases/corti-ships-symphony-for-medical-coding-with-more-than-25-accuracy-edge-over-openai-and-anthropic-302730432.html
© 2026 PR Newswire


(Bloomberg) -- Anthropic PBC inadvertently released source code for its popular Claude AI agent, raising questions about its operational security and sending developers on a search for clues about the startup's plans. "Earlier today, a Claude Code release included some internal source code. No sensitive customer data or credentials were involved or exposed," an Anthropic spokesperson said in an emailed statement. "This was a release packaging issue caused by human error, not a security breach." The company's second security slip-up in just a week compromised approximately 1,900 files and 512,000 lines of code related to Claude Code, an agentic coding tool that runs directly inside developer environments and has access to sensitive information, according to cybersecurity analysts. The release first came to light in a post on X, which purported to share a link to the code and garnered more than 30 million views. Developers said they were poring over the details to try to figure out how the agent works as well as how the startup intended to evolve the platform. Several experts also raised concerns about potential security vulnerabilities in light of the unintended exposure. "Attackers can now study and fuzz exactly how data flows through Claude Code's four-stage context management pipeline and craft payloads designed to survive compaction, effectively persisting a backdoor across an arbitrarily long session," said AI cybersecurity firm Straiker in a blog post. Days ago, Fortune reported that Anthropic accidentally made thousands of files publicly available, including a draft blog post that detailed a powerful upcoming model known internally as both "Mythos" and "Capybara" that presents cybersecurity risks. "We're rolling out measures to prevent this from happening again," the Anthropic spokesperson said.

The broader group of participating institutions includes firms such as Barclays, Deutsche Bank, UBS, Wells Fargo, Banco Santander and Royal Bank of Canada, among others.

SpaceX is working with at least 21 banks for its planned initial public offering, underscoring the scale of what could be one of the biggest listings in recent years, CNBC reported, citing people familiar with the matter. The IPO, internally code-named "Project Apex," is expected to be among the most closely tracked market debuts on Wall Street, the CNBC report added. The offering, likely to take place in June, could value the Elon Musk-led rocket company at around $1.75 trillion. According to the CNBC report, major global banks including Morgan Stanley, Goldman Sachs, JPMorgan Chase, Bank of America and Citigroup have been appointed as lead bookrunners, managing the core aspects of the deal. In addition, around 16 other banks have joined the syndicate in supporting roles, with roughly half of their names not previously disclosed. These banks are expected to handle different investor segments, including institutional, high-net-worth and retail investors, as well as manage geographic distribution. The size of the underwriting syndicate highlights the complexity and scale of the proposed listing. Sources told CNBC that the final structure is still evolving, and more banks could be added before the offering is finalised. Large syndicates have increasingly become common for mega IPOs. For instance, Arm Holdings worked with nearly 30 banks for its 2023 listing, while Alibaba Group had assembled a similarly large group for its record-breaking 2014 debut.

Artificial intelligence giant Anthropic is eyeing data centre investments in Australia, saying Wednesday the nation was a "natural partner" for work in the booming sector. With immense renewable energy potential and vast stretches of uninhabited land, Australia has touted itself as a prime location for the power-hungry data centres that AI requires. US-based Anthropic said it was "exploring investments in data centre infrastructure and energy throughout the country" after signing a memorandum of understanding with the Australian government. "The visit to Australia marks the beginning of long-term collaboration and investment into the Asia-Pacific region," the technology company said in a statement. "Australia's investment in AI safety makes it a natural partner for responsible AI development." The agreement, signed by Anthropic chief executive Dario Amodei in the capital, Canberra, said the firm would abide by local laws to "maintain strong social licence for investment". Australia's arts sector has accused Anthropic and other AI companies of pushing to loosen copyright laws so chatbots can be trained on local songs and books. Anthropic said it had also agreed to share AI research and safety information with Australian regulators, mirroring similar agreements in Japan and Britain. Industry Minister Tim Ayres said Australia and Anthropic would "harness AI responsibly".

Energy-intensive

New data centres - warehouse facilities that store files and power AI tools - are springing up worldwide. But there are increasing fears about the environmental impact of hulking data hubs. Singapore halted data centre developments between 2019 and 2022 over energy, water and land use worries. Australia last week adopted new rules governing the operation of data centres. Tech companies must show how they will source renewable energy and minimise their emissions.
"As demand for AI grows, continued expansion of data centre infrastructure must reflect Australian values and be environmentally and socially sustainable," the guidelines state. Anthropic's Claude is the Pentagon's most widely deployed frontier AI model and the only such model currently operating on its classified systems. But the company is locked in a dispute with the US government after saying it would refuse to let its systems be used for mass surveillance. Washington has since described Anthropic's tools as an "unacceptable risk to national security". The United States has not only blocked use of the company's technology by the Pentagon, but also requires all defense contractors to certify that they do not use Anthropic's models.
The Copenhagen-based health AI company built Symphony on peer-reviewed research from the largest medical coding study of its kind, treating coding as a reasoning task rather than a labelling problem. It's available via API now. Medical coding, the process of converting clinical notes, diagnoses, and procedures into standardised alphanumeric codes used for billing, reporting, and public health data, is one of healthcare's most error-prone and consequential administrative tasks. The American coding system alone, ICD-10-CM, contains 70,000 diagnosis codes. Errors are routine, expensive, and often invisible. Corti, the Copenhagen-based clinical AI company, has built a product specifically designed to fix this: Symphony for Medical Coding, an agentic system it claims outperforms models from OpenAI, Anthropic, Amazon, Oracle, and Microsoft by up to 25% on clinical accuracy benchmarks. It is available via API from today. The performance gap Corti claims is grounded in a methodological distinction. Most AI systems approach medical coding as a classification problem: given a clinical note, predict the most likely code from the training distribution. The problem is that coding guidelines change constantly, making historically trained models structurally inadequate. Corti's approach, developed through a peer-reviewed framework called Code Like Humans, accepted at EMNLP 2025, one of machine learning's top conferences, treats coding instead as a reasoning task. "Most AI systems fall short in medical coding because they treat it as labeling, not reasoning. Correct coding depends on evidence, context, hierarchy, and guideline interpretation. We built Symphony for Medical Coding to follow the same decision process expert coders use, and that is why the performance gap is so meaningful," said Lars Maaløe, PhD, CTO and co-founder of Corti. 
The system uses four agents in sequence: an evidence extractor that isolates conditions in a clinical note, an index navigator that searches the ICD alphabetical index for candidate codes, a tabular validator that checks candidates against guidelines, and a code reconciler that sequences and validates the final output. Each step mirrors what a trained human coder does. The research was based on 1.8 million patient encounters, making it the largest peer-reviewed study of its kind. The consequences of conventional under-coding are not merely financial. Corti cites a peer-reviewed study of Danish patient data in which its system identified three times as many suicide attempts as had been officially coded, cases that were present in clinical notes and medication records but were missed by coders working under time pressure. "Medical coding has been treated as a back-office cost center for decades. It isn't - it's the data layer that healthcare runs on. Getting it right changes what health systems can see, decide, and do," said Andreas Cleve, CEO and co-founder of Corti. When those cases go uncounted, health systems cannot monitor trends, allocate resources, or design effective interventions. The coding layer is not administrative overhead; it is how health systems see themselves. Symphony for Medical Coding is the first system Corti has built to operate across both US coding environments, ICD-10-CM for diagnoses, ICD-10-PCS and CPT for procedures, and European coding environments without local retraining. ICD-10 coverage for Europe, maintained by the WHO, is currently in beta as the company expands into the UK, Germany, France, and Denmark. The system produces auditable outputs: each assigned code is linked to the clinical evidence that supports it, with ambiguities flagged for human review. It is available through the Corti Console, integrates with the Corti Agentic Framework, and supports both A2A and MCP standards.
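The four-stage flow described above can be sketched in miniature. Everything here is invented for illustration - the toy index, the example codes, and the function names are assumptions, and Corti's production agents are model-driven rather than rule-based - but the shape of the pipeline, with each final code linked back to its evidence, follows the description:

```python
# Toy sketch of an evidence -> index -> validation -> reconciliation pipeline.
# All names, data, and the "index" are illustrative assumptions, not Corti's API.

from dataclasses import dataclass

# Toy alphabetical-index lookup: condition phrase -> candidate ICD-10-CM codes.
TOY_INDEX = {
    "type 2 diabetes": ["E11.9"],
    "hypertension": ["I10"],
}
VALID_CODES = {"E11.9", "I10"}  # stand-in for tabular/guideline validation

@dataclass
class CodedResult:
    code: str
    evidence: str  # the note phrase that supports the code (auditability)

def extract_evidence(note: str) -> list[str]:
    """Stage 1: isolate condition mentions in the clinical note."""
    return [phrase for phrase in TOY_INDEX if phrase in note.lower()]

def navigate_index(conditions: list[str]) -> dict[str, list[str]]:
    """Stage 2: map each condition to candidate codes via the index."""
    return {c: TOY_INDEX[c] for c in conditions}

def validate(candidates: dict[str, list[str]]) -> dict[str, list[str]]:
    """Stage 3: keep only candidates that pass tabular/guideline checks."""
    return {c: [k for k in codes if k in VALID_CODES]
            for c, codes in candidates.items()}

def reconcile(validated: dict[str, list[str]]) -> list[CodedResult]:
    """Stage 4: sequence the final codes, each linked to its evidence."""
    return [CodedResult(code=codes[0], evidence=c)
            for c, codes in validated.items() if codes]

note = "Patient with Type 2 diabetes and hypertension, stable on metformin."
for r in reconcile(validate(navigate_index(extract_evidence(note)))):
    print(r.code, "<-", r.evidence)
```

The design point the sketch captures is that each stage narrows and justifies the previous one, so the final output carries its own audit trail instead of being a bare label.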
Enterprise and sovereign cloud deployments are also available. Corti was founded in Copenhagen and also has offices in New York and London. It has raised $100 million in total and serves more than 100 million patients annually across health systems, including the NHS. The Symphony launch is the commercial product built on the Code Like Humans research, following Corti's stated approach of validating ideas in peer-reviewed forums before translating them into production-grade infrastructure.

In a major push into the aviation connectivity market, Amazon has partnered with Delta Air Lines to provide in-flight Wi-Fi through its low Earth orbit (LEO) satellite network, stepping up competition with SpaceX and its Starlink service. Under the agreement, Delta will begin installing Amazon's LEO-powered satellite internet across 500 aircraft starting in 2028, with expansion planned across hundreds more planes in the following years. The service will remain free for Delta SkyMiles members. Amazon is aiming to close the gap with SpaceX, whose Starlink network already operates over 9,000 satellites and has secured major airline partners including British Airways, Air France, Emirates, and United Airlines. In contrast, Amazon currently has a few hundred satellites in orbit, with plans to scale beyond 3,200 and begin commercial services in 2026. Prior to this deal, JetBlue Airways was its only aviation customer. The partnership also strengthens Delta's long-standing collaboration with Amazon, which already powers key systems through Amazon Web Services. Both companies plan deeper integration of AI and digital technologies to enhance passenger experience. Delta CEO Ed Bastian said, "Delta's future is global. This agreement gives us the best, fastest and most cost-effective technology available to better connect the world today, and it deepens our work with a global leader that shares our ambition to build what's next." Each aircraft will be equipped with phased array antennas capable of delivering download speeds up to 1 Gbps and upload speeds up to 400 Mbps. Amazon's satellites operate at around 370 miles above Earth, significantly closer than traditional systems, enabling lower latency and faster connectivity. Chris Weber, Vice President at Amazon's LEO division, highlighted that the network could support full-flight 4K streaming and high-speed uploads for passengers.
Delta currently works with Viasat and Hughes Network Systems and will continue using multiple providers as it upgrades its fleet. The broader race among US airlines to offer faster, free in-flight Wi-Fi is accelerating. American Airlines is rolling out free Wi-Fi through AT&T, while Delta has offered free connectivity to SkyMiles members since 2023 via T-Mobile. With this deal, Amazon is positioning its satellite network as a strong challenger in aviation connectivity, aiming to transform the in-flight digital experience while competing head-to-head with SpaceX.

Anthropic disclosed that parts of Claude Code's source code were exposed, and it characterized the incident as a release packaging problem rather than a security breach. Multiple reports describe how the leak was discovered after Claude Code updates shipped. In particular, users found a package containing a source map file with TypeScript codebase material, and investigators and observers later said the full CLI source repository contents for Claude Code were exposed through an npm misconfiguration. The leak appears to have been tied to how the software was built and published, not to an external attacker breaking in. Anthropic's response emphasized that the issue was caused by human error and that it was not the result of a compromised system. That matters because it changes the remediation focus: instead of widening defensive posture against intrusion, Anthropic needed to correct the release pipeline and publishing process, remove the exposed artifacts, and ensure future builds don't ship source maps or other development-only files. For developers and users, the event is a reminder that "agentic" AI tooling can leak more than model weights -- build artifacts, internal architectures, and implementation details can be unintentionally included when distributing software. The broader significance is how quickly source exposures can propagate. Once an artifact is downloadable from a package registry, it can spread through mirrors, caching, and downstream tooling, making cleanup and verification urgent. Even with Anthropic's clarification, the key takeaway remains clear: the leak was triggered by what ended up in a production release, underscoring that secure software supply chains require careful packaging controls, not just perimeter security.
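One concrete form such a packaging control can take is a CI step that inspects the built artifact and refuses to publish if it contains source maps or other development-only files. The sketch below is a hedged illustration of that idea; the file patterns and tarball layout are assumptions, not a description of Anthropic's actual pipeline:

```python
# Pre-publish guard: fail the release if the packed artifact contains
# development-only files (source maps, raw TypeScript, env files, tests).
# Patterns are illustrative assumptions; tune them to the real project layout.

import fnmatch
import tarfile

# File patterns that should never appear in a production release artifact.
FORBIDDEN = ["*.map", "*.ts", ".env*", "*.test.js"]

def forbidden_members(tarball_path: str) -> list[str]:
    """Return paths inside the tarball that match a forbidden pattern."""
    bad = []
    with tarfile.open(tarball_path, "r:*") as tf:
        for member in tf.getnames():
            if any(fnmatch.fnmatch(member, pat) for pat in FORBIDDEN):
                bad.append(member)
    return bad

def check_release(tarball_path: str) -> None:
    """Abort the publish step if any dev-only file slipped into the artifact."""
    bad = forbidden_members(tarball_path)
    if bad:
        raise SystemExit(f"refusing to publish; dev-only files found: {bad}")
```

In an npm workflow this would run between `npm pack` and `npm publish`, so a misconfigured build fails loudly in CI instead of shipping its source tree to the registry.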

Security lines at Miami International Airport stayed manageable Tuesday with TSA wait times averaging under 15 minutes across most checkpoints, offering a bright spot for spring break travelers navigating the partial government shutdown that has snarled operations at many major U.S. hubs. As of mid-morning on March 31, 2026, real-time data from the airport's website and third-party trackers showed general security waits ranging from 3 to 14 minutes depending on the checkpoint, with TSA PreCheck and Clear lanes often clearing in 1 to 5 minutes. Checkpoint 5, for example, reported general waits as low as 1-3 minutes, while Checkpoint 3 hovered around 10-14 minutes for standard lanes. Some priority and PreCheck options remained limited or closed at specific points, but overall flow remained far smoother than at hard-hit airports like Atlanta or Houston. Miami International Airport, one of the nation's busiest gateways with heavy international traffic, handled the situation better than many peers thanks to proactive staffing adjustments, real-time monitoring and its status as a high-volume facility accustomed to peak surges. Officials continued recommending two hours for domestic flights and three hours for international departures, but the short security times meant most passengers cleared checkpoints without major drama. The contrast with national headlines was stark. While some airports reported lines stretching for hours due to TSA staffing shortages triggered by the ongoing funding impasse, MIA's waits stayed consistently below 15 minutes for much of Tuesday morning. Immigration processing, however, told a different story, with waits exceeding 45 minutes at times -- a reminder that international travelers still faced longer overall journeys through the airport. An airport spokesman said MIA has benefited from strong local coordination and the ability to shift resources efficiently during busy periods like spring break, Passover and the lead-up to Easter.
"We're monitoring every checkpoint closely and appreciate travelers' patience," officials noted in updates. The airport publishes live wait times on its website, allowing passengers to check conditions before heading to specific terminals or concourses. Miami International features multiple security checkpoints across its North and Central terminals, serving American Airlines, Delta and dozens of international carriers. Checkpoints open at varying times, with some operating nearly 24 hours. Real-time displays help direct passengers to the least crowded lanes, and dedicated PreCheck and Clear lanes provide faster paths for eligible travelers. Travelers on social media and local forums reported positive experiences Tuesday, with many PreCheck users clearing security in under five minutes. General lanes occasionally reached 10-20 minutes during busier waves but rarely approached the chaos seen elsewhere. Reddit threads and local news comments highlighted MIA as one of South Florida's more reliable options compared with Fort Lauderdale-Hollywood International, where waits sometimes stretched longer. The partial government shutdown has forced TSA to operate with reduced personnel nationwide, as officers work without timely pay and some have called out or quit. High absenteeism has led to lane closures and extended lines at many facilities. In Miami, however, the impact appeared muted, with airport leadership working closely with federal partners and deploying additional support where needed. Some reports noted temporary use of auxiliary staff or adjusted screening protocols to maintain flow. For passengers without expedited screening, standard procedures still apply: removal of liquids, electronics and outerwear under the 3-1-1 rule. Families, travelers with disabilities or those requiring additional screening may experience slightly longer times, but the overall environment remained orderly. 
MIA serves as a critical hub for Latin America and the Caribbean, with millions of passengers passing through annually. Even during peak travel seasons, the airport's layout and multiple entry points help distribute crowds. Officials urge checking flight status and real-time wait times via the MIA website or apps before arriving. Experts recommend several strategies to minimize delays at MIA: enroll in TSA PreCheck or Clear if frequent travel justifies it, pack carry-ons efficiently, monitor checkpoint-specific updates, and consider off-peak arrival times when possible. Early morning and late evening often see the shortest lines, while mid-morning and afternoon can build during flight banks. Immigration and customs on arrival for international flights remain a separate bottleneck, with waits sometimes exceeding 45 minutes. Travelers connecting or departing internationally should factor this in and allow generous buffers. The situation at MIA mirrors broader challenges in U.S. aviation security during fiscal standoffs, but also highlights how larger, well-managed airports can sometimes weather disruptions more effectively. Transportation departments and airlines have updated passengers via apps and announcements to plan accordingly. Local leaders and business groups emphasize MIA's economic importance for tourism, trade and connectivity in South Florida. Smooth operations support the region's reputation as a global gateway despite occasional weather or staffing pressures. As spring break continues and holiday travel ramps up, conditions could fluctuate. Airport officials have not announced major lane closures or alerts as of Tuesday, but they stress that security wait times can change quickly with arriving flight waves or staffing shifts. Travelers can access real-time information through MIA's official TSA wait times page, the MyTSA app (though federal updates have been inconsistent during the shutdown) and third-party trackers. 
Delta and American, major carriers at MIA, have provided guidance on arrival timing. In a travel landscape marked by unpredictability this season, Miami International has emerged as one of the steadier large hubs for security screening. Passengers flying out of MIA in coming days should still build in reasonable buffers -- especially for international departures -- but can take comfort that lines here are moving far quicker than at many peer airports nationwide. For those driving to the airport, parking and ground transportation options remain available, though traffic around MIA can add time during peak periods. Rideshares, taxis and public transit like the Metrorail connection provide alternatives. The broader context involves ongoing negotiations to resolve the funding issues affecting the Department of Homeland Security. While emergency measures have provided some relief, long-term workforce stability for TSA remains a concern to prevent future disruptions. Miami International Airport continues to invest in technology, including advanced imaging and automated screening, to improve efficiency and passenger experience. These upgrades help offset occasional staffing pressures and support higher throughput. As midday Tuesday approached, security conditions remained favorable with no major surges reported. However, officials and community updates remind travelers that airport processes can shift rapidly. In summary, while the partial government shutdown has created headaches at airports across the country, Miami International has kept TSA security wait times short and manageable -- a welcome development for thousands of passengers passing through one of America's busiest gateways.

For the second time in three months, a SpaceX satellite has failed and broken up over Earth, highlighting concerns over the company's plan to expand its network to more than a million satellites. The spacecraft that malfunctioned this week was one of more than 10,000 currently operated by Starlink, SpaceX's satellite internet service, which has a growing manufacturing facility in Bastrop County. The Elon Musk-owned company said on X that "satellite 34343 experienced an anomaly on-orbit, resulting in loss of communications with the satellite at ~560 km above Earth." Satellite 34343 was launched in May from Vandenberg Space Force Base in California. It was a "V2 Mini," a type of Starlink satellite with a body slightly larger than a small car, a weight of about 1,760 pounds and solar panels that can extend about 100 feet. LeoLabs, a space monitoring firm, said its radar network "immediately detected tens of objects in the vicinity of the satellite after the event," and that the breakup appeared to be "caused by an internal energetic source rather than a collision with space debris or another object." It predicted that the fragments would fall out of orbit in the next few weeks. LeoLabs said the event was comparable to a Starlink failure in December, when a satellite's fuel tank apparently malfunctioned, sending the craft spinning out of control and tumbling back into Earth's atmosphere. The latest breakup didn't threaten the International Space Station or any upcoming rocket launches, but Starlink said it "will continue to monitor the satellite along with any trackable debris and coordinate with (NASA) and the (U.S. Space Force)." The mishap comes as critics blast SpaceX's plan to create a new constellation of up to a million satellites to be used as space-based data centers. The firm applied to the Federal Communications Commission in January to create what it's calling the "SpaceX Orbital Data Center system."
"We just recently gave a request for FCC licensing for up to a million (artificial intelligence) satellites," SpaceX President and Chief Operating Officer Gwynne Shotwell said in a recent interview with Time. "I'm surprised that didn't get more news. I don't know if we'll get to a million, but it's much easier to ask at the beginning and then march toward that goal." The plan calls for the satellites to orbit at altitudes between about 310 miles and 1,240 miles above Earth. They'll fly in 31-mile-tall bands in orbits above the planet's equator or in paths over the poles. SpaceX's Starship mega-rocket will play a major role in the company's drive to create the orbital data system and grow its Starlink network. Musk recently said "launching AI satellites from Earth is the immediate focus" for the giant rocket being developed at Starbase in South Texas. Each Starship is expected to be able to carry as much as 150 tons into space, about seven times the capacity of the company's workhorse, the Falcon 9. The mega-rocket has successfully deployed dummy Starlink satellites into space during its last two launches. The orbital data center system will be in addition to the Starlink constellation, which currently has 10,139 satellites. The company plans to more than triple that, to as many as 34,400 of the craft in orbit. SpaceX currently launches about two dozen Starlinks aboard a Falcon 9 rocket every few days from Florida or California. Astronomers and scientists previously have criticized the growing constellation, pointing out the effect on views of the night sky, risks for collision with other spacecraft and environmental concerns over satellites burning up in the atmosphere. The FCC has received more than 1,500 comments about SpaceX's latest request to create the orbital data center, according to the American Astronomical Society, which opposes the plan.
That group said SpaceX hasn't adequately addressed concerns about how thousands more satellites in orbit will interfere with views of space. It said the system "would result in tens of thousands of sunlit satellites being visible at any given location and at any given time" and "SpaceX does not provide even a concept for how the ... brightness of these satellites could be mitigated to protect professional and amateur astronomical observations." The society also warned of potential interference with infrared and radio telescopes, pollution and risks of collisions in space. "SpaceX has failed to demonstrate that its proposed megaconstellation would not compromise investments of billions of taxpayer dollars," it said. MORE SPACE: SpaceX in talks to get its own terminal at Port of Brownsville Scientists with the Center for Space Environmentalism, an advocacy group, called on the FCC to reject the plan and add more regulatory scrutiny. It questioned the satellite's anti-collision systems and said SpaceX's "'stacking' approach creates unprecedented bottlenecks that could trigger a runaway debris cascade, rendering near-Earth space inaccessible for centuries." The group also called for a full environmental review of the proposed megaconstellation. "SpaceX is essentially asking the American public to underwrite the environmental risk of their private AI venture," it said. Musk and SpaceX began talking about orbital data centers late last year. Musk has said that Earth-based data center demands for power and cooling aren't sustainable and space-based data centers powered by the sun offer a path forward. "Global electricity demand for AI simply cannot be met with terrestrial solutions, even in the near term, without imposing hardship on communities and the environment," he wrote in a recent update. "In the long term, space-based AI is obviously the only way to scale." In February, SpaceX acquired xAI, Musk's artificial intelligence firm and parent of social media platform X. 
Last month, Musk announced that SpaceX and Tesla are building what he says will be the world's largest chip manufacturing plant. The first phase of the project, dubbed Terafab, will be built near Tesla's Gigafactory outside Austin. The interest in the data center industry comes as SpaceX prepares for its initial public offering. Reportedly valued between $1.5 trillion and $1.75 trillion, SpaceX expects to raise as much as $80 billion in its IPO, which would be the largest ever. The IPO date hasn't been made public.

Anthropic has confirmed a leak of part of the source code for its AI programming tool, Claude Code, as reported by CNBC. "No confidential client data has been exposed. The error occurred during the release packaging. It was caused by human error, not a hack. We are implementing measures to prevent this from happening again," a company representative stated. The first reports of the leaked code surfaced on March 31; a related post on X garnered over 30 million views. The incident could damage Anthropic's standing, as it opens the popular tool's algorithms to external developers and competitors. The published data fully reveals the architecture of Claude Code and how the service operates internally. The code also exposed 44 hidden features that had not yet been officially announced, among them Undercover Mode, a special mode that prevents the neural network from accidentally publishing internal project names. This is the second major error by the company in recent weeks. Just days prior, the description of a future AI model and other documents were found in the public domain, as previously reported. At the time, an Anthropic representative stated that the new model represents a "qualitative leap" in performance and is "the most powerful solution to date." It is currently being tested among a limited group of users. The company explained that information about the upcoming LLM appeared online "due to human error," and that the leaked materials were "early drafts of content." Claude Code was released to the general public in May 2025. The tool assists developers in creating functions, fixing bugs, and automating tasks.
Over the past year, demand for the service has grown so much that competition from Anthropic forced OpenAI to shut down the Sora project to concentrate resources on developing similar solutions. In March, Anthropic transformed Claude into an AI agent -- the bot gained the ability to use a computer to perform tasks.

SpaceX has reportedly appointed a group of 21 banks to handle its upcoming Initial Public Offering (IPO). Bloomberg reports that the lead bookrunner banks include Morgan Stanley, Goldman Sachs, JPMorgan, Bank of America and Citigroup. The other, unnamed banks are expected to include national banks for the effort, which carries the code name 'Project Apex'. The rumour mill - at the moment wholly unofficial - suggests the cash-raising exercise will value SpaceX at some $1.75 trillion, making it the most valuable IPO in history. Sources suggest the lead bookrunner banks will meet on April 6 to plan their strategy for the IPO, which is likely to happen in June. One early winner in the scheme is EchoStar, which could receive up to $11 billion as part of the spectrum purchase SpaceX made last year (a deal that has yet to be approved by the FCC). March 31st saw EchoStar's share price rise 4.3 per cent to $117.
