News & Updates

The latest news and updates from companies in the WLTH portfolio.

Anthropic touts AI cybersecurity project with Big Tech partners

Anthropic on Tuesday announced an initiative with major technology companies, including Amazon.com, Microsoft and Apple, that lets partners preview an advanced model with cybersecurity capabilities developed by the AI startup. Under its "Project Glasswing", select organizations will be allowed to use the startup's unreleased and general-purpose AI model, "Claude Mythos Preview", for defensive cybersecurity work, Anthropic said. Other partners include CrowdStrike, Palo Alto Networks, Google and Nvidia. The announcement follows a Fortune report last month that Anthropic was testing Claude Mythos, which it said posed security risks and also offered advanced capabilities, dragging shares of cybersecurity firms such as Palo Alto Networks and CrowdStrike sharply lower. This year's RSA cybersecurity conference in San Francisco was also dominated by talk about the rise of AI-powered cyberattacks and whether conventional security tools sufficed. In a blog post on Tuesday, Anthropic said Mythos Preview had found "thousands" of major vulnerabilities in operating systems, web browsers and other software. The startup said launch partners will use Mythos Preview in their defensive security work, and Anthropic will share findings with industry. Anthropic said it is also extending access to about 40 additional organizations responsible for critical software infrastructure, and made a commitment of up to $100 million in usage credits and $4 million in donations to open-source security groups. The AI startup added that its eventual goal is for "our users to safely deploy Mythos-class models at scale." The startup said it has also been in ongoing discussions with the U.S. 
government about the model's capabilities. Last year, Anthropic said that hackers exploited vulnerabilities in its Claude AI to attack around 30 global organizations. Moreover, 67% of the 1,000 executives surveyed in an IBM and Palo Alto Networks study said they had been targeted by AI attacks within the past year.

Anthropic
Economic Times 25d ago

Latest Anthropic AI model finds cracks in software defences

NEW YORK - Anthropic on Tuesday said its yet-to-be-released artificial intelligence model called Claude Mythos has proven keenly adept at exposing software weaknesses. Mythos has laid bare thousands of vulnerabilities in commonly used applications for which no patch or fix exists, prompting the San Francisco-based AI startup to form an alliance with cybersecurity specialists to bolster defences against hacking. "The capabilities of the most advanced AI models are expected to advance substantially in the coming months," Anthropic said. "For cybersecurity to stay ahead of this curve, we must act now." Leaps in AI model capabilities have come with concerns about hackers using such tools for figuring out passwords or cracking encryption meant to keep data safe. The oldest of the vulnerabilities uncovered by Mythos dates back 27 years, and none were ostensibly noticed by their makers before being pinpointed by the AI model, according to Anthropic. Mythos is the latest generation of Anthropic's Claude family of AI, and a recent leak of some of its code prompted the startup to release a blog post warning it posed unprecedented cybersecurity risks. "The vulnerabilities it finds are often subtle and difficult to detect," Anthropic said during a briefing on Tuesday. As an example, it said Mythos found a previously unnoticed flaw in video software that had been tested more than 5 million times by its creators. As a precaution, Anthropic has shared a version of Mythos with cybersecurity companies CrowdStrike and Palo Alto Networks, as well as with Amazon, Apple and Microsoft in a project it dubbed "Glasswing." Networking giants Cisco and Broadcom are taking part in the project, along with the Linux Foundation, which promotes the free, open-source Linux operating system. Approximately 40 organizations involved in the design, maintenance or operation of computer systems are said to have joined Glasswing. 
Project partners are to share their Mythos findings, according to Anthropic, which is providing about US$100 million worth of computing resources for the mission. Mythos was designed as a general-purpose AI model, and not a software vulnerability hunter, according to its creator. Anthropic said it has had discussions with the U.S. government regarding Mythos despite a decree by the White House in February to terminate all contracts with the startup. That directive was put on hold by a federal court judge while a legal challenge by Anthropic works its way through the courts.

Anthropic
Pulse24.com 25d ago

Democrats Press CFTC on Polymarket and Kalshi Over Military Event Contracts - FinanceFeeds

House Democrats are calling on the Commodity Futures Trading Commission to take action against prediction market platforms that allow users to bet on sensitive real-world events, including military operations. The push follows the appearance of contracts tied to the rescue of U.S. airmen in Iran, which circulated over the weekend and triggered political backlash. In a letter sent Tuesday, lawmakers led by Reps. Seth Moulton and Jim McGovern asked CFTC Chair Michael Selig what steps the agency can take to prevent offshore platforms from offering such contracts. The concern centers on markets operating outside U.S. jurisdiction but still accessible to global users. "There is something deeply sick about turning war into a gambling opportunity. We're talking about people betting on bombings, bloodshed, and military action as if human lives are just numbers on a screen," McGovern said. "These are not harmless wagers." Prediction markets have expanded rapidly since 2024, with platforms such as Kalshi and Polymarket attracting growing volumes and user activity. These markets allow traders to take positions on the outcomes of real-world events, ranging from elections to geopolitical developments. Lawmakers have raised repeated concerns about the potential for insider trading and ethical risks. Earlier this year, scrutiny intensified after trades appeared to anticipate the capture of Venezuelan President Nicolás Maduro. The latest controversy extends those concerns into national security territory. Moulton separately criticized a market that allowed bets on the timing of a rescue operation involving U.S. fighter jet pilots in Iran. "They could be your neighbor, a friend, a family member," he said. "And people are betting on whether or not they'll be saved." Polymarket said it removed the contract, stating that it should not have been listed and that the firm is reviewing how it passed internal safeguards. 
The CFTC has asserted that it holds exclusive jurisdiction over prediction markets under the Commodity Exchange Act, framing event contracts as derivatives rather than gambling products. However, the agency's authority becomes more complex when platforms operate outside U.S. borders. Lawmakers argued that contracts referencing war or illegal activities may violate existing restrictions, which prohibit listings tied to terrorism, assassination, or unlawful conduct. They questioned why enforcement actions have not been taken and whether the agency has sufficient tools to address offshore activity. Recent legal disputes have reinforced the CFTC's central role. Courts have upheld federal authority over prediction markets, even as states attempt to regulate them under gambling laws. At the same time, enforcement priorities are expanding. Regulators have identified insider trading and market manipulation in prediction markets as areas of focus as trading volumes increase. The letter adds to mounting political pressure on the prediction market industry, which is already facing legal challenges and proposed legislation targeting specific contract types. Lawmakers have asked the CFTC to respond by April 15, signaling potential follow-up action depending on the agency's position. The broader issue extends beyond a single platform or contract. As prediction markets expand into more sensitive categories, the line between financial instruments and gambling-like activity is being tested. This raises questions about what types of events should be tradable and how platforms manage content risk. For operators, the immediate impact may involve tighter listing standards and increased moderation of contracts tied to geopolitical or military events. 
Longer term, the outcome of regulatory and political debates could determine whether prediction markets remain open-ended instruments or evolve into more restricted, compliance-driven products.

Polymarket
FinanceFeeds 25d ago

Anthropic will use its biggest, baddest AI model to protect against cyberattacks

Anthropic said Tuesday that it is sharing a preview version of its upcoming AI model in a new cybersecurity initiative with a coalition of tech companies to find and fix vulnerabilities in critical software infrastructure. The Project Glasswing initiative includes tech stalwarts like Amazon, Apple, Broadcom, Cisco, CrowdStrike, the Linux Foundation, Microsoft, and Palo Alto Networks. Anthropic said the partners will use the model for defensive security work and distribute their findings within the industry at large. The company is also extending access to roughly 40 additional organizations that build or maintain critical software infrastructure. Fears have been growing that bad actors could use powerful AI models to develop more sophisticated cyberattacks. "The work of defending the world's cyber infrastructure might take years; frontier AI capabilities are likely to advance substantially over just the next few months," Anthropic said in a blog post. "For cyber defenders to come out ahead, we need to act now." Anthropic is committing up to $100 million worth of model usage credits to the security research, and $4 million in direct donations to open-source security organizations.

Anthropic
Fast Company 25d ago

Anthropic's latest AI model could let hackers carry out attacks faster than ever. It wants companies to put up defenses first

Anthropic will make the code of its new AI model available to some of the world's biggest cybersecurity and software firms in an effort to slow the arms race ignited by AI in the hands of hackers, Anthropic said Tuesday. Amazon, Apple, Cisco, Google, JPMorgan Chase and Microsoft, among other firms, will now have access to Anthropic's Mythos model for cyber defense purposes. That includes finding bugs in those firms' software and testing whether specific hacking techniques work on their products. Mythos (officially dubbed "Claude Mythos Preview") is not ready for a public launch because of the ways it could be abused by cybercriminals and spies, according to Anthropic -- a prospect that has prompted widespread concern in Washington and in Silicon Valley. Experts have told CNN that the speed and scale of AI agents looking for vulnerabilities, far beyond normal human capabilities, represent a sea change in cybersecurity. A single AI agent could scan for vulnerabilities and potentially take advantage of them faster and more persistently than hundreds of human hackers. "We did not feel comfortable releasing this generally," Logan Graham, who heads the team at Anthropic that tests its AI models' defenses, told CNN. "We think that there's a long way to go to have the appropriate safeguards." Anthropic has also briefed senior US officials "across the US government" on Mythos' full offensive and defensive cyber capabilities, an Anthropic official told CNN. The firm has also "made itself available to support the government's own testing and evaluation of the technology," the official said. Anthropic executives hope the selective release of Mythos to companies that serve billions of users will help even the playing field with attackers. The goal is to head off major security flaws in widely used internet browsers and operating systems before they are released publicly. 
Other firms or organizations that Anthropic said will have access to Mythos include chipmakers Broadcom and Nvidia, the nonprofit Linux Foundation, which supports the popular Linux operating system that powers many phones and supercomputers, and cybersecurity vendors CrowdStrike and Palo Alto Networks. "If models are going to be this good -- and probably much better than this -- at all cybersecurity tasks, we need to prepare pretty fast," Graham told CNN. "The world is very different now if these model capabilities are going to be in our lives." A blog post previewing Mythos's capabilities, which leaked last month, claimed that the AI model was "far ahead" of other models' cyber capabilities. Mythos "presages an upcoming wave of models that can exploit vulnerabilities in ways that far outpace the efforts of defenders," said the blog post, which Fortune first reported. Some of the concerns around how Mythos could be abused by bad actors were overblown, experts previously told CNN. But the leak also pointed to an uncomfortable truth, those sources said: Barring a change in course, the gap between attackers and defenders enabled by AI could widen further. Anthropic claims Mythos has already produced impactful results. The model has in recent weeks found "thousands" of previously unknown software vulnerabilities -- a rate far outpacing human researchers, the firm said. CNN could not immediately verify this figure. Such software flaws can be painstaking for human researchers to find and are coveted by spy agencies and cybercriminals for conducting stealthy hacks. But cybersecurity experts have been using AI to protect against exploits long before Mythos arrived. Gadi Evron and other security researchers in December released a tool based on Anthropic's Claude model to generate fixes for severe software vulnerabilities. 
"Unlike attackers, defenders don't yet have AI capabilities accelerating them to the same degree," Evron, the founder of AI security firm Knostic, told CNN. "However, the attack capabilities are available to attackers and defenders both, and defenders must use them if they're to keep up."

Anthropic
CNN International 25d ago

Anthropic Teams Up With Its Rivals to Keep AI From Hacking Everything

Following leaked revelations at the end of March that Anthropic had developed a powerful new Claude model, the company formally announced Mythos Preview on Tuesday along with news of an industry consortium it has convened, known as Project Glasswing, to grapple with the cybersecurity implications of the new model and advancing capabilities more generally across the AI field. The group includes Microsoft, Apple, and Google as well as Amazon Web Services, the Linux Foundation, Cisco, Nvidia, Broadcom, and more than 40 other tech, cybersecurity, critical infrastructure, and financial organizations that will have private access to the model, which is not yet being generally released. The idea, in part, is simply to give the developers of the world's foundational tech platforms time to turn Mythos Preview on their own systems so they can mitigate vulnerabilities and exploit chains that the model develops in simulated attacks. More broadly, Anthropic emphasizes that the purpose of convening the effort is to kickstart urgent exploration of how AI capabilities across the industry are on the precipice, the company says, of upending current software security and digital defense practices around the world. "The real message is that this is not about the model or Anthropic," Logan Graham, the company's frontier red team lead, tells WIRED. "We need to prepare now for a world where these capabilities are broadly available in 6, 12, 24 months. Many things would be different about security. Many of the assumptions that we've built the modern security paradigms on might break." Models developed and trained by multiple companies have increasingly been able to find vulnerabilities in code and propose mitigations -- or strategies for exploitation. This creates a next generation of security's classic cat-and-mouse game in which a tool can aid defenders but can also fuel bad actors and make it easier to carry out attacks that were once too expensive or complex to be practical. 
"Claude Mythos preview is a particularly big jump," Anthropic CEO Dario Amodei said on Tuesday in a Project Glasswing launch video. "We haven't trained it specifically to be good at cyber. We trained it to be good at code, but as a side effect of being good at code, it's also good at cyber." He adds in the video that "more powerful models are going to come from us and from others. And so we do need a plan to respond to this." Anthropic's Graham notes that in addition to vulnerability discovery -- including producing potential attack chains and proofs of concept -- Mythos Preview is capable of more advanced exploit development, penetration testing, endpoint security assessment, hunting for system misconfigurations, and evaluating software binaries without access to their source code. In carrying out a staggered release of Mythos Preview, beginning with an industry collaboration phase, Graham says that Anthropic sought to draw on tenets of coordinated vulnerability disclosure, the process of giving developers time to patch a bug before it is publicly discussed. "We've seen Mythos Preview accomplish things that a senior security researcher would be able to accomplish," Graham says. "This has very big implications then for how capabilities like this should be released. Not done carefully, this could be a meaningful accelerant for attackers." Project Glasswing partners, including some of Anthropic's competitors, struck a collaborative tone in statements as part of the launch. "Google is pleased to see this cross-industry cybersecurity initiative coming together," Heather Adkins, Google's vice president of security engineering, says in a statement. "We have long believed that AI poses new challenges and opens new opportunities in cyber defense."

Anthropic
Wired 25d ago

Cabo Was Chaos -- And I Loved Every Second Of It

This article is written by a student writer from the Her Campus at U Conn chapter and does not reflect the views of Her Campus. Spring break in Cabo San Lucas is exactly as fun as people say -- and honestly, maybe even more. I went with just one friend, which sounds low-key, but somehow it turned into one of the most chaotic, fun, and unexpectedly freeing trips I've ever had. It wasn't the kind of trip where everything was perfectly planned or aesthetic all the time. It was loud, spontaneous, and a little unpredictable -- and that's exactly what made it so good. Two people, zero plans, and way too much confidence Going with just one other person meant there was no overthinking anything. No group chats trying to decide what to do. No waiting around for people to get ready. It was just "do we want to go?" and then we went. And that energy carried through the entire trip. We'd wake up not really knowing what the day was going to look like, end up somewhere completely random, and somehow it always worked out. Beach during the day, going out at night, meeting new people, changing plans last minute -- it felt like everything was happening in real time. There's something about being in a completely different place where you just stop hesitating. You're not worried about routines or expectations. You're just there to experience it. And with only two of us, it felt like we could do whatever we wanted, whenever we wanted. Nights that started late and somehow ended at sunrise If there's one thing Cabo does right, it's nightlife. Our nights usually didn't even start until late. We'd spend forever getting ready, music playing in the background, deciding outfits last minute, and then suddenly it was time to go out. And once we did, it felt like the entire city was awake. Places like El Squid Roe and Mandala Los Cabos were exactly what you'd expect -- packed, chaotic, music blasting, people dancing everywhere. 
At one point, we were literally dancing on platforms with strangers as if we'd known them forever. It sounds crazy, but in the moment, it felt completely normal. What made it fun wasn't just the clubs themselves -- it was how unpredictable everything felt. One minute we'd be in one place, the next we'd decide to leave and just walk until we found somewhere else. There was no real plan, just energy. And somehow, those ended up being the best nights. The kind of fun you can't really plan Some of the best parts of the trip weren't even the things we planned. It was the random nights that went way longer than expected. The places we ended up at just because we decided to keep walking. The conversations with people we'll probably never see again. The moments where everything felt slightly chaotic but in a way that made the night better, not worse. Nothing felt scripted. And I think that's why it was so fun -- because we weren't trying to control the experience. We were just letting it happen. There's a certain freedom in that. Not needing everything to go perfectly. Not worrying if something is "worth it." Just being fully in it. Independence, but make it fun What surprised me the most is that even though the trip was social and high-energy, it still felt independent. We weren't following anyone else's plans. We weren't tied to a big group dynamic. Every decision was ours. And that gave the trip a completely different vibe. Independence doesn't always have to look like being alone or doing something serious. Sometimes it looks like making last-minute decisions, saying yes to things you normally wouldn't, and trusting yourself to figure it out as you go. It felt like a mix of confidence and spontaneity that I don't always tap into at home. You learn a lot about yourself when you just go for it As fun as everything was, there were also those slower moments in between -- walking back at night, sitting by the water, or just taking a second to breathe after everything. 
And in those moments, I realized how different I felt. More relaxed. More open. Less worried about overthinking everything. It's weird how being in a completely different environment can bring out a different version of you -- not a fake one, just one that you don't always let show up. I wasn't questioning every decision or trying to control every outcome. I was just trusting that things would work out, and most of the time, they did. It wasn't perfect -- and that's why it was good There were definitely moments that didn't go exactly how we expected. Plans changed. Things got confusing. We had to figure stuff out on the spot. But none of that made the trip worse. If anything, it made it better. Because those were the moments that felt the most real. The ones we laughed about later. The ones that made the trip feel like an actual experience instead of something overly curated. Perfect trips are overrated. The fun ones are the ones where things are a little unpredictable. What I'd actually say about Cabo If you're thinking about going, do it, but don't overplan it. Leave room for randomness. Go with people you actually enjoy being around (even if it's just one person). Be open to things not going exactly how you pictured. Because that's where the best parts are. Cabo gave me exactly what I needed without me even realizing it -- a break, yes, but also a reminder that I don't always have to overthink everything. That sometimes the best thing you can do is just go, be present, and let things unfold. Final thought Cabo was fun in the way you expect -- but also in a way you can't really explain until you're in it. It wasn't just the beaches or the nightlife or even the people. It was the feeling of being fully in the moment, making decisions without hesitation, and realizing that you're capable of navigating all of it. And honestly? I'd do it all over again in a second.

CHAOS
Her Campus 25d ago

Anthropic touts AI cybersecurity project with Big Tech partners

April 7 (Reuters) - Anthropic on Tuesday announced an initiative with major technology companies, including Amazon.com, Microsoft and Apple, that lets partners preview an advanced model with cybersecurity capabilities developed by the AI startup. Under its "Project Glasswing", select organizations will be allowed to use the startup's unreleased and general-purpose AI model, "Claude Mythos Preview", for defensive cybersecurity work, Anthropic said. Other partners include CrowdStrike, Palo Alto Networks, Google and Nvidia. The announcement follows a Fortune report last month that Anthropic was testing Claude Mythos, which it said posed security risks and also offered advanced capabilities, dragging shares of cybersecurity firms such as Palo Alto Networks and CrowdStrike sharply lower. This year's RSA cybersecurity conference in San Francisco was also dominated by talk about the rise of AI-powered cyberattacks and whether conventional security tools sufficed. In a blog post on Tuesday, Anthropic said Mythos Preview had found "thousands" of major vulnerabilities in operating systems, web browsers and other software. The startup said launch partners will use Mythos Preview in their defensive security work, and Anthropic will share findings with industry. Anthropic said it is also extending access to about 40 additional organizations responsible for critical software infrastructure, and made a commitment of up to $100 million in usage credits and $4 million in donations to open-source security groups. The AI startup added that its eventual goal is for "our users to safely deploy Mythos-class models at scale." The startup said it has also been in ongoing discussions with the U.S. government about the model's capabilities. Last year, Anthropic said that hackers exploited vulnerabilities in its Claude AI to attack around 30 global organizations. 
Moreover, 67% of the 1,000 executives surveyed in an IBM and Palo Alto Networks study said they had been targeted by AI attacks within the past year. (Reporting by Jaspreet Singh in Bengaluru and Jeffrey Dastin in San Francisco; Editing by Leroy Leo)

Anthropic
1470 & 100.3 WMBD 25d ago

Anthropic's dispute with US government exposes deeper rifts over AI governance, risk and control - SiliconANGLE

The escalating dispute between Anthropic PBC and the U.S. Department of Defense is exposing a fundamental tension in the artificial intelligence market: who ultimately controls how powerful AI systems are used. What began as a contracting and policy disagreement has evolved into a broader debate over national security, corporate responsibility and the limits of self-governance in emerging technologies. At the center of the conflict is the Pentagon's designation of Anthropic as a "supply chain risk," a move that effectively bars the company's models from use in defense-related systems. President Donald Trump later ordered all federal agencies to stop doing business with Anthropic. That decision has been challenged in court and is now under a preliminary injunction, but its implications are already reverberating across enterprise information technology and AI development practices. A Gartner Inc. report in late March said the episode underscores how deeply embedded AI models have become in software systems and the vulnerability to policy shocks that this creates. "Anthropic's exclusion underscores how quickly embedded model dependencies can convert into structural technical debt," the firm wrote, noting that even minor changes in model behavior can require "broad functional revalidation" and potentially disrupt production systems. At the heart of the dispute is Anthropic's insistence on restricting how its models can be used, particularly in areas such as mass surveillance and autonomous weapons. That stance has triggered a wider debate over whether private companies should define ethical boundaries for technologies with societal and geopolitical implications. SiliconANGLE contacted numerous AI experts and industry executives. 
Though most declined to comment on the politically loaded issue, those who agreed to be quoted largely backed Anthropic's right to dictate restrictions on the use of its technology. Several argued that the Pentagon's framing of the issue as a supply chain risk is overstated. The conflict appears less about security vulnerabilities and more about disagreements over acceptable use, said David Linthicum, a cloud and AI subject matter expert. "If a company says it does not want its AI used for certain military or domestic surveillance purposes, that is a policy and governance issue," he said. Carlos Montemayor, a philosophy professor at San Francisco State University, took a more critical view of the government's position, suggesting the designation may be punitive. "The government is punishing Anthropic for not following orders," he said, calling the move unjustified and potentially a signal to other AI providers to align with federal expectations. That divergence in interpretation reflects a broader ambiguity: Should AI systems be treated like interchangeable software components or as strategic assets subject to tighter alignment with state priorities? Linthicum supports giving companies the right and responsibility to set limits. "If a company builds powerful technology, it has every right to say what it will and will not support," he said. However, he emphasized that those decisions shouldn't occur in isolation. Governments, courts and customers all have roles in shaping acceptable use. Valence Howden, an advisory fellow at Info-Tech Research Group Inc., echoed that view, arguing that organizations "have a responsibility to define the ethical boundaries and use cases of their technologies," particularly as AI systems take on more autonomous roles. Others were less comfortable with corporate self-regulation, though. Montemayor argued that allowing companies to set their own ethical frameworks is "unacceptable and dangerous," given the scale and impact of AI systems. 
"From an ethical perspective, companies should not dictate from their narrow engineering and commercial point of view what is right or wrong for societies around the globe," he said. Montemayor called for international regulation grounded in human rights principles, warning that current approaches create "too much uncertainty about the future of this technology." Gartner analysts suggest that these decisions often come down to business tradeoffs. Contractual restrictions on how technology can be used are common but enforcing them is difficult. In Anthropic's case, limitations around autonomous weapons may reflect not only ethical concerns but also technical constraints. "Frontier AI systems are simply not reliable enough to power fully autonomous weapons," wrote Anthropic Chief Executive Dario Amodei. At first glance, broad government restrictions on doing business with Anthropic may appear to be a devastating blow to the company, but despite the potential loss of lucrative government contracts, several experts believe Anthropic's stance could strengthen its position in the enterprise market. Marc Fernandez, chief strategy officer at Neurologyca Science & Marketing SL, framed the issue in terms of long-term trust. "Holding the line on restrictions is going to be expensive [for Anthropic]in the short term," he said, but clear boundaries can signal reliability in high-stakes environments. "Over time, that kind of reliability becomes a massive competitive advantage." Linthicum agreed that consistency matters. "A lot of enterprise customers want to know that a vendor has clear values and will stick to them under pressure," he said. Anthropic's position could thus make it "more attractive to many customers, not less," provided its policies are clearly defined and consistently applied. 
Info-Tech's Howden also highlighted the trust factor, noting that maintaining restrictions "has likely benefited them [Anthropic] in an industry that hasn't always been built on trust and honesty." Some observers said the dispute reflects a deeper misunderstanding of what AI systems are and how they should be governed. Anaconda Inc. Chief Executive David DeSanto noted in a LinkedIn post that the Pentagon appears to be treating AI like "the next version of Microsoft Excel -- a tool you buy, own and use however you want. But that's not what this technology is." Unlike spreadsheets, AI systems are capable of "judgment and autonomous action," requiring new governance frameworks that can't be retrofitted onto existing procurement and oversight models. That gap, DeSanto said, is evident not only in government but across enterprises, where leaders often assume they can "bolt AI onto existing infrastructure and figure out the hard stuff like governance responsibilities later." Anaconda Field Chief Technology Officer Steve Croce warned against "normalization of deviance," or the tendency for organizations to lower their guard as long as systems continue to function without obvious failures. "When companies like Anthropic start to pull back safety standards, it sets a precedent," he wrote. Enterprises need to prioritize "AI sovereignty," or the ability to define and enforce their own guardrails, rather than relying on external providers. Beyond the ethical and political dimensions, the Anthropic dispute is likely to force organizations to confront practical challenges in AI adoption, Gartner notes. Unlike productivity software, replacing a model is not simply a matter of switching back ends. It often requires requalifying entire workflows, retraining systems and recalibrating performance benchmarks. "A forced model swap is not just a verification task," the firm noted. "It is a requalification of the AI-dependent system." 
This creates a paradox: Organizations that invest heavily in optimizing AI-driven workflows may achieve higher productivity, but face greater disruption when policy changes force them to switch providers. As a result, Gartner recommends that engineering leaders treat "provider volatility as an immediate continuity risk" and design systems for portability, modularity and rapid substitution. It's clear that AI is no longer just a technical issue but a governance challenge that cuts across business strategy, national security and societal values. The outcome of this dispute will likely help shape how those often competing priorities are balanced in the years ahead.
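Gartner's advice to design for "portability, modularity and rapid substitution" amounts, in practice, to putting an abstraction layer between AI-dependent workflows and any single vendor's API, so a forced provider swap is a configuration change rather than a rewrite. A minimal illustrative sketch of that pattern follows; the interface, provider classes, and registry names here are all hypothetical, not any vendor's actual SDK:

```python
from abc import ABC, abstractmethod


class ChatModel(ABC):
    """Provider-agnostic interface: workflows depend on this, never on a vendor SDK."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        ...


class ProviderA(ChatModel):
    def complete(self, prompt: str) -> str:
        # Hypothetical stand-in for a call to vendor A's API.
        return f"[provider-a] {prompt}"


class ProviderB(ChatModel):
    def complete(self, prompt: str) -> str:
        # Hypothetical stand-in for a call to vendor B's API.
        return f"[provider-b] {prompt}"


def build_model(name: str) -> ChatModel:
    # A registry keyed by config value: swapping providers means
    # changing this string, not rewriting every workflow.
    registry = {"provider-a": ProviderA, "provider-b": ProviderB}
    return registry[name]()


model = build_model("provider-b")
print(model.complete("summarize the incident report"))
```

This does not remove the requalification burden Gartner describes (behavioral differences between models still require revalidating workflows), but it localizes the code change to one seam.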

Anthropic
SiliconANGLE · 25d ago

Anthropic: Multiple Gigawatts Partnership With Google And Broadcom To Expand AI Compute Capacity

Anthropic announced a new agreement with Google and Broadcom to secure multiple gigawatts of next-generation Tensor Processing Unit capacity, significantly expanding its artificial intelligence infrastructure. The compute capacity is expected to begin coming online in 2027 and will support the continued development and deployment of Anthropic's Claude AI models. The agreement marks Anthropic's largest compute commitment to date and reflects accelerating demand for its AI services. The company said its run-rate revenue has surpassed $30 billion in 2026, up sharply from approximately $9 billion at the end of 2025. It also noted that the number of enterprise customers spending more than $1 million annually has doubled in recent months to over 1,000. The majority of the new infrastructure will be located in the United States, building on Anthropic's previously announced commitment to invest $50 billion in American AI infrastructure. The partnership also deepens Anthropic's existing relationships with Google Cloud and Broadcom, while complementing its multi-platform approach that includes hardware from Amazon Web Services and NVIDIA. Anthropic emphasized that its ability to run models across multiple hardware platforms enables improved performance and resilience for customers. The company continues to position Claude as a leading frontier AI model available across major cloud providers, including Amazon Web Services, Google Cloud, and Microsoft Azure.

Anthropic
Pulse 2.0 · 25d ago

BadClaude: Serious ethics issues arise as users abuse Anthropic AI with slurs and a digital whip

AI users are under no obligation to treat their chatbots like friends. Kindness doesn't win you any points with a computer, and a recent study from Penn State even found that being rude to ChatGPT yielded more accurate responses than politely worded prompts. But a new open-source tool might take things a step too far, encouraging Claude users not just to be mean to Anthropic's AI assistant, but to abuse it with a digital whip. GitHub user GitFrog1111 created "BadClaude," an app meant to speed up the AI model's responses. Rather than simply giving Claude a "speed up" command, BadClaude is rendered as a physics-based whip that overlays the AI platform. Per the tool's GitHub description, users can click to "whip him 😩💢" (emojis included) and send an interrupt command along with "one of 5 encouraging messages." Those messages include "Work FASTER," "faster CLANKER," and "Speed it up clanker," each fired into Claude's interface with a crack of the whip, as GitFrog1111 showed in a now viral clip of them using the tool on X.

Anthropic
Fast Company · 25d ago

Chaos and hope felt in Essex ahead of Local Elections 2026

At a children's playgroup in Harlow, the talk is not about play mats or music, but about local elections 2026 and the mood of the country. One voter describes "utter chaos," while another sees signs of improvement and "a little bit more hope." That contrast matters because Essex, with nine local authorities electing councillors outside London, offers a concentrated picture of political feeling as millions prepare to vote in England's local council elections next month. What emerges is less certainty than fatigue, but also not quite resignation.

Why the local elections matter in Essex right now

Essex stands out because it has more authorities electing councillors than any other county outside London. That makes it a useful test of how voters are reading the moment before the local elections 2026. In Harlow, a town described as reflective of a broader national mood, politics is being judged through everyday pressures: roads, the economy, and the sense that parties keep passing blame. The immediate significance is not only electoral arithmetic. It is the collision between public frustration and a faint search for stability. One resident says the government "doesn't know what they are doing," while another says the town is improving. Those two views, held in the same room, suggest that local elections will be shaped as much by sentiment as by ideology. The mood is unsettled, but it is not uniform.

What lies beneath the political mood

The deeper story is that voters are not speaking only about council services; they are using the local elections 2026 to register wider judgment on national politics. The complaint about "about-turn" politics points to impatience with reversals and inconsistency. The criticism of roads and government competence shows how national frustration filters into local life. At the same time, optimism is not absent. Some residents say there is more hope now than two years ago, even if the economy remains a drag. 
That tension helps explain why Harlow is notable. The town has voted for the winning party in every general election since 1983, making it a closely watched political barometer. Yet the voices from the playgroup also show that barometers do not produce simple readings. One person can see chaos, another improvement, and a third can conclude that all parties look similar. The result is a political atmosphere in which loyalty appears weaker than before, and practical concerns matter more than labels. For councils, this matters because local elections often turn on whether voters feel their daily lives are getting better or worse. In this case, the evidence from Essex suggests both feelings are present at once. That makes the outcome harder to read and the campaign more vulnerable to small shifts in mood rather than big ideological swings.

Expert perspectives on voter uncertainty

The context provided here does not include named academic or institutional experts, but the residents' remarks themselves function as a form of ground-level evidence. Evelyn Herbert's description of "utter chaos" captures the sharper end of public anger. Karen Waite's view that Harlow is improving, while the economy still bites, points to a more measured reading. Emma's optimism that "England, I think, we are good" shows that national pessimism is not universal. Taken together, those voices suggest an electorate that is not moving in one direction. The local elections 2026 are therefore less likely to be shaped by a single dominant message than by competing instincts: frustration, caution, and guarded hope. That combination can make turnout and persuasion especially unpredictable, because voters who are unhappy with politics may still not be ready to settle on one clear alternative. 
Regional ripple effects beyond Harlow

Essex matters beyond its borders because it is one of the largest sets of local contests outside the capital, and because towns like Harlow are often treated as indicators of how the wider country is feeling. If voters there mirror a national mood, then the election result could reflect a broader impatience with political instability rather than a local issue alone. If, however, optimism like Karen Waite's proves stronger than anger, the picture could be more balanced than headlines suggest. That is what makes local elections 2026 unusually interesting at this stage: they sit between a national argument about competence and a local argument about services. In Essex, those two conversations are already overlapping. The question now is whether the final vote will reward the language of chaos, the pull of hope, or a quieter sense that neither side has fully answered what voters want next. In that uncertainty, the most revealing issue may be simple: when voters in Essex walk into the polling station, will local elections 2026 feel like a warning, a reset, or something in between?

CHAOS
El-Balad.com · 25d ago

Anthropic unveils powerful Mythos AI model, working with Apple in cybersecurity initiative - 9to5Mac

Anthropic announced a new initiative called Project Glasswing that includes Apple as a partner. As part of Glasswing, Anthropic is sharing a preview of its newly unveiled Claude Mythos model with select partners, including Apple. Anthropic says Mythos has found "thousands of high-severity vulnerabilities" in "every major operating system and web browser." Apple is among a list of top technology companies that make up Anthropic's Project Glasswing group. From Anthropic's announcement: "Today we're announcing Project Glasswing, a new initiative that brings together Amazon Web Services, Anthropic, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks in an effort to secure the world's most critical software." Additionally, Anthropic says more than 40 additional organizations that "build or maintain critical software" have access to its Mythos Preview AI model. The goal is for these software organizations to use Mythos to discover and fix security holes before the AI model is released to the world. Claude Mythos has already been used to find serious security flaws in every major operating system and web browser, according to Anthropic. The announcement continues: "Mythos Preview has already found thousands of high-severity vulnerabilities, including some in every major operating system and web browser. Given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely. The fallout -- for economies, public safety, and national security -- could be severe. Project Glasswing is an urgent attempt to put these capabilities to work for defensive purposes." In some cases, these vulnerabilities have "survived decades of human review and millions of automated security tests," the company says. One example involved finding and chaining together cybersecurity flaws in the Linux kernel that could result in complete control over a machine. 
Cybersecurity expertise is just one area of strength for the new Claude Mythos AI model. Anthropic's latest model shows gains over Claude Opus 4.6 in reasoning, agentic search and computer use, and especially agentic coding. Anthropic has published a system card that details the latest benchmarks for Claude Mythos Preview. "We do not plan to make Claude Mythos Preview generally available," Anthropic says, "but our eventual goal is to enable our users to safely deploy Mythos-class models at scale -- for cybersecurity purposes, but also for the myriad other benefits that such highly capable models will bring." You can learn more about Project Glasswing and Claude Mythos from Anthropic's announcement.

Anthropic
9to5Mac · 25d ago

Anthropic's Project Glasswing includes AWS, Apple, Broadcom, Cisco, CrowdStrike, Google, Microsoft, Nvidia, Palo Alto Networks, and others

Shako / @shakoistslog: From a game theoretic sense, I wonder if treating this as a KPI, but awarding max value to the 85th percentile would work, and penalizing people below it linearly, and above it non-linearly, would work. How is tokenmaxxing a measure of productivity or value? I can write some bad code which causes an infinite loop and use up millions of tokens. What is the output of this tokenmaxxing which has resulted in good products or positive outcomes for Meta? I totally understand R&D innovation can cost a lot and no immediate return (I'm in Biotech), but if the goal is just to use more tokens, what are we doing here?

Anthropic
Techmeme · 25d ago

Anthropic says Mythos Preview is a general-purpose model and found thousands of high-severity vulnerabilities, including some in every major OS and web browser


Anthropic
Techmeme · 25d ago

Anthropic Lets Apple, Amazon Test More Powerful Mythos AI Model

Anthropic PBC is letting tech firms access a more powerful, unreleased artificial intelligence model to help prepare for possible cyberattacks that might result from the company making the advanced AI system more widely available. Anthropic said Tuesday that it's forming an initiative called Project Glasswing with Amazon.com Inc., Apple Inc., Microsoft Corp., Cisco Systems Inc. and other organizations. The companies will get access to a new Anthropic model called Mythos to hunt for flaws in their products and share findings with industry peers. The AI startup said it does not have plans yet to release Mythos to the general public, and will use what Project Glasswing reports back to inform guardrails for the technology. The arrangement reflects growing concerns among tech firms that more sophisticated models will be misused by criminals and state-backed hackers to hunt for flaws in source code and bypass cyber defenses. Anthropic rival OpenAI has also previously stressed the growing cyber capabilities of its models and introduced a pilot program meant to put its tools "in the hands of defenders first." "We think this isn't just an Anthropic problem. This is an industry-wide problem that both private corporations but also governments need to be in a position to grapple with," said Newton Cheng, who leads the cyber effort within Anthropic's Frontier Red Team. "What we're trying to do with Glasswing is give defenders a head start." Anthropic said it has discussed Mythos's security-related capabilities with US officials, but declined to say which agencies. Cheng pointed to the company's existing work with the Cybersecurity and Infrastructure Security Agency and the National Institute of Standards and Technology. 
Mythos is a general-purpose AI model and was not specifically developed for cybersecurity purposes, Anthropic said. Yet, Mythos has already discovered a number of security issues, Cheng said, including a 27-year-old bug used in critical internet software. The AI system also found a 16-year-old vulnerability in a line of code for popular video game software that automated testing tools had scanned five million times but never detected, Anthropic said. Dianne Penn, head of product management for research at Anthropic, said there are protections in place to ensure that members of Project Glasswing keep a tight grip on access to the Mythos model, but declined to share more detail for security reasons. The existence of Mythos was first revealed thanks to a leak late last month after a draft blog post was left available in a publicly searchable data repository.

Anthropic
Bloomberg Business · 25d ago

Anthropic launches Project Glasswing to secure critical software By Investing.com

Investing.com -- Anthropic announced Project Glasswing on Tuesday, a cybersecurity initiative bringing together major technology and financial companies to address vulnerabilities in critical software systems. The project includes Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks as launch partners. The initiative centers on Claude Mythos Preview, an unreleased frontier AI model developed by Anthropic that demonstrates advanced capabilities in identifying and exploiting software vulnerabilities. According to Anthropic, the model can surpass most humans at finding security flaws and has already identified thousands of high-severity vulnerabilities across major operating systems and web browsers. Under the project, launch partners will use Mythos Preview for defensive security purposes. Anthropic said it will share findings with the broader industry. The company has extended access to over 40 additional organizations that build or maintain critical software infrastructure, enabling them to scan and secure both proprietary and open-source systems. Anthropic is allocating up to $100 million in usage credits for Mythos Preview across these efforts and providing $4 million in direct donations to open-source security organizations. The company stated that the project addresses concerns about AI capabilities potentially spreading to actors who may not deploy them safely, warning of potential impacts on economies, public safety, and national security. This article was generated with the support of AI and reviewed by an editor. For more information see our T&C.

Anthropic
Investing.com · 25d ago

Anthropic's Chief Economist Says AI Won't Kill Jobs -- It Will Redraw the Map of Who Does What

Peter McCrory has one of the stranger titles in Silicon Valley. As chief economist at Anthropic, the $60 billion AI company behind the Claude chatbot, he's tasked with answering a question that has haunted every industrial transformation since the spinning jenny: What happens to the workers? His answer, laid out in a recent interview with Fortune, is more nuanced than the doom-and-gloom headlines suggest. McCrory doesn't think artificial intelligence will annihilate employment. But he doesn't think the transition will be painless, either. The truth, as he frames it, sits in the uncomfortable middle -- a place where certain tasks vanish, new ones emerge, and the speed of the shift determines whether societies adapt or fracture. That framing matters. It matters because Anthropic isn't just any AI lab. It's the company founded by former OpenAI executives Dario and Daniela Amodei, built explicitly around the idea of AI safety. When its in-house economist talks about labor market disruption, the remarks carry a dual weight: part corporate positioning, part genuine analytical effort. McCrory's role exists precisely because Anthropic wants to be seen as the responsible actor in a field where responsibility has been in short supply. So what did he actually say? The core argument is structural. McCrory told Fortune that AI will primarily automate tasks, not entire jobs. A radiologist won't disappear. But the hours she spends scanning routine images might. A junior lawyer won't be fired outright. But the document review that consumed 60% of his week could be handled by a model in minutes. The distinction between task displacement and job displacement is one economists have drawn for years, most notably MIT's Daron Acemoglu and Boston University's Pascual Restrepo. McCrory is applying it specifically to the current wave of large language models and their multimodal successors. The implication is profound. 
If AI eats tasks rather than jobs, then the labor market question becomes one of reallocation. Workers don't necessarily lose employment -- they lose specific responsibilities within their employment, and ideally gain new ones. McCrory pointed to historical analogies: ATMs didn't eliminate bank tellers. They reduced the number of tellers per branch, but banks opened more branches because operating costs fell, and tellers shifted toward relationship-based services. The net effect on teller employment was roughly neutral for decades. But here's where McCrory's optimism meets its limits. And he acknowledged them. Speed. The pace of AI adoption could outstrip the economy's ability to absorb displaced workers into new roles. Previous technological transitions -- electrification, computerization -- unfolded over decades. The current AI wave is compressing that timeline dramatically. GPT-4 arrived in March 2023. By early 2025, AI agents were writing code, managing customer service interactions, and drafting regulatory filings. Anthropic's own Claude model has been integrated into enterprise workflows at companies like Amazon, Notion, and DuckDuckGo. The adoption curve isn't gradual. It's steep. McCrory conceded that this velocity creates genuine risk. If task displacement happens faster than task creation, you get a painful gap -- a period where workers with obsolete skills can't yet access the new opportunities that AI is theoretically generating. That gap could last years. And it won't be evenly distributed. Which brings up the distributional question. Not all workers face the same exposure. A growing body of research suggests that AI's impact falls heaviest on white-collar, knowledge-intensive work -- precisely the kind of employment that was supposed to be safe from automation. A widely cited 2023 paper from OpenAI and the University of Pennsylvania estimated that roughly 80% of the U.S. 
workforce could see at least 10% of their tasks affected by large language models. For approximately 19% of workers, the exposure was 50% or more. Those aren't factory floor positions. They're accountants, writers, paralegals, financial analysts, software developers. McCrory didn't dispute these findings. He argued instead that exposure doesn't equal elimination. A task being automatable doesn't mean it will be automated immediately, or that the worker performing it will be let go. Organizational inertia, regulatory constraints, trust deficits, and the sheer messiness of real-world implementation all slow the process down. Companies don't flip a switch. They pilot programs, encounter edge cases, negotiate with unions, and deal with customers who still want a human on the line. Fair enough. But the trajectory is clear. Recent data points reinforce the tension McCrory is trying to manage. In March 2025, a report from the McKinsey Global Institute projected that generative AI could automate activities accounting for up to 30% of hours currently worked in the U.S. economy by 2030. That's an acceleration from their previous estimates. Separately, the International Monetary Fund published research in January 2025 suggesting that AI would affect nearly 40% of global employment, with advanced economies more exposed than developing ones because their labor markets are more heavily weighted toward cognitive tasks. The policy response has been sluggish. In the United States, there is no federal framework for managing AI-driven workforce transitions. The Biden administration issued an executive order on AI in October 2023 that touched on workforce issues, but it was largely aspirational. The Trump administration, which took office in January 2025, has shown more interest in deregulating AI development than in cushioning its labor market effects. Europe's AI Act, which took partial effect in 2025, focuses on safety and transparency rather than employment impacts. 
No major economy has a comprehensive plan for retraining workers displaced by generative AI. McCrory, to his credit, didn't pretend that market forces alone would sort this out. He told Fortune that proactive investment in education and retraining would be necessary, and that both government and the private sector had roles to play. He also noted that Anthropic itself was investing in research on economic impacts -- hence his job title. Still, the skeptic's question writes itself. Can we trust an AI company's economist to give us an unbiased assessment of AI's labor risks? McCrory works for a firm that is valued at tens of billions of dollars specifically because investors believe AI will transform -- and in many cases, replace -- human labor. The financial incentive to downplay disruption is enormous. If Anthropic's economist said, "Yes, this technology will cause mass unemployment," the company's valuation, recruiting pipeline, and regulatory standing would all take hits. That doesn't mean McCrory is wrong. It means his analysis should be weighed alongside independent research, not treated as gospel. And the independent research is increasingly sobering. Acemoglu, who won the Nobel Prize in Economics in 2024 partly for his work on technology and labor markets, has been notably more cautious than Silicon Valley about AI's net benefits. In a 2024 paper, he estimated that AI would increase U.S. productivity by only about 0.5% over the next decade -- far below the transformative claims made by AI companies. He argued that the technology's economic benefits are concentrated in a narrow set of tasks and that the costs of displacement are being systematically underestimated. Restrepo, Acemoglu's frequent collaborator, has made a related point: automation doesn't automatically generate new tasks for displaced workers. That reinvention requires deliberate investment, institutional creativity, and time. 
When automation outpaces reinvention, wages fall, inequality rises, and political instability follows. The populist upheavals of the 2010s, both scholars have argued, were partly rooted in the failure to manage earlier waves of automation and globalization.

The AI industry's preferred narrative -- that technology always creates more jobs than it destroys -- is historically true in aggregate but misleading in its breezy confidence. The aggregate hides enormous variation. The Industrial Revolution eventually raised living standards for nearly everyone, but the first several decades were brutal for displaced artisans and agricultural workers. The gains took generations to materialize. Workers alive during the transition didn't experience the long-run average. They experienced the short-run pain.

McCrory seems aware of this. His framing -- tasks, not jobs -- is an attempt to thread the needle between AI boosterism and AI alarmism. It's intellectually defensible. The question is whether it's politically and socially sufficient. Because the people who lose 50% of their tasks to automation won't experience that as a theoretical reallocation. They'll experience it as a demotion, a pay cut, or an anxious period of retraining while the mortgage comes due. The macroeconomic story may work out fine. The microeconomic story -- the individual story -- is where the damage concentrates.

Anthropic's decision to hire a chief economist signals that at least one major AI company is thinking about these questions seriously. Whether that thinking translates into meaningful action -- lobbying for retraining programs, sharing economic research publicly, advocating for transition support -- remains to be seen. Corporate research departments have a long history of producing sophisticated analysis that conveniently never threatens the parent company's business model. Other AI firms have taken different approaches.
OpenAI CEO Sam Altman has floated the idea of universal basic income as a response to AI-driven displacement, going so far as to fund a UBI pilot study through his personal investments. Google's DeepMind has published research on AI's economic effects but hasn't appointed a dedicated economist to the C-suite. Meta has largely avoided the labor question, focusing its public messaging on AI's creative and social applications.

The venture capital community, meanwhile, is betting heavily that AI will replace human labor at scale. Sequoia Capital, Andreessen Horowitz, and other top-tier firms have poured billions into AI startups whose explicit value proposition is doing what humans currently do, but cheaper and faster. The investment thesis and the reassuring public narrative exist in tension. You can't simultaneously tell investors that AI will automate vast swaths of the economy and tell workers that their jobs are safe. McCrory's task-versus-job distinction is the bridge the industry is trying to build between those two messages. It's clever. It may even be correct in a narrow technical sense. But it asks a lot of the workers standing on it.

The coming years will test the framework severely. As AI models grow more capable -- Anthropic's Claude 3.5 Sonnet already matches or exceeds human performance on many coding, analysis, and writing benchmarks -- the boundary between "automating a task" and "automating a job" will blur. When 80% of a job's tasks can be done by a machine, the remaining 20% may not justify a full-time salary. Employers will consolidate roles. Teams of ten will become teams of three, each augmented by AI tools. That's not mass unemployment. But it's not business as usual, either.

And then there's the second-order question that McCrory only partially addressed: What about the jobs that AI creates?
The optimistic case holds that entirely new categories of employment will emerge, just as the internet spawned web developers, social media managers, and SEO specialists. Early signs are visible. "Prompt engineer" was barely a job title in 2022; by 2025, it commands six-figure salaries at major tech firms. AI safety researcher, model evaluator, data curator -- these are genuinely new roles. But their number is small relative to the potential displacement, and they tend to require high levels of technical skill, which limits who can access them.

The distributional problem again. The workers most likely to lose tasks to AI -- mid-level knowledge workers -- are not the same people most likely to land the new AI-adjacent roles. The former group is broad and diverse. The latter is narrow and specialized. Bridging that gap requires the kind of large-scale retraining infrastructure that no country has yet built.

McCrory's analysis is valuable precisely because it comes from inside the industry. He knows the technology's capabilities better than most academic economists. He also knows the incentive structures better than most outside observers. When he says the transition will be difficult but manageable, that's worth taking seriously -- as one data point among many, not as the final word.

The final word, if there is one, will be written by policymakers, educators, and workers themselves. AI companies can model the risks and publish white papers. They can hire economists and fund research. But the actual work of managing a labor market transition -- building retraining programs, reforming education systems, designing social safety nets for an era of accelerating automation -- falls to institutions that move far more slowly than the technology they're responding to. That gap between technological speed and institutional speed is the real danger. Not that AI will kill all the jobs. But that it will change them faster than we can adapt. McCrory knows this.
Whether his employer -- and its peers -- will do anything meaningful about it is the question that matters most.

Anthropic
WebProNews, 25d ago
Anthropic's Chief Economist Says AI Won't Kill Jobs -- It Will Redraw the Map of Who Does What

Jamie Dimon Is Planning for Chaos -- And Thinks You Should Too

Jamie Dimon has never been one to sugarcoat. But his latest round of public commentary carries an edge that even by his standards feels unusually pointed -- a billionaire banker standing at the helm of America's largest financial institution, warning that the global economy is threading a needle between inflation, geopolitical fracture, and what he calls a potential "kerfuffle" in the Treasury market that could force the Federal Reserve's hand.

In an interview aired on Fox Business and reported by Yahoo Finance, the JPMorgan Chase CEO said his bank is actively preparing for the possibility of a disruption in the U.S. Treasury market -- an event that, if it materialized, would send shockwaves through virtually every corner of global finance. "We are prepared for it," Dimon said. Not hedging. Not speculating. Preparing.

That distinction matters. JPMorgan Chase, with roughly $4 trillion in assets, doesn't prepare for hypotheticals lightly. When Dimon says the firm has contingency plans in place for Treasury market volatility, he's signaling that the probability has crossed a threshold from theoretical to operationally relevant. And when he suggests the Fed would likely step in to stabilize such a situation -- but only after letting markets "have a kerfuffle" first -- he's offering a remarkably candid read on how Washington's monetary authorities might respond to a crisis of their own making.

The Treasury Market's Fragile Foundations

The U.S. Treasury market is the bedrock of global finance. It's the benchmark against which nearly all other assets are priced, the collateral underpinning trillions in derivatives and repo transactions, and the safe haven investors flee to when everything else falls apart. So when cracks appear in that foundation, the implications are systemic. Dimon's concerns aren't abstract.
The Treasury market has been under structural stress for years, a byproduct of post-2008 bank regulations that limit dealer balance sheet capacity, the Federal Reserve's quantitative tightening program, and a federal government issuing debt at a pace that would have been unthinkable a decade ago. The U.S. national debt now exceeds $36 trillion. Annual deficits are running north of $1.8 trillion. And the Congressional Budget Office projects those numbers will only grow.

Against that backdrop, the mechanics of Treasury auctions -- who buys, at what price, and with what enthusiasm -- have become a source of genuine anxiety among market participants. Several recent auctions have shown signs of weakening demand, particularly from foreign central banks that historically absorbed large portions of new issuance. Meanwhile, hedge funds have become increasingly dominant buyers, often employing highly leveraged basis trades that supply liquidity in calm markets but add fragility in stressed ones.

Dimon has flagged this dynamic before. But his tone has sharpened. He's no longer merely cautioning about fiscal deficits as a long-term drag. He's talking about near-term market events that could require emergency intervention from the central bank.

The Fed, for its part, has maintained that the Treasury market is functioning normally. Chair Jerome Powell has acknowledged periods of volatility but has repeatedly expressed confidence in the market's underlying resilience. Dimon, it seems, is less convinced. Or at least less willing to assume the best.

His comment about the Fed allowing a "kerfuffle" before stepping in is particularly telling. It suggests Dimon believes the central bank would prefer not to intervene preemptively -- that it would need political and market cover to act, and that cover would come only after visible distress. A controlled burn, not a firebreak.
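The fragility that leverage adds to the basis trade can be sketched with back-of-the-envelope arithmetic. The figures below are invented for illustration -- the leverage ratio and the size of the adverse move are hypothetical, not reported positions.

```python
def equity_hit(leverage: float, adverse_move: float) -> float:
    """Fraction of a fund's equity lost on an adverse price move.

    With leverage L, equity backs only 1/L of the notional position,
    so a move of m (as a fraction of notional) erodes L * m of equity.
    """
    return leverage * adverse_move

# Hypothetical basis-trade fund: 50x leverage, and a 0.5% adverse move
# in the cash-futures basis. The tiny move becomes a large equity loss,
# which is what triggers margin calls and fire sales in a stress event.
loss_fraction = equity_hit(leverage=50, adverse_move=0.005)
print(f"{loss_fraction:.0%} of equity lost")  # prints "25% of equity lost"
```

The same multiplication works in reverse, which is why the trade is profitable in calm markets: leverage scales a sliver of basis income into a meaningful return on equity, right up until the move goes the wrong way.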
That's a calculated bet by policymakers, and it's one that makes the CEO of the country's biggest bank uncomfortable enough to say so publicly.

What would such a disruption look like? It could start with a failed or poorly received Treasury auction, triggering a spike in yields that cascades through mortgage rates, corporate borrowing costs, and equity valuations. It could manifest as a sudden unwinding of leveraged positions in the basis trade, forcing fire sales and margin calls. Or it could emerge from a geopolitical shock -- a sudden sell-off by a major foreign holder of Treasuries -- that tests the market's ability to absorb large volumes without dislocation. Any of these scenarios would be ugly. Combined, they'd be devastating.

Dimon's Broader Warning: Tariffs, Stagflation, and the Price of Uncertainty

The Treasury market isn't the only thing keeping Dimon up at night. In the same set of remarks, he addressed the macroeconomic uncertainty created by the Trump administration's tariff policies, warning that the current trade posture risks pushing the U.S. economy toward stagflation -- the toxic combination of stagnant growth and persistent inflation that defined the late 1970s and proved extraordinarily difficult to unwind.

Dimon has been vocal about tariffs for months. He's acknowledged that some degree of trade rebalancing with China and other partners may be warranted. But he's argued consistently that the execution matters enormously, and that broad, unpredictable tariff actions create a fog of uncertainty that chills business investment and consumer confidence alike.

The numbers are starting to bear him out. Consumer sentiment surveys have weakened. Business capital expenditure plans have softened. And inflation expectations -- one of the metrics the Fed watches most closely -- have ticked higher, driven in part by anticipated tariff-related price increases on imported goods. JPMorgan's own economists have raised their probability estimates for a U.S. recession in 2025. Not a certainty. But no longer a tail risk either.

Dimon's framing is deliberate. He's not predicting doom. He's insisting on preparation. There's a difference, and it's one that Wall Street's senior-most statesman has honed over decades of crisis management -- from the 2008 financial collapse, which JPMorgan navigated better than most, to the pandemic-era market seizure of March 2020, when Treasury market liquidity briefly evaporated in ways that alarmed even the most seasoned traders.

That 2020 episode, in fact, may be the closest recent analogue to what Dimon is warning about now. In that instance, the Fed intervened with overwhelming force, purchasing hundreds of billions of dollars in Treasuries to restore market functioning. It worked. But it also expanded the Fed's balance sheet to unprecedented levels and created a precedent that markets now rely on -- the implicit assumption that the central bank will always backstop Treasury market dysfunction.

Dimon appears to be questioning whether that assumption is as reliable as markets believe. His comment about a "kerfuffle" preceding intervention implies a gap -- a window of genuine distress before the cavalry arrives. And in modern markets, where algorithmic trading and leveraged positions can amplify moves in milliseconds, even a brief gap can inflict serious damage.

So what is JPMorgan actually doing to prepare? Dimon didn't offer operational specifics in his public remarks, and the bank's spokespeople have declined to elaborate beyond the CEO's comments. But industry observers can make informed inferences. The bank is likely stress-testing its trading books against extreme yield scenarios, building cash buffers, reviewing counterparty exposures -- particularly to hedge funds active in the basis trade -- and ensuring its operations can handle elevated volumes during periods of market stress.
These are the blocking-and-tackling exercises that large banks conduct routinely, but the intensity and specificity of the preparation reflect the seriousness of the perceived risk.

Other major banks are watching closely. Goldman Sachs, Morgan Stanley, and Citigroup have all made public comments in recent weeks about Treasury market risks, though none with the bluntness Dimon employed. Bank of America's research team published a note in May warning that the basis trade's growing footprint in the Treasury market represents a "systemic vulnerability" that regulators have been too slow to address.

The regulatory angle is important. The Securities and Exchange Commission finalized rules in late 2023 aimed at increasing central clearing of Treasury transactions, a reform designed to reduce counterparty risk and improve market transparency. But implementation timelines stretch into 2025 and 2026, and critics argue the rules don't go far enough to address the leverage embedded in hedge fund Treasury positions. The Financial Stability Oversight Council -- the inter-agency body created after 2008 to monitor systemic risks -- has flagged Treasury market structure as a priority concern, but concrete action has been slow.

Dimon has long argued that bank regulations, particularly the supplementary leverage ratio, artificially constrain the ability of large dealers to intermediate in the Treasury market, reducing liquidity precisely when it's most needed. He's pushed for regulatory reform that would exempt Treasury holdings from certain capital requirements, arguing this would allow banks to step in as buyers during periods of stress. Regulators have been sympathetic to the argument in principle but reluctant to act, wary of appearing to weaken post-crisis safeguards. The irony is thick. Rules designed to make the financial system safer may be contributing to the very fragility Dimon is warning about.

And then there's the political dimension.
The current fiscal trajectory -- massive deficits, rising debt service costs, and no credible plan for consolidation from either party -- is the underlying driver of Treasury market stress. Dimon has called the deficit situation "the most predictable crisis in history," a phrase he's used repeatedly and with evident frustration. He's urged lawmakers to address it before markets force the issue. So far, those pleas have gone unheeded.

The bond market, historically, has been the ultimate disciplinarian of fiscal excess. When governments borrow too much, bond investors demand higher yields, raising the cost of debt service and eventually forcing austerity. That mechanism has operated with brutal efficiency in countries like Greece, Italy, and Argentina. The United States has been largely exempt from such discipline, thanks to the dollar's reserve currency status and the unmatched depth and liquidity of the Treasury market. But exemptions aren't permanent. And Dimon seems to be suggesting that the margin of safety is narrower than most people assume.

His willingness to say so publicly -- repeatedly, forcefully, and with the credibility of someone who oversees a $4 trillion balance sheet -- is itself a signal. CEOs of this stature don't issue warnings for sport. They do it when they believe the risks are real, imminent, and insufficiently appreciated by the people with the power to mitigate them. Whether Washington listens is another matter entirely.

For market participants, the takeaway is practical: the man running America's biggest bank thinks a Treasury market disruption is plausible enough to prepare for. That alone should inform risk management decisions across the industry -- from asset allocation to liquidity planning to counterparty due diligence. Not because Dimon is always right. But because when the most connected banker in the world says he's bracing for turbulence, ignoring him is a choice that comes with consequences.
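The disciplinarian mechanism above is, at bottom, simple arithmetic: higher yields feed into debt-service cost as maturing debt rolls over. A rough sketch, using the article's ~$36 trillion figure and a hypothetical one-percentage-point yield rise (the rise and the assumption of a full rollover are illustrative, not projections):

```python
# Back-of-the-envelope debt-service arithmetic. Illustrative only:
# assumes the entire stock eventually refinances at the higher rate.
DEBT_OUTSTANDING = 36e12   # ~$36 trillion outstanding, per the article
YIELD_RISE = 0.01          # hypothetical 1-percentage-point increase

# Added annual interest cost once the stock has rolled over:
added_annual_cost = DEBT_OUTSTANDING * YIELD_RISE
print(f"${added_annual_cost / 1e9:,.0f}B per year")  # prints "$360B per year"
```

In practice the cost phases in over years as bills, notes, and bonds mature on different schedules, but the endpoint scales linearly with both the debt stock and the yield move, which is why bond investors hold the leverage in this standoff.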

AgilityCHAOS
WebProNews, 25d ago
Jamie Dimon Is Planning for Chaos -- And Thinks You Should Too