Anthropic CEO Dario Amodei has a warning for AI companies: Stop telling people that...

The Times of India · 11h ago

Dario Amodei sat down for lunch at a San Francisco Italian restaurant in April, ate three of the four shared bread rolls, and told the Financial Times something that most of his peers in the AI industry are still tiptoeing around: the disruption is real, it's coming fast, and pretending otherwise is actively eroding public trust in the technology. He wasn't there to sugarcoat things. He was there to say that the AI industry has a credibility problem -- and that the only way out of it is to stop promising a soft landing while quietly accelerating the turbulence. For an industry used to talking about abundance and breakthrough, it was a notably uncomfortable thing to say out loud.

Amodei, who leads Anthropic -- currently riding a $380 billion valuation and a wave of buzz around its powerful Claude Mythos model -- said he believes AI could eliminate around 50% of entry-level white-collar jobs within five years. That includes junior tech roles, early-career lawyers, consultants, and finance professionals. His message to the broader industry wasn't panic, but accountability: stop overselling the upside while glossing over what's being lost.

"We should not deny that the disruption is going to happen," Amodei told the FT. "We just have to make the positive effect so large that we have a tool to address the disruption."

His candor stands out in an industry that has historically leaned on optimism as a default. OpenAI CEO Sam Altman, meanwhile, released a 13-page policy document in early April calling for a kind of New Deal for the AI era -- proposing four-day workweeks, public wealth funds, and portable benefits to cushion the transition to what he calls "superintelligence." Critics were quick to note the irony: the companies creating the displacement are now pitching themselves as the architects of the safety net. One analyst called it "comms work to provide cover for regulatory nihilism."

Amodei's concern isn't just theoretical.
Anthropic's own labor market research, published in March, found that computer programmers, customer service representatives, and financial analysts are among the occupations most exposed to AI automation right now. A separate survey of 81,000 Claude users, released in April, found that early-career workers expressed significantly higher anxiety about job displacement than senior professionals -- consistent with what's already showing up in hiring data. Among workers aged 22 to 25, the rate of entry into high-exposure occupations has dropped by around 14% compared to pre-ChatGPT levels.

Federal Reserve researchers, using an expanded definition of software workers that includes contractors and coders outside the tech sector, found roughly half a million fewer coding jobs exist today than pre-AI trends would have predicted. The job market isn't in freefall, but the floor is quietly shifting underneath it -- and it's the people just starting out who are feeling it first.

Amodei's framing keeps returning to a single phrase: AI can only "diffuse at the speed of trust." And right now, trust is running short.

Part of the problem, he argues, is that AI companies -- his own included -- haven't fully delivered the promised benefits yet. The productivity gains are real in pockets, particularly in software development and certain knowledge work, but they haven't translated into broadly visible improvements in living standards. A recent survey of over 1,000 executives found that 80% said AI wasn't having any measurable impact on their productivity or headcount. Until the benefits show up in ways ordinary people can feel, he says, skepticism is not only understandable but rational.

"Is that just propaganda? Is that just vapourware that's not going to happen? We actually have to make it happen," he said. That's a more humble posture than most AI CEOs tend to adopt publicly.

It also sets up an uncomfortable tension.
Anthropic this year unveiled Claude Mythos -- a model that cybersecurity researchers say represents a genuine step change in automated vulnerability discovery, capable of finding and exploiting zero-day flaws in major operating systems without human guidance. The company's AI coding tools have helped trigger a wave of stock selloffs in the software sector; nearly $3 trillion in market cap has evaporated since October, as investors worry AI agents are quietly eating the application layer. Whether that counts as "delivering the benefits" depends a lot on who you ask and whose job is next.

Not everyone is convinced the industry's newfound policy enthusiasm is the real thing. Soribel Feliz, a former U.S. Senate AI policy advisor, noted that the ideas circulating -- shared prosperity, democratized access, worker protections -- are essentially the same framework discussed since ChatGPT launched in late 2022. "I have it in my handwritten notes," she told Fortune. "All of this was already said, all of it." The problem, she said, isn't the vocabulary. It's that nobody has built the actual mechanisms to back it up.

Economists who have spent careers studying technological transitions are also skeptical of the timeline predictions coming from AI executives. Daron Acemoglu, Erik Brynjolfsson, and David Autor -- some of the most cited labor economists working on this question -- have pushed back on sweeping job-loss projections, arguing that the evidence so far points to slower, more uneven disruption than the industry tends to advertise.

Amodei seems at least partially aware of this gap. He wants AI regulation modeled on how society handles cars and planes -- acknowledging economic value while enforcing real safety standards.
He's put $20 million behind a PAC pushing for stricter AI safety laws and has clashed with his own government over attempts to restrict Anthropic's Pentagon contracts.

Whether any of that adds up to meaningful protection for the entry-level analyst staring down an uncertain career path is still an open question. But at minimum, one of the most powerful people in AI is saying out loud what others keep leaving in the footnotes: the disruption isn't something that happens to other industries. It's already here.

Originally published by The Times of India
