
Peter McCrory has one of the stranger titles in Silicon Valley. As chief economist at Anthropic, the $60 billion AI company behind the Claude chatbot, he's tasked with answering a question that has haunted every industrial transformation since the spinning jenny: What happens to the workers?
His answer, laid out in a recent interview with Fortune, is more nuanced than the doom-and-gloom headlines suggest. McCrory doesn't think artificial intelligence will annihilate employment. But he doesn't think the transition will be painless, either. The truth, as he frames it, sits in the uncomfortable middle -- a place where certain tasks vanish, new ones emerge, and the speed of the shift determines whether societies adapt or fracture.
That framing matters. It matters because Anthropic isn't just any AI lab. It's the company founded by former OpenAI executives Dario and Daniela Amodei, built explicitly around the idea of AI safety. When its in-house economist talks about labor market disruption, his remarks carry dual weight: part corporate positioning, part genuine analytical effort. McCrory's role exists precisely because Anthropic wants to be seen as the responsible actor in a field where responsibility has been in short supply.
So what did he actually say?
The core argument is structural. McCrory told Fortune that AI will primarily automate tasks, not entire jobs. A radiologist won't disappear. But the hours she spends scanning routine images might. A junior lawyer won't be fired outright. But the document review that consumed 60% of his week could be handled by a model in minutes. The distinction between task displacement and job displacement is one economists have drawn for years, most notably MIT's Daron Acemoglu and Boston University's Pascual Restrepo. McCrory is applying it specifically to the current wave of large language models and their multimodal successors.
The implication is profound. If AI eats tasks rather than jobs, then the labor market question becomes one of reallocation. Workers don't necessarily lose employment -- they lose specific responsibilities within their employment, and ideally gain new ones. McCrory pointed to a historical analogy: ATMs didn't eliminate bank tellers. They reduced the number of tellers per branch, but banks opened more branches because operating costs fell, and tellers shifted toward relationship-based services. The net effect on teller employment was roughly neutral for decades.
But here's where McCrory's optimism meets its limits. And he acknowledged them.
Speed. The pace of AI adoption could outstrip the economy's ability to absorb displaced workers into new roles. Previous technological transitions -- electrification, computerization -- unfolded over decades. The current AI wave is compressing that timeline dramatically. GPT-4 arrived in March 2023. By early 2025, AI agents were writing code, managing customer service interactions, and drafting regulatory filings. Anthropic's own Claude model has been integrated into enterprise workflows at companies like Amazon, Notion, and DuckDuckGo. The adoption curve isn't gradual. It's steep.
McCrory conceded that this velocity creates genuine risk. If task displacement happens faster than task creation, you get a painful gap -- a period where workers with obsolete skills can't yet access the new opportunities that AI is theoretically generating. That gap could last years. And it won't be evenly distributed.
Which brings up the distributional question. Not all workers face the same exposure. A growing body of research suggests that AI's impact falls heaviest on white-collar, knowledge-intensive work -- precisely the kind of employment that was supposed to be safe from automation. A widely cited 2023 paper from OpenAI and the University of Pennsylvania estimated that roughly 80% of the U.S. workforce could see at least 10% of their tasks affected by large language models. For approximately 19% of workers, the exposure was 50% or more. Those aren't factory floor positions. They're accountants, writers, paralegals, financial analysts, software developers.
McCrory didn't dispute these findings. He argued instead that exposure doesn't equal elimination. A task being automatable doesn't mean it will be automated immediately, or that the worker performing it will be let go. Organizational inertia, regulatory constraints, trust deficits, and the sheer messiness of real-world implementation all slow the process down. Companies don't flip a switch. They pilot programs, encounter edge cases, negotiate with unions, and deal with customers who still want a human on the line.
Fair enough. But the trajectory is clear.
Recent data points reinforce the tension McCrory is trying to manage. In March 2025, a report from the McKinsey Global Institute projected that generative AI could automate activities accounting for up to 30% of hours currently worked in the U.S. economy by 2030. That's an acceleration from their previous estimates. Separately, the International Monetary Fund published research in January 2024 suggesting that AI would affect nearly 40% of global employment, with advanced economies more exposed than developing ones because their labor markets are more heavily weighted toward cognitive tasks.
The policy response has been sluggish. In the United States, there is no federal framework for managing AI-driven workforce transitions. The Biden administration issued an executive order on AI in October 2023 that touched on workforce issues, but it was largely aspirational. The Trump administration, which took office in January 2025, has shown more interest in deregulating AI development than in cushioning its labor market effects. Europe's AI Act, which took partial effect in 2025, focuses on safety and transparency rather than employment impacts. No major economy has a comprehensive plan for retraining workers displaced by generative AI.
McCrory, to his credit, didn't pretend that market forces alone would sort this out. He told Fortune that proactive investment in education and retraining would be necessary, and that both government and the private sector had roles to play. He also noted that Anthropic itself was investing in research on economic impacts -- hence his job title.
Still, the skeptic's question writes itself. Can we trust an AI company's economist to give us an unbiased assessment of AI's labor risks? McCrory works for a firm that is valued at tens of billions of dollars specifically because investors believe AI will transform -- and in many cases, replace -- human labor. The financial incentive to downplay disruption is enormous. If Anthropic's economist said, "Yes, this technology will cause mass unemployment," the company's valuation, recruiting pipeline, and regulatory standing would all take hits.
That doesn't mean McCrory is wrong. It means his analysis should be weighed alongside independent research, not treated as gospel.
And the independent research is increasingly sobering. Acemoglu, who shared the 2024 Nobel Prize in Economics for research on how institutions shape prosperity, and who has studied automation and labor markets for decades, has been notably more cautious than Silicon Valley about AI's net benefits. In a 2024 paper, he estimated that AI would increase U.S. productivity by only about 0.5% over the next decade -- far below the transformative claims made by AI companies. He argued that the technology's economic benefits are concentrated in a narrow set of tasks and that the costs of displacement are being systematically underestimated.
Restrepo, Acemoglu's frequent collaborator, has made a related point: automation doesn't automatically generate new tasks for displaced workers. That reinvention requires deliberate investment, institutional creativity, and time. When automation outpaces reinvention, wages fall, inequality rises, and political instability follows. The populist upheavals of the 2010s, both scholars have argued, were partly rooted in the failure to manage earlier waves of automation and globalization.
The AI industry's preferred narrative -- that technology always creates more jobs than it destroys -- is historically true in aggregate but misleading in its breezy confidence. The aggregate hides enormous variation. The Industrial Revolution eventually raised living standards for nearly everyone, but the first several decades were brutal for displaced artisans and agricultural workers. The gains took generations to materialize. Workers alive during the transition didn't experience the long-run average. They experienced the short-run pain.
McCrory seems aware of this. His framing -- tasks, not jobs -- is an attempt to thread the needle between AI boosterism and AI alarmism. It's intellectually defensible. The question is whether it's politically and socially sufficient.
Because the people who lose 50% of their tasks to automation won't experience that as a theoretical reallocation. They'll experience it as a demotion, a pay cut, or an anxious period of retraining while the mortgage comes due. The macroeconomic story may work out fine. The microeconomic story -- the individual story -- is where the damage concentrates.
Anthropic's decision to hire a chief economist signals that at least one major AI company is thinking about these questions seriously. Whether that thinking translates into meaningful action -- lobbying for retraining programs, sharing economic research publicly, advocating for transition support -- remains to be seen. Corporate research departments have a long history of producing sophisticated analysis that conveniently never threatens the parent company's business model.
Other AI firms have taken different approaches. OpenAI CEO Sam Altman has floated the idea of universal basic income as a response to AI-driven displacement, going so far as to fund a UBI pilot study through his personal investments. Google's DeepMind has published research on AI's economic effects but hasn't appointed a dedicated economist to the C-suite. Meta has largely avoided the labor question, focusing its public messaging on AI's creative and social applications.
The venture capital community, meanwhile, is betting heavily that AI will replace human labor at scale. Sequoia Capital, Andreessen Horowitz, and other top-tier firms have poured billions into AI startups whose explicit value proposition is doing what humans currently do, but cheaper and faster. The investment thesis and the reassuring public narrative exist in tension. You can't simultaneously tell investors that AI will automate vast swaths of the economy and tell workers that their jobs are safe.
McCrory's task-versus-job distinction is the bridge the industry is trying to build between those two messages. It's clever. It may even be correct in a narrow technical sense. But it asks a lot of the workers standing on it.
The coming years will test the framework severely. As AI models grow more capable -- Anthropic's Claude 3.5 Sonnet already matches or exceeds human performance on many coding, analysis, and writing benchmarks -- the boundary between "automating a task" and "automating a job" will blur. When 80% of a job's tasks can be done by a machine, the remaining 20% may not justify a full-time salary. Employers will consolidate roles. Teams of ten will become teams of three, each augmented by AI tools. That's not mass unemployment. But it's not business as usual, either.
And then there's the second-order question that McCrory only partially addressed: What about the jobs that AI creates? The optimistic case holds that entirely new categories of employment will emerge, just as the internet spawned web developers, social media managers, and SEO specialists. Early signs are visible. "Prompt engineer" was barely a job title in 2022; by 2025, it commands six-figure salaries at major tech firms. AI safety researcher, model evaluator, data curator -- these are genuinely new roles. But their number is small relative to the potential displacement, and they tend to require high levels of technical skill, which limits who can access them.
The distributional problem again. The workers most likely to lose tasks to AI -- mid-level knowledge workers -- are not the same people most likely to land the new AI-adjacent roles. The former group is broad and diverse. The latter is narrow and specialized. Bridging that gap requires the kind of large-scale retraining infrastructure that no country has yet built.
McCrory's analysis is valuable precisely because it comes from inside the industry. He knows the technology's capabilities better than most academic economists. He also knows the incentive structures better than most outside observers. When he says the transition will be difficult but manageable, that's worth taking seriously -- as one data point among many, not as the final word.
The final word, if there is one, will be written by policymakers, educators, and workers themselves. AI companies can model the risks and publish white papers. They can hire economists and fund research. But the actual work of managing a labor market transition -- building retraining programs, reforming education systems, designing social safety nets for an era of accelerating automation -- falls to institutions that move far more slowly than the technology they're responding to.
That gap between technological speed and institutional speed is the real danger. Not that AI will kill all the jobs. But that it will change them faster than we can adapt. McCrory knows this. Whether his employer -- and its peers -- will do anything meaningful about it is the question that matters most.