
Chris Yin spends $2 million a month on AI. Not annually. Monthly.
The CEO of Swan, a startup that builds AI-powered software for the debt collection industry, disclosed the figure in a recent interview with Business Insider, offering one of the most concrete glimpses yet into how deeply -- and expensively -- young companies are embedding large language models into their core products. Swan's primary vendor is Anthropic, the maker of Claude, and the company's AI expenditure now rivals what many startups spend on their entire headcount.
Let that number sit for a moment.
Swan, which was founded in 2022 and has raised over $54 million in venture capital, uses Anthropic's models to power AI agents that handle phone calls, negotiate payment plans, and manage communications with consumers who owe debts. The company essentially replaced a function traditionally performed by armies of low-wage call center workers with AI systems that can operate around the clock, in multiple languages, without breaks or benefits. It's a textbook case of the kind of labor displacement that economists have been warning about -- and that investors have been salivating over -- since ChatGPT burst into public consciousness in late 2022.
But here's the thing about building your entire business on top of someone else's AI model: the meter never stops running.
Yin told Business Insider that Swan's AI costs have grown in tandem with revenue, which he said is now in the "tens of millions" annually. The $2 million monthly Anthropic bill represents a significant portion of the company's operating expenses, though Yin framed it as a worthwhile trade-off. Each AI-handled interaction costs a fraction of what a human agent would, he argued, and the system can scale in ways that a traditional call center simply cannot. Swan's AI agents reportedly handle millions of consumer interactions per month, a volume that would require thousands of human employees to match.
The math, at least on paper, works. But it also reveals an uncomfortable dependency.
Swan is far from alone in confronting ballooning AI infrastructure costs. Across Silicon Valley and beyond, startups that built their products on foundation models from Anthropic, OpenAI, and Google are discovering that API costs can become the single largest line item on their income statements -- sometimes eclipsing payroll, office space, and traditional cloud computing combined. The phenomenon has created a new category of financial risk that venture capitalists are only beginning to grapple with seriously.
A growing number of AI-native startups now spend between 20% and 50% of their gross revenue on model inference costs, according to estimates from multiple venture capital firms. For companies like Swan that make AI the product rather than a feature, the percentage can climb even higher. And unlike traditional software costs, which tend to decline on a per-unit basis as a company scales, AI inference costs scale roughly linearly with usage. More customers means more API calls. More API calls means a bigger bill from Anthropic or OpenAI.
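The linear-scaling problem is easy to see in a back-of-the-envelope calculation. The sketch below uses made-up per-interaction figures (they are illustrative assumptions, not Swan's actual economics) to show why the inference-cost share of revenue stays flat no matter how large the business grows:

```python
# Illustrative sketch: why inference costs scale linearly with usage.
# COST_PER_INTERACTION and REVENUE_PER_INTERACTION are assumed numbers
# for demonstration only, not Swan's or Anthropic's actual figures.

COST_PER_INTERACTION = 0.04     # assumed blended API cost per AI call, USD
REVENUE_PER_INTERACTION = 0.10  # assumed revenue attributable per call, USD

def inference_cost_share(interactions: int) -> float:
    """Fraction of gross revenue consumed by model inference."""
    revenue = interactions * REVENUE_PER_INTERACTION
    cost = interactions * COST_PER_INTERACTION
    return cost / revenue

# Unlike traditional software, the cost share does not shrink with scale:
for n in (1_000, 1_000_000, 50_000_000):
    print(n, round(inference_cost_share(n), 2))
```

At these assumed prices the share is stuck at 40% whether the company handles a thousand interactions or fifty million -- which is exactly the dynamic that makes AI-native margins look so different from classic SaaS.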
This is the structural tension at the heart of the current AI startup boom.
Traditional SaaS companies became enormously profitable precisely because software has near-zero marginal cost. Build it once, sell it a million times. The gross margins are extraordinary -- often 80% or higher. AI-native companies don't enjoy that luxury. Every customer interaction, every generated response, every model inference consumes compute. And compute costs money.
Anthropic, for its part, has been raising prices even as it ships more capable models. The company's Claude 3.5 Sonnet and Claude 3 Opus models carry different pricing tiers, with the most capable models costing significantly more per token. For a company like Swan that requires high-quality, nuanced outputs -- negotiating debt repayment plans is not a task where you want the model to hallucinate or sound robotic -- downgrading to a cheaper model isn't always an option. Yin acknowledged to Business Insider that Swan has experimented with routing simpler tasks to less expensive models while reserving the most capable (and costly) Claude variants for complex negotiations.
Smart. But still expensive.
The broader industry context makes Swan's situation even more telling. Anthropic itself has been on a fundraising tear, having raised over $13 billion to date, with its most recent round valuing the company at $61.5 billion. Much of that capital goes directly into training and serving models -- the same models that companies like Swan are paying handsomely to use. There's a certain circularity to the arrangement: venture money flows into Anthropic, which builds models, which startups pay to access using their own venture money, which eventually needs to be justified by actual revenue from actual customers.
The question is whether the unit economics ever truly pencil out at maturity.
Some investors believe they will. The argument goes like this: inference costs are falling rapidly as hardware improves and model architectures become more efficient. What costs $2 million a month today might cost $200,000 in three years. Meanwhile, the revenue Swan generates from its AI-powered debt collection services should continue to grow as the company signs more clients. The gap between cost and revenue will widen in Swan's favor over time, the optimists say, creating the same kind of margin expansion that made traditional SaaS companies so lucrative.
There's historical precedent for this view. Cloud computing costs fell dramatically over the past two decades, turning Amazon Web Services from an expensive experiment into the backbone of modern software. Early AWS customers who gritted their teeth through high bills in 2008 were rewarded with plummeting per-unit costs by 2015. The AI inference market could follow a similar trajectory -- Nvidia's next-generation chips promise significant improvements in performance per watt, and companies like Groq and Cerebras are building specialized hardware designed to make inference cheaper.
But the bears have a counterargument. And it's a compelling one.
Unlike cloud storage or basic compute, AI model costs aren't driven solely by hardware. They're also driven by the complexity and size of the models themselves. As Anthropic, OpenAI, and Google race to build ever-more-capable systems, the models are getting larger and more expensive to run, not smaller. Claude's next generation will almost certainly be more capable than today's -- and almost certainly more expensive to serve. So even if the hardware gets cheaper, the models may get pricier, creating a treadmill effect where startups are perpetually chasing cost reductions that never quite materialize as fully as projected.
There's also the concentration risk. Swan is building its core product on Anthropic's models. If Anthropic raises prices, changes its terms of service, or experiences an outage, Swan's entire business is affected. It's the API dependency problem writ large -- the same concern that plagued companies built on top of Twitter's API or Facebook's platform in earlier eras of tech. Except the stakes are higher now, because AI isn't a feature for Swan. It is the product.
Yin seems aware of this risk. He told Business Insider that Swan maintains the ability to switch between model providers, and the company has tested alternatives from OpenAI and open-source options. But switching costs in AI are nontrivial. Each model has different strengths, different failure modes, different prompting requirements. A conversation flow optimized for Claude won't necessarily perform the same way on GPT-4o. And in a domain like debt collection, where regulatory compliance and consumer protection laws impose strict requirements on what an AI agent can and cannot say, revalidating an entire system on a new model is a significant undertaking.
The debt collection industry itself adds another layer of complexity. It's one of the most heavily regulated sectors in consumer finance, governed by the Fair Debt Collection Practices Act at the federal level and a patchwork of state laws that vary widely. The Consumer Financial Protection Bureau has been paying increasing attention to the use of AI in debt collection, and several consumer advocacy groups have raised concerns about AI systems that might mislead or pressure vulnerable consumers. Swan's bet is that AI can actually improve compliance -- a well-trained model doesn't lose its temper, doesn't make unauthorized threats, and can be programmed to follow scripts with perfect consistency. But regulators haven't fully weighed in yet, and the legal framework for AI-conducted debt collection remains unsettled.
So Swan is navigating several risks at once: spiraling AI costs, regulatory uncertainty, platform dependency, and the fundamental challenge of building a profitable business on top of someone else's technology. That's a lot of risk for a company that's raised $54 million.
And yet, the investors keep writing checks.
Swan's fundraising success reflects a broader conviction in the venture capital community that AI-native companies -- despite their unusual cost structures -- represent the next great wave of enterprise software. Firms like Andreessen Horowitz, Sequoia, and others have been pouring billions into startups that use large language models to automate functions previously performed by humans. The total addressable market for AI-driven automation in financial services alone is estimated at tens of billions of dollars, and debt collection -- a $20 billion industry in the United States -- is considered particularly ripe for disruption because it's labor-intensive, margin-thin, and widely despised by consumers and companies alike.
The pitch is seductive. Replace humans with AI. Cut costs by 60% or more. Scale instantly. Handle compliance automatically. No turnover, no training, no HR headaches.
But the $2 million monthly AI bill complicates the narrative. It suggests that while AI can indeed replace human labor, it doesn't eliminate costs -- it shifts them. Instead of paying salaries and benefits to call center workers, Swan pays Anthropic. Instead of managing a workforce, it manages an API relationship. The nature of the expense has changed. The magnitude, apparently, has not -- at least not yet.
This is the reality that many AI startups are quietly confronting in 2025 and into 2026. The hype cycle promised that AI would make everything cheaper. For end customers, that may prove true over time. But for the companies building AI-first products, the economics are more complicated than the pitch decks suggested. Gross margins at many AI-native startups hover between 40% and 60% -- respectable by most industry standards, but a far cry from the 80%+ margins that made traditional SaaS companies so attractive to public market investors.
Whether those margins expand as inference costs decline or compress as models grow more expensive will likely determine which AI startups survive and which flame out. For Swan, the bet is that its early investment in AI-powered debt collection will pay off as the technology matures and costs come down. For Anthropic, the bet is that companies like Swan will keep paying -- and that new customers will join them -- in sufficient volume to justify the tens of billions being spent on model development.
Both bets could pay off. Both could fail. The only certainty is the bill. Two million dollars. Every month. And rising.