
Anthropic, the San Francisco-based artificial intelligence company behind the Claude chatbot, has inserted itself into a copyright lawsuit that could determine whether training AI models on copyrighted material is legal. The company filed an amicus brief this week with the U.S. Court of Appeals for the Third Circuit -- the court that hears appeals from the federal district court in Delaware where the case was decided -- urging judges to overturn a lower court ruling that declared AI training on copyrighted content does not qualify as fair use. If the appeals court upholds the original decision, it could upend the business model on which the modern AI industry has been built.
The case, Thomson Reuters Enterprise Centre GmbH v. Ross Intelligence Inc., began as a dispute between the legal publishing giant and a now-defunct legal research startup. Ross Intelligence had used content from Thomson Reuters' Westlaw database to train its own AI-powered legal search tool. A federal judge in Delaware ruled on summary judgment that this copying wasn't protected by fair use and that Ross had infringed Thomson Reuters' copyrights. The decision was narrow in scope -- a single district court ruling on a specific set of facts -- but its implications sent tremors through the AI industry. As Wired reported, Anthropic's brief argues that letting the ruling stand would set a dangerous precedent, one that threatens not just AI companies but the broader principle that machines can learn from existing works the way humans do.
Here's the core tension. AI companies have long maintained that ingesting copyrighted text, images, and code to train their models constitutes fair use -- a legal doctrine that permits limited use of copyrighted material without the rights holder's permission for purposes like commentary, education, and research. The argument is that training a model is "transformative" because the AI doesn't reproduce the original works; it learns patterns from them and generates entirely new outputs. Thomson Reuters convinced a judge otherwise. The court found that Ross Intelligence's copying was commercial, not transformative, and that it harmed Thomson Reuters' market for licensing its content.
Anthropic's amicus brief, filed alongside a group of AI researchers and organizations, doesn't mince words. According to Wired, the company warns that affirming the lower court's ruling would "imperil the training of AI models across sectors" and threaten the development of AI systems that benefit the public. The brief draws an analogy to human learning: just as a law student reads thousands of cases to develop legal reasoning skills without infringing copyright, an AI system processes text to learn linguistic patterns without copying the expression of any individual work. It's an analogy that copyright scholars have debated fiercely for years, and the Third Circuit may now have to rule on it directly.
The timing matters. Anthropic itself is a defendant in a separate copyright lawsuit brought by music publishers -- Universal Music Group, Concord, and ABKCO -- who allege that Claude was trained on copyrighted song lyrics. OpenAI faces claims from the New York Times. Stability AI has been sued by Getty Images. The list keeps growing. But none of those cases has produced an appellate ruling. Thomson Reuters v. Ross Intelligence is further along than almost any other AI copyright dispute in the federal courts, which is precisely why Anthropic and its allies are so anxious about the outcome.
A ruling from the Third Circuit -- likely the first federal appeals court to confront the question -- would carry significant weight, even though it wouldn't be binding nationwide. Other circuits would look to it for guidance. And the Supreme Court, if it eventually takes up an AI copyright case, would consider how the appellate courts have weighed in. So this isn't just about Ross Intelligence, a company that has already shut down. It's about whether the legal framework that has allowed AI companies to train on the open internet will survive.
The fair use doctrine, codified in Section 107 of the Copyright Act, requires courts to weigh four factors: the purpose and character of the use, the nature of the copyrighted work, the amount used, and the effect on the market for the original. In the Ross Intelligence case, the district court found that the first and fourth factors -- the ones courts typically weigh most heavily -- favored Thomson Reuters. The use was commercial and not transformative. Westlaw's editorial headnotes were original enough to qualify for protection. And the AI tool competed directly with Westlaw's core product, as well as with a potential market for licensing its content as training data.
Anthropic's brief pushes back hardest on the first factor -- whether the use was transformative. The Supreme Court's 2023 decision in Andy Warhol Foundation v. Goldsmith narrowed what counts as transformative use, holding that Warhol's silkscreen prints of a photograph of Prince weren't sufficiently transformative when licensed for the same commercial purpose as the original photo. AI companies worry that courts will read that decision to mean any commercial AI training fails the transformative test. Anthropic argues that AI training is fundamentally different: the model doesn't reproduce or display the copyrighted work but instead extracts unprotectable facts and patterns. The output is something new.
Not everyone buys that argument. Copyright holders point out that AI models can and do reproduce copyrighted material -- sometimes verbatim. ChatGPT has been shown to spit out near-exact passages from books. Image generators can produce works strikingly similar to specific artists' styles. The fact that reproduction is possible, critics argue, undermines the claim that training is purely transformative. And the economic harm is real: why would anyone pay for a Westlaw subscription if an AI trained on Westlaw's data can answer the same legal questions for free?
The broader policy debate is just as fraught. AI companies, backed by significant venture capital and, increasingly, the current administration's pro-innovation stance, argue that restricting training data would cripple American competitiveness. Chinese AI labs, they note, aren't asking for copyright licenses. If U.S. courts impose onerous licensing requirements, the argument goes, the technology will simply develop elsewhere without those constraints. Anthropic makes a version of this argument in its brief, contending that the public interest in continued AI development should weigh heavily in the fair use analysis.
Rights holders see it differently. To them, AI training is the largest act of mass copying in human history, conducted for profit by some of the wealthiest companies on earth. The Authors Guild has been particularly vocal, arguing that writers deserve compensation when their work is used to build products that generate billions of dollars. Musicians, visual artists, and journalists have echoed that view. The question isn't whether AI should exist, they say. It's whether the companies building it should be allowed to take copyrighted material without paying for it.
Congress has shown little appetite to settle the question legislatively. Several bills have been introduced -- including proposals to create compulsory licensing schemes for AI training data -- but none has gained meaningful traction. The Copyright Office released a lengthy report in 2024 examining the issue but stopped short of recommending specific legislative action, calling instead for further study. That leaves the courts as the primary venue for resolving the dispute, which is why the Third Circuit case carries such outsized importance.
Anthropic's decision to file an amicus brief rather than wait for its own cases to reach the appellate level is a calculated move. The company wants to shape the legal arguments before a potentially unfavorable precedent solidifies. It's also a signal of how seriously the AI industry takes the threat. When a company with Anthropic's resources -- it has raised more than $15 billion in funding -- files a brief in a case involving a defunct startup, it tells you the stakes extend far beyond the parties in the lawsuit.
The Third Circuit hasn't set a date for oral arguments. But the case is being closely watched by lawyers on both sides of the copyright divide. Several other amicus briefs have been filed, including by organizations representing publishers and content creators who support the lower court's ruling. The court could affirm, reverse, or send the case back for further proceedings. Each outcome would send a different signal to the dozens of pending AI copyright cases working their way through federal courts.
One thing is clear: the legal uncertainty is itself having an effect. Some AI companies have begun striking licensing deals with publishers -- OpenAI has agreements with the Associated Press, Axel Springer, and others -- hedging against the possibility that courts will rule against fair use. Others, including Meta, have continued to argue that no licenses are necessary. The market is pricing in risk in real time, with every court filing shifting the calculus slightly.
And then there's the question of remedy. Even if courts ultimately rule that AI training infringes copyright, what happens next? Ordering companies to delete models trained on copyrighted data would be extraordinarily disruptive and possibly technically infeasible. Monetary damages could run into the billions. A compulsory licensing regime might emerge as a practical compromise, but designing one that satisfies both creators and AI developers would be a monumental undertaking. The legal system is being asked to resolve a technological question it wasn't designed for, using a statute written decades before anyone imagined machines that could read every book ever published in a matter of hours.
For Anthropic, the stakes are existential in a very literal sense. If training on copyrighted material isn't fair use, the company's core product -- Claude, and the models that power it -- was built on an illegal foundation. The same is true for OpenAI, Google DeepMind, Meta, and virtually every other major AI developer. That's why the industry isn't treating Thomson Reuters v. Ross Intelligence as a minor case about a dead startup. It's treating it as the first real test of whether the legal system will accommodate the way AI actually works.
The Third Circuit's decision, whenever it comes, won't be the last word. But it may be the most consequential word so far.