For AI companies like Anthropic, expansion decisions depend on a mix of talent availability, regulatory clarity, infrastructure, and funding access. File Image/Reuters
Anthropic, Google, and OpenAI have joined forces to counter alleged model distillation attempts by Chinese rivals. As per reports, the Silicon Valley firms have agreed to share information to prevent their frontier AI models from being copied and brought to market at a lower price point. Notably, in 2025, OpenAI accused the Chinese firm DeepSeek of distilling its model to build DeepSeek R1.
According to a Bloomberg report, the three AI firms are now working together to crack down on attempts by Chinese AI developers that allegedly use model distillation techniques to copy Anthropic, Google, and OpenAI's proprietary large language models.
The companies are reportedly sharing information via the Frontier Model Forum, a nonprofit founded by the three firms alongside Microsoft in 2023. The body was established to promote the safe and responsible development of frontier AI systems and encourage knowledge sharing.
What is model distillation?
Model distillation is a technique where the knowledge from a powerful AI model is transferred into a smaller, more efficient model, allowing the smaller model to mimic the behavior and outputs of the larger one. In scenarios where such use is authorized, this method helps companies save costs on developing smaller models by using frontier AI systems as a reference point.
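The core idea can be illustrated with a minimal sketch. A student model is trained to match the teacher's temperature-softened output distribution rather than hard labels; the names, logits, and temperature value below are purely illustrative and not any vendor's actual code.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax: a higher temperature flattens the
    distribution, exposing the teacher's 'dark knowledge' about how
    likely the non-top classes are."""
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """Cross-entropy of the student's softened outputs against the
    teacher's softened outputs -- the core objective minimized when
    distilling a large model into a smaller one."""
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    return -np.sum(p_teacher * np.log(p_student + 1e-12))

# Hypothetical logits for one input with four classes.
teacher = [4.0, 1.0, 0.5, -2.0]
student = [2.0, 1.5, 0.0, -1.0]

loss = distillation_loss(teacher, student)
```

Training then adjusts the student's parameters to drive this loss down across many inputs, so the small model gradually reproduces the large model's behavior.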
The allegations raised by the tech giants indicate that Chinese companies are using AI services in a way that breaches their terms of use. They are essentially generating a large volume of outputs and using that data to train their own models. Once these trained models become efficient enough to operate independently, they are released at highly competitive pricing that undercuts the models from which the information was distilled.
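The alleged workflow amounts to turning harvested API exchanges into a fine-tuning corpus for a student model. A hedged sketch of that data-assembly step is below; the prompts, responses, and record format are invented for illustration and do not reflect any actual disclosed traffic.

```python
import json

# Hypothetical prompt/response pairs harvested via a provider's API
# (illustrative placeholder data only).
exchanges = [
    {"prompt": "Explain photosynthesis.", "response": "Plants convert light into chemical energy..."},
    {"prompt": "Summarize the TCP handshake.", "response": "A three-step SYN/SYN-ACK/ACK exchange..."},
]

def to_training_records(pairs):
    """Convert harvested exchanges into the instruction-tuning style
    records commonly used to fine-tune a smaller 'student' model."""
    return [
        {"instruction": p["prompt"], "output": p["response"]}
        for p in pairs
    ]

records = to_training_records(exchanges)
# One JSON object per line -- the usual JSONL layout for fine-tuning data.
jsonl = "\n".join(json.dumps(r) for r in records)
```

At sufficient scale, a dataset built this way lets the student model imitate the teacher without ever accessing its weights, which is why the providers frame such harvesting as a terms-of-use violation rather than conventional hacking.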
Although service restrictions bar commercial access to Claude in China, the firms allegedly used commercial proxy services to sidestep Anthropic's restrictions, operating networks that ran tens of thousands of Claude accounts simultaneously.
In an earlier report, Anthropic revealed that the Chinese firms collectively generated over 16 million exchanges with Claude from around 24,000 fraudulently created accounts. Of the firms identified, Anthropic found that MiniMax drove the most traffic, with over 13 million exchanges. Anthropic and OpenAI have framed distillation by these firms as a potential national security threat.