Anthropic's AI chatbot, Claude, has long been in the headlines for both good and bad reasons. The company has been building strong momentum with both its core AI models and newer ventures like its cybersecurity-focused Claude Mythos.
However, as the company scales rapidly, rising user criticism is casting doubt on whether it can sustain product quality alongside its expanding ambitions.
Tensions in the artificial intelligence sector are spilling into public view again. And this time, a rival company had something to say as well.
Recent reactions on X (formerly Twitter) and comments from OpenAI CEO Sam Altman highlight a growing divide, not just in technology, but in how AI firms position their products and influence public perception.
X user slams Anthropic, Altman seizes the moment
A wave of criticism emerged online following claims that Anthropic had removed key features, including "Claude Code", from its Pro subscription tier.
One prominent X user described the move as a major misstep, arguing that it undermines the company's core identity as a coding-focused AI provider. The criticism reflects broader dissatisfaction among some users who feel the product experience has not matched the hype surrounding Anthropic's models.
The backlash also points to a deeper concern: perceived value. As AI tools become increasingly subscription-driven, users are scrutinising feature access more closely. Removing or restricting capabilities, especially those central to a product's appeal, can quickly erode trust, particularly in a competitive landscape where alternatives are readily available.
OpenAI CEO Sam Altman appeared to seize the moment. In a brief but pointed post, he invited users to "come to the light side", a remark widely interpreted as a direct jab at Anthropic. Though light in tone, the comment underscores the intensifying rivalry between leading AI firms, where even subtle messaging can carry strategic weight.
The exchange illustrates how product decisions are no longer confined to internal roadmaps; they are instantly dissected in public forums, shaping brand perception in real time. In an industry defined by rapid iteration, user sentiment has become a critical battleground.
Sam Altman talks about Anthropic's fear-based marketing
Beyond product criticism, Altman has also taken aim at how competitors frame their technological advancements. Speaking on the Core Memory podcast, he questioned Anthropic's approach to promoting its cybersecurity-focused model, Mythos, which the company has described as too powerful for broad public release.
Anthropic's positioning, that the model could be misused by cybercriminals, has drawn scepticism from critics who view such claims as exaggerated. Altman suggested that this kind of narrative functions less as a safety precaution and more as a strategic tool.
He characterised it as a form of "fear-based marketing", arguing that emphasising potential risks can create an aura of exclusivity around a product. By framing AI capabilities as dangerous or restricted, companies may justify limiting access while simultaneously increasing perceived value.
Altman used a vivid analogy to illustrate the point, likening the strategy to warning of an impending threat while offering protection at a premium. The implication is that such messaging can reinforce a model where advanced AI remains concentrated among a small group of users, rather than being broadly accessible.
However, the critique is not without irony. The wider AI industry, including OpenAI itself, has frequently invoked existential risks and transformative potential in public discourse. Warnings about AI's societal impact, ranging from job displacement to more extreme scenarios, have become a recurring theme across companies and research communities alike.
This dual narrative, highlighting both opportunity and danger, serves multiple purposes. It can attract investment, shape regulatory conversations and position companies as both innovators and responsible stewards of powerful technology. At the same time, it raises questions about where genuine concern ends and strategic messaging begins.