Anthropic Introduces ID Verification for Claude, Sparking Privacy Concerns


The Hans India · 7 days ago

Anthropic's ID verification for Claude introduces accountability but raises serious concerns about privacy, surveillance, and the future of anonymous AI usage.

In a move that could reshape how users interact with artificial intelligence tools, Anthropic has begun requiring government-issued identification for access to certain features on its Claude platform. The decision marks a significant shift in the AI landscape, where identity verification has traditionally not been part of the user experience.

According to the company, users attempting to access select functionalities will now need to complete a verification process similar to Know Your Customer (KYC) checks commonly used in banking and telecom sectors. This involves uploading a valid government ID -- such as a passport, driver's license, or national identity card -- and taking a live selfie for authentication. Anthropic specifies that only original physical documents are accepted, ruling out photocopies or digital versions.

While the company has not clearly outlined which features require this verification, it has emphasized that the measure is part of a broader effort to ensure responsible AI usage. Anthropic states that the process helps it "prevent abuse, enforce our usage policies, and comply with legal obligations." The verification process is designed to be quick, typically taking around five minutes, and allows multiple attempts if the initial submission fails.

The verification itself is handled by Persona Identities, a third-party provider specializing in secure identity authentication. Anthropic maintains that user data collected during this process is not used to train its AI models and remains stored on Persona's servers rather than its own systems. Additionally, the company notes that accounts may be restricted or banned in certain scenarios, such as when users are under 18 or accessing the platform from unsupported regions.

This development comes shortly after Anthropic experienced a surge in user activity, following its decision to step away from a potential partnership with the U.S. Department of Defense. The company had reportedly expressed concerns over the possible use of its AI models for large-scale domestic surveillance.

Despite Anthropic's assurances, the introduction of ID verification has triggered a wave of concern among users. Critics argue that such measures could erode privacy and set a precedent for stricter regulations around AI usage. One user warned that it could pave the way for laws tracking all AI use, writing, "Next up will be laws: No AI without gov-issued ID, All AI use tracked to individual - no private AI."

Others believe the move could drive users toward competing platforms that do not impose similar requirements. One user reckoned the decision may backfire on Anthropic, since rivals such as OpenAI and Google require no such verification, writing, "Anthropic just handed their competitors a gift."

As the debate continues, Anthropic's decision raises broader questions about the balance between safety, accountability, and user privacy in the rapidly evolving AI ecosystem.

Originally published by The Hans India
