
Your posts can expose you: AI now links anonymous accounts, says Anthropic and ETH Zurich study
What if your "anonymous" social media account isn't really anonymous anymore? A new study by Anthropic and ETH Zurich suggests that may already be the case, with AI now capable of identifying users from nothing more than what they write online.
The research shows that large language models can link pseudonymous accounts to real identities or connect multiple accounts belonging to the same person without relying on explicit personal details. The study describes this as large-scale "deanonymisation," enabled purely through analysis of user-generated text.
What the study found
According to the researchers, the system they developed works directly on unstructured data such as posts, comments, and conversations. This marks a shift from earlier methods that depended on structured datasets or manual feature extraction.
The process involves multiple steps. The model first pulls identity-related signals from text, then searches across datasets for possible matches, and finally applies reasoning to verify whether profiles belong to the same individual. The paper notes that this approach can operate across platforms and communities.
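The paper does not publish its implementation, but the three steps described above can be sketched as a toy pipeline. Everything below — the keyword list, the profiles, the scoring and threshold — is illustrative and hypothetical, not taken from the study, which uses LLM reasoning rather than simple keyword overlap.

```python
# Illustrative sketch of the three-step linking pipeline: extract signals,
# search for candidate matches, then verify. All data and scoring rules
# here are hypothetical toy stand-ins, not the study's actual method.

def extract_signals(posts):
    """Step 1: pull identity-related keywords out of raw text (toy version)."""
    keywords = {"zurich", "nurse", "python", "marathon", "violin"}
    found = set()
    for post in posts:
        found |= {w.strip(".,!?").lower() for w in post.split()} & keywords
    return found

def search_candidates(signals, profiles):
    """Step 2: rank candidate profiles by how many signals they share."""
    scored = [(len(signals & traits), name) for name, traits in profiles.items()]
    return sorted(scored, reverse=True)

def verify(signals, traits, threshold=2):
    """Step 3: accept a match only if enough independent signals agree."""
    return len(signals & traits) >= threshold

posts = ["Training for the Zurich marathon after my nursing shift."]
profiles = {
    "profile_a": {"zurich", "nurse", "marathon"},
    "profile_b": {"python", "violin"},
}
signals = extract_signals(posts)
best_score, best_name = search_candidates(signals, profiles)[0]
print(best_name, verify(signals, profiles[best_name]))
```

In the real system, the extraction and verification steps are performed by a language model reasoning over free text, which is what lets the approach work on unstructured posts rather than curated datasets.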
In testing, the system successfully linked accounts across platforms including Reddit and LinkedIn, and also matched users within Reddit communities. The study reports that these LLM-based techniques outperform traditional methods, delivering high precision while identifying a significant share of users.
How identification happens
The study highlights that small, seemingly harmless details in online posts can act as identifiers. These include writing style, interests, references to location, education, and recurring topics.
Researchers explain that the model can analyse conversations to infer attributes such as profession, background, and the tools a person uses. It then searches publicly available information to find matches based on these traits. The process mirrors how a human investigator might piece together clues, but is executed at scale and speed.
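One classic way writing style alone can act as a fingerprint is through function-word frequencies: how often someone uses common words like "the", "that", or "however". The snippet below is a minimal illustration of that general stylometric idea, not the study's technique; the word list and example texts are invented.

```python
# Toy stylometry: compare how often two accounts use common function words.
# Purely illustrative; the study uses LLM analysis, not this method.
import math

FUNCTION_WORDS = ["the", "of", "and", "to", "i", "that", "however"]

def style_vector(text):
    """Frequency of each function word, normalised by total word count."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    total = max(len(words), 1)
    return [words.count(w) / total for w in FUNCTION_WORDS]

def cosine(a, b):
    """Cosine similarity between two style vectors (0 = unrelated, 1 = identical)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

account_1 = "I think that the results matter. However, I doubt the method."
account_2 = "I believe that the data matters. However, I question the approach."
account_3 = "Buy now! Limited offer! Best prices guaranteed!"

print(cosine(style_vector(account_1), style_vector(account_2)))  # high: similar style
print(cosine(style_vector(account_1), style_vector(account_3)))  # 0.0: no shared function words
```

The point of the example is that no single word is identifying on its own; it is the accumulation of small habits across a whole body of text that becomes a signature.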
Why it matters
The findings challenge the assumption that pseudonyms provide meaningful privacy online. According to the study, that assumption no longer holds because AI significantly reduces the effort needed to connect scattered data points.
This has implications for users who depend on anonymity, including those discussing sensitive issues or operating across multiple platforms. The ability to link accounts could expose identities even when direct identifiers are not shared.
Risks and concerns
The study outlines several risks associated with this capability. It notes that governments or organisations could use such systems for surveillance, while companies could link anonymous activity to user profiles. There is also the risk of misuse by malicious actors to build detailed personal profiles.
Researchers add that each additional post increases the likelihood of identification, as more data points improve matching accuracy.
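The compounding effect of additional posts can be illustrated with simple probability arithmetic. If each post independently leaks a weak signal that alone identifies the author with some small probability, the combined risk grows quickly with volume. The per-post probability of 1% below is an assumption chosen for illustration, not a figure from the study.

```python
# Why more posts increase identification risk: weak per-post signals compound.
# The 1% per-post probability is an illustrative assumption, not a study result.

def identification_risk(per_post_prob, num_posts):
    """Probability that at least one of num_posts posts gives the author away,
    assuming each post leaks independently."""
    return 1 - (1 - per_post_prob) ** num_posts

for n in (1, 10, 100, 500):
    print(n, round(identification_risk(0.01, n), 3))
```

Even under this crude independence assumption, a 1% per-post risk exceeds 60% after 100 posts and 99% after 500 — a rough intuition for why matching accuracy improves with every additional data point.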
What can be done
The paper suggests that existing privacy measures may not be sufficient for unstructured text. It points to potential safeguards such as limiting data access, detecting automated scraping, and building stronger protections into AI systems themselves.
However, the researchers acknowledge that preventing such identification is challenging, as the same data that enables these systems is central to how online platforms function.