Anthropic Fought the Pentagon to Protect Your Privacy. Now It Wants Your Passport.
- Ethan Carter
- 5 days ago
- 8 min read
On February 27, 2026, Anthropic told the United States Secretary of Defense to go find another AI company. The Pentagon had demanded that Claude be available for "all lawful purposes," including mass surveillance and autonomous weapons. Anthropic refused, lost its government contract, and watched its user base grow by more than 60 percent as the standoff played out. People downloaded Claude because they believed they were choosing the company that would not trade their data and safety for a government check.
On April 14, 2026, Anthropic quietly updated its help center to note that some Claude users would be required to submit a government-issued photo ID and a live selfie. The verification is handled by Persona, a company backed by Peter Thiel's Founders Fund.
The same Founders Fund that invested in Palantir. The same Palantir whose customers include the FBI, CIA, and US Immigration and Customs Enforcement.
What Anthropic Is Asking For and How It Is Framed
The ID verification rollout went public on April 16, 2026. Anthropic's help center describes it as applying to "a few use cases" and appearing "as part of our routine platform integrity checks, or other safety and compliance measures." The phrase "routine platform integrity checks" is doing a lot of work for what is actually a biometric verification process involving government documents and facial data.
The acceptable forms of ID include passports, driver's licenses, and national identity cards. The document must be physically present, legible, and feature the holder's photograph. Users must also provide a live selfie. Under the framework, Anthropic serves as the data controller while Persona processes and stores the actual biometric data on Anthropic's instructions.
The competitive contrast is stark. ChatGPT does not require government ID for standard consumer use. Neither does Gemini. Anthropic has positioned itself as the AI company with the highest ethical standards, and it is now also the AI company with the most invasive identity requirements for its paying users.
Anthropic has not clarified which specific capabilities trigger the verification prompt. The "few use cases" language leaves users unable to know in advance whether accessing a new feature will require them to submit biometric data. An Anthropic spokesperson told Engadget the verification appears when the company sees "potentially fraudulent or abusive behavior," which suggests it is behavior-triggered, not feature-triggered, but the lack of transparency around the criteria makes that distinction difficult for users to rely on.
Anthropic's security case is not without merit. According to Anthropic, verification targets accounts showing signs of fraudulent or abusive behavior, which is a narrower use than a blanket KYC requirement. The Claude Mythos breach, in which a Discord group accessed a restricted military-grade AI model through a third-party vendor environment, gave Anthropic a concrete example of what inadequate access controls can produce. Identity verification for high-risk capabilities is a defensible response to that incident.
The timing is also notable. The ID verification rollout came just weeks after the Claude Mythos breach. Anthropic has not confirmed that the expansion is a direct response to that incident, but the sequence of events gives the company a plausible security rationale that it has not explicitly invoked.
How Anthropic Built Its Privacy Reputation
To understand why the backlash is this sharp, you need to understand how Claude's user base grew so dramatically in the months before this announcement.
In January 2026, the US Department of Defense began pressuring Anthropic to remove usage restrictions from its Pentagon contract. Secretary Hegseth issued a directive requiring all DoD AI contracts to include standard "any lawful use" language, which would have allowed Claude to be used for mass surveillance of US residents and for fully autonomous weapons systems. Anthropic had explicitly prohibited both uses since signing the original contract in July 2025.
Anthropic CEO Dario Amodei refused. Hegseth set a 5 p.m. deadline on February 27. Anthropic held its position. The Trump administration directed federal agencies to stop using Anthropic's products and designated the company a supply chain risk.
What happened next was not what the administration expected. Claude jumped to number one on the Apple App Store free apps list, surpassing ChatGPT. Daily signups tripled compared to November. Free user counts increased by more than 60 percent in January and February alone. The Instagram account "quitGPT" gained 10,000 followers. A Reddit post urging users to cancel ChatGPT collected 30,000 upvotes. People were not just downloading Claude. They were downloading it specifically because Anthropic had refused surveillance.
On March 24, a federal court in the Northern District of California granted Anthropic's injunction, finding that the government's actions were punitive rather than legitimate national security measures. Anthropic had gone to court and won. Its reputation as the company that would not compromise user safety for government access was at a peak.
That was three weeks before the ID verification rollout.
The Persona Problem Is Not a Footnote
The choice of Persona as the verification partner is the detail that turned a controversial policy decision into a trust crisis.
Persona is a San Francisco-based identity verification company backed by Founders Fund, the venture capital firm co-founded by Peter Thiel. Founders Fund led both Persona's Series C ($150 million) and Series D ($200 million) funding rounds. Thiel also co-founded Palantir, whose clients include the FBI, CIA, and ICE, with its technology primarily deployed to expand government surveillance capabilities.
In February 2026, security researchers discovered that Persona's government dashboard codebase was sitting unprotected on a FedRAMP-authorized government server endpoint. The exposed code revealed that Persona conducts facial recognition against law enforcement watchlists, screens identities against lists of politically exposed persons, and performs 269 distinct verification checks including categories covering terrorism and espionage screening.
Persona's CEO denied any direct ties to Palantir, ICE, or government surveillance programs, but acknowledged the company is pursuing FedRAMP certification, the security standard required for software used by US federal agencies. Discord cut ties with Persona the same month those findings were published. OpenAI, Lime, and Roblox continued using Persona for age verification. Anthropic started using it for identity verification two months later, selecting a vendor that a peer platform had already publicly rejected on surveillance infrastructure grounds.
Anthropic's legal position, that it is the "data controller" while Persona merely processes data under Anthropic's instructions, is technically accurate under data protection frameworks. It also keeps the legal liability with Anthropic while leaving Persona in physical possession of government IDs and biometric selfies.
Comparison: What Other AI Companies Are Doing
No major AI company has gone this far in requiring government-issued identity verification for consumer use. OpenAI's ChatGPT requires email verification. Google's Gemini requires a Google account. Age verification prompts exist across platforms, but biometric document-plus-selfie at the platform level for an AI chatbot is not a standard that exists elsewhere in the consumer AI market.
The closest precedent from outside AI is financial services. KYC (Know Your Customer) requirements for banking and crypto platforms routinely require government ID, but those services operate under explicit financial regulations that mandate identity verification to prevent money laundering and fraud. Claude is a subscription software product. The regulatory pressure to collect biometric data at this level does not obviously apply.
The irony is that Anthropic's Pentagon refusal made the company the only major AI provider whose users actively chose it on privacy grounds. Users did not leave ChatGPT over OpenAI's Pentagon deal on a vague hope that their chats would stay private. They migrated because Anthropic said something specific about mass surveillance, and they believed it. That specificity is what makes the ID verification requirement feel like a betrayal rather than an inconvenience. These are users who read the fine print.
The argument Anthropic might make, and has implicitly made, is that collecting ID to prevent abuse is not the same as enabling surveillance. That distinction is real in principle. But the principle is harder to defend when the mechanism chosen to enforce platform integrity is a vendor that screens users against terrorism and espionage watchlists as a routine feature of its product.
Anthropic's stated rationale, catching fraudulent or abusive accounts, raises its own questions. The pattern of when verification triggers, whether it is applied consistently across user types or disproportionately to specific behaviors or demographics, has not been disclosed. Users who receive a verification prompt cannot easily distinguish between "Anthropic suspects my account is fraudulent" and "Anthropic is testing a new compliance workflow on my account segment."
What Happens to the Trust That Built Claude's User Base
The Electronic Frontier Foundation wrote during the Pentagon dispute that privacy protections should not depend on the decisions of a few powerful people. That sentence was meant as a criticism of government overreach. It applies here too.
Anthropic built a user base by making a principled stand. That stand was real, and the court found it legally sound. But the same logic that led 60 percent more users to trust Claude because of that stand now turns against the company when its infrastructure choices do not match its stated values. Users who came to Claude because Anthropic would not hand data to the government are now being asked to hand their biometric data to a company with government surveillance infrastructure ties.
The App Store surge that followed the Pentagon refusal was a trust transaction. Users paid with their attention and subscriptions because they believed Anthropic's values aligned with theirs. Trust acquired quickly through a principled moment can be spent quickly through an unprincipled vendor choice. The users who came over privacy concerns are exactly the ones most likely to notice that Discord had already cut Persona over surveillance infrastructure issues two months before Anthropic chose to use it.
The broader question is what "privacy-first AI" actually means as AI tools become more deeply integrated into how people manage sensitive work information. Whether you are building a personal knowledge base, running sensitive research workflows, or handling client data, the privacy posture of your AI tools is not just about chat logs. It is about the entire verification and identity infrastructure those tools rest on.
The EFF's point was about governments. It applies to companies too. Privacy as a product feature is only as strong as the vendor you outsource verification to.
FAQ: Common Questions About Claude Identity Verification
Why is Anthropic asking Claude users to verify their identity?
Anthropic says the verification appears for "a few use cases" and when the platform detects "potentially fraudulent or abusive behavior." The company has not published a list of which features or behaviors trigger the prompt, making it difficult for users to predict when they might be asked to submit a government ID.
Who handles the ID data when you verify your Claude identity?
Persona, a third-party identity verification company, processes and stores government IDs and selfies. Anthropic is the data controller under this arrangement, meaning it has legal responsibility for how the data is used, but Persona holds the actual biometric information.
Is there a connection between Persona and government surveillance?
Founders Fund, the venture firm co-founded by Peter Thiel, led both Persona's Series C and Series D funding rounds. Thiel also co-founded Palantir, which serves the FBI, CIA, and ICE. In February 2026, researchers found Persona's government dashboard code on a FedRAMP-authorized server, showing that Persona screens users against law enforcement watchlists. Persona's CEO denies ties to government surveillance but confirmed FedRAMP certification is in progress. Discord cut ties with Persona the same month those findings were published.
Do ChatGPT or Gemini require government ID verification?
No. Neither OpenAI's ChatGPT nor Google's Gemini currently requires government-issued identity documents for standard consumer access. Anthropic is the only major AI chatbot provider with this requirement. Concerns about Claude's privacy posture relative to competitors have driven significant discussion in developer and privacy communities since the rollout was confirmed.