Feb 27, 2026

This was a clarifying week for the tech industry.
On Monday, security researchers revealed that Persona, the verification vendor used by Discord, Reddit, Roblox, ChatGPT, and Character.AI, had its entire frontend codebase exposed on a U.S. government-authorized server. 2,456 files, 53 megabytes, no exploit needed.
What they found inside was more interesting than the exposure itself. Persona's "age verification" tool contains 269 distinct verification checks. Watchlist screening. Adverse media scanning across 14 categories, including terrorism and espionage. Risk scoring. The ability to file Suspicious Activity Reports directly to FinCEN, the U.S. Treasury's financial crimes unit.
On Wednesday, Anthropic published a statement drawing a hard line against two uses of their AI: mass domestic surveillance and fully autonomous weapons. The Department of War had threatened to remove them from military systems, designate them a "supply chain risk," and invoke the Defense Production Act to force compliance. Anthropic's response was direct: these threats do not change our position.
Two very different companies. Two very different decisions about what their technology should and shouldn't do. And a question that every company building technology for people needs to answer: where do you draw your line, and do you draw it before or after someone asks you to cross it?
When the tool outgrows the job
The Persona situation is a textbook case of scope creep that nobody caught until researchers went looking.
Persona builds verification infrastructure for banks, employers, and government agencies. KYC compliance, anti-money laundering checks, financial crime reporting. Heavy stuff for serious regulatory environments.
That same infrastructure got pointed at Discord users trying to access a gaming server. Persona's CEO told Fortune that all 269 checks are options, and a social media platform wouldn't use all of them. That's probably true. But the architecture was built to handle the most demanding use case and deployed across every use case. The tool never shrank to fit the job.
This is how scope creep works in practice. A platform hires a vendor to answer a simple question. The vendor brings infrastructure designed for a completely different risk profile. The capability sits dormant until it doesn't. And in the meantime, user data flows through a system built for surveillance-grade verification, whether anyone intended that or not.
Discord has had two vendor problems in five months. Last year, a support vendor breach exposed roughly 70,000 government documents users had uploaded for age verification. This week, a different vendor's code turned up on a government endpoint. Every third-party relationship is a new attack surface. Every handoff is a place where things break.
Drawing the line before you're asked to cross it
What made Anthropic's statement this week notable was the timing. They didn't wait for a scandal. They didn't get caught doing something and issue an apology. They published their position while under active pressure to change it, and said no.
Their argument on mass domestic surveillance is worth reading regardless of where you sit politically. The core point: the law hasn't caught up with what AI can do. Individually harmless data (movement records, browsing habits, association patterns) can now be assembled into a comprehensive picture of any person's life, automatically and at massive scale. The capability exists. The question is whether companies choose to build it or choose to draw a boundary.
That same logic applies directly to verification.
When a verification vendor builds 269 checks into an age confirmation tool, they've made a choice about what their technology should be capable of. When that infrastructure shows up on a government server, the question of intent becomes irrelevant. The capability speaks for itself.
The companies that earn trust in the next decade will be the ones that decide what they won't build before someone asks them to build it. Anthropic drew that line on surveillance and autonomous weapons.
What platforms should ask their verification vendors
If you're evaluating verification partners right now (and after this week, you should be), here are the questions that matter:
What exactly do you collect during a verification? Not what you can collect. What do you actually collect for our use case? Get it in writing.
What capabilities exist in your infrastructure beyond what we're using? If your vendor has 269 checks and you're using 3, the other 266 are still there. Understand what you're plugging into.
What happens in a breach? If an attacker gets everything you hold on our users, what do they actually have? Through which servers, which jurisdictions, and which government endpoints does our data travel? This week proved that "our vendor handles it" is not a sufficient answer.
The week's real lesson
Verification has real value: users benefit from spaces where real humans can be distinguished from bots and bad actors. Platforms need reliable ways to confirm who's real and who's not.
But verification built without integrity and the user in mind carries far more risk than the job requires. And technology built without clear boundaries on what it will and won't do will always drift toward the use case that pays the most, regardless of what users signed up for.
The companies worth trusting are the ones that draw their lines early and hold them under pressure. That's true for AI companies facing government demands. And it's true for verification companies whose infrastructure is capable of spying on you.