We built the surveillance machine ourselves. Over the past decade, we willingly handed over our fingerprints to unlock our phones, mapped our faces for convenient logins, and gave apps permission to track our locations in exchange for slightly better restaurant recommendations. We did this without a second thought. Now, as social media companies push for facial recognition scans and government ID verification as conditions for using their platforms, people are suddenly uncomfortable. I think that discomfort is justified, but it arrives about ten years too late, and the proposed solutions are far worse than the problems they claim to solve.
We Already Gave Away the Store
Around 2013 to 2015, something shifted. Biometric data went from being the stuff of spy movies to a casual consumer feature. Apple introduced Touch ID, then Face ID. Android followed. We started training algorithms on our own faces and didn't think twice about it. Location services became default-on. Apps harvested contact lists, browsing habits, and behavioral patterns at a scale that would have been unthinkable a generation earlier.
I think most people understood, on some level, that they were trading privacy for convenience. But the scale of what we enabled is worth sitting with. We didn't just give a few companies some data. We helped construct one of the most comprehensive surveillance infrastructures in human history, and we did it voluntarily. Governments didn't have to mandate it. No one forced us. The incentive was a smoother unlock screen and a better-curated feed.
So when a platform like Instagram or TikTok floats the idea of requiring a face scan or a photo of your driver's license to verify your age, I find it hard to take the outrage seriously in isolation. The outrage is correct, but it should have started a long time ago. The biometric genie has been out of the bottle for years. What these verification proposals represent is the next logical step in a system we already consented to, and that should terrify everyone.
Handing your government-issued ID or a biometric face scan to a social media company creates a centralized honeypot of identity data that is virtually guaranteed to be breached, sold, or misused. These companies have repeatedly demonstrated that they cannot be trusted with the data they already have. Giving them the keys to your legal identity is reckless on an individual level and catastrophic on a societal one.
The Breach Is Not a Question of "If"
It is worth pausing on the breach risk because I think people underestimate how severe it is. When your password gets leaked, you change your password. When your credit card number gets stolen, your bank issues a new card. These are inconveniences, but they are recoverable. When your biometric data or your government ID gets compromised, there is no reset button. You cannot change your face. You cannot get a new set of fingerprints. Your identity documents are tied to you permanently, and once that data is in the hands of bad actors, the damage compounds over time in ways that are difficult to predict and nearly impossible to undo.
We already have a long track record of major data breaches at companies that assured users their data was safe. Equifax exposed the Social Security numbers of nearly 150 million Americans. Facebook allowed the data of 87 million users to be harvested by Cambridge Analytica. T-Mobile, Yahoo, Marriott, LinkedIn. The list is long and it keeps growing. These are companies with dedicated security teams and billions of dollars in resources, and they still failed to protect user data. Now we are being asked to trust that these same kinds of companies will safeguard our biometric identities. I find that expectation unreasonable.
The calculus is straightforward. The more sensitive the data you collect, the more devastating the breach when it happens. Biometric and identity data is the most sensitive category of personal information that exists. Concentrating it in the databases of social media companies, organizations whose primary business model is monetizing user data, is an engineering decision that maximizes risk while providing minimal benefit to the people whose data is being collected.
"Protecting the Children" Is a Marketing Pitch
The loudest justification for biometric age verification is child safety. I don't doubt that children face real dangers online. But I am deeply skeptical of the companies making this argument, because their business models depend on the exact behavior they claim to want to prevent.
Social media platforms are engineered to be addictive. The infinite scroll, the dopamine-driven notification systems, the algorithmic feeds that learn what keeps you engaged and serve you more of it. These features were not designed with forty-year-olds in mind. They were designed to maximize engagement across all demographics, and companies have known for years that minors are among their most engaged users. Internal documents from multiple platforms have shown that leadership was aware of the psychological harm their products caused to teenagers and either did nothing or, worse, optimized further.
So when these same companies turn around and say they need your face scan to keep kids safe, I see a contradiction that borders on cynical. You cannot spend a decade making your platform as appealing to young people as possible and then act surprised that young people are on it. If a company genuinely wanted to protect children, the most effective approach would be to stop designing platforms that function like digital candy stores.
Consider Craigslist in the mid-2000s. It was utilitarian to the point of being ugly. No algorithm. No feed. No engagement optimization. Kids didn't flock to Craigslist because there was nothing there designed to capture their attention. The interface looked like a tax form. That was, in a strange way, its own form of child protection. It was a platform built for adults, and it looked and felt like one. The same was true of most early internet forums and classified sites. They were boring by design, and that boredom was a natural barrier.
I am not arguing that every platform should look like a 2004 message board. But I am arguing that if your platform is a visual, interactive, algorithmically supercharged experience specifically optimized for engagement, you have made a product that will inevitably attract minors. The answer to that problem is not to bolt a biometric checkpoint onto the front door. The answer is to take responsibility for the product you built.
The "Teen-by-Default" Idea Has Potential, but Not Like This
Some platforms, Discord among them, have floated the concept of making accounts "teen by default," where new users are placed into a restricted experience unless they verify that they are adults. On paper, this sounds reasonable. In practice, the way most companies envision it is just another vehicle for biometric or ID-based verification dressed up in friendlier language.
If the only way to exit the restricted "teen" experience is to upload a photo of your face or scan your driver's license, then you haven't solved the surveillance problem. You've just added an extra step before arriving at the same outcome. The fundamental issue remains: a private company is collecting identity documents or biometric data from its users, and that data becomes a liability the moment it is stored.
But I think the core concept of a teen-by-default model can work if it is implemented differently. Instead of gating the adult experience behind identity verification, platforms could gate it behind account maturity and behavioral signals. An account that has existed for a certain period of time, that has demonstrated consistent usage patterns, that has not been flagged for violations, could gradually unlock features without ever requiring a face scan or an ID upload. This approach would not be perfect. A determined kid could still circumvent it. But no system is going to be perfect, and the advantage of a maturity-based model is that it doesn't require mass biometric collection to function.
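To make the idea concrete, here is a minimal sketch of what maturity-based gating could look like. Every field name, threshold, and feature label below is invented for illustration; no platform is known to use these exact rules. The point is simply that the logic runs entirely on signals the platform already has, with no ID upload or face scan anywhere in it.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical maturity-based gating. All names and thresholds are
# illustrative assumptions, not any real platform's policy.

@dataclass
class Account:
    created_at: datetime          # when the account was registered
    active_days_last_90: int      # rough consistency-of-use signal
    trust_flags: int              # moderation strikes, spam reports, etc.

def unlocked_features(acct: Account) -> set[str]:
    """Gradually widen access as the account matures, with no ID or biometrics."""
    age_days = (datetime.now(timezone.utc) - acct.created_at).days
    features = {"browse", "post_in_teen_spaces"}          # restricted default
    if age_days >= 90 and acct.trust_flags == 0:
        features |= {"direct_messages", "public_posting"}
    if age_days >= 365 and acct.active_days_last_90 >= 30 and acct.trust_flags == 0:
        features |= {"adult_rated_communities", "live_streaming"}
    return features

# Example: a year-old account in good standing unlocks the full experience.
acct = Account(created_at=datetime(2024, 1, 1, tzinfo=timezone.utc),
               active_days_last_90=45, trust_flags=0)
print(unlocked_features(acct))
```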
Platforms could also lean into device-level age signals rather than platform-level identity verification. Mobile operating systems already have parental control frameworks that can communicate a user's age bracket to apps without revealing the user's identity to the app itself. Apple and Google both have infrastructure for this. The information stays on the device or within the operating system's parental control layer, and the platform receives only a binary signal: this user is above or below a certain age threshold. The platform never sees a face, never holds an ID, and never stores biometric data. The infrastructure largely exists already. It just requires platforms to use it instead of building their own verification empires.
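The shape of a device-level signal is even simpler from the platform's side. The sketch below is a placeholder, not Apple's or Google's actual API; their real frameworks differ and are evolving. What it shows is how little the app needs to receive: a coarse bracket, nothing else.

```python
from enum import Enum

# Hypothetical device-level age signal. The enum and function below are
# invented stand-ins, not the real Apple or Google interfaces.

class AgeBracket(Enum):
    UNDER_13 = "under_13"
    TEEN = "13_17"
    ADULT = "18_plus"

def os_age_bracket() -> AgeBracket:
    """Stand-in for a call into the OS parental-control layer.

    The OS (or the parent who configured it) knows the user's age; the app
    receives only the bracket, never a name, birthdate, ID, or face.
    """
    return AgeBracket.TEEN  # placeholder value for this sketch

def default_experience() -> str:
    return "full" if os_age_bracket() is AgeBracket.ADULT else "restricted"

print(default_experience())
```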
The difference between these approaches and what most companies are proposing is where the data lives and who controls it. In the models I'm describing, the sensitive information stays with the user or their device. In the models companies are proposing, the sensitive information flows to the company.
Parenting Has to Be Part of This Conversation
There is another dimension to this that I think gets brushed aside too quickly. At some point, the responsibility for what a child accesses online falls on the adults in that child's life.
I understand that the internet is vast and that no parent can monitor everything. But tools exist. Parental controls, content filters, screen time limits, and device management software are all available and increasingly sophisticated. More importantly, the most effective tool is the relationship between a parent and their child. Teaching a kid how to navigate the internet safely, having honest conversations about what they'll encounter, and setting boundaries that feel like guidance rather than punishment will always be more effective than any age gate a tech company can build.
The push for biometric verification feels, in part, like a societal desire to offload parenting onto technology. It is easier to demand that Instagram verify ages with a face scan than it is to sit down with a twelve-year-old and have an uncomfortable conversation about why certain platforms aren't appropriate for them yet. But the uncomfortable conversation is the one that actually works. A face scan is a gate. A conversation is education. One of those scales with a child's development and the other doesn't.
I also think there is a way to approach internet restrictions with kids that doesn't breed resentment. When a parent explains their reasoning, when they frame boundaries as something that will evolve as the child matures, and when they involve the child in setting those boundaries, the restriction feels collaborative rather than authoritarian. A thirteen-year-old who understands why they don't have access to certain platforms and who knows that access will come as they get older is in a fundamentally different position than a thirteen-year-old who just knows their parents said no. The first approach builds digital literacy. The second builds a kid who creates a burner account the first chance they get.
I want to be clear: I am not blaming parents for the predatory design choices of tech companies. But I am saying that no regulatory framework or verification system will ever replace the need for active, engaged parenting when it comes to internet access.
Hold Companies Accountable for What Happens on Their Platforms
If we are going to talk about protecting children online, the conversation should center on corporate accountability rather than mass biometric collection. Platforms that host or enable the exploitation of minors should face severe consequences.
Discord is a platform where servers containing child sexual abuse material have been documented repeatedly. The platform's moderation infrastructure has been criticized for letting this content persist instead of identifying and removing it promptly. When the structure of your platform makes it trivially easy for bad actors to create private spaces that facilitate the abuse of children, and your response is slow or inadequate, you should face legal and financial consequences proportional to the harm.
I believe our laws should make it clear that if you operate a platform and children are being exploited on it, the liability falls on you. Not on the parents. Not on the users. On you, the company that built the infrastructure, collected the advertising revenue, and failed to police your own product. That kind of accountability would do more to protect children than every biometric checkpoint combined, because it attacks the incentive structure. It makes it expensive to be negligent. It makes it costly to look the other way.
There is a meaningful difference between a platform that has robust detection systems, cooperates with law enforcement, and takes down harmful content within hours, and a platform that lets illegal material fester in dark corners because aggressive moderation is expensive and might reduce engagement metrics. Our legal frameworks should recognize that difference and punish the latter severely. Right now, too many platforms operate in a gray area where they technically prohibit harmful content in their terms of service but lack the infrastructure or the motivation to enforce those terms at scale. That gap between policy and practice is where children get hurt, and closing it does not require a single face scan.
The Anonymity Question
I want to address a counterargument that comes up frequently, which is that anonymous and pseudonymous accounts are inherently dangerous and that identity verification would reduce harassment, abuse, and bad behavior online. I understand the impulse behind this argument, but I think it is wrong.
Anonymity and pseudonymity are foundational to how the internet has functioned since its inception. They allow whistleblowers to share information safely. They allow people living under repressive governments to communicate freely. They allow individuals exploring their identity, whether that involves sexuality, gender, political dissent, or anything else, to do so without fear of retaliation. Stripping away the ability to participate online without attaching your legal name and face to every interaction would cause enormous harm to vulnerable populations.
The idea that verified identities produce better behavior is also not well supported by evidence. Facebook has operated under a real-name policy for years, and it remains one of the most toxic platforms on the internet. People say horrible things under their real names every single day. The problem of online abuse is a moderation problem and a platform design problem. Solving it by eliminating anonymity is like solving traffic accidents by banning cars. You might reduce the specific harm, but the collateral damage is enormous and the underlying causes remain unaddressed.
Who Actually Benefits from This System?
When I step back and look at who actually gains from biometric age verification, the picture becomes clearer. It is not children. Children who want to access a platform will find ways around verification systems, just as teenagers have always found ways around age restrictions. It is not parents. Parents gain nothing from a system that replaces their judgment with a corporate checkpoint. It is not regular adult users. Adults gain nothing from handing over their most sensitive personal data to access a free social media app.
The primary beneficiaries are the platforms themselves and the growing industry of identity verification companies. For platforms, biometric verification provides a legal shield. If a child is harmed on a platform that requires ID verification, the company can point to its verification system and argue it took reasonable steps. The verification becomes liability insurance, purchased not with the company's money but with the personal data of its users. For identity verification companies like Clear, Jumio, and others in the space, mandatory age verification represents an enormous new market. Every social media user becomes a potential customer whose face needs scanning, whose ID needs processing, whose data needs storing. The financial incentives driving this push have very little to do with child safety and very much to do with creating a new revenue stream built on top of personal identity data.
I think it is important to follow the money when evaluating any policy proposal. When the people most aggressively advocating for a solution are the same people who stand to profit from it, skepticism is warranted.
The Chilling Effect on Speech
There is a dimension of this conversation that extends beyond privacy and into the territory of free expression. When you require identity verification to participate on a platform, you fundamentally change the relationship between the user and the platform. Every post, every comment, every like, every group membership becomes tied to a verified legal identity. For most people posting pictures of their lunch, this might seem inconsequential. But for people engaged in political speech, labor organizing, journalism, activism, or any form of expression that might attract retaliation, the stakes are entirely different.
I think about what identity-verified social media would mean in practice for a teacher in a conservative district who wants to discuss LGBTQ+ issues anonymously. Or for a warehouse worker trying to organize a union without their employer knowing. Or for someone in an abusive relationship who uses a pseudonymous account as their only social connection outside the control of their abuser. These are not edge cases. Millions of people rely on the separation between their online presence and their legal identity for reasons that range from personal safety to professional survival.
The argument that "if you have nothing to hide, you have nothing to fear" has been thoroughly discredited in every other context. I see no reason to resurrect it for social media policy.
There Is Hope
I have spent most of this essay being critical, and I think that criticism is warranted. But I don't want to end without acknowledging that there are paths forward that protect both children and privacy simultaneously.
The technology for privacy-preserving age verification exists. Zero-knowledge proofs, for example, allow one party to verify a claim about another party without learning any additional information. In practical terms, this means a system could confirm that a user is over eighteen without ever learning the user's name, face, date of birth, or any other identifying information. The verification happens cryptographically, and the platform receives only a yes-or-no answer. Verification systems built on this approach are actively being developed and piloted in multiple countries.
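For readers who want a feel for the mechanics, here is a toy Schnorr-style zero-knowledge proof in Python: the prover convinces the verifier that it knows a secret without ever revealing it. This is a classroom illustration with deliberately insecure toy parameters, not an age-verification protocol; production systems prove statements like "over eighteen" against a signed credential using far more machinery. But the privacy property is the same one described above: the verifier learns only that the claim is true.

```python
import secrets

# Toy Schnorr zero-knowledge proof of knowledge: the prover shows it knows a
# secret x with y = g^x mod p, without revealing x. Illustration only.

p, q, g = 2039, 1019, 4          # toy safe-prime group (p = 2q + 1); NOT secure

# Prover's secret and the public value the verifier already knows.
x = secrets.randbelow(q - 1) + 1 # the secret (stand-in for "the credential")
y = pow(g, x, p)

# Round 1: prover commits to a random nonce.
r = secrets.randbelow(q - 1) + 1
t = pow(g, r, p)

# Round 2: verifier issues a random challenge.
c = secrets.randbelow(q)

# Round 3: prover responds; the response reveals nothing about x on its own.
s = (r + c * x) % q

# Verification: g^s == t * y^c (mod p) holds exactly when the prover knew x.
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("proof accepted; verifier learned only that the claim is true")
```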
Regulatory frameworks are also moving in promising directions, even if progress is slow. The EU's GDPR, for all its imperfections, established the principle that users have rights over their own data and that companies face real penalties for violating those rights. Similar frameworks are emerging in various US states and in other countries. The more that privacy is treated as a legal right rather than a corporate courtesy, the harder it becomes for companies to justify mass biometric collection.
I am also encouraged by the growing public awareness of these issues. Five years ago, most people didn't think about where their biometric data went. Today, conversations about digital privacy are mainstream. That shift in awareness matters, because public pressure is one of the few forces that reliably changes corporate behavior. Companies respond to user backlash, advertiser pressure, and regulatory action. All three of those forces are growing.
There is also a generational element that gives me some optimism. Younger internet users tend to be more privacy-conscious than older generations assume. Many teens and young adults already use VPNs, encrypted messaging apps, and privacy-focused browsers as a matter of routine. They understand instinctively that their data has value and that companies cannot always be trusted with it. That awareness, as it matures and enters the workforce and the voting booth, will shape policy in ways that I think trend toward stronger protections rather than weaker ones.
The path forward involves a combination of privacy-preserving technology, strong regulation with real enforcement, corporate accountability for platform safety failures, and a cultural commitment to digital literacy that starts in the home. None of these are easy. All of them are better than handing your driver's license to a social media company and hoping for the best.
The Core Problem Is Simpler Than We Want to Admit
I think we overcomplicate this. If you are a company and you want to protect kids, do not build a platform that is irresistible to kids. If you have built a platform that is irresistible to kids, do not pretend that a face scan at the door solves the problem you created. And do not use child safety as a justification to collect biometric data from hundreds of millions of adults.
The push for facial recognition and ID verification on social media is a convergence of the worst impulses in tech policy. It normalizes biometric surveillance. It creates massive data breach risks. It gives corporations access to identity documents they have no business holding. It doesn't meaningfully protect children. And it lets both companies and parents off the hook for the harder work that actually needs to happen.
We spent the last decade building a surveillance infrastructure and calling it convenience. I would rather not spend the next decade building an identity verification infrastructure and calling it safety.