Social media companies are fighting the ‘age verification trap’ as collecting biometrics on kids violates privacy rights
A facial scan on Instagram, a video selfie on TikTok, a thumbprint passcode on YouTube, and an ID upload on Facebook. None of this is standard yet, but handing over biometrics just to post an AI slop meme may soon become the norm as Big Social goes through its Big Tobacco moment.
The digital landscape is undergoing a massive upheaval in the wake of social media addiction lawsuits and a frantic regulatory scramble for age verification. As social media platforms face a landmark legal reckoning over the “dopamine reaction” and addictive design choices that harm children, a fundamental technical and ethical crisis has emerged. Countries like Australia are enforcing social media bans for people under age 16, while Meta is currently on trial for claims of intentionally creating an addictive environment for children on its platforms.
In the race to verify users’ ages, the primary tool companies have adopted to curb childhood addiction, social media platforms have run into a paradox commonly called the “age-verification trap”: by attempting to enforce age verification rules, the companies undermine the data privacy of the very users those rules are meant to protect.
Big Social has its Big Tobacco moment
Companies like Meta and TikTok are facing federal and state trials in which their platforms and business models are compared to those of the tobacco and opioid industries, with plaintiffs alleging the companies deliberately design their platforms to promote user addiction. Meta CEO Mark Zuckerberg recently testified that scientific studies have not proved the link between social media and mental health harms, but experts argue otherwise, saying social media addiction is driven by the very algorithms engineered to keep users online.
“These companies aren’t held to a certain standard” that would stop children from accessing their platforms, and keeping children there is something the companies “benefit from with kids on their platform. More people, more ads,” said Debra Boeldt, PhD, a clinical psychologist and AI scientist at the family online safety company Aura. Boeldt, who leads clinical research at Aura, a company that uses AI to monitor children’s online habits while protecting adults’ privacy, said children are particularly susceptible to current social media design because their executive function and impulse control are still developing.
For kids, Boeldt said, social media platforms aren’t just apps; they are a primary source of social connection. Her research shows that one in five children age 13 and under spends four hours or more a day on social media, and with that comes higher levels of stress, anxiety, and depression. Children are savvy, she added: ban them from one platform and it becomes a game of “whack-a-mole” as they simply move to the next.
“Kids are super savvy, and so they’ll get around things,” Boeldt told Fortune. “They know how to fly under the radar.”
As social media companies seek to remove underage users from their platforms, or enlist AI to flag prohibited content, they will have a hard time accurately cutting off access for everyone under a certain age. (Boeldt pointed to platforms like Instagram and TikTok that monitor language, and to the loopholes children have already found, coining terms like “PDF files” or “unaliving” that render those censors useless. Children are savvy, after all.)
Still, she cautioned, a partial ban can backfire. If platforms barely make inroads in barring underage users and remove access for only a select few at a time, the result is an “island effect”: unless a ban is universal, a child cut off from social media is isolated while their friends continue to connect online.
Regulation is barely keeping up
Forget the current lawsuits acting as a litmus test for social media design rules: Current regulation is barely keeping up with how kids are using social media, and the tools social media companies rely on fail to keep users’ privacy safe. In recent months, platforms employing third-party verification software have seen their users’ data hacked and exposed, have announced and then walked back AI-powered censors, and are contending with souring sentiment from an increasingly dissatisfied user base.
Complicating matters is a growing wave of regulation from countries around the world. Australia passed landmark legislation in 2024 barring minors under 16 from having accounts on social media platforms like Facebook, TikTok, and YouTube. In the U.S., 32 states have introduced age verification legislation, and the stakes rose again last week when the Federal Trade Commission announced it would exercise “enforcement discretion” regarding the Children’s Online Privacy Protection Rule (COPPA), allowing social media companies to collect children’s data without parental consent, but solely for age verification purposes.
None of this resolves the underlying paradox: collecting enough data on users to know who is a child without infringing on those users’ privacy rights. The problem only intensifies when you look at who, or what, is actually using these platforms.
“Humans are now the minority on the internet; we’ve seen bot-to-human traffic increase 50 times year over year,” said Johnny Ayers, the CEO of Socure, an AI-powered identity verification software company. Ayers told Fortune that thanks to bots, the use of deepfakes has increased nearly 8,000% year over year, rendering much of the verification software on the market useless. Instead, among the digital checks his company employs is reading a phone’s gimbal, its motion sensors, to confirm a human is actually holding the device during identity verification.
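Socure has not published how that check works. Purely as a hypothetical sketch of the idea, a liveness heuristic could sample a phone’s motion sensors during the verification flow and flag sessions that read suspiciously flat, the signature of an emulator or a rigid mount rather than a human hand (the function name and threshold below are invented for illustration):

```python
import statistics

def looks_handheld(accel_samples, min_jitter=0.01):
    """Hypothetical liveness heuristic: a phone held by a human shows
    constant micro-movements; an emulator or rigid mount reads nearly flat.

    accel_samples: list of (x, y, z) accelerometer readings in g, captured
    while the user completes the verification flow.
    min_jitter: assumed threshold; a real system would tune this on data.
    """
    magnitudes = [(x * x + y * y + z * z) ** 0.5 for x, y, z in accel_samples]
    return statistics.pstdev(magnitudes) >= min_jitter

# A bot in an emulator often reports a perfectly steady 1g reading.
print(looks_handheld([(0.0, 0.0, 1.0)] * 50))                    # False
print(looks_handheld([(0.01, 0.02, 0.98), (0.03, 0.0, 1.01),
                      (0.02, 0.01, 0.99), (0.0, 0.04, 1.02)]))   # True
```

The appeal of signals like this is that they require no personal data at all: the check looks at how the device behaves, not at who the user is.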
Evin McMullen, whose company Billions Network builds tools used for anti-money-laundering and know-your-customer (KYC) checks, says collecting biometrics is one way platforms confirm your identity, because you can’t change what your biometrics say about you.
“It sounds kind of cheeky, but the idea that you can’t rotate your thumbs, meaning that you can’t change the password or manage the security easily in the same ways,” McMullen told Fortune. “Identities that are based on your biometrics really is about prioritizing ease of use and security around your most vital data,” she said, adding that the current password manager model is “untenable and no longer secure.”
But the problems arise with children and privacy, a question thrown back open by the FTC’s move on COPPA.
“You can’t collect biometrics on a kid,” Ayers told Fortune. “And so how do you verify someone is 13 without verifying, without collecting a thing, that they’re 13?”
The tools are no longer useful
One way to do so is with zero-knowledge proofs (ZKPs), which allow one party to verify the truth of a statement, and by extension a person’s identity, without learning the underlying information. McMullen, whose clients in the financial industry are looking into noninvasive means of identity verification, is a major advocate for ZKPs, adding they’re particularly helpful in establishing trust between parties.
A ZKP lets a person seeking verification demonstrate that a statement is true without unveiling the personal or secret information behind it. Take, as a simplified example, the claim that 4+4=8. Rather than asking outright whether 4+4=8, the verifier poses a series of indirect questions: Is 4+4=7? Is the sum of 4 and 4 an even number? And so on. After enough questions, the verifier can be confident the person’s claim is true, and thereby identify them, without ever being handed the secret itself.
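In production, ZKPs are cryptographic protocols rather than question-and-answer games. As a rough sketch of the machinery, and not any platform’s actual implementation, a Schnorr-style proof lets a user show they hold a secret key without ever transmitting it; the tiny group parameters below are for demonstration only, and real systems use large, vetted groups and audited libraries:

```python
import hashlib
import secrets

# Toy group parameters for illustration only.
p = 23   # safe prime: p = 2q + 1
q = 11   # prime order of the subgroup generated by g
g = 2    # g generates the order-q subgroup mod p

def keygen():
    """The prover's secret x and public key y = g^x mod p."""
    x = secrets.randbelow(q - 1) + 1
    return x, pow(g, x, p)

def prove(x, y):
    """Prove knowledge of x with y = g^x mod p, without revealing x
    (Schnorr protocol, made non-interactive via the Fiat-Shamir hash)."""
    r = secrets.randbelow(q - 1) + 1
    t = pow(g, r, p)                                   # commitment
    c = int.from_bytes(hashlib.sha256(f"{t}|{y}".encode()).digest(), "big") % q
    s = (r + c * x) % q                                # response
    return t, s

def verify(y, t, s):
    """Check g^s == t * y^c (mod p); passes only if the prover knew x,
    yet the verifier learns nothing about x itself."""
    c = int.from_bytes(hashlib.sha256(f"{t}|{y}".encode()).digest(), "big") % q
    return pow(g, s, p) == (t * pow(y, c, p)) % p

x, y = keygen()                   # user enrolls once; y can be public
print(verify(y, *prove(x, y)))    # True: identity confirmed, secret never sent
```

The verifier ends up convinced the prover holds the secret, but the transcript reveals nothing usable about it, which is precisely the property age verification needs: proof of an attribute without disclosure of the data behind it.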
ZKPs are not yet a common way to prove identity. So far, social media companies have enlisted a number of technologies to verify people’s ages, including identity-based verification, such as asking users to upload government-issued IDs; AI scans of a user’s face; tracking a user’s activity to estimate their age; and parental supervision features such as Instagram’s Teen Accounts, which alert parents to harmful online habits.
At the heart of the issue, there is fundamentally no tool that can verify a user’s age without inherently violating that user’s privacy. Any accurate method requires invasive measures like biometrics or government IDs, and even social media companies are hesitant to request IDs because of the “ID gap”: some 15 million Americans lack any identification, a problem that disproportionately affects Black and Hispanic adults, immigrants, and people with disabilities.
Using AI to scan people’s faces does little to solve the problem: experts have found these models are less accurate for minority groups and often misclassify adults as minors, and AI itself struggles to distinguish a synthetic voice or deepfake from a real human. Children, who again are savvy, also frequently bypass geographically based bans using VPNs, as in Florida, where VPN usage jumped 1,150% after the state began requiring age verification to access Pornhub. And not least, storing identity documents carries major security risks, as in the recent breach of Discord’s third-party vendor 5CA that left over 70,000 government IDs exposed online.
Ultimately, the “age-verification trap” is what happens when regulators treat age enforcement as mandatory and relegate privacy to optional status. Until methods like ZKPs or device-based verification become the norm, these experts warn, the digital age will continue down the rabbit hole of trying to prove a person’s identity while trying not to infringe on privacy rights.
This story was originally featured on Fortune.com
