The Age Verification Trap: Verifying age undermines everyone's data protection

by oldnetguy on 2/23/2026, 2:22 PM with 815 comments

by JohnMakin on 2/23/2026, 4:34 PM

We'll try everything, it seems, other than holding parents accountable for what their children consume.

In the United States, you can get in trouble if you recklessly leave around or provide alcohol/guns/cigarettes for a minor to start using, yet somehow, the same social responsibility seems thrown out the window for parents and the web.

Yes, children are clever - I was one once. If you want to actually protect children and not create the surveillance state nightmare scenario we all know is going to happen (using protecting children as the guise, which is ironic, because often these systems are completely ineffective at doing so anyway) - then give parents strong monitoring and restriction tools and empower them to protect their children. They are in a much better and informed position to do so than a creepy surveillance nanny state.

That is, after all, the primary responsibility of a parent to begin with.

by antitoxic on 2/23/2026, 5:46 PM

I work on a European identity wallet system that uses a zero-knowledge proof age identification system. It derives an age attribute such as "over 18" from a passport or ID, without disclosing any other information such as the date of birth. As long as you trust the government that issued the ID, you can trust the attribute and anonymously verify somebody's age.

I think there are many pros and cons to be said about age verification, but this method solves most of the problems this article supposes, if it is combined with other practices common in the EU, such as deleting inactive accounts. These limitations are real, but tractable. IDs can be issued to younger teenagers, wallet infrastructure matures over time, and countries without strong identity systems primarily undermine their own age bans. Jurisdictions that accept facial estimation as sufficient verification are not taking enforcement seriously in the first place. The trap described in this article is a product of the current paradigm, not an inevitability.

by armchairhacker on 2/23/2026, 3:25 PM

Age verification is very hard, because parents will give their children their unlocked account, and children will steal their parents' unlocked account. If that's criminalized (like alcohol), it will happen too often to prosecute (much more frequently than alcohol, which is rarely prosecuted anyways). I don't see a solution that isn't a fundamental culture shift.

If there's a fundamental culture shift, there's an easy way to prevent children from using the internet:

- Don't give them an unlocked device until they're adults

- "Locked" devices and accounts have a whitelist of data and websites verified by some organization to be age-appropriate (this may include sites that allow uploads and even subdomains, as long as they're checked on upload)

The only legal change necessary is to prevent selling unlocked devices without ID. Parents would take their devices from children and form locked software and whitelisting organizations.

by aqme28 on 2/23/2026, 3:51 PM

If we're going to do this at all, it should be on the device, not the website/app. Parents flag their child's device or browser as under 18, and websites/apps follow suit. Parents get the control they're looking for, while service providers don't have to verify or store IDs. I guess it's just more difficult to pressure big dogs like google/apple/mozilla for this than pornhub and discord.

by agentultra on 2/23/2026, 3:38 PM

There are alternatives to ID verification if the goal is protecting children.

You could, for example, make it illegal to target children with targeted advertising campaigns and addictive content. Then throw the executives who authorized such programs in jail. Punish the people causing the harm.

by julianozen on 2/23/2026, 4:21 PM

There is a solution missing here.

Give our personal devices the ability to verify our age and identity securely and store it on device, like they do our fingerprint or face data.

Services that need access only verify it cryptographically. So my iPhone can confirm I'm over 21 for my DoorDash app, in the same way it stores my biometric data.

The challenge here is the adoption of these encryption services, and whether companies can rely on devices for compliance without having to cut off service for those who don't have it set up.
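The flow described above -- a device stores a verified attribute and services check it cryptographically -- can be sketched with a textbook RSA signature. This is only an illustration under invented parameters: real devices use hardware-backed keys (e.g. Secure Enclave ECDSA) and full-size moduli, and the claim format here is made up.

```python
import hashlib

# Toy RSA keypair standing in for a hardware-backed device key.
# (Illustrative only: real devices use far larger moduli.)
p, q = 61, 53
n = p * q                           # public modulus
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent, held on device

def h(claim: str) -> int:
    """Hash the claim down to an integer in the RSA group."""
    return int.from_bytes(hashlib.sha256(claim.encode()).digest(), "big") % n

def device_sign(claim: str) -> int:
    """The device signs a minimal boolean claim with its private key."""
    return pow(h(claim), d, n)

def service_verify(claim: str, sig: int) -> bool:
    """A service checks the signature using only the device's public key,
    learning nothing beyond the predicate itself."""
    return pow(sig, e, n) == h(claim)

claim = "over_21:true"   # no name, no birth date -- just the predicate
sig = device_sign(claim)
print(service_verify(claim, sig))   # True
```

The point of the sketch is the data flow: the service only ever sees the predicate string and a signature, never the document the predicate was derived from.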

by Wobbles42 on 2/23/2026, 6:21 PM

The purpose of a system is what it does.

Undermining data protection and privacy is clearly the point. The fact that it's happening everywhere at the same time makes it look to me like a bunch of leaders got together and decided that online anonymity is a problem.

It's not like kids having access to adult content is a new problem after all. Every western government just decided that we should do something about it at roughly the same time after decades of indifference.

The "age verification" story is casus belli. This is about ID, political dissent, and fears of people being exposed to the wrong brand of propaganda.

by Cthulhu_ on 2/23/2026, 3:46 PM

> And the only way to prove that you checked is to keep the data indefinitely.

This is a false premise already; the company can check the age (or have a third party like iDIN [0] do it), then set a marker "this person is 18+" and "we verified it using this method at this date". That should be enough.

[0] https://www.idin.nl/en/
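The marker-only retention described above could look something like the record below. This is a sketch with invented field names, not iDIN's actual schema -- the point is what is deliberately *not* stored.

```python
from dataclasses import dataclass, asdict
from datetime import date

@dataclass(frozen=True)
class AgeCheckRecord:
    """Everything the service keeps after a successful check.
    Field names are invented for illustration."""
    over_18: bool
    method: str        # e.g. "idin", "wallet-attribute"
    checked_on: date   # when the check was performed
    # Deliberately absent: name, date of birth, ID number, document scan.

record = AgeCheckRecord(over_18=True, method="idin", checked_on=date(2026, 2, 23))
print(asdict(record))
```

A record like this is enough to demonstrate to a regulator that a check happened, without the service becoming a warehouse of identity documents.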

by enjoykaz on 2/23/2026, 3:19 PM

Most of this debate makes more sense if the actual goal is liability reduction, not child safety. If it were genuinely about protecting kids, you'd regulate infinite scroll and algorithmic engagement optimization, not who can log in.

by TimPC on 2/23/2026, 3:38 PM

Big tech likes this because there are a lot more face recognition technologies in the wild in real life, and being able to connect all real-life data to online data is quite valuable. It's also quite possibly the largest training set ever for face recognition if IDs are stored, and given how IDs and images are sold across many companies, it seems highly probable that some company will retain the data rather than delete it after use.

by nye2k on 2/23/2026, 6:32 PM

I worked for a decade in what I would consider the highest level of our kids' privacy ever designed, at PBS KIDS. This was coming off a startup that attempted to do the same for grownups, but failed because of dirty money.

Every security attempt becomes a facade or veil in time, unless it's nothing. Capture nothing, keep nothing, say nothing. Kids are smart AF and will outlearn you faster than you can think. Don't even try to capture PII ever. Watch the waves and follow their flow, make things for them to learn from but be extremely careful how you let the grownups in, and do it in pairs, never alone.

by notTooFarGone on 2/23/2026, 3:08 PM

>Some observers present privacy-preserving age proofs involving a third party, such as the government, as a solution, but they inherit the same structural flaw: many users who are legally old enough to use a platform do not have government ID.

So there is absolutely no way to change that and give out IDs from the age of 14? You can already get an ID for children in Germany https://www.germany.info/us-de/service/reisepass-und-persona...

This is a problem that has to be solved by the government and not by private tech companies.

This is a lazy cop out to say "we have tried nothing and we are all out of ideas"

by alecco on 2/23/2026, 9:34 PM

The purpose is to control the Internet. They've been trying this for ages. They tried with terrorism and other things. Now the excuse is protecting children.

Not exactly a good moment for this caste of politicians to pretend they care about children's well-being, though.

by kuon on 2/23/2026, 5:09 PM

Even if you design the perfect system, kids will just ask parents for an unlocked account, many parents will accept, myself included. My kids have full access to the internet and I never used parental control, I talk to them. Of course, I don't want to give parenting advice, that would be presumptuous. But, my point is that a motivated kid will find a way, you have to "work" on that motivation.

Much of the worst content on the internet is not age gated at all; there are millions of porn websites without even an "are you over 18" popup. There is a plethora of toxic forums...

Of course it's a complex problem, but the current approach sacrifices a lot of what made the internet possible, and I don't like it.

by Saline9515 on 2/23/2026, 7:31 PM

Zero-knowledge proofs exist that can verify a user's ID holds certain properties without leaking said ID.

by trashb on 2/23/2026, 5:40 PM

I would like to take the discussion in the other direction. How about we offer safe spaces for kids instead of banning the unsafe spaces?

Similar to how there are specific channels for children on TV. Perhaps the government could even incentivize such channels. It would also make it easier for parents to monitor and set boundaries: parents would only need to check whether the TV is still tuned to the Disney Channel or similar, instead of some adult channel.

Similarly, this kind of method could be applied to online spaces. Of course there will be some kids who find ways around it, but they will most likely be outliers.

by condiment on 2/23/2026, 3:17 PM

We are missing accessible cryptographic infrastructure for human identity verification.

For age verification specifically, the only information that services need proof of is that the user's age is above a certain threshold, i.e. that the user is 14 years or older. But in order to make this determination, we see services asking for government ID (which many 14-year-olds do not have) or for invasive face scans. These methods provide far more data than necessary.

What the service needs to "prove" in this case is three things:

1. that the user meets the age predicate

2. that the identity used to meet the age predicate is validated by some authority

3. that the identity is not being reused across many accounts

All the technologies exist for this; we just haven't put them together usefully. Zero-knowledge proofs, like Groth16 or STARKs, allow statements about data to be validated externally without revealing the data itself. These are difficult for engineers to use, let alone consumers. Big opportunity for someone to build an authority here.
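Requirement 3 in the list above -- detecting identity reuse without linking accounts -- is commonly handled with a "nullifier": a hash of a user-held secret and the service identifier. In real ZK systems the nullifier is computed inside the proof so the secret never leaves the wallet; the core idea can be sketched directly (names and secret are illustrative):

```python
import hashlib

def nullifier(user_secret: bytes, service_id: str) -> str:
    """Deterministic per-(user, service) tag: the same user signing up twice
    at one service yields the same tag (duplicates are detectable), while
    tags for different services cannot be linked to each other."""
    return hashlib.sha256(user_secret + b"|" + service_id.encode()).hexdigest()

secret = b"high-entropy secret held in the user's wallet"
first  = nullifier(secret, "example-forum")
second = nullifier(secret, "example-forum")   # second signup attempt
other  = nullifier(secret, "other-site")

print(first == second)   # True  -- reuse at one service is caught
print(first == other)    # False -- no cross-service linkage
```

The service stores only the tag, so two services comparing databases learn nothing about shared users.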

by arn3n on 2/23/2026, 6:13 PM

Parents are competing with multi-trillion-dollar companies that have invested untold amounts of cash and resources into making their content addictive. When parents try to help their children, it's an uphill battle -- every platform that has kids on it also tends to have porn, or violence, or other things, as these platforms generally have disappointingly ineffective moderation. Most parents turn to age verification because it's the only way they can think of to compete with the likes of Meta or ByteDance, but the issue is that these platforms shouldn't have this content to begin with.

Platforms should be smaller -- the same site shouldn't be serving both pornography and my school district's announcement page and my friend's travel pictures. Large platforms are turning their unwillingness to moderate into legal and privacy issues, when in fact it should simply be a matter of "these platforms have adult content, and these ones don't." Then parents could much more easily ban specific platforms and topics. Right now there are no levers to pull or adjust, and parents have their hands tied. You can't take kids off Instagram or TikTok -- they will lose their friends. I hate the fact that the "keep up with my extended family" platform is the same as the "brainrot and addiction" one. The platforms need to be small enough that parents actually have choices on what to let in and what not to. Until either platforms are broken up via antitrust or the burden of moderation is put on the company, we're going to keep getting privacy-infringing solutions.

If you support privacy, you should support antitrust, else we're going to be seeing these same bills again and again and again until parents can effectively protect their children.

by Ginden on 2/23/2026, 7:00 PM

"Age-restriction laws push platforms toward intrusive verification systems that often directly conflict with modern data-privacy law" - when you make rules contradictory, someone always violates them, and you can use selective prosecution to "convince" companies to favor you, the incumbent politician. You don't even have to use such power; just a "joke" may be enough to have any rational CEO licking your shoes.

European proponents of "anti-big-tech action" make it pretty explicit - broad discretionary power should be given to the executive branch, because otherwise "international corporations" will use "loopholes" (and these "loopholes" are, in practice, explicitly written laws used as intended).

by Seattle3503 on 2/23/2026, 5:39 PM

> Some observers present privacy-preserving age proofs involving a third party, such as the government, as a solution, but they inherit the same structural flaw: many users who are legally old enough to use a platform do not have government ID. In countries where the minimum age for social media is lower than the age at which ID is issued, platforms face a choice between excluding lawful users and monitoring everyone. Right now, companies are making that choice quietly, after building systems and normalizing behavior that protects them from the greater legal risks. Age-restriction laws are not just about kids and screens. They are reshaping how identity, privacy, and access work on the Internet for everyone.

This rebuttal to privacy preserving approaches isn't compelling. Websites can split the difference and use privacy preserving techniques when available, and fall back to other methods when the user doesn't have an ID. I'd go further and say websites should be required to prioritize privacy preserving techniques where available.

There is a separate issue of improving access to government ID. I think that is important for reasons outside of age verification. Increasingly voting, banking, etc... already relies on having an ID.

by rafaelero on 2/23/2026, 5:01 PM

I have no idea where this idea that the Internet is toxic to children is coming from. Is this some type of moral panic? Weren't most of you children/adolescents during the 2000s?

by RockRobotRock on 2/23/2026, 2:59 PM

Here is an example of the problem with inference-based verification:

https://streamable.com/3tgc14

by kjeldsendk on 2/23/2026, 7:58 PM

Surprisingly, there are solutions that work just fine.

It's like how BankID or MitID work in Scandinavian countries.

When you need to identify yourself, you are challenged by a trusted third-party service.

Making this an age verification should be very easy.

https://www.mitid.dk/en-gb/about-mitid/?language=en-gb

by edoceo on 2/23/2026, 7:13 PM

I'm certain there is a way to verify age without compromising privacy or identity. I'm sure it's possible to build some OAuth-like flow that could allow sites to verify both human-ness and age. The systems and corporations that gate that MUST (in the RFC sense) be separate from the systems and corporations that want the verification.

Do we need laws to make this happen? What methods can be used to aid adoption? Do site operators really want to know the humanness and ages or are those just masks on adding more surveillance?

by nottorp on 2/23/2026, 8:39 PM

How about we accept age verification but every parliamentary type that voted in favor goes to jail for just one year for each data breach?

Practically that means all of them will be imprisoned for life, of course.

by alt227 on 2/23/2026, 4:32 PM

I like the solution Tim Berners-Lee is working on. Let's hope he has some success.

https://solidproject.org/

https://www.theguardian.com/technology/2026/jan/29/internet-...

by kseniamorph on 2/23/2026, 3:32 PM

> "Social media is going the way of alcohol, gambling, and other social sins: societies are deciding it’s no longer kids’ stuff."

Oh, remember those good old times when alcohol was kids' stuff.......

by arrsingh on 2/23/2026, 5:06 PM

Age verification is very hard to do without exposing personal information (ask me how I know). I feel it should be solved by a platform company - someone like Apple (assuming we trust Apple with our personal information, but it seems we already do) - and the platform (iOS) should be able to simply provide a boolean response to "is this person over 18" without giving away all the personal information behind the age verification.

Now the issue of which properties can "ask to verify your age" and "apple now knows what you're looking at" is still an unsolved problem, but maybe that solution can be delivered by something like a one time offline token etc.

But again, this is a very hard problem to solve and I would personally like to not have companies verify age etc.

by flerchin on 2/23/2026, 6:15 PM

The thing that needs to be age banned, or really just banned, is algorithmic feeds with infinite scroll. Kids (and adults) need to just interact with their friends, and block all the bait.

by reorder9695 on 2/23/2026, 6:05 PM

What's always got me about this is that when I was in school, I had it absolutely drilled into me that I should never expose personal information online to anyone. I completely saw the logic in that, and so I heavily limit the personal data I give out. Now we're just expected to completely go against that and give away the most personal details possible to companies who cannot prove what they are or are not doing with it, just because governments have decided that's best now?

by bondarchuk on 2/23/2026, 3:19 PM

It's kind of weird to me how every article on this topic here has people rushing to comment within a couple minutes with some generic "yes I too support ID checks for internet use!". Has the vibe really shifted so much among tech-literate people?

by fny on 2/23/2026, 4:02 PM

Isn't it a simpler solution to create some protocol for a browser or device to announce that an age-restricted user is present, and then have parents lock down devices as they see fit?

Aside from the privacy concerns, all this age verification tech seems incredibly complicated and expensive.

by muyuu on 2/23/2026, 7:30 PM

people really believe this coordinated push across jurisdictions is about kids and verifying their age? this excuse to try to end pseudonymity on the web is as old as the mainstream internet itself

to a lot of people it never sat well that people could just go online and say whatever they want, and communicate with each other unsupervised at large scale, and be effectively untargetable while doing so - that model of the internet was only allowed because it happened under the radar and those uncomfortable with it have been fighting it since they got the memo

by Galanwe on 2/23/2026, 6:10 PM

Well, there are technical solutions for this: blind signatures.

I could generate my own key, have the government blind-sign it upon verifying my identity, and then use my key to prove I'm an adult citizen, without anyone (even the signing government) knowing which key is mine.

Any verifying entity just needs to know the government's public key and check that it signed my key.
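This is the classic Chaum-style RSA blind signature. A toy sketch with tiny primes (real deployments use full-size keys or modern schemes; the "citizen-generated public key" string stands in for a real key fingerprint):

```python
import hashlib, secrets
from math import gcd

# Toy RSA blind signature (tiny primes for readability).
p, q = 1009, 1013
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))   # government's private key

def h(msg: bytes) -> int:
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

# 1. The citizen blinds a fingerprint of their self-generated key.
m = h(b"citizen-generated public key")
while True:
    r = secrets.randbelow(n - 2) + 2
    if gcd(r, n) == 1:
        break
blinded = (m * pow(r, e, n)) % n

# 2. The government signs the blinded value after an in-person ID check;
#    it never sees m itself, so it cannot later recognize the key.
blind_sig = pow(blinded, d, n)

# 3. The citizen unblinds, obtaining an ordinary signature on m.
sig = (blind_sig * pow(r, -1, n)) % n

# 4. Any verifier checks it against the government's public key (n, e).
print(pow(sig, e, n) == m)   # True
```

The unblinding works because (m · r^e)^d = m^d · r mod n, so multiplying by r⁻¹ leaves exactly m^d -- a signature the government never saw and cannot tie back to the issuance event.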

by jonstaab on 2/23/2026, 3:41 PM

Why is no one talking about using zero knowledge proofs for solving this? Instead of every platform verifying all its users itself (and storing PII on its own servers), a small number of providers could expose an API which provides proof of verification. I'm not sure if some kind of machine vision algorithm could be used in combination with zero-knowledge technology to prevent even that party from storing original documents, but I don't see why not. The companies implementing these measures really seem to be just phoning it in from a privacy perspective.

by boerseth on 2/23/2026, 3:57 PM

Does each service really need to collect this data from the user directly? They could instead have the user authorise them by e.g. OAuth2 to access their age with one of the de-facto online-identity-providers. I would be surprised if they didn't implement an API for this sometime soon, cause it would place them as the source of truth and give them unique access to that bit of user data. Seems like a chance and position they wouldn't want to lose.

by kylecordes on 2/23/2026, 7:01 PM

"Verifying age undermines everyone's data protection"

That's the whole point, right? A pretense to remove any remaining anonymity from communications?

Governments are endlessly infested with the worst people. They look back at historical attempts at totalitarianism and think to themselves, "Let's facilitate something like that, but worse".

by 111011111 on 2/23/2026, 9:07 PM

It isn't clear whether the paradox is biometric verification or ID data collection.

by barfiure on 2/23/2026, 3:50 PM

The internet isn’t the same as it was when we were growing up, unfortunately. I miss the days of cruising DynamicHTML while playing on GameSpy but… yeah. It became an absolute clusterfuck and I’m not surprised they now want to enforce age restrictions.

Maybe TBL is right and we need a new internet? I don’t have the answer here, but this one is too commercialized and these companies are very hawkish.

by cromka on 2/23/2026, 4:00 PM

Someone explain it to me like I'm 5: there are solutions already in effect that are based on cryptographically generated, anonymous, one-time-use tokens that allow confirming an adult's age without being tied to your ID. Why on earth do even technically skilled people completely ignore those? Is this pure NIMBY ignorance, or am I missing something?

by bronlund on 2/23/2026, 3:47 PM

I would argue that this has nothing to do with age verification, but everything to do with getting identifiable data on all of us.

by akersten on 2/23/2026, 4:01 PM

In my experience the people who want "privacy preserving age verification" are the same people who want "encryption backdoors but only for the good guys." Shockingly the technically minded among them do seem to recognize the impossibility of the latter, without applying the same chain of thought to the former.

by radium3d on 2/23/2026, 7:03 PM

It's insanely dangerous to have so much data stored on so many servers that are inevitably not maintained.

by fdefitte on 2/23/2026, 8:48 PM

Every age verification scheme is really an identity verification scheme, "age" is just the acceptable entry point. Once the infrastructure exists to verify you are 18, it can verify you are not on a watchlist, verify your creditworthiness, verify your political associations.

You are not building a parental filter. You are building rails.

"Protect the children" is the canonical playbook for every surveillance expansion since forever. The children get protected for six months. The infrastructure stays forever.

by haunter on 2/23/2026, 3:42 PM

This is my problem with the Discord situation too:

Big tech doesn't have to wait for an outright government ban when they can just say that we are a teen-only site by default and everyone has to verify whether they are over 18 or not. This age verification will affect everyone no matter what.

by ct0 on 2/23/2026, 4:52 PM

I don't get the alcohol analogy, as in most places in the USA it's 100% legal for minors to consume alcohol in the home with parental permission. In public it's a different story.

by robinwhg on 2/23/2026, 4:47 PM

I'm not too knowledgeable about this, but couldn't you just provide a government-issued key to every citizen and give a service provider that key, and it's only valid if you're above a certain age?

by SoftTalker on 2/23/2026, 5:58 PM

A lot of talk and no solutions. Exactly the reason we are where we are.

Liquor stores, bars, strip clubs, adult bookstores, and similar businesses don't let kids in. Movie theatres don't let a 10-year-old into an R-rated movie. The tech industry ignored its social responsibility to keep kids away from adult and age-inappropriate content. Now it is facing legal requirements to do so. Tough for them, but they could have been more proactive.

by jama211 on 2/23/2026, 6:06 PM

This thread is gonna be full of HN users blaming the parents for a systemic problem isn’t it?

Yup.

by lightningspirit on 2/23/2026, 4:12 PM

If there's only a centralized system that uses digital IDs to hand off providers only a "yay" or "nay"...

by chaostheory on 2/23/2026, 9:10 PM

That’s the point: enable mass surveillance and the loss of privacy under the guise of another cause, usually “protecting the children”.

by edgyquant on 2/23/2026, 3:07 PM

Everything is a trade-off in the world. I think people who are anti-ID ignore this, but for me personally it's harder and harder to accept the trade-offs of an internet without ID. AI has only accelerated this; I don't want to live in a world where the average person unknowingly interacts with bots more than with other individuals, and where black-market actors can sway public opinion with armies of bots.

I think most people are aligned here, and that an internet with identification is inevitable whether we like it or not.

by catoc on 2/23/2026, 7:19 PM

It’s the same as scanning for CSAM, or encryption-backdoors “to catch criminals”.

Of course we hate child abuse.

Of course we hate criminals.

Of course we hate social media addicting our kids.

But they’re just used as emotional framing for the true underlying desire: government surveillance.

(For the record: I am not into conspiracy theories; the EU has seen proposals for - imho technically impossible - “legally-breakable encryption” alone in 2020, 2022, and 2025; now we'll also see repeated attempts at the “age verification” thing to force all adults to upload their IDs to ‘secure’ web portals)

by kevincloudsec on 2/23/2026, 4:09 PM

the companies pushing hardest for age verification are the same ones whose business model depends on knowing exactly who you are. the child safety framing is convenient cover for a data collection problem they were already trying to solve.

by ltbarcly3 on 2/23/2026, 8:38 PM

It's amazing how much it's possible to foment arguments against something if you are very well funded and a regulation will cost your industry a lot of money.

Age verification is a good thing. Giving children unrestricted access to hardcore pornography is bad for them. Whatever arguments you want to make, fundamentally this is true.

by tolmasky on 2/23/2026, 3:53 PM

I am so surprised by the comments on this thread. I was not expecting to see so many people on Hacker News in favor of this. As is typically the case with things like this, the reasoning stems from agreeing with the goal of age verification, with little regard to whether age verification could ever actually work. It reminds me in some sense of the situation with encryption, where politicians want encryption that blocks "the bad guys" while still allowing "the good guys" to sneak in if necessary. Sure, that sounds cool; it's not possible though. I suppose DRM is a better analogue here: an increasingly convoluted system that slowly takes over your entire machine just so it can pretend that you can't view video while you're viewing it.

To be clear, tackling the issue of child access to the internet is a valuable goal. Unfortunately, "well what if there was a magic amulet that held the truth of the user's age and we could talk to it" is not a worthwhile path to explore. Just off the top of my head:

1. In an age of data leaks, identity theft, and phishing, we are training users to constantly present their ID, and critically for things as low stakes as facebook. It would be one thing if we were training people to show their ID JUST for filing taxes online or something (still not great, but at least conveys the sensitivity of the information they are releasing), but no, we are saying that the "correct future" is handing this information out for Farmville (and we can expect its requirement to expand over time of course). It doesn't matter if it happens at the OS level or the web page level -- they are identical as far as phishing is concerned. You spoof the UI that the OS would bring up to scan your face or ID or whatever, and everyone is trained to just grant the information, just like we're all used to just hitting "OK" and don't bother reading dialogs anymore.

2. This is a mess for the ~1 billion people on earth who don't have a government ID. This is a huge setback to populations we should be trying to get online. Now all of a sudden your usage of the internet is dependent on your country having an advanced enough system of government ID? Seems like a great way for tech companies to gain leverage over smaller third-world countries by tying their access to the internet to implementing support for their government documents. Also seems like a great way to lock open source out of serious operating system development if it now requires relationships with all the countries in the world. If you think this is "just" a problem of getting IDs into everyone's hands, remember that it is a common practice to take foreign workers' passports and IDs away from them in order to hold them effectively hostage. The internet was previously a powerful outlet for working around this, and would now instead assist this practice.

3. Short of implementing HDCP-style hardware attestation (which more or less locks in the current players indefinitely), this will be trivially circumvented by the parties you're attempting to help, much like DRM was.

Again, the issues that these systems are attempting to address are valid, I am not saying otherwise. These issues are also hard. The temptation to just have an oracle gate-checker is understandable, I know. But we've seen time and again that this just (at best) creates a lot of work and doesn't actually solve the problem. Look no further than cookie banners -- nothing has changed from a data collection perspective; it's just created a "cookie banner expert" industry and possibly made users more indifferent to data collection as a knee-jerk reaction to the UX decay banners have created on the internet as a whole. Let's not, 10 years from now, laugh about how any sufficiently motivated teenager can scan their parent's phone while they're asleep, or pay some deadbeat 18-year-old to use their ID, and bypass any verification system, while simultaneously furthering the stranglehold large corporations have over the internet.

by DeathArrow on 2/23/2026, 3:18 PM

I wonder how much time we have before being asked to enter the government issued ID in a card reader so websites can read age and biometric data from the chip.

by b8 on 2/23/2026, 4:25 PM

Hence why Illinois has already made it illegal.

by jajuuka on 2/23/2026, 7:53 PM

I feel like the ending undermines the whole piece. Throwing your hands up and going "we should do nothing" isn't really a solution. If a compromise exists, I think it's adding age requests on device setup. There wouldn't be any verification, but it could be used as a way to limit access to content globally. Content providers would just need a simple API to check whether the age range fits and move right along.

This puts more onus on parents and guardians to ensure their child's devices are set up correctly. The system wouldn't be perfect and people using something like Gentoo would be able to work around it, but I think it helps address the concerns. A framework would need to be created for content providers to enforce their own rating system but I don't think it's an impossible task. It obviously wouldn't cover someone not rating content operating out of Romania, but should be part of the accepted risk on an open internet.

Personally I do agree with the "do nothing" stance, but I don't think it's going to hold up among the wider public. The die is cast and far too many average people are supporting moves like this. So the first defense should be to steer that conversation in a better way instead of stonewalling.

by miss_haruon 2/23/2026, 3:49 PM

parents: won't somebody else put some rules and safeguards in place to protect my children?

by rgloveron 2/23/2026, 4:53 PM

An OIDC-type solution, but for parents, might work here.

Basically, kids can sign up for an account, triggering a notification to parents. The parent either approves or rejects the sign-up. Parents can revoke on demand, and can see their kids' login usage across various apps/services. This gets parental restrictions into the login flow without making it a PITA.
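
A toy sketch of that approval flow in Python -- every class, method, and name here is hypothetical, just to make the shape concrete:

```python
# Hypothetical sketch of an OIDC-style parental consent flow.

class ParentConsole:
    """Holds one parent's pending requests and granted approvals."""

    def __init__(self):
        self.pending = {}     # request_id -> (child, service)
        self.granted = set()  # (child, service) pairs currently approved

    def notify(self, request_id, child, service):
        # A sign-up attempt lands here and pushes a notification.
        self.pending[request_id] = (child, service)

    def approve(self, request_id):
        self.granted.add(self.pending.pop(request_id))

    def reject(self, request_id):
        self.pending.pop(request_id, None)

    def revoke(self, child, service):
        # Revocation on demand: the next login check fails.
        self.granted.discard((child, service))


def child_signup(console, request_id, child, service):
    """A service calls this instead of creating the account outright."""
    console.notify(request_id, child, service)


def login_allowed(console, child, service):
    return (child, service) in console.granted
```

A service would gate logins on `login_allowed(...)`, and the parent's console doubles as the usage log the comment mentions; no parental identity data ever needs to reach the service itself.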

by knallfroschon 2/23/2026, 5:54 PM

All adults prove their identity multiple times per month: every time they access digital health records, or any time they use an electronic payment.

Just make Google/Apple reveal part of that data (age > x years) to websites and apps.

Boom, done. Privacy guarded. Easy.

by hash07eon 2/23/2026, 9:43 PM

The same people who are in the Epstein files want to protect children?

by publicdebateson 2/23/2026, 3:25 PM

Isn't this the same debate as airports post 9/11, whether you can have both privacy and security? Seems conclusive, no.

by cess11on 2/23/2026, 3:11 PM

My main takeaway from this is that politicians seem to have given up on making "social media" less harmful by regulating it, and instead focus on gatekeeping access, with the added perk of supplying security services and ad tyrants with yet another data pump.

by almosthereon 2/23/2026, 6:26 PM

I think this should work like OpenID connect but with just a true/false.

PS = pr0n site

AV = age verification site (conforming to age-1 spec and certified)

  PS: Send user to AV with generated token
  AV: Browser arrives with POST data from PS with generated token
  AV: AV-specific flow to verify age - may capture images/tokens in a database. May be instant or take days

  AV: Confirms age, provides link back to original PS
  PS: Requests AV/status response payload:

  {
    "age": 21,
    "status": "final"
  }

No other details need to be disclosed to PS.

I don't know if this is already the flow, but I suspect AV is sending name, address, etc... All stuff that isn't needed if AV is a certified vendor.
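
Under the comment's own assumptions (a certified AV, an opaque single-use token, and a status endpoint that returns only age and status), the handoff might look like this in-memory Python sketch; all names and the exact payload shape are hypothetical:

```python
import secrets

# Hypothetical in-memory stand-ins for the two parties in the flow above.

class AgeVerifier:
    """AV: verifies age out-of-band, then exposes only {age, status}."""

    def __init__(self):
        self._sessions = {}

    def begin(self, token):
        # Browser arrives with the PS-generated token (the POST step).
        self._sessions[token] = {"status": "pending"}

    def complete(self, token, verified_age):
        # AV-specific verification happened (may be instant or take days);
        # only the resulting age is retained against the opaque token.
        self._sessions[token] = {"age": verified_age, "status": "final"}

    def status(self, token):
        # PS polls this; no name, address, or DOB is ever returned.
        return self._sessions.get(token, {"status": "unknown"})


class ContentSite:
    """PS: hands the user off to AV, then checks only the result."""

    def __init__(self, av, min_age=18):
        self.av = av
        self.min_age = min_age

    def start_verification(self):
        token = secrets.token_urlsafe(16)  # opaque, single-use
        self.av.begin(token)
        return token  # in reality, sent along with the redirect to AV

    def is_allowed(self, token):
        result = self.av.status(token)
        return result.get("status") == "final" and result.get("age", 0) >= self.min_age
```

The point of the sketch is that the only data crossing the AV-to-PS boundary is the small status payload, matching the `{"age": 21, "status": "final"}` example; the token links the two sessions without identifying the user to PS.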

by Pxtlon 2/23/2026, 6:21 PM

All of my kids' devices are identified, at the device level, as children's devices. They could've trivially exposed this as metadata to allow sites to enforce "no under 18" use. That said, my bigger concern for my kids isn't that they'd see a boob or a penis, but that they'd see an influencer who'd try to radicalize them to some extremist cause, and that's usually not considered 18+ content.

And either way, none of that requires de-anonymizing literally everyone on the internet. I'd be more than happy to see governments provide cryptographically secure digital ID so that sites can self-select to start requiring it to make moderation easier.

by djohnstonon 2/23/2026, 4:27 PM

(thats the point)

by CrzyLngPwdon 2/23/2026, 3:09 PM

It's just another way to surveil the population and won't cause any real problems for anyone who can work around it.

by alvataron 2/23/2026, 4:46 PM

zero knowledge cryptography solves this

by dark-staron 2/23/2026, 5:50 PM

I don't see why platforms would have to store the data indefinitely.

Once you are verified, you just flip a bit "verified" in the database and delete all identification data.

No reason to store the data indefinitely
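
A minimal Python sketch of that idea (names are hypothetical): the ID document is consumed by the check, and only a boolean outcome is persisted.

```python
# Hypothetical sketch of verify-then-delete: the raw ID document is
# passed through the check and never written to storage.

def verify_and_discard(db, user_id, id_document, age_check):
    """Run the age check, persist only the boolean outcome."""
    db[user_id] = {"age_verified": bool(age_check(id_document))}
    # id_document goes out of scope here; nothing from it is retained.
    return db[user_id]["age_verified"]
```

Whether a regulator accepts a bare boolean as sufficient proof that the check happened is the contested part, and that retention pressure is what the surrounding thread is arguing about.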

by mgaunardon 2/23/2026, 6:44 PM

"Think of the children" is merely a political argument to get a law to be popular among normal people.

by stainablesteelon 2/23/2026, 7:51 PM

this is a broader parenting problem, the state doesn't need to do this

politicians are interested in it because they're begging for some way to censor the internet, which would actually be even worse for parenting because now it prevents children from ever learning to be responsible with these highly addictive platforms

by simion314on 2/23/2026, 4:28 PM

Big Tech refused to work together to implement an age flag that parents would set up on a child's device; now we get each European country and each US state with their own special rules.

by matthewmorganon 2/23/2026, 4:10 PM

That was the goal.

by Devastaon 2/23/2026, 4:09 PM

I can understand the need to restrict some stuff kids can see. When I was a teen it took me hours and hours to download one 2-minute porn clip from Kazaa, but these days you could download a lifetime's worth in one weekend. That can't be healthy.

That being said, nothing about these laws is about protecting children; their primary purpose is to crack down on the next Just Stop Oil or Palestine Action, and for that reason they should be opposed.

by redogon 2/23/2026, 4:06 PM

It's to continue the culture of bullying and lack-of-accountability by and for the perversely rich oligarchy.

For you'll need to be accounted while they do the counting.

by scotty79on 2/23/2026, 3:59 PM

If the government is concerned, shouldn't the government just deliver auth based on birth certificates for everyone to use?

by DeathArrowon 2/23/2026, 3:44 PM

In most countries it is illegal for small children to drive or to use firearms. And it's their parents' job not to let them.

Instead of requiring IDs, we should let parents manage what their children do online.

by DeathArrowon 2/23/2026, 3:40 PM

30 years of internet were possible with relative freedom, without spying and surveillance. All of a sudden it's not possible.

Governments recycle the "Think of the children" mantra, and once again they are after terrorists and bad guys.

by ck2on 2/23/2026, 3:01 PM

if you are paying for internet access you have to be over 18, no?

and if you have internet access without paying, that means someone else is legally responsible for your access

"problem solved" ?

by user3939382on 2/23/2026, 7:44 PM

Corporate interests don't care about data privacy or security; they care about liability and compliance, which are not the same thing.

Major banks and government institutions can't even be bothered to implement the NIST password guidelines. If they've got their GDPR, SOC 2, FedRAMP, whatever, it's green lights and the rest is insurance.

by Noaidion 2/23/2026, 3:47 PM

I have a problem with an open internet and allowing open access to everything the internet can offer to young children.

It cannot be a friction-less experience. Allowing children to see gore and extreme porn at a young age is not healthy. And then we have all the "trading" platforms (gambling).

Even though my brothers were able to get many hard drugs when I was young, around 1977, there was a lot of friction: finding a dealer, trusting them, etc. Some bars would not card us, but even then there was risk and sometimes they got caught. In NY we could buy cigarettes with no friction, and they were the one drug I took when I was young - addicted at 16, finally quitting for good at 20. I could have used some friction there.

So how do we create friction? Maybe hold the parents liable? They are doing this with guns right now; a big trial is just finishing, and it looks like a father who gave his kid an AK-47 at 13 is about to go to jail.

I would like to see a state ID program where the ID is verified only by the state ID system. This way nothing needs to be sent to any private party. Sites like Discord could just get an OK signal from the state system. They could use facial recognition on the phone to match the user with the ID.

Something needs to be done however. I disagree that the internet needs to be open to all at any age. You do not need an ID to walk into a library, but you need one to get into a strip club. I do not see why that should not be the same on the internet.

by pessimizeron 2/23/2026, 6:38 PM

The point is to undermine data protection; this debate is useless. It's a question about power and control, not a technical one. The people lobbying for this don't care about children, and neither are they getting big support from a constituency clamoring for this. This is an intelligence initiative, and a donor initiative from people who are in a position to control the platform (all computing and communications) after it is locked down.

It's not even worth talking about online. There's too much inorganic support for the objectives of nation-states and the corporations that own them.

Legislation has been advanced in Colorado demanding that all OSes verify the user's age. It will fail, but it will be repeated 100 times, in different places, smuggled attached to different legislation, the process and PR strategies refined and experimented with, versions of it passed in Australia, South Korea, maybe the UK and Europe, and eventually passed here. That means that "general purpose" computing will eventually be lost to locked bootloaders.

https://www.pcmag.com/news/colorado-lawmakers-push-for-age-v...

[edit: I'm an idiot, they already passed it in California https://www.hunton.com/privacy-and-cybersecurity-law-blog/ca...]

And it will be an entirely engineered and conscious process by people who have names. And we will babble about it endlessly online, pretending that we have some control over it, pretending that this is a technical discussion or a moral discussion, on platforms that they control, that they allow us to babble on as an escape valve. Then, one day the switch will flip, and advocacy of open bootloaders, or trading in computers that can install unattested OSes, will be treated as organized crime.

All I can beg you to do is imagine how ashamed you'll be in the future when you're lying about having supported this now, or complaining that you shouldn't have "trusted them to do it the right way." Don't let dumb fairytales about Russians, Chinese, Cambridge Analytica and pedophile pornography epidemics have you fighting for your own domination. Maybe you'll be the piece of straw that slows things down just enough that current Western oligarchies collapse before they can finish. Maybe we'll get lucky.

Polls and ballots show that none of this stuff has majority organic support. But polls can be manipulated, and good polls have to be publicized for people to know they're not alone, and not afraid they're misunderstanding something. If both candidates on the ballot are subverted, the question never ends up on the ballot.

The article itself says nothing that hasn't been said before, and stays firmly under the premise that access to content online by under-18s is suddenly one of the most critical problems of our age, rather than a sad annoyance. What is gained by having this dumb discussion again?

by scytheon 2/23/2026, 5:23 PM

It's crazy to me that we want to force age verification on every service across the Internet before we ban phones in school. I could understand being in favor of both, or neither, but implementing the policy that impacts everybody's privacy before the one that specifically applies within government-run institutions is just so disappointingly backwards it's tempting to consider conspiracy-like explanations.

The advantage, I think, of age verification by private companies over cellphone bans in public schools is that cellphone bans appear as a line-item on the government balance sheet, whereas the costs of age verification are diffuse and difficult to calculate. It's actually quite common for governments to prefer imposing costs in ways that make it easier for the legislators to throw up their hands and whistle innocently about why everything just got more expensive and difficult.

And the argument over age verification for merely viewing websites, which is technically difficult and invasive, muddies the waters over the question of age verification for social media profiles, where underage users are more likely to get caught and banned by simple observation. The latter system has already existed for decades -- I remember kids getting banned for admitting they were under 13 on videogame forums in the '00s all the time. It seems like technology has caused people to believe that the law has to be perfectly enforceable in order to be any good, but that isn't historically how the law has worked -- it is possible for most crimes to go unsolved and yet most criminals get caught. If we are going to preserve individual privacy and due process, we need to be willing to design imperfect systems.

by light_hue_1on 2/23/2026, 5:09 PM

As a parent, I'm happy that social bans are finally a thing.

But, I don't get the approach. It's not like social media starts being a positive in our life at 20. The way these companies do social media is harmful to mental health at every age. This is solving the wrong problem.

The solution is to take away their levers to make the system so addictive. A nice space to keep in touch with your friends. Nothing wrong with that.

by 1vuio0pswjnm7on 2/23/2026, 7:29 PM

"In cases when regulators demand real enforcement rather than symbolic rules, platforms run into a basic technical problem. The only way to prove that someone is old enough to use a site is to collect personal data about who they are."

These so-called "platforms" already collect data about who people are in order to facilitate online advertising and whatever else the "platform" may choose to do with it. There is no way for the user to control where that data may end up or how it may be used. The third party can use the data for any purpose and share it with anyone (or not). Whether they claim they do or don't do something with the data is beside the point: their internal actions cannot be verified, and there are no enforceable restrictions in the event a user discovers what they are doing and wants to stop them (at that point it may be too late for the user anyway).

"Tech" journalists and "tech bros" routinely claim these "platforms" know more about people than their own families, friends and colleagues

That's not "privacy"

Let's be honest. No one is achieving or maintaining internet "privacy" by using these "platforms", third party intermediaries (middlemen) with a surveillance "business model", in order to communicate over the internet

On the contrary, internet "privacy" has been diminishing with each passing year that people continue to use them

The so-called "platforms" have led to vast repositories of data about people that are used every day by entities who would otherwise not be legally authorised or technically capable of gathering such surveillance data. Most "platform" users are totally unaware of the possibilities. The prospect of "age verification" may be the wake up call

"Age verification" could potentially make these "platforms" suck to a point that people might stop using them. For example, it might be impossible to implement without setting off users' alarm bells. In effect, it might raise more awareness of how the vast quantity of data about people these unregulated/underregulated third parties collect "under the radar" could be shared with or used by other entities. Collecting ID is above the radar and may force people to think twice

The "platforms" don't care about "privacy" except to control it. Their "business model" relies on defeating "privacy", reshaping the notion into one where privacy from the "platform" does not exist

Internet "privacy" and mass data collection about people via "platforms" are not compatible goals

"... our founders displayed a fondness for hyperbolic vilification of those who disagreed with them. In almost every meeting, they would unleash a one-word imprecation to sum up any and all who stood in the way of their master plans.

"Bastards!" Larry would exclaim when a blogger raised concerns about user privacy."

- Douglas Edwards, Google employee number 59, from 2011 book "I'm feeling lucky"

If a user decides to stop using a third party "platform" intermediary (middleman) that engages in data collection, surveillance and ad services, for example, because they wish to avoid "age verification", then this could be the first step toward meaningful improvements in "internet privacy". People might stop creating "accounts", "signing in" and continuing to be complacent toward the surreptitious collection of data that is subsequently associated with their identity to create "profiles".

by 2ducton 2/23/2026, 4:46 PM

I'm going to state that at one point I was one of the young people this kind of legislation is meaning to protect. I was exposed to pornography at too young an age and it became my only coping mechanism to the point where as an adult it cost me multiple jobs and at one point my love life.

I don't think this legislation would have helped me. I found the material I did outside of social media, and Facebook was not yet ubiquitous. I did not have a smartphone at the time, only a PC. I stayed off social media entirely in college. Even with nobody at all in my social sphere, it was still addicting. There are too many sites out there that won't comply, and I was too technically savvy not to attempt to bypass any guardrails.

The issue in my case was not one of "watching this material hurt me" in and of itself. It was having nobody to talk to about the issues causing my addiction. My parents were conservative and narcissistic and did not respect my privacy so I never talked about my addiction to them. They already punished me severely for mundane things and I did not want to be willingly subjected to more. To this day they don't realize what happened to me. The unending mental abuse caused me to turn back to pornography over and over. And I carried a level of shame and disgust so I never felt comfortable disclosing my addiction to any school counselors or therapists for decades. The stigma around sexual issues preventing people from talking about them has only grown worse in the ensuing years, unfortunately.

At most this kind of policy will force teenagers off platforms like Discord which might help with being matched with strangers, but there are still other avenues for this. You cannot prevent children from viewing porn online. You cannot lock down the entire Internet. You can only be honest with your children and not blame or reproach them for the issues they have to deal with like mine did.

In my opinion, given that my parents were fundamentally unsafe people to talk to, causing me to think that all people were unsafe, then the issue of pornography exposure became an issue. In my case, I do not believe there was any hope for me that additional legislation or restrictions could provide, outside of waking up to my abuse and my sex addiction as an adult decades later. Simply put, I was put into an impossible situation, I didn't have any way to deal with it as a child, and I was ultimately forsaken. In life, things like those just happen sometimes. All I can say was that those who forsook me were not the platforms, not the politicians, but the people who I needed to trust the most.

I believe many parents who need to think about this issue simply won't. The debate we're having here on this tech-focused site is going to pass by them unnoticed. They're not going to seriously consider these issues and the status quo will continue. They won't talk with their children to see if everything's okay. I don't have many suggestions to offer except "find your best family," even if they aren't blood related.

by TZubirion 2/23/2026, 4:02 PM

>"None of this is an argument against protecting children online. It is an argument against pretending there is no tradeoff"

Tradeoff acknowledged, and this runs both sides, there's hundreds of risks that these policies are addressing.

To mention a specific one, I was exposed to pornography online at age 9, which is obviously an issue; the incumbent system allowed this to happen and will continue to do so. So what tradeoffs in policy do detractors of age verification consider so terrible that they outweigh avoiding, for example, kids' first sexual experiences being pornography? Dystopian vibes? Is that equivalent?

Or, what alternative solutions are counter-proposed to avoid these issues without age verification and vpn bans.

Note 2 things before responding:

1) per the original quote, it is not valid to ignore the tradeoffs with arguments like "child abuse is an excuse to install civilian control by governments"

2) this was not your initiative; another group is the one making huge efforts to intervene and change the status quo, so whatever solution is counterproposed needs to be new - as an existing solution, it has already proven ineffective.

If any of those is your argument, you are not part of the conversation; you have failed to act as wardens of the internet, and whatever systems you control will be slowly removed from you by authorities and technical professionals that follow the regulations. Whatever crumbs you are left with as an admin will be relegated to increasingly niche crypto communities where you will be pooled with dissidents and criminals of types you will need to either ignore or pretend are ok. You will create a new Tor, a Gab, a Conservapedia, a HackerForums, and you will be hunted by the obvious and unequivocal right side of the law. Your enemy list will grow bigger and bigger: the State? Money? The law? God? The notion of right and wrong, which is like totally subjective anyways?

by 2OEH8eoCRo0on 2/23/2026, 3:09 PM

Fuck data privacy, what privacy? Your ISP knows you, sites track you, cookies track you. It's a myth. But oh, we totally can't figure out age verification. Fuck off, I don't buy it.

by callamdelaneyon 2/23/2026, 4:10 PM

We should just ban smartphones; that's where a great deal of the harm comes from, and it's harder for parents to manage. No need for children to have cameras connected to the internet, whether via smartphones or computers.

by infotainmenton 2/23/2026, 3:13 PM

Device based attestation seems like the way to go largely; it doesn't solve the problem, but it's good enough that it would cover most cases.

by arbirkon 2/23/2026, 5:28 PM

I know many will disagree and that is ok. Imo we need a global ID based on nation states' national IDs. I know that the US doesn't have that, but the rest of the developed world does. I don't want ID on porn sites because I don't think that is necessary, but I want bot-free social media, 13+ sharing forums like Reddit, and I want competitive games where, if you are banned, you need your brother's ID to try cheating again.

by anon_shillon 2/23/2026, 3:55 PM

From the second paragraph:

> And the only way to prove that you checked is to keep the data indefinitely.

This is not true and made me immediately stop reading. If a social media app uses a third party vendor to do facial/ID age estimation, the vendor can (and in many cases does) only send an estimated age range back to the caller. Some of the more privacy invasive KYC vendors like Persona persist and optionally pass back entire government IDs, but there are other age verifiers (k-ID, PRIVO, among others) who don't. Regulators are happy with apps using these less invasive ones and making a best effort based on an estimated age, and that doesn't require storing any additional PII. We really need to deconflate age verification from KYC to have productive conversations about this stuff. You can do one thing without doing the other.