The one and only method I will participate in is server operators setting an RTA header [1] for URLs that may contain adult or user-generated or user-contributed content, and clients having the option to detect that header and trigger parental controls if they are enabled by the device owner. That should suffice to protect most small children. Teens will always get around anything anyone implements, as they are already doing. RTA headers are not perfect, nothing is nor ever will be, but there is absolutely no tracking or leaking of data involved. Governments could easily hire contractors to scan sites for the lack of that header and fine sites not participating into oblivion.<p>I, <i>a small server operator</i> and a client of the internet, will not participate in any other methods, period, full stop. Make simple, logical, and rational laws around RTA headers and I will participate. Many sites already voluntarily add this header. It is trivial to implement. Many questions and a lengthy discussion occurred here [1]. I doubt my little private and semi-private sites would be noticed, but one day it may come to that, at which point it's back to semi-private Tinc open source VPN meshes for my friends and me.<p>[1] - <a href="https://news.ycombinator.com/item?id=46152074">https://news.ycombinator.com/item?id=46152074</a>
This is exactly the way it should be done. Device with parental controls enabled disables content client-side when the header is detected. As far as I can tell, it's a global optimum, all trade-offs considered.
Well why haven't all the big tech companies done it then?<p>They have only themselves to blame. They had years to fix the problem of inappropriate content being delivered to kids and their response was sticking their fingers in their ears and saying "blah blah blah parenting blah blah blah"<p>And it really should be the opposite. Assume content is not kid-safe by default, and allow sites to declare if they have some other rating.
Because it isn't in their financial interest. They've either done nothing or actively lobbied for these ID laws. You can plausibly explain it in a number of ways, including regulatory capture, deanonymization, spam reduction, etc.
Interesting, I've never heard of this. I see an example that involves an HTTP response header "Rating: RTA-5042-1996-1400-1577-RTA". But does this actually still get used by parental controls? I didn't run into a lot of documentation about this, including on the very badly designed RTA web site <a href="https://www.rtalabel.org/" rel="nofollow">https://www.rtalabel.org/</a><p>For anyone curious about the value, the number in it is just a fixed string everybody decided to use for some reason that isn't clear to me.<p>I would deeply prefer to do it this way, but my goodness the RTA org needs a serious brush-up of their web site and its information on how to use this.
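For anyone wanting to poke at this, a minimal sketch (in Python; the function name and the parental-controls policy around it are illustrative, not any real client API) of how a filtering client might detect the label:

```python
# Minimal sketch: detect the RTA label in HTTP response headers.
# RTA_LABEL is the fixed string published by rtalabel.org; everything
# else here is invented for illustration.
RTA_LABEL = "RTA-5042-1996-1400-1577-RTA"

def is_rta_labeled(headers: dict) -> bool:
    """Return True if the response carries the RTA rating header."""
    # HTTP header field names are case-insensitive, so normalize first.
    normalized = {k.lower(): v for k, v in headers.items()}
    return RTA_LABEL in normalized.get("rating", "")

# A filtering client would block when the label is present and
# parental controls are enabled on the device.
print(is_rta_labeled({"Rating": RTA_LABEL}))          # True
print(is_rta_labeled({"Content-Type": "text/html"}))  # False
```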
Or could have a header saying this is <i>not</i> adult-only content, and a parentally-controlled device will block things that don't participate.
Yes, the RTA header was primarily a solution specific to porn sites. The broader problem is that parental controls don't have reliable standardized signals to filter on, which has led to the current nonfunctional mess.<p>So ideally you want a standardized header that can be used to self-classify content into any number of arbitrary and potentially overlapping categories. The presence of that header should then be legally mandated, with specific categories required to be marked as either present or absent.<p>So for example HN might be "user generated T, social media T, porn F" or similar, with operators being free to include arbitrary additional categories (but we know from experience that most of them won't).<p>While this would be required by law, I imagine browser vendors might also drop support for loading sites that don't send the header in order to coerce global compliance.
That's a good idea. There could be two headers, the existing RTA header that adult sites use today [1] and another static header that explicitly states there shall be no adult content.<p>[1] - <a href="https://www.shodan.io/search?query=RTA-5042-1996-1400-1577-RTA" rel="nofollow">https://www.shodan.io/search?query=RTA-5042-1996-1400-1577-R...</a> [THESE ARE ADULT SITES, NSFW]
What is adult content? I know parents who have no problem with their kids seeing porn. I know parents who give their kids a beer. I know parents who take their kids to violent movies. I used to know parents who would give their kids cigarettes. Most parents I know will disagree with their kids doing one of the above. I know songs that were played on the radio in 1960 that would not be allowed today, even though today we allow some swearing on the radio.
That's between parents and their local governments. Yes when I was a kid my mom let me watch whatever and go wherever. The parent in my example ultimately decides what a kid may or may not do which is in alignment with existing laws. If the parent is endangering their kid that is up to them and their government to sort out.<p>Point being, put the controls entirely into the hands of the device owner. Options can be to default to:<p>- Block everything by default unless header states otherwise.<p>- Block only sites that state they are adult.<p>- Do nothing. Obey the operator. <i>(Controls disabled on child accounts or make them an adult or otherwise unrestricted account on the device)</i>.<p>I think the options are just limited to our imagination.
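The three defaults above could be sketched as a tiny policy function. A rough illustration in Python, with made-up policy names and an invented "kid-safe" header value (this is not any real parental-control API):

```python
# Sketch of the three device-owner policies described above.
# Policy names ("strict", "blocklist", "off") and SAFE_LABEL are
# invented for illustration; only RTA_LABEL is the real RTA string.
RTA_LABEL = "RTA-5042-1996-1400-1577-RTA"
SAFE_LABEL = "no-adult-content"  # hypothetical opt-in "kid-safe" value

def should_block(policy: str, headers: dict) -> bool:
    """Decide whether a parentally-controlled client blocks a response."""
    rating = {k.lower(): v for k, v in headers.items()}.get("rating", "")
    if policy == "strict":      # block unless the site explicitly opts in as safe
        return SAFE_LABEL not in rating
    if policy == "blocklist":   # block only self-declared adult sites
        return RTA_LABEL in rating
    return False                # "off": obey the operator, controls disabled

# A blocklist-mode device blocks labeled sites and passes everything else;
# a strict-mode device blocks anything that doesn't declare itself safe.
print(should_block("blocklist", {"Rating": RTA_LABEL}))  # True
print(should_block("blocklist", {}))                     # False
print(should_block("strict", {}))                        # True
```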
> I know parents who have no problem with their kids seeing porn.<p>Surely you mean at least teenagers, and not literally children, right? Consider the prevalence of violence, racial stereotyping, and escalation of fetishism into degeneracy that clearly exists within this medium; what's the line that these parents draw? Are they making sure it's only something vanilla? Or is there no line whatsoever?
Then those parents can turn off their browser/client’s age protections. I think that’s actually a decent argument for the solution posed by this thread.
There are already laws defining this. Had to draw the line somewhere, and they did.
> I know parents who have no problem with their kids seeing porn.<p>I don't agree with showing actual children porn, but I also totally expect teenagers to find some way to get access to it in the age of the Internet.<p>Part of the challenge with this is cultural. Different places in the world think about sex, sexuality, and even the concept of what is a child differently. In the US, showing a woman's bare breasts to a person under 18 is generally considered wrong, and in many cases is illegal. In most of Europe it wouldn't even raise an eyebrow, because bare breasts are on television, sometimes even in commercials.<p>Setting aside for a moment the question of age verification and age limits, we cannot even agree in any sort of universal sense what even qualifies as porn or adult content, and at what age someone should be able to see it. There's a difference between a 7 year old and a 17 year old seeing the same type of content, and there's also a difference between a photographic nude and a video of people engaged in coitus.<p>The story is basically the same for everything else you listed.<p>These age verification laws in many ways are trying to use the most heavy-handed mechanism possible to enforce American cultural norms on the entire planet. That's clearly wrong to do. What the GP suggested with RTA headers, though, puts the control into the parents' hands, which is as it should be.
We don't need to care what France or China thinks when we make our laws that are about our own citizens. They do the same over there.<p>> These age verification laws in many ways are trying to use the most heavy-handed mechanism possible to enforce American cultural norms on the entire planet. That's clearly wrong to do.<p>Yes there's a chance our rules spill over there naturally, and I don't consider that wrong either.
I considered many of the same points you mentioned.<p>Though, one area I am still struggling to grasp is the harm that governments are trying to mitigate. If a child were to see inappropriate material, then what harm can truly arise? Also, why do governments need to enact such laws when the onus of protecting children should be on their parents?<p>I am not trying to start any kind of flame war, but I really cannot see any other basis for all this prohibition that is not somehow traceable back to Western religious beliefs and the societies born and molded from such beliefs.
I always love seeing pros and cons of whitelist vs blacklist sorts of strategies in different scenarios.
Yeah, and this is a good one. Blacklist is less likely to be ignored by parents. Both have risks of corps doing CYA strats, but less so with the blacklist. Whitelist has the advantage of being more feasible without an actual law, and also better matching how parenting works. Generally kids are given whitelists irl.
Back in the late 90s or so, there was a proposal to have sites voluntarily set an age header, which parents/employers/etc could use to block the site if they wished. People said it would never work, because adult sites had a financial incentive not to opt in to something that would reduce their own traffic.
The porn companies already set the RTA header. It was designed by an organisation funded by the porn companies.<p><a href="https://en.wikipedia.org/wiki/Association_of_Sites_Advocating_Child_Protection" rel="nofollow">https://en.wikipedia.org/wiki/Association_of_Sites_Advocatin...</a>
What, in the same way movie studios wouldn't comply with the Hays Code, or comic book publishers wouldn't comply with the CCA, or games publishers wouldn't comply with the ESRB? The financial incentive is to police yourself, because government policing is much, much worse.
There's a great relevant quip: "If you think that the cost of compliance is high, try noncompliance".
Sure, but the government doesn't police corporations in the US anymore. The Hays Code was before neoliberalism.
You’d think that one could simply block sites that don’t have the age header set on child computers. This may block kids from hobbyist sites that don’t bother to set their headers as kid-friendly, but commercial sites would surely set their headers properly. Over time sending proper rating headers would become more normalized if they were in common use.<p>This still isn’t perfect, as it creates an incentive for legislators to criminalize improper age header settings and legislate what is considered kid-appropriate. But it’s still better than this age verification crap.
Yes, that's how parental filters already work. They use a combination of rta tags and external data to block pages. Even works with Google safe search, firewall devices, etc. The rta ecosystem is already built out and viable.
What I am suggesting <i>could</i> address most of that. If they do not participate they get fined. The government loves to fine companies. This assumes they put enough <i>"teeth"</i> into a law that prevents companies from accepting fines as the cost of doing business. This would also require legislation that could block sites that operate from countries that do not cooperate with US laws. <i>Mandatory subscriptions to BGP AS path filters, CDN block-lists which already exist, etc...</i> People could still bypass such restrictions with a VPN but that would not apply to most small children. Sanctions and embargoes are always an option.
>fined<p>Exactly. If you’re hurting kids to make more money selling porn videos, straight to jail.<p>I’m glad there are solutions that won’t ruin the Internet. Now the uphill battle to convince our legislators (see: encryption & fundamentally technically ignorant calls for backdoors).<p>I’m here to die on this hill!
People were wrong.<p>We pay money online mostly through credit cards. Credit card transactions can be reversed. If children spend money on porn, those payments are likely to be reversed. This is really bad for the ability of the porn sites to continue receiving credit card payments, and continue making money.<p>An age header is a trivial step that can reduce the odds of the adult site receiving payments that later get reversed. Win-win.<p>But if someone is willing and able to pay, then the adult industry wants the choice of whether to access content to be up to them. If government tries to regulate them, they'll engage in malicious compliance - do the minimum to not be sued, in a way that they can still reach customers.<p>For example, Utah tried to institute age verification. The porn industry blocked all IP addresses from Utah. Business boomed for VPN companies in Utah. Everyone, including porn companies, knows that a lot of that is for porn. But if you show up with a Nevada IP address, the porn companies' position is, "You're in Nevada. Utah law doesn't apply." Even if the credit card has a Utah zip code.<p>If you live in Utah, and you're able to purchase a VPN, the porn companies want your money.
>But if someone is willing and able to pay<p>If someone is willing and able to pay, they have a source of money. If they aren't allowed to buy something, that control should be applied at the level where they get the money. If the child is using an adult's credit card, responsibility lies with the adult. If children need to have their own credit cards, the obvious point of control is the credit card itself.<p>But also, most porn is ad-supported, pirated or free. Directly paid content is a small fraction. So all of this is moot for porn.
There was a random comment here on HN a few days back saying that adult content has <i>lower</i> chargeback rates than everything else.<p>So I guess stop spreading hallucinated misinformation?
> Back in the late 90s or so, there was a proposal<p>This one: <a href="https://www.w3.org/PICS/" rel="nofollow">https://www.w3.org/PICS/</a>
PICS was very complicated and attempted to cover all possible "categories" of adult content. It was confusing and incomplete, and only a handful of sites voluntarily labelled themselves with it. RTA is one simple static header that any site operator could add in seconds, unless they get more complicated with it by dynamically adding it to individual videos <i>say, on Youtube</i>, which means in that case the server application would need to send that header for any video tagged as adult.<p>I added PICS to my forums but it was missing many categories of adult content. I ended up just selecting everything <i>as I could not predict what people might upload</i>, which made for a very long header.
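For the dynamic case, the server just has to attach the static header when the requested resource is tagged as adult. A rough sketch in Python (the tag lookup and path names are invented for illustration; nothing here is a real Youtube or RTA API):

```python
# Sketch: send the RTA header only for resources tagged as adult.
# ADULT_TAGGED stands in for whatever per-video metadata the real
# application keeps; the paths below are hypothetical.
RTA_LABEL = "RTA-5042-1996-1400-1577-RTA"
ADULT_TAGGED = {"/videos/123"}  # hypothetical set of adult-tagged paths

def build_headers(path: str) -> list:
    """Build response headers for a resource, labeling adult-tagged ones."""
    headers = [("Content-Type", "text/html")]
    if path in ADULT_TAGGED:
        # Only adult-tagged resources get the label; everything else is
        # served without it, avoiding a blanket site-wide header.
        headers.append(("Rating", RTA_LABEL))
    return headers

print(build_headers("/videos/123"))  # includes ("Rating", RTA_LABEL)
print(build_headers("/videos/456"))  # Content-Type only
```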
> unless they get more complicated with it by dynamically adding it to individual videos say, on Youtube<p>YT already does this. I never watch YT signed in, and I often see videos that require you to be logged in as the video is age restricted.
Agreed though in my example the point would be to set the header in the case the child is logged in but for whatever reason the site does not know their age. Instead of a third party site, a header is sent with the video tagged as adult that triggers parental controls if they are enabled by the device owner.
Servers could then infer users' ages by whether or not the client renders pages given those headers, no? See if secondary page requests (e.g. images, scripts) are made or not from a client? A bad actor could use this to glean age information from the client and see whether the person viewing the page is a small child. That should be scary.
I disagree. The ability to render a page could simply mean that parental controls were not enabled on the device. Some parents have assessed the situation and trust their children to be psychologically ready for adult situations. The client could be literally any age.<p>Today devices do not default to accounts being child accounts. Some day this may change and may require an initial administrator password or something to that effect, but this can evolve over time.
That's true. But leaking an age threshold is not the same as private companies being able to link all your online activities to a single legal person.
Adults could also use this to filter out unwanted content without needing to rely on outdated filter lists.
How are they supposed to fine sites out of their jurisdiction?
> fine sites not participating into oblivion.<p>That would also amount to compelled speech.
<i>That would also amount to compelled speech.</i><p>I disagree. The legal requirement to apply a warning label is a well known, understood and accepted process that is applied to a myriad of hazards to children <i>and adults</i>. As just one example businesses in some states, most notably California are compelled to add warning labels to foods and other products that could cause cancer.
That's not the best example, since the levels set for Prop 65 warnings are so low that the warnings are effectively useless; every single commercial building in CA now somehow causes cancer.
Do you believe using the Internet should require a license? Isn’t that what covers these product warning labels?
Clients could refuse to show content that does not have the headers set.<p>On the other hand, servers might choose to lie. After all, that is their free speech right.<p>So maybe you need some third-party vetting list. Of course, that one should be fully liable for any damages misclassification can cause... but someone would step up.
Being compelled to disclose facts is the good kind of compelled speech, though.
This doesn't address the wider array of age-verification related problems that people want to solve, like social media where age verification is needed to police interactions between users.
Such censorship shouldn't exist in the first place.
I could be misunderstanding the context but to me that sounds like a moderation issue assuming we even want small children on social media in the first place. There should probably be a dedicated child-safe social media site that limits what communication can take place for small children and has severe punishments for adults pretending to be children for the purposes of grooming.
Moderation is like law enforcement: it doesn't prevent crimes from happening, it just punishes the people they can catch. There exist severe punishments for the kinds of behavior I'm talking about, but unsurprisingly, this does not stop kids from being harmed and it doesn't <i>undo</i> it.<p>This isn't hypothetical, by the way. There are adults catfishing kids into producing CSAM [0], kidnapping and assaulting minors [1], [2], and in the most extreme case, there's a borderline cult of crazy young adults who terrorize people for fun [3].<p>It is a constant game of whack-a-mole by moderators/admins to keep this behavior out of online spaces where kids hang out.<p>I recognize that this is a "think of the children" argument, but indeed that's the point. The anonymous web was created without thinking about the children, just like how all social media was created without thinking about how it could be used to harm people. Age verification is the smallest step towards mitigating that harm.<p>Now I <i>disagree very strongly</i> with the laws proposed (and indeed, I've been writing/calling/talking with state reps about this locally, because I don't want my state's bill passed).
But the technical challenge needs to address the real problems that legislators are trying to go after.<p>[0] <a href="https://www.justice.gov/usao-wdnc/pr/discord-user-who-catfished-young-victims-sentenced-21-years-prison-production-child" rel="nofollow">https://www.justice.gov/usao-wdnc/pr/discord-user-who-catfis...</a><p>[1] <a href="https://www.nbcnews.com/news/us-news/kidnapping-roblox-rcna201795" rel="nofollow">https://www.nbcnews.com/news/us-news/kidnapping-roblox-rcna2...</a><p>[2] <a href="https://www.nbcmiami.com/news/local/nebraska-man-charged-with-kidnapping-florida-sisters-after-talking-to-them-on-roblox-sheriff/3758900/" rel="nofollow">https://www.nbcmiami.com/news/local/nebraska-man-charged-wit...</a><p>[3] <a href="https://www.fbi.gov/contact-us/field-offices/boston/news/open-letter-to-parents-guardians-and-caregivers" rel="nofollow">https://www.fbi.gov/contact-us/field-offices/boston/news/ope...</a>
I am only interested in protecting the majority of children, which I believe my proposal more than covers. There will always be exceptions. Today teens share porn, warez, pirated movies and music with small children in rated-G video games. I am not proposing anything for that. It is up to businesses to detect and block such things.<p>Point being, there will be a myriad of exceptions. I am not looking to address the exceptions. Those can be a game of whack-a-mole as they are today. I am proposing something that would prevent the vast majority of children from being exposed to the trash we today call social media, and of course also porn sites.
Look, please don't sideline/marginalize people by using the "whataboutism" term. That's being used more and more to silence dialog from people who see problems outside the focus of a specific area. It's important that we see ALL sides of the problem.
These aren't exceptions or whataboutism. It's the debate being had on the floors of state legislatures.<p>> It is up to businesses to detect and block such things.<p>Which is <i>exactly why</i> age verification legislation is hitting the books. No one (serious) cares about whether kids can download porn and R rated movies. Parental controls already exist if the threat model is preventing access to specific content that is able to report itself as _being_ that content.<p>Your proposal also doesn't address the other domain that these legislators are targeting, which is addictive content. They define specifically what classifies as an addictive stream and put the onus on service providers to assert that they're not delivering addictive streams of media to kids. An HTTP header isn't enough, because it's not about the content being shown to kids but the design patterns of how it's accessed.<p>Essentially: age verification isn't about porn. 18+ content stirs the pot a bit with the evangelical crowd but it's really not what people are worried about when it comes to controlling digital media access with age gates.
<i>Your proposal also doesn't address the other domain that these legislators are targeting, which is addictive content.</i><p>That sounds simple to me. If a type of content is addictive then require the RTA header.<p>- Adult content, or possible adult content.<p>- User contributed or generated content <i>(this covers most of social media)</i><p>- Site psychological profiles that are deemed addictive <i>(TikTok and their ilk)</i><p>Overall we are describing things that are harmful to the development of the minds of small children. If adults wish to avoid such content they can create a child account on their device for themselves to be excluded from this behavior as well. I use a child account in a couple of popular video games to avoid most of the trash talking and spam.
This is assuming children should be on social media at all, which I for one would debate.
How would this work with sites like YouTube which allow sharing of content, potentially not appropriate for children, but the content is generated by the site's users? Who will be fined for "violations"? And how would such a fine be levied, especially internationally?
I think that initially the onus would be on Youtube to figure this out. They have some very intelligent engineers. For example, if the Youtube client is receiving affiliate funds then they are easy to ID and fine. If they are random people then Youtube would have to share the violation data with the other countries and the US or UK would have to pressure those countries to participate in fining the end user. There could be financial incentives for the foreign country to participate. They can also just force label a video to be adult as they do today when enough people report it <i>which is admittedly not uniformly applied</i>.
There's an angle everyone misses.<p>Mandatory age surveillance everywhere is only going to result in massive, normalized ID fraud. You thought fake and stolen IDs were a problem before? You haven't seen anything yet.<p>And half of it will be from adults trying to avoid privacy invasion.
Not so sure about that. Handing an ID to a bouncer at a bar or similar is not logging anything. Mainly it's some big man whose gears you can see turning as he checks that the date is correct, plus a cursory glance to see if the photo matches. Sophisticated places might have a scanner that does whatever validation it does, but again, it's just another cursory check of the photo. Most of these people really don't care.<p>A tech company doing scans for validation could actually connect to a state database to verify the ID is legit and is not already being used for a different account. It would then be saved. I don't think real-world vs tech-world usage of fake IDs are the same at all.
>Not so sure about that. Handing an ID to a bouncer at a bar or similar is not logging anything. Mainly it's some big man that you can see gears turning to see if the date is correct and a cursory glance to see if the photo matches. Sophisticated places might have a scanner that does what ever validation it does, but again, it's just another cursory check of the photo. Most of these people really don't care.<p>Not necessarily true. There's a local strip club that scans and saves the scan to fight chargebacks and the like. It is definitely logging stuff. They've told me that they were going through the logs once and the bartender ended up googling my full name. We're cool and I didn't care, but what you said is not a blanket true statement. I trust a physical business that I can visit far more than some ID verification company that <i>is going to get hacked at some point.</i>
The tech companies care even less than the bouncers do.<p>They just want a plausible defence should it ever end up in court.
Tech companies care even less? How do you arrive at that conclusion? Tech companies log/store EVERYTHING. This would be an absolute boon for them, to be able to unequivocally assign to you all of the data they track about you. Suddenly, anonymous analytics become identified data and not just deanonymized data.
Logs of location data on people are already worth real money. The FBI has admitted to buying it. The companies that do age verification will absolutely be selling that data unless there are severe penalties for doing so, and what are the odds that the U.S. government passes a law making it illegal for the FBI to buy data?<p>That's bad enough if you're a U.S. citizen. If you're a non-U.S. citizen, now you're in the situation where all these U.S. social media sites are collecting personal information from you and reselling it, but you have no legal protection unless your government risks tariffs and invasion threats to pass legislation against it, which the U.S. will probably ignore anyways.<p>This might just be the impetus that finally drives enough users to non-U.S. social media platforms to get the snowball rolling downhill.
How does a tech company calling into a government database to verify your identity maintain your anonymity?
> Not so sure about that. Handing an ID to a bouncer at a bar or similar is not logging anything.<p>> Sophisticated places might have a scanner that does what ever validation it does, but again, it's just another cursory check of the photo.<p>Many/most bars do scan IDs now. Ostensibly it's to verify that it's real, but they do use those systems to keep a log of everyone who enters.
Well then it’s a good thing my fake id is from a state or foreign government without a checkable database
An ID system should be based on commercial banks. If you need to prove your identity, or anything else about yourself, just tell them to ask your bank, and the bank will ask you which information about yourself you are willing to share with whoever requested the confirmation.<p>When ID is tied to your bank account you guard it like you guard your bank account. Because it is the same thing. This will drastically lower the incentives to "share" your identity with anyone.<p>What's more, this system is already operational in many countries.
Plenty of European countries have eID without these issues.
You use eID when explicitly interacting with a govt entity or bank or otherwise similar institution because you have to and want to prove who you are. Yes, I do want to prove who I am when I file taxes, vote or want to start a business...<p>You don't use it when just browsing randomly on the internet. You don't use it to buy games on steam. Your computer isn't forced to store it because a law arbitrarily says so.
The government shouldn't be raising anyone's children; that's what parents are for. If you're a bad parent, your kids will get access to bad things and could become adult failures.<p>The future of your family and your legacy is up to you, not the government. We don't need age verification to restrict the social darwinism of raising children.
I wish I could upvote this comment harder. I started having unsupervised internet access (with the family computer in the living room) when I was 8. I'm a functional and successful adult because I trusted my parents. When my mother forbade me from registering on online forums I complied. When I read "fellation" in some minecraft chat (albeit somewhat later) I asked my mom what it was and understood that "sex" was something for the grown-ups and that I shouldn't worry about it. All because I would never even conceive that my parents wouldn't do what's best for me, and was unconditionally loved (even though I didn't know about this concept).<p>I would rather have parenting licenses than online age verification
Yeah I'm not sure why the govt or any other 3rd party needs to get involved. If I don't want my kids to look at porno online I will educate them on porn. If I don't trust my kids to listen to me then I will install an open source monitoring software and educate them on trust.<p>Letting the govt dictate what is age restricted is an easy way for the govt to control speech and narrative. For example, children's books that feature LGBT characters are being reclassified as adult [1], thus requiring additional verification. If I do/don't want my kids to read LGBT books, it's my decision. The govt should not dictate that. What else will the govt reclassify? Anything involving people of color?<p>[1] <a href="https://www.ala.org/bbooks/book-ban-data" rel="nofollow">https://www.ala.org/bbooks/book-ban-data</a>
I keep thinking we can't fight age verification by just saying "no" to it; we have to offer an alternative.<p>Maybe we need to turn it on its head and point out that if we want legislation to help with this, we could choose legislation that gives power to parents. Age verification laws put the power directly into the law itself; they're a blanket solution that gives all the power to legislators and prevents parents from making decisions about what's appropriate for their kids and what isn't.<p>If the market isn't delivering the level of parental controls people want, then sure, maybe legislation is needed. But it should be legislation that improves parental controls such that parents can make decisions about what's appropriate for their children.
Also, different kids mature at different rates. I wouldn't give a shit about my kid watching, say, an R rated movie if I understand they'll be able to handle it and understand it's fiction. If I had a 14 or 15 year old and they had a healthy understanding of sex and the dangers of porn, I wouldn't give a shit if they managed to see some poorly drawn tits online. Why? Because if you didn't intentionally seek out lewd content as a teenager you're either very very religious or a liar
Age verification can be achieved without destroying anonymity and privacy online using anonymous credential systems, but it has to be designed that way from the ground up, and no one pushing age verification is interested in preserving privacy.
This comes up in every thread, but the purpose of the laws is not to verify that someone can access an anonymous token. If we had a true anonymous token system then everyone would just share tokens around.<p>The real world analog would be if you could buy beer at the store with anyone's ID because they didn't make any effort to reasonably check that the ID was yours or discourage people from sharing or copying IDs.<p>The systems enforce identity checking because that's the only way age verification can be done without having some reason to discourage or detect credential sharing.<p>The retort that follows is always "Well it's not perfect. Nothing is perfect." The trap is convincing ourselves that a severely imperfect system would be accepted. What would really happen is that it would be the trojan horse to get everyone on board with age verification, then the laws would be changed to make them more strict.
Matthew Green talks about this in his blog on the subject: <a href="https://blog.cryptographyengineering.com/2026/03/02/anonymous-credentials-an-illustrated-primer/#:~:text=clone%20wars" rel="nofollow">https://blog.cryptographyengineering.com/2026/03/02/anonymou...</a><p>The two methods that seem feasible are making it hard to copy (putting it in the secure element in your phone, for example, which I don't love) or doing tokens that can only be used a limited number of times per day, like in: <a href="https://eprint.iacr.org/2006/454" rel="nofollow">https://eprint.iacr.org/2006/454</a>
Continuous age verification isn't possible, so you'll have to store some sort of proof of age somewhere, and that proof will always be sharable.<p>Let's say Facebook has verified my age somehow. I could share my Facebook login credentials, or the token that their authorization server sends back in response. You can create some hurdles to doing that, like requiring a second factor, but I can just share that too.<p>You might as well go down the route of accepting that possibility. These systems are never going to hold up in the face of a determined enough teenager.
Make it a duplication resistant hardware token that you can get for free then. The stakes just aren't high enough to worry about these kinds of edge cases.
Yeah, right. So the government is going to spend billions on “porn tokens”. That’s going to get through the legislature.<p>I’m sure there wouldn’t be a brisk illicit trade in these tokens either. Certainly no one would be incentivized to sell these tokens to teenagers for easy profit.
Further, "porn tokens" are the pointy end of the wedge, because it's easy to misconstrue any opposition as advocating for "kids should have access to porn, actually". The broad end that is being hammered towards is "kids aren't allowed on social media because it's harmful to them" AKA "free speech tokens".
The stakes just aren't high enough for us to implement any of this crap for the Internet in the first place. Let alone an entire government-administered hardware supply chain.
No it really can’t. Age verification requires identification.<p>Even if you could anonymously verify age to issue a “confirmed adult” credential, the whole chain of trust breaks down if one bad actor shares their anonymous credential and suddenly everyone is verifiably an adult.<p>The solution to that attack is naturally to have some kind of system for sites to report obviously-shared credentials. Which means tracking.
There's already authorities that know your age, so verifying age with them to get the credential isn't the part that needs to be anonymous. The issue is them knowing what you do with your credential, which anonymous credentials solves by making it impossible to track tokens back to the credential holder. As far as sharing, there are some possible mitigations.
Right. And the possible sharing mitigations generally amount to tracking.<p>This isn’t even getting to the issue that mandating government-issued credentials is the “foot in the door”. If you mandate the use of government creds for accessing websites, it’s an obvious step to turn around and demand that sites report credential use to “fight credential fraud”.
But likewise, someone can share (or have stolen) their ID<p><a href="https://news.ycombinator.com/item?id=47951372">https://news.ycombinator.com/item?id=47951372</a>
The destruction of privacy is the whole point.
This is something that's technologically feasible, but will never happen in practice.
Yes, but this is not popular among technologists (see the average sentiment towards age verification here). Legislators aren't going to build technology. This will happen if age verification actually becomes a widespread requirement. But until that point the prospective builders will be fighting the entire premise of such systems.
Apple and Google have already implemented private age verification.
They are interested - interested specifically in opposing it. These groups don't care about age verification - it is a trojan horse for censorship.
That is why we need easy to use age verification technology that protects identity. (I would prefer it implemented anonymous and indistinguishable "age or permissioned by parent" verification.)<p>Because it is genuinely useful for the nominal problem, and removes the cover of those using the nominal problem for their ulterior motivations.<p>Pointing out that they don't want the technology isn't a reason not to produce the technology. I am baffled at how often that argument is made.<p>(Not saying you are making it.)
The EU is. But their age verification process shows the design flaw: preserving privacy means the system can be easily circumvented with a MITM attack that bypasses the verification step.
Young people setting up a MITM and getting deeper into tech rather than consuming short-form content is something I'd appreciate as a nice bonus effect.<p>Of course the EU solution isn't perfect and there are bypasses (there always will be and always have been), but if verification must come, I'd rather have it that way than hand over more PII. I'd prefer the Age/RTA header and parental responsibility too.
AFAIK there are designs in the EU that respect privacy. There is a range of options being pushed around the world, and there's definitely a few of them which are more technically defensible than others.
And they continue to act like opposition just wants a wild west/don't care about kids, which is the oldest trick in the book. We just don't want "protect the kids" leveraged to tear up our rights.<p>It's addressing a real problem in a bad way.
After reading these comments, I don't want to hear any of you suggest that kids shouldn't be allowed to have unrestricted access to smartphones or social media ever again.<p>How did you think this was going to be enforced?
I’m in the UK and we recently got the Online Safety Act. We failed, this legislation is very popular with voters and not getting rolled back. Those that dislike it use a VPN and aren’t interested in fighting. I’d say most of the public here is exhausted with cost of living and internet freedom just isn’t relevant to their voting habits.<p>I grew up around a lot of the hacker ethos, open internet, Information Wants To Be Free etc… feels like a part of my identity is being stripped away by my government.
How are folks recommended to get involved? Contact your local Congress member? I feel this thread has a lot of passion but is missing concrete, actionable steps.
Heroes @ EFF have our guide (USA residents):<p><a href="https://www.eff.org/pages/help-us-fight-back#main-content" rel="nofollow">https://www.eff.org/pages/help-us-fight-back#main-content</a>
I've contacted my congressmen and I would also advocate for telling/explaining this to non-technical people you know. They either won't have heard of this or won't know what's bad about it.
Let them pry ID from our cold dead hands. If a site requires ID, it doesn't get my business.<p>Example, Discord wanted my ID to enable certain features, I declined, I now can't use those features, fine by me. If they started asking for ID anyway, I'd say no and see what happens, even if that means they lock me out entirely. There's no universe where they get my ID.
The Electronic Frontier Foundation set up a resource page for this:<p><a href="https://eff.org/age" rel="nofollow">https://eff.org/age</a><p>Their guide:<p><a href="https://www.eff.org/files/2026/04/09/condensed-age_verification_resource_guide.pdf" rel="nofollow">https://www.eff.org/files/2026/04/09/condensed-age_verificat...</a><p>Unfortunately, their most prominent call to action doesn't seem to address the various state-specific and non-US legislation (focusing on KOSA instead). Here it is:<p><a href="https://www.eff.org/pages/help-us-fight-back" rel="nofollow">https://www.eff.org/pages/help-us-fight-back</a>
I have long thought that all content (local and remote) should be properly labeled with metadata. Just like the cans of soup in the supermarket, you don't have to open it to find out if it has peanuts, lactose, or MSG in it; you should be able to filter data before accessing it.<p>You could define a set of 5 or six categories (nudity, sex, drugs, violence, etc.) and have a scale from 1 to 10 for each. Each content producer would rate each category according to defined criteria.<p>Then each user, or their parent, can set what their own acceptable level is. If you set your violence level at 4 then nothing level 5 or higher will load.
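The scheme above can be sketched in a few lines. This is a toy illustration of the idea, not any real labeling standard — the category names and the "unlabeled counts as most restrictive" rule are assumptions for the example:

```python
# Hypothetical content-labeling filter: content declares a 1-10 rating per
# category, and the client refuses to load anything above the user's limits.

CATEGORIES = ("nudity", "sex", "drugs", "violence", "language")

def is_allowed(content_labels: dict, user_limits: dict) -> bool:
    """Return True only if every declared rating is at or below the
    user's configured limit for that category."""
    for category in CATEGORIES:
        rating = content_labels.get(category, 10)  # unlabeled = most restrictive
        limit = user_limits.get(category, 10)      # no limit set = allow everything
        if rating > limit:
            return False
    return True

# Example: a parent caps violence at level 4.
limits = {"violence": 4}
print(is_allowed({"violence": 3}, limits))  # True
print(is_allowed({"violence": 5}, limits))  # False
print(is_allowed({}, limits))               # False: unlabeled content is blocked
```

Treating unlabeled content as maximally restricted is one design choice; the opposite default (allow by default, as with the RTA header) shifts the burden from honest publishers to dishonest ones.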
There are some showstoppers here, though. You have to either:<p>A) Change the laws in all countries (a non-starter), or
B) Restrict access to only countries that obey those laws<p>And Option B is a non-starter to the freedom crowd.<p>Not to mention all the other issues with labeling, such as:<p>A) How to label in an internationally-agreeable way
B) How to prevent abusive mislabeling<p>It's fraught, this path.
Age verification on Australian social media has loopholes. Underage influencers use an agency to manage their social media for them. So anyone with enough followers or money can continue using social media under the age of 16.<p>If you are going to implement age controls, you should implement a ban on underage influencers as well.
How could one protect the, call it one in 1 million… the speech of the (young) Greta Thunbergs, for example?<p>I bet there is a 15 year-old much smarter than me making political videos and I wouldn’t necessarily want them to be forced to stop. What if they’re on my “team”! ;) (I kid)<p>Recalling how we had lots of political debates in high school: if some of those kids made videos and got really popular, and the law made them stop, they would have been incentivized to vote $responsibleParty out.<p>(Socials bad for kids though maybe they could selfhost their monologues instead)
I believe every government disenfranchises young people because they are young.<p>It's not about intelligence, else a whole lot of people over the age of majority wouldn't pass either.<p>There's also no old-age cutoff for when mental faculties significantly decline.<p>Yeah, the voting majority keeps the 'under age' from voting. But at least in the USA, we have children as young as 11 being tried as adults, with none of the benefits.
You’re right that it shouldn’t be about intelligence! Overall definitely unfair.<p>—<p>After posting, I questioned whether political speech is special. Like should fifteen-year-olds who love film be able to make videos about them and get lots of followers… but I couldn’t be the thought police. So maybe-<p>The platform just has to be designed non-addictively.<p>Is this accurate?: In reality, Facebook was so powerful the regulators could never make them stop at any turn. Now that they finally got sued big time, we finally educated ourselves enough as constituents to raise enough of a stink to trigger straight up bans. (educated ourselves, or politicians legislate based on how bad the headlines are, or it was so egregious it genuinely ticked them off… …)
>Underage influencers<p>Anyone who has gone so far as to become an influencer is already a lost cause. No law could save them.
That’s not really a loophole though. We have child actors in Harry Potter.
>If you are going to implement age controls, you should implement a ban on underage influencers as well.<p>That just makes it even worse, why deprive the younger generation of one of the few remaining methods they have to make a decent income? We should be encouraging youth entrepreneurship, not making them spend even longer in classrooms learning things that LLMs will do better than them.
This is almost verbatim the same argument that people make in support of allowing child labor in factories.<p>Children do not need, nor are they entitled to, any kind of "freedom" to work for a living.
Less education, more peddling products on Instagram is... certainly an opinion that exists.
People under the age of 16 shouldn't be worried about "making a decent income". They should focus on school.<p>In the weekends they can stock shelves, deliver pizza, deliver newspapers, wash dishes, babysitting, feed animals or other typical jobs for children in the age range of 12 to 16.
Since when did being an influencer become 'one of the few remaining methods' to make a decent income?
I don't think it truly is, but I do think that the younger generations think it is.<p>My nieces and nephews really don't know what they are going to do in their futures because so much is uncertain right now.<p>If it feels like a longshot to expect normal 9-5 office jobs to be around in 5 years, and it's also a longshot being an influencer, then why not go for the influencer thing?
>age verification requires identity verification. Identity verification requires digital IDs. Digital IDs require everyone — not just children — to prove who they are before they can speak...<p>Not if it's done in a half arsed way. I'm in the UK and so far my age verification has involved doing a selfie with the webcam for Reddit. That's it. No one needing my name, ID number etc. (Apart from banks of course).<p>Really this is just the modern equivalent of putting the porn mags on the top shelf at the newsagent to stop the kids getting them. We don't need more.
A photo identifies you. This is the digital equivalent of having a photo taken of you upon entering the mag store, stored digitally forever, shared with government, and tied to every magazine you read and purchase.
> I'm in the UK and so far my age verification has involved doing a selfie with the webcam for Reddit. That's it. No one needing my name, ID number etc. (Apart from banks of course).<p>a convenient record of your face is all we need
<i>doing a selfie with the webcam</i><p>First, that's easily enough to identify you from biometric data, and it's naive to assume it won't be resold. Second, I kept getting asked for ID into my 40s because I looked young. People don't all age in the same way, so this system will fail for people at the tails of a normal distribution - some 15 year olds will easily pass for 25 and vice versa.
In the US, the plan is to require adults to take a picture of their state ID and upload it to a third party that provides age verification. It's not explicitly part of the proposed law but there are only a handful of companies who meet the qualifications to provide this service (id.me, Persona) and this is how they do it.<p>I believe if you are a "minor" then you can go the post-a-selfie route.
If someone wanted to be a martyr and just uploaded all their personal documents so they could be accessed by everyone, I wonder if an interesting court case might follow.<p>I could imagine it ending with a court ruling that people are responsible to protect their own personal documents which... yeah, that would muddy the waters in a world where every website expects to see your ID.
Imagine so if that was a pltr right
Or like someone who uses pltr
What could possibly go wrong? People are being paranoid for no reason!
> In the US, the plan is to require adults to take a picture of their state ID and upload it to a third party that provides age verification.<p>That's not just the plan - that's what's already legally required in many US states.<p>These laws were introduced by the explicitly religious right-wing groups like Exodus Cry and Morality in Media, as ways to <i>de facto</i> outlaw pornography (in their own words). They've since been laundered into the mainstream so the general public is unaware of the root cause.
Whether it can be done this way is beside the point. What matters is how regimes like ours in the US, which have demonstrated an interest in spying on their subjects, choose to regulate this over time.
Reddit is one thing but would you do the same for a porn site?
Now Persona has your picture and PII. Pray they never have a breach.
Does it not sound insane to you that you need to expose your biometrics to a corporation just to make anonymous posts on a forum?
We need a truly distributed point-to-point internet asap. Politicians going to do everything to limit free speech and free ideas in the name of protecting children while they already got all the powers to investigate and stop child abuse.<p><a href="https://meshtastic.org/" rel="nofollow">https://meshtastic.org/</a>
Just requiring it for social media companies is probably enough of a win to not have to pursue any further. We require age verification for sports betting and things like that, I'm not sure why we wouldn't do the same or some variation of that for other massively addicting products that we know as a matter of scientific study have a very bad impact on some number of kids.
It will spread to everywhere else if we allow it for social media. In Australia for example, mandatory age verification has already spread to video games.
Indeed, social media companies seem to be big proponents of the US legislation.<p><a href="https://www.politico.com/news/2025/09/13/california-advances-effort-to-check-kids-ages-online-amid-safety-concerns-00563005" rel="nofollow">https://www.politico.com/news/2025/09/13/california-advances...</a>
Big social media companies are likely <i>overjoyed</i> to be able to get discrete, government issued info of a person's full legal name, date of birth, residential address (as is printed on US drivers licenses) for advertising and demographic profile targeting purposes. And then be able to correlate it with their existing social media history/clicks/profile, browser fingerprinting, IP address, daily usage patterns, geolocation. It's a massive gift to them.
I doubt they need that to identify you. There are also lots of other problems like algorithmic manipulation. But also just stop using these junky websites. Everyone always complains about Meta doing this, TikTok doing that, and it's like if all they do is make you mad, stop being their user/customers?
Because it's not about children but requiring identification to speak online.
That's the cynical view, yes, but we can see educational standards and performance going down in the United States, and we have seen plenty of scientific and medical studies showing problems with children and more specifically teenagers using social media. I'm not one to want to limit someone's rights, but it seems like the trade-off here is in favor of requiring age verification at least for social media companies.<p>Separately, I still don't fully agree with the concerns raised regarding social media and identification for everyone. Bots, people who are online just stirring up trouble, &c. are causing pretty significant challenges and problems for society. If you spew a bunch of racist stuff, for example, I think people deserve to know who you are.<p>And you know we do this all the time. Folks want gun registries and things like that (and I agree, as a matter of practice, but not principle), so I'm not sure why we're OK with that form of requiring identification to exercise your rights and against this one, other than political priorities.
While we've been agonizing over Age Verification (real or planned), Greece has apparently introduced a ban on anonymity on social media. I'm not liking where the world is headed, but I have no idea how to push back against it.
Since it is so harmful to let children use social media, why aren't parents being put in prison for abuse and neglect when they let their children use social media? Why should everyone else have to suffer when it's parents that should be punished?<p>(it's because it's not about protecting children)
We simply don't need online age verification. It's not the state or private business' job to parent children. It's their parents' job.<p>This is not only unnecessary, but will with 100% certainty lead to negative downstream effects, whether via leaks or the state being able to punish people, once they're adults, for things that aren't crimes.<p>There's simply no good reason for it that outweighs the bad. What it really boils down to is that it's completely unnecessary.
In the age of AI I think it’s only necessary and inevitable to implement some of kind of internet ID system to stop the massive onslaught of AI generated fraud, malicious hacking, and spam. If age verification is a Trojan horse to erase online anonymity, so be it, I see that as a worthy goal.<p>Humans are inherently social, and social networks are based on trust. Trust is primarily a function of reputation, peer pressure, and legal consequences. Reputation requires tying behavior to a stable identity. Peer pressure only works when you’re not anonymous. For there to be legal consequences for bad behavior, we must identify bad actors. I don’t see why anyone would want to remove any of this. To protect some freelance journalists in Iran?<p>Also I don’t think that the “pro privacy” activists really understand the scale and severity of harm being done to children through the internet. I as a programmer who makes my living on the internet, would gladly support the shutting down of the whole internet if it would save the life of a single precious child.
The irony of posting ethical social reflection on X though...<p><a href="https://xcancel.com/GlennMeder/status/2049088498163216560" rel="nofollow">https://xcancel.com/GlennMeder/status/2049088498163216560</a>
There are lots of ways to implement identity verification while preserving privacy. It's actually a super interesting engineering problem. Estonia has an excellent model to build on. The government can maintain a "traditional" ID system based on documents and in-person verification, and provide you with a device similar to a YubiKey or Bitcoin hardware wallet that could be used to share specific, cryptographically verifiable claims with third parties, like your age, or even just a boolean "over 18", but also your name or other information if you choose, with a way to control the access and audit which parties have verified which claims with the govt.
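The "verifiable boolean claim" idea above can be sketched in miniature. This is deliberately simplified: a real system would use asymmetric signatures or zero-knowledge proofs rather than a shared HMAC key, and the claim format here is invented for the example — the point is only that a site can verify a minimal claim without ever seeing a name or birthdate:

```python
# Toy sketch: an issuer signs a bare "over_18" boolean; a relying site
# verifies the signature and learns nothing else about the holder.
import hmac, hashlib, json

ISSUER_KEY = b"issuer-signing-key"  # stand-in for the issuer's private key

def issue_claim(over_18: bool) -> dict:
    """Issuer signs a minimal claim: just a boolean, no identity attached."""
    claim = json.dumps({"over_18": over_18}).encode()
    sig = hmac.new(ISSUER_KEY, claim, hashlib.sha256).hexdigest()
    return {"claim": claim.decode(), "sig": sig}

def verify_claim(token: dict) -> bool:
    """A site checks the signature and reads only the boolean."""
    expected = hmac.new(ISSUER_KEY, token["claim"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["sig"]):
        return False  # forged or tampered claim
    return json.loads(token["claim"])["over_18"]

token = issue_claim(True)
print(verify_claim(token))  # True — the site learns only "over 18"
```

The hard part, as the surrounding thread notes, isn't the cryptography: it's that nothing in this scheme stops the holder from handing the token to someone else, and every mitigation for that tends to reintroduce tracking.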
If you don't use X/Twitter anymore, XCancel makes it possible to read threads when not logged in: <a href="https://xcancel.com/GlennMeder/status/2049088498163216560" rel="nofollow">https://xcancel.com/GlennMeder/status/2049088498163216560</a>
Nothing against Twitter, but I just don't feel like logging in, so that site makes it way easier to read this. Also it doesn't take like 900TiB of RAM to render.
If this does not work, use nitter instead
And the piece nobody is even considering...<p>Responsible parents don't have separate OS accounts for their children.
It is not like a digital control for id verification could be used anyway to control a narrative in war times right?
I've heard that we could use zero-knowledge ID proofs to show someone is of age without revealing any more but I don't think that's the plan and the demand for age restrictions doesn't feel like a grassroots effort of concerned parents. It feels like an NGO/bureaucrat driven law and I assume its purpose is to de-anonymize people on the internet.
I’d wager most people want more censorship of the internet.
Good: some commenters here realize it's an attack on privacy<p>Bad: some still entertain the idea that we should do age verification using some sort of crypto primitives<p>There is no reason for age verification at all.<p>I am from the goatse generation. Rotten.com. steakandcheese. Horrific stuff tbh, I mostly stayed away from it, and I didn't need a helicopter government to protect me from it.<p>The moment you accept the narrative that kids need to be protected from the Internet you have already lost.<p>You've already condemned those kids to a life of slavery. So much for protecting them.<p>What we need is not online verification, but a competent government that does its existing job well.<p>Who's been arrested over the Epstein files? Who is protecting those kids?<p>No one.<p>That same government wants to "protect" your kids by KYCing everyone.<p>Give me a break.
Nah, that already didn't work because corps are very good at creating network effects in children and will set up multi-billion-dollar businesses around them. And then the kids with protective parents become the weird ones in school. I'll die on the hill of curtailing this stuff in a privacy-preserving way.
> I'll die on the hill of curtailing this stuff in a privacy-preserving way.<p>At some point you'll realize the contradiction in not trusting these "multi-billion-dollar businesses" to the point that you are risking enslaving humanity and "dying on this hill" and yet at the same time trusting those same businesses to implement this dystopian system in a privacy-preserving way.<p>When that realization hits, it will be a loud sound, possibly heard by nearby telepaths.
Right? I especially don't understand where some of the "think of the children" attitude on porn sites comes from, as they for the most part already ask for your age, and if you didn't get some kind of amusement out of seeing tits as a teenager you're a liar.
It's a function of our society becoming more puritan and conservative in the past 10-15 years. This has been a slow burn.<p>We are back to perceiving viewing boobies as an existential threat to people. Currently, sexuality is being demonized all around, and sexual morality is once again becoming a currency in society.<p>I encourage people to talk to some Gen Z kids. They're much more puritan than millennials. They're focused on virginity and the moral superiority of monogamy. It's bizarre.
Enjoy dying on that hill then because without mandatory ID for potentially harmful services like social media, we will continue to descend further into the brainrot that many of you suffer from today.
It’s not online age verification. It’s online identity verification.<p>Would you vote for that? Prove who you are to visit this website? Would you do it to access Hacker News? Your newspaper?<p>Didn’t think so.
This seems hyperbolic as it's actually a long path between age verification to full digital identity tracking. But I agree that pushing the burden of verification to websites is ridiculous. Like the GDPR requirements where every webpage has an annoying consent modal, the verification and preferences should be controlled on the device you use to access these digital services. My browser should know and enforce my cookie preferences in a way that has a uniform user experience. Likewise, if I am a minor, my parent should provide me with a device (or profile on a device) which knows my age and can use that to inform online services of the age of the user rather than needing to go through a separate process for each service.
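The device-side idea above can be sketched as follows. The header name and age brackets here are invented for illustration — no such standard exists today — but it shows how a service could gate content on a parent-configured device declaration instead of collecting IDs itself:

```python
# Hypothetical gate: the parent-configured device (or browser profile) sends
# a declared age bracket with each request; the service checks it against
# the content's rating and never learns who the user actually is.

ADULT_BRACKETS = {"18+"}

def gate_request(headers: dict, content_rating: str) -> bool:
    """Allow adult-rated content only when the device declares an adult bracket."""
    if content_rating != "adult":
        return True  # general content is always allowed
    bracket = headers.get("X-Declared-Age-Bracket", "unknown")
    return bracket in ADULT_BRACKETS

print(gate_request({"X-Declared-Age-Bracket": "18+"}, "adult"))    # True
print(gate_request({"X-Declared-Age-Bracket": "13-17"}, "adult"))  # False
print(gate_request({}, "general"))                                 # True
```

This mirrors how browser-enforced cookie preferences would work: the policy lives once, on the device, rather than being re-verified per service.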
Kids will always find ways around regulation. Look at cigarettes, vapes, alcohol, weed; they will just get it from their dealers. Pornography? I expect something like: download a torrent, get it from a classmate, share hard drives in school, get it through an older brother.
>Kids will always find ways around regulation.<p>And porn companies should always be held responsible for not doing their due diligence and freely distributing porn to minors. Which is already illegal in the US and most places.
It’s just defence in depth and wholly appropriate for it to be imperfect
And bootleggers will always bootleg, and smugglers will always smuggle. For that matter, murderers will find ways to murder.<p>Shall we just abolish all laws? None of them have any effect whatsoever, if they are even slightly imperfect... by your rule.
Reminder: age verification laws are not being passed to protect anyone but social media companies. On top of that, they will be used for a massive surveillance state. This is the DMCA of the 2020s, but far worse.
I agree, doxxing yourself to some shady gray-market adjacent data broker is not acceptable as age verification, and age verification was safer using the honor system as before. But for some communities, especially social media communities, some kind of verification is better than none, otherwise what's to stop them from being overwhelmed with alt accounts that are used simply for harassment or other targeted objectives?<p>People should not be able to misrepresent themselves on the internet, it may have been safe in low volumes but it is scary now and will be outright dangerous as a modality in the hands of AI agents. If you think teen mental health is bad now, wait until social media campaign capabilities previously only available to nation states fall into the hands of ordinary school bullies.<p>Maybe age verification isn't the way to mitigate this obvious risk, but there has to be something that can be done to stop rampant sockpuppeting.
I can't agree with this enough and yet I think the long term danger is masked by the current problems for the majority of voters. I'm not hopeful.
This whole problem is basically parents admitting they can't parent.
Hopefully this will give yet another push towards decentralized, open source services. Platforms where no one and everyone is responsible and the state does not get to decide the rules.
I have a fair bit of fatalism on this one.<p>Saw it with the UK laws. It just gets rammed through. Whether it’s ignorance, malice, hidden force, a desire for surveillance state, genuine concern for children - doesn’t matter, the forces in favour are substantially more and seemingly motivated to try over and over until it sticks.<p>Much like brexit or for that matter trump reelection I just don’t have much faith in wisdom of the democratic collective consensus anymore and I don’t think it’ll get any better in an AI misinformation echo chamber world. Onwards into dystopia<p>Exceeding gloomy take I know
Contacting my representatives is about as effective as making a silent wish. Whenever I've done it, I'll either get no response, or a boilerplate reply which basically says "I'm doing this, go fuck yourself". Then I'll be added to their spam list. The truth is that my reps don't represent me and they're going to do what they want regardless. After all, I'm not the one backing the truck of money up to their front door.
So many pieces of law are flawed today, and the reason why should be concerning to all.<p>I find it disgusting that most laws today are based on creating a perfect world instead of addressing harms in the least intrusive way. There is no balancing of interests, even when they state that there is. Every side complains about the others and potential future abuses, except when it is their plan. Nobody tries to design the law with a devil's advocate perspective to make it as effective as reasonably possible (not perfect!) while limiting overreach.<p>The real problem is the pursuit of perfection. A perfect world does not exist, nor will it ever (laws of nature, physics, etc). One person's view of perfect is not the same as another's. We've lost the capacity for legislative empathy through our impatience and self-importance. It's no longer about restricting government and providing people with rights. It's about how we can use government to shove the desires of a majority or plurality onto the total population.<p>There are ways to do age verification with reasonable anonymity, but they aren't perfect and can create underground markets (see gaming in China). At a certain point, we need to step back and put the responsibilities where they belong - with parents - instead of causing massive negative externalities on everyone else.<p>Yeah, yeah, but the children...
Just a reminder that YC funds many of the companies pushing these laws and building the surveillance state.
I'm not a fan of online age verification, but this is completely absurd:<p>> <i>Every website. Every platform. Every app. Every service. Your children will never know what it was like to think freely online. They will never explore ideas anonymously. They will never question authority without it being logged in their permanent profile. They will never speak freely without fear that every word will be used...</i><p>No. Nobody's proposing you need to verify your identity to read articles on the New York Times or Wikipedia or political blogs. And nobody is proposing you need to verify your identity to leave comments on a news article or blog post. And any proposed law around that would run into <i>massive</i> first-amendment constitutional hurdles. It would be struck down easily.<p>There's always going to be a spectrum of websites that range from open and anonymous (like news and political discussion) to strongly identity-verified (like online banking). I don't like online age verification for particular sites, but at the same time I think it's completely misleading to see it as this slippery slope to a world where anonymous speech no longer exists.<p>We can have reasoned arguments around how people's usage of sites is tracked and how to prevent that, without making this about free speech and "the hill to die on".
We've spent the past three decades trying to invent ways to deduce identity and build profiles of what would otherwise be anonymous users. When the government steps in and compels people to formally identify themselves by their government names, what would you expect these companies to do? They're not gonna say "no thanks."
Ok, maybe that’s a silly thought, but… couldn’t this be provided by Apple/Google anonymously?<p>When you set up kids devices in your family they ask you to provide the birthday anyway.<p>I’m keen to see the arguments against this.
Usually fear is the realm of governments. Modern republics are basically legitimized around fears of something terrible happening - it can be communism, narcotics, the ozone hole, coronavirus, terrorists, immigration, globalization, unrecycled waste or the greenhouse effect.<p>Private entities being frontrunners in AI fear either means that these companies have too much unchecked power or that they are covert instruments of governments.
ironically i think we need more, and stronger, local social networks that have high identity validation and are "safe" spaces for the plebs, so that the perceived "threat level" from the free internet gets lower. basically hide the real internet a bit behind a small rock.
it's a slippery slope, but it might be the better strategy unless some democratic societies manage to put more modern "freedom guarantees" into their constitutions.
For a forum that supposedly consists of hackers and tech-savvy people, this number of comments supporting age verification is concerning.<p>The author has said a lot about what kind of future awaits with mass surveillance and AI, but I believe it’s not enough. Technofascism is not that far away.
There’s age verification when you buy a gun, not on the gun’s handle.<p>Kids should not be able/allowed to buy/use devices that are dangerous for them.<p>But the device itself should not have to care about the fallacious idea that “a kid might be able to use it”
It's worth pointing out that full digital identity verification ("doxxing" yourself to an untrustworthy, unauditable, legally unconstrained private company) is NOT the only way to verify adulthood. We have had a system in place which enables adulthood validation without enabling digital surveillance infrastructure, with a degree of false negative risk that society has deemed acceptable for nearly 100 years now. This idea is not my own, but I'm happy to share a reasonable proposal for it.<p>The Cashier Standard – Age Verification Without Surveillance<p><a href="https://news.ycombinator.com/item?id=47809795">https://news.ycombinator.com/item?id=47809795</a><p><a href="https://claude.ai/public/artifacts/7fe74381-a683-4f49-9c2b-171cba4b3e59" rel="nofollow">https://claude.ai/public/artifacts/7fe74381-a683-4f49-9c2b-1...</a>
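To make the one-time-code idea concrete - purely an illustrative sketch, not the linked proposal's actual design (all names here are hypothetical) - a verifier could store only hashes of issued codes and accept each one exactly once, so no identity ever enters the system:

```python
import hashlib
import secrets

# In-memory store standing in for the issuer/verifier database.
# Only SHA-256 digests of unredeemed codes are kept - never the codes
# themselves, and never any identity information.
_unredeemed: set[str] = set()

def issue_code() -> str:
    """Cashier side: hand over a one-time code after glancing at a physical ID."""
    code = secrets.token_urlsafe(16)
    _unredeemed.add(hashlib.sha256(code.encode()).hexdigest())
    return code

def redeem(code: str) -> bool:
    """Site side: each code verifies age exactly once, then is gone."""
    digest = hashlib.sha256(code.encode()).hexdigest()
    if digest in _unredeemed:
        _unredeemed.discard(digest)
        return True
    return False

code = issue_code()
print(redeem(code))   # True: first use succeeds
print(redeem(code))   # False: the code is single-use
```

Since only digests are stored, even a breach of the issuer's database leaks nothing about who bought which code.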
The "cashier standard" you advocate for has already crept toward centralized state tracking in places like Utah. When you go to a restaurant and order a drink, the staff are required to take your ID to the back and scan it for verification. The scanned data is also compared with a state database of DUI offenders. It's not clear whether the database is stored on site, or if that data goes out on the wire for the check; presumably the latter. Scanned data is also stored for up to 7 days by the restaurant, and it's easy to imagine further creep upping that storage bound.
This is not the case in most of the country. Utah is largely influenced by a Mormon / LDS culture that expresses heavy opposition to drinking. I am clearly not proposing that the cards be scanned Utah style, I am proposing that they be glanced at by a cashier, everywhere else style.
More and more places I go, in states besides Utah, try to scan IDs when I purchase alcohol.
Again, the proposal isn't for a system which requires scanning of IDs, it's for a system where the cashier glances at the ID. You're arguing against a strawman. You may argue that the system proposed could evolve into the system you're describing, but still, you're arguing against a hypothetical future fiction. If we're going to be arguing about what the proposal might evolve into in the future, we might as well be arguing about what we should be doing when aliens arrive, since they might arrive in the future, too.
> we might as well be arguing about what we should be doing when aliens arrive, since they might arrive in the future, too.<p>Did aliens land in multiple states already? Strawman deflections aside, scanning is the natural evolution and has already happened across multiple kinds of exchange (money markers, various ids, various phone apps, etc). Government issue has a benefit of an independent verification system. It's super expensive for various government agencies to integrate into businesses. Constituents and businesses don't want that, leading to a much more comfortable adversarial relationship, imo.
California grocery stores scan ID too
How does this prevent a secondary market for one-time codes? I, as an adult, can just get a code and sell it to someone else.
Stings that catch adults reselling codes.<p>It doesn't have to be perfect.
It doesn't prevent it, it just disincentivizes it. As an adult, you can also go buy a beer and sell it to a minor. That said, mandatory age verification with photo ID upload and facial scans doesn't prevent workarounds either - kids use their parents' photo ID and pass facial scans with a variety of techniques, too.<p>Nobody who understands how adversarial systems like this work is seriously expecting a 100% flawless performance of blocking every single minor and accepting every single adult, the question is how much risk is acceptable, and the risks posed by this system are acceptable for alcohol, cigarettes, and other adult items that can arguably pose much more acute risk of serious injury or bodily harm to kids.
This type of system is a horrible idea for the following reasons:<p>1) the cards can just be re-sold, which creates a black market <i>and</i> defeats the "cashier physically saw the person buying the card" angle<p>2) it nickel-and-dimes people for simply browsing the internet (verification-fee dystopia, anyone?)<p>3) related to #2, it creates winners in the private sector since presumably you need central authorities handing out these codes<p>I abhor the idea of digital ID verification, but if we're going to do it, let's not create a web of new problems while we're at it.
Is it even theoretically possible to have bearer anonymity and no reselling option at the same time?
With digital tokens being generated by a user (the seller) on demand, you could have a bond system where the seller places something costly on the line, that the buyer can choose to destroy or obtain. For instance, if Alice gives her age token to Bob, Bob can (if he is a troll) invalidate the token in a way that requires Alice to go to a physical location to reset her ID.<p>I imagine this could be done with appropriate zero-knowledge measures so that the combination of Alice's age token and Bob's private key creates a capability to exercise the option, but without the service (e.g. a social media site) knowing that the token belongs to Alice, and without the ID provider (e.g. the state) knowing that Bob was the one who exercised it.<p>While honest customers have no reason to make use of this option, if Alice blindly sells her tokens to anybody willing to pay, there's bound to be <i>some</i> trolls out there who will do it just for the laughs.<p>This is far from a perfect system since a dishonest site could also make use of the option. But it <i>theoretically</i> works without revealing anybody's identity (unless the option is used, and then only if the service and the ID provider collude).
First - Alcohol and cigarettes can just be resold too. The black market for them is effectively zero because the consequences for giving them to kids are severe and the room for meaningful profit is close to zero, same applies here.<p>Second - The codes would be priced on the order of magnitude of pennies per verification - think 10 cents or less, accessible even to low / fixed income folks without really making a dent in their budget.<p>Third - the proposal explicitly mentions a nonprofit running it as an option, and the idea would be that law codifies the method to be approved, not a specific vendor, so competitive markets could emerge, too. Would you argue that restrictions on the sale of alcohol are creating artificial winners in the private sector of alcohol manufacturing?
'consequences for giving them to kids are severe and the room for meaningful profit is close to zero, same applies here.'<p>I don't think it applies; the difference is that codes are digital and can be sold over the internet, anonymously, in a scalable manner.<p>I still like this solution because all the solutions I've seen have flaws, and this one being so easy to explain makes it great to campaign for.
You're doing a huge logical jump in your first point. Alcohol and cigarettes are physical goods, digital ID is not, but you're proposing a system that <i>turns it into</i> a physical problem. I'm merely pointing out that's what you're doing and the issues with it.<p>Second, it doesn't matter what it costs, it's inconvenient <i>and</i> I already spent time (possibly money too) obtaining a government ID... on top of a theoretical mandate that says I need to show the ID on a bunch of websites.<p>Third, I'm not sure I follow your point on alcohol restrictions creating winners? The non-profit idea could potentially be good, but I'm not hopeful that real world legislation would be crafted that way.<p>EDIT: also more on #1 and "severe consequences" for re-selling... yes that's exactly what we want to avoid: creating more reasons to put people in prison and a bigger burden on law enforcement and the court system.
Online age verification is an example of the Motte-and-bailey fallacy (<a href="https://en.wikipedia.org/wiki/Motte-and-bailey_fallacy" rel="nofollow">https://en.wikipedia.org/wiki/Motte-and-bailey_fallacy</a>, <a href="https://slatestarcodex.com/2014/11/03/all-in-all-another-brick-in-the-motte/" rel="nofollow">https://slatestarcodex.com/2014/11/03/all-in-all-another-bri...</a>).<p>It is easy to defend on the motte hill (protection of children, protection against abuse and heinous crimes), and easy to expand and farm on the bailey (universal surveillance, mass data collection, and the erosion of privacy).
"But age verification requires identity verification. Identity verification requires digital IDs."<p>Um, no? iOS does age verification just with your credit card. I never saw people get all that upset about giving their credit card info to their phone wallet app, or even to a bunch of websites.
Agree
If it was the hill to die on, then we should have done a better job of stopping pervasive fraud, abuse and harm to <i>everyone</i>, so that there wouldn't have been a need to bring in age verification.<p>The reason we are up shit creek is because large companies didn't want to spend 2-5% of profits on decent editorial controls to stop bad actors making money from bending societal red lines (ie pile-ons, snuff videos, the spectrum of grift, the culture of abusing the "other side").<p>They also didn't want to stop the "viral" factor that allows their networks to grow so fucking fast.<p>This isn't really about freedom of speech, it's about large media companies not wanting to take responsibility for their own shit.<p>meta <i>desperately</i> want kids to sign up. There are no penalties for them pushing shit on them. If an FCC-registered corp had done half the shit facebook did, they'd have been kicked off air and restructured.<p>So frankly it's too fucking late. Meta, google and tiktok will still find ways to push low-quality rage bait to all of us, and divide us all for advertising revenue.
There is a <i>sudden</i> <i>concerted</i> international push for online age verification, and we do not know where this push originates from. That is the scariest thing about it.
It's not _completely_ shrouded in mystery - it started after Facebook got slapped by the EU for irresponsible handling of underage users, and since began a heavily funded lobbying push to drag competitors down with them. <a href="https://github.com/upper-up/meta-lobbying-and-other-findings/tree/main" rel="nofollow">https://github.com/upper-up/meta-lobbying-and-other-findings...</a><p>Of course, it's probably also been coopted by the neverending stream of nanny-state political power grabs in both the US and EU.
Politicians look to each other. There’s nothing new in that.
It's true for a lot of things in Western countries.<p>Evident when the fight against "hate" was suddenly everywhere, and also during covid.
Age Verification is very offensive. It assumes guilt and creates risk to no societal benefit. <a href="https://en.wikipedia.org/wiki/Right_to_privacy" rel="nofollow">https://en.wikipedia.org/wiki/Right_to_privacy</a>
Alternative take: the fact that twitter / facebook / whatever allow arbitrary, unverified posting enables large-scale misinformation that led to, among other things, Russia's manipulation of the US electorate, ultimately impacting the presidential election.<p>This one-sided view has some good points, but for goodness sake, don't pretend that the alternative has no downsides.
You'll need to explain how age verification fixes that.
Really? How many Electoral College votes did Russia's clumsy attempt at manipulation actually change? Please quantify that for us based on hard evidence.
Playing devil's advocate outside of debate club only serves to promote the devil's point of view.<p>State your well reasoned opinion where you have considered the facts. Or just say you are in support of this openly.
Disagreed. I'm against invasive age verification methods, but allowing inaccurate expectations to proliferate often creates a bubble that pops, causing many to rebound to the other side, even if it's objectively worse. I much prefer to keep the tradeoffs clear, as it prevents betrayed expectations while still showcasing the unacceptable downsides.
I'm firmly against Internet arguers presenting an opposing position under the guise of it not being their actual opinion so they can run away from debate. Devil's advocate is a technique that should be used in school to learn how to make stronger arguments.<p>All it does is covertly promote the idea by presenting it as reasonable and on an equal level to the other idea, while at the same time being able to shut down debate by pretending they don't actually think that.<p>Anybody can say something like "but what about the good side of the African slave trade", but they will be debated and the argument shut down if they present it as their actual argument and engage in good faith with the comments. The devil's advocate technique is an extremely useful way to argue in bad faith, anonymously, on the Internet.<p>Critique of the author's style is fine. An opposing view should honestly be presented as such.
Why is it <i>always</i> “think of the children” used to abrogate the rights of adults?
Because it's very easy for the creeps already thinking of your children to paint those rejecting this type of law as people who want to see children hurt.<p>Regardless of how stupid this argument is, rags will always pounce on it.<p>This is just a dirty trick by the creeps to make the resistance harder.
I think it's because, without further context, it's so hard to argue against. Pretty much every person in every culture cares deeply about their children. So if you can successfully hitch your position to that idea, it too becomes hard to argue against.<p>It's the same with tough on crime. "What, you <i>want</i> criminals to keep getting away with it?!"
> Pretty much every person in every culture cares deeply about their children.<p>I would substitute "superficially" for "deeply". Like, if my parents had found some way to prohibit porn when I was an adolescent, I wouldn't say they cared deeply about me. I would say they were misguided and authoritarian. The "care deeply" idea you are putting forward is just trying to instill whatever the societal norm currently is into the young.
Because adults remain children. As in, their parents’ kids and therefore property. [edit: I should mention also property of the state beyond that] It’s less explicit in the US I guess, but in some places that’s very blunt - if you don’t support your parents enough you can be sued for abuse. And there are situations in the US where an adult has been declared too irresponsible and forced into conversion camps by their parents. It’s insane, yes, and if you’re lucky enough this might be entirely invisible to you. But if you’re gay or trans or autistic and get a bit unlucky this can become a very harsh reality.<p>Protect the children refers to a type of property, not a type of human.
I agree. I don't call it "age verification" though - it is age sniffing. And it has nothing to do with children - that is the lie.<p>What is fascinating is to see how governments ALL fall for it. There is zero resistance. This is fascinating to me. It shows how little real effort is necessary once you have the lobbyists in place. Kind of scary to witness too.<p>It is an apartheid system. All apartheid slavery systems will eventually die, so age sniffing will die too. But it will most likely be a long fight as more and more money will be invested by crazy corporations such as Palantir and others.<p>The whole "debate" is already not logical by the way. Let's for a moment assume the "but but but the kids!" is a real argument rather than a strawman argument, which it is. Ok so ... I am a "concerned parent", for the sake of discussion. I have three young kids. I am not a tech nerd. The kids see "unfitting content" on the antisocial media such as facebook and what not. So, what do I do? Well ... they have a smartphone? Aha, so ... I am not so concerned? Having no smartphone is no option? Ok so ... I say they can have a smartphone, but they may not use antisocial media. Ok. First - in any free society, is it acceptable that this kind of censorship is done on ALL kids? What if I, as a parent, do not agree with this? Well, tough luck - the laws force you into the age sniffing routine suddenly. But, even those parents who want the state to act as totalitarian: why would I want to hand over control to ANY politician for that matter? That makes no sense to me. I am aware that some parents may think differently, but do all parents think like that, even IF they buy into the "we protect the children" lie? I don't want ANY information from ANY of my computers to go into private hands here. 
So the whole argument already makes zero sense from the get-go.<p>Of course, those who know how things work know that this is the build-up towards identifying everyone on the world wide web at all times AND making access to information conditional, e. g. if the state does not know you, you can not access information. Aka a passport system for the www. Built right into the operating system too. Windows already complied. MacOSX too. The battle for Linux will be interesting; it may be some hybrid situation, like systemd. And the systemd distributions will all succumb to age sniffing, courtesy of Poettering: "this is really harmless if we store your age in the database, just trust me".
We now know all the arguments. No more need to persuade anyone.<p>People will show what they are made of.
An attestation-like system to detect humanity at time of posting is absolutely useful for online spaces in the era of AI slop.<p>The writing style of the author is very annoying.
And people should be free to pick and choose whether they want to use sites that do that or not. Whatever hacker news does seems to be fine for me, and I did not need to verify my ID in any way (even though it's very easy to figure out who I am from this profile)
Until people hit "attest" and then copy the text from ChatGPT.
It could be done with anonymous credentials though. No tracing to who the human is.
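For a sense of how anonymous credentials can work mechanically: blind signatures are one classic building block for unlinkable attestations. Below is a textbook RSA blind-signature round trip - toy parameters, hopelessly insecure at this size, and illustrating the general technique rather than any specific deployed scheme: the issuer attests without ever seeing the token, so later use of the token cannot be traced back to issuance.

```python
import hashlib
from math import gcd

# Toy RSA key - far too small for real use, illustration only.
p, q = 1009, 1013
n, phi = p * q, (p - 1) * (q - 1)
e = 65537
d = pow(e, -1, phi)          # private exponent

def _digest(msg: bytes) -> int:
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

def blind(msg: bytes, r: int) -> int:
    """Client blinds its attestation request before showing it to the issuer."""
    return (_digest(msg) * pow(r, e, n)) % n

def sign_blinded(blinded: int) -> int:
    """Issuer signs without ever seeing the underlying message."""
    return pow(blinded, d, n)

def unblind(blind_sig: int, r: int) -> int:
    """Client strips the blinding factor, recovering a normal signature."""
    return (blind_sig * pow(r, -1, n)) % n

def verify(msg: bytes, sig: int) -> bool:
    """Any service can check the signature, yet cannot link it to issuance."""
    return pow(sig, e, n) == _digest(msg)

r = 12345                    # blinding factor, must be coprime to n
assert gcd(r, n) == 1
token = b"human-attestation-nonce"
sig = unblind(sign_blinded(blind(token, r)), r)
print(verify(token, sig))    # True: valid attestation, issuer never saw the token
```

Real systems layer rate limits and one-time-use tokens on top, so an attestation proves "a human was here" without identifying which one.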
Very unpopular opinion here on HN: one can't stop it without direct physical action against those who push it.
[dead]
[flagged]
[flagged]
Basically every article on this site has a comment complaining that the article is AI. Who knows. Maybe “complaining about AI” is the new AI way of fitting in.
I just flag all the AI complaints. Perfect example of the guideline about “don’t complain about tangential issues” or whatever the wording is.
This feels different to me than complaining about the font or whatever. I don’t want to read or comment on anything not written by a human. I also agree with GP here that using AI instead of your own words has bearing on the content itself, insofar as it’s a signal that the author doesn’t care enough to write it themself.<p>As a corollary, I also want to know if a project posted here is predominantly vibe-coded, since that to me is a signal that it may be of lower quality, have fewer edge cases worked out, and is more likely to be abandoned in the near future.
Caring enough to put in the effort of thinking and writing is not a "tangential issue". Laziness is a substantive defect, and sadly, I think that kelseyfrog has clocked this one correctly. There are borderline cases, but the cadence of this tweet thread is unmistakeable.<p>We don't have to live like this. We don't have to accept it. We don't have to upvote it even if we agree (as I do) with the explicit point. The medium is the message, and the message that this poster is putting out here is that online age verification isn't actually worth getting that worked up about.
AI-generated content being passed off as human-written is <i>not</i> a tangential issue. HN staff agree, because posting AI generated comments is explicitly forbidden. I suspect the only reason this isn't extended to submissions is because pretty much all articles about AI are also written by AI, and effectively forbidding positive discussion of AI is obviously against the interests of a VC firm.<p>HN's guidelines were written under the assumption that submitted articles about [thing] would be written by people who care about [thing] and made a good faith effort to write something interesting about [thing], so it's only fair that any comments would be expected to respect the author's effort and discuss the article in equally good faith.<p>This assumption completely falls apart when you add AI generated submissions into the mix. If the "author" didn't care and thus couldn't be bothered to write about [thing] themselves, choosing to instead outsource that work to an LLM while they supposedly did something they deemed more valuable with the time they would've spent writing, then why should commenters be expected to dedicate more effort into their discussion of the article than the author dedicated to writing it? It's a bit unfair towards the commenters, don't you think?
> Basically every article on this site has a comment complaining that the article is AI<p>Just a hunch, but it may have something to with the fact that basically every article on this site is AI.
No, it's because authentic writing on HN has been drowned out in an ocean of slop, in such quantities that calling it out is becoming an exercise in futility
There was an article yesterday where people were complaining about "AI smell", the author showed up in comments to state clearly and unequivocally that he didn't use AI to write it, and people wouldn't believe him.<p>AI didn't get its tropes and tics ex nihilo. Some people just write in a way that "smells" like AI to others.
If everyone is complaining about the smell of shit, maybe it's because there's shit everywhere.
You should add an AI clause to that license agreement in your profile.
How would it read?
Holy crap, I only just saw your license agreement. Oh no. We've argued on here before, although this time we're in agreement. Please don't use this hidden license to dox me!<p>(It's an unenforceable joke, right? There's no way I'm bound by anything here other than maybe the site ToS.)
Seriously, who cares this much about the internet? I for one will be happy if my kids spend less time online than me. Similar to what a smoker would feel seeing cigarettes finally be banned, I suppose.<p>It's also ironic that this guy is so adamant about protecting the children on xitter. It's like preaching against racism on 4chan.
> who cares this much about the internet?<p>The Internet pretty much runs our lives now, so: I do.<p>Lots of things require having Internet access, an email address, being able to visit a website, coordinate with others on a Facebook page for a local group, etc.<p>No one requires me to buy a pack of cigarettes to register for classes, pay bills, submit something to the government, etc.
The argument being made seems plausible but it’s complete fear mongering. The surveillance mechanisms already exist and are in play, and people can be identified in endless ways.<p>States have broad power to do what is being feared in the thread and haven’t already, and to think that they’re waiting for this final piece of the puzzle to enact some insane regime is laughable. They could do that right now without the internet at all.<p>Social media is probably not healthy and kids should probably not be on social media. Age verification and age limits for social media will be a good thing for kids.<p>Instead of fear mongering, finding a middle ground - like governments adding some rules and protections on how this information or system is used - is probably a better response.<p>I might be in the minority, but I think incorporating an identity layer into the internet itself, with the right protections for users, should happen. It should have happened at the beginning of the net, and its absence is probably a result of a lack of foresight by the creators of ARPANET.
What I'm hearing you say:<p>> Our freedom is already being eroded, saying that it is being eroded more is just fear mongering.<p>> They want to hurt you, instead of fear mongering, find a middle ground where they're hurting you differently.
Social media is not a thing at all. Social media is a website. Websites are not healthy or unhealthy. Food is healthy or unhealthy. Websites are light and potentially sound, not something with health effects.
Go look directly at the sun without any protection, or go listen to sounds at 120dB, if you want to test your hypothesis that light and sound can't be unhealthy.<p>Or maybe you aren't being literal and are just saying that what children see and hear has no influence on their development. Either way, total bullshit.
This is simply false -- the literature is full of discussion about the health effects of social media.<p>More generally you're committing I believe two separate fallacies of ambiguity? Like one in going from the institution of social media to its reification in the form of specific websites, and then a second fallacy when you go from the specific websites to all websites in general? Like if you said "Gun ownership is not a thing at all. Gun ownership is a piece of metal. Pieces of metal cannot be healthy or unhealthy." OK but, you owning a gun is known in the scientific literature to be significantly correlated with a bunch of very adverse health effects for you, such as you dying by suicide or you dying from spousal violence or your protracted grief and wasting away because your child accidentally killed themselves. Like to say that it's impossible for the institution to have adverse health effects because we can situate the objects of that institution into a broader category which doesn't sound so harmful, is frankly messed up.<p>[1]: Bernadette & Headley-Johnson, "The Impact of Social Media on Health Behaviors, a Systematic Review" (2025) <a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC12608964/" rel="nofollow">https://pmc.ncbi.nlm.nih.gov/articles/PMC12608964/</a> - the content you consume can promote healthy or unhealthy behaviors<p>[2]: Lledo & Alvarez-Galvez, "Prevalence of Health Misinformation on Social Media: Systematic Review" (2021) <a href="https://www.jmir.org/2021/1/E17187/" rel="nofollow">https://www.jmir.org/2021/1/E17187/</a> is notable not just for its content but also like a thousand papers that cite it getting into all of the weeds of health influencers sharing misinformation to make a buck<p>[3]: Sun & Chao, "Exploring the influence of excessive social media use on academic performance through media multitasking and attention problems" (2024) <a href="https://link.springer.com/article/10.1007/s10639-024-12811-y" 
rel="nofollow">https://link.springer.com/article/10.1007/s10639-024-12811-y</a> was a study of a reasonably large cohort showing correlations between social media usage and particular forms of multitasking that inhibit academic performance -- more generally there's broad anecdata that the current "endless scrolling constant dopamine hits" model that social media gravitates to, produces kids that are "out of control" with aggressive and attentional difficulties -- see Kazmi et al. "Effects of Excessive Social Media Use on Neurotransmitter Levels and Mental Health" (2025) (PDF warning - <a href="https://www.researchgate.net/profile/Sharique-Ahmad-2/publication/392758716_Effects_of_Excessive_Social_Media_Use_on_Neurotransmitter_Levels_and_Mental_Health_A_Neurobiological_Meta_Analysis/links/6851422111be4823fbdf017c/Effects-of-Excessive-Social-Media-Use-on-Neurotransmitter-Levels-and-Mental-Health-A-Neurobiological-Meta-Analysis.pdf" rel="nofollow">https://www.researchgate.net/profile/Sharique-Ahmad-2/public...</a>) for more on the actual literature that has probed those questions<p>[4]: The APA has a whole "Health advisory on social media use in adolesence" <a href="https://www.apa.org/topics/social-media-internet/health-advisory-adolescent-social-media-use" rel="nofollow">https://www.apa.org/topics/social-media-internet/health-advi...</a> which is pretty even-handed about "these parts of social media are acceptable, those parts can maybe even be downright good -- but here are the papers that say that for adolescents, it can mess with their sleep, it can expose them to cyberhate content that measurably promotes anxiety and depression, it has been measured to promote disordered eating if they use it for social comparison..."
> If you love your family, you must stop online age verification.<p>> If you want the best for your children, you must stop online age verification.<p>> Your children are being targeted. The infrastructure being built under the cover of child safety is designed to enslave them for the rest of their lives.<p>Jumped the shark on that one, and really off-color. I'm less inclined to listen to the guy, not because of his actual points, but because of how unreasonable he sounds when articulating them. A great lesson in how not to do rhetoric.
When I read those seemingly outrageous claims, I didn't immediately dismiss the author. I allowed him to substantiate the claims and kept reading. I found myself agreeing with his argument and his train of thought about how, once digital IDs are accepted as a norm, they won't be unwound, and all online activity will likely require them and then, as he says,<p><i>"Your children will never know what it was like to think freely online. They will never explore ideas anonymously. They will never question authority without it being logged in their permanent profile. They will never speak freely without fear that every word will be used against them.<p>They will grow up in a digital cage. And you will have to tell them you saw it being built and did not stop it when you had the chance."</i><p>So I'm with the author on this one. Under the cover of child safety, digital IDs will cage us (or at least children entering the verification age), and it will probably never be rolled back.
That's the role of rhetoric as a skill: all the true and sufficient syllogisms in the world will be ignored by most readers, if the argument leads with priors-triggering hyperbole and bombast.
The best way to not be in a digital cage is to opt out of the current digital products.<p>Would that be such a bad thing? Frankly I would welcome a world in which kids are not using Instagram or TikTok. They don’t have to live in a cage if we don’t let them in the cage.<p>Personally, my plan is that when age verification laws get passed, every service that requires ID is a service I stop using. And I expect my life to be better for it!
What if all services require ID?<p>Let’s take a basic example: Wikipedia, which hosts pornography, easily could be a target of such legislation. Now there is infrastructure in place to know when you read about “Criticisms of policy X” and maybe it’s handled safely or maybe it’s handed directly to the government.<p>What about news? It’s a hop, skip, and a jump from “age verify pornography with ID” to “age verify content about sexual abuse or violence.” Now the infrastructure is in place to see the alt-news criticisms you read.<p>Twitch or YouTube wouldn’t even wait to comply; ID verification is something these corporations are already perfectly fine with. Now, you watching a history of your government’s crimes is a potentially tracked red flag that you’re a dissident to be watched.<p>Do you think if this sort of legislation is enacted, it will stop at large websites? It will be an excuse used by the government and supported by big tech firms to shut down any small websites which don’t comply. After all, Google, MS, et al. would rather that your entire concept of the internet start and end in a service they control.
> The best way to not be in a digital cage is to opt out of the current digital products.<p>But will your friends and family opt out? Their phones are always listening. They can just as easily listen to you, even if you go to great pains not to expose yourself to technology. They'll make a shadow profile of any avoidant user whether they want it or not.
> The best way to not be in a digital cage is to opt out of the current digital products.<p>Bullshit. These are all-encompassing monopolies and government services. More likely, they'll <i>ban you</i> and you'll end up having to go to court out of desperation to demand that they service you.<p>This is very limited thinking. If you lacked this sort of imagination 20 years ago, you wouldn't have been able to predict today.<p>> Frankly I would welcome a world in which kids are not using Instagram or TikTok.<p>This is the sort of passive reactionary nonsense that causes the danger that we're in. <i>Everything</i> isn't something to give up lightly, even if you think that it will force your neighbor to turn his music down, or get rid of bad reality television. I don't like kids on social media either. I don't like adults on it. I think kids are suffering more from surveillance than from TikTok.
Nah, that’s silly, because Google has been doing all that already for the past quarter century. This “age verification” shit isn’t going to move the needle on the Google-created dystopia we already have.<p>The time to worry about not having a digital cage was quite a while ago. Instead tech people pushed Chrome and Android and Gmail and ads onto us.
Chrome, Android, and Gmail are optional to use.
So is social media.
It's framed as being only for social media. But, really, it's about network access. Without network access, it's difficult to thrive in the modern world.<p>Are you not alarmed at the possibility that a person's network access could be cut arbitrarily and at-will?
Is Google tracking which teenagers make which posts on 4chan?<p>Curious whether that happens via Google Chrome versus not
A lot of people dismissed RMS's "Right to Read"[1] essay long ago. All the things it was warning about have come to pass, in spades.<p>1: <a href="https://www.gnu.org/philosophy/right-to-read.html" rel="nofollow">https://www.gnu.org/philosophy/right-to-read.html</a>
It's mind-boggling how far Stallman saw into the future. The saddest part is we're losing this war. They're going to destroy freedom of computation, freedom of information, and it turns out that... nobody cares. Nobody but a bunch of nerds.
Responding to tone but not to content is what a dog does.
><i>They are counting on you caring more about sounding reasonable than protecting your kids from a system designed to control them forever.</i>
Do you actually have an argument to make?<p>He’s 100% correct.<p>For a start, children are <i>parents’</i> responsibility, and the state should stay out of that as much as reasonably possible.<p>Nothing more would need to be said on the matter if that’s as far as it went, but it isn’t.<p>There can be no free speech if the state can imprison you for what you say, and they know everything you say.<p>I dropped the word ‘online’ from the above paragraph, because online <i>is the real world</i>. Touch grass, sure, but there’s no way online isn’t real. Are these words not real simply because I telegraphed them to you?<p>That’s not a world I want to live in.
>For a start, children are parents' responsibility<p>And not distributing porn to children is a porn company's responsibility.<p>You are repeating a very common talking point, but it's not a good one.<p>Age verification laws make it possible to hold service providers liable for breaking the law (it's already illegal to distribute porn to minors in many places, like the US).<p>It's both true and completely irrelevant that parents should do a better job protecting their children from harmful services online.
> For a start, children are parents' responsibility, and the state should stay out of that as much as reasonably possible.<p>Yes<p>That's why stores let kids buy alcohol and tobacco, of course, because no responsible parent would let them buy that, right?<p>That's why any kid can go watch any movie in the cinema, right?<p>Yes, it's the parents' responsibility. Do you think a middle-class single mother has the resources to keep her kids entertained and out of social media for the whole day?<p>The problem with age verification is 100% the lack of anonymity in its implementation (which I do agree has ulterior motives) - but honestly not the age check in itself
> That's why any kid can go watch any movie in the cinema right?<p>Yes. At least in the U.S., the federal government does not regulate that, it is voluntary by the MPA (formerly MPAA) and theaters. A kid can buy a ticket for a PG movie and walk into an R-rated movie.<p>> Do you think a middle class single mother has the resources to keep their kids entertained and out of social media for the whole day?<p>Mine did. While not everyone has a backyard, things like pencils, papers, books, used toys, etc can be found inexpensively or for free.
It's weird that none of your arguments or proposals hold accountable the responsible parties.<p>You want to force us to compromise when <i>we were minding our own goddamn business</i>.
Responsible parties like porn companies that distribute porn to minors? Parents are still accountable with age verification laws.<p>If parents suck at parenting, they will suffer.<p>If porn companies distribute porn to minors, which is illegal in many places such as the US, they will not suffer. Unless you start holding them accountable.
The kids are our future adults. It should be pretty obvious that getting them used to the state yanking access is a future problem. I don’t see anything off-color or unreasonable.
Maybe you're not the target, then.<p>I haven't heard too many people say these extreme-sounding, yet at least arguably true points out loud.<p>Someone <i>should</i> be saying them, and the fact that it's not your particular cup of tea may not be the biggest issue here.
I’ve been noticing a trend among a lot of HN members where, instead of contending with the arguments made in an article, they focus on the “off-putting rhetoric” used by the author.<p>Make no mistake, you are engaging in your own form of rhetoric when you respond like this. You are in effect moving the discussion away from the subject at hand, and towards the perceived faults in the author’s communication style. This is a rhetorical sleight of hand and it’s highly disingenuous.
"Disingenuous?" Just because someone finds the style irksome, and chooses to share that here, they're deceptively, calculatingly trying to derail the conversation? That's an extremely cynical and uncharitable take.<p>If I were the author of the post, I'd value the feedback.
Except that is not what this place is for, at all, and flirts with several explicit posting guidelines. It doesn't make for good discussion, doesn't address the topic at hand, etc.
> how unreasonable he sounds<p>It's important to remember that they're targeting your children. You grew up with freedom from surveillance and constant identification. You were able to communicate anonymously and without the content of your speech being sold to Walmart and the cops. They are putting in effort to make sure that your children will never have that reality as a reference point. The idea of the government and a dozen corporations not knowing everything that they are doing at all times, and not using and selling that information freely, will sound like the ramblings of a delusional old fool.<p>It's important that you engage with that. Denial is not something to brag about.
Ironic that he's relying on the same ridiculous "think of the children" rhetoric that's being used to promote age verification. Really says a thing or two about online discourse in our day and age.
5 years ago I would have agreed, but seeing how the GOP has been fighting tooth and nail to protect actual child sex traffickers, I don't think so anymore. There's just no possible way that the safety of children is an actual concern to any of them. To these people, kids are little more than sex toys for billionaires.
I'm completely OK with verifying someone's age before distributing age-restricted services to them. That's what an age-restricted service is, and obviously we shouldn't let porn companies distribute porn to minors (it's already illegal most places). Just don't use porn, Facebook, online gambling, etc. if you don't want to share your identity.<p>I can see why it's unfortunate, but the idea posited that it's somehow illegal in the US is ridiculous. You have no right to watch porn anonymously at the expense of holding porn companies liable for distributing porn to minors.<p>Internet 1.0 was largely read-only, ephemeral, or decentralized. Chat rooms, IRC, personal webpages, etc. There was anonymity and there were no age-restricted services.<p>Internet 2.0 introduced age-restricted services and the enforcement lagged. The enforcement is now catching up. You can still do all the Internet 1.0 things anonymously, but you can no longer gamble online as a 14-year-old, and hopefully soon you won't be able to watch porn either.
Private companies can now link all your online activities to you. Not an advertising ID, but directly to you and your loans and your health data and whatever they're selling on the black market. Every data breach is now 100 times worse. It was already almost possible to learn about you directly by buying data; now it's easier.<p>The point of this is not really to verify age. It is to verify identity. There's no way to prove someone is some age without presenting a legal ID.<p>Also, it's not just porn, Facebook, online gambling, etc. Under some bills, it's the OS itself. So ALL your activities.
This argument as framed doesn’t make any sense. Porn is (and WAS) Internet 1.0.<p>There was porn before most everything on the web. Porn is also speech / art.<p>Anonymous access should be available for any website that wants to share their content on the Internet provided they have the rights to that content.<p>States that seek to limit that could make a legal argument that they have the right to limit access, but in the end it’s infringing speech. Worse, it’s unenforceable.<p>And yes, I would make the same arguments for people posting hateful shit or misinformation.