

693 points macawfish | 58 comments
1. everdrive ◴[] No.44544268[source]
Others have said this, I'm sure, but this will move past porn _quickly_. Once there is agreed-upon age verification for pornography, much of the professional internet will require identity verification to do _anything_. This is one of the bigger nails in the coffin for the free internet, and this is true whether or not you're happy with all the pornography out there.
replies(8): >>44544359 #>>44544369 #>>44544497 #>>44545175 #>>44545690 #>>44550491 #>>44550525 #>>44550534 #
2. zeroonetwothree ◴[] No.44544359[source]
I don’t agree, at least as far as legal obligation goes. The average voter is far more worried about porn and other explicit content and not so much about anything else.
replies(3): >>44544628 #>>44544941 #>>44547888 #
3. nikanj ◴[] No.44544369[source]
And honestly, with the advent of AI spam everywhere, I'd be quite happy to visit a version of the internet where everyone is a certified real person
replies(6): >>44544510 #>>44545802 #>>44545960 #>>44546641 #>>44547850 #>>44559850 #
4. baq ◴[] No.44544497[source]
I’d rather have this regulated properly before Cloudflare becomes the de facto standard for ID checks.
replies(2): >>44545198 #>>44545947 #
5. baq ◴[] No.44544510[source]
No idea why you’re getting downvoted when there’s a slow but unstoppable migration of everything into discord or other walled, somewhat LLM-proof gardens.
replies(2): >>44544682 #>>44547857 #
6. ◴[] No.44544628[source]
7. tines ◴[] No.44544682{3}[source]
Walled gardens are only LLM-proof until some AI company makes an offer for the data.
replies(1): >>44544988 #
8. __loam ◴[] No.44544941[source]
This doesn't really track with the widespread and normalized use of pornographic materials, including written descriptions, by most adults in this country. There's a pretty wide gulf between "I don't think kids should be able to access this stuff" and "I think we need to supercharge the surveillance state and destroy the First Amendment"
replies(1): >>44547967 #
9. nikanj ◴[] No.44544988{4}[source]
I don’t care about that. I care about the number of DMs I get from ”superhorni420” offering me her nudes

I don’t really see a future where Discord would let an AI company post the kind of 24/7 porn+crypto+scams you get in your email spam folder

replies(3): >>44545633 #>>44546200 #>>44547876 #
10. fluidcruft ◴[] No.44545175[source]
Age verification seems like a subset of human verification so if it gets rid of both bots and captchas then why not?
replies(1): >>44545618 #
11. kgwxd ◴[] No.44545198[source]
Nothing is going to be "regulated properly" for at least the next 3.5 years, and we'll all be dealing with backwards decline for decades after. That's the best case, but I'm guessing it'll be even worse than the "radicals" are shouting about.
12. Robotbeat ◴[] No.44545618[source]
Pretty sure you can guess a few.
replies(1): >>44545632 #
13. fluidcruft ◴[] No.44545632{3}[source]
Guess a few what?
replies(1): >>44545789 #
14. Aerroon ◴[] No.44545633{5}[source]
But that's already been happening in discord for years.
replies(1): >>44545774 #
15. hackyhacky ◴[] No.44545690[source]
This doesn't sound so bad. I would much prefer to have discussions about politics, technology, or religion safe in the knowledge that I am not inadvertently communicating with a minor.
replies(4): >>44545722 #>>44545747 #>>44547947 #>>44563871 #
16. HaZeust ◴[] No.44545722[source]
I had very passionate talks online about all 3 categories before I turned 18, and I got a lot of feedback, from older folk I didn't previously know, that I shaped opinions and formed new perspectives - and a lot of the talks sure as shit did the same for me. I cannot say I would have nearly the same current passion that I do for technology, aspects of politics, and philosophy (including that of religion) without such exposures during my adolescent years, and I'm sure you'd be hard-pressed to find others young enough that wouldn't say the same - provided they have an adequate baseline of introspection.

On that note, out of all the examples you could have given for discussion categories that are unbecoming to have with minors, you chose 3 relatively benign ones, lol.

17. layer8 ◴[] No.44545747[source]
Parent said identity verification, not age verification.
18. layer8 ◴[] No.44545774{6}[source]
The parent’s point is that identity verification would stop that.
replies(1): >>44547864 #
19. layer8 ◴[] No.44545789{4}[source]
Reasons why not.
replies(1): >>44546353 #
20. 38 ◴[] No.44545802[source]
be careful what you ask for.
21. squigz ◴[] No.44545947[source]
What issues do you have with Cloudflare becoming the de facto standard that wouldn't also apply to whatever would come of regulating it 'properly'?
22. squigz ◴[] No.44545960[source]
You won't though. Malicious actors will find a way around this - either purchasing or stealing whatever form of ID is used for this. The only people who will suffer are law-abiding citizens simply trying to browse the Internet.
replies(1): >>44546364 #
23. stogot ◴[] No.44546200{5}[source]
WhatsApp does this. I get added to scam channels
24. fluidcruft ◴[] No.44546353{5}[source]
I still don't see any reason erasing bots and captchas from my online experience is bad. I hate bots and captchas. They add absolutely no value to my life. Conversely, there is a lot to be gained if something like X or Reddit or whatever can anonymously verify that a user is a person and over 18 or 21 or 30 even (whatever) without having to directly handle identities. It could be all the benefits of a bouncer checking for a pulse and a valid ID without the privacy invasion. If done correctly it could also make fraud more difficult.
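The anonymous check described here can be sketched as a bearer token: an issuer who has verified the user's real ID out-of-band signs a bare claim bound to a random nonce, and a site accepts the claim without ever seeing an identity. A toy Python sketch — the shared HMAC key stands in for a real signature, and true unlinkability between issuer and site would need blind signatures along the lines of Privacy Pass:

```python
import hashlib
import hmac
import secrets

ISSUER_KEY = secrets.token_bytes(32)  # held only by the age-verification issuer

def issue_age_token(claim: str) -> tuple[bytes, bytes]:
    """Issuer checks the user's real ID out-of-band, then signs a bare claim.
    The nonce is random and carries no identity."""
    nonce = secrets.token_bytes(16)
    tag = hmac.new(ISSUER_KEY, nonce + claim.encode(), hashlib.sha256).digest()
    return nonce, tag

def verify_age_token(claim: str, nonce: bytes, tag: bytes) -> bool:
    """A site checks the token; it learns the claim, never the person."""
    expected = hmac.new(ISSUER_KEY, nonce + claim.encode(), hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

nonce, tag = issue_age_token("over_18")
print(verify_age_token("over_18", nonce, tag))  # True: claim was attested
print(verify_age_token("over_21", nonce, tag))  # False: claim never attested
```

In this toy the verifying site would need the issuer's key; a deployable design would use public-key or blind signatures so sites can verify tokens without being able to mint them.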
replies(2): >>44547402 #>>44547936 #
25. Tadpole9181 ◴[] No.44546364{3}[source]
"Workarounds?" Malicious actors will just operate out of different countries like they always have.
26. positron26 ◴[] No.44546641[source]
GAN style training is only going to get cheaper and easier. Detection will collapse to noise. Any ID runes will be mishandled and the abuse will fly under the radar. Only the space of problems where AI fundamentally can't be used, such as being at a live event, will be meaningfully resistant to AI.

Another way for it all to unfold is maybe 98% of online discourse is useless in a few years. Maybe it's useless today, but we just didn't have the tools to make it obvious by both generating and detecting it. Instead of AI filtering to weed out AI, a more likely outcome is AI filtering to weed out bad humans and our own worst contributions. Filter out incessant retorting from keyboard warriors. Analyze for obviously inconsistent deduction. Treat logical and factual mistakes like typos. Maybe AI takes us to a world where humans give up on the 97% and only 1% that is useless today gets through. The internet's top 2% is a different internet. It is the only internet that will be valuable for training data to identify and replace the 1% and converge onto the spaces that AI can't touch.

People will have to search for interactions that can't be imitated and have enough value to make it through filters. We will have to literally touch grass. All the time. Interactions that don't affect the grass we touch will vanish from the space of social media and web 2.0 services that have any reason to operate whatsoever. Heat death of the internet has a blast radius, and much of what humans occupy themselves with will turn out to be within that blast radius.

A lot of people will by definition be disappointed that the middle standard deviation of thought on any topic no longer adds anything. At least at first. There used to be a time when the only person you heard on the radio had to be somewhat better than average to be heard. We will return to that kind of media because the value of not having any expertise or first-hand experience will drop to such an immeasurable low that those voices no longer participate or appear to those using filters. Entire swaths of completely replaceable, completely redundant online "community" will just wither to dust, giving us time to touch the grass, hone the 2%, and make sense of others' 2%.

Callers on radio shows used to be interesting because people could have a tiny window into how wildly incorrect and unintelligent some people are. Pre-internet media was dominated by people who were likely slightly above average. Radio callers were something like misery porn or regular-people porn. You could sometimes hear someone with such an awful take that it made you realize that you are not in the bottom 10%. The internet has given us radio callers, all the time, all of them. They flooded Twitter, Reddit, Facebook. They trend and upvote themselves. They make YouTube channels where they talk into a camera with higher quality than commercial rigs from 2005. There is a GDP for stupidity that never existed except as the novelty object of a more legitimate channel. When we "democratized" media, it wasn't exclusively allowing in thoughts and opinions that were higher quality than "mainstream".

The frightening conclusion is possibly that we are living in a kind of heat death now. It's not the AIs that are scary. It's the humans we have platformed. The bait posts on Instagram will be out-competed. Low-quality hot takes will be out-competed. Repetitive and useless comments on text forums will be out-competed. Advertising revenue, which is dependent on the idea that you are engaging with someone who will actually care about your product, will be completely disrupted. The entire machine that creates, monetizes, and foments utterly useless information flows in order to harness some of the energy will be wrecked, redundant, shut down.

Right now, people are correct that today's AI is on an adoption curve that would see more AI spam if tomorrow's AI isn't poised to filter out not just spam but a great mass of low-value human-created content. However, when we move to suppress "low quality slop" we will increasingly be filtering out low-quality humans. When making the slop higher quality so that it flies under the radar, we will be increasingly replacing and out-competing the low-quality content of the low-quality human. What remains will be of a very high deductive consistency. Anything that can be polished to a point will be. Only new information outside the reach of the AI and images of distant stars will be beyond the grasp of this convergence.

All of this is to say that the version of the internet where AI is the primary nexus of interaction via inbound and outbound filtering and generation might be the good internet we think we can have if we enact some totalitarian ID scheme to fight against slop that is currently replacing what the bottom 10% of the internet readily consumes anyway.

27. sigwinch ◴[] No.44547402{6}[source]
The article concludes that age verification must repeat every 60 minutes. And when there’s doubt about safe harbor, better safe than sorry. There’s a chance you’ll look back at captchas with relish.
replies(1): >>44550485 #
28. realusername ◴[] No.44547850[source]
You already have that with Cloudflare checking almost every single website on earth; as you can see, that doesn't work.
replies(1): >>44548790 #
29. johnnyanmac ◴[] No.44547857{3}[source]
It was a very tone-deaf take, that's why. Most of the internet is concentrated in the top 100 websites, and 80% of them would not be affected by this law. So you'll still see plenty of bots on YouTube, Discord, Reddit, news sites, and so on.

Blogs with a few comments would go from 5 real commenters to 0 or 1. This does not get the desired result.

----

Secondly, I assure you there's plenty of classic spam on servers that don't have good moderation. Pre-AI spam never disappeared.

30. johnnyanmac ◴[] No.44547864{7}[source]
Where in this statement did people conclude that all websites require identification? Even if they did, you know the technocrats would just pay a few million to have the government look the other way. I don't see the upside here.
31. johnnyanmac ◴[] No.44547876{5}[source]
>I don’t really see a future where Discord would let an AI company post the kind of 24/7 porn+crypto+scams you get in your email spam folder

Discord just changed management, and the new management immediately said they are interested in IPO'ing. If trends hold, it will indeed get overrun by bots like Reddit did around the time it was preparing to IPO. I see it as an inevitability at this point.

32. johnnyanmac ◴[] No.44547888[source]
Did you miss the recent years of some states trying to ban gay/trans books from libraries? Or even just books written by gay/trans authors? It's been part of their playbook for years to try and associate transgender identity with pornography.

You are right, the average voter is not worried about any single enforcement outside of CSAM. The people who will exploit this are not just "your average voter".

33. johnnyanmac ◴[] No.44547936{6}[source]
Because you're choosing not to see the obvious downsides. Wasn't this the community that spent the last decade worried about tech companies harvesting their data for profit?

But sure, let's explain the downsides:

1. This isn't an all-encompassing law. It's only for sites that host adult content. You know what people will do... remove adult content.

2. As we see this year, rules are useless without enforcement. I'm sure X or Reddit or whatever large companies will strike deals and be exempt. This will only harm the little sites who get harassed by vested interests.

3. There have been campaigns to try and associate LGBT content with pornography for a while now. This will go beyond porn and be used to enforce yet more bigotry. This "think of the children" rationale is always their backdoor to stripping away freedoms, and I sure don't trust it this time.

4. On a moral level, I care more about retaining my pseudo-anonymity than about worrying over bots. I'm not giving my ID.me in order to interact on a games forum, for instance. The better way to address this (if these people actually cared about it) is to force companies to disclose which accounts are operated by bots. Many websites have APIs, so that would eliminate many of them, even if it's not perfect.

5. This execution sounds awful. On general principle, I do not want people sued over the laws of states they do not reside in. Why should California need to comply with Floridian laws? This is why the impacted porn sites simply block those states' IPs. The internet is more and more connected, so you can imagine the chaos if this is generalized further instead of actually making federal law. This is half-hearted.

replies(1): >>44550309 #
34. johnnyanmac ◴[] No.44547947[source]
I don't care if they are 16 or 68; I discuss topics, not necessarily with the person themselves. The former can be insightful and the latter can still be extremely close-minded.

I also don't understand why the government should control who I can talk to in a digital space. Maybe start investigating the president's flight records if you suddenly care about children interacting with adults.

35. 827a ◴[] No.44547967{3}[source]
This doesn't destroy the first amendment any more than requiring an ID & background check to purchase a firearm destroys the second amendment. Which is to say that it might, but for exactly the same reason, so The People ultimately need to decide on a consistent choice of interpretation.
replies(2): >>44548077 #>>44548247 #
36. ozgrakkurt ◴[] No.44548077{4}[source]
Except one person googling and watching porn has nothing to do with other people, very different from buying guns
replies(1): >>44551535 #
37. rocqua ◴[] No.44548247{4}[source]
1A: Congress shall make no law ... abridging the freedom of speech. (Note: freedom of speech includes the ability to listen to what you want)

2A: ... the right of the people to keep and bear Arms, shall not be infringed.

Congress making a law that prevents minors from accessing information is clearly a breach of the first text.

Point of sale ID checks for guns are much less clearly "infringing on the right to keep arms". It is only limiting the sale, not the ownership.

replies(1): >>44551486 #
38. esperent ◴[] No.44548790{3}[source]
Clicking on pictures of motorcycles, while annoying, is a very different thing than having to show your ID.
39. fluidcruft ◴[] No.44550309{7}[source]
One response to flaws in the law is to oppose them. Another response is to find common ground and embrace and extend.

It won't harm anything. Even now as these things spread nationwide something like Stripe or whatever will pop up and fill the need as a service. It used to be essentially universally required to prove your age using a credit card. There was/is a company that specializes in that. I can't remember its name but it was ubiquitous for porn access for quite a long time. Those over 18 confirmation banners used to be much stronger than the merely souped up cookie notices they have become today. Age verification as a service is trivial (particularly with the rise of phones) and someone will build a system that does a much better job preserving anonymity than credit cards ever did. At this point all you need is something like a passkey or FIDO token and a way for something to vouch age during account creation.
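The passkey idea in that last sentence amounts to a challenge–response at signup: the site sends a one-time challenge, a device-held credential (age-attested once, offline) answers with a signed claim, and the site stores only the boolean. A hypothetical sketch — class and function names are invented, and HMAC stands in for the asymmetric signature a real FIDO authenticator would use:

```python
import hashlib
import hmac
import secrets

class Authenticator:
    """Stands in for a FIDO-style token whose holder's age was attested once, offline."""
    def __init__(self, over_18: bool):
        self.key = secrets.token_bytes(32)
        self.over_18 = over_18

    def register(self, challenge: bytes) -> tuple[bytes, bytes, bytes]:
        # Sign the site's challenge together with the bare age claim.
        claim = b"over_18" if self.over_18 else b"minor"
        sig = hmac.new(self.key, challenge + claim, hashlib.sha256).digest()
        return claim, sig, self.key  # a real token would hand over a public key

def signup(authenticator: Authenticator) -> bool:
    """Site-side account creation: learns only a boolean, never a name or birthdate."""
    challenge = secrets.token_bytes(16)  # one-time, prevents replay
    claim, sig, verify_key = authenticator.register(challenge)
    expected = hmac.new(verify_key, challenge + claim, hashlib.sha256).digest()
    return hmac.compare_digest(sig, expected) and claim == b"over_18"

print(signup(Authenticator(over_18=True)))   # True: account created
print(signup(Authenticator(over_18=False)))  # False: attested claim is "minor"
```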

I agree that federal law is preferred.

40. fluidcruft ◴[] No.44550485{7}[source]
The article starts as

> Just in time for the Fourth of July, last week the Supreme Court effectively nullified the First Amendment for any writers, like me, who include sex scenes in their writing, *intended for other adults*

There you have it. The author is already aware that their creation is not appropriate for minors.

All that's needed is an easy way for the author to click "intended for adults" on whatever material they are creating and the entire article becomes nothing more than yapping into the wind.

Substack can easily build that as a feature for example. Reddit already has that with its "NSFW" flags (but does not currently verify accounts are actually 18yo+ adult humans).
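The feature being described needs only two bits: the author's "intended for adults" flag on the content and a verified-adult bit on the session. A minimal sketch with invented names:

```python
from dataclasses import dataclass

@dataclass
class Post:
    body: str
    adult_only: bool = False   # the author-set "intended for adults" flag

@dataclass
class Session:
    age_verified: bool = False  # set once, after an external age check

def render(post: Post, session: Session) -> str:
    """Serve flagged material only to sessions carrying the verified-adult bit."""
    if post.adult_only and not session.age_verified:
        return "[age-restricted: verification required]"
    return post.body

essay = Post("...an essay with adult scenes...", adult_only=True)
print(render(essay, Session(age_verified=False)))  # placeholder shown
print(render(essay, Session(age_verified=True)))   # full body shown
```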

Generally, it seems like Silicon Valley has become so entitled to taking the mile that the threat of taking back an inch brings out the hysterical Chicken Little persona.

replies(1): >>44551710 #
41. mathiaspoint ◴[] No.44550491[source]
Sharing/storing child porn is already illegal and punished far more harshly. So it's not like we've gone from zero to one. We've been censoring things people don't like for a little while now.
replies(1): >>44551183 #
42. buyucu ◴[] No.44550525[source]
This is why they are doing it. Governments want an ID check before anyone uses the internet, so they pick a topic like pornography to get their foot in the door. It's salami tactics.
replies(1): >>44560264 #
43. like_any_other ◴[] No.44550534[source]
Already in the works: Australia is quietly introducing 'unprecedented' age checks for search engines like Google - https://www.abc.net.au/news/2025-07-11/age-verification-sear...
replies(1): >>44553325 #
44. 3836293648 ◴[] No.44551183[source]
Sure, but that is a step from banning because it's harmful to the producer to banning because it's harmful to the consumer
replies(2): >>44555170 #>>44564344 #
45. 827a ◴[] No.44551486{5}[source]
Fine, then ATF Class 3 licenses, which are required to keep and bear some kinds of arms, are a breach of the 2A similar to how the 1A is being breached here.
replies(1): >>44564405 #
46. 827a ◴[] No.44551535{5}[source]
It has as much to do with other people as buying guns does. What about the actresses in the porn content; people the world, clearly including you, so quickly forgets about? The concerning number of women who are trapped in this industry, usually in third-world countries, by men? What about the people on the other side of the personal relationships of individuals who consume this content; the averted gazes, their treatment of women, how that impacts their community and their children?
replies(2): >>44551696 #>>44555447 #
47. ozgrakkurt ◴[] No.44551696{6}[source]
Asking for ID from the person watching porn has nothing to do with making porn illegal.

If you think it is harmful to the people making the porn, then it should be illegal.

If not, asking for ID is super useless.

Also, if watching porn is bad for the person's spouse, then they shouldn't do it, which again has nothing to do with asking for ID.

replies(1): >>44552390 #
48. Robotbeat ◴[] No.44551710{8}[source]
Substack and Reddit are huge websites. What you’re talking about kills self-hosting. Ironically, your idea for regulating this reinforces the VC-driven Silicon Valley capital-intensive model and kills independent, community driven low/no-capital websites.
49. 827a ◴[] No.44552390{7}[source]
But isn't harm minimization a thing? That's something we practice in other domains, like providing clean needles to drug addicts. After all, if drug use is harmful to the people doing drugs, then it should be illegal. So; making things illegal often doesn't solve the problem. Making it harder to consume porn reduces consumption which reduces the amount of money being funneled into the industry, which might be beneficial to those harmed by it (both producers and consumers). Versus, making it illegal might have a prohibition-style impact, and is, of course, legally tenuous anyway.
replies(2): >>44553164 #>>44564418 #
50. ozgrakkurt ◴[] No.44553164{8}[source]
I agree, but then they can go after the people producing porn, not the people who watch it.

This is the same as going after the drug addicts.

This feels like going after the people on the more vulnerable side because it is easy, which signals it is more about forcing people not to do something than about genuinely trying to help them.

But going after the people producing porn is a no-no because they have money and they are organised.

Also, IMO the intention of the people trying to implement things like this is just surveillance and has absolutely nothing to do with protecting families, children, addicts, etc.

replies(1): >>44556449 #
51. sunaookami ◴[] No.44553325[source]
The difference between the comments there (https://news.ycombinator.com/item?id=44528204) and here is quite funny.
52. mathiaspoint ◴[] No.44555170{3}[source]
Maybe if it were the creation of child porn that were illegal but that's not how the law is written.
53. __loam ◴[] No.44556449{9}[source]
There's no evidence that porn addiction is nearly as harmful as drug addiction other than making some religious people feel more shame about it than usual. If the argument is that the people who produce porn are doing something illegal or harmful, then prosecute them. If they're filming consenting adults in compliance with regulations that are already in place then I don't really understand the problem here.
54. throwaway290 ◴[] No.44559850[source]
That's why "AI" founders do stuff like Altman's Worldcoin. First they reap $$$ while creating a problem then more $$$ when "solving" it.
55. tenacious_tuna ◴[] No.44563871[source]
> safe in the knowledge that I am not inadvertently communicating with a minor.

Why is that so bad? As a kid I really appreciated participating in mixed-age discussions on many topics. I view that as part of what it means to grow into a "young adult."

Too often I think we (North American society) assume that school, with all its rigorous age separation, gives kids the space and instruction they need to do well in the world, but inevitably we get 18-year-olds with no awareness of how the world functions beyond themselves... because they've only ever dealt with people of the same age.

The world is a diverse place; ideologically, racially, and in age. We, adults, need to be comfortable communicating with both children and legal minors because they'll be future citizens of the world [added in edit:] and they need to learn those skills too.

Overall, we keep trying to model a world that filters its own interactions towards children, which is flawed to begin with; but at some point people stop being children, and where does that leave them w.r.t. their expectations of others? If you've never had to consider that an adult might act in bad faith because your world has been so sanitized, are you prepared for a world with bad actors in it?

56. int_19h ◴[] No.44564344{3}[source]
We're long past that point. Many (most?) Western governments ban simulated CP, including non-realistic stuff like cartoons or even purely textual descriptions.

Some go further still. E.g. in Australia, "laws also cover depictions of sexual acts involving people over the threshold age who are simulating or otherwise alluding to being underage, even if all those involved are of a legal age."

57. int_19h ◴[] No.44564405{6}[source]
NFA items are actually a pretty good example of a largely pointless law, considering that what it does is effectively just make the items in question more expensive by taxing them and artificially limiting supply. If you want to own a machine gun, a grenade launcher, or even a fully functional tank in the US, you still can, so long as you're rich enough to afford it (unless your state has laws banning it). There are no additional restrictions on who can and cannot own that stuff beyond the requirement to pay the tax.
58. int_19h ◴[] No.44564418{8}[source]
You're essentially saying that you'd like to ban it long term, but since you can't make it happen right away, laws like these can serve as a first step to normalize censorship leading to such a ban.

Thank you for being honest about it and illustrating why the slippery slope is very real.