I don’t really see a future where Discord would let an AI company post the kind of 24/7 porn+crypto+scams you get in your email spam folder
On that note, out of all the examples you could have given for discussion categories that are unbecoming to have with minors, you chose 3 relatively benign ones, lol.
Another way for it all to unfold is that maybe 98% of online discourse is useless in a few years. Maybe it's useless today, but we just didn't have the tools to make it obvious by both generating and detecting it. Instead of AI filtering to weed out AI, a more likely outcome is AI filtering to weed out bad humans and our own worst contributions. Filter out incessant retorting from keyboard warriors. Analyze for obviously inconsistent deduction. Treat logical and factual mistakes like typos. Maybe AI takes us to a world where humans give up on the 98% and only the 2% that is useful today gets through. The internet's top 2% is a different internet. It is the only internet that will be valuable as training data to identify and replace the rest and converge on the spaces that AI can't touch.
People will have to search for interactions that can't be imitated and have enough value to make it through filters. We will have to literally touch grass. All the time. Interactions that don't affect the grass we touch will vanish from the space of social media and web 2.0 services that have any reason to operate whatsoever. Heat death of the internet has a blast radius, and much of what humans occupy themselves with will turn out to be within that blast radius.
A lot of people will by definition be disappointed that the middle standard deviation of thought on any topic no longer adds anything. At least at first. There used to be a time when anyone you heard on the radio had to be somewhat better than average to get on the air. We will return to that kind of media because the value of having no expertise or first-hand experience will drop to such an immeasurable low that those voices no longer participate, or no longer appear to those using filters. Entire swaths of completely replaceable, completely redundant online "community" will just wither to dust, giving us time to touch the grass, hone our 2%, and make sense of others' 2%.
Callers on radio shows used to be interesting because they gave listeners a tiny window into how wildly incorrect and unintelligent some people are. Pre-internet media was dominated by people who were likely slightly above average. Radio callers were something like misery porn or regular-people porn. You could sometimes hear someone with such an awful take that it made you realize that you are not in the bottom 10%. The internet has given us radio callers, all the time, all of them. They flooded Twitter, Reddit, Facebook. They trend and upvote themselves. They make YouTube channels where they talk into a camera with production quality higher than commercial rigs from 2005. There is a GDP for stupidity that never existed except as the novelty object of a more legitimate channel. When we "democratized" media, we weren't exclusively letting in thoughts and opinions of higher quality than the "mainstream".
The frightening conclusion is possibly that we are living in a kind of heat death now. It's not the AIs that are scary. It's the humans we have platformed. The bait posts on Instagram will be out-competed. Low quality hot takes will be out-competed. Repetitive and useless comments on text forums will be out-competed. Advertising revenue, which depends on the idea that you are engaging with someone who will actually care about your product, will be completely disrupted. The entire machine that creates, monetizes, and foments utterly useless information flows in order to harness some of the energy will be wrecked, redundant, shut down.
Right now, people are right that today's AI is on an adoption curve that will produce more AI spam if tomorrow's AI isn't poised to filter out not just spam but a great mass of low-value human-created content. However, when we move to suppress "low quality slop", we will increasingly be filtering out low-quality humans. And when the slop is made high-quality enough to fly under the radar, it will increasingly be replacing and out-competing the low-quality content of the low-quality human. What remains will be of a very high deductive consistency. Anything that can be polished to a point will be. Only new information outside the reach of the AI and images of distant stars will be beyond the grasp of this convergence.
All of this is to say: the version of the internet where AI is the primary nexus of interaction, via inbound and outbound filtering and generation, might be the good internet we think we can only get by enacting some totalitarian ID scheme to fight slop, slop that is currently replacing what the bottom 10% of the internet readily consumes anyway.
Blogs with a few comments would go from 5 real commenters to 0 or 1. This does not get the desired result.
----
Secondly, I assure you there's plenty of classic spam on servers that don't have good moderation. Pre-AI spam never disappeared.
Discord just changed management, and the new management immediately said they are interested in IPO'ing. If current trends hold, it will indeed get overrun by bots, like Reddit did around the time it was preparing to IPO. I see it as an inevitability at this point.
You are right, the average voter is not worried about any single enforcement outside of CSAM. The people who will exploit this are not just "your average voter".
But sure, let's explain the downsides:
1. This isn't an all-encompassing law. It's only for sites that host adult content. You know what people will do... remove adult content.
2. As we've seen this year, rules are useless without enforcement. I'm sure X or Reddit or whatever large companies will strike deals and be exempt. This will only harm the little sites that get harassed by vested interests.
3. There have been campaigns to associate LGBT content with pornography for a while now. This will go beyond porn and be used to enforce yet more bigotry. This "think of the children" rationale is always their backdoor to stripping away freedoms, and I sure don't trust it this time.
4. On a moral level, I care more about retaining my pseudo-anonymity than about worrying over bots. I'm not giving my ID.me in order to interact on a games forum, for instance. The better way to address this (if these people actually cared about it) is to force companies to disclose which commenters are being operated via bots. Many websites have APIs, so that would eliminate many of them, even if it's not perfect.
5. This execution sounds awful. As a general principle, I do not want people sued under the laws of states they do not reside in. Why should a site in California need to comply with Florida law? This is why the porn sites impacted simply block those states' IPs. The internet is more and more connected, so you can imagine the chaos if this is generalized further instead of actually taking hold at the federal level. This is half-hearted.
I also don't understand why the government should control who I can talk to in a digital space. Maybe start investigating the president's flight records if you suddenly care about children interacting with adults.
2A: ... the right of the people to keep and bear Arms, shall not be infringed.
Congress making a law that prevents minors from accessing information is clearly a breach of the first text.
Point of sale ID checks for guns are much less clearly "infringing on the right to keep arms". It is only limiting the sale, not the ownership.
It won't harm anything. Even now as these things spread nationwide something like Stripe or whatever will pop up and fill the need as a service. It used to be essentially universally required to prove your age using a credit card. There was/is a company that specializes in that. I can't remember its name but it was ubiquitous for porn access for quite a long time. Those over 18 confirmation banners used to be much stronger than the merely souped up cookie notices they have become today. Age verification as a service is trivial (particularly with the rise of phones) and someone will build a system that does a much better job preserving anonymity than credit cards ever did. At this point all you need is something like a passkey or FIDO token and a way for something to vouch age during account creation.
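To make that shape concrete, here's a minimal sketch of an anonymity-preserving age attestation. Everything in it is hypothetical: a real deployment would use asymmetric signatures (e.g. a passkey/FIDO assertion from the verification service rather than a shared key), but HMAC stands in here so the example runs on the Python standard library alone. The point is what the token contains: a single boolean claim plus a random nonce, and nothing that identifies the user.

```python
import base64
import hashlib
import hmac
import json
import secrets

# Key held by the hypothetical age-verification service. In a real system
# this would be an asymmetric keypair and sites would verify with the
# public half; a shared HMAC key is a stdlib-only simplification.
VERIFIER_KEY = secrets.token_bytes(32)

def issue_attestation(over_18: bool) -> str:
    """Verifier signs only the boolean claim plus a random nonce.
    No name, birthdate, or account identifier is included."""
    claim = json.dumps({"over_18": over_18, "nonce": secrets.token_hex(16)})
    sig = hmac.new(VERIFIER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    token = json.dumps({"claim": claim, "sig": sig})
    return base64.b64encode(token.encode()).decode()

def check_attestation(token: str) -> bool:
    """A site accepting the token learns only 'over_18', nothing else.
    Returns False on a bad signature or an under-age claim."""
    blob = json.loads(base64.b64decode(token))
    expected = hmac.new(VERIFIER_KEY, blob["claim"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, blob["sig"]):
        return False  # tampered or forged token
    return json.loads(blob["claim"])["over_18"]
```

The site never sees who did the verification, and the verifier never sees where the token was used, which is roughly the privacy split credit-card checks never managed.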
I agree that federal law is preferred.
> Just in time for the Fourth of July, last week the Supreme Court effectively nullified the First Amendment for any writers, like me, who include sex scenes in their writing, *intended for other adults*
There you have it. The author already is self-aware of the appropriateness of their creation for minors.
All that's needed is an easy way for the author to click "intended for adults" on whatever material they are creating and the entire article becomes nothing more than yapping into the wind.
Substack can easily build that as a feature for example. Reddit already has that with its "NSFW" flags (but does not currently verify accounts are actually 18yo+ adult humans).
Generally, it seems like Silicon Valley has become so entitled to taking the mile that the threat of taking back an inch brings out the hysterical Chicken Little fursona.
If you think it is harmful to the people doing the porn, then it should be illegal.
If not, asking for ID is super useless.
Also, if watching porn is bad for the person's spouse, then they shouldn't do it, which has nothing to do with asking for ID.
This is the same as going after the drug addicts.
This feels like going after the people on the more vulnerable side because it is easy. Which signals it is more about forcing people to not do something instead of trying to genuinely help them.
But going after people producing porn is a no no because they have money and they are organised.
Also, imo the intention of people trying to implement things like this is just about surveillance and has absolutely nothing to do with protecting the families, children, addicts, etc.
Why is that so bad? As a kid I really appreciated participating in mixed-age discussions on many topics. I view that as part of what it means to grow into a "young adult."
Too often I think we (North American society) assume that school, with all its rigorous age separation, gives kids the space and instruction they need to do well in the world, but inevitably we get 18-year-olds with no awareness of how the world functions beyond themselves... because they've only ever dealt with people their own age.
The world is a diverse place; ideologically, racially, and in age. We, adults, need to be comfortable communicating with both children and legal minors because they'll be future citizens of the world [added in edit:] and they need to learn those skills too.
Overall, we keep trying to model a world that filters its own interactions towards children, which is flawed to begin with. But at some point people stop being children, and where does that leave them w.r.t. their expectations of others? If you've never had to consider that an adult might act in bad faith, because your world has been so sanitized, are you prepared for a world with bad actors in it?
Some go further still. E.g. in Australia, "laws also cover depictions of sexual acts involving people over the threshold age who are simulating or otherwise alluding to being underage, even if all those involved are of a legal age."
Thank you for being honest about it and illustrating why the slippery slope is very real.