
278 points | miles | 1 comment
mschuster91 | No.44363727
> The new version of NO FAKES requires almost every internet gatekeeper to create a system that will a) take down speech upon receipt of a notice; b) keep down any recurring instance—meaning, adopt inevitably overbroad replica filters on top of the already deeply flawed copyright filters; c) take down and filter tools that might have been used to make the image; and d) unmask the user who uploaded the material based on nothing more than the say so of person who was allegedly “replicated.”

You already need point a) in place to comply with EU laws and directives (the DSA, anti-terrorism rules [1]) anyway; I think the UK has anti-terrorism laws with similar wording, and the US has the same under its CSAM laws.

Point b) is already required if you operate in Germany; there have been a number of court rulings holding that platforms have to take down repeated uploads of banned content [2].

Point c) is something that makes sense; it's time to crack down hard on "nudifiers" and similar apps.

Point d) is the one I have the most issues with, although that's nothing new either: unmasking users via a barely fleshed-out subpoena or dragnet order has been a thing for many, many years now.

This thing targets gatekeepers, so not your small mom-and-pop startup but billion-dollar companies. They can afford to hire proper moderation staff to handle such complaints; they just don't want to because it would hurt their bottom line, at the cost of everyone affected by AI slop.

[1] https://eucrim.eu/news/rules-on-removing-terrorist-content-o...

[2] https://www.lto.de/recht/nachrichten/n/vizr6424-bgh-renate-k...

pjc50 | No.44364049
This is one of those cases where the need to "do something" is strong, but that doesn't excuse terrible implementations.

Especially at a time when the US is becoming increasingly authoritarian.