
278 points by miles
mschuster91 ◴[] No.44363727[source]
> The new version of NO FAKES requires almost every internet gatekeeper to create a system that will a) take down speech upon receipt of a notice; b) keep down any recurring instance—meaning, adopt inevitably overbroad replica filters on top of the already deeply flawed copyright filters; c) take down and filter tools that might have been used to make the image; and d) unmask the user who uploaded the material based on nothing more than the say so of person who was allegedly “replicated.”

You already need point a) in place to comply with EU laws and directives (the DSA, the anti-terrorism regulation [1]) anyway, and I think the UK has anti-terrorism laws with similar wording, as does the US with its CSAM laws.

Point b) is already required if you operate in Germany: there have been a number of court rulings holding that platforms must take down repeat uploads of banned content [2].

Point c) is something that makes sense; it's time to crack down hard on "nudifiers" and similar apps.

Point d) is the one I have the most issues with, although it's nothing new either: unmasking users via barely fleshed-out subpoenas or dragnet orders has been a thing for many, many years now.

This thing impacts gatekeepers - so not your small mom-and-pop startup, but billion-dollar companies. They can afford to hire proper moderation staff to handle such complaints; they just don't want to because it would hurt their bottom line - at the cost of everyone affected by AI slop.

[1] https://eucrim.eu/news/rules-on-removing-terrorist-content-o...

[2] https://www.lto.de/recht/nachrichten/n/vizr6424-bgh-renate-k...

johngladtj ◴[] No.44363794[source]
None of which is acceptable
mschuster91[dead post] ◴[] No.44363829[source]
[flagged]
AnthonyMouse ◴[] No.44364422[source]
The fallacy is in expecting corporations to play the role of the government.

Suppose someone posts a YouTube video that you claim is defamatory. How is Google supposed to know if it is or not? It could be entirely factual information that you're claiming is false because you don't want to be embarrassed by the truth. Google is not a reasonable forum for third parties to adjudicate legal disputes because they have no capacity to ascertain who is lying.

What the government is supposed to be doing in these cases is investigating crimes and bringing charges against the perpetrators. Only then does it have to incur the costs of investigating the things it wants to pass laws against, and take the blame when charges are brought against people who turn out to be innocent, and so on.

So instead the politicians want to pass the buck, and pretend it's an outrage when corporations - which have neither the obligation nor the capacity to be the police - predictably fail in a role that was never theirs.