

276 points leonry | 24 comments
Arubis ◴[] No.41889117[source]
Best of luck to the author! My understanding is that anything that makes large file sharing easy and anonymous rapidly gets flooded with CSAM and ends up shuttering themselves for the good of all. Would love to see a non-invasive yet effective way to prevent such an incursion.
replies(10): >>41889269 #>>41889987 #>>41890019 #>>41890075 #>>41890376 #>>41890531 #>>41890775 #>>41892233 #>>41893466 #>>41896754 #
1. jart ◴[] No.41893466[source]
If governments and big tech want to help, they should upload one of their CSAM detection models to Hugging Face, so system administrators can just block it. Ideally I should be able to run a command `iscsam 123.jpg` and it prints a number like 0.9 to indicate 90% confidence that it is. No one else but them can do it, since there's obviously no legal way to train such a model. Even though we know that governments have already done it. If they won't give service operators the tools to keep abuse off their communications systems, then operators shouldn't be held accountable for what people do with them.
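A minimal sketch of what such a command could look like, assuming a released classifier actually existed on Hugging Face (the model id and label below are placeholders, not a real repository):

```python
#!/usr/bin/env python3
"""Hypothetical `iscsam` CLI. No such public model exists; "gov/csam-detector"
is a placeholder model id, not a real Hugging Face repository."""
import sys
from transformers import pipeline

classifier = pipeline("image-classification", model="gov/csam-detector")

def score(path: str) -> float:
    # Return the confidence assigned to the (assumed) positive label, 0.0-1.0.
    results = classifier(path)
    return next((r["score"] for r in results if r["label"] == "positive"), 0.0)

if __name__ == "__main__":
    print(f"{score(sys.argv[1]):.2f}")  # e.g. `iscsam 123.jpg` -> 0.90
```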
replies(4): >>41893921 #>>41894046 #>>41894311 #>>41898004 #
2. kevindamm ◴[] No.41893921[source]
The biggest risk with opening a tool like that is that it potentially enables offenders to figure out what can get past it.
replies(3): >>41894409 #>>41895156 #>>41909786 #
3. blackoil ◴[] No.41894046[source]
Perpetrators will keep tweaking the image till they get a score of 0.1
replies(2): >>41894419 #>>41895566 #
4. miki123211 ◴[] No.41894311[source]
This would potentially let somebody create a "reverse" model, so I don't think that's a good idea.

Imagine an image generation model whose loss function is essentially "make this other model classify your image as CSAM."

I'm not entirely convinced whether it would create actual CSAM instead of adversarial examples, but we've seen other models of various kinds "reversed" in a similar vein, so I think there's quite a bit of risk there.

replies(1): >>41894453 #
5. jart ◴[] No.41894409[source]
So they publish an updated model every three months that works better.
6. amelius ◴[] No.41894419[source]
How about the government running a service where you can ask them to validate an image?

Trying to tweak an image will not work because you will find the police on your doorstep.

replies(2): >>41894533 #>>41895184 #
7. jart ◴[] No.41894453[source]
Are you saying someone will use it to create a CSAM generator? It'd be like turning smoke detectors into a nuclear bomb. If someone that smart wants this, then there are easier ways for them to do it. Analyzing the detector could let you tune normal images in an adversarial way that'll cause them to be detected as CSAM by a specific release of a specific model. So long as you're not using the model to automate swatting, that's not going to amount to much more than a DEFCON talk about annoying people.
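For context, the per-release adversarial tuning described here is the standard adversarial-example setup: with a specific model's weights in hand, a benign image can be nudged toward whatever class that frozen model flags. A minimal FGSM-style sketch, where the classifier and target class index are assumptions rather than anything from a real detector:

```python
import torch
import torch.nn.functional as F

def adversarial_nudge(model, image, target_class, eps=0.01):
    """One FGSM-style step: perturb an image so that one specific, frozen
    classifier becomes more confident in `target_class`. `model` and
    `target_class` are assumptions, not any real detector."""
    image = image.clone().detach().requires_grad_(True)
    logits = model(image.unsqueeze(0))                 # shape [1, num_classes]
    loss = F.cross_entropy(logits, torch.tensor([target_class]))
    loss.backward()
    # Stepping against the gradient lowers the loss, i.e. raises the
    # target-class score, while the clamp keeps pixel values valid.
    return (image - eps * image.grad.sign()).clamp(0, 1).detach()
```

Because the perturbation is fit to one set of weights, the quarterly model refresh suggested upthread would generally break it.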
replies(1): >>41895099 #
8. jart ◴[] No.41894533{3}[source]
The government doesn't need more dragnet surveillance capabilities than it already has. Also this solution basically asks the service operator to upload illegal content to the government. So there would need to be a strong guarantee they wouldn't use that as proof the service operator has committed a crime. Imagine what they would do to Elon Musk if he did that to run X. The government is also usually incompetent at running reliable services.
replies(1): >>41895790 #
9. throwaway290 ◴[] No.41895099{3}[source]
I think the point is generating an image that looks normal but causes the model to produce a false positive, so the unsuspecting person then gets reported.
replies(1): >>41898948 #
10. marpstar ◴[] No.41895156[source]
Fair point, but wouldn’t we rather they be spending their time doing that than actively abusing kids?
11. charrondev ◴[] No.41895184{3}[source]
My understanding is that Microsoft runs such a tool (PhotoDNA) and you can request access to it. As I understand it, you hash an image, send it to them, and get back a response.
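The exact PhotoDNA API isn't public, so the following only shows the shape of that hash-then-check flow, with a generic perceptual hash standing in for PhotoDNA's proprietary one and a made-up endpoint:

```python
import imagehash                 # generic perceptual hash as a stand-in
import requests
from PIL import Image

# Hypothetical endpoint; the real PhotoDNA Cloud Service is access-gated
# and its hash algorithm is proprietary.
MATCH_SERVICE = "https://example.invalid/photodna/match"

def check_image(path: str, api_key: str) -> dict:
    digest = str(imagehash.phash(Image.open(path)))
    resp = requests.post(
        MATCH_SERVICE,
        json={"hash": digest},
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()           # e.g. {"match": false} in this sketch
```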
12. baby_souffle ◴[] No.41895566[source]
> Perpetrators will keep tweaking the image till they get a score of 0.1

Isn't this - more or less - already happening?

Perpetrators that don't find _some way_ of creating/sharing csam that's low risk get arrested. The "fear of being in jail" is already driving these people to invent/seek out ways to score a 0.1.

13. bigfudge ◴[] No.41895790{4}[source]
"The government" in the UK already basically shields big internet operators from legal responsibility from showing teenagers how to tie a ligature. But I wouldn't characterise them as the problem — more public oversight or at least transparency of the behaviour of online operators who run services used by thousands of minors might not be a bad thing. The Musk comment also speaks to a paranoia that just isn't justified by anything that has happened in the past 10 years. The EU is in fact the only governmental organisation doing anything to constrain the power of billionaires to distort and control our public discourse through mass media and social media ownership.
replies(1): >>41901916 #
14. tonetegeatinst ◴[] No.41898004[source]
Pretty sure Apple already scans your photos for csam, so the best way would be to just throw any files a user plans on sharing into some folder an iPhone or iMac has access to.
15. jart ◴[] No.41898948{4}[source]
If you have a csam detection model that can run locally, the vast majority of sysadmins who use it will just delete the content and ban whoever posted it. Why would they report someone to the police? If you're running a file sharing service, you probably don't even know the identities of your users. You could try looking up the user IP on WHOIS and emailing the abuse contact, but chances are no one is listening and no one will care. What's important is that (1) it'll be harder to distribute this material, (2) service operators who are just trying to build and innovate will be able to easily protect themselves with minimal cost.
replies(2): >>41899968 #>>41900646 #
16. halJordan ◴[] No.41899968{5}[source]
You are mandated to report what you find. If the g-men find out you've not only been failing to report crimes, but also destroying the evidence, they will come after you.
replies(4): >>41900472 #>>41900503 #>>41900625 #>>41900671 #
17. ◴[] No.41900472{6}[source]
18. jart ◴[] No.41900503{6}[source]
Wow. I had no idea. That would explain why no one's uploaded a csam detection model to Hugging Face yet. Smartest thing to do then is probably use a model for detecting nsfw content and categorically delete the superset. Perhaps this is the reason the whole Internet feels like LinkedIn these days.
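A minimal sketch of that "detect NSFW and delete the superset" approach, assuming an openly available NSFW classifier on Hugging Face; the model choice and threshold below are illustrative assumptions, not a recommendation:

```python
from pathlib import Path
from transformers import pipeline

# One openly available NSFW classifier; model choice and the 0.8
# threshold are illustrative, not an endorsement.
detector = pipeline("image-classification",
                    model="Falconsai/nsfw_image_detection")

def moderate(upload_dir: str, threshold: float = 0.8) -> None:
    for path in Path(upload_dir).glob("*.jpg"):
        scores = {r["label"]: r["score"] for r in detector(str(path))}
        if scores.get("nsfw", 0.0) >= threshold:
            path.unlink()                 # delete the whole NSFW superset
            print(f"removed {path}")
```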
19. dragonwriter ◴[] No.41900625{6}[source]
Note that this is specific to CSAM, not crimes in general. Specifically, online service providers are required to report any detected actual or imminent violation of laws regarding child sex abuse (including CSAM) and there are substantial fines for violations of the reporting requirement.

https://www.law.cornell.edu/uscode/text/18/2258A

20. throwaway290 ◴[] No.41900646{5}[source]
Someone sends you a meme that looks like a meme, you share it through a messenger, the meme looks like something else to the messenger, and the messenger reports you to NCMEC. It's NOT the police, but they can forward it to the police. As a side effect, NCMEC gets overloaded, helping more real abuse continue.
21. throwaway290 ◴[] No.41900671{6}[source]
Not "crimes". Child sexual exploitation related crimes specifically.

And not "you" unless you are operating a service and this evidence is found in your systems.

This is how "g-men" misinformation is born.

22. jart ◴[] No.41901916{5}[source]
You mean the government of Prince Andrew?

Yeah I think I understand now why they want the csam so badly.

replies(1): >>41903528 #
23. bigfudge ◴[] No.41903528{6}[source]
I don't understand this comment. Are you implying Prince Andrew was _in_ or part of the UK Government? This would be a weird misunderstanding of our system.

If it's just a general cynical "all gubernment is bad and full of pedos" then I'm not sure what the comment adds to this discussion.

24. cr125rider ◴[] No.41909786[source]
So security by obscurity? Man, open source software must suck…