
276 points leonry | 56 comments
1. Arubis ◴[] No.41889117[source]
Best of luck to the author! My understanding is that anything that makes large file sharing easy and anonymous rapidly gets flooded with CSAM and ends up shuttering themselves for the good of all. Would love to see a non-invasive yet effective way to prevent such an incursion.
replies(10): >>41889269 #>>41889987 #>>41890019 #>>41890075 #>>41890376 #>>41890531 #>>41890775 #>>41892233 #>>41893466 #>>41896754 #
2. KomoD ◴[] No.41889269[source]
> ends up shuttering themselves for the good of all

mostly because it's difficult to handle all the abuse reports

replies(1): >>41889478 #
3. aranelsurion ◴[] No.41889478[source]
I wonder how that'll play out in this case, since everything uploaded here expires after at most 3 days. Maybe they can "handle" abuse reports by simply auto-responding after 3 days that the file has been removed.
4. Vinnl ◴[] No.41889987[source]
I've been using this version for a while, presumably it's just gone under the radar enough. So please don't upvote this too much, haha.
5. ghostly_s ◴[] No.41890019[source]
If it's truly e2e, how would they even know what's being shared on it?
replies(1): >>41890094 #
6. chasil ◴[] No.41890075[source]
I have been using both Swisstransfer.com and filetransfer.io since Firefox Send shut down.

How have they dealt with this?

7. immibis ◴[] No.41890094[source]
Because some people would tell them. For example, the FBI would look at a child porn sharing forum and observe a lot of people sharing Send links. Then they would go to the operators of Send servers, and "strongly suggest" that it should shut down.
replies(1): >>41890553 #
8. plingbang ◴[] No.41890376[source]
For a case when file sharing is intended between individuals or small groups there's an easy solution:

Anyone who got the link should be able to delete the file.

This should deter one from using the file sharing tool as free hosting for possibly bad content. One can also build a bot that deletes every file it finds linked on the public internet.

replies(3): >>41890444 #>>41891261 #>>41894038 #
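A minimal sketch of the scheme plingbang describes (a hedged illustration, not any actual service's API): the unguessable token in the link is the only capability, and it grants deletion as well as download. Flask, the routes, and all names here are assumptions.

    # Sketch: "anyone with the link can delete the file".
    # Assumes Flask; storage and route names are illustrative only.
    import secrets
    from flask import Flask, request, abort

    app = Flask(__name__)
    files = {}  # token -> file bytes (a real service would use disk/object storage)

    @app.post("/upload")
    def upload():
        token = secrets.token_urlsafe(16)  # unguessable link token
        files[token] = request.get_data()
        return {"link": f"/f/{token}"}

    @app.get("/f/<token>")
    def download(token):
        if token not in files:
            abort(404)
        return files[token]

    @app.delete("/f/<token>")
    def delete(token):
        # Possession of the link is the capability: the same token
        # that lets you download also lets you delete.
        files.pop(token, None)
        return {"deleted": True}

Since deletion needs no separate credential, anyone who finds the link in public, including an abuse-hunting bot, can take the file down.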
9. giancarlostoro ◴[] No.41890444[source]
That then ruins perfectly valid use cases, since anyone with the link could maliciously delete the file.
replies(1): >>41890478 #
10. atoav ◴[] No.41890478{3}[source]
But it allows sending. That might be an okay tradeoff, depending on what you're aiming for.

Anonymous file hosting isn't something I'd be keen to offer, given the number of people who would happily abuse it.

replies(1): >>41891532 #
11. lovethevoid ◴[] No.41890531[source]
For Firefox Send, it was actually malware and spearphishing attacks that were being spread.

The combination of limited file availability (reducing the ability to report bad actors) and Firefox URLs being inherently trusted within orgs (bypassing a lot of basic email/file filtering and scanning) was the reason it became so popular with criminals, as we've seen in the spearphishing attacks in India[1].

[1]: https://www.amnesty.org/en/latest/research/2020/06/india-hum...

12. KomoD ◴[] No.41890553{3}[source]
> and "strongly suggest" that it should shut down.

I don't know about that. Is there any documented case of it?

I feel like they'd probably just contact them and ask for removal of the file(s) and to forward any logs?

replies(1): >>41892100 #
13. neilv ◴[] No.41890775[source]
Do we know whether this uploading is motivated by actual pedo reasons, by anti-pedo honeypot reasons, by sociopathic trolling reasons, by sabotage reasons (state, or commercial), or something else?

It's discouraging to think that privacy&security solutions for good people might end up being used primarily by bad people, but I don't know whether that's the situation, nor what the actual numbers are.

replies(1): >>41891039 #
14. Barrin92 ◴[] No.41891039[source]
It is just pedophiles. A user posted here on HN a while ago that they ran a Tor exit node and the overwhelming majority of the traffic was CSAM or other cybercrime. Here in Germany they busted some underground forum and a single individual had 35TB of it at home. There's no great conspiracy; the criminal underworld is huge, and it uses every service that doesn't clamp down on it in some form.
replies(3): >>41891697 #>>41894830 #>>41896624 #
15. ipaddr ◴[] No.41891261[source]
Or the link expires after a download.
replies(1): >>41893564 #
16. ajsnigrutin ◴[] No.41891532{4}[source]
But people would abuse the delete button too.

Imagine some computer work with a class of high school kids, where a teacher has to send them a file... there would be maybe three full downloads, max, before someone presses the "delete" button.

replies(2): >>41891687 #>>41892852 #
17. terribleperson ◴[] No.41891687{5}[source]
For a lot of use cases, simply sending the address of the deleter to whoever sent the file would suffice. Next time, just don't send it to them, or apply real-world consequences.

Sure, it wouldn't work for a large public setting... but it'd work for many other settings.

18. neilv ◴[] No.41891697{3}[source]
That's discouraging, if true. I don't mind doing the occasional on-principle thing (like running Tor Browser for general-purpose browsing, and Tor as the VPN plugin on my phone), and maybe it's a citizen-technologist obligation to do my share of that. But I'd rather not have some Orwellian Bayesian system flagging me for using Tor.
replies(1): >>41894491 #
19. jasonjayr ◴[] No.41892100{4}[source]
"We know that this link includes material that is illegal to possess, and it is on your server."

"We don't know the contents of the files on our server, so we can't know that is was illegal"

"Fine, delete that file, and we won't charge you for possession this time. Now that you know your service is used for this illegal material, you need to stop hosting material like that."

"How, if we don't know what's in the file sent to our server?"

"... maybe don't take random files you don't know about, and share them on the open web with anonymous users?"

replies(1): >>41893517 #
20. qudat ◴[] No.41892233[source]
Check out https://pipe.pico.sh, a system for networked Unix pipes over ssh.
21. bastawhiz ◴[] No.41892852{5}[source]
Sending files anonymously and sending files easily seem like mutually exclusive goals. If it's easy and anonymous, it's too easy to abuse. The teacher should just be using file storage that is tied to an account: it's not as though they're trying to stay hidden from their students.
22. jart ◴[] No.41893466[source]
If governments and big tech want to help, they should upload one of their CSAM detection models to Hugging Face so system administrators can just block it. Ideally I should be able to run a command like `iscsam 123.jpg` and have it print a number like 0.9 to indicate 90% confidence. No one but them can do it, since there's obviously no legal way to train such a model, even though we know that governments have already done it. If they won't give service operators the tools to keep abuse off their communications systems, then operators shouldn't be held accountable for what people do with them.
replies(4): >>41893921 #>>41894046 #>>41894311 #>>41898004 #
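A rough sketch of the CLI jart is asking for. No public CSAM model exists (which is the complaint), so this substitutes an openly available NSFW classifier from Hugging Face as a stand-in; the model name and the "nsfw" label it emits are assumptions about that particular checkpoint.

    # Sketch of an `iscsam`-style scorer; prints a confidence in [0, 1].
    # Uses an open NSFW classifier as a stand-in, since no public
    # CSAM-detection model exists.
    import sys
    from transformers import pipeline

    def main():
        clf = pipeline("image-classification",
                       model="Falconsai/nsfw_image_detection")
        results = clf(sys.argv[1])  # e.g. `python iscsam.py 123.jpg`
        # The pipeline returns [{"label": ..., "score": ...}, ...];
        # print the confidence assigned to the "nsfw" label.
        score = next((r["score"] for r in results if r["label"] == "nsfw"), 0.0)
        print(f"{score:.2f}")

    if __name__ == "__main__":
        main()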
23. snowe2010 ◴[] No.41893517{5}[source]
That's not how CSAM reporting works at all. You aren't punished for CSAM being on your server, as long as you do something about it. You can easily set up Cloudflare to block CSAM and report it to the NCMEC (not the FBI), and it will all be handled automatically.
replies(1): >>41893897 #
24. tommica ◴[] No.41893564{3}[source]
Sucks for people that have a shitty connection
replies(1): >>41894106 #
25. immibis ◴[] No.41893897{6}[source]
Maybe a judge would see it that way, but FBI agents aren't required to act like judges.
replies(1): >>41895731 #
26. kevindamm ◴[] No.41893921[source]
The biggest risk with opening a tool like that is that it potentially enables offenders to figure out what can get past it.
replies(3): >>41894409 #>>41895156 #>>41909786 #
27. PoignardAzur ◴[] No.41894038[source]
Oh, that's pretty clever!
28. blackoil ◴[] No.41894046[source]
Perpetrators will keep tweaking the image until they get a score of 0.1.
replies(2): >>41894419 #>>41895566 #
29. aembleton ◴[] No.41894106{4}[source]
The server would know when a download has completed.
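A sketch of what aembleton means, again assuming a Flask-style server (names illustrative): the file is deleted only when the generator finishes streaming every byte, so a dropped connection doesn't burn the link.

    # Sketch: expire the link only after a *completed* download.
    import os
    from flask import Flask, Response, abort

    app = Flask(__name__)
    STORE = "/tmp/send-store"  # hypothetical storage directory

    @app.get("/f/<token>")
    def download(token):
        path = os.path.join(STORE, token)
        if not os.path.isfile(path):
            abort(404)

        def stream():
            with open(path, "rb") as f:
                while chunk := f.read(64 * 1024):
                    yield chunk
            # Reached only if the loop finished: an aborted connection
            # raises GeneratorExit at the yield and skips the delete.
            os.remove(path)

        return Response(stream(), mimetype="application/octet-stream")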
30. miki123211 ◴[] No.41894311[source]
This would potentially let somebody create a "reverse" model, so I don't think that's a good idea.

Imagine an image generation model whose loss function is essentially "make this other model classify your image as CSAM."

I'm not entirely convinced whether it would create actual CSAM instead of adversarial examples, but we've seen other models of various kinds "reversed" in a similar vein, so I think there's quite a bit of risk there.

replies(1): >>41894453 #
31. jart ◴[] No.41894409{3}[source]
So they publish an updated model every three months that works better.
32. amelius ◴[] No.41894419{3}[source]
How about the government running a service where you can ask them to validate an image?

Trying to tweak an image will not work because you will find the police on your doorstep.

replies(2): >>41894533 #>>41895184 #
33. jart ◴[] No.41894453{3}[source]
Are you saying someone will use it to create a CSAM generator? It'd be like turning smoke detectors into a nuclear bomb. If someone that smart wants this, then there are easier ways for them to do it. Analyzing the detector could let you tune normal images in an adversarial way that'll cause them to be detected as CSAM by a specific release of a specific model. So long as you're not using the model to automate swatting, that's not going to amount to much more than a DEFCON talk about annoying people.
replies(1): >>41895099 #
34. 1oooqooq ◴[] No.41894491{4}[source]
Run a node, but never an exit node.
35. jart ◴[] No.41894533{4}[source]
The government doesn't need more dragnet surveillance capabilities than it already has. Also this solution basically asks the service operator to upload illegal content to the government. So there would need to be a strong guarantee they wouldn't use that as proof the service operator has committed a crime. Imagine what they would do to Elon Musk if he did that to run X. The government is also usually incompetent at running reliable services.
replies(1): >>41895790 #
36. jmorenoamor ◴[] No.41894830{3}[source]
Naive question: isn't Tor supposed to be private? How can you know the contents of the communication just by running a node?
replies(1): >>41894899 #
37. LikesPwsh ◴[] No.41894899{4}[source]
If it's an exit node, you know what the user is connecting to but not which user.
38. throwaway290 ◴[] No.41895099{4}[source]
I think the point is generating an image that looks normal but causes the model to produce a false positive, so the unsuspecting person who shares it gets reported.
replies(1): >>41898948 #
39. marpstar ◴[] No.41895156{3}[source]
Fair point, but wouldn’t we rather they be spending their time doing that than actively abusing kids?
40. charrondev ◴[] No.41895184{4}[source]
My understanding is that Microsoft runs such a tool, PhotoDNA, and you can request access to it. As I understand it, you hash an image, send the hash to them, and get back a response.
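PhotoDNA itself is proprietary and access-gated, but the hash-and-compare flow charrondev describes looks roughly like this, using the open `imagehash` library's perceptual hash as a stand-in (the blocklist hash below is made up):

    # Sketch of hash-and-compare matching against known images.
    # PhotoDNA is proprietary, so this uses `imagehash` instead.
    import imagehash
    from PIL import Image

    # Hypothetical blocklist of known-bad perceptual hashes; in the
    # real flow the matching happens on the vendor's side, not yours.
    BLOCKLIST = {imagehash.hex_to_hash("8f373714acfcf4d0")}

    def matches_known_bad(path: str, max_distance: int = 5) -> bool:
        h = imagehash.phash(Image.open(path))
        # Perceptual hashes compare by Hamming distance, so resized
        # or re-encoded near-duplicates still match.
        return any(h - bad <= max_distance for bad in BLOCKLIST)

    print(matches_known_bad("123.jpg"))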
41. baby_souffle ◴[] No.41895566{3}[source]
> Perpetrators will keep tweaking image till they get score of 0.1

Isn't this - more or less - already happening?

Perpetrators who don't find _some way_ of creating/sharing CSAM that's low-risk get arrested. The "fear of being in jail" is already driving these people to invent or seek out ways to score a 0.1.

42. snowe2010 ◴[] No.41895731{7}[source]
The FBI literally only investigates if the NCMEC tells them to.

https://www.fbi.gov/investigate/violent-crime/vcac

43. bigfudge ◴[] No.41895790{5}[source]
"The government" in the UK already basically shields big internet operators from legal responsibility from showing teenagers how to tie a ligature. But I wouldn't characterise them as the problem — more public oversight or at least transparency of the behaviour of online operators who run services used by thousands of minors might not be a bad thing. The Musk comment also speaks to a paranoia that just isn't justified by anything that has happened in the past 10 years. The EU is in fact the only governmental organisation doing anything to constrain the power of billionaires to distort and control our public discourse through mass media and social media ownership.
replies(1): >>41901916 #
44. immibis ◴[] No.41896624{3}[source]
How would they know the traffic was CSAM? Traffic passing through an exit node is traffic between a Tor user and the open Internet. Who's running CSAM servers on the open Internet instead of as hidden services? And without HTTPS, especially when Tor Browser enforces HTTPS? This story doesn't add up at all.

They did bust a site owner despite Tor, though. That story's true.

45. INTPenis ◴[] No.41896754[source]
I had a Send instance exposed online for years, but I changed it to 1-day retention and never had any issues.

It was literally just for sending large files between friends, so more than 1 day was redundant.

46. tonetegeatinst ◴[] No.41898004[source]
Pretty sure Apple already scans your photos for CSAM, so the best way would be to just throw any files a user plans on sharing into some folder an iPhone or iMac has access to.
47. jart ◴[] No.41898948{5}[source]
If you have a CSAM detection model that can run locally, the vast majority of sysadmins who use it will just delete the content and ban whoever posted it. Why would they report someone to the police? If you're running a file sharing service, you probably don't even know the identities of your users. You could try looking up the user's IP on WHOIS and emailing the abuse contact, but chances are no one is listening and no one will care. What's important is that (1) it'll be harder to distribute this material, and (2) service operators who are just trying to build and innovate will be able to protect themselves easily and at minimal cost.
replies(2): >>41899968 #>>41900646 #
48. halJordan ◴[] No.41899968{6}[source]
You are mandated to report what you find. If the g-men find out you've not only been failing to report crimes but also destroying the evidence, they will come after you.
replies(4): >>41900472 #>>41900503 #>>41900625 #>>41900671 #
49. ◴[] No.41900472{7}[source]
50. jart ◴[] No.41900503{7}[source]
Wow. I had no idea. That would explain why no one's uploaded a CSAM detection model to Hugging Face yet. The smartest thing to do, then, is probably to use a model that detects NSFW content and categorically delete the superset. Perhaps this is the reason the whole Internet feels like LinkedIn these days.
51. dragonwriter ◴[] No.41900625{7}[source]
Note that this is specific to CSAM, not crimes in general. Specifically, online service providers are required to report any detected actual or imminent violation of laws regarding child sex abuse (including CSAM) and there are substantial fines for violations of the reporting requirement.

https://www.law.cornell.edu/uscode/text/18/2258A

52. throwaway290 ◴[] No.41900646{6}[source]
Someone sends you a meme that looks like a meme; you share it through a messenger; the meme looks like something else to the messenger, and the messenger reports you to NCMEC. NCMEC is NOT the police, but they can forward reports to the police. As a side effect, NCMEC gets overloaded, letting more real abuse continue.
53. throwaway290 ◴[] No.41900671{7}[source]
Not "crimes". Child sexual exploitation related crimes specifically.

And not "you" unless you are operating a service and this evidence is found in your systems.

This is how "g-men" misinformation of born

54. jart ◴[] No.41901916{6}[source]
You mean the government of Prince Andrew?

Yeah, I think I understand now why they want the CSAM so badly.

replies(1): >>41903528 #
55. bigfudge ◴[] No.41903528{7}[source]
I don't understand this comment. Are you implying Prince Andrew was _in_ or part of the UK Government? This would be a weird misunderstanding of our system.

If it's just a general cynical "all gubernment is bad and full of pedos" then I'm not sure what the comment adds to this discussion.

56. cr125rider ◴[] No.41909786{3}[source]
So security by obscurity? Man, open source software must suck…