
707 points patd | 9 comments
Traster ◴[] No.23322571[source]
I think this discussion thread is almost inevitably going to be a shitshow, but anyway:

There are people who advocate the idea that private companies should be compelled to distribute hate speech, dangerously factually incorrect information, and harassment, on the theory that free speech should be applied universally rather than just to government. I don't agree; I think it's a vast over-reach, and it's almost unachievable to have both perfect free speech on these platforms and actually run them as a viable business.

But let's lay that aside, those people who make the argument claim to be adhering to an even stronger dedication to free speech. Surely, it's clear here that having the actual head of the US government threatening to shut down private companies for how they choose to manage their platforms is a far more disturbing and direct threat against free speech even in the narrowest sense.

kgin ◴[] No.23328982[source]
I think it's even more concerning than that.

Threatening to shut down private companies -- not for limiting speech, not for refusing to distribute speech -- but for exercising their own right to free speech alongside the free speech of others (in this case the president).

There is no right to unchallenged or un-responded-to speech, regardless of how you interpret the right to free speech.

mc32 ◴[] No.23329735[source]
Attaching a disclaimer to the speech of another, though, is not straightforward. Will they get into the business of fact checking everyone over a certain number of followers? Will they do it impartially, worldwide? How can they even be impartial worldwide, given the contradictory points of view that are valid from both sides? Cyprus? What's the take there?
Talanes ◴[] No.23330175[source]
What requires them to be impartial?
1. moralestapia ◴[] No.23330343[source]
It's not a "requirement", but by policing/editing content (beyond what is explicitly illegal) you open yourself up to a whole new set of obligations/liabilities that no one really wants to deal with.

IANAL but an example could be:

Someone posts a pirated ebook on their Facebook profile. Facebook can hide behind the "yeah, but it was the user" safe harbor.

vs.

Someone posts a pirated ebook on a Facebook profile, Facebook staff think it's cool and put it in a special themed section called "Pirate picks from today". They will be in trouble.

2. zem ◴[] No.23331240[source]
they didn't police anything; the guy's tweet got posted without any sort of gatekeeping

they didn't edit anything; what he posted was very clear, and it was his exact words as written. there weren't even any dark UI patterns to make it look like the fact check was part of what he said.

3. moralestapia ◴[] No.23331321[source]
I don't know about "they" and "the guy". I was just explaining why, in general, content providers prefer to stay out of trouble ...
4. mc32 ◴[] No.23331668[source]
Maybe they’ll do the same for their advertisers too? Maybe they’ll fact check UBI? Etc...
5. gpm ◴[] No.23332096[source]
1. You're generally wrong except in some special cases

2. Twitter already automatically adds posts to special themed sections called "What's happening", so even if you were right there is no added liability.

3. Adding fact checking is not adding things to special themed sections, so this is off topic.

6. ashtonkem ◴[] No.23332729[source]
The legal obligation that platforms be neutral in order to not be liable for content on their website is a complete and utter fabrication in the public's mind, and has no basis in US law.
7. ashtonkem ◴[] No.23332759{3}[source]
That would be their right, yes.
8. wahern ◴[] No.23333152[source]
It's not a complete fabrication, though to be sure the present debate has been painstakingly engineered over several years by multiple political factions.

Section 230 of the 1996 Communications Decency Act, which immunized "interactive computer service[s]" from tort liability, was passed in response to a 1995 NY state court case that found Prodigy liable for statements posted on a forum by one of its users. See https://en.wikipedia.org/wiki/Stratton_Oakmont,_Inc._v._Prod.... The test the judge employed in that case was the degree to which Prodigy exercised editorial control over user-posted content, and the judge found that Prodigy exercised sufficient editorial control that its effective discretion in failing to remove the statement made it liable.

If Section 230 were revoked, that test of editorial control would presumably become the law in many if not most U.S. states, since, AFAIU, the test wasn't created out of whole cloth but rooted in well-established precedent. Some states might go another way, but I doubt it; categorical, bright-line limitations on liability are an unusual feature of judge-made law (which emphasizes fairness in the context of the particular parties, with less weight given to hypotheticals about society-wide impacts) and are typically created by statute.

Other jurisdictions seem to have ended up applying very similar rules to the NY court's, and even supposed Section 230 analogs (e.g. EU Directive 2000/31/EC) seem closer in practical effect to the NY rule than to Section 230's strong, categorical protections. Manifest editorial control seems like a sensible test for deciding when a failure to remove constitutes negligence; sensible, at least, if you're going to depart from strong Section 230-like protections. But absent a national rule, I would expect significant variance in the degree of control required. In any event, massive sites like Twitter and Facebook might be faced with some stark choices--go all-in on censorship, or take a completely hands-off approach a la Usenet.

9. ashtonkem ◴[] No.23333368{3}[source]
You're correct that there is some basis for it in past precedent, which was corrected in Section 230. But given the law as it stands today, the idea is completely false (even if it's not completely made up).