
1061 points danso | 3 comments
partiallypro ◴[] No.23350905[source]
Twitter is well within its rights to do this, but I have seen tweets from blue check marks essentially calling for violence, and Twitter didn't remove them. So, does that mean Twitter actually -supports- those viewpoints now? If Twitter is going to police people, it needs to do so across the board. Otherwise it's just a weird kind of censorship that targets one person and can easily be seen as political.

Everyone is applauding this because they hate Trump, but take a step back and see the bigger picture. This could backfire in serious ways, and it plays into the narrative of Trump's base that the mainstream media and tech giants are colluding to silence conservatives (and maybe there is even some truth to that). I know the Valley is an echo chamber, so obviously no one is ever going to realize this.

replies(35): >>23350963 #>>23351063 #>>23351117 #>>23351215 #>>23351218 #>>23351256 #>>23351291 #>>23351365 #>>23351367 #>>23351370 #>>23351380 #>>23351415 #>>23351424 #>>23351434 #>>23351471 #>>23351559 #>>23351591 #>>23351631 #>>23351685 #>>23351712 #>>23351729 #>>23351776 #>>23351793 #>>23351887 #>>23351928 #>>23352027 #>>23352201 #>>23352388 #>>23352822 #>>23352854 #>>23352953 #>>23353440 #>>23353605 #>>23354917 #>>23355009 #
kingnight ◴[] No.23351063[source]
The valley being an echo chamber doesn’t necessarily mean those implementing this have their heads in the sand.

It can't all be done perfectly, but doing nothing, as they were before, may now be judged a worse outcome than attaching these annotations to flagrant misuse by the highest-impact profile, one they can't do away with entirely.

replies(2): >>23351131 #>>23351357 #
koheripbal ◴[] No.23351357[source]
The legal issue is that their protection from defamation and libel claims under section 230 requires them to moderate "in good faith". If they only selectively moderate accounts, then that protection may not survive in court.

...but I think a greater concern we can all agree on is that, for the type of communication Twitter carries, Twitter is effectively a monopoly. The people being censored here can't simply go to an alternative platform, because there is really no other platform at that scale for that content format.

...that's a bigger problem, because it gives Twitter the power to shape global communications unilaterally - something no corporation should be able to do.

I think, broadly, that censorship should be regulated by democratically elected bodies - not corporations.

replies(4): >>23351682 #>>23351693 #>>23351806 #>>23352127 #
gnopgnip ◴[] No.23351693[source]
There is no requirement under section 230 to moderate content in good faith. Selective moderation does not affect their liability. This law was passed democratically, for exactly this purpose.

>"No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." This federal law preempts any state laws to the contrary: "[n]o cause of action may be brought and no liability may be imposed under any State or local law that is inconsistent with this section."

replies(1): >>23351901 #
koheripbal ◴[] No.23351901[source]
It's literally written into the law in section 230 (c)(2)(A)...

> No provider or user of an interactive computer service shall be held liable on account of — any action voluntarily taken in __good faith__ to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected

...and that specific requirement has been specifically referenced in Trump's recent executive order.

replies(2): >>23351990 #>>23354650 #
1. hundt ◴[] No.23354650[source]
My understanding is that (c)(2)(A) is only about liability for the act of moderation itself, e.g. suing Twitter because they banned you. If they did that in bad faith, you could hypothetically sue them for it, but there would need to be a cause of action, and normally there wouldn't be one.

'gnopgnip may be referring to the much broader liability shield, for the content that you do not remove, which is provided by (c)(1) and has no good-faith requirement. That is Twitter's main "legal protection from defamation and libel" that you mention above.

Trump's executive order suggests that the (c)(1) liability shield could go away if you don't meet the (c)(2) good-faith requirements, which I gather is not considered a strong legal position.

replies(1): >>23355846 #
2. koheripbal ◴[] No.23355846[source]
The problem with selective enforcement is that anyone with a claim against Twitter can then argue that its moderation efforts are all in bad faith, precisely because they are selective - thereby exposing Twitter to libel liability.
replies(1): >>23356683 #
3. hundt ◴[] No.23356683[source]
Maybe, but did you read my post? You need a cause of action. It is not libel to moderate someone's tweets, so even if § 230 does not protect them you can't sue them for libel based on their moderation.