
707 points | patd | 1 comment
tuna-piano No.23322986
There's an unsolved conundrum I haven't heard mentioned yet.

After the 2016 election, there was a sense that too much false information was spreading on social media. This happens in every country and across every form of communication, but social media platforms seem particularly worrisome (and the problem is especially bad with WhatsApp forwards in some Asian countries).

So what should the social media companies do? Censor people? Disallow certain messages (like they do with terrorism related posts)?

They settled on attaching fact-check links to certain posts. Trust in the institution that decides the facts will, of course, be difficult to establish. No one wants a ministry of truth (or its private-sector alternative).

So the question remains: do you lessen the spread of misinformation at all, and if so, how?

replies(18)
ken No.23332039
Twitter created a special rule [1] for public officials who violate their Terms of Service. They feel there's a genuine "public interest" in being able to see (and respond to) these communications, even though they would not normally be allowed on the platform.

Are people aware that there are two classes of users on Twitter, subject to different sets of rules? Twitter hides this fact for some reason, but it ought to be glaringly obvious to anyone viewing a user's tweets.

[1]: https://blog.twitter.com/en_us/topics/company/2019/publicint...