
707 points | patd | 1 comment
tuna-piano No.23322986
There's an unsolved conundrum I haven't heard mentioned yet.

After the 2016 election, there was a sense that too much false information was spreading on social media. This happens in every country and across every form of communication, but social media platforms seem particularly worrisome (and the problem is especially bad with WhatsApp forwards in some Asian countries).

So what should the social media companies do? Censor people? Disallow certain messages (as they already do with terrorism-related posts)?

They settled on attaching fact-check links to certain posts. Trust in the fact-deciding institution will of course be difficult to establish. No one wants a ministry of truth (or the private alternative).

So the question remains: do you lessen the spread of misinformation at all, and if so, how?

replies(18): >>23323009 #>>23323114 #>>23323171 #>>23323197 #>>23323227 #>>23323242 #>>23323333 #>>23323641 #>>23326587 #>>23326935 #>>23326948 #>>23327037 #>>23328316 #>>23330258 #>>23330933 #>>23331696 #>>23332039 #>>23472188 #
1. three_seagrass No.23328316
The NYT is running a special podcast series on exactly this topic right now.

In one episode, they interview the CEO of YouTube about what the company is doing to stop the spread of misinformation on content platforms like its own.

Her response is that they no longer tune their recommendation models and carousels on engagement alone; they also factor in potential harm or impact, because common misinformation thrives precisely by being highly engaging. The clearest example is how YouTube is handling Covid-19 misinformation: the "COVID-19 news" carousel on the home page doesn't get much engagement, but it's important for keeping people informed.
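As a rough sketch of that shift (every name, score, and the penalty weight below is a made-up assumption for illustration, not YouTube's actual system), a recommender moving off pure engagement might re-rank candidates by discounting predicted engagement with an estimated harm score:

    # Hypothetical sketch only: a harm-aware re-ranker. The scores and
    # penalty weight are invented; real systems are far more complex.
    from dataclasses import dataclass

    @dataclass
    class Candidate:
        video_id: str
        engagement: float  # e.g. predicted watch probability, 0..1
        harm: float        # e.g. misinformation-classifier output, 0..1

    def rank(candidates, harm_penalty=2.0):
        # Engagement minus a weighted harm term: with harm_penalty > 1,
        # a highly engaging but likely-harmful video can rank below a
        # less engaging, benign one.
        return sorted(candidates,
                      key=lambda c: c.engagement - harm_penalty * c.harm,
                      reverse=True)

    items = [
        Candidate("conspiracy-clip", engagement=0.9, harm=0.8),      # score -0.7
        Candidate("covid-news-carousel", engagement=0.3, harm=0.0),  # score  0.3
    ]
    for c in rank(items):
        print(c.video_id)  # covid-news-carousel first, despite lower engagement

The point isn't the exact formula, just that the ranking objective is no longer engagement alone.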

It's a good listen if you have the time: https://www.nytimes.com/2020/05/07/podcasts/rabbit-hole-yout...