
45 points | gmays | 1 comment
klabb3 No.41917309
This is a really poorly informed article, almost unbearable to read due to its conflation of issues. Content moderation existed before modern AI. The article then claims that most moderation decisions are actually (exploited) human labor, which I find extremely difficult to believe, even with simpler classifiers. Yes, Amazon used human labor for its small-scale (and later shut down) stores; we have seen that trick used to drive product hype. That does not mean FB, Instagram, etc. use human labor for “nearly all decisions”. But even if they did, “AI” did not create the gore/CSAM/abuse content (not yet, anyway), nor the need to moderate the public, ad-driven cesspool that is social media. The article is talking about a different issue, with different economics and incentives.

There are a million things to criticize AI for, but this take is domain-illiterate: it simply draws a connection between whatever is hyped and fancy (currently AI) and poor working conditions in one part of the tech sector (content moderation).

Look, I’m sure the “data industry” has massive labor issues; heck, these companies treat their warehouse workers like crap. Maybe there are companies that exploit workers even more in order to train AI models. But the article is clearly about human-performed content moderation for social media.

Of all the things AI does, it is pretty good at determining what’s in an image or video. Personally, I think sifting through troves of garbage for abusive photos and videos (the most traumatizing material for workers) is one of the better applications for AI. (Then you’ll see another sob-story article about these people losing their jobs.)
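
As a rough sketch of what that looks like in practice (the Hugging Face pipeline API is real, but the specific model name is an assumption for illustration, not an endorsement):

    from transformers import pipeline

    # Pre-screening sketch: score each upload with an off-the-shelf
    # image classifier. The model name below is an assumed placeholder;
    # any abuse/NSFW detector with a score output would do.
    classifier = pipeline("image-classification",
                          model="Falconsai/nsfw_image_detection")

    def abuse_score(image_path: str) -> float:
        """Confidence that the image is abusive, in [0, 1]."""
        results = classifier(image_path)  # [{"label": ..., "score": ...}, ...]
        return next((r["score"] for r in results if r["label"] == "nsfw"), 0.0)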

replies(1): >>41917522
shadowgovt No.41917522
Broadly speaking: there is a real problem here and it needs to be addressed, but it is mostly a systemic one. Letting Silicon Valley outsource trauma to places that will under-serve the people who experience it is bad and shouldn't be allowed.

Issue 1, the direct trauma, is tragically endemic to providing fora for people to communicate online. Someone will be on the front line of dealing with the fringe of those communications. If it isn't people training AIs to handle, say, 90% of the work, it's human moderators having to review every complaint, which is strictly more trauma.
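
A minimal sketch of that 90% split, with illustrative (untuned) thresholds: the model auto-resolves the cases it is confident about in either direction, and only the ambiguous middle band ever reaches a human moderator.

    def triage(score: float, auto_remove: float = 0.98, auto_allow: float = 0.02) -> str:
        """Route one item given a classifier's abuse score in [0, 1]."""
        if score >= auto_remove:
            return "remove"        # confidently abusive: no human has to see it
        if score <= auto_allow:
            return "publish"       # confidently benign: no review needed
        return "human_review"      # uncertain: escalate to a moderator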

replies(1): >>41917913
klabb3 No.41917913
Yeah, agreed. That would have been a perfectly reasonable angle for the article.