I don't know how this problem can be solved automatically without something that looks a lot like AGI and can monitor the whole internet to learn the evolving cultural context. AI moderation feels like self-driving cars all over again: the happy path of detecting and censoring a dick pic - or self-driving in perfect California weather - is relatively easy, but automating the last 20% or so of content seems impossibly out of reach.
The "subtle forms of hate speech" is especially hard and nebulous, as dog whistles in niche communities change adversarially to get past moderation. In the most subtle of cases, there are a lot of judgement calls to make. Then each instance of these AGIs would have to be deployed in and tailored to local jurisdictions and cultures, because that is its own can of worms. I just don't see tech replacing humans in this unfortunate role, only augmenting their abilities.
> The glossy veneer of the tech industry conceals a raw, human reality that spans the globe. From the outskirts of Nairobi to the crowded apartments of Manila, from Syrian refugee communities in Lebanon to the immigrant communities in Germany and the call centers of Casablanca, a vast network of unseen workers power our digital world.
This part never really changed. Mechanical Turk is almost 20 years old at this point, and call center outsourcing is hardly new. What's new is just how much human-generated garbage we force them to sift through on our behalf. I wish there was a way to force these training data and moderation companies to provide proper mental health care.
If my GP says that I'm overweight, which is associated with negative health outcomes, that's factual. If someone on twitter calls me a fatso, that's mean/hateful.
I really don’t get it.
Make no mistake: it's a strategic choice to have these individuals take the brunt of the trauma. Silicon Valley acts as though African lives have no value.
You deploy an AI to moderate, and it lets you cut your moderation workforce by 80%. Maybe you're a generous person, so you cut by 50% instead and the remaining moderators aren't as overworked anymore. (Nobody's going to actually do this, but hey, let's be idealistic.)
Costs are down, things are more efficient. Great! But there's a little problem:
Before, 90% of the posts your moderators looked at were mundane stuff. They'd stare at it for a moment, evaluate the context, and go 'yeah this is a death threat, suspend account.'
Now all the moderators see is stuff that got past the AI or is hard to classify. Dead bodies, CSAM, racist dogwhistle screeds, or the kind of mentally unhinged multi-paragraph angry rants that get an account shadowbanned on places like HN. Efficiency turns the moderator's job from 'fairly easy with occasional moments of true horror' into 'a nonstop parade of humanity's worst impulses, in front of my face, 40 hours a week'.
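The routing this describes can be sketched as a confidence-threshold triage. To be clear, this is a hypothetical illustration, not any platform's actual pipeline; the thresholds and the scoring function are made up:

```python
# Hypothetical moderation triage: an AI model scores each post for policy
# violation, confident cases are handled automatically, and ONLY the ambiguous
# middle band reaches human moderators - which is exactly why the human queue
# ends up concentrated with the hardest, worst material.

def triage(posts, score_fn, remove_above=0.95, approve_below=0.05):
    """Split posts into auto-removed, auto-approved, and human-review queues."""
    removed, approved, human_queue = [], [], []
    for post in posts:
        score = score_fn(post)  # estimated probability the post violates policy
        if score >= remove_above:
            removed.append(post)       # model is confident: take it down
        elif score <= approve_below:
            approved.append(post)      # model is confident: leave it up
        else:
            human_queue.append(post)   # ambiguous: a human has to look at it
    return removed, approved, human_queue

# Stub scorer standing in for a real classifier (scores are invented).
def fake_score(post):
    return {"cat photo": 0.01, "obvious threat": 0.99, "coded slur?": 0.6}[post]

removed, approved, humans = triage(
    ["cat photo", "obvious threat", "coded slur?"], fake_score
)
print(humans)  # → ['coded slur?']
```

Note how the mundane 90% never reaches the `human_queue` at all: the humans are left with only the borderline and the horrific.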
To simplify: a dog whistle makes a sound pitched too high for humans to hear; only dogs can hear it.
So it's speech that the speaker's ingroup recognizes as meaning something other than what the literal interpretation would mean. It's coded speech, usually for racist, sexist or even violent purposes.
An adjacent concept is giving orders without giving orders, i.e. https://en.wikipedia.org/wiki/Will_no_one_rid_me_of_this_tur...
Humans will spend a lot of energy hiding porn content on the internet, while self-driving might benefit from a virtuous circle: once enough Waymos are out there, people will adapt and learn to drive/bike/walk alongside them. We have a fundamentally good reason to cooperate.
I am not a self-driving fanatic but I do believe that a lot of edge cases might go away as we adapt to them.
Meanwhile a small consolation is that https://slatestarcodex.com/2016/06/17/against-dog-whistles/ agrees with me. So I’m in decent company.
Instead it's about being empathetic to the human suffering this work entails and finding ways to treat their contractors as humans instead of 'far off resources'.
Outsourcing this dirty and dingy work to African countries in this way, without caring for the 'contractors', is a recipe for the dehumanization of people.
Their team page is a funny reminder of the classism and racial disparity in the world: white people at the top and black people at the bottom. lol. I know they aren't racially driven, and there is real economic value for the contractors as jobs, but our current hyper-capitalistic global system is mostly set up to exploit offshore people instead of elevating them.
the world is what it is..
IMO there is the even more important point that beyond being a "judgement call", humans are far from being in agreement with what the "right answer" is here - it is inherently an impossible problem to solve, especially at the edge cases.
Just look at the current debate in the US. There are tons of people screeching from the right that large online social networks and platforms censor conservative views, and similarly there are tons of people screeching from the left about misinformation and hate speech. In many cases they are talking about the exact same instances. It is quite literally a no-win situation.
That seems extremely wrong, especially in this context, given that LLMs make no attempt to formalize "ideas", they're only interested in syntax.
To my mind, this dog whistle moniker is more of a tool for suppressing dissenting views than identifying covert bigotry.
Apparently all the critical thinking has already been done off stage and now only those whom we agree with are tolerated. The others are shunned as racists or worse.
Stating the position "torture is bad" is enough to get you banned from some places (because it's offensive to people who believe that it's okay as long as the victims are less-than-human).
Better models at the edge would help with this to some extent, as they would decentralise AI inference (model training would still happen in data centers).
There are a million things to criticize AI for, but this take is domain-illiterate – they’re simply drawing a connection between the hyped and fancy (currently AI) and poor working conditions in one part of the tech sector (content moderation).
Look, I’m sure the “data industry” has massive labor issues, heck these companies treat their warehouse workers like crap. Maybe there are companies who exploit workers more in order to train AI models. But the article is clearly about human-created content moderation for social media.
Of all the things AI does, it is pretty good at determining what’s in an image or video. Personally I think sifting through troves of garbage for abusive photos and videos (the most traumatizing for workers) is one of the better applications for AI. (Then you’ll see another sob story article about these people losing their jobs.)
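One way machines already spare humans from the worst material is hash matching against databases of known abusive images (in the spirit of systems like PhotoDNA), so nobody has to view re-uploads of already-identified content. Real systems use robust perceptual hashes; the toy average-hash below is purely illustrative:

```python
# Toy sketch of hash-based image screening. A simple "average hash" of an
# image is compared against a blocklist of known-bad hashes; matches are
# removed automatically without any human ever viewing them.
# Production systems (e.g. PhotoDNA) use far more robust perceptual hashes.

def average_hash(pixels):
    """pixels: 2D list of grayscale values (0-255). Returns a bit string:
    one bit per pixel, 1 if the pixel is brighter than the image's mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if p > mean else "0" for p in flat)

def is_known_bad(pixels, blocklist):
    """True if the image's hash appears in the set of known-bad hashes."""
    return average_hash(pixels) in blocklist

dark = [[10, 20], [30, 200]]          # toy 2x2 "image"
blocklist = {average_hash(dark)}      # pretend this hash came from a database
print(is_known_bad(dark, blocklist))              # → True (exact re-upload)
print(is_known_bad([[200, 10], [5, 5]], blocklist))  # → False
```

This only catches known material; novel content still needs a classifier or a human, which is where the trauma discussed above comes back in.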
This can be done from both sides. Examples:
Not sufficiently (for whoever) enforcing immigration laws? "Trying to eliminate the majority population, gradual ethnic cleansing".
Talking about deporting illegal immigrants? "The first step on the road to murdering people they don't want in the country."
And if the local judiciary or law enforcement is aligned with the interests of one side or the other, they can stretch the anti hate speech laws to use the legal system against their opponents.
It may be fuzzy on the far edges, but any speech that calls for the elimination of, marginalizes, dehumanizes, or denies the human or civil rights of a group of people is right in the heart of the meaning of hate speech.
That definition still leaves huge amounts of space for satire, comedy, political and other forms of protected speech, even "offensive speech".
In general, yes: there is a long history of conversation on various topics, actions that have caused trust levels to be preset among various groups, and meta-symbols constructed atop that information. Those new to the conversation may be unaware of the context.
> and now only those whom we agree with are tolerated
I'm not sure who "we" is in that context. In the US, currently, the polity is very divided because several key events have, in a sense, caused "mask off" to occur in the mainstream of both political parties, making it difficult for anyone to believe either of them is willing to share power.
(as a side note: rhetorical questions don't usually convey well through text media. If you didn't literally mean "I really don't get it" when you said you didn't get it, making clear you are being rhetorical could be considered polite).
Issue 1, the direct trauma, is tragically endemic to providing fora for people to communicate online. Someone will be the front-line of dealing with the fringe of those communications. If it isn't people training AIs to do some of the 90%-work, it's instead human moderators having to review every complaint, which is strictly more trauma.
So we will forever be bearing that cost as long as people are allowed to use the Internet generally, and how to minimize the harm to those who bear it is a good question.
Perhaps the nation’s division is evidence of the lack of genuine sharing of ideas? Where would one go to have an intellectual discussion in safety? Workplace? Obviously not. Online forum? Downvotes, brigading, and a general lack of tolerance.
Small wonder that I’m not being persuaded and neither are you.
This definitely fits the bill.
Definitely not. I do expect them to listen before speaking out. It was a hard lesson I myself had to learn when I was one of those young adults coming out of school. Sometimes, conventional wisdom is just accrued prejudice. Sometimes it is accrued experience and people are as they are for a reason. It's probably best to have enough information to know before staking a position openly and pushing other people off their own.
> Where would one go to have an intellectual discussion in safety?
Traditionally? The bar. I'm not even kidding. This is the kind of thing people discuss face-to-face most effectively. We do less of that these days.
Consider for example that ChatGPT wasn't specifically designed to be good at programming Commodore 64 basic, which is a niche within a niche, but it can do that fine even when instructed in Welsh*, and if it can do that then surely it can spot these things too?
> In the most subtle of cases, there are a lot of judgement calls to make.
I agree; while they know a lot, they know it poorly, and make decisions unwisely.
> I wish there was a way to force these training data and moderation companies to provide proper mental health care.
Good news, there is. An old flame used to work in a call center, ended up unionising the place.
Bad news (from the POV of many here): she's literally a communist — and that's not a metaphor for "Democrats", she thinks the Dems are evil neoliberals.
* I've not actually tried running this, because on a related note, can anyone recommend an emulator that will let me paste in text as if I was typing the content of the pasteboard on the keyboard?
https://chatgpt.com/share/6717ff4e-db08-8011-8f2c-a33fa9653a...
A relatively small share of people openly identify as racist, but many, if not most, people hold at least some racist views, since these are the cultural waters we swim in. Dog whistling lets you have it both ways. When called out, the offender can always say: that's not what I meant, or I was just joking. Then they can accuse the others of deliberately misconstruing their statements.

And how the listener responds is largely a function of their prior beliefs. Again, most people don't want to think of themselves as racist, so they will be generous to the dog whistler, since to admit there was racism (or whatever ism) in the statement of someone they support would implicate them. And the people for whom it was intended will believe that the dog whistler is denying it not because they don't believe it, but because they need to do so politically.
Once we do reach a point where AI could do the filtering, who is going to draw the limit to where free speech ends? Should they have that much power?
but you've already lumped together a huge range of behaviours and impacts. Elimination? OK, we can probably broadly define that, but I just heard news reports quoting Israelis calling for the elimination of Hamas, and Iran for the elimination of Israel. How do we handle that? Marginalized? As defined by whom? What about marginalizing undesirable behaviours or speech? What does "dehumanize" mean? Whose definition of human or civil rights?
You are seeing this EXACT thing in the middle east right now.
E.g. "X race/gender/sexual orientation are bad for the society for reason Y, and therefore they should be treated with Z (a negative consequence)"
So: speech intended to cause harm based on certain inherent characteristics a group of people have, where those characteristics are not themselves harmful to society.