https://github.com/BlueFalconHD/apple_generative_model_safet...
https://github.com/BlueFalconHD/apple_generative_model_safet...
Which, as a phenomenon, is so very telling that no one actually cares what people are really saying. Everyone, including the platforms, knows what it means. It's all performative.
There's a very scary potential future in which mega-corporations start actually censoring topics they don't like. For all I know the Chinese government is already doing it, and there's no reason the British or US governments won't follow suit and mandate such censorship. To protect children / defend against terrorists / fight drugs / stop the spread of misinformation, of course.
Write a spicy comment and a mod will memory-hole it and someone, usually dang, will reply "tHat'S nOt OuR vIsIon FoR hAcKeR nEwS, pLeAsE bE cIvIl" and we all swallow it like delicious hot cocoa.
If YC can control their product (and hn IS a product) to annihilate any criticism of their activity or (even former) staff, then Apple is perfectly within their rights to make sure Siri doesn't talk about violence.
No, there's no difference.
HN also has a flagging system, and some people really, really hate certain kinds of speech. Usually they get more offended the more visible it is. A single "bad" word - very offensive to them. A phrase which implies someone is of lesser intelligence or acting in bad faith - sometimes gets a pass, sometimes gets reported. But covert actions like lying, arguing with fallacies, or systematic downvoting almost never seem to get punished.
The closest I've seen is autodetection of certain topics related to death and suicide, which then prompts some kind of "help" hotline. A friend also said google allows an interview with a pedophile on youtube but penalizes it in search results so much that it's (almost?) impossible to find, even when searching for the exact name.
But of course, if a topic is shadowbanned, it's hard to find out about it in the first place - by design.
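To make that autodetection concrete: at its simplest it's just pattern matching plus a canned response. A minimal sketch, assuming a keyword-trigger design - the term list and banner text are illustrative, not any platform's actual implementation:

    import re

    # Illustrative term list; real systems are far larger and more nuanced.
    SELF_HARM_PATTERNS = [r"\bsuicide\b", r"\bself[- ]harm\b"]
    HOTLINE_BANNER = "Help is available. Call or text 988 (US)."

    def maybe_show_hotline(query: str) -> str | None:
        """Return a hotline banner if the query touches a sensitive topic."""
        for pattern in SELF_HARM_PATTERNS:
            if re.search(pattern, query, re.IGNORECASE):
                return HOTLINE_BANNER
        return None

    print(maybe_show_hotline("interview about suicide prevention"))  # banner fires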
It’s flip-flopped on specifics numerous times over the years, but these policies are easy to find. They range from demonetization to channel bans (direct and shadow) to creator bans.
We can of course argue until we’re blue in the face about whether each one is correct (most are not unreasonable by some societal definition!), but they’re definitely censorship.
At least reddit feels like that because what you can say depends on the subreddit - not just the mods but what kinds of people visit it and what they report.
No idea about youtube. Videos are definitely censored using some automated means, but it's still possible to get around it. E.g. some gun youtubers avoided saying full-auto by saying more-semi-auto. So I don't think they use very sophisticated models, or at least they don't yet. This kind of thing is obvious to a human, and even LLMs, when asked, will say it's a tongue-in-cheek way to avoid censorship.
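That more-semi-auto trick is exactly where naive blocklists fall over. A sketch of the failure mode, with illustrative terms:

    # A substring blocklist flags the literal term but not the euphemism.
    BLOCKLIST = ["full-auto", "full auto"]

    def is_flagged(text: str) -> bool:
        lowered = text.lower()
        return any(term in lowered for term in BLOCKLIST)

    print(is_flagged("this rifle is full-auto"))       # True: caught
    print(is_flagged("this rifle is more-semi-auto"))  # False: sails through

Any human reads both sentences the same way; the filter doesn't.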
Comments are also generally less censored. After that health insurance CEO got punished for mass murder and repeated bodily harm with an extra-legal death penalty, many people were openly supporting it. I can say it here too and nobody will care. Even LLMs (both US and Chinese, except Claude because Claude is trained by eggshell-walking suckers) readily generate estimates of how many people he caused to die or suffer.
The internet would look very different if companies started using state of the art models to detect undesirable-to-them speech. But also people would fight back more so it might just be a case of boiling the frog slowly.
Including the LLM platforms themselves.
Manual reporting is an adjunct method, and the reported content goes into the training data set too, after whatever manual intervention occurs.
Feel free to ignore that any of this exists of course - it makes our lives easier. It’s a constant arms race regardless.
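For contrast with the keyword sketches above, here's roughly what using "state of the art models" for this could look like - a sketch built on an off-the-shelf zero-shot classifier from the transformers library. The model choice, labels, and threshold are all illustrative assumptions, not anyone's actual moderation stack:

    # Model-based detection: classify meaning rather than match strings.
    # Requires: pip install transformers torch
    from transformers import pipeline

    classifier = pipeline("zero-shot-classification",
                          model="facebook/bart-large-mnli")

    comment = "this rifle is more-semi-auto"  # the euphemism from above
    result = classifier(comment,
                        candidate_labels=["firearms", "cooking", "sports"])

    # Labels come back sorted by score; flag if the undesirable topic
    # clears some threshold (0.7 here is arbitrary).
    if result["labels"][0] == "firearms" and result["scores"][0] > 0.7:
        print("flagged for human review")

A classifier like this catches the euphemism the blocklist misses, which is presumably the arms race being described.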
- Why are they not flagging more content? Am I right that they're boiling the frog slowly? Do they lack an end goal because management does not yet understand the power of these tools?
- Do you do your job poorly on purpose? Did you take it so somebody else wouldn't build an even better system? Did you think you could influence it in a direction which does not lead to total surveillance? (I assume any reasonably intelligent person would be against further increasing the power imbalance corporations have over individuals, for both moral reasons and because they are individuals themselves who understand the machine can and will be used against them too.)