Here are a few citations:
https://www.bbc.com/news/technology-52388586
https://fee.org/articles/youtube-to-ban-content-that-contrad...
https://www.theblaze.com/news/youtube-will-remove-any-corona...
https://www.socialmediatoday.com/news/youtube-ramps-up-actio...
These haven't been cherry-picked; they're the top four results from a DDG search.
Your example only demonstrates that you don't understand the difficulty of censoring content on social platforms as enormous as YouTube, not that Google don't have a policy that prohibits such content.
With regard to the app removal, do we know it was a Google management decision and not the work of an overzealous app reviewer or an algorithm (the latter being the way Google usually operate)?
The reason I talk about internal consistency is because it is hard to get the right balance between removing stuff that should be removed vs stuff that shouldn't, and that problem is only magnified when doing so at scale. So if this were a management decision then I totally understand the pitchforks, but if it's a false positive in an algorithm then hopefully Google will rectify it and we can all go back to moaning about Electron or whatever the next meme is.
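To put rough numbers on why scale magnifies the problem, here's a back-of-envelope sketch. The upload volume is the commonly reported ~500 hours of video per minute; every rate below is an assumption I've picked for illustration, not a real Google figure:

    # Back-of-envelope: why even a very accurate moderation system
    # produces visible mistakes at YouTube scale. All rates below are
    # assumptions for illustration, not Google's real figures.

    uploads_per_day = 500 * 60 * 24      # hours of video/day, at ~500 hours/min
    violating_rate = 0.01                # assume 1% of uploads violate policy
    false_positive_rate = 0.001          # assume 0.1% of clean uploads get flagged
    false_negative_rate = 0.05           # assume 5% of violations get missed

    clean = uploads_per_day * (1 - violating_rate)
    bad = uploads_per_day * violating_rate

    wrongly_removed = clean * false_positive_rate   # fuel for "censorship!" threads
    wrongly_kept = bad * false_negative_rate        # fuel for "Google allows X!" threads

    print(f"Wrongly removed per day: {wrongly_removed:.0f}")   # ~713
    print(f"Wrongly kept per day:    {wrongly_kept:.0f}")      # ~360

Even with those generous accuracy assumptions you get hundreds of wrong calls in each direction every single day, so both the "Google censors!" and the "Google allows X!" camps will always have fresh examples to point at.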
The real question is whether it should be consistently upheld without exception, and the answer to that is obviously “no”, because some apps are hubs for user-curated content, like that podcast app is.
As for your comment about them suspending their entire YouTube app, you talk as if this was a management exec decision rather than a rogue judgement that will inevitably get overturned.
Google isn’t a single entity. It’s a collection of people and algorithms all making their own judgements based on Google’s policies. Sometimes they miss stuff they should moderate and sometimes they get overzealous and remove content they shouldn’t. There is such a large grey area and scope for personal judgement that you have to expect some unpopular verdicts from time to time. It’s shit but no two situations are identical so it’s a problem that’s impossible to avoid. The real tell is whether Google reverse the decision once the complaint gets escalated.
> It would be trivial for a human to find this.
Someone literate in Swedish, maybe (to gather the context of those keywords), but it isn't humans who do this.
Google are big into automation, to the extent that they have machines doing their review. You might consider that wrong, but then you have to ask yourself how many humans it would take to moderate a platform as large as YouTube. I bet you that whatever number you come up with wouldn't be enough, and someone else would say "I found another video that was trivial to find, Google don't hire enough platform moderators!"
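I obviously don't know what Google's real pipeline looks like, but here's a toy sketch of the kind of context-free keyword matching that produces exactly these misfires. The blocklist, function, and video titles are all made up:

    # Toy illustration (NOT Google's actual system): a context-free
    # keyword flagger. Everything here is hypothetical.

    FLAGGED_KEYWORDS = {"5g", "corona", "cure"}   # made-up blocklist

    def flag_video(title: str) -> bool:
        """Flag a title if any blocklisted keyword appears, ignoring context."""
        words = title.lower().split()
        return any(word.strip("!?.,:") in FLAGGED_KEYWORDS for word in words)

    # A debunking video and a conspiracy video trip the same wire:
    print(flag_video("Debunked: 5G does NOT cause corona"))     # True (false positive)
    print(flag_video("The 5G corona cure they hide from you"))  # True (intended hit)

Telling those two apart requires exactly the linguistic and cultural context (Swedish, in this case) that a keyword list doesn't have.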
Plus it's a pretty horrible job being a professional moderator, spending your whole day reviewing the dregs of society. I've read reports where people who've done it said it had a very real negative impact on their mental health.
As I said earlier, fixing problems like this at scale is insanely hard. It's one of those things that might seem easy at a superficial level but it's fraught with errors and you can guarantee that whatever decision the moderator makes (be that human or algorithm) someone will be unhappy and claim it's not fair.