Maybe Google has done the math and realized it's cheaper to upscale in realtime than store videos at high resolution forever. Wouldn't surprise me considering the number of shorts is probably growing exponentially.
My despondent brain auto-translated that to: "My livelihood depends on Youtube"
It's most glaringly obvious in TV shows. Scenes from The Big Bang Theory look like someone clumsily tried to paint over them with oil paint. It's as if the actors are wearing an inch-thick layer of poorly applied makeup.
It's far less glaring in Rick Beato's videos, but it's there if you pay attention. Jill Bearup wanted to see how bad it could get and reuploaded the "enhanced" videos a hundred times over until it became a horrifying mess of artifacts.
The question remains why YouTube would do this, and the only answers I can come up with are "because they can" and "they want to brainwash us into accepting uncanny valley AI slop as real".
This might be the uploaders' doing, to avoid copyright strikes.
It's true, though, that aggressive denoising gives things an artificially generated look, since generative models also rely heavily on denoising.
Perhaps this was done to optimize video encoding, since the less noise/surface detail there is the easier it is to compress.
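To illustrate the compression point, here's a toy sketch (nothing like a real video codec, just a demonstration that noise is what resists compression):

```python
# Rough illustration: noise is essentially incompressible, so a
# denoised (smoother) signal compresses far better. Real codecs are
# vastly more sophisticated, but the principle carries over.
import random
import zlib

random.seed(0)
smooth = bytes([128] * 10000)                                   # flat "denoised" signal
noisy = bytes([128 + random.randint(-20, 20) for _ in range(10000)])  # same signal + noise

print(len(zlib.compress(smooth)), len(zlib.compress(noisy)))
# The noisy buffer compresses to many times the size of the smooth one.
```

So stripping surface detail before encoding really would save YouTube bits, at the cost of the oil-paint look people are complaining about.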
And a new generation that has grown up with constantly enabled face filters and 'AI'-upscaled slop is already here.
Who in their right mind thought this was a good idea?
I have a Firefox extension which tries to suppress the translations, but it only works for the main view, not for videos in the sidebar. It's better than nothing.
Says everything. Hey PM at YouTube: How about you think stuff through before even starting to waste time on stuff like this?
If so, it's really just another kind of lossy compression. No different in principle from encoding a video to the AV1 format.
---
By the way, this reminds me also of another stupid Google thing related to languages:
Say your Chrome is set to English. When encountering a page in another language, Chrome will (since about a decade ago) helpfully offer to auto-translate by default. When you click the button "Never translate <language>", it adds that language to the list sent out with every HTTP request the browser makes via the `Accept-Language` header (it's not obvious this happens unless you're the kind of person who lives in DevTools and inspects outgoing traffic).
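For the curious, here's roughly what that header looks like and how a server would parse it. A minimal sketch; the language values are hypothetical:

```python
# Parse an Accept-Language header into (language, quality) pairs.
# The example header mimics what a browser might send after a user
# has clicked "Never translate German" and "Never translate French".
def parse_accept_language(header):
    """Return (language, q) pairs sorted by descending quality."""
    langs = []
    for part in header.split(","):
        piece = part.strip().split(";q=")
        lang = piece[0].strip()
        q = float(piece[1]) if len(piece) > 1 else 1.0  # q defaults to 1.0
        langs.append((lang, q))
    return sorted(langs, key=lambda lq: -lq[1])

header = "en-US,en;q=0.9,de;q=0.8,fr;q=0.7"
print(parse_accept_language(header))
```

Every extra language in that list narrows your anonymity set, because every server sees the full list on every request, whether it needs it or not.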
Fast-forward N years, and the Chrome privacy team realizes this increases the fingerprinting surface, making every user more unique, so they propose this: "Reduce fingerprinting in Accept-Language header information" (https://chromestatus.com/feature/5188040623390720)
So basically they compensate for one "feature" with another, instead of not doing the first thing in the first place.
The controversy is that YouTube is making strange changes to the videos of users, that make the videos look fake.
YouTube creators put hours upon hours into writing, shooting, and editing their videos. And those who do it full time often depend on YouTube and their audience for income.
If YouTube messes up the videos of creators and makes the videos look like they are fake, of course the creators are gonna be upset!
At this point, getting involved with YouTube is just the usual naive assumption that somehow you are the exception and bad things won't happen to you.
It's 100% a push to remove human creators from the equation entirely.
The level of post-processing matters. There is a difference between color grading an image and removing wrinkles from a face.
The line is not cut clear but these companies are pushing the boundaries so we get used to fake imagery. That is not good.
Maybe you’re thinking of TikTok and samsung facial smoothing filters? Those are a lot more subtle and can be turned off.
The Venn diagram of AI voice users and good content creators is pretty close to two separate circles. I don't really care about the minority in the intersection.
1. See that AI upscaling works kinda well on certain illustrations.
2. Start a project to see if you can do the same with video.
3. Develop 15 different quality metrics, trying to capture what it means when "it looks a bit fake"
4. Project's results aren't very good, but it's embarrassing to admit failure.
5. Choose a metric which went up, declare victory, put it live in production.
Just a couple days ago I got an ad with Ned Flanders singing about the causes of erectile dysfunction (!), a huge cocktail of copyright infringement, dangerous medical advice, and AI-generated slop. YouTube answered the report telling me they've reviewed it and found nothing wrong.
The constant low-quality, extremely intertwined ads start to remind me of those on shady forums and porn pages of the nineties. I'm expecting them to start advertising heroin now that they've decided short-term profits trump everything else.
I haven't noticed it outside copyrighted material, so it's probably intentional.
As a French-speaking person, I now find myself seeing French YouTubers seemingly posting videos with English titles and robotic voices, before realizing that it's YouTube being stupid again.
What's more infuriating is that it's legitimately at heart a cool feature, just executed in the most brain-dead way possible, by making it opt-out and without the ability to specify known languages.
Touching up videos is bad but it is hardly material to break out the pitchforks compared to some of the political manoeuvres YouTube has been involved in.
https://www.reddit.com/r/youtube/comments/1lllnse/youtube_sh...
I skimmed the videos as well, and there is much more talk about this thing, and barely any examples of it. As this is an experiment, I guess that all this noise serves as a feedback to YouTube.
What makes you think they don't think it through? This effect is an experiment that they are running. It seems to be useless, unwanted from our perspective, but what if they find that it increases engagement?
- auto-dubbing
- auto-translation
- shorts (they're fine in a separate space, just not in the timeline)
- member only streams (if I'm not a member, which is 100% of them)
The only viable interface for that is the web and plenty of browser extensions.
For now it's a kind of autoencoding, regenerating the same input video with minimal changes. They will refine the pipeline until the end video is indistinguishable from the original. Then, once that is perfected, they will offer famous content creators the chance to sell their "image" to other creators, so less popular underpaid creators can record videos and change their appearance to those of famous ones, making each content creator a brand to be sold. Eventually humans will get out of the pipeline and everything will be autogenerated, of course.
Were you asleep for the last 10 years? /s They have names for it: accessibility, User eXperience. Or as some other people put it: enshittification.
I'm frightened by how realistic this sounds.
there are ways to get this same experience with android. Use https://github.com/ReVanced/ and make your phone work for you instead of working for someone else.
As long as YouTube continues to be the Jupiter sized gorilla in the room, they're not going to care very much about what the plebes think.
This is what tech bros in SV built and they all love it.
Sometimes it feels like Google keeps anyone with any kind of executive power hermetically sealed in some house borrowed from a reality TV show, where they're not allowed any contact with the outside world.
Until content starts being published elsewhere, it's fair to say we are forced to go to YouTube to access it.
No they're not. Nothing that mandates vertical video has ever been fine nor ever will be. Tiktok, Reels, Shorts, all bad and should be destroyed.
Unless the action is primarily vertical, which is rarely ever the case, it's always been and always will be wrong.
Yes I will die on this hill. Videos that are worse to watch on everything but a phone and have bad framing for most content are objectively bad.
There is nothing wrong with the concept of short videos of course, but this "built for phones, sucks for everything else" trash needs to go away.
Now imagine the near future of the Internet, when all people have to adapt to that in order to not be dismissed as AI.
However, polishing to the point that we humans start to lose our unique tone is exactly what style guides that go into the minutiae of comma placement try to do. And I'm currently reading a book I'm 100% sure was edited by an expert human editor who did quite the job of taking away all the uniqueness of the work. So we can't just blame the LLMs for making things more gray when we have historically paid other people to do it.
I'm pretty sure all the hand wringing about A.I. is going to fade into the past in the same way as every other strand of technophobia has before.
Just the consideration of these possibilities was enough to shake the authenticity of my reality.
Even more unsettling is when I contemplate what could be done about data authenticity. There are some fairly useful practical answers such as an author sharing the official checksum for a book. But, ultimately, authenticity is a fleeting quality and I can’t stop time.
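The "author shares the official checksum" idea is simple enough to sketch (the book text here is obviously hypothetical; SHA-256 stands in for whatever hash the author publishes):

```python
# Sketch of checksum-based authenticity: the author publishes the
# hash of the canonical text, and readers recompute it on their copy.
import hashlib

def checksum(data: bytes) -> str:
    """SHA-256 hex digest of the given bytes."""
    return hashlib.sha256(data).hexdigest()

original = b"The official text of the book."
tampered = b"The official text of the book!"  # one character changed

# Any alteration, however small, produces a completely different digest.
assert checksum(original) != checksum(tampered)
print(checksum(original)[:16], "...")  # the author would publish the full digest
```

Of course this only moves the problem: you still have to trust the channel the checksum itself arrived through.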
Vertical videos, if they're focused on a human, work fine for the same reason.
Moreover, most people have more attachment to their own thoughts or to reading the unaltered, genuine thoughts of other humans than to a hole in the ground. The comment you respond to literally talks about the Orwellian aspects of altering someone's works.
People may be upset, and I get that. But it's not like the videos were in their original format anyway. If you want to maintain perfect video fidelity, you wouldn't choose YouTube. You chose YouTube because it's the path of least resistance. You wanted massive reach and a dead simple monetization route.
It looks like you see writing & editing as a menial task that we just do for its extrinsic value, whereas these people who complain about quality see it as art we make for its intrinsic value.
Where I think a lot of this "technophobia" actually comes from though are people who do/did this for a living and are not happy about their profession being obsolesced, and so try to justify their continued employment. And no, "there were new jobs after the cotton gin" will not comfort them, because that doesn't tell them what their next profession will be and presumes that the early industrial revolution was all peachy (it wasn't).
Excavation is an inherently dangerous and physically strenuous job. Additionally, when precision or delicateness is required human diggers are still used.
If AI was being used to automate dangerous and physically strenuous jobs, I wouldn't mind.
Instead it is being used to make everything it touches worse.
Imagine an AI-powered excavator that fucked up every trench that it dug and techbros insisted you were wrong for criticizing the fucked up trench.
You're implying the latter doesn't happen normally but denoising (which basically every smartphone camera does) often has the effect of removing details like wrinkles. The effect is especially pronounced in low light settings, where noise is the highest.
Also, if you have an Android TV, I'd suggest SmartTube, it's way better than the original app and it has the same benefits of ReVanced: https://github.com/yuliskov/SmartTube
They posited that a similar series of events has happened before, and predicted it will happen again.
Even if the text is a simple article, a personal touch / style will go a long way to make it more pleasant to read.
LLMs are just making everything equally average, minus their own imperfections. Moving forward, they will in-breed while everything becomes progressively worse.
That's death to our culture.
"By AI" or "with AI?" If I write the book and have AI proof read things as I go, or critique my ideas, or point out which points do I need to add more support for, is that written "by AI?"
When Big Corp says 30% of their code is now written "by AI," did they write the code by following thoughtful instruction from a human expert, who interpreted the work to be done, made decisions about the architectural impact, outlined those things, and gave detailed instructions that the LLM could execute in small chunks?
This distinction I feel is going to become more important. AI tools are useful, and most people are using them for writing code, literature, papers, etc. I feel like, in some cases, it is not fair to say the thing was written by AI, even when sometimes it technically was.
"No GenAI, no upscaling. We're running an experiment on select YouTube Shorts that uses traditional machine learning technology to unblur, denoise, and improve clarity in videos during processing (similar to what a modern smartphone does when you record a video)"
https://x.com/youtubeinsider/status/1958199532363317467?s=46
Considering how aggressive YouTube is with video compression anyways (which smooths your face and makes it blocky), this doesn't seem like a big deal. Maybe it overprocesses in some cases, but it's also an "experiment" they're testing on only a fraction of videos.
I watched the comparisons from the first video and the only difference I see is in resolution -- he compares the guitar video uploaded to YT vs IG, and the YT one is sharper. But for all we know the IG one is lower resolution, that's all it looks like to me.
Even if you produce interesting videos, you still must MB to get the likes, to stay relevant to the algorithm, to capture a bigger share of the limited resource that is human attention.
The creators are fighting each other for land, our eyeballs are the crops, meanwhile the landlord takes most of the profits.
Secret experiments are never meant to be little one-offs, they're always carried out with the goal of executing a larger vision. If they cared about user input, they'd make this a configurable setting.
Basically YouTube is applying a sharpening filter to "Shorts" videos.
Eh. There might be a tacit presumption here that correctness isn't real, or that style cannot be better or worse. I would reject this notion. After all, what if something is uniquely crap?
The basic, most general purpose of writing is to communicate. Various kinds of writing have varying particular purposes. The style must be appropriate to the end in question so that it can serve the purpose of the text with respect to the particular audience.
Now, we may have disagreements about what constitutes good style for a particular purpose and for a particular audience. This will be a source of variation. And naturally, there can be stylistic differences between two pieces of writing that do not impact the clarity and success with which a piece of writing does its job.
People will have varying tastes when it comes to style, and part of that will be determined by what they're used to, what they expect, a desire for novelty, a desire for clarity and adequacy, affirmation of their own intuitions, and so on. We shouldn't obfuscate and sweep the causes of varying tastes under the rug of obfuscation, however.
In the case of AI-generated text, the uncanny, je ne sais quoi character that makes it irritating to read seems to be that it has the quality of something produced by a zombie. The grammatical structure is obviously there, but at a pragmatic level, it lacks a certain cohesion, procession, and relevance that reads like something someone on amphetamines or The View might say. It's all surface.
I've heard many professions complain about their version of "editors": comedians, video producers, radio jockeys.
Those AI skin enhancement filters are always terrible. Especially on men. Crazy they'd try it automatically. This isn't like the vocal boosting audio EQing they do without asking.
Google must have some questionable product management teams these days if they are pushing out this stuff without configuration. Probably trying to A/B it for internal data to justify it before facing the usual anti-AI backlash crowd when going public.
From a technical standpoint, it's easy to think of AI-based cleanup as in the same category as "improving the compression algorithm" or "improving the throughput to the client": just a technically-mediated improvement. But people have a subjectively-different reaction between decreasing instances of bandwidth-related pixelation and making faces baby-smooth, and anyone on the community side of things could have told the team responsible (if they'd known about it).
Sometimes Google's tech-expert-driven-company approach has negative consequences.
The idea of it being "without consent" is absurd. Your phone doesn't ask you for consent to apply smoothing to the Bayer filter, or denoising to your zoom. Sites don't ask you for consent to recompress your video.
This is just computational image processing. Phones have been doing this stuff for many years now.
This isn't adding new elements to a video. It's not adding body parts or changing people's words or inventing backgrounds or anything.
And "experiments" are just A/B testing. If it increases engagement, they roll it out more broadly. If it doesn't, they get rid of it.
https://en.wikipedia.org/wiki/Marion_Steam_Shovel_(Le_Roy,_N...
> We hear you, and want to clear things up! This is from an experiment to improve video quality with traditional machine learning – not GenAI. More info from @YouTubeInsider here:
> No GenAI, no upscaling. We're running an experiment on select YouTube Shorts that uses traditional machine learning technology to unblur, denoise, and improve clarity in videos during processing (similar to what a modern smartphone does when you record a video)
> YouTube is always working on ways to provide the best video quality and experience possible, and will continue to take creator and viewer feedback into consideration as we iterate and improve on these features
> enhancing it with care
I get what you’re going for with this comment, but it seamlessly anthropomorphizes what’s happening in a way that has the opposite impact I think.
There is no thoughtfulness or care involved. Only algorithmic conformance to some non-human synthesis of the given style.
The issue is not just about the words that come out the other end. The issue is the loss of the transmission of human thoughts, emotions, preferences, style.
The end result is still just as suspect, and to whatever degree it appears “good”, even more soulless given the underlying reality.
Say what you want about Microsoft, but if I have a problem with something I've pretty much always ended up getting support for that problem. I think Google's lack of response adds to their "mystique".
But it also creates superstitions since creators don't really understand the firm rules to follow.
Regardless, it is one of the most dystopian things about modern society - the lack of accountability for their decisions.
Lots of very hateful, negative content too. It didn't take me long to find the video "why this new artist sucks." Another find, which I assume is an overblown small quibble turned into a clickbait video, was "this record label is trying to SILENCE me." Maybe, somehow, these two things are related.
Do these videos that YT creates to backfill their lack of Shorts get credited back to the original creator as far as monetization from ads?
This really has the feel of delivery apps making websites for restaurants that didn't previously have one, without the restaurant knowing anything about it, while setting higher prices on the menu items and keeping the extra money instead of passing it on to the restaurants.
Imagine someone shot a basketball, and it didn't go into the hoop. Why would telling a story about somebody else who once shot a basketball which failed to go into the hoop be helpful or relevant?
I toss all of my work into Apple Pages and Google Docs, and use them both for spelling and grammar check. I don't just blindly accept whatever they tell me, though; sometimes they're wrong, and sometimes my "mistakes" are intentional.
I also make a distinction between generating content and editing content. Spelling and grammar checkers are fine. Having an AI generate your outline is questionable. Having AI generate your content is unacceptable.
I suspect that a still image is also different from video because, without motion, there's no feeling that the person might move a few inches to one side and go out of frame.
Aside: The mention of Technorati tags (and even Flickr) in the linked blog post hit me right in the Web 2.0 nostalgia feels.
[0] https://colorspretty.blogspot.com/2007/01/flickrs-dirty-litt...
If you're referring to his video I'm Sorry...This New Artist Completely Sucks[1], then it's a video about a fully AI generated "artist" he made using various AI tools.
So it's not hateful against anyone. Though the title is a bit clickbait-y, I'll give you that.
That's not what AI slop means. There's no GenAI.
I watched the video. It's literally just some mild sharpening in the side-by-side comparison.
But serious discussion demands the truth: It is fiction, in the style of a twitter thread.
Yet.
Xe also occasionally reminds people that, equal temperament being what it is, this pitch correction is actually in a few cases making people less well in tune than they originally were.
It certainly removes unique tone. Yesterday's was a pitch corrected version of a performance by John Lennon from 1972, that definitely changed Lennon's sound.
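For anyone wondering how far off equal temperament can pull a note, here's a quick back-of-the-envelope (standard interval arithmetic, nothing specific to the Lennon video):

```python
# A justly-intoned major third has the frequency ratio 5:4; equal
# temperament approximates it as 2^(4/12). Measured in cents
# (1200 per octave), the two differ noticeably.
import math

def cents(ratio):
    """Size of an interval in cents, given its frequency ratio."""
    return 1200 * math.log2(ratio)

just_third = cents(5 / 4)            # ~386.3 cents
equal_third = cents(2 ** (4 / 12))   # exactly 400 cents

print(round(equal_third - just_third, 1))  # ~13.7 cents sharp
```

So a singer who was leaning toward the pure third gets "corrected" about 14 cents away from it, which is well within the range trained ears can hear.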
Language and music (which is a type of language) are a core of shared convention wrapped in a fuzzy liminal bark, outside of which, there is nonsense. An artist, be it a writer or a musician, is essentially somebody whose path stitches the core and the bark in their own unique way, and because those regions are established by common human consensus, the artist, by the act of using that consensus, is interacting with its group. And so is the person who enjoys the art. So, our shared conventions and what we dare call correctness are a medium for person-to-person communication, the same way that air is a medium to conduct sound or a piece of paper is a medium for a painting.
Furthermore, the core of correctness is fluid; language changes and although, at any time and place there is a central understanding of what is good style, the easy rules, such as they exist, are limited and arbitrary. For example, two different manuals of style will mandate different placements of commas. And somebody will cite a neurolinguistics study to dictate on the ordering of clauses within a sentence. For anything more complex, you need a properly trained neural network to do the grasping; be it a human editor or an LLM.
> The grammatical structure is obviously there, but at a pragmatic level, it lacks a certain cohesion, procession, and relevance that reads like something someone on amphetamines or The View might say. It's all surface.
Somebody on amphetamines is still intrinsically human, and here too we have some disagreement. I cannot concede that AI's output is always of the quality produced by a zombie, at least no more than the output of certain human editors, and at least not by looking at the language alone; otherwise it would be impossible for the AI to fool people. In fact, AI's output is better ("more correct") than what most people would produce if you forced them to write with a gun pointed at their head, or even with a large tax deduction.
What makes LLMs irritating is the suspicion that one is letting one's brain engage with output from a stochastic parrot in contexts where one expects communication from a fellow human being. It's the knowledge that, at the other end, somebody may decide to take your attention and your money dishonestly. That's why I have no trouble paying for a ChatGPT plan (it's honest, I know what I get) but hesitate to hire a human editor. Now, if I could sit at a café with said editor and go over their notes, then I would rather do just that.
In other words, what makes AI pernicious is not a matter of style or correctness, but that it poisons the communication medium: it seeds doubt and distrust. That's why people (yours truly included) are burning manuals of style and setting up shop in the bark of the communication medium, knowing that's a place less frequented by LLMs and that there is a helpful camp filled with authoritative figures whose job of asserting absolute correctness may, perhaps, keep the LLMs in that core for a little longer.
Those are workarounds, however. It's too early to know for sure, but I think our society will need to rewrite its rules to adjust to AI. Anything from seclusion and attestation rituals for writers to a full blown Butlerian Jihad. https://w.ouzu.im/
It's almost as if there's a mindless robot submitting the claims to YouTube. Perish the thought! (-:
It's worth stating, though, that the vast majority of YouTube's problems are the fault of copyright law and massive media publishers. Google couldn't care less if you wanted to upload full camrips of 2025's biggest blockbusters, but the powers-that-be demand Google be able to take it down immediately. This is why 15 seconds of a song playing in the background gets your video demonetized.
As a viewer I certainly hate that crap and wish Google didn't intentionally make it this way.
Then again, it only takes 2 minutes to come to that realization when talking with many humans.
We can only be stoic and say "slop is gonna be slop". People are getting used to AI slop in text ("just proofreading", "not a natural speaker") and they got used to artificial artifacts in commercial/popular music.
It's sad, but it is what it is. As with DSP, there's always a creative way to use the tools (weird prompts, creative uses of failure modes).
In DSP and music production, auto-tune plus vocal comping plus overdubs have normalized music regressing towards an artificial ideal. But inevitably, real samples and individualistic artists achieve distinction by not using the McDonald's-kind of optimization.
Then, at some point, some of this lands in mainstream music, some of it doesn't.
There were always people hearing the difference.
It's a matter of taste.
> And comparing digging through the ground to human thought and creativity is an odd mix of self debasement and arrogance.
> I'm guessing there is an unspoken financial incentive guiding your point of view.
I mostly don't watch them. But they literally spam every single search. (While we're at it, Youtube also isn't very good at honoring keywords in searches either)
I mean, I'm also not Brad Pitt. "Yet."
It's not making the videos look fake, any more than your iPhone does. Most of what's shown in the example video, it might very well be phones applying the effect, not YouTube.
- Companies who put their product instruction manual exclusively on YouTube
- University curricula that require you to watch content that is on YouTube only.
Sure I'm free not to buy any manufactured products or not resume my studies, but it's like saying the Gulag was OK because people were free not to criticize Stalin.
To you, that result looks like it was shot with a phone filter. To me it looks like it was generated with AI. Either way, it doesn't really matter. It's not what the creator intended. Many creators spend a lot of effort and money on high-end cameras, lenses, lighting, editing software, and grading systems to make their videos look a specific way. If they wanted their videos to look like whatever this is, they would have made it that way by choice.
Although, I probably wouldn't want any automatic filtering applied to my video either, AI modifications or not.
I'm basing it on a lot of stupid decisions YouTube has made over the years, the latest being the horrendous auto-translation of titles/descriptions/audio that can't be turned off. It can only be explained by having morons making decisions, people who can't imagine that anyone could speak more than one language.
Also shorts seem to be increasing exponentially... but Youtube viewership is not. So compute wouldn't need to increase as fast as storage.
I obviously don't know the numbers. Just saying that it could be a good reason why Youtube is doing this AI upscaling. I really don't see why otherwise. There's no improvement in image quality, quite the contrary.
PS: this isn't "generative AI". It's basic ML enhancement (denoise/sharpen/tone-map).
That's why I think it's funny that they claim they will now be "using AI" to determine if someone is an adult and able to watch certain youtube videos. Google already knows how old you are. It doesn't need a new technique to figure out that you're 11 years old or 39 years old. They're literally just pretending to not know this information.
If you want to make pristine originals available to the masses, seed a torrent.
That's about AI, not very polarizing at the level it's currently at.
> Another find, what I assume is an overblown small quibble turned into clickbait videos, was “this record label is trying to SILENCE me.”
That might be overblown, but it doesn't sound polarizing at all. OP was saying he always has the most polarizing opinions.
If that last one is the vid I'm thinking of, the same record company has sent him hundreds of copyright strikes and he has to have a lawyer constantly fighting them for fair use. He does some stuff verging on listen-along reaction videos, but the strikes he talks about there are when he is interviewing the artists who made the songs and they play short snippets of them for reference while talking about the history of making them, the thought process behind the songwriting, etc.
I think it's not just automated Content ID stuff claiming the monetization, but the same firm for that label going after him over and over, where three strikes removes his channel. The title or thumbnail might be overblown; probably the firm just earns a commission and he's dealing with a corporate machine that is scattershooting against big videos with lots of views containing any of their sound, rather than targeting him to silence something they don't want to get out. But I don't think the video was very polarizing.
Have to say, I am not a fan of the AI sharpening filter at all. Would much prefer the low res videos.
(I don't have any other YouTube-like on my phone, particularly no TikTok. Actually started reading more books instead.)
I'm not seeing the outrage here.
Your bias is showing through.
For what it's worth, it has made everything I use it for much better. I can search the web for things in mere seconds, where previously it could often take hours of tedious searching and reading.
And it used to be that Youtube comments were an absolute shit show of vitriol and bickering. A.I. moderation has made it so that now it's often a very pleasant experience chatting with people about video content.
Their AI answers box (and old quick answer box) has already affected traffic to outside sites with answers scraped from those sites. Why wouldn't they make fake YouTubers?
> I mean, I'm also not Brad Pitt. "Yet."
Not with that attitude!
I do not get the argument of "if nothing happens when an ant bites you, then I can shoot you with a cannon because it is the same thing, just larger". The impact on society matters to me, and it is very different. Justifying AI at any cost is a marketing strategy that will hurt us long-term.
To boycott Google I'd be forced to quit my job for example, as it literally forces me into Google's services.
Specifically YouTube has very little in the way of alternatives, but I get what you're saying — I just respectfully disagree with the coping method. Which is to say, on the gradient between "we should suck it up" and "we should Luigi Mangione the person responsible" I fall somewhere in the middle.
Going on YouTube to watch a single video from a manual is a very different thing. I didn't move the goalposts, I pointed out your motte-and-bailey position.
This argument fails because, at least in the examples provided, it's closer to non-GenAI denoising/upscaling algorithms than to telling ChatGPT to upscale an image.
>The impact on society matters for me, and it is very different. Justifying AI at any cost is a marketing strategy that will hurt us long-term.
There's no indication it's AI, except some vague references to "machine learning".
The key problem isn't that YouTube has been degrading its user experience for a while, the problem is that we don't have anywhere else to go as YouTube is the most encroached monopoly in the tech scene (which is no small feat).
The funny thing is that you never gave me a slight bit of impression that you were arguing in good faith, and now you whine about that.
I don't think it's stupidity, or shortsightedness, or ignorance, or anything like that. They just have different priorities. And they are not having enough negative feedback to reconsider these decisions.
Article 8 rights can be restricted for public safety, the prevention of disorder or crime, the protection of the rights of other people, but also for the protection of health and morals.
Given the problems with attention spans in systems like TikTok and shorts, they definitely could ban it even given article 8.
Sorry to burst your bubble.
There is no practical difference between an idiot on the internet and a smart troll role-playing an idiot 100% of the time.
Since YouTube has been making a lot of objectively stupid decisions, it does not matter whether they are actually stupid or this is some kind of meta commentary on the power elites running things their way while ordinary folks are unable to do anything about it. It's all the same in practice.