Maybe Google has done the math and realized it's cheaper to upscale in realtime than store videos at high resolution forever. Wouldn't surprise me considering the number of shorts is probably growing exponentially.
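That trade-off is easy to sketch as a back-of-envelope calculation. All numbers below are made-up assumptions for illustration, not YouTube's actual storage or compute costs; the point is just that the cheaper option flips depending on how many watch-hours a video accumulates:

```python
# Hypothetical costs -- purely illustrative, not real YouTube figures.
STORAGE_COST_PER_GB_MONTH = 0.02   # assumed $/GB/month for replicated storage
UPSCALE_COST_PER_HOUR = 0.001      # assumed $/hour of video upscaled at watch time

def storage_cost(extra_gb_for_high_res: float, months: float) -> float:
    """Cost of keeping the extra high-resolution data around for `months`."""
    return extra_gb_for_high_res * STORAGE_COST_PER_GB_MONTH * months

def upscale_cost(hours_watched: float) -> float:
    """Cost of regenerating detail on the fly for every hour actually watched."""
    return hours_watched * UPSCALE_COST_PER_HOUR

# A 1-minute short: store ~0.05 GB of extra high-res data for 5 years,
# vs. upscale it live for 1,000 one-minute views (~16.7 watch-hours).
keep = storage_cost(0.05, 60)
regen = upscale_cost(1000 / 60)
print(f"store high-res: ${keep:.4f}, upscale on demand: ${regen:.4f}")
```

Under these toy numbers, on-demand upscaling wins for the long tail of rarely watched videos, while storage wins for anything watched heavily. Since most shorts are presumably long-tail, the incentive would lean toward upscaling.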
And a new generation that has grown up on constantly enabled face filters and 'AI'-upscaled slop is already here.
It's 100% a push to remove human creators from the equation entirely.
1. See that AI upscaling works kinda well on certain illustrations.
2. Start a project to see if you can do the same with video.
3. Develop 15 different quality metrics, trying to capture what it means when "it looks a bit fake".
4. The project's results aren't very good, but it's embarrassing to admit failure.
5. Choose a metric which went up, declare victory, put it live in production.
For now it's a kind of autoencoding: regenerating the same input video with minimal changes. They will refine the pipeline until the output video is indistinguishable from the original. Then, once that is perfected, they will offer famous content creators the chance to sell their "image" to other creators, so less popular, underpaid creators can record videos and change their appearance to that of famous ones, making each content creator a brand to be sold. Eventually humans will be cut out of the pipeline and everything will be autogenerated, of course.
I'm frightened by how realistic this sounds.
Also, shorts seem to be increasing exponentially... but YouTube viewership is not. So compute wouldn't need to grow as fast as storage.
I obviously don't know the numbers. I'm just saying it could be a good reason why YouTube is doing this AI upscaling; I really don't see why otherwise. There's no improvement in image quality, quite the contrary.