
548 points CharlesW | 1 comments | source
VerifiedReports ◴[] No.46156575[source]
I had forgotten about the film-grain extraction, which is a clever approach to a huge problem for compression.

But... did I miss it, or was there no mention of any tool to specify grain parameters up front? If you're shooting "clean" digital footage and you decide in post that you want to add grain, how do you convey the grain parameters to the encoder?

It would degrade your work and defeat some of the purpose of this clever scheme if you had to add fake grain to your original footage, feed the grainy footage to the encoder to have it analyzed for its characteristics and stripped out (inevitably degrading real image details at least a bit), and then have the grain re-added on delivery.

So you need a way to specify grain characteristics to the encoder directly, so clean footage can be delivered without degradation and grain applied at render time on the client.
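One way that could work (a minimal sketch, assuming a simplified model: one autoregressive coefficient instead of AV1's full coefficient lattice, and every name and parameter value here is hypothetical): the colorist ships clean footage plus a small parameter set, and the client synthesizes grain directly from those parameters, so nothing is ever analyzed or stripped.

```python
import numpy as np

def synthesize_grain(shape, seed, sigma, ar_coef):
    """Generate spatially correlated grain from explicit parameters,
    loosely in the spirit of AV1's autoregressive film-grain model
    (simplified: one AR coefficient, luma only, no scaling lattice)."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, sigma, shape)
    grain = np.zeros(shape)
    for y in range(shape[0]):
        for x in range(shape[1]):
            left = grain[y, x - 1] if x > 0 else 0.0
            up = grain[y - 1, x] if y > 0 else 0.0
            # Each grain sample mixes fresh noise with neighbors,
            # which gives the clumpy, correlated look of real grain.
            grain[y, x] = noise[y, x] + ar_coef * 0.5 * (left + up)
    return grain

# Hypothetical parameters chosen in post and shipped alongside the
# clean master; the client applies them at render time.
params = {"seed": 7, "sigma": 4.0, "ar_coef": 0.6}
frame = np.full((32, 32), 128.0)  # stand-in for a clean 8-bit frame
out = frame + synthesize_grain(frame.shape, **params)
```

Because the grain is fully determined by the parameters, the deliverable stays clean and the grain costs a handful of bytes per scene instead of bits in every frame.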

replies(2): >>46156722 #>>46163995 #
bob1029 ◴[] No.46163995[source]
Actual film grain (i.e., photochemical) is arguably a valid source of information. You can frame it as noise, but it does provide additional information content that our visual system can work with.

Removing real film grain from content and then recreating it parametrically on the other side is not the same thing as directly encoding it. You are killing a lot of information. It is really hard to quantify exactly how we perceive this sort of information, so it's easy to evade the consequences of screwing with it. Selling the Netflix board on an extra X megabits/s per streamer to keep genuine film grain that only 1% of the customers will notice is a non-starter.
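The bandwidth math behind that non-starter is easy to sketch. As a back-of-envelope (my numbers, not from the comment): treat grain as iid Gaussian noise and use its differential entropy as a rough floor on the cost of coding it faithfully at a quantization step of one code value.

```python
import math

# Assumed, not measured: grain strength in 8-bit code values.
sigma = 2.0
width, height, fps = 3840, 2160, 24  # 4K at 24 fps

# Differential entropy of a Gaussian in bits: a crude lower bound on
# the per-pixel cost of coding the grain exactly (bin width = 1).
bits_per_pixel = 0.5 * math.log2(2 * math.pi * math.e * sigma ** 2)

grain_bitrate = bits_per_pixel * width * height * fps  # bits/second
print(f"{grain_bitrate / 1e6:.0f} Mbit/s just for the grain")
```

Even at this modest grain strength the floor lands in the hundreds of megabits per second for a 4K stream, versus a few bytes of synthesis parameters per scene, which is exactly the trade being weighed here.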

replies(1): >>46166719 #
VerifiedReports ◴[] No.46166719[source]
Exactly. In the case of stuff shot on film, there's little to be done except increase bitrate if you want maximal fidelity.

In the case of fake grain that's added to modern footage, I'm calling out the absurdity of adding it, analyzing it, removing it, and putting yet another approximation of it back in.

replies(1): >>46180252 #
breve ◴[] No.46180252{3}[source]
> I'm calling out the absurdity of adding it, analyzing it, removing it, and putting yet another approximation of it back in

Why is it absurd? The entire encoding process is an approximation of the original image. Lossy compression inevitably throws away information to make the file size smaller. And the creation of the original video is entirely separate from its distribution. It'll be stored losslessly, for one thing.

The only question that matters is whether the image looks good enough after encoding. If it doesn't look good enough then no one will watch it. If it does look good enough then you've got a viable video distribution service that minimizes its bandwidth costs.

replies(1): >>46182974 #
VerifiedReports ◴[] No.46182974{4}[source]
> Lossy compression inevitably throws away information to make the file size smaller.

So? That fact only emphasizes the absurdity and loss potential here:

1. Acquire "clean" digital footage.

2. Add fake grain to said footage.

3. Compress the grainy footage with lossy compression, wasting a bunch of the data on fake detail that you just added.

4. Analyze the footage to determine the character of the fake grain, calculate parameters to approximate it later with other fake grain.

5. Strip out the fake grain, with potential for loss of some original image details.

6. Re-add fake grain with the calculated parameters.

If you don't see the absurdity there, I don't know what to tell you.
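For concreteness, that round trip can be sketched in a few lines of numpy (my own toy illustration: additive Gaussian noise stands in for grain, a box blur stands in for the encoder's denoiser, and the lossy compression step itself is skipped; a real encoder uses AV1's autoregressive grain model, not any of this):

```python
import numpy as np

rng = np.random.default_rng(0)

# 1. "Clean" digital footage: a flat grey frame, values in [0, 1].
clean = np.full((64, 64), 0.5)

# 2. Add fake grain (here: additive iid Gaussian noise; real grain
#    is signal-dependent and spatially correlated).
sigma_true = 0.05
grainy = clean + rng.normal(0.0, sigma_true, clean.shape)

# 3. (Lossy compression itself is skipped in this toy sketch.)

# 4. Analyze the footage to estimate grain strength. On a flat
#    patch the sample standard deviation is a crude estimator.
sigma_est = grainy.std()

# 5. Strip the grain. A 5x5 box blur stands in for the denoiser;
#    on real footage this step can also erase fine image detail.
k = 5
pad = k // 2
padded = np.pad(grainy, pad, mode="edge")
denoised = np.zeros_like(grainy)
for i in range(grainy.shape[0]):
    for j in range(grainy.shape[1]):
        denoised[i, j] = padded[i:i + k, j:j + k].mean()

# 6. Re-add parametric grain with the estimated strength.
resynth = denoised + rng.normal(0.0, sigma_est, denoised.shape)
```

Steps 4-6 produce grain that only statistically resembles the grain added in step 2, and step 5 is where real detail can go missing along with the noise.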

replies(1): >>46183841 #
breve ◴[] No.46183841{5}[source]
No, the source material is stored losslessly. Many different effects will be applied to the source material to make the moving images look the way the director wants them to look.

The creation of the original video is separate from the distribution of the video. In distribution the video will be encoded to many formats and many bitrates to support playback on as many devices at as many network speeds as possible.

The distributed video will never exactly match the original. There simply isn't the bandwidth. The goal of video encoding is always just to make it look good enough.