It's like reducing an image to tiny dots with dithering (reminds me of Atkinson dithering). Those grains are not noise, they are detail, actual data. That's why real grain looks good IMO.
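In case it helps anyone picture the comparison, Atkinson dithering is just error diffusion with a deliberately lossy kernel: the dot pattern carries real image information rather than random noise. A minimal Python/numpy sketch (the function name and the assumption of a [0, 1] greyscale input are mine):

```python
import numpy as np

def atkinson_dither(gray):
    """Atkinson error-diffusion dithering on a 2-D float image in [0, 1].
    Returns a binary (0/1) image of the same shape."""
    img = gray.astype(np.float64)
    h, w = img.shape
    out = np.zeros_like(img)
    # Atkinson pushes only 6/8 of the quantisation error to six neighbours
    # (1/8 each); the remaining 2/8 is deliberately dropped, which gives
    # the characteristic light, punchy look.
    neighbours = [(0, 1), (0, 2), (1, -1), (1, 0), (1, 1), (2, 0)]
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            out[y, x] = new
            err = (old - new) / 8.0
            for dy, dx in neighbours:
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    img[ny, nx] += err
    return out
```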
There are two possible advantages for this kind of grain synthesis. For Netflix, they could produce the same perceived quality at lower bitrates, which reduces costs per view and allows customers with marginally slow connections to get a higher quality version. For a consumer, the advantage would be getting more non-grain detail for a fixed bitrate.
You are right that if you subtract the denoised frame from the raw one, showing only the estimated noise, you would get some impression of the scene. I think there are two reasons for this. Firstly, places where the denoiser produced a blurry line that should be sharp may show up as faint lines. I don't think this is 'hidden information' so much as information lost to lossy compression; in the same way, if you look at the difference between a raw image and a compressed one, you may see emphasized edges due to compression artefacts. Secondly, the less exposed regions of the film will have more noise, so noisiness becomes a proxy for darkness, allowing some reproduction of the scene. I would expect this detail to be lost after adjusting for the piecewise-linear function for grain intensity at different brightness levels.
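To make that second point concrete, here is a toy sketch of what I mean by the residual and by adjusting for a brightness-dependent grain strength. This is not the actual AV1 film grain pipeline; the function names, the example control points, and the 8-bit assumption are all mine:

```python
import numpy as np

def grain_residual(raw, denoised):
    """Estimated grain: whatever the denoiser removed."""
    return raw.astype(np.float64) - denoised.astype(np.float64)

def scaling_lut(points, num_levels=256):
    """Piecewise-linear lookup table mapping pixel brightness to grain
    strength, e.g. points = [(0, 40), (64, 25), (160, 10), (255, 4)]."""
    xs, ys = zip(*points)
    return np.interp(np.arange(num_levels), xs, ys)

def normalise_residual(residual, denoised, points):
    """Divide out the brightness-dependent grain strength so that
    'noisier because darker' no longer encodes scene structure."""
    lut = scaling_lut(points)
    strength = lut[np.clip(denoised, 0, 255).astype(np.uint8)]
    return residual / np.maximum(strength, 1e-6)
```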
Perhaps a third thing is that the level of noise in the blacks and the 'grain size' or other statistical properties tell you something about the kind of film stock being used, but I think those things are captured by the film grain simulation model.
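On the simulation model itself: as I understand it, AV1-style grain synthesis generates a small grain patch by running an autoregressive filter over white noise and then scales it with a brightness-dependent lookup like the one above, so grain size/correlation is part of the transmitted parameters. A simplified sketch, not the spec's actual coefficient layout or patch sizes:

```python
import numpy as np

def synthesize_grain(height, width, ar_coeffs, seed=0):
    """Grain field from a causal autoregressive filter over white noise.
    ar_coeffs maps (dy, dx) offsets (dy < 0, or dy == 0 and dx < 0) to weights;
    larger weights produce coarser, more correlated grain."""
    rng = np.random.default_rng(seed)
    pad = max(max(abs(dy), abs(dx)) for dy, dx in ar_coeffs)
    g = np.zeros((height + pad, width + 2 * pad))
    noise = rng.standard_normal(g.shape)
    for y in range(pad, height + pad):
        for x in range(pad, width + pad):
            val = noise[y, x]
            for (dy, dx), c in ar_coeffs.items():
                val += c * g[y + dy, x + dx]
            g[y, x] = val
    return g[pad:, pad:width + pad]

# Example: mildly correlated grain, visibly 'chunkier' than pure white noise.
coeffs = {(0, -1): 0.35, (-1, 0): 0.35, (-1, -1): 0.10, (-1, 1): 0.10}
patch = synthesize_grain(64, 64, coeffs)
```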
Possibly there are some other artefacts, like evidence of special effects, post-processing, etc.