
230 points | perryflynn | 1 comment
john01dav:
Even with all of this onerous encryption and DRM, it's not hard to find pirated copies of movies. It makes me think that the sacrifice in ownership rights for the theaters over their equipment isn't worth it.
perryflynn:
It also contains watermarks, so theatres that fail to prevent recording will run into serious issues. See https://dcpomatic.com/forum/viewtopic.php?t=2372
coppsilgold:
If the watermarking software is widely available (as it appears to be), then an adversary has everything they need to corrupt any existing watermark.

These steganographic watermarks rely on the adversary having no knowledge of the embedding process. If the method is particularly ingenious (e.g. one of the inputs is centrally stored entropy, which the extractor recovers by trialing every stored value), then knowledge of the process alone may not be sufficient to obtain a high-quality result (as too much corruption may be required), but it could still inform the next step:

If you obtain two or more copies of the decrypted content, you can diff them and work out what you need to corrupt even without knowledge of the watermarking process. This probably won't work with pirated CAMs, or will take quite an effort to find the signal in the noise.
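The diff-and-scrub idea can be sketched in a few lines. This is a toy model with invented frame data, not any real watermarking scheme: wherever two copies agree, the pixel carries no per-copy mark and is kept; wherever they differ, the difference must contain the per-copy marks, so a randomized mix of the two copies corrupts both.

```python
import numpy as np

def scrub_watermark(copy_a: np.ndarray, copy_b: np.ndarray,
                    rng: np.random.Generator) -> np.ndarray:
    """Blend two differently watermarked copies of the same frame.

    Pixels where the copies agree are kept as-is; pixels where they
    differ are replaced by a random per-pixel pick between the two
    copies, damaging both watermarks at once.
    """
    same = copy_a == copy_b
    pick_a = rng.random(copy_a.shape) < 0.5     # coin flip per pixel
    blended = np.where(pick_a, copy_a, copy_b)
    return np.where(same, copy_a, blended)

# Toy frames: identical cover content plus a one-pixel mark per copy.
rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(8, 8))
mark_a = cover.copy(); mark_a[0, 0] += 1    # copy A's watermark bit
mark_b = cover.copy(); mark_b[7, 7] += 1    # copy B's watermark bit
out = scrub_watermark(mark_a, mark_b, rng)
```

As the parent comment notes, the hard part in practice is that real copies also differ in compression noise, so finding the watermark signal among those differences takes real effort.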

Edit: After some more research, it looks like they don't actually watermark the distributed data (the movie sent to cinemas). The projector inserts its unique watermark during playback. There may also be other, secret watermarks inserted by distributors that aren't documented anywhere.

azalemeth:
I'm friends with a professor of steganography. Apparently most cinema watermarking is based on heavily error-correcting codes in the wavelet domain that are specifically designed to resist collusion attacks: the statistical properties of the "indistinguishable from random" noise are such that it is highly correlated among different viewers, so any two marked copies are far more likely to share bits than to differ. I'm relatively sure the obvious attacks, like taking the mean of two images (or randomly picking one of them), have been considered.

Put it this way: you've got a huge amount of cover data (a hard drive's worth) and a desire to encode at most, what, 128 bits across about two hours, with as much redundancy as possible. There are plenty of patents that explain in detail how.
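The redundancy budget is easy to make concrete. Assuming a 24 fps feature and the 128-bit payload mentioned above (both round numbers, not a real spec):

```python
# Back-of-the-envelope: cover budget for a 128-bit payload in a feature.
fps = 24
runtime_s = 2 * 60 * 60                  # two hours
frames = fps * runtime_s                 # total frames in the feature
payload_bits = 128
frames_per_bit = frames // payload_bits  # whole frames carrying each bit
```

That works out to 172,800 frames, or 1,350 full frames of cover per payload bit, before even counting the pixels within each frame.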

My friend considers this a moderately distasteful problem and mostly works on steganalysis, identifying where steganographic techniques have been used, as he thinks it's more interesting and frequently more morally justified...