And further, I can imagine someone who actually has such footage of themselves being threatened with its release, then using the former narrative as a cover story if it did get out. Is there anything preventing AI-generated images, video, etc. from always being detectable by software that can tell whether something is AI-generated? What if random noise were added: would the "this is AI" signal persist just as strongly as the cues that make the footage look real to a human?
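For concreteness, here's roughly what I mean by adding random noise: a minimal sketch (assuming numpy and Pillow, with a hypothetical input file "frame.png") that perturbs pixel values by a couple of intensity levels, which is usually invisible to a human viewer but changes the exact bit pattern any detector would examine.

```python
# Minimal sketch: add imperceptible Gaussian noise to an image.
# Assumes numpy and Pillow are installed; "frame.png" is a hypothetical input.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("frame.png").convert("RGB"), dtype=np.float32)

# Perturb each pixel by noise with a standard deviation of ~2 intensity
# levels (out of 255), well below what a human viewer would notice.
noise = np.random.normal(loc=0.0, scale=2.0, size=img.shape)
noisy = np.clip(img + noise, 0, 255).astype(np.uint8)

Image.fromarray(noisy).save("frame_noisy.png")
```

The question is whether a perturbation that small would wash out whatever statistical fingerprint a detector relies on, while leaving the footage looking just as real to a person.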