
Gemini 2.5 Flash Image

(developers.googleblog.com)
1092 points meetpateltech | 1 comment
atleastoptimal ◴[] No.45034465[source]
I can imagine an automated blackmail bot that scrapes image, video, and voice samples from anyone with even the most meager online presence, generates high-resolution videos of that person committing the most horrid acts, and then threatens to share those videos with the person's family, friends, and business contacts unless it is paid $5,000 in cryptocurrency to an anonymous address.

And further, I can imagine someone who actually has such footage of themselves being threatened with release, then using the former narrative as a cover story if it got out. Is there anything guaranteeing that AI-generated images, video, etc. will always be detectable by software that can intuit whether something is AI? What if random noise is added: would the "is AI" signal persist as robustly as the cues that make the footage look real to a human?

replies(7): >>45034627 #>>45034841 #>>45035145 #>>45035482 #>>45041034 #>>45041047 #>>45060319 #
1. goosejuice ◴[] No.45034627[source]
SynthID is claimed to be designed to persist through several kinds of modification. I suspect the attacks you mention will happen, but by actors with deep pockets, like a nation-state with access to models that don't produce watermarks.
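As a rough intuition for why added noise doesn't automatically erase a watermark, here is a minimal sketch of classic additive spread-spectrum watermarking (an illustration only; SynthID's actual scheme is not public and works differently). A keyed pseudo-random pattern is added to the signal, and detection correlates against that pattern; because the correlation averages over many samples, independent random noise mostly cancels out:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 100_000  # number of samples (stand-in for pixel values)

# Secret key seeds the watermark pattern; without the key, the
# pattern looks like noise and is hard to target for removal.
pattern = np.random.default_rng(42).standard_normal(N)

# Unwatermarked "content" with much larger variance than the mark.
image = rng.standard_normal(N) * 10.0

alpha = 1.0                      # embedding strength
marked = image + alpha * pattern # embed: x' = x + alpha * w

def detect(signal: np.ndarray, pattern: np.ndarray) -> float:
    """Correlation score; near alpha if the mark is present, near 0 if not."""
    return float(signal @ pattern) / len(pattern)

# Attacker adds random noise to try to break detection.
attacked = marked + rng.standard_normal(N) * 2.0

score_clean = detect(image, pattern)     # no watermark
score_attacked = detect(attacked, pattern)  # watermark + noise
print(score_clean, score_attacked)
```

The attacked score stays close to `alpha` while the clean score hovers near zero, because the noise is uncorrelated with the keyed pattern and its contribution shrinks like 1/sqrt(N). This is only the textbook baseline; defeating a real scheme that survives cropping, re-encoding, and editing is a harder problem.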