
Gemini 2.5 Flash Image

(developers.googleblog.com)
1092 points meetpateltech | 1 comment | source
atleastoptimal ◴[] No.45034465[source]
I can imagine an automated blackmail bot that scrapes image, video, and voice samples from anyone with even the most meagre online presence, generates high-resolution videos of that person doing the most horrid acts, and then threatens to share those videos with the person's family, friends, and business contacts unless it is paid $5,000 in cryptocurrency to an anonymous address.

And further, I can imagine someone who actually is being threatened with the release of real footage using the former narrative as a cover story if it were released. Is there anything guaranteeing that AI-generated images, video, etc. will always be detectable by software that can tell whether something is AI-generated? And if random noise is added, would the "is AI" signal survive as well as the cues that make the footage look real to a human?
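(Editor's note: the noise question has a concrete shape. Many provenance schemes embed a pseudo-random watermark pattern and detect it by correlation; the sketch below is a toy spread-spectrum illustration, not any vendor's actual scheme, showing that mild noise barely moves the detection score while noise heavy enough to kill the signal also visibly degrades the image. All names and parameters here are made up for illustration.)

```python
import numpy as np

# Toy spread-spectrum watermark: the generator adds a secret
# pseudo-random pattern; the detector computes normalized
# correlation against that pattern. Purely illustrative.
H, W = 64, 64
SECRET = np.random.default_rng(42).standard_normal((H, W))  # detector's key

def embed(image, strength=15.0):
    # Add the secret pattern at a fixed (here exaggerated) strength.
    return image + strength * SECRET

def detect(image):
    # Normalized correlation with the secret pattern, in [-1, 1].
    x = image - image.mean()
    s = SECRET - SECRET.mean()
    return float((x * s).sum() / (np.linalg.norm(x) * np.linalg.norm(s)))

rng = np.random.default_rng(0)
clean = rng.uniform(0, 255, (H, W))   # stand-in for a real photo
marked = embed(clean)                 # stand-in for AI output

score_clean = detect(clean)                                        # ~0
score_marked = detect(marked)                                      # high
score_noisy = detect(marked + rng.standard_normal((H, W)) * 5)     # still high
score_heavy = detect(marked + rng.standard_normal((H, W)) * 500)   # buried
```

The takeaway: small perturbations leave this kind of correlation detector almost untouched, and by the time random noise drowns the watermark it has also drowned the image. Real watermarks (and adversarial attacks on them) are far more sophisticated, but the basic trade-off is the same.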

replies(7): >>45034627 #>>45034841 #>>45035145 #>>45035482 #>>45041034 #>>45041047 #>>45060319 #
1. atonse ◴[] No.45060319[source]
I’m also wondering if the opposite will be true: that people might claim something real is AI-generated in order to discredit it.