Gemini 2.5 Flash Image

(developers.googleblog.com)
1092 points by meetpateltech | source
atleastoptimal ◴[] No.45034465[source]
I can imagine an automated blackmail bot that scrapes image, video, and voice samples from anyone with even the most meager online presence, generates high-resolution videos of that person doing the most horrid acts, and then threatens to share those videos with the person's family, friends, and business contacts unless it is paid $5,000 in cryptocurrency to an anonymous address.

And further, I can imagine someone who really does have such footage of themselves being threatened with its release, then using the former narrative as a cover story were it to be released. Is there anything preventing AI-generated images, video, etc. from always being detectable by software that can intuit whether something is AI? If random noise is added, does the "is AI" signal persist as strongly as the cues that make the footage seem real to a human?

replies(7): >>45034627 #>>45034841 #>>45035145 #>>45035482 #>>45041034 #>>45041047 #>>45060319 #
shibeprime ◴[] No.45034841[source]
I’m more bullish on cryptographic receipts than on AI detectors. Capture signing (C2PA) plus an identity bind could give verifiable origin. The hard parts, in my view, are adoption and platform plumbing.

If we had a trustworthy way to verify proof-of-human-made content, then anything missing those credentials would be a red flag.

https://iptc.org/news/googles-pixel-10-phone-supports-c2pa-u...
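
To make the receipt idea concrete, here is a minimal sketch of the sign-at-capture / verify-later flow, using SHA-256 plus Ed25519 from Python's cryptography package. This is only the core primitive, not the real C2PA format: an actual manifest carries assertions, region hashes, and a certificate chain, so treat the key handling and names below as illustrative assumptions.

    import hashlib
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # At capture time: a key held in the camera's secure element signs
    # a digest of the image bytes as they come off the sensor.
    device_key = Ed25519PrivateKey.generate()
    image_bytes = b"...raw image data..."
    digest = hashlib.sha256(image_bytes).digest()
    receipt = device_key.sign(digest)

    # Later: anyone with the device's public key can check the receipt.
    public_key = device_key.public_key()
    try:
        public_key.verify(receipt, digest)
        print("valid: bytes match what the device signed")
    except InvalidSignature:
        print("invalid: altered, or never signed by this device")

    # Flip a single bit and verification fails outright; the receipt
    # is all-or-nothing, unlike a statistical "is this AI?" score.
    tampered = bytearray(image_bytes)
    tampered[0] ^= 1
    try:
        public_key.verify(receipt, hashlib.sha256(bytes(tampered)).digest())
    except InvalidSignature:
        print("tampered copy rejected")

Note this only proves the bytes came from that key; it says nothing about whether the scene in front of the sensor was real.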

replies(1): >>45040914 #
arsome ◴[] No.45040914[source]
This seems absolutely silly. It's not hard to take a photo of a photo, and there are both analog (building a lightbox) and digital (modifying the sensor input) means that would make this entirely trivial to spoof.