
110 points by jonbaer | 3 comments
1. kedv No.45073168
Would be nice if you guys open-sourced the detection code, similar to the way C2PA is open.
replies(2): >>45073352, >>45074799
2. harshreality No.45073352
That's like asking Adobe to open source their C2PA signing keys.

AI watermarking is adversarial: anyone who generates a watermarked output either doesn't care or wants the watermark removed.

C2PA is cooperative: publishers want the signatures intact, so that the audience has trust in the publisher.

By "adversarial" and "cooperative", I mean in relation to the primary content distributor. There's an adversarial aspect to C2PA, too: bad actors want leaked keys so they can produce fake video and images with metadata attesting that they're real.

A lot of people have a strong incentive to defeat the AI watermark. Leaked C2PA keys will be a problem, but probably a minor one. C2PA is merely an additional assurance, beyond the reputation and representation of the publishing entity, of the origin of a piece of media.
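
For intuition, here is a minimal sketch of the signing asymmetry described above. It uses Ed25519 from Python's `cryptography` package as a stand-in for C2PA's actual manifest signing (which wraps signatures and assertions in a much richer format); the point is only that verification needs just the public key, while a leaked private key lets anyone forge "authentic" provenance.

    # Illustrative sketch of the cooperative provenance model, NOT the
    # actual C2PA implementation. Ed25519 via the `cryptography` package
    # stands in for C2PA's manifest signing.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    # Publisher side: the private key stays secret; only the signature ships
    # with the media. Leak this key and anyone can forge provenance.
    signing_key = Ed25519PrivateKey.generate()
    media_bytes = b"...image bytes plus provenance metadata..."
    signature = signing_key.sign(media_bytes)

    # Audience side: verification needs only the public key, so publishing
    # the verification path costs the publisher nothing.
    public_key = signing_key.public_key()
    try:
        public_key.verify(signature, media_bytes)
        print("provenance intact")
    except InvalidSignature:
        print("media or metadata was altered")
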

3. spidersouris No.45074799
There is a repo: https://github.com/google-deepmind/synthid-text
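
For a rough sense of what text-watermark detection code does, here is a toy sketch in the spirit of green-list LLM watermarking: a keyed hash marks part of the vocabulary as "green" at each step, and detection scores how often the text lands on green tokens. This is an illustrative simplification, not SynthID-Text's actual tournament-sampling scheme (see the linked repo for that); the vocabulary and function names here are hypothetical.

    # Toy statistical watermark detector, in the spirit of green-list LLM
    # watermarking. NOT SynthID-Text's actual algorithm; names are hypothetical.
    import hashlib

    VOCAB = ["the", "a", "cat", "dog", "sat", "ran", "on", "mat", "rug", "fast"]

    def seed_green_list(prev_token: str, key: str = "secret-key") -> set[str]:
        """Pseudorandomly mark roughly half the vocabulary 'green', keyed on the previous token."""
        green = set()
        for tok in VOCAB:
            digest = hashlib.sha256(f"{key}|{prev_token}|{tok}".encode()).digest()
            if digest[0] % 2 == 0:
                green.add(tok)
        return green

    def score_text(tokens: list[str], key: str = "secret-key") -> float:
        """Fraction of tokens drawn from the green list: ~0.5 for unwatermarked
        text, noticeably higher if the sampler was biased toward green tokens."""
        hits = sum(
            tok in seed_green_list(prev, key)
            for prev, tok in zip(tokens, tokens[1:])
        )
        return hits / max(len(tokens) - 1, 1)

    print(score_text("the cat sat on the mat".split()))
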