
110 points jonbaer | 2 comments | source
mingtianzhang ◴[] No.45073410[source]
1. One-sample detection is impossible. These detection methods work at the distributional level, more like a two-sample test in statistics, which means you need to collect a large amount of generated text from the same model before the test becomes statistically significant. Detection from a single short piece of generated text is theoretically impossible. For example, imagine two different Gaussian distributions: you can never be 100% certain whether a single sample came from one or the other, since both have the same support (a numerical sketch follows after point 3).

2. Adding a watermark can reduce an LLM's capabilities, which is why I don't think watermarks will be widely adopted.

3. Consider this simple task: ask an LLM to repeat exactly what you said. Is the resulting text authored by you, or by the AI?
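
To make point 1 concrete, here is a minimal numerical sketch (my own toy example, not from any real detector): two overlapping Gaussians stand in for the score distributions of human and model text. The posterior from one sample never reaches certainty, while the summed log-likelihood ratio over many samples becomes decisive. The means, equal priors, and sample sizes are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
mu_p, mu_q = 0.0, 1.0  # toy "human" vs "model" score distributions

# One sample drawn from the "model" distribution: with equal priors, the
# posterior P(model | x) is never 0 or 1, because both Gaussians have
# positive density everywhere (same support).
x = rng.normal(mu_q, 1.0)
posterior = norm.pdf(x, mu_q, 1.0) / (norm.pdf(x, mu_p, 1.0) + norm.pdf(x, mu_q, 1.0))
print(f"single sample: P(model | x) = {posterior:.3f}")

# Many samples: the summed log-likelihood ratio grows roughly linearly in n,
# so the two-sample-style test becomes arbitrarily confident.
for n in (1, 10, 100, 1000):
    xs = rng.normal(mu_q, 1.0, size=n)
    llr = np.sum(norm.logpdf(xs, mu_q, 1.0) - norm.logpdf(xs, mu_p, 1.0))
    print(f"n = {n:4d}: total log-likelihood ratio = {llr:8.1f}")
```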

replies(1): >>45073419 #
mingtianzhang ◴[] No.45073419[source]
For images/video/audio, removing such a watermark is very simple: add noise to the generated image, then use an open-source diffusion model to denoise it, and the watermark is broken. Or, for an autoregressive model, use an open-source model to regenerate the output with teacher forcing lol.
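
As a rough illustration of the add-noise-then-denoise attack, here is a sketch using the open-source diffusers library; the model name, strength, and prompt are assumptions, and the settings that actually break a given watermark would need tuning.

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

# Any open-source img2img diffusion model works; this checkpoint is just an example.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

watermarked = Image.open("watermarked.png").convert("RGB").resize((512, 512))

result = pipe(
    prompt="a photo",      # generic caption; could also come from an image captioner
    image=watermarked,
    strength=0.3,          # how much noise to inject before denoising:
                           # high enough to disturb the watermark statistics,
                           # low enough to keep the visible content
    guidance_scale=5.0,
)
result.images[0].save("dewatermarked.png")
```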
replies(3): >>45073483 #>>45075121 #>>45081558 #
1. kimi ◴[] No.45075121[source]
For text, have a big model generate the "intelligent" answer, and then ask a local LLM to rephrase it.
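
A rough sketch of that paraphrase attack with a local model via Hugging Face transformers; the model choice, prompt, and sampling settings are illustrative assumptions.

```python
from transformers import pipeline

# Local, presumably watermark-free model; the checkpoint name is just an example.
paraphraser = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.2",
    device_map="auto",
)

watermarked_answer = "..."  # text produced by the big (possibly watermarked) model

prompt = (
    "Rewrite the following text in your own words, keeping the meaning:\n\n"
    f"{watermarked_answer}\n\nRewritten text:"
)

# Sampling perturbs the token statistics further, on top of the rewording itself.
out = paraphraser(prompt, max_new_tokens=512, do_sample=True, temperature=0.7)
print(out[0]["generated_text"])
```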
replies(1): >>45076846 #
2. mingtianzhang ◴[] No.45076846[source]
Yeah exactly, you can always do that by using another model that doesn't have the watermark.