
110 points jonbaer | 4 comments
utilize1808 ◴[] No.45073112[source]
I feel this is not the scalable/right way to approach this. The right way would be for human creators to apply their own digital signatures to the original pieces they created (specialised chips on camera/in software to inject hidden pixel patterns that are verifiable). If a piece of work lacks such signature, it should be considered AI-generated by default.
replies(3): >>45073155 #>>45073302 #>>45073834 #
shkkmo ◴[] No.45073155[source]
That seems like a horrible blow to anonymity and pseudonymity that would also empower identity thieves.
replies(3): >>45073244 #>>45073831 #>>45074209 #
utilize1808 ◴[] No.45073244[source]
Not necessarily. It's basically document signing with key pairs, old tech that is known to work. Its purpose is not to identify the individual creators, but to verify that a piece of work was created by a process/device that is not touched by AI.
replies(2): >>45073863 #>>45076968 #
1. BoiledCabbage ◴[] No.45073863{3}[source]
And what happens when someone uses their digital signature to sign an essay that was generated by AI?
replies(1): >>45073997 #
2. utilize1808 ◴[] No.45073997[source]
You can't. It may be set up such that your advisor could sign it if they know for sure that you wrote it yourself without using AI.
replies(1): >>45074751 #
3. akoboldfrying ◴[] No.45074751[source]
> You can’t.

I like the digital signature approach in general, and have argued for it before, but this is the weak link. For photos and video, this might be OK if there's a way to reliably distinguish "photos of real things" from "photos of AI images"; for plain text, you basically need a keystroke-authenticating keyboard on a computer with both internet access and copy and paste functionality securely disabled -- and then you still need an authenticating camera on the user the whole time to make sure they aren't just asking Gemini on their phone and typing its answer in.

replies(1): >>45077091 #
4. shkkmo ◴[] No.45077091{3}[source]
> for plain text, you basically need a keystroke-authenticating keyboard on a computer with both internet access and copy and paste functionality securely disabled -- and then you still need an authenticating camera on the user the whole time to make sure they aren't just asking Gemini on their phone and typing its answer in.

Which is why I say it would destroy privacy/pseudonymity.

> For photos and video, this might be OK if there's a way to reliably distinguish "photos of real things" from "photos of AI images";

I suspect if you think about it, many of the issues with text also apply to images and videos.

You'd need a secure enclave. You'd need a chain of signatures and images to allow human editing. You'd need a way of revoking the public keys of not just insecure software, but bad actors. You'd need verified devices to prevent AI tooling from using the editing software to alter the image... etc.

These are only the flaws I can think of in like 5 minutes. You've created a huge incentive to break an incredibly complex system. I have no problem comfortably saying that the end result would be a complete lack of privacy for most people, while those with power/knowledge would still be able to circumvent it.