I feel this is not a scalable or reliable way to approach this. The better approach would be for human creators to apply their own digital signatures to the original pieces they create (specialised chips in cameras, or in software, that inject verifiable hidden pixel patterns). If a piece of work lacks such a signature, it should be considered AI-generated by default.
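To make that concrete, here is a minimal sketch of the signing step in Python using the `cryptography` package. The key handling is hand-waved: on a real camera the private key would live in a tamper-resistant secure element and never leave the hardware, and the image bytes below are just a placeholder for actual sensor output.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# Hypothetical per-device key pair; in practice the private key would be
# burned into a secure chip at manufacture and be non-exportable.
device_key = ed25519.Ed25519PrivateKey.generate()
public_key = device_key.public_key()

image_bytes = b"...raw sensor data for one captured frame..."

# Sign the raw capture the moment it leaves the sensor.
signature = device_key.sign(image_bytes)

# Anyone with the device's public key can check provenance; a missing or
# invalid signature means the work is treated as AI-generated by default.
try:
    public_key.verify(signature, image_bytes)
    print("valid signature: capture attributed to this device")
except InvalidSignature:
    print("no valid signature: treat as AI-generated")
```

The hidden-pixel-pattern variant is the same idea expressed as a watermark embedded in the image itself rather than a detached signature: it survives re-encoding better, but is harder to make cryptographically strong.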
Then you just point the special camera at a screen showing the AI content.
Sure. But then it will invite more scrutiny, because you are presenting a "capture" of a screen rather than the raw content.
Actually, come to think of it, I suppose a "special camera" could also record metadata like focus distance, zoom, and accelerometer/gyroscope rates. These could be correlated with the image content to detect this kind of screen re-capture.
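A sketch of how that telemetry could be bound to the capture, under the same hypothetical signing scheme as above: hash the pixels and the sensor readings together and sign the digest, so neither can be swapped out after the fact. The field names are illustrative, not from any real camera firmware.

```python
import hashlib
import json

from cryptography.hazmat.primitives.asymmetric import ed25519

device_key = ed25519.Ed25519PrivateKey.generate()

# Hypothetical telemetry recorded alongside the frame. A verifier could
# flag suspicious combinations, e.g. a short fixed focus distance with
# near-zero rotation rates, consistent with filming a flat screen.
telemetry = {
    "focus_distance_m": 0.45,
    "zoom_factor": 1.0,
    "accel_ms2": [0.02, 9.79, 0.11],    # gravity plus slight hand shake
    "gyro_rads": [0.001, 0.000, 0.002],
}

image_bytes = b"...raw sensor data for one captured frame..."

# Bind pixels and telemetry into one signed digest so they can only be
# verified together.
payload = hashlib.sha256(
    image_bytes + json.dumps(telemetry, sort_keys=True).encode()
).digest()
signature = device_key.sign(payload)
```

Verification would then have two parts: check the signature, and check that the claimed optics and motion are plausible for the scene the image actually shows.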