The Coalition for Content Provenance and Authenticity https://c2pa.org/
"The Coalition for Content Provenance and Authenticity, or C2PA, provides an open technical standard for publishers, creators and consumers to establish the origin and edits of digital content. It’s called Content Credentials, and it ensures content complies with standards as the digital ecosystem evolves."
I first heard about this here, which provides a good overview of how it works right now: https://www.tbray.org/ongoing/When/202x/2025/09/18/C2PA-Inve...
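To make the idea a bit more concrete, here is a minimal sketch of what checking Content Credentials can look like, assuming the open-source c2patool CLI is installed and prints the manifest store as JSON; the field names used below (active_manifest, manifests, claim_generator) are assumptions based on current tooling, not anything guaranteed by the spec.

    # Minimal sketch: inspect C2PA Content Credentials on an image file.
    # Assumes `c2patool` is on PATH and emits the manifest store as JSON.
    import json
    import subprocess
    import sys

    def read_content_credentials(path: str):
        """Return the parsed manifest store for `path`, or None if absent."""
        result = subprocess.run(["c2patool", path], capture_output=True, text=True)
        if result.returncode != 0:
            # No manifest found or the file could not be read.
            return None
        try:
            return json.loads(result.stdout)
        except json.JSONDecodeError:
            return None

    if __name__ == "__main__":
        store = read_content_credentials(sys.argv[1])
        if store is None:
            print("No Content Credentials found.")
        else:
            # Field names here are assumptions about c2patool's JSON output.
            print("Active manifest:", store.get("active_manifest"))
            for label, manifest in store.get("manifests", {}).items():
                print(label, "-", manifest.get("claim_generator", "unknown"))

A present, intact manifest only tells you the file carries signed provenance; the absence of one proves nothing either way.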
If so, how do we know you ain't AI?
Well, we cannot know for sure, but we can look at your profile, see your comment history, spot irregularities in your text, etc. - same with the picture. The photographer is a real person, with a history. And as far as I know, at very high resolutions a complete fabrication that could not easily be spotted would take a lot of work.
My question isn’t fact or fiction, it’s just a question.
So yes, maybe it is all completely fabricated, but that is unlikely, as it would destroy his reputation. And as far as I know, it is almost impossible to create high-resolution pictures like these that cannot easily be exposed as fraud (though the tech might have improved since I last looked into it).
But if you want it far more general, you can also ask Socrates what we can know for sure at all.
A subjective answer: you have been there and know it to be real from personal experience.
A more general answer: as long as we humans sufficiently interact with reality, we will have a repository of life experience to benchmark against.
Once we cease to do that and become the product of a life in front of the screen, we won't know anymore.
Edit: This place is relatively close to where I live.
The very same question, as it stands, could literally be repeated under any article and is definitely offtopic, since it is a general debate about how to spot AI and about the limits of knowledge. Interesting offtopic, so tolerated here if the debate that follows is interesting, but offtopic nevertheless. More ontopic would have been to state why these specific pictures seem fake.
We only know now because generated things still have artifacts. That is slowly changing. Whether the article was written by an AI right now absolutely cannot be fully known.
Spotting fakes and then not trusting any visual media at all?
Doubt it. Sensors are getting better as well. (More real data, harder to fake).
But I will likely stop debating with internet strangers and instead focus on verified humans, preferably in the real world.
Sensors can’t tell if something is indistinguishable.
(Hint: reality is infinitely complex and can only be partly modelled; in other words, real-world sensor data will always differ from fake sensor data.)
So yes, the tech gets better and better, and low-resolution stuff can be quite convincing already, so in a few years even quite high resolution (by today's standards) might be easy to fake. But even in 20 years, I really, really doubt that the best AI stuff will be close to what the best sensors can deliver.
https://images.squarespace-cdn.com/content/v1/58a13eba20099e...
I disagree. My evidence is that the trendline of improvement for generated pictures clearly points toward pictures that will be indistinguishable.
This is independent of whether or not the world needs to be simulated. The on-the-ground evidence shows that the projected trendline is that all AI-generated pictures and videos will be indistinguishable in 5 years or less. Right now I would say a PORTION of AI output already is, and that portion increases every month.
Are you aware of the advancements in astronomy, for example, made possible by ever-improving sensor tech?
So ... if you say AI pictures might soon get to the point where they are indistinguishable from an ordinary mobile phone camera, then I would say maybe, but I really doubt it will be within 5 years.
But when you say sensor data in general will be indistinguishable from AI-generated data, then I disagree.