You raise a really amazing point! One that should get more attention in these discussions on HN! I'm a painter in my spare time. I think it
is okay to sit down and paint a picture of Harrison Ford (on velvet, maybe) and sell it on Etsy or something if you want to. Before you accuse me of hypocrisy, let me stipulate: either way, it would
not be okay for someone to buy that painting and use it in an ad campaign insinuating that their soap had been endorsed by Harrison Ford. As an art director, I know it has obviously never been okay to ask someone to paint Harrison Ford and use that picture in a soap ad. I jump through all kinds of hoops and do tons of checking on my artists' work to make sure it doesn't violate anyone else's IP, let alone anyone's likeness rights.
But that's all well known. My argument for why my selling that painting is okay, while an AI company's neural network doing the same thing and selling the result would not be, is a lot more subtle, and it goes to a question I don't think has been addressed properly: what's the difference between my neurons seeing a picture of Harrison Ford and painting it, and artificial neurons owned by a company doing the same thing? And what if I traced a photo of Ford and painted that, versus doing his face from memory?
(As a side note, my friend in art school had an obsession with Jewel, the singer. He painted her dozens of times from memory. He was not an AI, just a really sweet guy.)
To answer why I think it's okay to paint Jewel or Ford and sell your painting, I kind of have to fall back on three ideas:
(1) Interpretation: You are not selling a picture of them; you're selling your personal take on your experience of them. My experience of watching Indiana Jones movies as a kid and then making a painting is not the same as holding a compressed JPEG in my head: my own cognitive experience has significantly changed my perceptions in ways that come out in the final artwork, enough that whatever I paint is the product of some kind of personal evolution. The item for sale is not a picture of Harrison Ford; it's my feelings about Harrison Ford.
(2) Human-centrism: My neurons are not 1:1 copies of everything I've witnessed. Human brains aren't simply compression algorithms the way LLMs or diffusion models are. AI doesn't bring cognitive experience to its replication of art, and if it seems to, we have to ask whether that isn't just a simulacrum of multiple styles lifted from elsewhere and laid over the art it's been asked to produce. There's an anti-human argument that we do exactly the same thing when we paint Indiana Jones after being exposed to Picasso. But here's a thought: we are not a model. Or rather, each of us is a model. Buying my picture of Indiana Jones is a lot like buying my model, and a lot less like buying a platonic picture of Harrison Ford.
(3) Tools, as you brought up: The more primitive the tools, the harder it is to truly copy something. It takes a human a year to make four seconds of animation; it takes an AI no time at all to copy it. By some function of work and effort, one can show that an artwork is at least a product of one's own labor, if not completely original.
I'm throwing these ideas out as a bit of a challenge to the HN community, because I think they're attributes that have been under-discussed in debates about the difference between AI-generated artwork and human art (and possibly a starting point for a human-centric way of understanding that difference).
I'm really glad you made me think about this and raised the point!
[edit] Upon re-reading, I think points 1 and 2 are mostly congruent. Thanks for your patience.