In a way, the “it just does what humans do but faster” argument is starting to follow the “a number can’t be illegal” trajectory.
Sounds like the AI model should be paying royalties to every affected artist for the right to sample their work.
If it's a 1:1 copy, I agree. If it's a case of "that looks vaguely like the style xyz likes to use", I disagree.
And I assume you'd run into plenty of situations where multiple people would each discover that it's their "unique" style being imitated. Kind of like that story about a hipster who threatened to sue a magazine for using his image in an article, only to find out the photo was of a different man entirely: he dresses and styles himself like countless other hipsters, so much so that he couldn't tell himself apart from the other guy.
It is not.
The AI model can only regurgitate stolen mash-ups of other people’s work.
Everything it produces is trivially derivative of the work it has consumed.
Where it succeeds, it does so because it correlated stolen human-written descriptions with stolen human-produced images.
Where it fails, it does so because it cannot understand what it's reproducing, and it regurgitates the wrong stolen images for the given prompt.
AI models are incapable of producing anything but purely derivative stolen works, and the (often unwilling) contributors to their training datasets should be entitled to copyright protections that extend to those derivative works.
That’s true whether we’re discussing DALL-E or GitHub Copilot.
We all stand on the shoulders of giants. If you're being very dismissive, it's easy to say the same about most artists: they're not genre-redefining; they carve out a niche that works (read: sells) for them.