
114 points valgaze | 1 comment
cercatrova ◴[] No.32461248[source]
Title should be more like: artists concerned that the Stable Diffusion AI model makes images look human-made.
replies(1): >>32461290 #
antiterra ◴[] No.32461290[source]
It also apparently copies the artist’s logo.

In a way, the “it just does what humans do but faster” argument is starting to follow the “a number can’t be illegal” trajectory.

replies(1): >>32461342 #
cercatrova ◴[] No.32461342[source]
Either way, I support AI art and AI in other fields. The fact that artists are mad it's going to take their jobs does not seem like a legitimate reason to halt human progress. It's simply inevitable, the way things are going.
replies(6): >>32461451 #>>32461558 #>>32461593 #>>32461841 #>>32462157 #>>32462306 #
teakettle42 ◴[] No.32461841[source]
Stealing people’s work to serve as your training set is not human progress.

Sounds like the AI model should be paying royalties to every affected artist for the right to sample their work.

replies(3): >>32461865 #>>32462631 #>>32462693 #
luckylion ◴[] No.32462631{3}[source]
What do you think the artists "trained on"? Didn't they ever see anyone else's art? Don't you think it influenced their art?

If it's a 1:1 copy, I agree. If it's a "that looks vaguely like the style that xyz likes to use", I disagree.

And I assume you'd run into plenty of situations where multiple people would each discover that it's their unique style being imitated. Kind of like that story about a hipster who threatened to sue a magazine for using his image in an article, only to find out that he dresses and styles himself like countless other people, so much so that he couldn't tell himself apart from the guy actually pictured.

replies(1): >>32463266 #
teakettle42 ◴[] No.32463266{4}[source]
Your entire point hinges on a false assumption: that “training” a human artist (or programmer) is the same as training an AI model.

It is not.

The AI model can only regurgitate stolen mash-ups of other people’s work.

Everything it produces is trivially derivative of the work it has consumed.

Where it succeeds, it succeeds because it successfully correlated stolen human-written descriptions to stolen human-produced images.

Where it fails, it does so because it cannot understand what it’s regurgitating, and it regurgitates the wrong stolen images for the given prompt.

AI models are incapable of producing anything but purely derivative stolen works, and the (often unwilling) contributors to their training dataset should be entitled to copyright protections that extend to those derivative works.

That’s true whether we’re discussing DALL-E or GitHub Copilot.

replies(1): >>32466628 #
luckylion ◴[] No.32466628{5}[source]
> The AI model can only regurgitate stolen mash-ups of other people’s work.

We all stand on the shoulders of giants. If you're that dismissive, it's easy to say the same about most artists: they're not genre-redefining; they carve out a niche that works (read: sells) for them.