
125 points by akeck | 1 comment
charcircuit No.33579956
Looking at the comment section, it seems that people struggle to understand how it works and think it is literally copying parts of people's images.

Educating people about such a technical topic seems very difficult, especially since people get emotional about their work being used.

kadoban No.33580089
It's worse than copying parts of images, it's replacing artists.

I know because I'm literally working on setting up Dreambooth to do what I'd otherwise have to pay an artist to do.

And not only is it replacing artists, it's using their own work to do so. None of these could exist without being trained on the original artwork.

Surely you can imagine why they're largely not happy?

toomuchtodo No.33580251
No one is happy when technology renders them obsolete or drives the marginal cost of what they produce to zero.
6gvONxR4sf7o No.33580415
Traditionally technology renders people obsolete because the technologists figure out how to do something better than those people. Nobody's happy when it comes for them, but that's life. Someone invents the camera, and art is changed forever.

In this case, technologists figured out how to exploit people's work without compensating them. A camera is possible without the artists it replaces. Generative modeling is not. It's fundamentally different.

If people figured out how to generate this kind of art without exploiting uncompensated unwilling artists' free labor, it would be a different story.

visarga No.33580597
> If people figured out how to generate this kind of art without exploiting uncompensated unwilling artists' free labor, it would be a different story.

No, it wouldn't. It would still compete against artists. We'd have worse models at first, and it would take time until someone licensed enough images to improve them, but the capability exists and we know about it; it's too late to stop.

By the way, Stable Diffusion has been fine-tuned with Midjourney image-text pairs. So now we also have AI trained on AI images.

orbital-decay No.33581051
>So now we also have AI trained on AI images

It doesn't matter, and never did. All large models (including SD) are already trained on other models' output, since there is simply no way to hand-build a high-quality tagged dataset at the scale they need. Smaller models are used to classify and caption the data for larger ones, then the process is repeated for even larger models, combined with whatever manually labeled data you have. Humans mainly select the data sources and curate the bootstrapping process. This kind of curated training actually produces better results.
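The bootstrapping loop the comment describes can be sketched in miniature. This is a toy illustration only: the function names (`tag_with_model`, `curate`, `train_model`) and the lookup-table "model" are made-up stand-ins for real captioning and training pipelines, not any actual library's API.

```python
# Toy sketch of model-assisted dataset bootstrapping:
# a small seed model labels an unlabeled pool, humans curate
# the labels, and the curated pairs train the next model.
# All names here are hypothetical stand-ins, not a real API.

def tag_with_model(model, images):
    """Use an existing (smaller) model to caption unlabeled images."""
    return [(img, model(img)) for img in images]

def curate(pairs, keep):
    """Human curation step, modeled as a filter predicate."""
    return [(img, cap) for img, cap in pairs if keep(img, cap)]

def train_model(pairs):
    """Stand-in for training: a lookup-table 'model' over the pairs."""
    table = dict(pairs)
    return lambda img: table.get(img, "unknown")

# Round 1: seed model labels the pool, bad items are curated away.
seed_model = lambda img: f"a photo of {img}"
pool = ["cat", "dog", "noise"]
labeled = tag_with_model(seed_model, pool)
curated = curate(labeled, keep=lambda img, cap: img != "noise")

# The curated machine-labeled data trains the next, larger model;
# in practice the loop repeats at ever larger scale.
bigger_model = train_model(curated)
print(bigger_model("cat"))
```

The point the comment makes is visible even in the toy: the labels the bigger model learns from were themselves produced by a model, with humans only filtering, not writing, them.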