Educating people about such a technical topic seems very difficult, especially since people get emotional about their work being used.
I know because I'm literally working on setting up Dreambooth to do what I'd otherwise have to pay an artist to do.
And not only is it replacing artists, it's using their own work to do so. None of these models could exist without being trained on the original artwork.
Surely you can imagine why they're largely not happy?
In this case, technologists figured out how to exploit people's work without compensating them. A camera is possible without the artists it replaces. Generative modeling is not. It's fundamentally different.
If people figured out how to generate this kind of art without exploiting the free labor of unwilling, uncompensated artists, it would be a different story.
No, it wouldn't. It would still compete with artists. We'd have worse models at first, and it would take time until someone licensed enough images to improve them, but the capability exists and we know about it; it's too late to stop.
By the way, Stable Diffusion has been fine-tuned on Midjourney image-text pairs. So now we also have AI trained on AI images.
It doesn't matter, and never did in the first place. All large models (including SD) are already trained on other models' output, since there's simply no way to hand-build a high-quality tagged dataset at the scale they need. Smaller models are used to classify and caption the data for larger ones, then the process is repeated for even larger models, folding in whatever manually labeled data you have. Humans select the data sources and curate the bootstrapping process; the tagging itself is automated. This kind of curated training actually produces better results.
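
To make that bootstrapping concrete, here's a minimal sketch of model-assisted tagging, assuming the Hugging Face transformers library and the off-the-shelf BLIP captioning model; the directory name and loop are hypothetical illustrations, not any lab's actual pipeline:

    # A small pretrained captioning model (BLIP) generates text labels
    # for raw images, producing (image, caption) pairs that could feed
    # a larger model's training set.
    from pathlib import Path

    from PIL import Image
    from transformers import BlipForConditionalGeneration, BlipProcessor

    MODEL_ID = "Salesforce/blip-image-captioning-base"
    processor = BlipProcessor.from_pretrained(MODEL_ID)
    model = BlipForConditionalGeneration.from_pretrained(MODEL_ID)

    def caption_image(path: str) -> str:
        """Return a machine-generated caption for one image."""
        image = Image.open(path).convert("RGB")
        inputs = processor(images=image, return_tensors="pt")
        output_ids = model.generate(**inputs, max_new_tokens=30)
        return processor.decode(output_ids[0], skip_special_tokens=True)

    # Tag a (hypothetical) directory of unlabeled images.
    for img_path in Path("raw_images").glob("*.jpg"):
        print(img_path.name, "->", caption_image(str(img_path)))

In practice a step like this is paired with automated filtering (e.g., discarding pairs whose image and caption a model like CLIP scores as dissimilar), which is the curation described above: humans pick the sources and thresholds, machines do the labeling.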