Sounds like the AI model should be paying royalties to every affected artist for the right to sample their work.
> At CC, we believe that, as a matter of copyright law, the use of works to train AI should be considered non-infringing by default, assuming that access to the copyright works was lawful at the point of input.
https://creativecommons.org/2021/03/04/should-cc-licensed-co...
Do you think AlphaGo was motivated by a burning desire to discover better board game strategies?
You will know it when you see it.
It's not backwards. It's the same as if a human artist studied it.
Backwards would be thinking that any creative work you make is a derivative work of the many creative works you've seen in the past but aren't copying from.
If it's a 1:1 copy, I agree. If it's a "that looks vaguely like the style that xyz likes to use", I disagree.
And I assume you'd run into plenty of situations where multiple people discover that it's their "unique" style being imitated. Kind of like that story about a hipster threatening to sue a magazine for using his image in an article, only to find out the photo was of someone else entirely: he dresses and styles himself like countless other hipsters, so much so that he couldn't tell himself apart from the other guy.
This is not something new. https://www.realistartresource.com/the-tradition-of-copying
Copying of existing works is part of how an art student learns. That this one happens to be a math model at its core is an interesting philosophical problem. Using other people's work as your training set is exactly what art students do - and neither their practice copies nor the works they have yet to produce are royalty encumbered.
Nor should a mathematical model's output be. It happens that the developers working on this problem have gotten it so that it can do its learning and creation many times faster than an art student in a gallery... but it still can't get hands and faces right.
This is a statement that is pretty quickly disproven if you actually pay attention to the generated art. Lately I've been seeing TikTok videos where people are using DALL-E to create "new aesthetics" - "vampwave", "neon apocalypse", etc.
Those discussions aside, what I meant by social reasons was people wanting to see some tech go away because it's automating jobs.
Software development is heavily labor-constrained; if Copilot can make everyone a 10x developer, we'll get slightly less than 10x the features-per-year on an industry-wide basis after contributors shuffle around.
The effect will be most pronounced in application development, where a team of 1-5 is about ideal for a coherent app made with taste, and that team could produce the output of 10-50 developers. Not such a bad thing.
Unfortunately this is unlikely to be true for visual art. I don't predict that making artists ten times as productive will meet a latent demand for ten times as much art. Could be wrong, but my sense is that about as much art is purchased as people want to buy.
It’s not a human artist, and it can only regurgitate mash-ups of work stolen from others.
It is not.
The AI model can only regurgitate stolen mash-ups of other people’s work.
Everything it produces is trivially derivative of the work it has consumed.
Where it succeeds, it succeeds because it correlated stolen human-written descriptions with stolen human-produced images.
Where it fails, it does so because it cannot understand what it’s regurgitating, and it regurgitates the wrong stolen images for the given prompt.
AI models are incapable of producing anything but purely derivative stolen works, and the (often unwilling) contributors to their training dataset should be entitled to copyright protections that extend to those derivative works.
That’s true whether we’re discussing DALL-E or GitHub Copilot.
“Training” an art student and training an AI model are vastly different, and your equating the two is, frankly, nonsensical and absurd.
An art student isn’t a trivial weighted model capable only of mapping stolen text prompts to stolen image representations of them.
> It happens that the developers working on this problem have gotten it so that it can do its learning and creation many times faster than an art student in a gallery
It hasn’t learned anything.
It correlates stolen textual descriptions with stolen images, and then regurgitates mash-ups of the same.
This type of AI model cannot produce anything other than purely derivative work stolen from others.
If I were to dabble in sci-fi art and made something that fit the art style of Stewart Cowley ( https://archive.org/details/terrantradeauthorityhandbookstar... ), do I need to credit the artist?
When it comes to playing around in blender - my designs are obviously derivative of others - do I need to credit those artists? Even the ones that I don't remember more than a "I saw this print at a comic art show once..."
How original does my own work have to be before it isn't a mashup of stolen images that I half remember?
Probably, yes.
> When it comes to playing around in blender - my designs are obviously derivative of others - do I need to credit those artists?
Again, probably, but nobody is likely to care if you’re not actually selling your work.
> Even the ones that I don't remember more than a "I saw this print at a comic art show once..."
Then that’s not the prompt you should be starting with if your goal is to produce an original work.
> How original does my own work have to be before it isn't a mashup of stolen images that I half remember?
How original does it have to be before it’s not plagiarism?
Now, remove your capacity for individual creativity, such that you cannot come up with an original idea. All you can do is plagiarize.
That’s the difference, here. This isn’t an AI trained to have creative thought, a genuine understanding of what it’s making, and original ideas. It’s an AI trained to regurgitate mashups of plagiarized works based on weighted correlation between the prompt and the (also plagiarized) descriptions of the works it’s regurgitating.
We all stand on the shoulders of giants. If you're very dismissive, I think it's easy to say the same about most artists. They're not genre-redefining, they carve out their niche that works (read: sells) for them.
Nobody will care where their art comes from any more than you care about how your food's field was plowed; and all their lives will be better for it.
Keep your comment somewhere. Come back to it when you look at a new tool that promises to merrily trample on your entire field’s income and provide an endless source of “usable, I guess” substitutes. Let it provide solace as you stare into a future with no room for the craft you’ve spent a lifetime honing.