
114 points | valgaze | 1 comment
adamhi ◴[] No.32461913[source]
I won't pretend that this isn't a troubling development for digital artists, maybe even existentially so. I hope not.

One thing that makes me a little hopeful is that every image I've generated with DALL-E 2, even the best ones, would require non-trivial work to make them "good".

There's always something wrong, and you can't tell the model "the hat should be tilted about 5 degrees", or "the hands should not look like ghoulish pretzels, thanks".

There's also this fundamental limitation that the model can give you a thing that fits some criteria, but it has no concept of the relationships between elements in a composition, or why things are the way they are. It's never exactly right.

It's like the model gets you the first 90%, and then you need a trained painter to get the second 90%.

But yeah, it will certainly devalue the craft, don't get me wrong. And anyone who is callously making comparisons to buggy whip manufacturers should consider how it would (excuse me, will) feel when AI code generators pivot to being more than a copilot, and suddenly the development team at your office is a lot smaller than it used to be, and maybe you aren't on it anymore.

If you spend a lifetime mastering some skill, and then it's just not valued anymore, it sucks, and you get pretty mad about it.

replies(6): >>32462125 #>>32462272 #>>32462281 #>>32462452 #>>32462520 #>>32463297 #
bambax ◴[] No.32462520[source]
> If you spend a lifetime mastering some skill, and then it's just not valued anymore, it sucks, and you get pretty mad about it.

That is absolutely not what the OP is complaining about. They're not saying that because AI is good, they won't find work. They are complaining that, in training AI for art generation, the builders took works from living artists without their consent, and that in doing so they allowed generators to make new art in the style of those artists.

The example given is that Stable Diffusion even tries to reproduce logos/signatures of living artists.

If I produced a rubbish search engine that bore a malformed "gigggle" logo using Google colors, how long do you think I would survive before being sued out of existence by an army of Google lawyers?

But that's exactly what many AI generators are doing here.

Edit: the first version of this comment confused Stable Diffusion with OpenAI, and stated that OpenAI was owned by Google. OpenAI has a strong partnership with Microsoft. Stable Diffusion is not OpenAI. Sorry for the errors.

replies(2): >>32462609 #>>32462806 #
WASDx ◴[] No.32462806[source]
This is frankly what humans have always done, learning and taking inspiration from other artists. Now we have made a machine that can do the same thing.

In the case of exact reproductions, we have copyright and IP laws.

replies(2): >>32462917 #>>32463500 #
ThisIsMyAltFace ◴[] No.32462917[source]
No. This argument comes up over and over and over again and it is wrong.

These models are not learning or being inspired in the same sense humans are. The laws that apply to humans should not be applied to them.

replies(2): >>32464376 #>>32465309 #
cercatrova ◴[] No.32464376[source]
You're right, AI-generated pictures should not be copyrightable, as is the case today. People should be free to mix and remix pictures via AI as much as they desire.
replies(1): >>32473444 #
PaulsWallet ◴[] No.32473444[source]
This is where I imagine things are going to get into trouble, because how are you going to determine what is AI and what isn't? Especially when Stable Diffusion is directly classifying artists and cloning their signatures and watermarks. What about things that are started with AI and refined by a human?