
114 points by valgaze | 33 comments
1. adamhi ◴[] No.32461913[source]
I won't pretend that this isn't a troubling development for digital artists, maybe even existentially so. I hope not.

One thing that makes me a little hopeful is that every image I've generated with DALL-E 2, even the best ones, would require non-trivial work to make them "good".

There's always something wrong, and you can't tell the model "the hat should be tilted about 5 degrees", or "the hands should not look like ghoulish pretzels, thanks".

There's also this fundamental limitation that the model can give you a thing that fits some criteria, but it has no concept of the relationships between elements in a composition, or why things are the way they are. It's never exactly right.

It's like the model gets you the first 90%, and then you need a trained painter to get the second 90%.

But yeah, it will certainly devalue the craft, don't get me wrong. And anyone who is callously making comparisons to buggy whip manufacturers should consider how it would (excuse me, will) feel when AI code generators pivot to being more than a copilot, and suddenly the development team at your office is a lot smaller than it used to be, and maybe you aren't on it anymore.

If you spend a lifetime mastering some skill, and then it's just not valued anymore, it sucks, and you get pretty mad about it.

replies(6): >>32462125 #>>32462272 #>>32462281 #>>32462452 #>>32462520 #>>32463297 #
2. cercatrova ◴[] No.32462125[source]
Have you seen Stable Diffusion, the AI in question in the tweet? The images are astoundingly good, much better than DALL-E 2 even.

https://twitter.com/StableDiffusion/

replies(5): >>32462253 #>>32462268 #>>32463111 #>>32463372 #>>32467856 #
3. hgomersall ◴[] No.32462253[source]
The shadow was wrong in the man-tree-beach-sea image. I guess it might be artistically wrong, but I'm sceptical of what that even means in this context.
4. exitb ◴[] No.32462268[source]
Oh, that just drives the point home. Any flaw you can find in these models to build your hope on will just get corrected in the next iteration, only a couple months down the line.
5. hbosch ◴[] No.32462272[source]
> and you can't tell the model "the hat should be tilted about 5 degrees"

Actually, I think you can.

6. xg15 ◴[] No.32462281[source]
> It's like the model gets you the first 90%, and then you need a trained painter to get the second 90%.

Call me a doomer, but I think this makes the possible consequences even worse.

Remember the 80/20 rule.

A lot of modern product innovation is not really about improving quality - rather, it's about introducing lower-quality versions of existing products that are significantly cheaper than the original but still "good enough".

DALL-E 2 and friends could fall into the same bucket. If they produce artwork that is objectively worse than a human-painted version would be, but still "good enough" for many mundane use cases - stock photos, concept art, etc. - we might still see wide adoption and the displacement of human artists from those use cases, along with an overall drop in the quality of artwork.

replies(3): >>32462296 #>>32462539 #>>32462592 #
7. saurik ◴[] No.32462296[source]
Ugh... I somehow hadn't yet even considered the part where we all have to tolerate almost every single image we see during the day being generated by some creepy AI model; but OF COURSE that's how this is going to play out :( :(. I mean, many of the products I purchase on Amazon don't even spell-check their product marketing images as it stands...
8. Aransentin ◴[] No.32462452[source]
Considering the staggering speed at which image generation is improving, that 10% gap will only continue to close.

Starting an art education right now, for example, seems likely to be extremely nerve-wracking, as your talents may very well be woefully obsolete by the time you graduate; the exception perhaps being the top-0.1% talents who will feed the models of the future with new material.

replies(1): >>32463866 #
9. bambax ◴[] No.32462520[source]
> If you spend a lifetime mastering some skill, and then it's just not valued anymore, it sucks, and you get pretty mad about it.

That is absolutely not what the OP is complaining about. They're not saying that because AI is good, they won't find work. They are complaining that in training AI for art generation, builders took works from living artists without their consent, and that in so doing allowed generators to make new art in the style of said artists.

The example given is that Stable Diffusion even tries to reproduce logos/signatures of living artists.

If I produced a rubbish search engine that bore a malformed "gigggle" logo using Google colors, how long do you think I would survive before being sued out of existence by an army of Google lawyers?

But that's exactly what many AI generators are doing here.

Edit: the first version of this comment confused Stable Diffusion with OpenAI, and stated that OpenAI was owned by Google. OpenAI has a strong partnership with Microsoft. Stable Diffusion is not OpenAI. Sorry for the errors.

replies(2): >>32462609 #>>32462806 #
10. Barrin92 ◴[] No.32462539[source]
>but still "good enough" for many mundane usecases - stock photos, concept art, etc - we might still see a wide adoption and displacement of human artists

You're not afraid of DALL-E; you're afraid of an army of Fiverr workers stealing your job. Stock photos and low-quality art have already been commodified. Very few people commission bespoke stock art from an individual working artist; they get a subscription to one of the gazillion stock-content factories for a few cents.

11. notahacker ◴[] No.32462592[source]
> If they produce artwork that is objectively worse than a human-painted version would be, but still "good enough" for many mundane usecases - stock photos, concept art, etc - we might still see a wide adoption and displacement of human artists from those usecases - along with an overall drop in quality of artworks.

If people are happy with "good enough" they generally don't hire a digital artist in the first place (the whole reason DALL-E can exist is because there's a lot of digital imagery on relevant subjects/objects available to it to train, and there's even more an internet search away) or if they do, they get one off Fiverr.

For mocking up quick concepts, that might be different, but that's a workflow improvement.

12. Kiro ◴[] No.32462609[source]
Not only is your rant about Google misplaced considering DALL-E is OpenAI, not Google, but the thread is also not complaining about DALL-E. It's about Stable Diffusion (https://stability.ai/blog/stable-diffusion-announcement), which is explicitly trained on works by working artists. That's why it tries to reproduce their logos.
replies(2): >>32462720 #>>32462803 #
13. v64 ◴[] No.32462720{3}[source]
Dall-E has also been trained on watermarked art. Here [1] [2] are some examples from images I've generated exhibiting that.

[1] https://ibb.co/Q86zDSw

[2] https://ibb.co/njvLMQ2

14. bambax ◴[] No.32462803{3}[source]
You're right of course! I don't know what I was thinking. Dall-e is MS. I will edit.
replies(2): >>32463120 #>>32463646 #
15. WASDx ◴[] No.32462806[source]
This is frankly what humans have always done, learning and taking inspiration from other artists. Now we have made a machine that can do the same thing.

In the case of exact reproductions, we have copyright and IP laws.

replies(2): >>32462917 #>>32463500 #
16. ThisIsMyAltFace ◴[] No.32462917{3}[source]
No. This argument comes up over and over and over again and it is wrong.

These models are not learning or being inspired in the same sense humans are. The laws that apply to humans should not be applied to them.

replies(2): >>32464376 #>>32465309 #
17. Thiez ◴[] No.32463111[source]
Seems the account got suspended.
replies(1): >>32464339 #
18. alex_young ◴[] No.32463120{4}[source]
Still wrong. DALL-E is OpenAI.
replies(1): >>32463173 #
19. bambax ◴[] No.32463173{5}[source]
Yes, that's what the edited comment says. Dall-e is OpenAI; OpenAI has strong links with MS.
20. tracerbulletx ◴[] No.32463297[source]
MidJourney is a lot better at making very convincing and usable output in one shot.
21. a_f ◴[] No.32463372[source]
Also worth checking out: the submissions on the subreddit, https://old.reddit.com/r/StableDiffusion/

Some of them really are outstandingly good. Beyond what I expected, and I have had access to DALL-E 2.

22. greysphere ◴[] No.32463500{3}[source]
It feels like a big stretch to consider an algorithm to be 'inspired'. Where are the bits that correspond to 'inspiration'? Seems like that would answer a lot of big questions in philosophy.
replies(1): >>32464429 #
23. Kiro ◴[] No.32463646{4}[source]
Thanks! Good edit. For the record I agree with your comment but got distracted by the error.
24. p1esk ◴[] No.32463866[source]
So art training will become more like pro sports training, where "success" means being in the top 100 or so in the world. Note that it does not prevent one from doing art (or sports) as a hobby. People didn't stop playing chess after Kasparov lost to Deep Blue.
replies(1): >>32464371 #
25. cercatrova ◴[] No.32464339{3}[source]
Wow, that was quite fast. Check out the subreddit then, like someone else mentioned: https://old.reddit.com/r/StableDiffusion/
26. apatil ◴[] No.32464371{3}[source]
Given how quickly AI image generation, and creativity generally, has progressed, I think it's perfectly plausible that within ten years we will be able to tell an AI "create a work of art that is unique, highly meaningful and that would be very difficult or impossible for most humans to create with their hands," and will get a work of art that is, in blinded assessments, competitive with the work of any master.

If that happens, I agree that the top 100 human artists in the world will likely have jobs, but they won't be successful in the sense that their work is uniquely valued by society. We pay to see the very most talented humans perform tasks that have been successfully automated, such as chess and lifting heavy objects, not because we need the service they provide but because we get an emotional kick out of seeing other humans perform way outside the normal range of human abilities.

replies(1): >>32465131 #
27. cercatrova ◴[] No.32464376{4}[source]
You're right, AI-generated pictures should not be copyrighted, as is the case today. People should be free to mix and remix pictures via AI as much as they desire.
replies(1): >>32473444 #
28. krapp ◴[] No.32464429{4}[source]
Where are the neurons that correspond to "inspiration?" It's algorithms all the way down.
replies(1): >>32480572 #
29. archagon ◴[] No.32465131{4}[source]
I’m willing to bet money it will never happen. People said the same about self-driving cars, but the initial razzle-dazzle blinds people to the actual dullness of the algorithms, and obscures their limitations. AI can only recombine what has already been created. It has no ability to imbue art with meaning or to push the medium forward.

What you describe can only happen with general intelligence, not these fancy neural nets. If anything, they will become powerful tools to help artists augment their creativity.

30. slowmovintarget ◴[] No.32465309{4}[source]
Exactly. The relevant law is with regard to the use of artwork on the part of the people who feed the index. Artists should have a say on whether their work gets included in a training set, if their works are not public domain.
31. ◴[] No.32467856[source]
32. PaulsWallet ◴[] No.32473444{5}[source]
This is where I imagine things are going to get into trouble, because how are you going to determine what is AI and what isn't? Especially when Stable Diffusion is directly classifying artists and cloning their signatures and watermarks. What about things that are started with AI and refined by a human?
33. greysphere ◴[] No.32480572{5}[source]
I claim that calling computer algorithms "inspired" is a big stretch.

I claim humans can be inspired.

I don't claim to know how human inspiration happens, or if neurons have anything to do with it. (They may, but I make no claim). Not being able to describe the process by which human inspiration happens doesn't invalidate either of my claims.

If there is a satisfactory non-bit-based explanation of how computer algorithms achieve inspiration, I would accept that too. With computers we have the advantage that their activity is conveniently summarized by their programs, which are represented in bits, so expecting an explanation in that form is, I think, reasonable.

The defense of the claim of human inspiration is (1) we have that word for the concept, and (2) we have thousands of years of thought, philosophy, and literature giving support and definition to the concept.