
553 points bookofjoe | 10 comments
adzm ◴[] No.43654878[source]
Adobe is the one major company trying to be ethical with its AI training data and no one seems to even care. The AI features in Photoshop are the best around in my experience and come in handy constantly for all sorts of touchup work.

Anyway I don't really think they deserve a lot of the hate they get, but I do hope this encourages development of viable alternatives to their products. Photoshop is still pretty much peerless. Illustrator has a ton of competitors catching up. After Effects and Premiere for video editing are getting overtaken by DaVinci Resolve -- though for motion graphics it is still hard to beat After Effects. Though I do love that Adobe simply uses JavaScript for its expression and scripting language.

replies(36): >>43654900 #>>43655311 #>>43655626 #>>43655700 #>>43655747 #>>43655859 #>>43655907 #>>43657271 #>>43657436 #>>43658069 #>>43658095 #>>43658187 #>>43658412 #>>43658496 #>>43658624 #>>43659012 #>>43659378 #>>43659401 #>>43659469 #>>43659478 #>>43659507 #>>43659546 #>>43659648 #>>43659715 #>>43659810 #>>43660283 #>>43661100 #>>43661103 #>>43661122 #>>43661755 #>>43664378 #>>43664554 #>>43665148 #>>43667578 #>>43674357 #>>43674455 #
AnthonyMouse ◴[] No.43659810[source]
> Adobe is the one major company trying to be ethical with its AI training data and no one seems to even care.

It's because nobody actually wants that.

Artists don't like AI image generators because they have to compete with them, not because of how they were trained. How they were trained is just the most plausible claim they can make against them if they want to sue OpenAI et al over it, or to make a moral argument that some kind of misappropriation is occurring.

From the perspective of an artist, a corporation training an AI image generator in a way that isn't susceptible to moral or legal assault is worse, because then it exists and they have to compete with it and there is no visible path for them to make it go away.

replies(7): >>43659874 #>>43660487 #>>43662522 #>>43663679 #>>43668300 #>>43670381 #>>43683088 #
Sir_Twist ◴[] No.43660487[source]
I'd say that is a bit of an ungenerous characterization. Is it possible that it could be both? That while artists may well feel under attack in terms of competition, there is also a genuine ethical dilemma at hand?

If I were an artist, and I made a painting and published it to a site which was then used to train an LLM, I would feel as though the AI company had treated me disingenuously, competition or not. Intellectual property laws aside, I think there is a social contract being broken when a publicly shared work is then used without the artist's direct, explicit permission.

replies(4): >>43660625 #>>43660937 #>>43660970 #>>43661337 #
1. AnthonyMouse ◴[] No.43660970[source]
> Is it possible that it could be both? That while artists maybe do feel under attack in terms of competition, that there is a genuine ethical dilemma at hand?

The rights artists have over their work are economic rights. The most important fair use factor is how the use affects the market for the original work. If Disney is lobbying for copyright term extensions and you want to make art showing Mickey Mouse in a cage with the CEO of Disney as the jailer, that's allowed even though you're not allowed to open a movie theater and show Fantasia without paying for it, and even though (even because!) Disney would not approve of you using Mickey to oppose their lobbying position. And once the copyright expires you can do as you like.

So the ethical argument against AI training is that the AI is going to compete with them and make it harder for them to make a living. But substantially the same thing happens if the AI is trained on some other artist's work instead. Whose work it was has minimal impact on the economic consequences for artists in general. And being one of the artists who got a pittance for the training data is little consolation either.

The real ethical question is whether it's okay to put artists out of business by providing AI-generated images at negligible cost. If the answer is no, it doesn't really matter which artists were in the training data. If the answer is yes, it doesn't really matter which artists were in the training data.

replies(3): >>43661004 #>>43661497 #>>43662059 #
2. card_zero ◴[] No.43661497[source]
> But substantially the same thing happens if the AI is trained on some other artist's work instead.

You could take that further and say that "substantially the same thing" happens if the AI is trained on music instead. It's just another kind of artwork, right? Somebody who was going to have an illustration by [illustrator with distinctive style] might choose to have music instead, so the music is in competition, so all that illustrator's art might as well be in the training data, and that doesn't matter because the artist would get competed with either way. Says you.

replies(1): >>43665550 #
3. becquerel ◴[] No.43661582[source]
It crushes the orphans very quickly, and on command, and allows anyone to crush orphans from the comfort of their own home. Most people are low-taste enough that they don't really care about the difference between hand-crushed orphans and artisanal hand-crushed orphans.
replies(1): >>43668188 #
4. pastage ◴[] No.43662059[source]
Actually, moral rights are what allow you to say no to AI. They are also a big part of copyright, and more important in places where fair use does not exist to the extent it does in the US.

Further, making a variant of a famous art piece under copyright might very well be a derivative. There are court cases here from just a few years before the AI boom where a format shift from photo to painting was deemed to be a derivative. The picture generated with "Painting of an archaeologist with a whip" would almost certainly be deemed a derivative if it went through the same court.

replies(1): >>43665639 #
5. AnthonyMouse ◴[] No.43665550[source]
If you type "street art" as part of an image generation prompt, the results are quite similar to typing "in the style of Banksy". They're direct substitutes for each other, neither of them is actually going to produce Banksy-quality output and it's not even obvious which one will produce better results for a given prompt.

You still get images in a particular style by specifying the name of the style instead of the name of the artist. Do you really think this is no different than being able to produce only music when you want an image?

replies(1): >>43667749 #
6. AnthonyMouse ◴[] No.43665639[source]
> Actually moral rights is what allow you to say no to AI.

The US doesn't really have moral rights, and it's not clear they would even be constitutional there, since the copyright clause explicitly requires "promote the progress" and "limited times", and many aspects of "moral rights" would be violations of the First Amendment. Whether they exist in some other country doesn't really help you when it's US companies doing it in the US.

> Further making a variant of a famous art piece under copyright might very well be a derivative.

Well of course it is. That's what derivative works are. You can also produce derivative works with Photoshop or MS Paint, but that doesn't mean the purpose of MS Paint is to produce derivative works, or that it's Microsoft, rather than the user purposely creating a derivative work, who should be responsible for that.

replies(1): >>43667546 #
7. fc417fc802 ◴[] No.43667546{3}[source]
Well, one could argue that this ought to be a discussion of morality and social acceptability rather than legality. After all, the former can eventually lead to the latter. However, if you make that argument you immediately run into the issue that there clearly isn't broad consensus on this topic.

Personally I'm inclined to liken ML tools to backhoes. I don't want the law to force ditches to be dug by hand. I'm not a fan of busywork.

8. card_zero ◴[] No.43667749{3}[source]
This hinges on denying that artists have distinctive personal styles. Instead your theory seems to be that styles are genres, and that the AI only needs to be trained on the genre, not the specific artist's output, in order to produce that artist's style, which under this theory is equivalent to the generic style.

My counter-argument is "no". Ideally I'd elaborate on that. So ummm ... no, that's not the way things are. Is it?

replies(1): >>43669069 #
9. wizzwizz4 ◴[] No.43668188{3}[source]
You know "puréed orphan extract" is just salt, right? You can extract it from seawater in an expensive process that, nonetheless, is way cheaper than crushing orphans (not to mention the ethical implications). Sure, you have to live near the ocean, but plenty of people do, and we already have distribution networks to transport the resulting salt to your local market. Just one fist-sized container is the equivalent of, like, three or four dozen orphans; and you can get that without needing a fancy press or an expensive meat-sink.
10. int_19h ◴[] No.43669069{4}[source]
The argument here isn't so much that individual artists don't have their specific styles, but rather whether AI actually tracks that, or whether using "in the style of ..." is effectively a substitute for identifying the more general style category to which this artist belongs.