
553 points bookofjoe | 10 comments
adzm ◴[] No.43654878[source]
Adobe is the one major company trying to be ethical with its AI training data and no one seems to even care. The AI features in Photoshop are the best around in my experience and come in handy constantly for all sorts of touchup work.

Anyway I don't really think they deserve a lot of the hate they get, but I do hope this encourages development of viable alternatives to their products. Photoshop is still pretty much peerless. Illustrator has a ton of competitors catching up. After Effects and Premiere for video editing are getting overtaken by Davinci Resolve -- though for motion graphics it is still hard to beat After Effects. Though I do love that Adobe simply uses JavaScript for its expression and scripting language.

f33d5173 ◴[] No.43655907[source]
Adobe isn't trying to be ethical; they are trying to be more legally compliant, because they see that as a market opportunity. On the other hand, artists complain about the legal compliance of AIs not because that is what they care about, but because they see it as their only possible redress against a phenomenon they find distasteful. A legal reality where you can only train AI on content you've licensed would be the worst for everybody bar massive companies, legacy artists included.
_bin_ ◴[] No.43658034[source]
Right, but "distaste" isn't grounds for trying to ban something. There are all kinds of things people and companies do which I dislike but for which there's no just basis for regulation. If Adobe properly licenses all their training data, artists don't have a right to say "well, I think this is bad for creativity and puts my job at risk, ban it!!!" Or more precisely, they have a right to say that, but no moral justification for trying to ban/regulate/sue over it.

I hate Adobe's subscription model as much as the next guy and that's a good reason to get annoyed at them. Adobe building AI features is not.

TeMPOraL ◴[] No.43658454[source]
> Right, but "distaste" isn't grounds for trying to ban something.

It isn't, but it doesn't stop people from trying and hoping for a miracle. That's pretty much all there is to the arguments that image models, as well as LLMs, are being trained in violation of copyright - it's distaste and greed[0], with a slice of basic legalese on top to confuse people into believing the law says what it doesn't (at least yet).

> If Adobe properly licenses all their training data artists don't have a right to say "well i think this is bad for creativity and puts my job at risk, ban it!!!" Or more precisely, they have a right to say that, but no moral justification for trying to ban/regulate/sue over it.

I'd say they have plenty of moral / ethical justification for trying to ban/regulate/sue over it; they just don't have much of a legal one at this point. But that's why they should be trying[1] - they have a legitimate argument that this is an unexpected, undeserved, unfair calamity for them, threatening to derail their lives, and the lives of their dependents, across the entire sector - and therefore that laws should be changed to shield them, or compensate them for the loss. After all, that's what laws are for.

(Let's not forget that the entire legal edifice around recognizing and protecting "intellectual property" is an entirely artificial construct that goes against the nature of information and knowledge, forcing information to behave like physical goods, so it's not unfair to the creators in an economy that's built around trading physical goods. IP laws were built on moral arguments, so it's only fair to change them on moral grounds too.)

--

[0] - Greed is more visible in the LLM theatre of this conflict, because with textual content there's vastly more people who believe that they're entitled to compensation just because some comments they wrote on the Internet may have been part of the training dataset, and are appalled to see LLM providers get paid for the service while they are not. This Dog in the Manger mentality is distinct from that of people whose output was used in training a model that now directly competes with them for their job; the latter have legitimate ethical reasons to complain.

[1] - Even though I am myself for treating training datasets for generative AI as exempt from copyright. I think it'll be better for society in general - but I recognize it's easy for me to say, because I'm not the one being rugpulled out of a career path by GenAI, watching it go from zero to halfway toward automating away the visual arts in just ~5 years.

1. skissane ◴[] No.43659609[source]
> they have a legitimate argument that this is an unexpected, undeserved, unfair calamity for them, threatening to derail their lives, and lives of their dependents, across the entire sector - and therefore that laws should be changed to shield them, or compensate them for the loss. After all, that's what laws are for.

Lots of people have had their lives disrupted by technological and economic changes before - entire careers which existed a century ago are now gone. Given society provided little or no compensation for prior such cases of disruption, what’s the argument for doing differently here?

2. TeMPOraL ◴[] No.43659678[source]
Moral growth and learning from history?
3. skissane ◴[] No.43659856[source]
There’s a big risk that you end up creating a scheme to compensate for technological disruption in one industry and then fail to do so in another, based on the political clout / mindshare / media attention each has - and then there are many people in even worse personal situations (through no fault of their own) who would also miss out.

Wouldn’t a better alternative be to work on improving social safety nets for everybody, as opposed to providing a bespoke one for a single industry?

4. CamperBob2 ◴[] No.43660270[source]
Given society provided little or no compensation for prior such cases of disruption

That's going to be hard for you to justify in the long run, I think. Virtually everybody who ever lost a job to technology ended up better off for it.

5. disconcision ◴[] No.43660652[source]
> Virtually everybody who ever lost a job to technology ended up better off for it.

this feels like a much stronger claim than is typically made about the benefits of technological progress

6. petre ◴[] No.43661387[source]
You're only going to get "AI art" in the future, because artists will have to get a second job at McDonald's to survive. The same old themes all over again. It's like the only music available being Richard Clayderman tunes.
7. CamperBob2 ◴[] No.43661389{3}[source]
Certainly no stronger than the claim I was responding to, which was essentially pining for the return of careers that haven't existed for a century.
8. TeMPOraL ◴[] No.43664103[source]
> Virtually everybody who ever lost a job to technology ended up better off for it.

That's plain wrong, and quite obviously so. You're demonstrating here a very common misunderstanding of the arguments people affected by (or worried about) automation taking their jobs make. In a very concise form:

- It's true that society and humanity so far always benefited from eliminating jobs through technology, in the long term.

- It's not true that society and humanity benefited in the immediate term, due to the economic and social disruption. And, most importantly:

- It's not true that the people who lost jobs to technology were better off for it - those people, those specific individuals, as well as their families and local communities, were screwed over by progress, having their lives permanently disrupted, and in many cases being thrown into poverty for generations.

(Hint: yes, there may be new jobs to replace old ones, but those jobs are there for the next generation of people, not for those who just lost theirs.)

Understanding that distinction - society vs. individual victims - will help make sense of, e.g., why the Luddites destroyed the new mechanized looms and weaving frames. It was not about the technology; it was about capital owners pulling the rug from under them, leaving them and their children to starve.

9. TeMPOraL ◴[] No.43668694{3}[source]
> Wouldn’t a better alternative be to work on improving social safety nets for everybody, as opposed to providing a bespoke one for a single industry?

Yes, but:

1) It's not really an exclusive choice; different people can pursue different angles, including all of them - one can both seek immediate support/compensation for the specific case they're the victim of and seek a longer-term solution for everyone who'd face the same problem in the future.

2) A bespoke solution is much more likely to be achievable than a general one.

3) I don't believe it would be good for society for artists to succeed in curtailing generative AI! But, should they succeed, I imagine the consequences will encourage people to seek the more general solution that mitigates the occupational damage of GenAI while preserving its availability, instead of having to deal with a series of bespoke stopgaps that also kill GenAI entirely.

4) Not that banning GenAI has any chance of succeeding - the most we'd get is it being unavailable in some countries, which would then be at a disadvantage in competition with the countries that embraced it.

Again, I'm not in favor of banning GenAI - on the contrary, I'm in favor of a blanket exception from copyright law for the purpose of training generative models. However, I recognize the plight of artists and other people who are feeling the negative economic impact on their jobs right now (and hell, my own line of work - software development - is among the most at risk in the near to mid-term, too). I wish for a solution that will help them (and others about to be in this situation), but in the meantime, I don't begrudge them for trying to fight it - I think they have every right to. I only have a problem with people who oppose AI because they feel Big AI is depriving them of the opportunity to seek rent from society for the value AI models are creating.

10. int_19h ◴[] No.43669091[source]
It depends on who was harmed. When countries were banning slavery (or serfdom where it was functionally equivalent, as in Russia), slave owners made this very argument - that depriving them of legitimately acquired labor was an undeserved and unfair calamity for them - and were generally compensated.