
220 points by Vt71fcAqt7 | 3 comments
henning[dead post] ◴[] No.41862727[source]
[flagged]
david-gpu ◴[] No.41863751[source]
Do you believe that human artists should pay license fees for all the art that they have ever seen, studied or drawn inspiration from? Whether graphic artists, writers or what have you.
replies(2): >>41863881 #>>41865729 #
1. accrual ◴[] No.41865729[source]
I'm still trying to figure out which side to be on. On one hand I agree with you - there would be little modern art if it wasn't for centuries of preceding inspiration.

On the other hand, at least one suit was making headway as of 2024-08-14, about two months ago [0]. It seems there must be some merit to the GP's claim if the case is moving forward. But again, I'm still trying to figure out where to stand.

[0] https://arstechnica.com/tech-policy/2024/08/artists-claim-bi...

replies(2): >>41867524 #>>41868069 #
2. Lerc ◴[] No.41867524[source]
Or not. They claimed a big win, but it was nothing of the sort; it amounted to not falling completely at the first hurdle. All but one of their claims were dismissed.

The remaining claim may not be a good claim, but it isn't completely laughable.

https://cdn.arstechnica.net/wp-content/uploads/2024/08/Ander... Order-on-Motions-to-Dismiss-8-12-2024.pdf

> In October 2023, I largely granted the motions to dismiss brought by defendants Stability, Midjourney and DeviantArt. The only claim that survived was the direct infringement claim asserted against Stability, based on Stability’s alleged “creation and use of ‘Training Images’ scraped from the internet into the LAION datasets and then used to train Stable Diffusion.”

I think you could have grounds for saying that the construction of LAION violates copyright, which would be covered by this claim. That doesn't necessarily mean training on LAION is itself copyright violation.

None of this has been decided. It might be wrong.

The rest of the case was "Not even wrong"

3. ben_w ◴[] No.41868069[source]
They can both be true.

The learning process is similar, but it isn't identical.

Humans and AI both have the intellectual capacity to violate copyright, but human artists generally know what copyright is, while image generators don't. (Even the LLMs that do understand copyright are easily fooled, and many users complain about them being "lobotomised" when they follow corporate policy rather than user instructions.)

And while there are people like me who really did mean "public domain" or "MIT license" well before even GANs, it's also true that most people couldn't have given informed consent before knowing what these models could do.