
1503 points | participant3 | 2 comments
djoldman | No.43577414
I don't condone or endorse breaking any laws.

That said, trademark laws like life of the author + 95 years are absolutely absurd. The ONLY reason to have any law prohibiting unlicensed copying of intangible property is to incentivize the creation of intangible property. The reasoning is that if you don't allow creators to exclude third-party copying, they will presumably never be compensated for their work, and so will never create it.

Even if the above is assumed true, protection should be afforded for no longer than the time necessary to ensure that creators create.

There are approximately zero people who decide they'll create something if they're protected for 95 years after their death but won't if it's 94 years. I wouldn't be surprised if it was the same for 1 year past death.

For that matter, this argument extends to other criminal penalties, but that's a whole other subject.

replies(18): >>43578724 #>>43578771 #>>43578899 #>>43578932 #>>43578976 #>>43579090 #>>43579150 #>>43579222 #>>43579392 #>>43579505 #>>43581686 #>>43583556 #>>43583637 #>>43583944 #>>43584544 #>>43585156 #>>43588217 #>>43653146 #
noduerme | No.43578932
You're conflating trademark with copyright.

Regardless, it's not just copyright laws that are at issue here. This is reproducing human likenesses - like Harrison Ford's - and integrating them into new works.

So if I want to make an ad for a soap company, and I get an AI to reproduce a likeness of Harrison Ford, does that mean I can use that likeness in my soap commercials without paying him? I can imagine any court asking "how is this not simply laundering someone's likeness through a third party which claims to not have an image / filter / app / artist reproducing my client's likeness?"

All seemingly complicated scams come down to a very basic, obvious, even primitive grift. Someone somewhere in a regulatory capacity is either fooled or paid into accepting that no crime was committed. It's just that simple. This, however, is so glaring that even a child could understand the illegality of it. I'm looking forward to all of Hollywood joining the cause against the rampant abuse of IP by Silicon Valley. I think there are legal grounds here to force all of these models to be taken offline.

Additionally, "guardrails" that prevent 1:1 copies of film stills from being reprinted are not only insufficient; they are evidence that the pirates in this case seek to obscure the nature of their piracy. They are evidence that generative AI is not much more than a copyright laundering scheme, and the obsession with these guardrails is evidence of conspiracy, not some kind of public good.

replies(4): >>43579054 #>>43579108 #>>43579903 #>>43579947 #
planb | No.43579054
> So if I want to make an ad for a soap company, and I get an AI to reproduce a likeness of Harrison Ford, does that mean I can use that likeness in my soap commercials without paying him?

No, you can't! But it shouldn't be the tool that prohibits this. You are not allowed to use existing images of Harrison Ford for your commercial and you also will be sued into oblivion by Disney if you paint a picture of Mickey Mouse advertising your soap, so why should it be any different if an AI painted this for you?

replies(3): >>43579113 #>>43579211 #>>43584236 #
noduerme | No.43579113
Well, precisely. What then is the AI company's justification for charging money to paint a picture of Harrison Ford to its users?

The justification so far seems to have been loosely based on the idea that derivative artworks are protected as free expression. That argument loses currency if these are not considered derivative but more like highly compressed images in a novel, obfuscated compression format. Layers and layers of neurons holding a copy of Harrison Ford's face are novel, but it's hard to see why that's any different legally from running a JPEG of it through some filters and encoding it in base64. You can't just decode it and use it without attribution.
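The "obfuscated copy" point can be made concrete. A minimal sketch (the payload bytes are a stand-in for a real JPEG file): re-encoding data through a chain of transformations makes the bytes unrecognizable, but loses no information, so the round trip yields an exact copy.

```python
import base64
import zlib

# A stand-in for image bytes (a real example would read a JPEG from disk).
original = b"\xff\xd8\xff\xe0" + bytes(range(256)) * 4

# "Filter" and re-encode: deflate, then base64. The result looks nothing
# like the original bytes, but no information has been discarded.
obfuscated = base64.b64encode(zlib.compress(original))

# Decoding reverses the transformation exactly.
recovered = zlib.decompress(base64.b64decode(obfuscated))
assert recovered == original  # a perfect copy, despite the novel encoding
```

However exotic the intermediate format, what matters is whether the original can be recovered from it.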

replies(4): >>43579167 #>>43579762 #>>43580023 #>>43581943 #
jdietrich | No.43580023
It's reasonably well established that large neural networks don't contain copies of the training data, therefore their outputs can't be considered copies of anything. The model might contain a conceptual representation of Harrison Ford's face, but that's very different to a verbatim representation of a particular copyrighted image of Harrison Ford. Model weights aren't copyrightable; it's plausible that model outputs aren't copyrightable, but there are some fairly complicated arguments around authorship. Training an AI model on copyrighted work is highly likely to be fair use under US law, but plausibly isn't fair dealing under British law or a permitted use under Article 5 of the EU Copyright and Information Society Directive.

All of that is entirely separate from trademark law, which would prevent you from using any representation of a trademarked character unless e.g. you can reasonably argue that you are engaged in parody.

replies(1): >>43588975 #
noduerme | No.43588975
From the standpoint of using a human likeness, I don't see the difference between encoding a "conceptual representation" of Ford's face into a model and encoding it into any other digital or analog format from which it can later be decoded into a reasonable facsimile of the original.

I think that calling it a "conceptual representation" over-complicates the issue. At the very least, the model weights encode a process that can produce a copy of their training data. A 300x300 pixel image of Harrison Ford's face is one of an astronomically large space of possible images (on the order of 10^650,000 at 24-bit color). Obviously, only a tiny fraction of all possible images is encoded in the model. Is encoding those particular weights into a diffuser which can select that face by a process of refinement really much different than, say, encoding the image into a set of fractal algorithms, or a set of vectors?
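For scale, the size of the image space versus a model's storage budget can be counted directly (the 24-bit color depth and the parameter/precision figures below are illustrative assumptions):

```python
import math

# Count the space of possible images (assuming 24-bit RGB).
pixels = 300 * 300                      # 90,000 pixels
bits_per_pixel = 24
total_bits = pixels * bits_per_pixel    # 2,160,000 bits per image

# Number of distinct images, as a power of ten: 2^2,160,000 ≈ 10^650,225.
log10_images = total_bits * math.log10(2)
print(f"~10^{round(log10_images):,} possible 300x300 images")

# An illustrative model budget: 10^11 parameters at 16 bits each.
model_bits = 10**11 * 16
# Even stored raw, that budget covers under a million such images --
# a vanishing fraction of the full space.
print(f"raw capacity: ~{model_bits // total_bits:,} images")
```

So whatever the weights hold, it is necessarily a minute, highly selective slice of the possible image space.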

I'd argue that the largest models are akin to a compression method that has simply pre-encoded every word and image they've ingested, such that the "compressed file" is the prompt you give to the AI. Even with billions of weights trained on millions of texts and images, they've only encoded a vanishingly tiny fraction of the entire space. Semantically you could call it something other than a "copy", but functionally how is it any different?
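The compression analogy can be quantified with a back-of-the-envelope calculation (the token count, bits per token, and output file size are all assumed figures):

```python
# Treat the prompt as the "compressed file": how many bits does it carry?
prompt_tokens = 50                 # an assumed typical prompt length
bits_per_token = 17                # ~100k-token vocab -> log2(1e5) ≈ 16.6

prompt_bits = prompt_tokens * bits_per_token   # 850 bits of "compressed" input
jpeg_bits = 100_000 * 8                        # an assumed ~100 kB JPEG output

ratio = jpeg_bits / prompt_bits                # roughly 940x
print(f"prompt: ~{prompt_bits} bits, image: {jpeg_bits} bits, ratio ~{ratio:.0f}x")
# No general-purpose codec achieves anything like this ratio on arbitrary
# data; the missing information must already be pre-encoded in the weights.
```

On this view, the prompt is only an index into material the training process has already stored.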