
451 points croes | 3 comments
mattxxx ◴[] No.43962976[source]
Well, firing someone for this is super weird. It seems like an attempt to censor an interpretation of the law that:

1. Criticizes a highly useful technology

2. Matches a potentially outdated, strict interpretation of copyright law

My opinion: I think using copyrighted data to train models does seem classically illegal. Despite that, humans can read a book, get inspiration, and write a new book without being litigated against. When I look at the litany of derivative fantasy novels, it's obvious they're not all fully independent works.

Since AI is and will continue to be so useful and transformative, I think we just need to acknowledge that our laws did not accommodate this use case, and then we should change them.

replies(19): >>43963017 #>>43963125 #>>43963168 #>>43963214 #>>43963243 #>>43963311 #>>43963423 #>>43963517 #>>43963612 #>>43963721 #>>43963943 #>>43964079 #>>43964280 #>>43964365 #>>43964448 #>>43964562 #>>43965792 #>>43965920 #>>43976732 #
palmotea[dead post] ◴[] No.43963168[source]
[flagged]
ulbu ◴[] No.43963480[source]
these comparisons of llms with human artists copying are just ridiculous. it’s saying “well humans are allowed to break twigs and damage the planet in various ways, so why not allow building a fucking DEATH STAR”.

abstracting llms from their operators and owners and their possible (and probable) ends and the territories they trample upon is nothing short of eye-popping to me. how utterly negligent and disrespectful of fellow people must one be at heart to give any credence to such arguments

replies(3): >>43964105 #>>43964159 #>>43964449 #
staticman2 ◴[] No.43964449[source]
> these comparisons of llms with human artists copying are just ridiculous.

I've come to think of this as the "Performatively failing to recognize the difference between an organism and a machine" rhetorical device that people employ here and elsewhere.

The person making the argument is capable of distinguishing the two things, they just performatively choose not to do so.

replies(1): >>43967517 #
Suppafly ◴[] No.43967517[source]
>The person making the argument is capable of distinguishing the two things, they just performatively choose not to do so.

I think that sort of assumption of insincerity is worse than what you're accusing them of. You might not like their argument, but it's not inherently incorrect to argue that because humans have the right to do something, they have the right to use tools to do it, and they have the right to group together and use those tools at large scale.

replies(1): >>43973795 #
1. staticman2 ◴[] No.43973795[source]
Anyone writing "humans can learn from art, why can't machines?" or something to that effect is performatively conflating an organism and a machine.

My issue is with the rhetoric; if that isn't the rhetoric you are using, I am not talking about you.

replies(1): >>43976263 #
2. Suppafly ◴[] No.43976263[source]
My issue is that your rhetoric of "performatively conflating an organism and a machine" doesn't address the core issue of "humans can learn from art, why can't machines?". You're essentially saying that you don't like the question, so you're refusing to answer it. There is nothing inherently wrong with training machines on existing data; if you want us to believe there is, you need to have some argument for why that would be the case.

Is your argument simply about your interpretation of copyright law and your mentality being that laws are good and breaking them is bad? Because that doesn't seem to be a very informed position to take.

replies(1): >>43976804 #
3. staticman2 ◴[] No.43976804[source]
My stated opinion is that anyone who comes to an AI conversation and says "I can't tell the difference between organisms and computers," or some variation thereof, in fact has no trouble in practice distinguishing between their child/mom/dad/BFF and ChatGPT, and is in fact arguing from a position of bad faith.

"There is nothing inherently wrong with training machines on existing data..." doesn't really conflate a machine with an organism and isn't what I'm talking about.

If you instead had written "I can read The Cat in the Hat to teach my kid to read, so why can't I use it to train an LLM?"

Then I do think you would be asking with a certain degree of bad faith: you are perfectly capable of distinguishing those two things, in practice, in your everyday life. You do not in fact see them as equivalent.

Your rhetorical choice to be unable to tell the difference would be performative.

You seem to think I'm arguing copyright policy. I really am discussing rhetoric.