
S1: A $6 R1 competitor?

(timkellogg.me)
851 points by tkellogg
advael ◴[] No.42960025[source]
I'm strictly speaking never going to think of model distillation as "stealing." It goes against the spirit of scientific research, and besides, every tech company has lost my permission to define what I think of as theft forever.
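For readers outside ML: "distillation" here just means training a smaller or cheaper "student" model to imitate a "teacher" model's outputs. Below is a minimal sketch of the classic soft-label formulation, assuming PyTorch; the tensor shapes and temperature are made up for illustration, and API-based "distillation" of the kind discussed in this thread typically just fine-tunes on the teacher's sampled text instead.

    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, temperature=2.0):
        # Soft-label distillation: push the student's output distribution
        # toward the teacher's softened distribution via KL divergence,
        # scaled by T^2 as in the classic Hinton et al. formulation.
        log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
        p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
        return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature ** 2

    # Toy usage: random logits stand in for real model outputs (batch of 4, vocab of 100).
    student = torch.randn(4, 100)
    teacher = torch.randn(4, 100)
    print(distillation_loss(student, teacher).item())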
replies(3): >>42962125 #>>42963994 #>>43000776 #
eru ◴[] No.42962125[source]
At most it would be illicit copying.

Though it's poetic justice that OpenAI is complaining about someone else playing fast and loose with copyright rules.

replies(3): >>42963268 #>>42963479 #>>42966120 #
downrightmike ◴[] No.42963479[source]
The First Amendment is not just about free speech, but also about the right to read; the only question is whether AI has that right.
replies(4): >>42964640 #>>42965071 #>>42967832 #>>42968680 #
organsnyder ◴[] No.42964640[source]
If AI was just reading, there would be much less controversy. It would also be pretty useless. The issue is that AI is creating its own derivative content based on the content it ingests.
replies(1): >>42965772 #
boxcake ◴[] No.42965772[source]
Isn't any answer to a question which hasn't been previously answered a derivative work? The same is true when a human writes a parody of a song, or when a new type of music is influenced by something that came before.
replies(1): >>42966652 #
nrabulinski ◴[] No.42966652[source]
This argument is so bizarre to me. Humans create new, spontaneous thoughts. AI doesn’t have that. Even if someone’s comment is influenced by all the data they have ingested over their lives, their style is distinct and deliberate, to the point where people have been doxxed and anonymous accounts uncovered because someone recognized the writing style. There’s no deliberation behind AI, just statistical probabilities. There are no new or spontaneous thoughts, at most pseudorandomness introduced by the author of the model interface.

Even if you give GenAI unlimited time, it will not develop its own writing/drawing/painting style or come up with a novel idea, because strictly by how it works it can only create „new” work by interpolating its dataset.
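To make the "statistical probabilities" and "pseudorandomness" mentioned above concrete: at each step a decoder samples the next token from a softmax over the model's logits, and any apparent spontaneity comes from a seeded pseudorandom number generator. A rough sketch in NumPy, with made-up logits and seed purely for illustration:

    import numpy as np

    def sample_next_token(logits, temperature=0.8, seed=0):
        # The only "randomness" is the seeded PRNG below: rerun with the
        # same seed and logits and you get the same "spontaneous" choice.
        rng = np.random.default_rng(seed)
        scaled = np.asarray(logits, dtype=float) / temperature
        scaled -= scaled.max()                      # numerical stability
        probs = np.exp(scaled) / np.exp(scaled).sum()
        return rng.choice(len(probs), p=probs)

    print(sample_next_token([2.0, 1.0, 0.1]))       # deterministic given the seed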

replies(3): >>42967137 #>>42967668 #>>43001383 #
vidarh ◴[] No.42967668[source]
This argument is so bizarre to me.

There is no evidence whatsoever to support the idea that humans create "new, spontaneous thoughts" in any materially, qualitatively different way than an AI does: in other words, in any way other than as a Turing-computable function over the current state. It may be that current AIs can't, but the notion that there is some fundamental barrier is a hypothesis with no evidence to support it.

> Even if you give GenAI unlimited time, it will not develop its own writing/drawing/painting style or come up with a novel idea, because strictly by how it works it can only create „new” work by interpolating its dataset

If you know of any mechanism whereby humans can do anything qualitatively different, then you'd have the basis for a Nobel Prize-winning discovery. We know of no mechanism that could allow humans to exceed the Turing computability that AI models are limited to.

We don't even know how to formalize what it would mean to "come up with a novel idea" in the sense you appear to mean: presumably something purely random would not satisfy you, yet something purely Turing computable would not do either, and we don't know of any computable functions that are not Turing computable.
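For anyone unfamiliar with the term: "Turing computable" means computable by a Turing machine, and any such machine can itself be simulated on an ordinary computer. A toy sketch of such a simulator follows; the bit-flipping machine at the end is just an illustrative example, not anything from the thread.

    def run_turing_machine(tape, rules, state="start", blank="_", max_steps=10000):
        # rules maps (state, symbol) -> (symbol_to_write, move, next_state),
        # where move is -1 (left) or +1 (right); the machine stops in state "halt".
        cells = dict(enumerate(tape))
        head = 0
        for _ in range(max_steps):
            if state == "halt":
                break
            symbol = cells.get(head, blank)
            write, move, state = rules[(state, symbol)]
            cells[head] = write
            head += move
        return "".join(cells.get(i, blank) for i in range(min(cells), max(cells) + 1))

    # A toy machine that flips every bit on the tape, then halts on the first blank cell.
    flip = {
        ("start", "0"): ("1", +1, "start"),
        ("start", "1"): ("0", +1, "start"),
        ("start", "_"): ("_", -1, "halt"),
    }
    print(run_turing_machine("1011", flip))  # prints "0100_" (bits flipped, trailing blank)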

replies(2): >>42967870 #>>42970463 #
advael ◴[] No.42970463{3}[source]
This argument, by now a common refrain from defenders of companies like OpenAI, misses the entire putative point of intellectual property, and the point of law in general. It is a distraction of a common sort: an attempt to reframe a moral and legal question as an abstract ontological one.

The question of whether the mechanism of learning in a human brain and that in an artificial neural network is similar is a philosophical and perhaps technical one that is interesting, but it is not relevant to why intellectual property law was conceived: to economically incentivize human citizens to spend their time producing creative works. I don't actually think property law is a good way to do this. Nonetheless, when massive capital investments are used to scrape artists' work in order to undercut their ability to make a living from that work, for the benefit of private corporations that do not have their consent, the question is whether this should violate the artificial notion of intellectual property that we constructed for exactly that purpose, and in that sense it's fairly obvious that the answer is yes.

replies(1): >>42975186 #
vidarh ◴[] No.42975186{4}[source]
I wasn't responding to a moral and legal question. I was responding to a comment arguing that humans are some magical special case in nature.

If you want to argue it's a distraction, argue that with the person I replied to, who was the person who changed the focus.

replies(1): >>42982675 #
advael ◴[] No.42982675{5}[source]
Yea, I'll give you that. But many people seem to have the argument you've made loaded up for these conversations (which is dubious on its own terms, by the way: we don't really have a complete picture of human learning, and the assumption that it simply follows the mechanisms we understand from machine learning is not a null hypothesis that requires no justification), and it needs to be addressed wherever possible that the ontological question is not what matters here.
replies(1): >>42998074 #
vidarh ◴[] No.42998074{6}[source]
> which is dubious on its own terms, by the way, as we don't really have a complete picture of human learning and the assumption that it simply follows the mechanisms we understand from machine learning is not a null hypothesis that doesn't demand justification

The argument I made in no way rests on a "complete picture of human learning". The only thing it rests on is the lack of evidence of computation exceeding the Turing computable set. Finding evidence of such computation would upend physics, symbolic logic, and maths. It'd be a finding that would guarantee a Nobel Prize.

I gave the justification. It's a simple one, and it stands on its own. There is no known computable function that exceeds the Turing computable, and all Turing computable functions can be computed on any Turing complete system. Per the extended Church-Turing thesis, this includes any natural system, given the limitations of known physics. In other words: unless you can show new, unknown physics, human brains are computers with the same limitations as any electronic computer, and the notion of "something new" arising from humans, other than as a computation over pre-existing state, in a way an electronic computer can't also do, is an entirely unsupportable hypothesis.

> and it needs to be addressed wherever possible that the ontological question is not what matters here

It may not be what matters to you, but to me the question you clearly would prefer to discuss is largely uninteresting.

replies(1): >>43075544 #
advael ◴[] No.43075544{7}[source]
Baking in the assumption that cognition is equivalent to computation will tautologically lead you to this result, but the assumption itself is unjustified. Of course, if you start with the premise that the brain is a computer, you will come to the conclusion that the brain is a computer. You haven't justified the most important part of your argument, so I have no reason to take it seriously.