
S1: A $6 R1 competitor?

(timkellogg.me)
851 points | tkellogg | 4 comments
advael ◴[] No.42960025[source]
I'm, strictly speaking, never going to think of model distillation as "stealing." It goes against the spirit of scientific research, and besides, every tech company has lost my permission to define what I think of as theft forever.
replies(3): >>42962125 #>>42963994 #>>43000776 #
eru ◴[] No.42962125[source]
At most it would be illicit copying.

Though it's poetic justice that OpenAI is complaining about someone else playing fast and loose with copyright rules.

replies(3): >>42963268 #>>42963479 #>>42966120 #
downrightmike ◴[] No.42963479[source]
The First Amendment is not just about free speech but also about the right to read; the only question is whether AI has that right.
replies(4): >>42964640 #>>42965071 #>>42967832 #>>42968680 #
organsnyder ◴[] No.42964640[source]
If AI was just reading, there would be much less controversy. It would also be pretty useless. The issue is that AI is creating its own derivative content based on the content it ingests.
replies(1): >>42965772 #
boxcake ◴[] No.42965772[source]
Isn't any answer to a question which hasn't been previously answered a derivative work? Or when a human writes a parody of a song, or when a new type of music is influenced by something that came before?
replies(1): >>42966652 #
nrabulinski ◴[] No.42966652[source]
This argument is so bizarre to me. Humans create new, spontaneous thoughts. AI doesn’t have that. Even if someone’s comment is influenced by all the data they have ingested over their lives, their style is distinct and deliberate, to the point where people have been doxxed before/anonymous accounts have been uncovered because someone recognized the writing style. There’s no deliberation behind AI, just statistical probabilities. There’s no new or spontaneous thoughts, at most pseudorandomness introduced by the author of the model interface.

Even if you give GenAI unlimited time, it will not develop its own writing/drawing/painting style or come up with a novel idea, because strictly by how it works it can only create „new” work by interpolating its dataset.

replies(3): >>42967137 #>>42967668 #>>43001383 #
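[The "statistical probabilities" and "pseudorandomness" point above can be made concrete: language models typically pick each token by a pseudorandom draw from a softmax distribution over scores. A toy sketch follows; the function name and numbers are illustrative, not taken from any actual model's code:]

```python
import math
import random

def sample_token(logits, temperature=1.0, seed=None):
    """Pick an index pseudorandomly from softmax(logits / temperature).

    The apparent "spontaneity" of a model's output reduces to this
    seeded draw over a probability distribution.
    """
    rng = random.Random(seed)
    scaled = [l / temperature for l in logits]
    m = max(scaled)                          # subtract max for numeric stability
    exps = [math.exp(l - m) for l in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()                         # deterministic given the seed
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

# With a fixed seed, the "choice" is fully reproducible: same seed,
# same logits, same token every time.
```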
vidarh ◴[] No.42967668[source]
This argument is so bizarre to me.

There is no evidence whatsoever that humans create "new, spontaneous thoughts" in any materially, qualitatively different way than an AI does, i.e. as a Turing-computable function over the current state. It may be that current AIs can't, but the notion that there is some fundamental barrier is a hypothesis with no evidence to support it.

> Even if you give GenAI unlimited time, it will not develop its own writing/drawing/painting style or come up with a novel idea, because strictly by how it works it can only create „new” work by interpolating its dataset

If you know of any mechanism whereby humans can do anything qualitatively different, then you'd have the basis for a Nobel Prize-winning discovery. We know of no mechanism that could allow humans to exceed the Turing computability that AI models are limited to.

We don't even know how to formalize what it would mean to "come up with a novel idea" in the sense you appear to mean: presumably something purely random would not satisfy you, yet something purely Turing computable would also not do, and we don't know of any computable functions that are not Turing computable.

replies(2): >>42967870 #>>42970463 #
eru ◴[] No.42967870[source]
> In other words: As a Turing-computable function over the current state.

You need to be a bit more expansive. Turing-computable functions need to halt and return eventually. (And they need to be proven to halt.)

> We know of no mechanism that could allow humans to exceed the Turing computability that AI models are limited to.

Depends on which AI models you are talking about? When generating content, humans have access to vastly more computational resources than current AI models. To give a really silly example: as a human I can swirl some water around in a bucket and be inspired by the sight. A current AI model does not have the computational resources to simulate the bucket of water (nor does it have a robotic arm and a camera to interact with the real thing instead.)

replies(1): >>42975238 #
vidarh ◴[] No.42975238[source]
> You need to be a bit more expansive. Turing-computable functions need to halt and return eventually. (And they need to be proven to halt.)

This is pedantry. Any non-halting function can be decomposed into a step function and a loop. What matters is that step function. But ignoring that, human existence halts, and so human thought processes can be treated as a singular function that halts.
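
[The decomposition above is easy to make concrete: a non-terminating process is just a halting step function applied repeatedly by an external driver loop. A minimal sketch, with a toy counter standing in for the state transition; names are illustrative:]

```python
def step(state):
    # One halting, Turing-computable transition over the current state.
    # Here a toy counter; in general, any single state update.
    return state + 1

def run(state, steps):
    # Bounded driver: applying the step function a finite number of
    # times is itself a halting computation.
    for _ in range(steps):
        state = step(state)
    return state

# An unbounded driver (`while True: state = step(state)`) never halts,
# but every individual call to `step` does -- which is the point: the
# step function carries all the computational content.
```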

> Depends on which AI models you are talking about? When generating content, humans have access to vastly more computational resources than current AI models. To give a really silly example: as a human I can swirl some water around in a bucket and be inspired by the sight. A current AI model does not have the computational resources to simulate the bucket of water (nor does it have a robotic arm and a camera to interact with the real thing instead.)

An AI model does not have computational resources. It's a bunch of numbers. The point is not the actual execution but theoretical computational power if unconstrained by execution environment.

The Church-Turing thesis also presupposes an unlimited amount of time and storage.

replies(1): >>42988516 #
eru ◴[] No.42988516[source]
Yes, that's why we need something stronger than the Church-Turing thesis.

See https://scottaaronson.blog/?p=735 'Why Philosophers should care about Computational Complexity'

Basically, what the brain can do in reasonable amounts of time (e.g. polynomial time), computers can also do in polynomial time. To make it a thesis, something like this might work: "no physically realisable computing machine (including the brain) can do more in polynomial time than BQP already allows" https://en.wikipedia.org/wiki/BQP

replies(1): >>42998152 #
vidarh ◴[] No.42998152[source]
If people were claiming that a computer might be able to, but would be too slow, that might be an angle to take. But to date, in these discussions, none of the people arguing that brains can do more have claimed brains are merely more efficient; they claim brains inherently have more capabilities, so it's an unnecessarily convoluted argument.