1503 points | participant3 | 2 comments
flessner ◴[] No.43577885[source]
Everyone is talking about theft - I get it, but there's a subtler point being made here.

The current generation of AI models can't think of anything truly new. Everything is simply a blend of prior work. I am not saying that this doesn't have economic value, but it means these AI models are closer to lossy compression algorithms than they are to AGI.

The following quote by Sam Altman from about 5 years ago is interesting.

"We have made a soft promise to investors that once we build this sort-of generally intelligent system, basically we will ask it to figure out a way to generate an investment return."

That's a statement I wouldn't even dream about making today.

replies(4): >>43577912 #>>43577991 #>>43578098 #>>43578592 #
nearbuy ◴[] No.43578592[source]
> Current generation of AI models can't think of anything truly new.

How could you possibly know this?

Is this falsifiable? Is there anything we could ask it to draw where you wouldn't just claim it must be copying some image in its training data?

replies(2): >>43580253 #>>43607964 #
mjburgess ◴[] No.43580253[source]
Novelty in one medium arises from novelty in others, or from shifts in the external environment.

We got brass bands from brass instruments, synth music from synths.

We therefore know, necessarily, that there can be nothing novel from an LLM -- it has no live access to novel developments in the broader environment. If synths had been invented after its training, it could never produce synth music (and so on).

The claim here is trivially falsifiable, so obviously so that credulous fans of this technology bake it into their misunderstanding of novelty itself: have an LLM produce content about developments which had yet to take place at the time of its training. It obviously cannot do this.

Yet an artist who paints with a new kind of black pigment can, trivially so.

replies(2): >>43581453 #>>43584568 #
moffkalast ◴[] No.43581453{3}[source]
> arises from novelty in others, shifts to the external environment

> Everything is simply a blend of prior work.

I generally consider these two to be the same thing. If novelty is based on something else, then it's highly derivative and its novelty is very questionable.

A quantum random number generator is far more novel than the average human artist.

> have an LLM produce content on developments which had yet to take place at the time of its training. It obviously cannot do this.

Put someone in jail for the last 15 years, and ask them to make a smartphone. They obviously cannot do it either.

replies(1): >>43582425 #
1. mjburgess ◴[] No.43582425{4}[source]
So if your point is that an LLM is something like a person kept in a coma inside solitary confinement -- sure? But I don't believe that's where we set the bar for art: we aren't employing comatose inmates to do anything.

> I generally consider these two to be the same thing.

Sure, words themselves bend and break under the weight of hype. Novelty is randomness. Everything is a work of art. For a work of art to be non-novel it can only incorporate randomness.

The fallacies of ambiguity abound to the point where coherent speech disappears completely.

An artist who finds a cave half-collapsed for the first time has an opportunity to render that novel physical state of the universe into art. Every moment that passes presents a near-infinite number of such novel circumstances.

Since an LLM cannot do that, we must wreck and ruin our ability to describe this plain and trivial situation. Poke our eyes and skewer our brains.

replies(1): >>43590623 #
2. satvikpendem ◴[] No.43590623[source]
> Sure, words themselves bend and break under the weight of hype. Novelty is randomness. Everything is a work of art. For a work of art to be non-novel it can only incorporate randomness.

> Since an LLM cannot do that, we must wreck and ruin our ability to describe this plain and trivial situation. Poke our eyes and skewer our brains.

Change the temperature of a model, and you will have your randomness.
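
Concretely, temperature just rescales the model's output logits before they are turned into a probability distribution and sampled. A minimal sketch in Python/NumPy (hypothetical logit values, not any particular model's API):

    import numpy as np

    def sample_token(logits, temperature=1.0, rng=None):
        # Temperature rescales the logits before the softmax:
        #   t -> 0 : approaches greedy (deterministic) decoding
        #   t  = 1 : samples from the model's own distribution
        #   t  > 1 : flattens the distribution, i.e. more randomness
        rng = rng or np.random.default_rng()
        scaled = np.asarray(logits, dtype=float) / max(temperature, 1e-8)
        probs = np.exp(scaled - scaled.max())   # numerically stable softmax
        probs /= probs.sum()
        return rng.choice(len(probs), p=probs)

    # Hypothetical logits over a 4-token vocabulary.
    logits = [2.0, 1.0, 0.5, -1.0]
    print(sample_token(logits, temperature=0.1))  # almost always token 0
    print(sample_token(logits, temperature=2.0))  # noticeably more varied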

There is no explanation for why a biological brain can have randomness but a silicon process cannot.