
1503 points participant3 | 2 comments
Everyone is talking about theft - I get it, but there's a more subtler point being made here.

The current generation of AI models can't think of anything truly new. Everything is simply a blend of prior work. I'm not saying that this doesn't have economic value, but it means these AI models are closer to lossy compression algorithms than they are to AGI.

The following quote from Sam Altman, made about five years ago, is interesting:

"We have made a soft promise to investors that once we build this sort-of generally intelligent system, basically we will ask it to figure out a way to generate an investment return."

That's a statement I wouldn't even dream about making today.

replies(4): >>43577912 #>>43577991 #>>43578098 #>>43578592 #
nearbuy ◴[] No.43578592[source]
> The current generation of AI models can't think of anything truly new.

How could you possibly know this?

Is this falsifiable? Is there anything we could ask it to draw where you wouldn't just claim it must be copying some image in its training data?

replies(2): >>43580253 #>>43607964 #
1. flashman ◴[] No.43607964[source]
Let's reverse that. "The current generation of AI models can think of things that are truly new."

How could you possibly know that? Could you prove that an image wasn't copied from images in its training data?

replies(1): >>43651613 #
2. nearbuy ◴[] No.43651613[source]
No one here claimed the AI models made something truly new.

The commenter flessner asserted it couldn't, despite having no way to demonstrate it. They are passing off faith as fact.

Assuming someone did want to show AI models can make new stuff...

> How could you possibly know that? Could you prove that an image wasn't copied from images in its training data?

This isn't even that hard. You just need to know what images are in your training data when you train your model, then compare generated outputs against that set. A researcher with a small grant could do this.
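A minimal sketch of the kind of check described above, assuming you compare image embeddings (e.g. from a model like CLIP) rather than raw pixels; the embedding dimensions and toy data here are illustrative placeholders, not any particular model's output:

```python
import numpy as np

def nearest_train_similarity(gen_emb, train_embs):
    """Cosine similarity between a generated image's embedding and its
    nearest neighbor in the training set. Values near 1.0 suggest the
    output is close to a memorized training example; low values suggest
    it is not a near-copy of anything the model was trained on."""
    gen = gen_emb / np.linalg.norm(gen_emb)
    train = train_embs / np.linalg.norm(train_embs, axis=1, keepdims=True)
    return float(np.max(train @ gen))

# Toy stand-ins for real image embeddings.
rng = np.random.default_rng(0)
train_embs = rng.normal(size=(1000, 64))                   # "training set"
novel = rng.normal(size=64)                                # unrelated output
copied = train_embs[42] + 0.01 * rng.normal(size=64)       # near-duplicate

print(nearest_train_similarity(copied, train_embs))  # close to 1.0
print(nearest_train_similarity(novel, train_embs))   # much lower
```

This only detects near-copies in embedding space; a real study would also need a threshold calibrated on held-out data, but that is exactly the kind of bounded experiment a small grant could fund.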