
577 points simonw | 8 comments
AlexeyBrin No.44723521
Most likely its training data included countless Space Invaders implementations in various programming languages.
replies(6): >>44723664 #>>44723707 #>>44723945 #>>44724116 #>>44724439 #>>44724690 #
quantumHazer No.44723664
And probably some of the synthetic data are generated copies of games already in the dataset?

I have this feeling with LLM-generated React frontends: they all look the same.

replies(4): >>44723867 #>>44724566 #>>44724902 #>>44731430 #
bayindirh No.44723867
Last time somebody asked for a "premium camera app for iOS", the model (re)generated Halide.

Models don't emit something they don't know. They remix and rewrite what they know. There's no invention, just recall...

replies(4): >>44724102 #>>44724181 #>>44724845 #>>44726775 #
FeepingCreature No.44724102
True where trivial; where nontrivial, false.

Trivially, humans don't emit something they don't know either. You don't spontaneously figure out JavaScript from first principles; you put together your existing knowledge into new shapes.

Nontrivially, LLMs can absolutely produce code for entirely new requirements. I've seen them do it many times. Will it be put together from smaller fragments? Yes, and this is called "experience", or, if the fragments are small enough, "understanding".

replies(2): >>44724137 #>>44724530 #
1. bayindirh No.44724137
Humans can observe ants and invent ant colony optimization. AIs can’t.

Humans can explore what they don’t know. AIs can’t.

replies(5): >>44724200 #>>44724373 #>>44724567 #>>44724658 #>>44731957 #
2. falcor84 No.44724200
What makes you categorically say that "AIs can't"?

Based on my experience with present-day AIs, I personally wouldn't be surprised at all if you showed Gemini 2.5 Pro a video of an insect colony, asked it "Take a look at the way they organize and see if that gives you inspiration for an optimization algorithm", and got something interesting back.
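
For concreteness, here is roughly what the algorithm being argued over looks like: a minimal ant-colony-optimization sketch for the travelling salesman problem, in Python. Every function name and parameter value below is an illustrative assumption on my part, not something from this thread or from any particular library.

    # Minimal ant colony optimization (ACO) for the travelling salesman
    # problem. Illustrative sketch only; names and constants are assumptions.
    import math
    import random

    def tour_length(tour, dist):
        # Total length of a closed tour over the distance matrix.
        return sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
                   for i in range(len(tour)))

    def ant_colony_tsp(dist, n_ants=20, n_iters=100,
                       alpha=1.0, beta=3.0, evaporation=0.5, q=1.0):
        n = len(dist)
        pheromone = [[1.0] * n for _ in range(n)]
        best_tour, best_len = None, math.inf

        for _ in range(n_iters):
            tours = []
            for _ in range(n_ants):
                # Each ant builds a tour city by city; the next city is
                # drawn with probability ~ pheromone^alpha * (1/dist)^beta.
                tour = [random.randrange(n)]
                unvisited = set(range(n)) - {tour[0]}
                while unvisited:
                    cur = tour[-1]
                    cand = list(unvisited)
                    weights = [pheromone[cur][j] ** alpha
                               * (1.0 / dist[cur][j]) ** beta for j in cand]
                    nxt = random.choices(cand, weights=weights)[0]
                    tour.append(nxt)
                    unvisited.remove(nxt)
                tours.append(tour)

            # Evaporate old pheromone, then deposit new pheromone along
            # each ant's tour, weighted by how short the tour was.
            for i in range(n):
                for j in range(n):
                    pheromone[i][j] *= 1.0 - evaporation
            for tour in tours:
                length = tour_length(tour, dist)
                if length < best_len:
                    best_tour, best_len = tour, length
                for i in range(n):
                    a, b = tour[i], tour[(i + 1) % n]
                    pheromone[a][b] += q / length
                    pheromone[b][a] += q / length

        return best_tour, best_len

    # Example: five cities on a plane (coordinates are made up).
    cities = [(0, 0), (1, 5), (5, 2), (6, 6), (8, 3)]
    dist = [[math.dist(a, b) for b in cities] for a in cities]
    print(ant_colony_tsp(dist))

The evaporate-then-deposit pheromone loop at the end is the part originally inspired by watching real ants reinforce short trails.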

replies(1): >>44725223 #
3. FeepingCreature No.44724373
What makes you categorically say that "humans can"?

I couldn't do that with an ant colony. I would have to train on ant research first.

(Oh, and AIs can absolutely explore what they don't know. Watch a Claude Code instance look at a new repository. Exploration is a convergent skill in long-horizon RL.)

4. CamperBob2 No.44724567
That's what benchmarks like ARC-AGI are designed to test. The models are getting better at it, and you aren't.

Nothing ultimately matters in this business except the first couple of time derivatives.

5. ben_w No.44724658
> Humans can observe ants and invent ant colony optimization. AIs can’t.

Surely this is exactly what current AI do? Observe stuff and apply that observation? Isn't this the exact criticism, that they aren't inventing ant colonies from first principles without ever seeing one?

> Humans can explore what they don’t know. AIs can’t.

We only learned to decode Egyptian hieroglyphs because of the Rosetta Stone. There's no translation for North Sentinelese, the Voynich manuscript, or Linear A.

We're not magic.

6. sarchertech No.44725223
It will 100% have something in its training set discussing a human doing this and will almost definitely spit out something similar.
replies(1): >>44732015 #
7. numpad0 No.44731957
humans also eat
8. fc417fc802 No.44732015
That's a good point, but all it means is that we can't test the hypothesis one way or the other, since we can never be entirely certain that a given task isn't somewhere in the training data. Supposing that "AIs can't" is then just as invalid as supposing that "AIs can".