
159 points by jbredeche | 4 comments
cuttothechase No.45532033
The fact that we now have to write cookbooks about cookbooks kind of masks the reality that something could be genuinely wrong with this entire paradigm.

Why are even experts unsure about what's the right way to do something, or whether it's possible to do it at all, for anything non-trivial? Why so much hesitancy, if this is the panacea? If we are so sure, then why not use the AI itself to come up with a proven paradigm?

galaxyLogic No.45534567
> why not use the AI itself to come up with a proven paradigm?

Because AI can only imitate the language it has seen. If there are no texts in its training materials about the best way to use multiple coding agents at the same time, then the AI knows very little about that subject.

AI only knows what humans know, but it knows much more than any single human.

We don't know "what is the best way to use multiple coding agents" until we or somebody else does some experiments and records the findings. But AI is not yet able to do such actual experiments itself.

1. panarky No.45534704
I'm sorry, but the whole stochastic parrot thing is so thoroughly debunked at this point that we should stop repeating it as if it's some kind of rare wisdom.

AlphaGo showed that even pre-LLM models could generate brand new approaches to winning a game that human experts had never seen before, and didn't exist in any training material.

With a little thought and experimentation, it's pretty easy to show that LLMs can reason about concepts that do not exist in their training corpus.

You could invent a tiny DSL with brand-new, never-seen-before tokens, give two worked examples, then ask it to evaluate a gnarlier expression. If it solves it, it inferred and executed rules you just made up for the first time.
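
For concreteness, here's a minimal sketch of that test in Python; the operators zib/fex, their rules, and the prompt wording are all invented on the spot for illustration:

    # The operator semantics are arbitrary, made-up rules, so nothing like them
    # should exist in any training corpus.
    def zib(a, b):
        # zib(a, b) := a + 2*b
        return a + 2 * b

    def fex(a, b):
        # fex(a, b) := a*b - 1
        return a * b - 1

    def evaluate(expr):
        # Reference evaluator, so we can check the model's answer against ground truth.
        return eval(expr, {"__builtins__": {}}, {"zib": zib, "fex": fex})

    prompt = """Here is a tiny made-up language with two operators.
    Worked examples:
      zib(1, 2)         = 5    because 1 + 2*2
      fex(3, zib(0, 2)) = 11   because 3*4 - 1
    Now evaluate: zib(fex(2, 3), zib(1, 1))
    Reply with a single integer."""

    print(prompt)
    print("expected answer:", evaluate("zib(fex(2, 3), zib(1, 1))"))  # 11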

Or you could drop in docs for a new, never-seen-before API and ask it to decide when and why to call which tool, run the calls, and revise after errors. If it composes a working plan and improves from feedback, that’s reasoning about procedures that weren’t in the corpus.
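
Same idea for the API variant. A rough sketch of the harness, where the "gleep" functions and their documented behaviour are made up purely for illustration and the model call is stubbed out:

    # The only documentation for this "API" is the string handed to the model.
    FAKE_DOCS = """
    gleep_store(key, value) -> "ok", or error "full" once 3 keys are stored
    gleep_fetch(key)        -> the stored value, or error "missing"
    """

    _store = {}

    def gleep_store(key, value):
        if key not in _store and len(_store) >= 3:
            return {"error": "full"}
        _store[key] = value
        return {"result": "ok"}

    def gleep_fetch(key):
        return {"result": _store[key]} if key in _store else {"error": "missing"}

    TOOLS = {"gleep_store": gleep_store, "gleep_fetch": gleep_fetch}

    def ask_model(transcript):
        # Placeholder for a call to whatever LLM is being tested, with FAKE_DOCS in
        # the prompt and the transcript of prior calls/results appended. Hard-coded
        # here so the harness runs standalone.
        return {"tool": "gleep_fetch", "args": {"key": "alpha"}}

    def run_step(transcript):
        call = ask_model(transcript)
        result = TOOLS[call["tool"]](**call["args"])
        transcript.append({"call": call, "result": result})
        return result  # errors like "missing" are fed back for the model to revise from

    transcript = []
    print(run_step(transcript))  # -> {'error': 'missing'}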

2. phs318u No.45535123
> even pre-LLM models

You're implicitly disparaging non-LLM models at the same time as implying that LLMs are an evolution of the state of the art (in machine learning). Assuming AGI is the target (and it's not clear we can even define it yet), LLMs, or something like them, will be but one aspect. Using AlphaGo as an example to laud the abilities and potential of LLMs is not warranted. They are different.

3. intended No.45535685
To build on the stochastic parrots bit -

Parrots hear parts of the sound spectrum that we don't.

If they riffed in the kHz range we can't hear, it would be novel, but it would not be anything we didn't train them on.

4. suddenlybananas No.45536409
>AlphaGo showed that even pre-LLM models could generate brand new approaches to winning a game that human experts had never seen before, and didn't exist in any training material.

AlphaGo is an entirely different kind of algorithm.