
170 points PaulHoule | 1 comment | source
dcre ◴[] No.45119904[source]
Always fun to see a theoretical argument that something clearly already happening is impossible.
replies(2): >>45120040 #>>45120369 #
crowbahr ◴[] No.45120369[source]
Really? It sure seems like we're at the top of the S curve with LLMs. Wiring them up to talk to themselves as "reasoning" isn't scaling the core models, which have only made incremental gains for all the billions invested.

There's plenty more room to grow with agents and tooling, but the core models are only seeing slight year-over-year bumps rather than the rocket-ship changes of 2022/23.

replies(3): >>45121228 #>>45123377 #>>45125522 #
1. dangus ◴[] No.45123377[source]
And, relevant to the summary of this paper, incremental LLM improvement doesn't really seem to be addressing the wall it describes.

If work produced by LLMs forever has to be checked for accuracy, their applicability will be limited.

This is perhaps analogous to all the "self-driving cars" that still have to be monitored by humans, in which case the self-driving system might as well not exist at all.