
215 points | optimalsolver | 5 comments
nakamoto_damacy No.45770717
LLMs falter because likelihood-driven pattern completion doesn’t enforce coherence across uncertainty (probability), representation (geometry), composition (category), and search (reasoning). To get robust reasoning, we need these layers to be explicit, typed, and mutually constraining—with verification and calibrated belief updates in the loop.
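
To make the last part concrete, here is a minimal sketch of what "verification and calibrated belief updates in the loop" might look like, assuming a toy arithmetic verifier and invented helper names (Candidate, verify, calibrated_update); it is an illustration of the general pattern, not any specific system discussed here. A proposer emits candidate claims with prior confidences, an explicit verifier checks each one, and a Bayesian update using the verifier's known error rates produces a calibrated posterior that gates acceptance.

    from dataclasses import dataclass

    @dataclass
    class Candidate:
        claim: str
        prior: float  # proposer's initial confidence in the claim

    def verify(claim: str) -> bool:
        # Stand-in for an explicit, typed verifier: here, a toy check of
        # arithmetic claims written as "a+b=c".
        lhs, rhs = claim.split("=")
        a, b = (int(x) for x in lhs.split("+"))
        return a + b == int(rhs)

    def calibrated_update(prior: float, passed: bool,
                          tpr: float = 0.95, fpr: float = 0.10) -> float:
        # Bayesian belief update given a noisy verifier with known
        # true-positive and false-positive rates (the calibration).
        like = tpr if passed else (1 - tpr)
        alt = fpr if passed else (1 - fpr)
        return like * prior / (like * prior + alt * (1 - prior))

    def reason_loop(candidates, threshold=0.9):
        # Search constrained by verification: keep only candidates whose
        # posterior belief clears the threshold after the check.
        accepted = []
        for c in candidates:
            posterior = calibrated_update(c.prior, verify(c.claim))
            if posterior >= threshold:
                accepted.append((c.claim, round(posterior, 3)))
        return accepted

    print(reason_loop([Candidate("2+2=4", 0.6), Candidate("2+2=5", 0.7)]))
    # -> [('2+2=4', 0.934)]

The point of the toy is only the shape of the loop: the proposer's likelihood-driven guesses are not trusted on their own; an external check and a calibrated update decide what survives.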

I was interviewed about this recently and mentioned the great work of a professor of CS and Law who has been building the foundations for this approach. My own article about it was recently un-linked due to a Notion mishap (but it's available if anyone is interested; I have to publish it again).

https://www.forbes.com/sites/hessiejones/2025/09/30/llms-are...

replies(1): >>45770865 #
1. CuriouslyC No.45770865
Richard Sutton's interview on Dwarkesh's podcast hit on this same point: the implicit world models in LLMs are insufficient.
replies(1): >>45770934 #
2. jampekka No.45770934
Sutton still hasn't learned his own Bitter Lesson? ;)
replies(1): >>45770993 #
3. creativeSlumber No.45770993
What do you mean?
replies(2): >>45772381 #>>45772441 #
4. nakamoto_damacy No.45772381
Not sure why he capitalized bitter...
5. jampekka No.45772441
It was a joke referring to his essay.

https://en.wikipedia.org/wiki/Bitter_lesson