LLMs are changing how I see reality.
A little context about you:
- person
- has hands, reads HN
These few state variables are enough to generate a believable frame in your rendering.
If the rendering doesn’t look believable to you, you modify the state variables to make the render more believable, e.g.:
Context:
- person
- with hands
- incredulous demeanor
- reading HN
Now I can render you more accurately based on your “reasoning”, but truly I never needed all that data to see you.
Reasoning as we know it could just be a mechanism to fill in gaps in obviously sparse data (we absolutely do not have all the data to render reality accurately, you are seeing an illusion). Go reason about it all you want.
If not: what am I intended to take away from this? What is its relevance to my comment?
Before LLMs we had n-gram language models. Many tasks like speech recognition worked as beam search in the graph defined by the n-gram language model. You could easily get sizable accuracy gains simply by pruning your beam less.
s1 reminds me of this. You can always trade off latency for accuracy. Given that these LLMs are much more complex than good old n-grams, we're just discovering how to make this trade.
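The latency/accuracy trade the comment describes can be shown with a toy beam search. Everything here is invented for illustration (the vocabulary, the log-probabilities, the `beam_search` helper); the numbers are chosen so that a width-1 beam (greedy search) commits to a locally good token and misses the globally best sequence, while a wider, less-pruned beam finds it at the cost of scoring more candidates.

```python
import math

# Toy bigram "language model": log-prob of the next token given the
# previous one. Hypothetical values, picked so greedy search fails.
LOG_PROBS = {
    "<s>": {"the": math.log(0.6), "a": math.log(0.4)},
    "the": {"cat": math.log(0.9), "dog": math.log(0.1)},
    "a":   {"cat": math.log(0.2), "dog": math.log(0.8)},
    "cat": {"</s>": math.log(0.3)},
    "dog": {"</s>": math.log(0.9)},
}

def beam_search(width, max_len=4):
    beams = [(["<s>"], 0.0)]  # (token sequence, total log-prob)
    for _ in range(max_len):
        candidates = []
        for seq, score in beams:
            if seq[-1] == "</s>":          # finished hypotheses carry over
                candidates.append((seq, score))
                continue
            for tok, lp in LOG_PROBS.get(seq[-1], {}).items():
                candidates.append((seq + [tok], score + lp))
        # Pruning step: keep only the `width` best partial hypotheses.
        # A wider beam = more work per step, but less chance of
        # discarding the path that wins overall.
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:width]
    return beams[0]

greedy_seq, greedy_score = beam_search(width=1)
wide_seq, wide_score = beam_search(width=4)
# The wider beam recovers a higher-probability sequence than greedy.
```

Pruning the beam less is exactly "spending more compute for more accuracy" — the same knob s1-style test-time scaling turns, just on a far more complex model.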
I don’t believe computer science has the algorithms to handle this new paradigm. Everything was about sequential, deterministic outputs and clever ways to compute them fast. Those tools are of little use at the moment. We need new thinkers on how not to think sequentially, or how not to think about the universe in such a small way.
Verifying input/output pairs is the old way. We need to understand things differently going forward.
I think it is interesting to consider which actions cannot be done by humans.
Because I see these sorts of gnostic assertions about LLMs all the time: claims that they "definitely aren't doing <thing we normally apply to meat-brains>", justified by gesturing at the technical things the model is doing, with no attempt to actually justify the negative assertion.
It often comes across as privileged reasoning trying to justify that of course the machine isn't doing some ineffable thing only meat-brains do.
Look, why have game developers spent so much time lazy loading parts of the game world? Very rarely do they just load the whole world, even in 2025. See, the worlds get bigger, so even as the tech gets better, we will always lazy load worlds in.
It’s a context issue right? Developers have just recently been given this thing called “context”.
But yeah man, why do we think just because we walked from our house to the supermarket that this reality didn’t lazy load things. That’s how programmers have been doing it all along …
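The lazy loading the comment gestures at is usually chunk streaming: divide the world into a grid of chunks and keep only the chunks near the player in memory. A minimal sketch under that assumption; the `World` class and its names are illustrative, not from any real engine.

```python
class World:
    """Keeps only the chunks within `radius` of the player loaded."""

    def __init__(self, radius=1):
        self.radius = radius
        self.loaded = {}  # (chunk_x, chunk_y) -> chunk data

    def _load_chunk(self, coord):
        # Stand-in for the expensive part: reading terrain, meshes,
        # entities, etc. from disk for this chunk.
        return f"chunk{coord}"

    def update(self, player_chunk):
        px, py = player_chunk
        wanted = {
            (px + dx, py + dy)
            for dx in range(-self.radius, self.radius + 1)
            for dy in range(-self.radius, self.radius + 1)
        }
        for coord in wanted - self.loaded.keys():
            self.loaded[coord] = self._load_chunk(coord)  # lazy load nearby
        for coord in self.loaded.keys() - wanted:
            del self.loaded[coord]                        # unload far chunks

world = World(radius=1)
world.update((0, 0))  # streams in the 3x3 = 9 chunks around the player
world.update((5, 5))  # distant chunks dropped, new ones streamed in
```

The world beyond the loaded radius simply doesn't exist in memory until you walk toward it — which is the comment's point.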
Anyways
> Reasoning as we know it could just be a mechanism to fill in gaps in obviously sparse data (we absolutely do not have all the data to render reality accurately, you are seeing an illusion). Go reason about it all you want.
The LLM doesn’t know anything. We determine what output is right, even if the LLM swears the output is right. We “reason” about it, I guess? Well in this case the whole “reasoning” process is to simply get an output that looks right, so what is reasoning in our case?
Let me just go one ridiculous level lower. If I measure every frame the Hubble telescope takes, and I measure with a simple ruler the distances between things, frame by frame, I can “reason” out some rules of the universe (planetary orbits). In this “reasoning” process, the very basic question of “well why, and who made this” immediately arises, so reasoning always leads to the fundamental question of God.
So, yeah. We reason to see God, because that’s all we’re seeing, everything else is an illusion. Reasoning is inextricably linked to God, so we have to be very open minded when we ask what is this machine doing.
I like this version for at least two reasons:
1. It is 100% consistent with a large body of scientific findings (psychology and neuroscience), whereas I believe yours has a conservation-of-mass problem at least
2. Everyone dislikes it at least in certain scenarios (say, when reference is made to it during an object level disagreement)
(Also, if I might give a recommendation, you might be the type of person to enjoy Unsong by Scott Alexander https://unsongbook.com/)