S1: A $6 R1 competitor?

(timkellogg.me)
851 points by tkellogg | 20 comments
1. bloomingkales No.42949274
This thing that people are calling “reasoning” is more like rendering to me really, or multi-pass rendering. We’re just refining the render; there’s no reasoning involved.
replies(8): >>42949343 #>>42949380 #>>42949404 #>>42949507 #>>42953101 #>>42953135 #>>42956206 #>>42960595 #
2. dleslie No.42949343
That was succinct and beautifully stated. Thank you for the "Aha!" moment.
replies(1): >>42949419 #
3. mistermann No.42949404
"...there’s no reasoning involved...wait, could I just be succumbing to my heuristic intuitions of what is (seems to be) true....let's reconsider using System 2 thinking..."
replies(1): >>42949765 #
4. bloomingkales No.42949419
Hah. You should check out my other comment on how I think we’re obviously in a simulation (remember, we just need to see a good enough render).

LLMs are changing how I see reality.

5. ddrdrck_ No.42949507
We could see it the other way around: what we call "reasoning" may actually be some kind of multi-pass rendering, whether it is performed by computers or human brains.
replies(1): >>42949564 #
6. bloomingkales No.42949564
Yes, of course. The implications are awesome.
7. bloomingkales No.42949765
Or there is no objective reality (well there isn’t, check out the study), and reality is just a rendering of the few state variables that keep track of your simple life.

A little context about you:

- person

- has hands, reads HN

These few state variables are enough to generate a believable enough frame in your rendering.

If the rendering doesn’t look believable to you, you modify state variables to make the render more believable, e.g.:

Context:

- person

- with hands

- incredulous demeanor

- reading HN

Now I can render you more accurately based on your “reasoning”, but truly I never needed all that data to see you.

Reasoning as we know it could just be a mechanism to fill in gaps in obviously sparse data (we absolutely do not have all the data to render reality accurately, you are seeing an illusion). Go reason about it all you want.

replies(1): >>42949844 #
8. mistermann No.42949844
Is this a clever rhetorical trick to make it appear that your prior claim was correct?

If not: what am I intended to take away from this? What is its relevance to my comment?

replies(1): >>42950161 #
9. bloomingkales No.42950161
You made a joke about questioning reality, I simply entertained it. You can do whatever you want with it, wasn’t a slight at all.
replies(1): >>42954481 #
10. pillefitz No.42953101
Which is related to multistage/hierarchical/coarse-to-fine optimization, a pretty good way to find the global optimum in many problem domains.
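
A minimal sketch of that idea (the toy objective and grid settings are invented for illustration): take a coarse pass over the whole range, then re-grid around the best point and repeat.

  # Coarse-to-fine search for the minimum of a 1-D function.
  # Objective, bounds, and pass settings are invented for illustration.
  def f(x):
      return (x - 3.3) ** 2 + 0.5 * abs(x)  # toy objective

  def coarse_to_fine_min(f, lo, hi, passes=4, points=11):
      best = lo
      for _ in range(passes):
          step = (hi - lo) / (points - 1)
          grid = [lo + i * step for i in range(points)]
          best = min(grid, key=f)            # best point on this (coarse) grid
          lo, hi = best - step, best + step  # zoom in around it for the next pass
      return best

  print(coarse_to_fine_min(f, -10.0, 10.0))  # converges near x ≈ 3.05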
11. buyucu No.42953135
Yes.

Before LLMs we had n-gram language models. Many tasks like speech recognition worked as beam search in the graph defined by the n-gram language model. You could easily get huge accuracy gains simply by pruning your beam less.

s1 reminds me of this. You can always trade off latency for accuracy. Given that these LLMs are much more complex than good old n-grams, we're just discovering how to make this trade.
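
To make that dial concrete, here is a toy beam search over an invented bigram model (all scores are made up); beam_width is the latency/accuracy trade:

  import math

  # Invented bigram scores; "<s>" is the start token.
  score = {
      ("<s>", "the"): 0.5, ("<s>", "a"): 0.4,
      ("the", "cat"): 0.6, ("the", "dog"): 0.3,
      ("a", "cat"): 0.2,   ("a", "dog"): 0.7,
      ("cat", "sat"): 0.2, ("dog", "sat"): 0.8,
  }
  vocab = ["the", "a", "cat", "dog", "sat"]

  def beam_search(steps, beam_width):
      beams = [(0.0, ["<s>"])]  # (log-prob, token sequence)
      for _ in range(steps):
          candidates = []
          for lp, toks in beams:
              for w in vocab:
                  p = score.get((toks[-1], w))
                  if p:
                      candidates.append((lp + math.log(p), toks + [w]))
          # Pruning harder (smaller beam_width) is faster but can
          # discard the prefix of the best full path.
          beams = sorted(candidates, reverse=True)[:beam_width]
      return beams[0]

  print(beam_search(3, beam_width=1))  # greedy: "the cat sat", p = 0.06
  print(beam_search(3, beam_width=4))  # wider beam finds "a dog sat", p = 0.224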

replies(1): >>42953201 #
12. bloomingkales No.42953201
Let me carry that concept, “learning to make this trade”: it’s a new trade.

I don’t believe computer science has the algorithms to handle this new paradigm. Everything was about sequential, deterministic outputs and clever ways to compute them fast; that toolkit is useless at the moment. We need new thinkers on how not to think sequentially, or how not to think about the universe in such a small way.

Verifying input/output pairs is the old way. We need to understand differently going forward.

13. mistermann No.42954481
It may have been in the form of a joke, but I certainly wasn't joking.

I think it is interesting which actions cannot be performed by humans.

replies(1): >>42958022 #
14. LordDragonfang No.42956206
How are you defining "reasoning"?

Because I see these sorts of gnostic assertions about LLMs all the time, about how they "definitely aren't doing <thing we normally apply to meat-brains>", gesturing at the technical things they're doing with no attempt to actually justify the negative assertion.

It often comes across as privileged reason trying to justify that of course the machine isn't doing some ineffable thing only meat-brains do.

replies(1): >>42958098 #
15. bloomingkales No.42958022
I wasn’t joking either. Things are just getting started with this AI stuff, and I feel like programmers will experience that “déjà vu” phenomenon they talk about in the Matrix, that eerie feeling that something isn’t right.

Look, why have game developers spent so much time lazy loading parts of the game world? Very rarely do they just load the whole world, even in 2025. See, the worlds get bigger, so even as the tech gets better, we will always lazy load worlds in.

It’s a context issue, right? Developers have just recently been given this thing called “context”.

But yeah man, why do we think that just because we walked from our house to the supermarket, this reality didn’t lazy load things? That’s how programmers have been doing it all along…
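
That lazy loading looks something like this (chunk size, radius, and the load_chunk stand-in are invented for illustration): only the chunks near the player ever exist in memory.

  CHUNK = 64   # world units per chunk
  RADIUS = 2   # how many chunks around the player stay loaded

  loaded = {}  # (cx, cy) -> chunk data

  def load_chunk(cx, cy):
      return f"terrain+entities for ({cx},{cy})"  # stand-in for real asset loading

  def update_world(player_x, player_y):
      pcx, pcy = int(player_x) // CHUNK, int(player_y) // CHUNK
      # Stream in only the chunks near the player...
      for cx in range(pcx - RADIUS, pcx + RADIUS + 1):
          for cy in range(pcy - RADIUS, pcy + RADIUS + 1):
              loaded.setdefault((cx, cy), load_chunk(cx, cy))
      # ...and evict the rest; the far side of the world doesn't
      # exist in memory right now.
      for key in [k for k in loaded
                  if max(abs(k[0] - pcx), abs(k[1] - pcy)) > RADIUS]:
          del loaded[key]

  update_world(10, 10)    # at the house
  update_world(500, 10)   # at the supermarket: new chunks stream in, old ones vanish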

Anyways

replies(1): >>42963051 #
16. bloomingkales No.42958098
From my other ridiculous comment, as I do entertain simulation theory in my understanding of God:

Reasoning as we know it could just be a mechanism to fill in gaps in obviously sparse data (we absolutely do not have all the data to render reality accurately, you are seeing an illusion). Go reason about it all you want.

The LLM doesn’t know anything. We determine what output is right, even if the LLM swears the output is right. We “reason” about it, I guess? Well in this case the whole “reasoning” process is to simply get an output that looks right, so what is reasoning in our case?

Let me just go one ridiculous level lower. If I measure every frame the Hubble telescope takes, and I measure with a simple ruler the distances between things, frame by frame, I can “reason” out some rules of the universe (planetary orbits). In this “reasoning” process, the very basic question of “well why, and who made this” immediately arises, so reasoning always leads to the fundamental question of God.

So, yeah. We reason to see God, because that’s all we’re seeing, everything else is an illusion. Reasoning is inextricably linked to God, so we have to be very open minded when we ask what is this machine doing.

replies(1): >>42964405 #
17. frontalier No.42960595
sshhhh, let the money flow
18. mistermann No.42963051
A more parsimonious explanation: consciousness is generative, like an LLM. And, according to cultural conditioning, this generated scenario is referred to as reality.

I like this version for at least two reasons:

1. It is 100% consistent with large quantities of scientific findings (psychology and neuroscience), whereas I believe yours has a conservation of mass problem, at least

2. Everyone dislikes it, at least in certain scenarios (say, when reference is made to it during an object-level disagreement)

19. LordDragonfang No.42964405
Honestly, I was going to nitpick, but this definition scratches an itch in my brain so nicely that I'll just compliment it as beautiful. "We reason to see God", I love it.

(Also, if I might give a recommendation, you might be the type of person to enjoy Unsong by Scott Alexander https://unsongbook.com/)

replies(1): >>42966458 #
20. bloomingkales No.42966458
Thank you for the suggestion and nice words. Trust me, I have to sit here and laugh at the stuff I write too, because I wasn’t always a believer. So it’s a little bit of a trip for me too, I’m still exploring my own existence.