
S1: A $6 R1 competitor?

(timkellogg.me)
851 points by tkellogg | 1 comment
bloomingkales ◴[] No.42949616[source]
If an LLM output is like a sculpture, then we have to sculpt it. I've never done any sculpting, but I do know they first get the clay spinning on a wheel.

Whatever you want to call this “reasoning” step, ultimately it really is just throwing the model into a game loop. We want to interact with it on each tick (spin the clay), and sculpt every second until it looks right.

You will need to loop against an LLM to do just about anything and everything, forever - this is the default workflow.

Those who think we will quell our thirst for compute have another thing coming: we're going to be insatiable about how much brute-force LLM looping we do.
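
That loop is easy to sketch. Here's a minimal Python sketch of the idea; call_llm and score are hypothetical stubs standing in for whatever model call and critique step you actually use. Each tick: score the current draft, and if it doesn't look right yet, re-prompt the model to improve it.

    def call_llm(prompt: str) -> str:
        """Placeholder for whatever model call you use (API, local, etc.)."""
        return prompt  # stub: echoes the prompt so the sketch runs standalone

    def score(draft: str) -> float:
        """Placeholder critique step: return a confidence in [0, 1]."""
        return min(1.0, len(draft) / 500)  # stub heuristic

    def sculpt(task: str, max_ticks: int = 10, good_enough: float = 0.9) -> str:
        draft = call_llm(f"First attempt at: {task}")
        for tick in range(max_ticks):            # the "game loop"
            if score(draft) >= good_enough:      # looks right; stop sculpting
                break
            draft = call_llm(
                f"Task: {task}\nCurrent draft:\n{draft}\n"
                f"Critique and improve it (tick {tick})."
            )
        return draft

    print(sculpt("explain why test-time compute keeps growing"))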

replies(3): >>42955281 #>>42955806 #>>42956482 #
1. MrLeap ◴[] No.42955281[source]
This is a fantastic insight and really has my gears spinning.

We need to cluster the AI's insights on a spatial grid hash, give it a minimap with the ability to zoom in and out, and give it the agency to try and find its way to an answer and build up confidence and tests for that answer.

coarse -> fine, refine, test, loop.
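
A rough sketch of what that spatial grid hash and minimap could look like, with some assumptions not in the comment: each insight gets projected to a 2D point (a toy hash here stands in for a real projection of embeddings, e.g. PCA/UMAP), and "zoom" is just the grid cell size.

    from collections import defaultdict

    def embed_2d(text: str) -> tuple[float, float]:
        """Toy stand-in for projecting an embedding down to 2D."""
        return (hash(text) % 1000 / 1000.0, hash(text[::-1]) % 1000 / 1000.0)

    def grid_key(point: tuple[float, float], cell_size: float) -> tuple[int, int]:
        """Hash a point to its grid cell at the given zoom level."""
        x, y = point
        return (int(x // cell_size), int(y // cell_size))

    def build_grid(insights: list[str], cell_size: float) -> dict:
        """Cluster insights into cells; smaller cell_size = finer zoom."""
        grid = defaultdict(list)
        for text in insights:
            grid[grid_key(embed_2d(text), cell_size)].append(text)
        return grid

    insights = ["looping is the default workflow",
                "compute demand is insatiable",
                "reasoning is a game loop",
                "sculpt the output each tick"]

    minimap = build_grid(insights, cell_size=0.5)    # coarse view: few big cells
    detail  = build_grid(insights, cell_size=0.05)   # fine view: many small cells
    print(len(minimap), "coarse cells;", len(detail), "fine cells")

The coarse-to-fine loop is then: look at the minimap, pick a promising cell, rebuild just that region at a smaller cell_size, test what's there, and repeat.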

Maybe a parallel model that handles the visualization stuff. I imagine its training would look more like computer vision. Mind palace generation.

If you're stuck or your confidence is low, wander the palace and see what questions bubble up.
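
One hypothetical way to do the "wander" step, continuing the grid sketch above: attach a confidence score to each cell, and when you're stuck, turn the weakest cells into open questions for the model.

    def wander(cells: dict, k: int = 3) -> list[str]:
        """cells maps grid coords to (notes, confidence); prompt on the k weakest."""
        weakest = sorted(cells.items(), key=lambda kv: kv[1][1])[:k]
        return [f"These notes feel shaky (confidence {conf:.2f}): {notes}. "
                "What questions do they raise?"
                for _cell, (notes, conf) in weakest]

    example_cells = {(0, 0): (["reasoning is a game loop"], 0.4),
                     (1, 2): (["compute demand is insatiable"], 0.9)}
    print(wander(example_cells))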

Bringing my current context back through the web is how I think deeply about things. The context has the authority to reorder the web if it's "epiphany grade".

I wonder if the final epiphany at the end of what we're creating is closer to "compassion for self and others" or "eat everything."