
S1: A $6 R1 competitor?

(timkellogg.me)
851 points by tkellogg | 1 comment
gorgoiler | No.42959710
This feels just like telling a constraint satisfaction engine to backtrack and find a better route through the graph. We saw this 25 years ago with engines like PROVERB doing directed backtracking, and with adversarial planning when automating competitive games.

Why would you control the inference at the token level? Wouldn’t the more obvious (and technically superior) place to control repeat analysis of the optimal path through the search space be in the inference engine itself?
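To make that concrete, doing it in the engine could look something like the sketch below: a logits processor that simply forbids the end-of-thinking token until a minimum reasoning budget has been spent. This is purely illustrative and assumes a Hugging Face transformers-style LogitsProcessor; the token id, threshold, and class name are made up, and it is not what the article or paper actually implements.

    import torch
    from transformers import LogitsProcessor

    class MinThinkingTokens(LogitsProcessor):
        """Forbid the end-of-thinking token until enough reasoning tokens exist."""

        def __init__(self, end_think_id: int, min_new_tokens: int, prompt_len: int):
            self.end_think_id = end_think_id      # hypothetical id of the "</think>" token
            self.min_new_tokens = min_new_tokens  # minimum length of the reasoning trace
            self.prompt_len = prompt_len          # prompt length, so only new tokens are counted

        def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
            generated = input_ids.shape[1] - self.prompt_len
            if generated < self.min_new_tokens:
                # Mask the stop-thinking token out of the distribution entirely.
                scores[:, self.end_think_id] = float("-inf")
            return scores

You would pass an instance of this through the logits_processor argument of model.generate(); the effect is the same "keep thinking" pressure, but applied inside the sampling loop rather than by editing the transcript afterwards.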

Doing it by saying “Wait” feels like fixing dad’s laptop over a phone call. You’ll get there, but driving over and getting hands on is a more effective solution. Realistically, I know that getting “hands on” with the underlying inference architecture is way beyond my own technical ability. Maybe it’s not even feasible, like trying to fix a cold with brain surgery?
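For reference, my understanding of the token-level trick is roughly this loop: generate until the model emits its end-of-thinking delimiter, strip that delimiter, append "Wait", and generate again. A minimal sketch, assuming a transformers-style API; the model name and the </think> delimiter are placeholders, not the paper's exact setup:

    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "Qwen/Qwen2.5-32B-Instruct"   # placeholder; s1 fine-tunes a Qwen model
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

    text = "How many 'r's are in strawberry? Think step by step.\n<think>\n"
    for _ in range(2):  # force up to two extra rounds of thinking
        ids = tok(text, return_tensors="pt").to(model.device)
        out = model.generate(**ids, max_new_tokens=512)
        # Keep special tokens so the </think> delimiter stays visible if it is one.
        text = tok.decode(out[0], skip_special_tokens=False)
        if "</think>" in text:
            # The model tried to stop thinking: cut off the delimiter and nudge it on.
            text = text.split("</think>")[0] + "\nWait"
    print(text)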

replies(3): >>42960228 #>>42960308 #>>42962633 #
1. rayboy1995 | No.42962633
This is the difference between science and engineering. What they have done is engineering. If the result gets you 90% of the way there with barely any effort, it's better to move on to the next piece of low-hanging fruit than to spend time chasing that last 10%.