
296 points by todsacerdoti | 1 comment
cheesecompiler No.44367317
The reverse is possible too: throwing massive compute at a problem can mask the existence of a simpler, more general solution. General-purpose methods tend to win out over time—but how can we be sure they’re truly the most general if we commit so hard to one paradigm (e.g. LLMs) that we stop exploring the underlying structure?
logicchains No.44367776
We can be sure via analysis based on computational theory, e.g. https://arxiv.org/abs/2503.03961 and https://arxiv.org/abs/2310.07923 . This lets us know what classes of problems a model is able to solve, and sufficiently deep transformers with chain of thought have been shown to be theoretically capable of solving a very large class of problems.
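For what it's worth, the headline results in that line of work (the second link is Merrill & Sabharwal's chain-of-thought paper) are roughly the following; I'm paraphrasing from memory, and the precision and uniformity assumptions matter:

    no intermediate steps (fixed depth, log precision):  only languages in TC^0
    polynomially many chain-of-thought steps:            exactly P (polynomial time)

So chain of thought genuinely buys serial computation that a single fixed-depth forward pass cannot do.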
cheesecompiler No.44367945
But this uses the transformer model to justify its own reasoning strength, which might be a blind spot; that was my original point. All the above shows is that transformers can simulate solving a certain set of problems. It doesn't show that they are the best tool for the job.
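To make the "can simulate" vs. "best tool" distinction concrete, here's a toy sketch (mine, not from either paper): computing the parity of a bit string. The step-by-step scratchpad trace stands in for chain of thought, since a transformer needs serial intermediate steps to track state reliably; the direct fold is the simpler, more general solution that throwing compute at the problem can mask.

    from functools import reduce
    from operator import xor

    def parity_via_scratchpad(bits: str) -> int:
        # Mimic a chain-of-thought trace: write out the running state
        # at every step, the way a model emits intermediate tokens.
        state = 0
        trace = []
        for b in bits:
            state ^= int(b)
            trace.append(state)  # one "token" of the simulated scratchpad
        return trace[-1] if trace else 0

    def parity_direct(bits: str) -> int:
        # The simple, general solution: a constant-space fold.
        return reduce(xor, map(int, bits), 0)

    assert parity_via_scratchpad("10110") == parity_direct("10110") == 1

Both compute the same function, i.e. the expressivity result holds either way; it just says nothing about which approach is the right tool.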