
184 points hhs | 6 comments
aabhay ◴[] No.41840024[source]
The ability to use automatic verification + synthetic data is basically common knowledge among practitioners. But all these organizations have also explored endlessly the ways you end up overfitting on such data, and the conclusion is the same -- the current model architecture seems to plateau when it comes to multi-step logical reasoning. You either drift too far from your common-knowledge pre-training, or you never come up with the right steps in instances where there's a vast design space.
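
To make the overfitting point concrete, the loop is roughly this (a minimal sketch, not any particular lab's pipeline; model.sample_solution and verify are hypothetical stand-ins for a sampler plus an automatic checker such as unit tests or a proof verifier):

    def build_synthetic_dataset(problems, model, n_samples=8):
        # Sample candidate solutions, keep only the ones the automatic
        # checker accepts, then fine-tune on the survivors.
        kept = []
        for problem in problems:
            for _ in range(n_samples):
                candidate = model.sample_solution(problem)   # hypothetical sampler
                if verify(problem, candidate):               # hypothetical checker
                    kept.append((problem, candidate))
        # The overfitting risk: the model narrows onto whatever the
        # checker can grade, and drifts from its pre-training distribution.
        return kept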

Think -- why has nobody been able to make an LLM play Go better than AlphaZero while still retaining language capabilities? It certainly would have orders of magnitude more parameters.

replies(3): >>41840256 #>>41844066 #>>41848037 #
1. danielmarkbruce ◴[] No.41840256[source]
AlphaZero is a system including models and search capabilities. This isn't a great example.
replies(2): >>41840329 #>>41845341 #
2. aabhay ◴[] No.41840329[source]
AlphaZero is not an LLM though? It's primarily a convolutional network with some additional fully connected layers that guide an MCTS at inference time.
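
For anyone who hasn't looked at it, the network itself is conceptually small -- a conv trunk with a policy head and a value head that the MCTS queries at every node. A rough PyTorch sketch of the shape (not the actual DeepMind code, which uses residual blocks and convolutional heads):

    import torch
    import torch.nn as nn

    class PolicyValueNet(nn.Module):
        def __init__(self, board_size=19, channels=64, n_moves=19 * 19 + 1):
            super().__init__()
            # 17 input planes = recent board history + side to move, as in AlphaGo Zero
            self.trunk = nn.Sequential(
                nn.Conv2d(17, channels, 3, padding=1), nn.ReLU(),
                nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            )
            flat = channels * board_size * board_size
            self.policy_head = nn.Linear(flat, n_moves)  # prior over moves, consumed by MCTS
            self.value_head = nn.Linear(flat, 1)         # expected outcome of the position

        def forward(self, board_planes):
            x = self.trunk(board_planes).flatten(1)
            return self.policy_head(x), torch.tanh(self.value_head(x))
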
replies(1): >>41840372 #
3. danielmarkbruce ◴[] No.41840372[source]
That too! It's definitely not an LLM. It would be a bad architecture choice (almost certainly...). For math an LLM would be a good choice: arbitrary-length input, sequential data, unclear where/which symbols to pay the most attention to. Math notation is a human-built language just like English.

Search is a great tool for AI, but you have to have a reasonable search space (like chess or Go or poker). I'm not close enough to Lean to know whether it can support something like that (I believe not), but math doesn't feel like a thing where you can reasonably "search" next steps, for most problems.
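
FWIW, Lean does expose discrete "next steps" (tactics applied to a proof state), so you can pose it as search; the problem is the branching factor, since the moves include inventing the right term or lemma out of an unbounded space. The usual framing is something like this best-first loop (a sketch; legal_tactics, apply_tactic, and score are hypothetical stand-ins for a tactic enumerator and a learned heuristic):

    import heapq
    import itertools

    def best_first_proof_search(initial_state, max_nodes=10_000):
        tie = itertools.count()  # tie-breaker so proof states are never compared directly
        frontier = [(-score(initial_state), next(tie), initial_state, [])]
        expanded = 0
        while frontier and expanded < max_nodes:
            _, _, state, steps = heapq.heappop(frontier)
            expanded += 1
            if state.is_proved():                   # hypothetical predicate on proof states
                return steps
            for tactic in legal_tactics(state):     # enormous branching factor in practice
                nxt = apply_tactic(state, tactic)   # hypothetical Lean interface
                if nxt is not None:
                    heapq.heappush(frontier, (-score(nxt), next(tie), nxt, steps + [tactic]))
        return None  # gave up: the space wasn't "reasonable" to search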

replies(1): >>41843938 #
4. danenania ◴[] No.41843938{3}[source]
> It would be a bad architecture choice (almost certainly...)

Naively, it would seem like transformers could line up nicely with turn-based games. Instead of mapping tokens to language as in an LLM, they could map to valid moves given the current game state. And then instead of optimizing the next token for linguistic coherence as LLMs do, you optimize for winning the game.
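
Something like this, I'd guess -- treat legal moves as the vocabulary, the move history as the sequence, mask out illegal moves, and score the whole game instead of the next token. A REINFORCE-style sketch (the GameEnv-like interface and the reward shaping are hypothetical):

    import torch
    import torch.nn as nn

    class MovePolicy(nn.Module):
        """Transformer over a vocabulary of moves rather than words."""
        def __init__(self, n_moves, d_model=128):
            super().__init__()
            self.embed = nn.Embedding(n_moves + 1, d_model)  # +1 for a start token
            layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=2)
            self.head = nn.Linear(d_model, n_moves)

        def forward(self, move_history, legal_mask):
            h = self.encoder(self.embed(move_history))
            logits = self.head(h[:, -1])                     # next-move distribution
            return logits.masked_fill(~legal_mask, float("-inf"))

    def play_one_game(policy, env, optimizer):
        """One self-play episode, scored by the final result rather than coherence."""
        history, log_probs = [0], []                         # 0 = start token
        env.reset()                                          # hypothetical game interface
        while not env.done:
            logits = policy(torch.tensor([history]), env.legal_mask())
            dist = torch.distributions.Categorical(logits=logits)
            move = dist.sample()
            log_probs.append(dist.log_prob(move))
            env.step(move.item())
            history.append(move.item() + 1)
        reward = env.result()                                # +1 win, -1 loss, 0 draw
        loss = -reward * torch.stack(log_probs).sum()        # optimize for winning the game
        optimizer.zero_grad(); loss.backward(); optimizer.step()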

replies(1): >>41845141 #
5. danielmarkbruce ◴[] No.41845141{4}[source]
A lot of the games usually used are Markov. All the state is right there in front of you; it doesn't matter how you got there. Chess, for example: it matters not how you got to state X (...someone will bring up the edge cases...).
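
(The edge cases being castling rights, en passant, and the repetition/fifty-move counters -- all history-dependent until you fold them into the state, which is exactly what a FEN-style representation does. Sketch:)

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class ChessState:
        # Everything the rules care about, folded into one Markov state
        # (essentially what FEN records, plus repetition bookkeeping).
        piece_placement: str     # board portion of a FEN string
        side_to_move: str        # "w" or "b"
        castling_rights: str     # the history-dependent bits...
        en_passant_square: str   # ...made part of the state
        halfmove_clock: int      # fifty-move rule
        repetition_count: int    # threefold repetition

    # With these fields included, a position's value depends only on the
    # state itself, not on the move sequence that produced it.
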
6. michaelnny ◴[] No.41845341[source]
One important aspect of the success of AlphaGo and its successors is that the game environment is a closed domain with a stable reward function. With this we can guide the agent to do MCTS search and planning to find the best move in every state.

However, no such reward system is available to an LLM in an open-domain setting.
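
Concretely, the "stable reward" is just the game result, which the closed environment can always compute at a terminal state; nothing comparable exists for grading arbitrary text (a sketch; final_position.score() is a hypothetical scoring helper):

    def go_reward(final_position, player):
        # Closed domain: the rules can always score a finished game.
        black_margin = final_position.score()    # hypothetical area-scoring helper
        won = black_margin > 0 if player == "black" else black_margin < 0
        return 1.0 if won else -1.0

    def open_domain_reward(prompt, response):
        # Open domain: no stable, automatic scorer for an arbitrary answer.
        raise NotImplementedError("this is the missing piece for LLMs")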