
695 points crescit_eundo | 31 comments
niobe ◴[] No.42142885[source]
I don't understand why educated people expect that an LLM would be able to play chess at a decent level.

It has no idea about the quality of its data. "Act like x" prompts are no substitute for actual reasoning and deterministic computation, which chess clearly requires.

replies(20): >>42142963 #>>42143021 #>>42143024 #>>42143060 #>>42143136 #>>42143208 #>>42143253 #>>42143349 #>>42143949 #>>42144041 #>>42144146 #>>42144448 #>>42144487 #>>42144490 #>>42144558 #>>42144621 #>>42145171 #>>42145383 #>>42146513 #>>42147230 #
viraptor ◴[] No.42143060[source]
This is a puzzle given enough training information. An LLM can successfully print out the state of the board after the given moves. It can also produce a not-terrible summary of the position and can list dangers at least one move ahead. "Decent" is subjective, but that should beat at least beginners. And the lowest level of Stockfish used in the blog post is low intermediate.

I don't know really what level we should be thinking of here, but I don't see any reason to dismiss the idea. Also, it really depends on whether you're thinking of the current public implementations of the tech, or the LLM idea in general. If we wanted to get better results, we could feed it way more chess books and past game analysis.

replies(2): >>42143139 #>>42143871 #
1. grugagag ◴[] No.42143139[source]
LLMs like GPT aren’t built to play chess, and here’s why: they’re made for handling language, not playing games with strict rules and strategies. Chess engines, like Stockfish, are designed specifically for analyzing board positions and making the best moves, but LLMs don’t even "see" the board. They’re just guessing moves based on text patterns, without understanding the game itself.

Plus, LLMs have limited memory, so they struggle to remember previous moves in a long game. It’s like trying to play blindfolded! They’re great at explaining chess concepts or moves but not actually competing in a match.

replies(5): >>42143316 #>>42143409 #>>42143940 #>>42144497 #>>42150276 #
2. viraptor ◴[] No.42143316[source]
> but LLMs don’t even "see" the board

This is a very vague claim, but they can reconstruct the board from the list of moves, which I would say proves this wrong.

> LLMs have limited memory

For the recent models this is not a problem for the chess example. You can feed whole books into them if you want to.

> so they struggle to remember previous moves

Chess is stateless with perfect information. Unless you're going for mind games, you don't need to remember previous moves.

> They’re great at explaining chess concepts or moves but not actually competing in a match.

What's the difference between a great explanation of a move and explaining every possible move then selecting the best one?

replies(6): >>42143465 #>>42143481 #>>42143484 #>>42143533 #>>42145323 #>>42146931 #
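The "reconstruct the board from the list of moves" claim is mechanically simple: board state is a pure fold over the move list. A minimal sketch (my own illustrative simplification, not the blog's code): moves are plain (from_square, to_square) pairs rather than SAN, and castling, promotion, and legality checks are ignored.

```python
# Rebuild a board position by folding a move list over an initial
# placement. A capture is just an overwrite of the destination square.
def replay(initial: dict, moves) -> dict:
    board = dict(initial)
    for src, dst in moves:
        board[dst] = board.pop(src)
    return board

# Toy position: a white pawn, a black pawn, and a white knight.
start = {"e2": "P", "e7": "p", "g1": "N"}
pos = replay(start, [("e2", "e4"), ("e7", "e5"), ("g1", "f3")])
```

The point is only that the final position is a deterministic function of the move list, which is why a model that has seen enough games can in principle recover it.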
3. jerska ◴[] No.42143409[source]
LLMs need to compress information to be able to predict next words in as many contexts as possible.

Chess moves are simply tokens as any other. Given enough chess training data, it would make sense to have part of the network trained to handle chess specifically instead of simply encoding basic lists of moves and follow-ups. The result would be a general purpose sub-network trained on chess.

4. mjcohen ◴[] No.42143465[source]
Chess is not stateless. Three repetitions of the same position is a draw.
replies(1): >>42144802 #
5. cool_dude85 ◴[] No.42143481[source]
>Chess is stateless with perfect information. Unless you're going for mind games, you don't need to remember previous moves.

In what sense is chess stateless? Question: is Rxa6 a legal move? You need board state to refer to in order to decide.

replies(1): >>42143555 #
6. sfmz ◴[] No.42143484[source]
Chess is not stateless. En passant requires knowledge of the last move, and castling rights depend on nearly all previous moves.

https://adamkarvonen.github.io/machine_learning/2024/01/03/c...

replies(1): >>42143592 #
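The extra state being argued about here is exactly what FEN records beyond piece placement. A sketch (the `ChessState` name and field layout are my own, but the six fields mirror a FEN record):

```python
from dataclasses import dataclass
from typing import Optional

# The full "position" of a chess game, FEN-style. The board alone is
# not enough: castling rights, the en passant target square, and the
# halfmove clock all live outside the piece placement.
@dataclass
class ChessState:
    placement: str             # piece placement, e.g. the FEN board field
    white_to_move: bool
    castling: str              # subset of "KQkq", "-" if none remain
    en_passant: Optional[str]  # target square like "e3", or None
    halfmove_clock: int        # plies since last capture or pawn move
    fullmove_number: int

START = ChessState(
    placement="rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR",
    white_to_move=True,
    castling="KQkq",
    en_passant=None,
    halfmove_clock=0,
    fullmove_number=1,
)
```

Only threefold repetition needs anything beyond these fields (a history of prior positions).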
7. ethbr1 ◴[] No.42143533[source]
> Chess is stateless with perfect information.

It is not stateless, because good chess isn't played as a series of independent moves -- it's played as a series of moves connected to a player's strategy.

> What's the difference between a great explanation of a move and explaining every possible move then selecting the best one?

Continuing from the above, "best" in the latter sense involves understanding possible future moves after the next move.

Ergo, if I looked at all games with the current board state and chose the next move that won the most games, it'd be tactically sound but strategically ignorant.

Because many of those next moves were making that next move in support of some broader strategy.

replies(2): >>42143634 #>>42144422 #
8. aetherson ◴[] No.42143555{3}[source]
They mean that you only need board position, you don't need the previous moves that led to that board position.

There are at least a couple of exceptions to that as far as I know.

replies(2): >>42143938 #>>42144645 #
9. viraptor ◴[] No.42143592{3}[source]
Ok, I did go too far. But castling doesn't require all previous moves - only one bit of information carried over. So in practice that's board + 2 bits per player. (or 1 bit and 2 moves if you want to include a draw)
replies(1): >>42143633 #
10. aaronchall ◴[] No.42143633{4}[source]
Castling requires no prior moves by either piece (King or Rook). Move the King once and back early on, and later, although the board looks set for castling, the King may not castle.
replies(1): >>42143643 #
11. viraptor ◴[] No.42143634{3}[source]
> it's played as a series of moves connected to a player's strategy.

That state belongs to the player, not to the game. You can carry your own state in any game you want - for example remember who starts with what move in rock paper scissors, but that doesn't make that game stateful. It's the player's decision (or bot's implementation) to use any extra state or not.

I wrote "previous moves" specifically (and the extra bits already addressed elsewhere), but the LLM can carry/rebuild its internal state between the steps.

replies(1): >>42143743 #
12. viraptor ◴[] No.42143643{5}[source]
Yes, which means you carry one bit of extra information - "is castling still allowed". The specific moves that resulted in this bit being unset don't matter.
replies(1): >>42143680 #
13. aaronchall ◴[] No.42143680{6}[source]
Ok, then for this you need a minimum of two bits - one for the kingside Rook and one for the queenside Rook; both would be set if you move the King. You also need to count moves since the last capture or pawn move for the 50 move rule.
replies(1): >>42143705 #
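The bookkeeping in this subthread is small enough to write out. A sketch of the non-board state under discussion: four castling bits (kingside/queenside for each side, so two per player as noted above) plus a halfmove counter for the 50 move rule. Names and layout are illustrative assumptions, not any engine's representation.

```python
# Bit flags: White/Black kingside and queenside castling rights.
WK, WQ, BK, BQ = 1, 2, 4, 8

def revoke_castling(rights: int, mover_is_white: bool) -> int:
    """Moving the king clears both of that side's castling bits."""
    return rights & ~(WK | WQ) if mover_is_white else rights & ~(BK | BQ)

def update_halfmove_clock(clock: int, is_capture: bool, is_pawn_move: bool) -> int:
    """The 50-move counter resets on any capture or pawn move."""
    return 0 if (is_capture or is_pawn_move) else clock + 1

def can_claim_fifty_move_draw(clock: int) -> bool:
    # 50 full moves = 100 halfmoves without a capture or pawn move.
    return clock >= 100
```

How the individual rights were lost (King move vs. Rook move, and when) genuinely doesn't matter; only the current bits do.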
14. viraptor ◴[] No.42143705{7}[source]
Ah, that one's cool - I've got to admit I've never heard of the 50 move rule.
replies(1): >>42143935 #
15. ethbr1 ◴[] No.42143743{4}[source]
If we're talking about LLMs, then the state belongs to it.

So even if the rules of chess are (mostly) stateless, the resulting game itself is not.

Thus, you can't dismiss concerns about LLMs having difficulty tracking state by saying that chess is stateless. It's not, in that sense.

16. User23 ◴[] No.42143935{8}[source]
Also the 3x repetition rule.
replies(1): >>42144595 #
17. User23 ◴[] No.42143938{4}[source]
The correct phrasing would be: is it a Markov process?
18. zeckalpha ◴[] No.42143940[source]
Language is a game with strict rules and strategies.
19. lxgr ◴[] No.42144422{3}[source]
> good chess isn't played as a series of independent moves -- it's played as a series of moves connected to a player's strategy.

Maybe good chess, but not perfect chess. That would by definition be game-theoretically optimal, which in turn implies having to maintain no state other than your position in a large but precomputable game tree.

replies(1): >>42144634 #
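The point that game-theoretically optimal play needs no state beyond the current node of the game tree is easiest to see in a toy game. A sketch using single-pile Nim (take 1-3 stones, last stone wins) rather than chess, purely for illustration: the memoized tree *is* the precomputed strategy, and the move choice never consults history.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def is_winning(stones: int) -> bool:
    """True if the player to move can force a win from this position."""
    # A position is winning iff some move leads to a losing position.
    return any(not is_winning(stones - take)
               for take in (1, 2, 3) if take <= stones)

def best_move(stones: int) -> int:
    """Pick any move that leaves the opponent in a losing position."""
    for take in (1, 2, 3):
        if take <= stones and not is_winning(stones - take):
            return take
    return 1  # every move loses; take the minimum
```

Chess's tree is astronomically larger, so no one actually precomputes it, which is why practical play falls back on strategy and heuristics: the state lxgr is distinguishing from the optimal-play ideal.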
20. codebolt ◴[] No.42144497[source]
> they’re made for handling language, not playing games with strict rules and strategies

Here's the opposite theory: Language encodes objective reasoning (or at least, it does some of the time). A sufficiently large ANN trained on sufficiently large amounts of text will develop internal mechanisms of reasoning that can be applied to domains outside of language.

Based on what we are currently seeing LLMs do, I'm becoming more and more convinced that this is the correct picture.

replies(1): >>42144685 #
21. chipsrafferty ◴[] No.42144595{9}[source]
And 5x repetition rule
22. chongli ◴[] No.42144634{4}[source]
Right, but your position also includes whether or not you still have the right to castle on either side, whether each pawn has the right to capture en passant or not, the number of moves since the last pawn move or capture (for tracking the 50 move rule), and whether or not the current position has ever appeared on the board once or twice prior (so you can claim a draw by threefold repetition).

So in practice, your position actually includes the log of all moves to that point. That’s a lot more state than just what you can see on the board.

23. chongli ◴[] No.42144645{4}[source]
Yes, 4 exceptions: castling rights, legal en passant captures, threefold repetition, and the 50 move rule. You actually need quite a lot of state to track all of those.
replies(1): >>42147799 #
24. wruza ◴[] No.42144685[source]
I share this idea but from a different perspective. It doesn’t develop these mechanisms, but casts a high-dimensional-enough shadow of their effect on itself. This vaguely explains why the deeper you are Gell-Mann-wise, the less sharp that shadow is, because specificity cuts off “reasoning” hyperplanes.

It’s hard to explain emerging mechanisms because of the nature of generation, which is one-pass sequential matrix reduction. I say this while waving my hands, but listen. Reasoning is similar to Turing complete algorithms, and what LLMs can become through training is similar to limited pushdown automata at best. I think this is a good conceptual handle for it.

“Line of thought” is an interesting way to loop the process back, but it doesn’t show that much improvement, afaiu, and still is finite.

Otoh, a chess player takes as much time and “loops” as they need to get the result (ignoring competitive time limits).

25. Someone ◴[] No.42144802{3}[source]
Yes, there’s state there that’s not in the board position, but technically, threefold repetition is not a draw. Play can go on. https://en.wikipedia.org/wiki/Threefold_repetition:

“The game is not automatically drawn if a position occurs for the third time – one of the players, on their turn, must claim the draw with the arbiter. The claim must be made either before making the move which will produce the third repetition, or after the opponent has made a move producing a third repetition. By contrast, the fivefold repetition rule requires the arbiter to intervene and declare the game drawn if the same position occurs five times, needing no claim by the players.”
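The claim/automatic distinction in that quote maps cleanly onto two predicates over a position-count table. A sketch (class and method names are my own; `position_key` stands for some hashable encoding of board plus side to move, castling rights, and en passant square):

```python
from collections import Counter

class RepetitionTracker:
    """Threefold repetition only entitles a player to claim a draw;
    fivefold repetition ends the game automatically."""

    def __init__(self):
        self.counts = Counter()

    def record(self, position_key) -> None:
        self.counts[position_key] += 1

    def can_claim_draw(self, position_key) -> bool:
        return self.counts[position_key] >= 3

    def is_automatic_draw(self, position_key) -> bool:
        return self.counts[position_key] >= 5
```

So between the third and fifth occurrence the game is live unless somebody claims, exactly as the quoted rule says.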

26. cowl ◴[] No.42145323[source]
> Chess is stateless with perfect information. Unless you're going for mind games, you don't need to remember previous moves.

While it can be played as stateless, remembering previous moves gives you insight into the potential strategy that is being built.

27. jackcviers3 ◴[] No.42146931[source]
You can feed them whole books, but they have trouble with recall for specific information in the middle of the context window.
28. fjkdlsjflkds ◴[] No.42147799{5}[source]
It shouldn't be too much extra state. I assume that 2 bits should be enough to cover castling rights (one for each player), whatever is necessary to store the last 3 moves should cover legal en passant captures and threefold repetition, and 12 bits to store two non-overflowing 6 bit counters (time since last capture, and time since last pawn move) should cover the 50 move rule.

So... unless I'm understanding something incorrectly, something like "the three last moves plus 17 bits of state" (plus the current board state) should be enough to treat chess as a memoryless process. Doesn't seem like too much to track.

replies(1): >>42148093 #
29. chongli ◴[] No.42148093{6}[source]
Threefold repetition does not require the three positions to occur consecutively. So you could conceivably have a position occur for the first time on the 1st move, a second time on the 25th move, and a third time on the 50th move of a sequence, and then the players could claim a draw by threefold repetition and the 50 move rule at the same time!

This means you do need to store the last 50 board positions in the worst case. Normally you need to store less because many moves are irreversible (pawns cannot go backwards, pieces cannot be un-captured).

replies(1): >>42150660 #
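The "many moves are irreversible" observation gives a natural bound on that worst case: no position from before the last pawn move or capture can ever recur, so the repetition history can be truncated there. A sketch (names are illustrative; `position_key` is again some hashable position encoding):

```python
class PositionHistory:
    """Track positions since the last irreversible move, which is all
    that threefold-repetition detection ever needs to inspect."""

    def __init__(self):
        self.seen = []

    def push(self, position_key, irreversible: bool) -> None:
        if irreversible:
            # Pawn move or capture: earlier positions can never repeat.
            self.seen.clear()
        self.seen.append(position_key)

    def repetitions(self, position_key) -> int:
        return self.seen.count(position_key)
```

Combined with the 50 move rule, the list is bounded at 100 halfmoves of positions, which matches the "last 50 board positions in the worst case" figure above.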
30. nemomarx ◴[] No.42150276[source]
just curious, was this rephrased by an llm or is that your writing style?
31. fjkdlsjflkds ◴[] No.42150660{7}[source]
Ah... gotcha. Thanks for the clarification.