
365 points lawrenceyan | 4 comments
1. bob1029 No.41880706
https://arxiv.org/abs/2402.04494

> Board states s are encoded as FEN strings which we convert to fixed-length strings of 77 characters where the ASCII-code of each character is one token. A FEN string is a description of all pieces on the board, whose turn it is, the castling availability for both players, a potential en passant target, a half-move clock and a full-move counter. We essentially take any variable-length field in the FEN string, and convert it into a fixed-length sub-string by padding with ‘.’ if needed. We never flip the board; the FEN string always starts at rank 1, even when it is the black’s turn. We store the actions in UCI notation (e.g., ‘e2e4’ for the well-known white opening move). To tokenize them we determine all possible legal actions across games, which is 1968, sort them alphanumerically (case-sensitive), and take the action’s index as the token, meaning actions are always described by a single token (all details in Section A.1).
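A rough sketch of what that encoding might look like. The field widths below are my guess at a layout that sums to 77 characters (run-length digits expanded to '.', rank separators dropped, each variable-length field padded with '.'); the paper's exact scheme is in its Section A.1. The three-move action list stands in for the real vocabulary of 1968 legal UCI moves.

```python
# Sketch of the paper's tokenization (my reconstruction, not verified
# against the paper's Section A.1).

def encode_fen(fen: str) -> str:
    """Convert a FEN string to a fixed-length 77-character string."""
    board, side, castling, ep, half, full = fen.split()
    # Expand run-length digits ("3" -> "...") and drop rank separators,
    # giving exactly 64 squares; assumed layout: 64+1+4+2+3+3 = 77.
    squares = "".join("." * int(c) if c.isdigit() else c
                      for c in board if c != "/")
    pad = lambda s, n: s.ljust(n, ".")
    out = (squares + side + pad(castling, 4) + pad(ep, 2)
           + pad(half, 3) + pad(full, 3))
    assert len(out) == 77
    return out

start = "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1"
tokens = [ord(c) for c in encode_fen(start)]  # one ASCII code per token

# Actions: sort the UCI move strings and use each move's index as its
# token (toy three-move vocabulary instead of the real 1968).
actions = sorted(["e2e4", "d2d4", "g1f3"])
token_of = {a: i for i, a in enumerate(actions)}
```

Under this assumed layout the castling field "KQkq" needs no padding, while a missing en passant target "-" becomes "-.", and the move counters "0" and "1" become "0.." and "1..".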

I am starting to notice a pattern in these papers: writing hyper-specific tokenizers for the target problem.

How would this model perform if we made a small change to the rules of chess and continued using the same tokenizer? If we find we need to rewrite the tokenizer for every problem variant, then I argue this is just ordinary programming in a very expensive disguise.

replies(3): >>41881003 #>>41881108 #>>41881117 #
2. kidintech No.41881003
// personal opinion: I think machine learning as it currently stands is widely overhyped

How is this the top comment?

> I am starting to notice a pattern in these papers - Writing hyper-specific tokenizers for the target problem.

This merely specifies what they consider part of the game state, which is entirely necessary for what they set out to do.

> I argue this is just ordinary programming

"Ordinary programming" (what does that mean?) for such a task implies extraordinary chess intuition, capable of conjuring rules and heuristics for the task of comparing two game states and saying which one is "better" (what does better mean?).

> How would this model perform if we made a small change to the rules of chess and continued using the same tokenizer?

If by "small change" you are implying i.e. removing the ability to castle, then sure, the tokenizer would need to be rewritten. At the same time, the entire training dataset would need to be changed, such that the games are valid under your new ruleset. How is this controversial or unexpected?

It feels like you are expecting state-of-the-art technology to let us input an arbitrary ruleset and have the mighty computer immediately play the game optimally. Unfortunately, this is not the case, but that does not take anything away from this paper.

3. BurningFrog No.41881108
If you change the rules, you have a different game than chess.

Since there is no training data for that game, I don't know how you'd get this kind of AI to do anything.

4. mewpmewp2 No.41881117
I don't know much about this space, but it seems like this could be solved by reserving a good number of unused tokens that you only start using when the need arises. Or keep tokens that can be combined to cover edge cases: if every character is a token, you can combine them into anything.
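A minimal sketch of that idea: a reserved block of spare token ids plus a per-character fallback for moves outside the known vocabulary. The scheme, the `tokenize` helper, and the toy vocabulary are all hypothetical, not from the paper.

```python
# Sketch of the spare-token / character-fallback idea (hypothetical
# scheme, not from the paper).
KNOWN = ["d2d4", "e2e4", "g1f3"]   # toy action vocabulary
RESERVED = 10                      # spare ids held back for future rules
CHAR_BASE = len(KNOWN) + RESERVED  # character-fallback tokens start here

def tokenize(move: str) -> list[int]:
    if move in KNOWN:
        return [KNOWN.index(move)]  # one token per known action
    # A move outside the vocabulary (say, from a rule variant) is
    # spelled out as one token per character instead.
    return [CHAR_BASE + ord(c) for c in move]
```

Here `tokenize("e2e4")` yields a single action token, while `tokenize("e7e8q")` (a promotion missing from this toy vocabulary) falls back to five character tokens, so unseen moves still get some representation without rebuilding the tokenizer.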