
365 points | lawrenceyan | 1 comment
amoss ◴[] No.41879199[source]
If you solve chess, you have a tree that is too large for us to currently compute (about 10^80 positions, although my memory may be way off). Annotating that tree with win / loss / draw would allow an optimal player without search. The two obvious approaches to compression / optimization are to approximate the tree and to approximate the annotations. How well those two approaches would work depends a lot on the structure of the tree.

This result seems to tell us less about the power of the training approach (in absolute terms) and more about how amenable the chess game tree is to those two approaches (in relative terms). What I would take away is that a reasonable approximation of that tree can be encoded in roughly 270M parameters.
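
To make the idea concrete, here is a minimal sketch of "annotate the game tree, then play by lookup", using tic-tac-toe as a stand-in for chess (the game choice and every function name here are illustrative, not from the paper). The solver labels every reachable position with its game-theoretic value; once the table is filled, choosing a move is pure lookup rather than search:

    from functools import lru_cache

    LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

    def winner(board):  # board is a 9-char string over 'X', 'O', '.'
        for a, b, c in LINES:
            if board[a] != '.' and board[a] == board[b] == board[c]:
                return board[a]
        return None

    @lru_cache(maxsize=None)
    def value(board, player):
        """Annotation for `player` to move: +1 win, 0 draw, -1 loss."""
        w = winner(board)
        if w is not None:
            return 1 if w == player else -1
        if '.' not in board:
            return 0
        other = 'O' if player == 'X' else 'X'
        # Negamax: our value is the best of the negated values of the replies.
        return max(-value(board[:i] + player + board[i+1:], other)
                   for i, sq in enumerate(board) if sq == '.')

    def best_move(board, player):
        """Once the cache is warm, this is pure table lookup, not search."""
        other = 'O' if player == 'X' else 'X'
        moves = [i for i, sq in enumerate(board) if sq == '.']
        return max(moves, key=lambda i: -value(board[:i] + player + board[i+1:], other))

    print(value('.' * 9, 'X'))      # 0: tic-tac-toe is a draw under perfect play
    print(best_move('.' * 9, 'X'))  # 0: every opening draws, so the first square ties for best

Chess is the same construction in principle, just with a state space far too large to enumerate, which is why the approximation question matters.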

replies(1): >>41879508 #
timerol ◴[] No.41879508[source]
Note that an exact version of this technique is already used for the chess endgame, in what is called a tablebase. Chess is solved once there are 7 or fewer pieces on the board, via an 18.4 TB database, described here: https://lichess.org/@/lichess/blog/7-piece-syzygy-tablebases...
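
For reference, probing a Syzygy tablebase from code looks roughly like this with the python-chess library; the directory path and the example position are placeholders, and you need the Syzygy table files downloaded locally for the relevant piece counts:

    import chess
    import chess.syzygy

    board = chess.Board("4k3/8/8/8/8/8/4K3/4Q3 w - - 0 1")  # KQ vs K, 3 pieces

    with chess.syzygy.open_tablebase("/path/to/syzygy") as tablebase:
        wdl = tablebase.probe_wdl(board)  # 2 = win for the side to move, 0 = draw, -2 = loss
        dtz = tablebase.probe_dtz(board)  # distance to a zeroing move (capture or pawn move)
        print(wdl, dtz)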
replies(1): >>41880006 #
7373737373 ◴[] No.41880006[source]
Makes me wonder what % of games end with <=7 pieces
replies(1): >>41881275 #
adgjlsfhk1 ◴[] No.41881275[source]
At a high level, almost all of the ones that aren't draws. If you still have plenty of pieces in the right places, you shouldn't be checkmateable. Without blunders, wins occur when a small advantage is followed by slightly favorable trades into a winning endgame.
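
One way to get an empirical answer to the question above is to walk a PGN dump (for example the Lichess monthly exports) and count how many games finish with 7 or fewer pieces on the board; a rough sketch with python-chess, where the PGN path is a placeholder:

    import chess.pgn

    def fraction_ending_with_at_most(pgn_path, max_pieces=7):
        total = small = 0
        with open(pgn_path) as f:
            while True:
                game = chess.pgn.read_game(f)
                if game is None:
                    break
                final_board = game.end().board()  # position after the last mainline move
                total += 1
                if len(final_board.piece_map()) <= max_pieces:  # kings included
                    small += 1
        return small / total if total else 0.0

    print(fraction_ending_with_at_most("games.pgn"))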