
688 points | crescit_eundo | 1 comment
jrecursive | No.42142229
I think this has everything to do with the fact that learning chess by memorizing move sequences gets you into more trouble than good. Even a trillion games won't save you: https://en.wikipedia.org/wiki/Shannon_number
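Shannon's famous figure comes from a back-of-the-envelope estimate: roughly 30 legal moves per ply over a typical 40-move (80-ply) game. A minimal sketch of that arithmetic (the constants are Shannon's assumptions, not measured values):

```python
import math

# Shannon's rough assumptions for the game-tree complexity of chess:
BRANCHING = 30  # assumed average number of legal moves per ply
PLIES = 80      # a typical 40-move game, counted in half-moves

# Number of game continuations ~ BRANCHING ** PLIES; report its order of magnitude.
log10_games = PLIES * math.log10(BRANCHING)
print(f"~10^{log10_games:.0f} possible games")  # ~10^118, commonly rounded to 10^120
```

Either way the point stands: the space is astronomically larger than any conceivable training set of observed games.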

That said, for the sake of completeness: modern chess engines (with high-quality chess-specific models as part of their toolset) are fully capable of, at minimum, drawing against every player alive or dead, every time. If the opponent makes even one very small mistake, they will lose.

While writing this I idly wondered whether increasing Stockfish's skill level, perhaps to maximum, or at least to that of an 1800+ Elo player, would produce more successful games. Even then, it would only be because the narrower training data at that level (advanced players don't play trash moves) would probably get you more wins in your graph. That wouldn't indicate better play; it would just reflect less noise: fewer, more heavily reinforced known positions.
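For reference, the skill knob being discussed is Stockfish's UCI `Skill Level` option, an integer from 0 to 20 (20 is full strength). A session might look like the following fragment (the `go movetime` budget here is an arbitrary example value):

```
uci
setoption name Skill Level value 20
position startpos
go movetime 1000
```

Lower values deliberately inject weaker move choices, which is how the "more noise at lower levels" effect above would be realized in practice.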

replies(4): >>42142365 #>>42142427 #>>42142915 #>>42143959 #
1. astrea | No.42143959
Since we're mentioning Shannon: what is the minimum representative sample size for that problem space? Is it anywhere close to the number of chess moves freely available on the Internet and in books?