Currently there's a very interesting war between small neural networks on the CPU with high-depth alpha-beta search (Stockfish NNUE) and big neural networks on a GPU with Monte Carlo tree search at lower depth (Lc0).
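To make the deep-search half of that tradeoff concrete, here is a minimal alpha-beta pruning sketch over a toy game tree. This is my own illustrative code, not Stockfish's: real engines add move ordering, transposition tables, quiescence search, and much more.

```python
# Minimal alpha-beta pruning over an abstract game tree.
# `children(node)` returns successor nodes; `leaf_value(node)` scores a
# position from the maximizing player's point of view.
def alphabeta(node, depth, alpha, beta, maximizing, children, leaf_value):
    kids = children(node)
    if depth == 0 or not kids:
        return leaf_value(node)
    if maximizing:
        value = float("-inf")
        for child in kids:
            value = max(value, alphabeta(child, depth - 1, alpha, beta,
                                         False, children, leaf_value))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # beta cutoff: the minimizer will never allow this line
        return value
    else:
        value = float("inf")
        for child in kids:
            value = min(value, alphabeta(child, depth - 1, alpha, beta,
                                         True, children, leaf_value))
            beta = min(beta, value)
            if alpha >= beta:
                break  # alpha cutoff: the maximizer already has something better
        return value
```

The cutoffs are what let alpha-beta reach much greater depth than brute-force minimax on the same hardware.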
So, while machines beating humans is "solved", chess itself is very far from solved (just ask the people who have actually solved chess endgames with 8 or fewer pieces).
Even in human chess, people sometimes mistake draw frequency for evidence that both sides played optimally, but there are many games where a winning advantage slips away into a draw.
No computer now or in the foreseeable future will be capable of solving chess. It has an average branching factor over 30, and games can run over 100 moves.
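A quick back-of-the-envelope check on those figures (using 100 plies for simplicity; "100 moves" as move pairs would be even worse):

```python
# Rough game-tree size with branching factor ~30 and ~100 plies.
branching = 30
plies = 100
tree_size = branching ** plies
print(f"~10^{len(str(tree_size)) - 1} positions")  # roughly 10^147
```

For comparison, the number of atoms in the observable universe is usually estimated around 10^80, which is why exhaustive solution is off the table.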
NNUE already tries to distill a subtree eval into a neural net, but it’s optimized for CPU rather than GPU.
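The CPU-friendliness comes largely from NNUE's incrementally updated first layer. Here is a toy sketch of that idea; the feature encoding and dimensions are made up for illustration and are not Stockfish's real ones.

```python
import random

# Toy sketch of NNUE's central trick: the first-layer "accumulator" is
# updated incrementally when a piece moves, instead of recomputing every
# active input feature from scratch.
N_FEATURES = 768   # e.g. 12 piece types x 64 squares in a simplified encoding
HIDDEN = 16        # tiny hidden layer, just for the demo

random.seed(0)
W1 = [[random.uniform(-0.01, 0.01) for _ in range(HIDDEN)]
      for _ in range(N_FEATURES)]

def full_accumulator(active):
    """Recompute hidden pre-activations from scratch: O(len(active) * HIDDEN)."""
    acc = [0.0] * HIDDEN
    for f in active:
        for j in range(HIDDEN):
            acc[j] += W1[f][j]
    return acc

def update_accumulator(acc, removed, added):
    """After a move, touch only the changed features: O(changes * HIDDEN)."""
    acc = list(acc)
    for f in removed:
        for j in range(HIDDEN):
            acc[j] -= W1[f][j]
    for f in added:
        for j in range(HIDDEN):
            acc[j] += W1[f][j]
    return acc
```

Since a move changes only a handful of features, the update is far cheaper than a full forward pass, which suits a CPU doing millions of evaluations per second inside alpha-beta search.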
What you’re describing sounds like intuition with checking, which is pretty close to how humans of moderate skill play. I haven’t known enough chess or Go masters to make any claim about how they think. But most of us don’t want an opponent at that level, and if we did, we would certainly find a human, or just play against ourselves.
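"Intuition with checking" can be sketched as a two-stage loop: a cheap heuristic (standing in for learned intuition) shortlists a few candidate moves, then a shallow search verifies each one. The game below is a made-up number game, not chess; only the control flow is the point.

```python
# Toy "intuition with checking": shortlist candidates with a cheap score,
# then verify each with a shallow negamax-style search.
def moves(state):
    return [state * 2 + 1, state * 2 + 2, state * 2 + 3]  # three children per node

def cheap_eval(state):
    return (state * 2654435761) % 97  # arbitrary deterministic "intuition" score

def shallow_search(state, depth):
    """The 'checking' step: a small fixed-depth search."""
    if depth == 0:
        return cheap_eval(state)
    return max(-shallow_search(m, depth - 1) for m in moves(state))

def pick_move(state, k=2, depth=3):
    # Intuition: keep only the k most promising moves.
    candidates = sorted(moves(state), key=cheap_eval, reverse=True)[:k]
    # Checking: pick the candidate that survives a shallow search best.
    return max(candidates, key=lambda m: -shallow_search(m, depth - 1))
```

Pruning to a few candidates before searching is roughly what policy networks do for Lc0, and what humans seem to do when they only calculate two or three plausible moves per position.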
If you want a computer that plays like a human, you will probably need to imitate the way a human thinks about the game: for example, reasoning about the interactions between pieces and the flow of the game rather than making stateless evaluations.
Source?
https://arstechnica.com/information-technology/2023/02/man-b...
- The Leela open-source community had already used a transformer architecture to train Lc0 long before the paper (and published it, too!), and got much better results than DeepMind's new massive model
- The top engines with search (Stockfish NNUE, Lc0) beat DeepMind’s model by clear margins under normal competition conditions
- Speaking of efficiency, Stockfish NNUE can run on a commodity PC at only slightly lower Elo. AlphaZero and DeepMind’s new model cannot even run on such hardware to begin with.