365 points by lawrenceyan | 27 comments
1. joelthelion ◴[] No.41873554[source]
I wonder if you could creatively combine this model with search algorithms to advance the state of the art in computer chess? I wouldn't be surprised to see such a bot pop up on TCEC in a couple of years.
replies(3): >>41873666 #>>41873900 #>>41900388 #
2. alfalfasprout ◴[] No.41873666[source]
The thing is, classical chess (unlike, e.g., Go) is essentially "solved" when run on computers capable of extreme search depth. Modern chess engines play essentially flawlessly.
replies(5): >>41873728 #>>41873731 #>>41873743 #>>41873853 #>>41873911 #
3. KK7NIL ◴[] No.41873728[source]
The developers of Stockfish and lc0 (and of the many weaker engines around) would disagree; we've seen their strength improve considerably over the last few years.

Currently there's a very interesting war between small neural networks on the CPU with high-depth alpha-beta search (Stockfish NNUE) and big neural networks on a GPU with Monte Carlo tree search at lower depth (lc0).

So, while machines beating humans is "solved", chess is very far from solved (just ask the guys who have actually solved chess endgames with 8 or fewer pieces).

replies(2): >>41873849 #>>41880153 #
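
To make the CPU side of that trade-off concrete, here is a minimal alpha-beta pruning sketch in Python. `evaluate`, `position.legal_moves()`, and `position.play()` are hypothetical placeholders rather than any real engine's API; Stockfish layers move ordering, quiescence search, transposition tables, and much more on top of this skeleton.

    # Minimal alpha-beta search sketch (the Stockfish/NNUE side of the trade-off).
    # evaluate(), position.legal_moves() and position.play() are hypothetical
    # stand-ins, not any real engine's API.
    def alphabeta(position, depth, alpha, beta, maximizing):
        if depth == 0 or position.is_game_over():
            return evaluate(position)  # static eval, e.g. an NNUE-style network
        if maximizing:
            best = float("-inf")
            for move in position.legal_moves():
                best = max(best, alphabeta(position.play(move), depth - 1,
                                           alpha, beta, False))
                alpha = max(alpha, best)
                if alpha >= beta:      # beta cutoff: the opponent would avoid this line
                    break
            return best
        else:
            best = float("inf")
            for move in position.legal_moves():
                best = min(best, alphabeta(position.play(move), depth - 1,
                                           alpha, beta, True))
                beta = min(beta, best)
                if beta <= alpha:      # alpha cutoff
                    break
            return best

The lc0 side replaces this fixed-depth recursion with a Monte Carlo style tree search guided by a large policy/value network, trading raw depth for better move selection.
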
4. solveit ◴[] No.41873731[source]
We really have no way to know this. But I would be very surprised if modern chess engines didn't regularly blunder into losing positions (from the perspective of a hypothetical 32-piece tablebase), and very, very surprised if they converted tablebase-winning positions perfectly.
replies(4): >>41873753 #>>41874074 #>>41874713 #>>41877588 #
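
A 32-piece tablebase is purely hypothetical, but where tablebases do exist they give exactly this kind of perfect verdict. A small sketch using python-chess and the Syzygy endgame tablebases, assuming the tablebase files have been downloaded to a local ./syzygy directory (the path is an assumption for this example):

    # Probing a Syzygy endgame tablebase for a perfect win/draw/loss verdict.
    # Requires the python-chess package and Syzygy files in ./syzygy.
    import chess
    import chess.syzygy

    board = chess.Board("8/8/8/8/8/4k3/4p3/4K3 b - - 0 1")  # simple K+P vs K ending
    with chess.syzygy.open_tablebase("./syzygy") as tb:
        wdl = tb.probe_wdl(board)   # 2 = win, 0 = draw, -2 = loss (for the side to move)
        dtz = tb.probe_dtz(board)   # distance to a zeroing move under the 50-move rule
        print(wdl, dtz)

Public Syzygy tables only cover positions with 7 pieces or fewer, which is exactly why the 32-piece case remains a thought experiment.
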
5. __s ◴[] No.41873743[source]
Compared to humans, yes, but between themselves in TCEC progress continues. TCEC has engines play both sides of randomized openings rather than sticking to chess's initial position. The same happens in checkers among humans, where opening positions are randomized.
6. __s ◴[] No.41873753{3}[source]
not only blunder into losing positions, but also blunder from winning positions into draws

even in human chess people sometimes take the draw frequency to mean both sides played optimally, but there are many games where a winning advantage slips away into a draw

7. GaggiX ◴[] No.41873849{3}[source]
Stockfish and lc0 would always draw if they were not put in unbalanced starting positions; the starting position is then swapped for the next game to keep the match fair.
replies(1): >>41874064 #
8. janalsncm ◴[] No.41873853[source]
Chess is not "solved". Solved doesn't mean computers can beat humans; it means that for any chess position we can tell whether white wins, black wins, or the game is drawn with perfect play. We would know, for example, whether the starting position is a draw.

No computer now or in the foreseeable future will be capable of solving chess: it has an average branching factor of over 30, and games can run over 100 moves.

replies(1): >>41877582 #
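
The scale is easy to see with a back-of-the-envelope calculation: with ~30 legal moves per ply and even a modest 40-move (80-ply) game, the tree is already close to Shannon's classic 10^120 game-tree estimate.

    # Rough game-tree size implied by the numbers above (order of magnitude only).
    import math

    branching_factor = 30
    plies = 80                     # a 40-move game, counting both sides
    log10_nodes = plies * math.log10(branching_factor)
    print(f"roughly 10^{log10_nodes:.0f} nodes")   # ~10^118
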
9. janalsncm ◴[] No.41873900[source]
The advantage of this flavor of engine is that it might make parallel position evaluation extremely efficient. Generate 1024 leaf positions and batch them through the model, take the top 10%, and explore their sub-trees either via further GPU batching or a minimax eval.

NNUE already tries to distill a subtree eval into a neural net, but it’s optimized for CPU rather than GPU.

replies(1): >>41875457 #
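
A rough sketch of that batching idea, assuming a PyTorch-style value network; `model`, `encode`, and `expand` are hypothetical placeholders for the evaluation network, its input encoding, and the follow-up search, not anything from the paper:

    # Score a large batch of leaf positions in one GPU forward pass,
    # keep the best ~10%, and only expand those sub-trees further.
    import torch

    def evaluate_and_prune(leaves, model, keep_fraction=0.10):
        batch = torch.stack([encode(p) for p in leaves])   # e.g. a (1024, ...) tensor
        with torch.no_grad():
            scores = model(batch).squeeze(-1)              # one scalar eval per leaf
        k = max(1, int(len(leaves) * keep_fraction))
        _, top_idx = torch.topk(scores, k)                 # indices of the best ~10%
        return [leaves[i] for i in top_idx.tolist()]

    # survivors = evaluate_and_prune(leaf_positions, model)
    # for pos in survivors:
    #     expand(pos)   # deeper GPU batching or a minimax sub-search
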
10. primitivesuave ◴[] No.41873911[source]
This is accurate for endgames only. In complicated positions there is still room for improvement - the recent game of lc0 vs Stockfish where lc0 forced a draw against an impending checkmate is a good example. There is currently no way a chess engine searching a massive game tree can see how an innocuous pawn move enables a forced stalemate 40 moves down the line.
replies(1): >>41877605 #
11. KK7NIL ◴[] No.41874064{4}[source]
In classical time controls (which TCEC mainly uses), yes. They can play pretty exciting bullet chess without a forced opening, though.
12. janalsncm ◴[] No.41874074{3}[source]
The fact that TCEC games aren’t all draws suggests that computers aren’t perfect. Stockfish loses to Leela sometimes for example.
replies(2): >>41874621 #>>41877589 #
13. grumpopotamus ◴[] No.41874621{4}[source]
TCEC games are deliberately played from imbalanced opening positions. The draw rate would be much higher for the top participants if this weren't forced. However, I agree that engines are not perfect. I have heard this claim many times, only for a new engine to come along and show just how beatable the state-of-the-art engines of the time still were.
14. KK7NIL ◴[] No.41874713{3}[source]
We do know this: there are many positions (primarily sharp middlegame ones) where SF/lc0 will significantly change their evaluation as they go deeper. This problem gets better the more time they spend on one position, but it's an inevitable consequence of the horizon effect, and it's why (except for positions with 8 pieces or fewer) chess is far from solved.
replies(1): >>41877595 #
15. hinkley ◴[] No.41875457[source]
As a game player I want to play an opponent that behaves like a human. Otherwise I’m always looking for the flaw in the design that I can exploit, which wins me the game but is less fun.

What you're discussing sounds like intuition with checking, which is pretty close to how humans with a moderate degree of skill behave. I haven't known enough chess or Go masters to make any claim about how they think. But most of us don't want an opponent at that level, and if we did, we would certainly find a human, or just play against ourselves.

replies(1): >>41875947 #
16. salamo ◴[] No.41875947{3}[source]
The issue is that humans and computers don't evaluate board positions in the same way. A computer will analyze every possible move, and then every possible response to each of those moves, etc. Human grandmasters will typically only analyze a handful of candidate moves, and a few possible replies to those moves. This means human search is much narrower and shallower.

If you want a computer that plays like a human, you will probably need to imitate the way a human thinks about the game. This means, for example, thinking about the interactions between pieces and the flow of the game rather than making stateless evaluations.

replies(1): >>41876175 #
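
A sketch of that narrower, human-style search, assuming a hypothetical `suggest_candidates` policy that proposes a few "intuitive" moves and an `evaluate` scored from the side to move's perspective (neither is a real engine API):

    # Human-like search: look at only a handful of candidate moves per position,
    # to a shallow depth, instead of every legal move.
    # Negamax convention: evaluate() scores from the side to move's perspective.
    def human_like_search(position, depth, width=3):
        if depth == 0 or position.is_game_over():
            return evaluate(position)
        best = float("-inf")
        for move in suggest_candidates(position, limit=width):  # a few intuitive moves
            value = -human_like_search(position.play(move), depth - 1, width)
            best = max(best, value)
        return best
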
17. hinkley ◴[] No.41876175{4}[source]
Grandparent was suggesting the hybrid approach where you select a handful of good candidate positions and then explore them (DFS) as far as possible, which is pretty much how humans work.
18. Davidzheng ◴[] No.41877582{3}[source]
There's strongly solved and weakly solved; the latter only requires being unbeatable from the starting position. Now, it's definitely not probable that SF is unbeatable from the initial position, but honestly it's not impossible. The drawing margin is pretty big for engines.
19. Davidzheng ◴[] No.41877588{3}[source]
Of course they do, but the more interesting question for weak solving is whether they do so in mainline positions (like the mainline Berlin, mainline Petroff, etc.) where you can hold equality in many ways and engines are printing 0.0 everywhere.
20. Davidzheng ◴[] No.41877589{4}[source]
Most TCEC starting positions are borderline lost
21. Davidzheng ◴[] No.41877595{4}[source]
Far from strongly solved, but I would wager current SF will not lose half of its white games against any future engine.
replies(1): >>41889158 #
22. Davidzheng ◴[] No.41877605{3}[source]
Honestly, I would guess SF plays better in middlegame positions on average. I think there's usually a bigger drawing margin in middlegames.
replies(1): >>41884464 #
23. KolmogorovComp ◴[] No.41880153{3}[source]
> ask the guys who have actually solved chess endgames with 8 or fewer pieces

Source?

24. primitivesuave ◴[] No.41884464{4}[source]
I think you are correct; as I recall, there is a set of middlegame chess puzzles where Stockfish outperformed the other chess engines by a wide margin - I can't find the link, as it was years ago. I'm not sure how the state of the art has progressed since then. But I do believe the "horizon effect" plays a role in whether an engine decides to forcibly draw a game, which afforded Stockfish + NNUE a distinct advantage (at least at the time).
25. Kstarek ◴[] No.41889158{5}[source]
Hah, I disagree strongly tbh; there may be an adversarial engine that specifically finds positions that Stockfish doesn't evaluate well. They did the same to AlphaGo:

https://arstechnica.com/information-technology/2023/02/man-b...

replies(1): >>41892837 #
26. Davidzheng ◴[] No.41892837{6}[source]
True, that's a reasonable possibility. But in Go the top engines are far from perfect, whereas in chess it's not clear.
27. sinuhe69 ◴[] No.41900388[source]
Leela, the open-source model, already (and always) does that and is already much better than the new DeepMind model. No, neural networks are basically curve fitting. You can only do so much approximation without overfitting, and there are always positions different enough from the "mainstay" positions that a NN cannot learn them. DeepMind, as always, wants to impress the public by creating artificial conditions that show its product in a better light. But the reality is:

- The Leela open-source community had already used a transformer architecture to train Lc0 long before the paper (and published it, too!) and got a much better result than DeepMind's new, massive model

- The top engines with search (Stockfish NNUE, Lc0) beat DeepMind's model by clear margins under normal competition conditions

- Speaking of efficiency, Stockfish NNUE can run on a commodity PC with only a slightly lower Elo. AlphaZero or DeepMind's new model cannot even run on such hardware to begin with.