
688 points crescit_eundo
snickerbockers No.42144943
Does it ever try an illegal move? OP didn't mention this, and I think it's inevitable that it happens at least once, since the rules of chess are fairly arbitrary and LLMs are notorious for bullshitting their way through difficult problems when we'd rather they just admit that they don't have the answer.
sethherr No.42145004
Yes, he discusses using a grammar to restrict the output to only legal moves.
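[Editor's note: a minimal sketch of what grammar-style constrained sampling amounts to. In practice a grammar masks the model's logits token by token during decoding; here a toy version filters whole candidate moves against a legal-move set. All move names and scores are invented for illustration, not taken from the OP's setup.]

```python
import random

# Toy "model" scores for candidate moves (made-up numbers standing in
# for the LLM's unnormalized preferences).
model_scores = {"e2e4": 2.1, "d2d4": 1.8, "e2e5": 0.9, "a1a5": 0.4}

# Legal moves for the current position (in a real setup these come from
# a move generator such as python-chess; hard-coded here).
legal_moves = {"e2e4", "d2d4", "g1f3"}

def sample_constrained(scores, legal):
    """Zero out probability mass on illegal moves, then sample
    proportionally from what remains -- the effect a grammar has."""
    allowed = {m: s for m, s in scores.items() if m in legal}
    r = random.uniform(0, sum(allowed.values()))
    for move, s in allowed.items():
        r -= s
        if r <= 0:
            return move
    return move  # guard against floating-point edge cases

move = sample_constrained(model_scores, legal_moves)
print(move)  # always one of the legal candidates
```

Note that the constrained sample can only ever be a move that is both proposed by the model and legal; the grammar never invents moves, it only prunes.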
topaz0 No.42147380
Still an interesting direction of questioning. Maybe it could be rephrased as "how much work is the grammar doing?" Are the results with the grammar very different from those without? If/when a grammar is not used (as in the OpenAI case), how many illegal moves does it try on average before finding a legal one?
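[Editor's note: a sketch of how that last question could be measured. Without a grammar, proposals are sampled freely and rejected until one is legal; the count of rejections per move is the statistic asked about. The candidate pool and legality check are invented for illustration.]

```python
import random

def attempts_until_legal(sample_move, is_legal, max_tries=100):
    """Count how many illegal proposals precede the first legal one."""
    for i in range(max_tries):
        if is_legal(sample_move()):
            return i  # i illegal attempts came before this success
    return max_tries

random.seed(0)
candidates = ["e2e4", "e9e4", "d2d4", "h0h8"]  # mix of legal and illegal
legal = {"e2e4", "d2d4"}

# Repeat the experiment many times and average, as the comment suggests.
trials = [attempts_until_legal(lambda: random.choice(candidates),
                               lambda m: m in legal)
          for _ in range(1000)]
print(sum(trials) / len(trials))  # mean illegal attempts per move
```

With half the pool legal, the mean rejection count follows a geometric distribution and lands near 1; a model that proposes legal moves more often would push it toward 0.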
gs17 No.42151815
I'd be more interested in what the distribution of grammar-restricted predictions looks like compared to moves Stockfish says are good.
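[Editor's note: one hedged way to make that comparison concrete. Engine evaluations can be turned into a reference distribution with a softmax over centipawn scores, and the model's grammar-restricted distribution compared to it with KL divergence. All evaluations, probabilities, and the temperature are invented for illustration.]

```python
import math

def softmax(scores, temp=100.0):
    """Turn centipawn-style scores into a probability distribution.
    The temperature controlling sharpness is an arbitrary choice."""
    exps = [math.exp(s / temp) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def kl(p, q):
    """KL(p || q) in nats; assumes matching supports and no zeros in q."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

moves = ["e2e4", "d2d4", "g1f3"]
engine_cp = [35, 30, 20]       # made-up engine evals in centipawns
model_probs = [0.6, 0.3, 0.1]  # made-up model move distribution

engine_probs = softmax(engine_cp)
print(round(kl(model_probs, engine_probs), 4))
```

A divergence near zero would mean the grammar-restricted model ranks moves much like the engine does; a large value would mean the grammar is merely keeping it legal, not good.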