
688 points by crescit_eundo
underlines No.42142255
Can you try increasing compute in the problem search space, not in the training space? What this means is: give the model more compute to think during inference by not forcing it to "only output the answer in algebraic notation", but instead doing CoT prompting: "1. Think about the current board 2. Think about valid possible next moves and choose the 3 best by thinking ahead 3. Make your move"

Or whatever you deem a good step-by-step instruction for what an actually good beginner chess player might do.
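
A rough sketch of what that could look like, assuming an OpenAI-style chat completions client (the model name, system prompt, and the "Move:" parsing convention are all placeholders):

    from openai import OpenAI  # assumes the OpenAI Python client; any chat API would do

    client = OpenAI()

    COT_STEPS = (
        "1. Think about the current board and describe it in your own words.\n"
        "2. Think about valid possible next moves and choose the 3 best by thinking ahead.\n"
        "3. Make your move, ending your reply with one line of the form 'Move: <SAN>'."
    )

    def next_move(moves_so_far: str) -> str:
        # moves_so_far: the game so far in SAN, e.g. "1. e4 e5 2. Nf3 Nc6"
        resp = client.chat.completions.create(
            model="gpt-4o-mini",   # placeholder model name
            temperature=0.3,       # one of the knobs to sweep later
            messages=[
                {"role": "system", "content": "You are a careful beginner chess player."},
                {"role": "user", "content": f"Game so far: {moves_so_far}\n\n{COT_STEPS}"},
            ],
        )
        text = resp.choices[0].message.content
        # let the model think out loud, then parse only the final 'Move:' line
        move_lines = [line for line in text.splitlines() if line.startswith("Move:")]
        return move_lines[-1].removeprefix("Move:").strip() if move_lines else text.strip()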

Then try different notations, prompt variations, temperatures, and other parameters. All of that needs to go into your hyperparameter tuning.
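
The sweep itself can be as dumb as a grid, something like this sketch (play_game is a hypothetical helper you'd write on top of next_move() above, e.g. using python-chess for legality checks and a fixed engine as the opponent):

    import itertools

    notations = ["SAN", "UCI", "FEN-before-each-move"]
    prompt_styles = ["bare-answer", "cot-3-candidates", "cot-with-board-drawing"]
    temperatures = [0.0, 0.3, 0.7]

    results = {}
    for notation, style, temp in itertools.product(notations, prompt_styles, temperatures):
        # play_game is hypothetical: plays one full game against a fixed opponent
        # and returns a score in [0, 1] (loss/draw/win or an engine eval)
        scores = [play_game(style, notation, temp) for _ in range(20)]
        results[(notation, style, temp)] = sum(scores) / len(scores)

    best = max(results, key=results.get)
    print("best config:", best, "avg score:", results[best])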

One could try using DSPy for automatic prompt optimization.
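
A rough sketch of the DSPy route (the exact API moves around between versions, so treat the names as approximate; the metric and training examples are made up):

    import dspy
    from dspy.teleprompt import BootstrapFewShot

    dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))  # placeholder model

    # ChainOfThought generates reasoning before filling in best_move
    play = dspy.ChainOfThought("board_so_far -> best_move")

    def move_metric(example, pred, trace=None):
        # made-up metric: credit only exact agreement with a reference move;
        # a real one might check legality with python-chess or ask an engine
        return float(pred.best_move.strip() == example.best_move)

    trainset = [
        dspy.Example(board_so_far="1. e4 e5 2. Nf3", best_move="Nc6").with_inputs("board_so_far"),
        # ... more labelled positions
    ]

    optimizer = BootstrapFewShot(metric=move_metric)
    optimized_play = optimizer.compile(play, trainset=trainset)
    print(optimized_play(board_so_far="1. e4 e5 2. Nf3").best_move)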

replies(2): >>42142533, >>42143035
pavel_lishin No.42143035
> 1. Think about the current board 2. Think about valid possible next moves and choose the 3 best by thinking ahead 3.

Do these models actually think about a board? Chess engines do, as much as we can say that any machine thinks. But do LLMs?

replies(1): >>42143281
TZubiri No.42143281
It can be forced at inference with CoT-type prompting. Spend tokens at each stage to draw the board, for example, then spend tokens restating the rules of the game, then spend tokens restating heuristics like piece value, and then spend tokens doing a minimax n-ply search.

Wildly inefficient? Probably. Could it maybe generate some python to make this more efficient? Maybe, yeah.
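
For the "generate some python" part, the kind of thing you'd hope it writes is a toy material-only minimax on top of python-chess, something like this (a sketch, not a real engine: no checkmate scoring, no pruning):

    import chess

    PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                    chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

    def material(board):
        # piece-value heuristic, from the perspective of the side to move
        score = 0
        for piece in board.piece_map().values():
            value = PIECE_VALUES[piece.piece_type]
            score += value if piece.color == board.turn else -value
        return score

    def negamax(board, depth):
        # plain n-ply minimax in negamax form, no pruning
        if depth == 0 or board.is_game_over():
            return material(board), None
        best_score, best_move = -float("inf"), None
        for move in board.legal_moves:
            board.push(move)
            score = -negamax(board, depth - 1)[0]
            board.pop()
            if score > best_score:
                best_score, best_move = score, move
        return best_score, best_move

    board = chess.Board()
    print(board)                        # the "draw the board" step, for free
    _, move = negamax(board, depth=2)   # 2-ply lookahead on piece value alone
    print("suggested move:", board.san(move))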

Essentially the user would have to teach GPT to play chess through the prompt, or training would have to fine-tune the model toward these kinds of CoT traces, etc...