
204 points | warrenm | 2 comments
AnotherGoodName No.45106653
I’ve been working on board game AI lately.

FWIW, nothing beats "implement the game logic in full (a huge amount of work) and, with pruning on some heuristics, look 50 moves ahead". This is how chess engines work and how all good turn-based game AI works.
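
A rough sketch of what I mean — legal_moves, apply_move and evaluate here are placeholders for the hand-built game model and heuristics, not any real library:

  # Pruned lookahead (alpha-beta) sketch. legal_moves, apply_move and evaluate
  # are assumed to come from the hand-built game model and trained heuristics.
  def search(state, depth, alpha=float("-inf"), beta=float("inf"), maximizing=True):
      moves = legal_moves(state)
      if depth == 0 or not moves:
          return evaluate(state), None          # heuristic score at the leaf
      best_move = None
      for move in moves:
          score, _ = search(apply_move(state, move), depth - 1,
                            alpha, beta, not maximizing)
          if maximizing and score > alpha:
              alpha, best_move = score, move
          elif not maximizing and score < beta:
              beta, best_move = score, move
          if beta <= alpha:                     # prune: the opponent won't allow this line
              break
      return (alpha if maximizing else beta), best_move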

I’ve tried throwing masses of game state data at the latest models in PyTorch. Unusable. It makes really dumb moves. In fact, one big issue is that it often suggests invalid moves, and the best way to avoid this is to implement the board game logic in full to validate them. At which point, why don’t I just do the above and scan ahead X moves, since I have to do the hard part of manually building the world model anyway?
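
The validation step ends up looking something like this sketch — legal_moves is again the hypothetical hand-built game model:

  # Guarding against invalid suggestions: the only reliable check is the
  # full game model itself (legal_moves is the hand-built placeholder).
  def pick_valid_move(state, suggested_moves):
      legal = list(legal_moves(state))
      for move in suggested_moves:              # model's suggestions, best first
          if move in legal:
              return move
      return legal[0] if legal else None        # fall back to any legal move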

One area where current AI does help is with the heuristics themselves, used to evaluate the best moves when scanning ahead. You can feed in various game states, together with whether the player eventually won the game, to train the values of the heuristics. You still need to implement the world model and the look-ahead to use those heuristics, though! When you hear of neural networks being used for Go or chess, this is where they are used. You still need to build the world model and brute-force scan ahead.
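
Concretely, training the heuristic is just supervised learning on outcomes; something like the sketch below, where N_FEATURES and the dataloader are placeholders for whatever state encoding and data you have:

  # Sketch: fit a small value net on (encoded state, eventual win?) pairs and
  # use it as the leaf evaluator in the lookahead. N_FEATURES and dataloader
  # are placeholders for your own state encoding and training data.
  import torch
  import torch.nn as nn

  N_FEATURES = 256                              # size of the state encoding (placeholder)
  value_net = nn.Sequential(nn.Linear(N_FEATURES, 128), nn.ReLU(), nn.Linear(128, 1))
  opt = torch.optim.Adam(value_net.parameters(), lr=1e-3)
  loss_fn = nn.BCEWithLogitsLoss()

  for states, outcomes in dataloader:           # outcomes: 1.0 = eventual win, 0.0 = loss
      opt.zero_grad()
      loss = loss_fn(value_net(states).squeeze(-1), outcomes)
      loss.backward()
      opt.step()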

One path I do want to try more: in theory, coding assistants should be able to read rulebooks and dynamically generate code to represent those rules. If you can do that part, the rest should be easy. I.e. it could be possible to throw rulebooks at AI and have it play the game. It would generate a world model from the rulebook via coding assistants and scan ahead more moves than humanly possible using that world model, evaluating against heuristics that would need to be trained through trial and error.

Of course, coding assistants aren’t at a point where you can throw rulebooks at them to generate an internal representation of game states. I should know: I just spent weeks building the game model, even with a coding assistant.

PeterStuer No.45112651
"Elephants don't play chess" ;)

You have a tiny, completely known, deterministic, rule-based 'world'. 'Reasoning' forward over that is trivial.

Now try your approach on much fuzzier, incompletely and ill-defined environments, e.g. natural language production, and watch it go down in flames.

Different problems need different solutions. While current frontier LLMs show surprising results in emergent shallow, linguistic reasoning, they are far away from deep abstract logical reasoning. A state-of-the-art theorem prover, on the other hand, can excel at that, but can still struggle to produce a coherent sentence.

I think most have always agreed that for certain tasks, an abstraction over which one can 'reason' is required. People differ in opinion over whether this faculty has to be 'crafted' in, or whether it is possible to have it emerge implicitly, and more robustly, from observations and interactions.

https://people.csail.mit.edu/brooks/papers/elephants.pdf

1. AnotherGoodName No.45116636
What seems bizarre, though, is that the language problem was fully solved first (where "fully solved" means the AI can learn it through pure observation, with no human intervention at all).

As in, language today is learned by basically throwing raw data at an LLM. Board games such as chess still require a human to manually build a world model for the state space search to work on. They are indeed totally different problems, but it's still shocking to me which one was fully solved first.

2. tim333 No.45118645
>Board games such as chess still require a human to manually build a world model for the state space search to work on

That's not so. DeepMind's MuZero can learn most board games without even being told the rules.