Playing a game is closely tied to holding an abstract representation of it as game states. Even if the player doesn't realize it, playing chess is really a shallow or beam search over the possible moves.
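Roughly what that looks like, as a minimal sketch using the python-chess package and a naive material count for the evaluation (the depth and the evaluation function here are arbitrary placeholders for illustration, not what a real engine does):

```python
import chess

# Crude piece values; real engines use far richer evaluations.
PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

def material(board: chess.Board) -> int:
    """Material balance from White's point of view."""
    score = 0
    for piece in board.piece_map().values():
        value = PIECE_VALUES[piece.piece_type]
        score += value if piece.color == chess.WHITE else -value
    return score

def shallow_search(board: chess.Board, depth: int = 2) -> float:
    """Plain depth-limited minimax over the tree of game states."""
    if depth == 0 or board.is_game_over():
        return material(board)
    maximizing = board.turn == chess.WHITE
    best = -float("inf") if maximizing else float("inf")
    for move in board.legal_moves:
        board.push(move)
        score = shallow_search(board, depth - 1)
        board.pop()
        best = max(best, score) if maximizing else min(best, score)
    return best

if __name__ == "__main__":
    board = chess.Board()
    # Score each legal first move by looking one ply deeper.
    scores = {}
    for move in board.legal_moves:
        board.push(move)
        scores[move.uci()] = shallow_search(board, depth=1)
        board.pop()
    # All first moves tie at 0 material; the point is the shape of the search.
    print(max(scores, key=scores.get))
```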
LLMs don't do reasoning or exploration; they generate text based on previous text. So to us it may look like playing, but it's really smart guesswork based on previous games. It's like Kasparov writing down moves without ever imagining the actual position on the board.
What would be interesting is to see whether a model given only the rules would play at all. I bet it wouldn't.
At the moment it's replaying from memory, definitely not pursuing goals. There's no such thing as forward attention yet, and beam search is expensive enough that one would rather fall back to classic chess algorithms.
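For the sake of illustration, a beam-search variant of the sketch above (reusing its material() evaluation); the beam width of 5 is arbitrary, and the point is only that even aggressive pruning of ~35 legal moves per ply adds up quickly compared to handing the position to a purpose-built engine:

```python
def beam_search(board: chess.Board, depth: int = 3, beam: int = 5) -> float:
    """Beam search: expand only the `beam` best-looking moves at each ply.

    Cheaper than full minimax (branching ~beam instead of ~35), but it can
    prune away the actual best line, which is one reason to prefer a classic
    engine over bolting search onto a language model.
    """
    if depth == 0 or board.is_game_over():
        return material(board)
    maximizing = board.turn == chess.WHITE

    def static(move: chess.Move) -> int:
        # One-move lookahead used only to rank candidates for the beam.
        board.push(move)
        s = material(board)
        board.pop()
        return s

    candidates = sorted(board.legal_moves, key=static, reverse=maximizing)[:beam]
    best = -float("inf") if maximizing else float("inf")
    for move in candidates:
        board.push(move)
        score = beam_search(board, depth - 1, beam)
        board.pop()
        best = max(best, score) if maximizing else min(best, score)
    return best
```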