I feel like the article neglects one obvious possibility: that OpenAI decided chess was a benchmark worth "winning", special-cased chess within gpt-3.5-turbo-instruct, and then neglected to carry that special case over to follow-up models since it wasn't generating sustained press coverage.
This seems quite likely to me. But did they special-case it by reinforcement-training it into the LLM (which would be extremely interesting, both in how they did it and in what the internal representation looks like), or is it that when you make an API call to OpenAI, the machine on the other end is not just a zillion-parameter LLM but also runs an instance of Stockfish?
That's easy to test: invent a new chess variant and see how the model does.
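A minimal sketch of that probe, assuming the official `openai` Python client (v1+) and using Chess960 as the "variant" (the FEN and prompt framing here are illustrative, not something OpenAI documents):

```python
import os

# A Chess960 (Fischer-random) starting position. Standard opening knowledge
# doesn't transfer, and a naively wired-up engine backend configured for
# regular chess would likely choke on it or play illegal moves.
CHESS960_FEN = "bqnbnrkr/pppppppp/8/8/8/8/PPPPPPPP/BQNBNRKR w KQkq - 0 1"

def build_probe_prompt(fen: str) -> str:
    """Frame the position as a game-continuation task, the PGN-style
    completion format reportedly used with gpt-3.5-turbo-instruct."""
    return (
        "The following is a Chess960 game starting from FEN "
        f"{fen}\n"
        "1."
    )

prompt = build_probe_prompt(CHESS960_FEN)

# Only hit the API if a key is configured; the probe itself is just the prompt.
if os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI
    client = OpenAI()
    resp = client.completions.create(
        model="gpt-3.5-turbo-instruct", prompt=prompt, max_tokens=5
    )
    print(resp.choices[0].text)
```

One would then check the returned moves for legality in the variant position; sustained legal, reasonable play would point toward genuine generalization rather than a canned hookup.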
Both an LLM and Stockfish would fail that test.
Nobody is claiming that Stockfish is learning generalizable concepts that can one day meaningfully replace people in value-creating work.
The point was that such a test could not be used to tell whether the LLM was calling a chess engine.