Suggesting otherwise is intellectually on the same level as trying to make up for a small, consistent per-sale loss with volume.
Prolog is actually pretty difficult to do right, even if you are skilled; it requires real reasoning. You don't just write out facts and have the system do the work. And many of the examples in the training set will be wrong, simplistic by nature, or full of backtracking that is itself difficult for a person to comprehend at a glance; why should an LLM be any better at it? There can't even be that much Prolog in the training data to begin with.
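To be concrete, here is a minimal toy sketch of what I mean (the edge/path program is just illustrative, not from any particular dataset): the rules read as a perfectly reasonable declarative definition of reachability, yet one of the queries never terminates, and seeing why means mentally replaying the backtracking.

    % A tiny graph and a "just write the facts" reachability rule.
    edge(a, b).
    edge(b, c).

    path(X, Y) :- edge(X, Y).
    path(X, Y) :- path(X, Z), edge(Z, Y).   % recursive call comes first

    % ?- path(a, c).   succeeds as expected.
    % ?- path(a, z).   loops forever instead of failing, because the
    %                  recursive clause keeps re-deriving path(a, _)
    %                  before it ever runs out of edges to try.

Reorder the second body to edge(X, Z), path(Z, Y) and the failing query terminates. Knowing which ordering is safe, and why, is exactly the kind of operational reasoning that "just write the facts" glosses over.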
Ultimately, though: stop believing in magical solutions to fundamental problems. This is nuts.
Or it becomes something people can write papers about and then chase reproducibility on afterwards, while the shell game of claiming LLM reasoning continues.