
Use Prolog to improve LLM's reasoning

(shchegrikovich.substack.com)
379 points | shchegrikovich | 4 comments
1. sgdfhijfgsdfgds No.41879964
This is magical thinking. If an LLM can’t reason it isn’t going to be able to express itself clearly in Prolog.

Suggesting otherwise is intellectually on the same level as trying to make up a small consistent per-sale loss with volume.

replies(1): >>41880690 #
2. sgdfhijfgsdfgds No.41880690
I know we're not supposed to comment on downvotes, but I really question the logic of anyone who thinks a thing that cannot reason can write a Prolog program that will be much more successful.

Prolog is actually pretty difficult to do right, even if you are skilled. It actually requires reasoning. You don't just write out facts and have the system do the work. And many of the examples in the training set will be wrong, naturally simplistic, or full of backtracking that is itself difficult for a person to comprehend at a glance; why should an LLM be better at it? There can't even be that much Prolog in the training set.
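A minimal sketch of the kind of pitfall meant here (a standard textbook example, not from the linked article): two logically equivalent formulations of graph reachability, where nothing but clause ordering separates an answer from an infinite loop under Prolog's depth-first search.

```prolog
% A tiny graph.
edge(a, b).
edge(b, c).

% Terminates on ?- path(a, c): the non-recursive clause is tried first,
% and the recursive call always shrinks the remaining search.
path(X, Y) :- edge(X, Y).
path(X, Y) :- edge(X, Z), path(Z, Y).

% Declaratively the same relation, but left-recursive: the query
% ?- path2(a, c) immediately recurses into path2(a, Z) with no
% progress, and depth-first search never returns.
path2(X, Y) :- path2(X, Z), edge(Z, Y).
path2(X, Y) :- edge(X, Y).
```

Both versions are "correct" as logic; only one is correct as a program. Getting that right requires reasoning about the execution strategy, which is exactly the point being argued.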

Ultimately, though: stop believing in magical solutions to fundamental problems. This is nuts.

replies(1): >>41880757 #
3. shchegrikovich No.41880757
I have another example - only a few people believed that you could apply 'a simple next-token prediction algorithm' and achieve what we now know as LLMs. From my perspective, in the past few years we've tried a lot of different approaches to improve LLM reasoning; some of them were good, others not so good. We need to keep trying and researching. 'Prolog + LLM' is not the answer to all questions, but it looks like a good step to move us forward.
replies(1): >>41880879 #
4. sgdfhijfgsdfgds No.41880879
> 'Prolog + LLM' is not the answer to all questions, but it looks like a good step to move us forward.

Or it's a thing people can write papers about, and chase reproducibility on afterwards, as the shell game of claiming LLM reasoning continues.