
Use Prolog to improve LLM's reasoning

(shchegrikovich.substack.com)
379 points shchegrikovich | 2 comments
a1j9o94 ◴[] No.41873281[source]
I tried an experiment with this using a Prolog interpreter with GPT-4 to try to answer complex logic questions. I found that it was really difficult because the model didn't seem to know Prolog well enough to write a description of any complexity.

It seems like you used an interpreter in the loop, which is likely to help. I'd also be interested to see how o1 would do on a task like this, or whether it even makes sense to use something like Prolog if the models can backtrack during the "thinking" phase.

replies(2): >>41873561 #>>41873700 #
hendler ◴[] No.41873700[source]
I also wrote an LLM-to-Prolog interpreter for a hackathon, called "Logical". With a few hours' effort I'm sure it could be improved.

https://github.com/Hendler/logical

I think that while LLMs may approach completeness here, it's good to have an interpretable system to audit, verify, and reproduce results.
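The auditability point can be made concrete: a Prolog-style engine derives every answer mechanically from a fixed fact/rule base, so any result can be replayed and inspected, unlike free-form model "thinking". Below is a minimal resolution engine in Python — a sketch of the general technique, not code from the Logical repo; all names (`unify`, `solve`, the ancestor example) are mine.

```python
# A tiny Prolog-style engine: unification plus depth-first SLD resolution
# with backtracking. Variables are strings starting with an uppercase
# letter; terms are tuples like ("parent", "alice", "bob").
import itertools

_fresh = itertools.count()

def walk(t, env):
    # Follow variable bindings to their final value.
    while isinstance(t, str) and t in env:
        t = env[t]
    return t

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def unify(a, b, env):
    """Return an extended binding dict, or None if the terms clash."""
    a, b = walk(a, env), walk(b, env)
    if a == b:
        return env
    if is_var(a):
        return {**env, a: b}
    if is_var(b):
        return {**env, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            env = unify(x, y, env)
            if env is None:
                return None
        return env
    return None

def rename(t, suffix):
    # Give rule variables fresh names so reusing a rule can't clash.
    if isinstance(t, tuple):
        return tuple(rename(x, suffix) for x in t)
    return t + suffix if is_var(t) else t

def solve(goals, rules, env):
    """Depth-first resolution with backtracking, yielding one binding
    environment per proof (and, like plain Prolog, it can loop forever
    on left-recursive rules)."""
    if not goals:
        yield env
        return
    goal, rest = goals[0], goals[1:]
    for head, body in rules:
        s = f"_{next(_fresh)}"
        env2 = unify(goal, rename(head, s), env)
        if env2 is not None:
            yield from solve([rename(g, s) for g in body] + rest, rules, env2)

# Facts and rules: ancestor/2 over parent/2.
rules = [
    (("parent", "alice", "bob"), []),
    (("parent", "bob", "carol"), []),
    (("ancestor", "X", "Y"), [("parent", "X", "Y")]),
    (("ancestor", "X", "Y"), [("parent", "X", "Z"), ("ancestor", "Z", "Y")]),
]
# Query: who are alice's descendants? Backtracking finds both the direct
# and the transitive answer.
answers = [walk("Who", e)
           for e in solve([("ancestor", "alice", "Who")], rules, {})]
```

Because each answer comes with a chain of rule applications, a failing or surprising result can be traced clause by clause — which is the verification property the comment is pointing at.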

replies(1): >>41876898 #
1. shchegrikovich ◴[] No.41876898[source]
This is really cool!
replies(1): >>41909074 #
2. hendler ◴[] No.41909074[source]
Thanks! Feel free to reach out.