
Learn Prolog Now

(lpn.swi-prolog.org)
207 points | rramadass
disambiguation (No.45902807)
I am once again shilling the idea that someone should find a way to glue Prolog and LLMs together for better reasoning agents.

https://news.ycombinator.com/context?id=43948657

Thesis:

1. LLMs are bad at counting the number of r's in strawberry.

2. LLMs are good at writing code that counts letters in a string.

3. LLMs are bad at solving reasoning problems.

4. Prolog is good at solving reasoning problems.

5. ???

6. LLMs are good at writing Prolog that solves reasoning problems.
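Points 1 and 2 above can be illustrated with a toy sketch: rather than asking a model to count letters directly, have it emit code and execute that. The `llm_generated` string below stands in for hypothetical model output; no real LLM call is involved.

```python
# Hypothetical LLM output: trivial letter-counting code.
llm_generated = (
    "def count_letter(word, letter):\n"
    "    return word.count(letter)\n"
)

# Execute the model-written code and call the resulting function.
namespace = {}
exec(llm_generated, namespace)
print(namespace["count_letter"]("strawberry", "r"))  # -> 3
```

The same pattern generalizes: swap the generated Python for generated Prolog and the `exec` for a call into a Prolog engine.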

Common replies:

1. The bitter lesson.

2. There are better solvers, ex. Z3.

3. Someone smart must have already tried and ruled it out.

Successful experiments:

1. https://quantumprolog.sgml.net/llm-demo/part1.html

replies: 14

nextos (No.45903376)
We've done this, and it works. Our setup is to have some agents that synthesize Prolog and other types of symbolic and/or probabilistic models. We then use these models to increase our confidence in LLM reasoning and iterate if there is some mismatch. Making synthesis work reliably on a massive set of queries is tricky, though.
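The verify-and-iterate loop described above can be sketched as follows. This is a minimal illustration, not the commenter's actual system: `llm_answer`, `synthesize_model`, and `run_prolog` are hypothetical stubs standing in for an LLM call, a model-synthesis agent, and a Prolog engine (e.g. SWI-Prolog invoked via subprocess).

```python
def llm_answer(question):
    # Stub: the LLM's direct, free-form answer.
    return "yes"

def synthesize_model(question, attempt):
    # Stub: an agent would emit a Prolog program encoding
    # the question's logical structure.
    return f"% Prolog model for {question!r}, attempt {attempt}"

def run_prolog(program):
    # Stub: execute the program in a Prolog engine and
    # parse the answer it derives.
    return "yes"

def answer_with_check(question, max_iters=3):
    """Cross-check the LLM's answer against a synthesized symbolic model."""
    symbolic = None
    for attempt in range(max_iters):
        direct = llm_answer(question)
        symbolic = run_prolog(synthesize_model(question, attempt))
        if direct == symbolic:
            # Agreement between the two routes raises confidence.
            return direct
        # Mismatch: resynthesize the model (or re-prompt) and retry.
    return symbolic  # fall back to the symbolic result

print(answer_with_check("Is Socrates mortal?"))  # -> yes
```

The mismatch branch is where the hard part lives: making synthesis reliable enough that a disagreement signals an LLM error rather than a broken model.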

Imagine a medical doctor or a lawyer. At the end of the day, their entire reasoning process can be abstracted into some probabilistic logic program which they synthesize on-the-fly using prior knowledge, access to their domain-specific literature, and observed case evidence.

There is a growing body of work exploring various aspects of synthesis; the references included in [1] are a good starting point.

[1] https://proceedings.neurips.cc/paper_files/paper/2024/file/8...