
Use Prolog to improve LLM's reasoning

(shchegrikovich.substack.com)
232 points by shchegrikovich | 5 comments
z5h ◴[] No.41873798[source]
I've come to appreciate, over the past 2 years of heavy Prolog use, that all coding should (eventually) be done in Prolog.

It's one of the few languages that is simultaneously a standalone logical formalism and a standalone representation of computation. (With caveats and exceptions, I know.) So a Prolog program can stand in as a document of all facts, rules and relations that a person/organization understands/declares to be true. Even if AI writes code for us, we should expect to have it presented and manipulated as a logical formalism.
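
To make that concrete, here's a minimal sketch of what such a "document of facts and rules" could look like (the predicates and facts are invented for illustration, not taken from any real codebase):

  % Facts a person/organization declares to be true.
  employee(alice).
  employee(bob).
  manages(alice, bob).

  % A rule: X may approve Y's expenses if X manages Y, directly or transitively.
  approves(X, Y) :- manages(X, Y).
  approves(X, Y) :- manages(X, Z), approves(Z, Y).

  % The same text is both a logical theory and an executable program:
  % ?- approves(alice, bob).
  % true.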

Now if someone cares to argue that some other language/compiler is better at generating more performant code on certain architectures, then that person can declare their arguments in a logical formalism (Prolog) and we can use Prolog to translate between language representations, compile, optimize, etc.
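
For instance (purely a sketch, with hypothetical term names), code from another language can be held as Prolog terms, with an optimization pass declared as rewrite rules:

  % Simplify arithmetic expressions represented as terms.
  simplify(plus(E, num(0)), S) :- !, simplify(E, S).
  simplify(plus(num(0), E), S) :- !, simplify(E, S).
  simplify(times(_, num(0)), num(0)) :- !.
  simplify(E, E).

  % ?- simplify(plus(num(0), plus(x, num(0))), S).
  % S = x.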

replies(7): >>41874164 #>>41874229 #>>41874594 #>>41874985 #>>41875196 #>>41875236 #>>41876524 #
1. larodi ◴[] No.41874985[source]
Been shouting here and in many other places for quite a while that CoT and similar techniques eventually lead to logic programming. So happy I’m not crazy.
replies(3): >>41875037 #>>41875543 #>>41876509 #
2. burntcaramel ◴[] No.41875037[source]
CoT = Chain-of-Thought

https://arxiv.org/abs/2201.11903

3. bbor ◴[] No.41875543[source]
You’re in good company — the most influential AI academic of all time, the kooky grandfather of AI who picked up right where (when!) Turing left off, the man hated by both camps yet somehow in charge of them, agrees with you. I’m talking about Marvin Minsky, of course. See: Logical vs. Analogical (Minsky, 1991) https://ojs.aaai.org/aimagazine/index.php/aimagazine/article...

  …the limitations of current machine intelligence largely stem from seeking unified theories or trying to repair the deficiencies of theoretically neat but conceptually impoverished ideological positions. 
  Our purely numeric connectionist networks are inherently deficient in abilities to reason well; our purely symbolic logical systems are inherently deficient in abilities to represent the all-important heuristic connections between things—the uncertain, approximate, and analogical links that we need for making new hypotheses. The versatility that we need can be found only in larger-scale architectures that can exploit and manage the advantages of several types of representations at the same time.
  Then, each can be used to overcome the deficiencies of the others. To accomplish this task, each formally neat type of knowledge representation or inference must be complemented with some scruffier kind of machinery that can embody the heuristic connections between the knowledge itself and what we hope to do with it.
He phrases it backwards here in comparison to what you’re talking about (probably because no one in their right mind would have predicted the feasibility of LLMs), but I think the parallel argument should be clear. Talking about “human reasoning” like Simon & Newell or LeCun & Hinton do in terms of one single paradigm is like talking about “human neurons”. There’s tons of different neuronal architectures at play in our brains, and only through the ad-hoc minimally-centralized combination of all of them do we find success.

Personally, I’m a big booster of the term Unified Artificial Intelligence (UAI) for this paradigm; isn’t it fetch? ;)

replies(1): >>41875691 #
4. dleink ◴[] No.41875691[source]
Just throwing this out there for someone: "Scruffier Kind of Machinery" is a good name for a book, company, or band.
5. ianandrich ◴[] No.41876509[source]
You are not crazy. Logic programming is the future.