
170 points | PaulHoule | 1 comment
CuriouslyC No.45120876
This article is accurate. That's why I'm investigating a Bayesian symbolic Lisp reasoner. It's incapable of hallucinating, its auditable traces are actual programs, and it kicks the crap out of LLMs at things like ARC-AGI, symbolic reasoning, logic programs, and game playing. I'm working on a paper showing that the same model can break 80 on ARC-AGI, beat the house by counting cards at blackjack, and solve complex mathematical word problems.
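
To make the "auditable traces are actual programs" claim concrete, here is a minimal sketch of one way Bayesian scoring over candidate symbolic programs can work. The tiny DSL, the operator set, and the description-length prior are all illustrative assumptions; the commenter's actual system is unpublished and may work quite differently.

    import itertools
    import math

    # Tiny symbolic DSL: a candidate program is a tuple of op names
    # applied left to right to a single integer input x.
    # Purely illustrative -- NOT the commenter's system.
    OPS = {
        "inc":    lambda x: x + 1,
        "double": lambda x: x * 2,
        "square": lambda x: x * x,
    }

    def enumerate_programs(max_depth):
        """Yield all op-name tuples up to max_depth, e.g. ('inc', 'double')."""
        for depth in range(1, max_depth + 1):
            yield from itertools.product(OPS, repeat=depth)

    def run(program, x):
        """Execute a program by applying each op in sequence."""
        for op in program:
            x = OPS[op](x)
        return x

    def log_posterior(program, examples):
        """Bayesian score: a description-length prior (shorter programs are
        more probable) times a 0/1 likelihood that requires every
        input/output example to be reproduced exactly."""
        log_prior = -len(program) * math.log(len(OPS))
        if all(run(program, x) == y for x, y in examples):
            return log_prior       # likelihood = 1
        return float("-inf")       # likelihood = 0: inconsistent programs ruled out

    def best_program(examples, max_depth=3):
        """Return the maximum-a-posteriori program. The returned tuple IS
        the auditable trace: you can read it and re-run it."""
        return max(enumerate_programs(max_depth),
                   key=lambda p: log_posterior(p, examples))

    # Usage: learn f from examples of f(x) = (x + 1) * 2.
    print(best_program([(1, 4), (2, 6), (5, 12)]))  # ('inc', 'double')

In a setup like this, hallucination is impossible by construction: the output is a program anyone can re-run against the examples, not free-form text.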
1. leptons No.45121147
LLMs are also incapable of "hallucinating", so maybe that isn't the buzzword you should be using.