
214 points | optimalsolver | 1 comment
js8 No.45770542
I think the explanation is pretty simple, as I said in my earlier comment: https://news.ycombinator.com/item?id=44904107

I also believe the problem is we don't know what we want: https://news.ycombinator.com/item?id=45509015

If we could make LLMs apply a modest set of logic rules consistently, it would be a win.

replies(1): >>45770939
1. Sharlin No.45770939
That's a pretty big "if". LLMs are by design entirely unlike GoFAI reasoning engines. It's also very debatable whether it makes any sense to try to hack LLMs into reasoning engines when you could just... use a reasoning engine. Or have the LLM defer to one, which would play to their strength as translators.
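
A minimal sketch of that division of labor, assuming Z3 as the external reasoning engine and a hypothetical llm_translate step standing in for the LLM's prose-to-formulas translation (the output is hard-coded here, since the point is only the hand-off):

    # Sketch: LLM as translator, a real reasoning engine (Z3) does the logic.
    # `llm_translate` is a hypothetical stand-in for an LLM call; its output is
    # hard-coded so the example stays self-contained.
    from z3 import Bool, Implies, Not, Solver, unsat

    def llm_translate(question: str):
        """Pretend an LLM turned the natural-language question into Z3 formulas."""
        rain, wet = Bool("rain"), Bool("wet")
        facts = [Implies(rain, wet), rain]  # "If it rains, the ground gets wet. It is raining."
        query = wet                         # "Is the ground wet?"
        return facts, query

    def answer(question: str) -> str:
        facts, query = llm_translate(question)
        s = Solver()
        s.add(*facts)
        s.add(Not(query))  # query is entailed iff facts + Not(query) is unsatisfiable
        return "yes" if s.check() == unsat else "can't prove it"

    print(answer("If it rains the ground gets wet. It is raining. Is the ground wet?"))

The solver guarantees the deduction is sound; the LLM's only job is the translation step, which is where it actually tends to be reliable.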