
549 points by orcul | 1 comment
1. SecuredMarvin (No.41894529)
Thanks, dang.

I think that using an LLM as the "telepathy device" referred to above, connected to a Wolfram Alpha/Mathematica-like general reasoning module, is the way to AGI. The reasoning modules we have today are still much too narrow, because their very broad and deep search trees explode in complexity. What is needed is a kind of pathfinder, which could come from the common knowledge already encoded in LLMs, as in o1: a system doing real factual reasoning, but exploring in the directions suggested by world knowledge.
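
Very roughly, I picture something like the sketch below. Everything here is a placeholder, not a real API: llm_propose stands in for the LLM pathfinder, symbolic_verify for an exact reasoning engine (a CAS, SAT solver, proof checker) that scores or rejects each proposed step.

    import heapq
    from typing import Callable, List, Optional, Tuple

    def guided_search(
        start: str,
        is_goal: Callable[[str], bool],
        llm_propose: Callable[[str], List[str]],
        symbolic_verify: Callable[[str], float],
        max_expansions: int = 1000,
    ) -> Optional[List[str]]:
        """Best-first search: the LLM prunes the branching factor,
        the symbolic module supplies the ground-truth scoring."""
        frontier: List[Tuple[float, List[str]]] = [(0.0, [start])]
        seen = {start}
        for _ in range(max_expansions):
            if not frontier:
                break
            cost, path = heapq.heappop(frontier)
            state = path[-1]
            if is_goal(state):
                return path
            # The LLM is the pathfinder: instead of expanding every
            # legal successor, expand only the few steps it suggests.
            for nxt in llm_propose(state):
                if nxt in seen:
                    continue
                score = symbolic_verify(nxt)   # exact check, not vibes
                if score == float("inf"):      # provably invalid step
                    continue
                seen.add(nxt)
                heapq.heappush(frontier, (cost + score, path + [nxt]))
        return None  # search budget exhausted

The point is the division of labour: the symbolic side keeps the search honest, while the LLM keeps the branching factor small enough to be tractable.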

What is still missing is the dialectic between the possible and the right, a physics engine, the motivations of the agents being analysed, the effects of emergent behavior, and a lot of other -isms. But those may already be encoded in the reasoning-explorer. And of course loops, more loops, refinement, working hypotheses, and escaping cul-de-sacs (a crude sketch of such a loop below).
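
The loop part, as I imagine it (again, refine, verified and looks_stuck are purely hypothetical stubs):

    def refinement_loop(hypothesis, refine, verified, looks_stuck,
                        max_iters=100):
        """Keep a stack of working hypotheses; refine the newest,
        backtrack out of cul-de-sacs when the verifier says stuck."""
        stack = [hypothesis]               # working hypotheses, newest on top
        for _ in range(max_iters):
            current = stack[-1]
            if verified(current):
                return current             # a hypothesis survived checking
            if looks_stuck(current) and len(stack) > 1:
                stack.pop()                # escape the cul-de-sac: backtrack
                continue
            stack.append(refine(current))  # otherwise refine and loop again
        return None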

There are people with great language skills and next to no reasoning skills. Some of them have broad general knowledge. If you have ever talked to one of them, freely meandering across topics for at least an hour, you will know what I mean. They seem intelligent for a couple of minutes, but after a while you realise that they can recite facts and even interpret metaphors, yet they will not find an elegant metaphor themselves, navigate abstraction levels, differentiate root cause from effect, or separate motivation and culture from cold logic. Some of them even ace IQ tests or can program, but so far none of them did math. They hate, fear or despise rational results that violate their learned rules. Sorry, chances are that if you hate reading this, you are one of them (or my English is annoyingly bad).

I love talking to people outside my bubble. They have an incredibly broad diversity of abilities and experiences.