
176 points lnyan | 6 comments
a3w [dead post] No.42175961
[flagged]
1. SubiculumCode No.42176413
Can you provide an example or link?

I'm not so confident that humans reason in a fundamentally different way than pattern matching. Perhaps paradigms focused on predicting the next token are simply too limiting. Reasoning plausibly involves pattern-matching relevant schema representations, then executing along that schema. The ability to intuit that an existing schema applies to a given situation is a good measure of intelligence, IMO. It could even make a good LLM metric.

replies(1): >>42176906 #
2. mdp2021 No.42176906
> humans reason in a fundamentally different way

After having formulated an idea, do you put it on your intellectual bench and re-examine it, purposefully, analytically? Well, that is more than plain pattern matching over intellectual keys - it is procedural.

And what about those intellectual keys, or «schemas»: how are they generated? Through verification and consolidation that go beyond the original (pattern-matching) intuition.

replies(1): >>42178329 #
3. stevenhuang No.42178329
> After having formulated an idea, do you put it on your intellectual bench and re-examine it, purposefully, analytically?

Can you show conclusively that LLMs can't do this or don't already do this to some degree?

replies(1): >>42178373 #
4. mdp2021 No.42178373{3}
Not "anatomically": only from the results.

I skimmed another relevant piece today: it seems we are not interpreting the internals at an adequate pace, despite the gained "transparency" of the architecture...

replies(1): >>42178573 #
5. stevenhuang No.42178573{4}
Precisely. The architecture is transparent but the latent representations within and the operations performed by LLMs are not.

It's a subject of active research to what extent LLM "reasoning" really is reasoning similar to humans', or something of a strictly weaker class entirely.

Personally I'm of the opinion human reasoning is really just "pattern matching", but we're also still waiting for the cognitive scientists to give us an answer on that one.

replies(1): >>42181987 #
6. mdp2021 No.42181987{5}
> I'm of the opinion human reasoning is really just "pattern matching"

There is more than one interpretation of "pattern matching".

Of course it seems a fundamental component of generating ideas, but those ideas are then put - by intellectuals - on a bench and criticized actively. The two activities have important differences: first you look and go "they seem four", but then you count to be sure.

The second part is absolutely critical to a well-functioning reasoner.
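The two-stage process described here (a fast, pattern-matching-style guess followed by a deliberate, procedural check) can be sketched as a tiny propose-and-verify loop. This is only an illustrative toy; the function names and the noisy "intuition" are hypothetical, not anything from the thread:

```python
import random

def intuit_count(items):
    """Fast guess, like the quick impression 'they seem four':
    a stand-in for pattern matching, occasionally off by one."""
    return len(items) + random.choice([-1, 0, 0, 0, 1])

def verify_count(items):
    """Slow, procedural check: count one by one."""
    total = 0
    for _ in items:
        total += 1
    return total

def count_with_verification(items):
    guess = intuit_count(items)
    actual = verify_count(items)
    # The deliberate check overrides the fast guess whenever they disagree.
    return actual if guess != actual else guess

items = ["a", "b", "c", "d"]
print(count_with_verification(items))  # → 4: the check corrects any bad guess
```

The point of the sketch is the second stage: without `verify_count`, the reasoner is only as reliable as its intuition.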