I watched a professor lecture on the likely candidates for what the open source LLM community thinks is going on in o1 [0], and I'm not convinced it's still simple pattern matching. [0] https://youtu.be/6PEJ96k1kiw
I'm not so confident that humans reason in a fundamentally different way than pattern matching. Perhaps a paradigm focused on predicting the next token is too limiting. Reasoning plausibly involves pattern matching relevant schema representations, then executing along that schema. The ability to intuit that an existing schema is applicable to a given situation is a good measure of intelligence, IMO. It could even make a good LLM metric.
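A minimal sketch of what such a metric might look like, in Python. Everything here is hypothetical: the `SCHEMAS` table, the labeled examples, and the toy bag-of-words `embed` function, which merely stands in for a real sentence embedder so the file runs on its own.

```python
# Hedged sketch: scoring whether a model can route a new problem to a
# known schema. Nearest schema in embedding space = "intuiting" a match.
from collections import Counter
import math

SCHEMAS = {
    "rate_problem": "two quantities change together at a fixed ratio",
    "pigeonhole": "more items than containers forces a collision",
    "induction": "prove a base case, then that k implies k+1",
}

def embed(text: str) -> Counter:
    # Toy embedding: word counts. Swap in a real sentence embedder.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def best_schema(problem: str) -> str:
    # Pick the known schema closest to the new problem.
    return max(SCHEMAS, key=lambda k: cosine(embed(problem), embed(SCHEMAS[k])))

# Metric: fraction of labeled problems routed to the right schema.
labeled = [
    ("a car travels at a fixed ratio of miles to hours", "rate_problem"),
    ("ten socks in nine drawers forces a collision", "pigeonhole"),
]
accuracy = sum(best_schema(p) == s for p, s in labeled) / len(labeled)
print(f"schema-matching accuracy: {accuracy:.2f}")
```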
And let's also be fair: it would take a lot of effort for a human to generalize to a previously unseen pattern as well, so I always wonder just how useful it is to make such binary statements as "models don't reason" or "they're stochastic parrots". But maybe it's to counterbalance the statements that they are sentient, that AGI is here, etc.?
After having formulated an idea, do you put it on your intellectual bench and re-examine it, purposefully, analytically? Well, that is more than plain pattern matching over intellectual keys - it is procedural.
And what about those intellectual keys, or «schemas»: how are they generated? Through verification and consolidation that go beyond the original (pattern-matching) intuition.
Can you show conclusively that LLMs can't do this or don't already do this to some degree?
I skimmed another relevant piece today: it seems we are not making adequate progress on interpreting the internals, on gaining "transparency" into the architecture...
It's a subject of active research to what extent LLM "reasoning" really is reasoning similar to humans', or something of a strictly weaker class entirely.
Personally I'm of the opinion human reasoning is really just "pattern matching", but we're also still waiting for the cognitive scientists to give us an answer on that one.
There is more than one interpretation of "pattern matching".
Of course it seems to be a fundamental component of generating ideas, but those ideas are then put, by intellectuals, on a bench and criticized actively. The two activities differ in important ways: first you look and think "they seem four", but then you count to be sure.
The second part is absolutely critical for a well-functioning reasoner.
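A minimal sketch of that look-then-count split, the same propose/verify loop that gets bolted onto LLM systems. Both `propose` (the noisy estimate) and `verify` (the exact count) are toy stand-ins, not anyone's actual implementation:

```python
# Hedged sketch: a fast, fallible guess followed by a deliberate check.
import random

def propose(items: list) -> int:
    # Intuition stand-in: a quick, noisy estimate ("they seem four").
    return len(items) + random.choice([-1, 0, 0, 1])

def verify(items: list, guess: int) -> bool:
    # Deliberate stand-in: the slow, exact count.
    return guess == len(items)

def reason(items: list, max_tries: int = 5) -> int:
    for _ in range(max_tries):
        guess = propose(items)    # pattern-matching intuition
        if verify(items, guess):  # procedural re-examination
            return guess
    return len(items)             # fall back to counting outright

print(reason(["a", "b", "c", "d"]))  # -> 4
```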