
124 points alphadelphi | 2 comments
csdvrx ◴[] No.43594425[source]
> Returning to the topic of the limitations of LLMs, LeCun explains, "An LLM produces one token after another. It goes through a fixed amount of computation to produce a token, and that's clearly System 1—it's reactive, right? There's no reasoning," a reference to Daniel Kahneman's influential framework that distinguishes between the human brain's fast, intuitive method of thinking (System 1) and the method of slower, more deliberative reasoning (System 2).
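(As a minimal toy sketch of the point being quoted — one fixed-cost step per emitted token. The next_token function here is a made-up stand-in for a model forward pass, not any real architecture:)

    # Autoregressive decoding: the loop below spends exactly one
    # fixed-cost next_token call per output token, no matter how
    # "hard" that token is -- the uniformity LeCun is pointing at.
    def next_token(context: list[int]) -> int:
        # Toy stand-in for a single fixed-cost forward pass.
        return (sum(context) * 31 + len(context)) % 101

    def generate(prompt: list[int], n_tokens: int) -> list[int]:
        tokens = list(prompt)
        for _ in range(n_tokens):  # one step per token, always
            tokens.append(next_token(tokens))
        return tokens

    print(generate([1, 2, 3], 5))

There is no mechanism in that loop for the model to "stop and think longer" about a difficult token, which is the System 1 analogy.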

Many people believe that "wants" come first and rationalizations follow. It's a theory that has also been supported by medical imaging studies.

Maybe LLMs are a good emulation of system-2 (their performance suggests they are), and what's missing is system-1, the "reptilian" brain, based on emotions like love, fear, aggression, etc.

For all we know, the system-1 could use the same embeddings, run in parallel, and produce tokens that are used to guide the system-2.
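(Purely hypothetical sketch of that coupling, under the commenter's own "for all we know" hedge — every name here, system1, system2, the shared EMBED table, is invented for illustration and corresponds to no real architecture:)

    import random
    random.seed(0)

    VOCAB = 100
    # Shared embedding table, visible to both modules.
    EMBED = {t: [random.random() for _ in range(8)] for t in range(VOCAB)}

    def system1(context: list[int]) -> int:
        # Cheap, reactive heuristic: pick the token whose embedding
        # is most similar to the last token's -- a "gut feeling".
        last = EMBED[context[-1]]
        scores = {t: sum(a * b for a, b in zip(EMBED[t], last))
                  for t in range(VOCAB)}
        return max(scores, key=scores.get)

    def system2(context: list[int], guidance: int) -> int:
        # Stubbed "deliberative" decoder, biased toward the
        # guidance token emitted by system1.
        candidate = (sum(context) + len(context)) % VOCAB
        return guidance if random.random() < 0.3 else candidate

    def generate(prompt: list[int], n: int) -> list[int]:
        tokens = list(prompt)
        for _ in range(n):
            # Both modules see the same context in each step.
            tokens.append(system2(tokens, system1(tokens)))
        return tokens

    print(generate([1, 2, 3], 5))

The only point of the sketch is the wiring: a fast module and a slow module sharing one embedding space, with the fast one's output fed in as a bias rather than a command.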

Personally, I trust my "emotions" and "gut feelings": I believe they are things "not yet rationalized" by my system-2, coming straight from my system-1.

I know it's very unpopular among nerds, but it has worked well enough for me!

replies(4): >>43594452 #>>43594494 #>>43594520 #>>43594544 #
1. sho_hn ◴[] No.43594452[source]
Re the "medical imaging" reference, many of those are built on top of one famous study recording movement before conscious realization that isn't as clear-cut as it entered popular knowledge as: https://www.theatlantic.com/health/archive/2019/09/free-will...

I know there are other examples, and I'm not attacking your post; mainly it's a great opportunity to link this IMHO interesting article that interacts with many debates on HN.

replies(1): >>43594639 #
2. csdvrx ◴[] No.43594639[source]
> IMHO interesting article that interacts with many debates on HN.

It's paywalled