
108 points | bertman | 1 comment
1. turtlethink
Most people in the tech space talking about AI not only misunderstand AI, but usually have an even greater misconception of the human mind/brain.

The basic argument in the article above (and in most of this comment thread) is that LLMs could never reason because they can't do what humans are doing when we reason.

This whole thread is, amusingly, a rebuttal of itself. I would argue it's humans that can't reason, given what we actually do when we "reason"; the proof is this article, itself a silly output of human reasoning. In other words, the above argument for why LLMs can't reason is obviously fallacious in multiple ways, the first being the assumption that human reasoning is a gold standard of reasoning (and the argument itself is a good example of how bad humans are at reasoning).

LLMs use naive statistical models to find the probability of a certain output, along the lines of "what's the most likely next word?". Humans use equally rationally-irrelevant models that are something like "what's the most likely next word that would have the best internal/external consequence in terms of dopamine, or more indirectly social standing, survival, etc.?"
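
For the curious, here's a minimal sketch of what "most likely next word" means mechanically. The vocabulary and scores below are made up, standing in for the logits a real model would compute over its whole vocabulary:

    import math, random

    # Toy illustration of next-token prediction (hypothetical scores, not a real model).
    # A real LLM produces a score (logit) for every token in its vocabulary; softmax
    # turns those scores into probabilities, and the next token is chosen from them.
    vocab = ["cat", "dog", "the", "sat"]
    logits = [2.0, 1.0, 0.5, 3.0]  # made-up scores for the next token

    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]          # softmax

    greedy = vocab[probs.index(max(probs))]    # "most likely next word"
    sampled = random.choices(vocab, weights=probs, k=1)[0]  # stochastic decoding

    print(dict(zip(vocab, [round(p, 3) for p in probs])), greedy, sampled)

The only real difference in an actual LLM is that the scores come out of billions of learned parameters instead of being hard-coded; the decoding step is roughly this simple.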

We have very weak rational and logical circuits that arrive at wrong conclusions far more often than right ones, so long as the wrong conclusion serves whatever goal our mind subconsciously judges helpful to survival. Often the result is simple nonsense output that just sounds good to the listener (e.g. most human conversation).

Think how much nonsense you have seen output by the very "smartest" of humans. That is human reasoning. We are woefully ignorant of the actual mechanics of our own reasoning. The brain is a marvelous machine, but it's not what you think it is.