
250 points | lewq | 9 comments
1. lastdong ◴[] No.42142169[source]
Large Language Models (LLMs) don’t fully grasp logic or mathematics, do they? They generate lines of code that appear to fit together well, which is effective for simple scripts. However, when it comes to larger or more complex languages or projects, they (in my experience) often fall short.
replies(2): >>42142693 #>>42147155 #
2. underwater ◴[] No.42142693[source]
But humans don't either. We have to install programs in people for even basic mathematical and analytical tasks. This takes about 12 years, and is pretty ineffective.
replies(2): >>42142774 #>>42142779 #
3. onlyrealcuzzo ◴[] No.42142774[source]
It seemingly built the modern world.

Ineffective seems harsh.

4. krapp ◴[] No.42142779[source]
No, we don't. Humans were capable of mathematics and analytical tasks long before the establishment of modern twelve-year education. We don't require "installing" a program to do basic mathematics; we would never have figured out basic agriculture or developed civilization or seafaring if that were the case. I mean, Eratosthenes worked out the circumference of the Earth to a reasonable degree of accuracy in the third century BC. Even early hunter-gatherer societies had concepts of counting, grouping and sequence that are beyond LLMs.
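(For reference, a rough sketch of the calculation usually attributed to Eratosthenes, using the commonly cited figures; the exact length of an ancient stadion is debated, so the metric conversion below is only an assumption.)

    # Commonly cited figures: at the summer solstice the sun was directly
    # overhead at Syene, while at Alexandria a vertical rod cast a shadow
    # at about 7.2 degrees (1/50 of a full circle). The two cities were
    # reckoned to be roughly 5000 stadia apart.
    shadow_angle_deg = 7.2
    distance_stadia = 5_000

    circumference_stadia = distance_stadia * (360 / shadow_angle_deg)
    print(circumference_stadia)  # 250000.0 stadia

    # Converting with a stadion of ~157.5 m (one of several proposed values)
    # lands in the right ballpark of the true ~40,075 km.
    stadion_m = 157.5
    print(circumference_stadia * stadion_m / 1000, "km")  # ~39375.0 km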

If humans were as bad as LLMs at basic math and logic, we would consider them developmentally challenged. Yet this constant insistence that humans are categorically worse than, or at best no better than, LLMs persists. It's a weird, almost religious belief in the superiority of the machine even in spite of obvious evidence to the contrary.

replies(1): >>42146287 #
5. kmmlng ◴[] No.42146287{3}[source]
I think you both have a point. Clearly, something about the way we build LLMs today makes them inept at these kinds of tasks. It's unclear how fundamental this problem is as far as I'm concerned.

Clearly, humans also need training in these things and almost no one figures out how to do basic things like long division by themselves. Some people sometimes figure things out, and more importantly, they do so by building on what came before.

The difference between humans and LLMs is that even after being trained and given access to nearly everything that came before, LLMs are terrible at this stuff.

6. meiraleal ◴[] No.42147155[source]
> Large Language Models (LLMs) don’t fully grasp logic or mathematics, do they?

They do. Just not 100% of the time, much like a normal person.

replies(1): >>42149647 #
7. namaria ◴[] No.42149647[source]
No, they don't. They capture underlying patterns in training data and use them to generate text that fits the context.
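(To make that concrete, here is a deliberately tiny sketch of the idea; the tokens and probabilities in this bigram table are invented purely for illustration, and real models learn far richer statistics than this.)

    import random

    # Toy next-token table standing in for patterns "captured" from training
    # data. Every entry here is made up for illustration.
    next_token_probs = {
        ("the", "cat"): {"sat": 0.6, "ran": 0.3, "slept": 0.1},
        ("cat", "sat"): {"on": 0.9, "quietly": 0.1},
        ("sat", "on"): {"the": 0.8, "a": 0.2},
        ("on", "the"): {"mat": 0.7, "roof": 0.3},
    }

    def generate(context, max_new_tokens=4):
        # Repeatedly sample a token that "fits the context" seen so far.
        tokens = list(context)
        for _ in range(max_new_tokens):
            dist = next_token_probs.get(tuple(tokens[-2:]))
            if dist is None:  # pattern never seen in "training": nothing to say
                break
            choices, weights = zip(*dist.items())
            tokens.append(random.choices(choices, weights=weights)[0])
        return " ".join(tokens)

    print(generate(["the", "cat"]))  # e.g. "the cat sat on the mat"

At this toy scale the mechanism is transparent: the loop only reproduces statistics of its "training" table.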
replies(1): >>42149886 #
8. meiraleal ◴[] No.42149886{3}[source]
That's what humans do, too. And the human rate of hallucination is probably higher.
replies(1): >>42151957 #
9. namaria ◴[] No.42151957{4}[source]
You should study some neuroscience.