
251 points lewq | 3 comments
lastdong No.42142169
Large Language Models (LLMs) don’t fully grasp logic or mathematics, do they? They generate lines of code that appear to fit together, which works well for simple scripts. But on larger or more complex languages and projects, in my experience they often fall short.
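
A quick way to probe this yourself (a minimal sketch assuming the openai Python client and an OPENAI_API_KEY in the environment; the model name and prompts are placeholders, not a benchmark):

    # Probe a model with a trivial scripting task vs. a multi-step logic task.
    # Model name and prompts are illustrative assumptions, not a benchmark.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def ask(prompt: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder; any chat model works
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    # Simple scripts: usually fine.
    print(ask("Write a Python one-liner that reverses a string."))

    # Multi-step logic: answers get noticeably less reliable.
    print(ask("If all bloops are razzies and no razzies are lazzies, "
              "can a bloop be a lazzie? Answer yes or no, then explain."))

Re-running the second prompt a few times tends to show the kind of variance I mean.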
replies(2): >>42142693, >>42147155
meiraleal No.42147155
> Large Language Models (LLMs) don’t fully grasp logic or mathematics, do they?

They do, just not 100% of the time. Like a normal person.

replies(1): >>42149647
1. namaria No.42149647
No, they don't. They capture underlying patterns in training data and use them to generate text that fits the context.
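
For intuition, here is the pattern-capture idea at its crudest: a word-level bigram model that learns only which word tends to follow which, yet still emits text that loosely fits the local context. A toy sketch, nothing like a transformer's internals:

    import random
    from collections import defaultdict

    # Word-level bigram "language model": records which word follows which.
    def train(text):
        model = defaultdict(list)
        words = text.split()
        for prev, nxt in zip(words, words[1:]):
            model[prev].append(nxt)
        return model

    # Generation just samples a plausible next word given the last one.
    def generate(model, start, length=12):
        out = [start]
        for _ in range(length):
            options = model.get(out[-1])
            if not options:
                break
            out.append(random.choice(options))
        return " ".join(out)

    corpus = "the model predicts the next word and the next word fits the context"
    print(generate(train(corpus), "the"))

There is no logic anywhere in that loop, only frequency. Whether scaling the same idea up with vastly richer context amounts to "grasping" logic is exactly what's in dispute here.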
replies(1): >>42149886
2. meiraleal No.42149886
That's what humans do, too. And the human rate of hallucination is probably higher.
replies(1): >>42151957
3. namaria No.42151957
You should study some neuroscience.