
250 points by lewq | 2 comments
lastdong No.42142169
Large Language Models (LLMs) don’t fully grasp logic or mathematics, do they? They generate lines of code that appear to fit together well, which works for simple scripts. With larger or more complex languages and projects, though, in my experience they often fall short.
replies(2): >>42142693 >>42147155
meiraleal No.42147155
> Large Language Models (LLMs) don’t fully grasp logic or mathematics, do they?

They do, just not 100% of the time. Like a normal person.

replies(1): >>42149647
namaria No.42149647
No, they don't. They capture underlying patterns in training data and use them to generate text that fits the context.
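
As a crude sketch of that claim (a toy bigram model, nothing like a production LLM; the tiny corpus below is invented for illustration), the mechanism looks roughly like this: count which tokens follow which in the training data, then sample a continuation that statistically fits the preceding context.

    import random
    from collections import Counter, defaultdict

    # Invented toy "training data" -- stands in for a real corpus.
    corpus = "the cat sat on the mat the cat ate the fish".split()

    # Capture the underlying pattern: how often each word follows
    # each context word.
    transitions = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        transitions[prev][nxt] += 1

    def sample_next(context_word):
        # Sample the next word in proportion to how often it followed
        # context_word in training -- statistics, not understanding.
        counts = transitions[context_word]
        words, weights = zip(*counts.items())
        return random.choices(words, weights=weights, k=1)[0]

    # Generate text that "fits the context". Nothing here grasps what
    # a cat or a mat is; a real LLM does the analogous thing with a
    # learned neural network over token sequences instead of raw counts.
    word = "the"
    out = [word]
    for _ in range(8):
        if not transitions[word]:  # terminal word; no observed continuation
            break
        word = sample_next(word)
        out.append(word)
    print(" ".join(out))

Scale the counts up to a neural network trained on billions of tokens and you get output that often looks like reasoning, without any guarantee of it.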
replies(1): >>42149886
meiraleal No.42149886
That's what humans do, too. And the human rate of hallucination is probably higher.
replies(1): >>42151957
namaria No.42151957
You should study some neuroscience.