
250 points | lewq | 1 comment
lastdong:
Large Language Models (LLMs) don’t fully grasp logic or mathematics, do they? They generate lines of code that appear to fit together well, which is effective for simple scripts. However, when it comes to larger or more complex languages or projects, they (in my experience) often fall short.
underwater:
But humans don’t either. We have to install programs in people for even basic mathematical and analytical tasks. This takes about 12 years, and is pretty ineffective.
onlyrealcuzzo:
It seemingly built the modern world.

Ineffective seems harsh.