
169 points | mattmarcus | 1 comment
kazinator · No.43613916
LLMs "understand" nullability to the extent that texts they have been trained on contain examples of nullability being used in code, together with remarks about it in natural language. When the right tokens occur in your query, other tokens get filled in from that data in a clever way. That's all there is to it.

The LLM will not understand, and is incapable of developing an understanding, of a concept not present in its training data.

If you try to teach it the basics of the misunderstood concept in your chat, it will reflect back a verbal acknowledgement, restated in different words, with some smoothly worded embellishments that look like the external trappings of understanding. It's only a mirage, though.

The LLM will code anything, no matter how novel, if you give it detailed enough instructions and clarifications. That's just a language translation task from pseudo-code to code. Being a language model, it's designed for that.
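
A hypothetical illustration of that translation step (the quoted "pseudo-code" and the firstEven function are invented here, not taken from the comment): each clause of the instruction maps more or less directly onto syntax.

    // "Return the first even number in the list; if there is none, return null."
    // The code below is little more than a clause-by-clause translation of that line.
    function firstEven(xs: number[]): number | null {
      for (const x of xs) {
        if (x % 2 === 0) return x; // "the first even number"
      }
      return null;                 // "if there is none, return null"
    }

    console.log(firstEven([3, 5, 8, 10])); // 8
    console.log(firstEven([1, 3]));        // null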

An LLM is like a bar waiter who has picked up on economics and politics talk and can interject with something clever-sounding, to the surprise of the patrons. Gee, how do they understand the workings of the International Monetary Fund, and what the hell are they doing working in this bar?

replies(1): >>43621625 #
1. ghc · No.43621625
Great analogy at the end! I'm going to have to steal this, because it hits right at the heart of the problem with relying on LLMs to do things outside of what they were designed for.