
169 points mattmarcus | 2 comments
sega_sai No.43613291
One exciting thing in the text is the attempt to move away from asking whether an LLM 'understands' (which I would argue is an ill-posed question) and instead to rephrase it in terms of something that can actually be measured.

It would be good to list a few possible ways of interpreting 'understanding of code'. These could include: 1) type inference for the result, 2) nullability, 3) runtime asymptotics, 4) what the code does.
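
Not something from the linked article, just a rough sketch of what "measurable" could look like for those four interpretations: each becomes a concrete question with a checkable answer. The snippet, the probe questions, and the ask_model callable are all hypothetical stand-ins for whatever model interface and benchmark you actually use.

    # A minimal sketch: probe "understanding of code" with scorable questions
    # rather than a yes/no judgment. ask_model is a hypothetical stand-in for
    # any LLM interface that maps a prompt string to an answer string.

    from dataclasses import dataclass
    from typing import Callable, List

    SNIPPET = """
    def first_even(xs: list[int]):
        for x in xs:
            if x % 2 == 0:
                return x
        return None
    """

    @dataclass
    class Probe:
        question: str   # what we ask about the snippet
        expected: str   # ground-truth answer, lower-cased for matching

    PROBES: List[Probe] = [
        Probe("What is the inferred return type?", "int | none"),     # 1) type inference
        Probe("Can this function return None?", "yes"),               # 2) nullability
        Probe("What is its worst-case time complexity?", "o(n)"),     # 3) runtime asymptotics
        Probe("In one sentence, what does it do?", "first even"),     # 4) what the code does
    ]

    def score(ask_model: Callable[[str], str]) -> float:
        """Fraction of probes answered correctly (crude substring match)."""
        hits = 0
        for p in PROBES:
            answer = ask_model(f"{SNIPPET}\n\n{p.question}").lower()
            hits += p.expected in answer
        return hits / len(PROBES)

Substring matching is obviously crude; the point is only that each notion of "understanding" turns into something you can count and compare across models.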

replies(2): >>43613930 #>>43613939 #
empath75 No.43613939
Is there any way you can tell whether a human understands something other than by asking them a question and judging their answer?

Nobody interrogates each other's internal states when judging whether someone understands a topic. All we have to judge by are the words they produce or the actions they take in response to a situation.

The way that a system or a person arrives at a response is sort of an implementation detail that isn't that important when judging whether it understands something. Some people understand a topic on an intuitive, almost unthinking level, and other people need to carefully reason about it, but both demonstrate understanding in exactly the same way: by how they respond to questions about it.

replies(1): >>43614028 #
cess11 No.43614028
No, most people absolutely use non-linguistic, involuntary cues when judging the responses of other people.

Not doing so is commonly associated with things like being on the spectrum or with cognitive deficits.

replies(3): >>43614102 #>>43614398 #>>43616453 #
1. empath75 No.43614102
On a message board? Do you have theories about whether people on this thread understand or don't understand what they're talking about?
replies(1): >>43618604 #
2. cess11 No.43618604
Why would you add this constraint now?