>> (Of course, maybe you could argue that's a famous example in its training set and it's just regurgitating, but then you could try making modifications, asking other questions, etc, and the LLM would continue to respond sensibly. So to me it seems to understand...)
Yes, well, that's the big confounder that has to be overcome by any claim of understanding (or reasoning, etc.) by LLMs, isn't it? They've seen so much in training that it's very hard to know what they're simply reproducing from their corpus and what they aren't. My opinion is that LLMs are statistical models of text, and we can expect them to learn the surface statistical regularities of the text in their corpus, which can be very powerful, but that's all. I don't see how they can learn "understanding" from text. The null hypothesis should be that they can't, and, Sagan-like, we should expect to see extraordinary evidence before accepting that they can. I do.
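(To make concrete what I mean by "surface statistical regularities": here's a toy sketch, nothing like an actual LLM's architecture, just a bigram counter over a made-up corpus. The point is only that next-token prediction can be driven purely by co-occurrence statistics, with no representation of meaning anywhere.)

```python
from collections import Counter, defaultdict

# Toy bigram model: predicts the next token purely from co-occurrence
# counts in its corpus. The corpus below is invented for illustration.
corpus = "the cat sat on the mat the dog sat on the rug".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_token_distribution(prev):
    """Return P(next | prev) estimated from raw corpus counts."""
    total = sum(counts[prev].values())
    return {tok: c / total for tok, c in counts[prev].items()}

print(next_token_distribution("the"))
# {'cat': 0.25, 'mat': 0.25, 'dog': 0.25, 'rug': 0.25}
```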
>> Regarding computation and understanding: I just thought it was interesting that you presented a true fact about the computational limitations of NNs, which could easily/naturally/temptingly -- yet incorrectly (I think!) -- be extended into a statement about the limitations of understanding of NNs (whatever understanding means -- no technical definition that I know of, but still, it does mean something, right?).
For humans it means something, because understanding is a property we assume humans have. Sometimes we use it metaphorically ("my program understands when the customer wants to change their pants"), but in terms of computation... again, I have no clue.
I generally have very few clues :)