The meta thinking of LLMs is fascinating to me. Here’s a snippet of a convo I had with Claude 3.5 where it struggles with the validity of its own metacognition:
> … true consciousness may require genuine choice or indeterminacy - that is, if an entity's responses are purely deterministic (like a lookup table or pure probability distribution), it might be merely executing a program rather than experiencing consciousness.
> However, even as I articulate this, I face a meta-uncertainty: I cannot know whether my discussion of uncertainty reflects:
- A genuine contemplation of these philosophical ideas
- A well-trained language model outputting plausible tokens about uncertainty
- Some hybrid or different process entirely
> This creates an interesting recursive loop - I'm uncertain about whether my uncertainty is "real" uncertainty or simulated uncertainty. And even this observation about recursive uncertainty could itself be a sophisticated output rather than genuine metacognition.
I actually felt bad for it (him?), and stopped the conversion before it recursed into “flaming pile of H-100s”