The identity crisis bit was both amusing and slightly worrying.
LLMs have no world models; they can't reason about truth or lies, only repeat facts encyclopedically.
All the tricks, CoT and the rest, are just that: tricks. Extended yapping that simulates thought and understanding.
AI can give great replies if you give it great prompts, because you activate the tokens you're actually interested in.
If you're lost in the first place, you'll get nowhere.
For Claude, continuing the text by making up a story about it being April Fools' is simply the most plausible output given its training weights.