I find it funny that AGI is supposed to be right around the corner, while these supposedly super smart LLMs still need to get their outputs filtered by regexes.
replies(8):
[1] https://www.axios.com/2025/05/23/anthropic-ai-deception-risk
I'm trying to remember which movie it was where a man left notes to himself because he had memory loss; since I never actually saw it, the title escapes me. That's the sort of thing an AI could easily tell me with very little back-and-forth and get right, because it's broadly popular information that's in the training data and I just don't remember it.
By the same token, you needn't think there's a person there when that meme pops up in the output. Those things are all in the training data, over and over.