I mean, when you give an LLM good input, it has a good chance of producing a useful result. However, when you ask an LLM to retrieve facts, it often fails. And when you look at the inner workings of an LLM, that should not surprise us. After all, these models are designed to apply learned relationships between the tokens in their input, which is closer to applying broad concepts than to recalling detailed facts.
So if you want an LLM to succeed at its task, provide it with the knowledge it needs for that task (or at least with the tools to obtain that knowledge itself).
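As a rough illustration of that idea, here is a minimal sketch in Python: instead of asking the model to recall a fact from its weights, we look the fact up first and paste it into the prompt. The `call_llm` function, the tiny in-memory knowledge base, and "Product X" are hypothetical placeholders, not any specific API or dataset.

```python
# Minimal sketch: hand the model the facts instead of asking it to recall them.
# call_llm, KNOWLEDGE_BASE, and "Product X" are placeholders for illustration only.

KNOWLEDGE_BASE = {
    "release date": "Product X was released on 2021-03-14.",
    "license": "Product X is distributed under the MIT license.",
}


def retrieve(question: str) -> list[str]:
    """Very naive lookup: return every stored fact whose key appears in the question."""
    q = question.lower()
    return [fact for key, fact in KNOWLEDGE_BASE.items() if key in q]


def call_llm(prompt: str) -> str:
    """Stand-in for whatever LLM client you actually use."""
    return f"(model answer based on a prompt of {len(prompt)} characters)"


def answer(question: str) -> str:
    # Put the retrieved knowledge directly into the prompt,
    # so the model only has to reason over it, not remember it.
    facts = retrieve(question)
    prompt = (
        "Answer the question using only the facts below.\n\n"
        "Facts:\n" + "\n".join(f"- {f}" for f in facts) +
        f"\n\nQuestion: {question}"
    )
    return call_llm(prompt)


print(answer("Under which license is Product X released?"))
```

The same principle applies if you go the tool route instead: rather than stuffing facts into the prompt yourself, you expose a lookup function the model can call when it needs them.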