The training passages were tokenised and baked into the model's weights during training; the text itself is not stored anywhere. Don't humanise it by saying it has "remembered" anything. It does not and cannot remember sequences.
Yes, mostly. At temperature zero, decoding is greedy, so the same input tokens should give the same output tokens on every run regardless of seed; with a nonzero temperature, fixing the random seed makes the sampling reproducible (barring nondeterministic GPU kernels or batching effects).
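For illustration, a minimal sketch of both modes using Hugging Face transformers (the model, prompt, and parameters here are placeholder choices, not anything from the thread):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tok("Our Father, who art in heaven,", return_tensors="pt")

# Temperature zero is effectively greedy decoding: no sampling happens,
# so the output is identical on every run, seed or no seed.
greedy = model.generate(**inputs, do_sample=False, max_new_tokens=20)

# With sampling on, the output is reproducible only if the RNG seed is
# fixed to the same value before each generation.
torch.manual_seed(42)
sampled = model.generate(**inputs, do_sample=True, temperature=0.8, max_new_tokens=20)

print(tok.decode(greedy[0], skip_special_tokens=True))
print(tok.decode(sampled[0], skip_special_tokens=True))
```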
However, there is no guarantee that the output for a given seed will be the correct or expected output.
For example, there must logically be a model and seed for which providing the Lord's Prayer as input for completion produces a Metallica song as output, because those lyrics are a perfectly viable sequence of output tokens: https://genius.com/Metallica-enter-sandman-lyrics
That seed is no more or less valid than any seed that completes the actual Lord's Prayer, or one that produces something else entirely. In every case the model is just predicting the next token; the seed merely picks which draw you get.
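To make that concrete, here's a toy sampler with made-up logits for three hypothetical continuations; the distribution is fixed, and the seed only decides which draw comes out:

```python
import torch

# Made-up logits for three hypothetical next-token candidates.
logits = torch.tensor([4.0, 3.5, 0.1])
tokens = ["hallowed", "Exit", "banana"]
probs = torch.softmax(logits, dim=0)  # one fixed distribution; every seed sees the same one

for seed in range(4):
    torch.manual_seed(seed)
    pick = torch.multinomial(probs, num_samples=1).item()
    print(f"seed {seed}: {tokens[pick]!r} (p={probs[pick].item():.2f})")
```

Different seeds land on different tokens, but every draw comes from the same distribution; none of them is privileged as "correct".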
If people want that sort of exact, reliable retrieval of sequences, and for the retrieved sequences to be "correct", then an LLM is the wrong tool for the job.