One should distinguish between a single instance and the mechanism/process that produces such instances. We could take randomness and entropy as an analogy: Shannon entropy quantifies the randomness of a sequence generator, not the randomness/complexity of individual instances (which would be more akin to Kolmogorov complexity).
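A minimal sketch of the distinction (the example distributions and strings are chosen purely for illustration): Shannon entropy is computed from the generator's probabilities, so two very different-looking outputs of the same fair coin get the same entropy-per-flip attributed to their source.

```python
import math

def shannon_entropy(probs):
    """Entropy in bits of a distribution over outcomes -- a property
    of the generator, not of any one sequence it emits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

fair = shannon_entropy([0.5, 0.5])    # fair coin: 1 bit per flip
biased = shannon_entropy([0.9, 0.1])  # biased coin: about 0.47 bits per flip

# Both "HHHHHHHH" and "HTHHTHTT" are equally probable draws from the
# fair generator; asking how complex each individual string is, is the
# (uncomputable) Kolmogorov-complexity question, not the entropy question.
print(fair, biased)
```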
Similarly, the really interesting questions concern the reliability and predictive power of knowledge-producing mechanisms, not the individual pieces they produce.
Another analogy is confidence intervals, which are defined through a collective property: a confidence interval is an interval produced by a confidence procedure, and the meat of the definition concerns the procedure, not its output.
I always found the Gettier problems unimpressive, mainly a distraction and a language game. Watching out for smoke-like things to infer whether there is a fire is a good survival tool in the woods and advisable behavior. Neither it nor anything else is a 100% surefire way to obtain bulletproof capital-T Truth. We are never 100% justified ("what if you're in a simulation?", "you might be a Boltzmann brain!"). Even something like math is uncertain: we may make a mistake when mentally adding 7454+8635; we may even have a brainfart when adding 2+2. It's just much less likely, but I'm quite certain that at least one human manages to mess up 2+2 in real life every day.
Whether something counts as knowledge is a dull and uninteresting question. What do you want to use the fact of its being knowledge for? Will you trust things you determine to be knowledge and not other things? Or is it about deciding legal court cases? Because then it's better to cut out the middleman and directly try to determine whether it's good to punish someone or not, without reference to terms like "having knowledge".