So can a traditional encyclopedia.
We haven't reached the stage yet where the majority of people are as sceptical of chatbots as they are of Wikipedia.
I get that even if people know not to trust a wiki, they might trust it anyway, because, meh, good enough. But I'd still like us to reach a stage where the majority is at least somewhat aware that the chatbot might be wrong.
Wikipedia can also lie, obviously, but it at least requires sources to be cited, and I can dig deeper into topics at my leisure or need in order to improve my knowledge.
I cannot do either with an LLM. It is not obligated to cite sources, and even when it does, it can just make shit up: citations that are impossible to follow, or that lead back to AI-generated slop. Self-referencing, in other words. It also doesn't teach you (by default; my opinion of its teaching skills is an entirely different topic), but instead gives you an answer that is authoritative in tone, but not in substance.
Normalizing LLMs as "lossy encyclopedias" is a dangerous trend in my opinion, because it effectively handwaves away the need for the critical thinking skills associated with research and complex task execution, something in short supply in the modern Western world.
Giving LLMs credibility as "lossy encyclopedias" is tacit approval of the further dumbing-down of humanity through answer engines instead of the building of critical thinking skills.
Calling them "lossy encyclopedias" isn't intended as a compliment! The whole point of the analogy is to emphasize that using them in place of an encyclopedia is a bad way to apply them.
So long as people gleefully cede their expertise and sovereignty to a chatbot, I'll keep desperately screaming into the void that they're idiots for doing so.