Thanks for pointing out the elephant in the room with LLMs.
The basic design is non-deterministic. Trying to extract "facts" or "truth" or "accuracy" is an exercise in futility.
Yes, I hate people. But usually whenever there's a critique of LLMs, I can find a parallel issue in people. The extension is that "if people can produce economic value despite their flaws, then so can LLMs, because the flaws are very similar at their core". I feel like HackerNews discussions keep circling around "LLMs bad", which gets very tiresome very fast. I wish there was more enthusiasm. Sure, LLMs have a lot of problems, but they solve a lot of problems too.
It's the dissonance between endless critique of AI on one hand and ever-growing ubiquity on the other. It feels like talking to my dad, who refuses to use a GPS and always takes paper maps, won't acknowledge that he always arrives late, and keeps citing that one woman who drove into a lake while following GPS.
However, I do dispute your central claim that the issues with LLMs parallel the issues with people. I think that's a very dehumanizing and self-defeating perspective. The only ethical system that is rational is one in which humans have more than instrumental value to each other.
So when critics separate LLMs from humans, sure, there is a descriptive element of trying to be precise about what human thought is and how it differs from LLMs. But there is also a prescriptive argument that people are embarrassed to make, which is that human beings have to be afforded a certain kind of dignity, and there is no reason to extend that to an LLM based on everything we understand about how they function. So if a person screws up your order at a restaurant, or your coworker makes a mistake when coding, you should treat them with charity and empathy, a consideration you owe no LLM.
I'm sure this sounds silly to you, but it shouldn't. The bedrock of the Enlightenment project was that scientific inquiry would lead to human flourishing. That's humanism. If we've somehow strayed so far from that ideal that appeals to human dignity no longer make sense, I don't know what to say.
Instead of "humanism", where the "human" is at the centre, I'd like to propose a view where loosely defined intelligence is at the centre. In a pre-AI world that view was consistent with humanism, because humans were the only entities that displayed advanced intelligence, with the added bonus that it explains why people tend to value complex life forms more than simple ones. When AI enters the picture, it places sufficiently advanced AI above humans. Which is fine, because AI is nothing but the next step in evolution. It's like placing "Homo sapiens" above "Homo erectus", except AI is "Homo sapiens" and we are "Homo erectus". Makes a lot of sense IMO.