416 points floverfelt | 5 comments
sebnukem2 ◴[] No.45056066[source]
> hallucinations aren’t a bug of LLMs, they are a feature. Indeed they are the feature. All an LLM does is produce hallucinations, it’s just that we find some of them useful.

Nice.

replies(7): >>45056284 #>>45056352 #>>45057115 #>>45057234 #>>45057503 #>>45057942 #>>45061686 #
nine_k ◴[] No.45057115[source]
I'd rather say that LLMs live in a world that consists entirely of stories: nothing but words and their combinations. They have no other reality. So they are good at generating more stories that sit well with the stories they already know. But the stories are often imprecise, and sometimes contradictory, so they have to guess. Also, LLMs don't know how to count, but they know that two usually follows one, and that three is usually said to be larger than two, so they can speak in a way that mostly does not contradict this knowledge. And they can use tools to count, like a human who knows digits would use a calculator.
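
A minimal sketch of that tool-use pattern (all names here are hypothetical, not any particular vendor's API): the model emits a structured tool call instead of doing the arithmetic itself, and the host executes it.

    # Hypothetical sketch of tool use: the "model" delegates counting
    # to a calculator instead of guessing at the digits itself.
    def calculator(expression):
        # Deliberately restricted to plain arithmetic characters.
        if not set(expression) <= set("0123456789+-*/(). "):
            raise ValueError("unsupported expression")
        return eval(expression)  # acceptable given the whitelist above

    def run_turn(model_output):
        # model_output stands in for what an LLM produced this turn.
        if model_output.get("tool") == "calculator":
            result = calculator(model_output["arguments"]["expression"])
            return f"The answer is {result}."
        return model_output.get("text", "")

    # The model chose to call the tool rather than guess at a product:
    print(run_turn({"tool": "calculator",
                    "arguments": {"expression": "1234 * 5678"}}))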

But much more than an arithmetic engine, the current crop of AI needs an epistemic engine: something that would help it follow logic, avoid contradictions, and determine what is a well-established fact versus a shaky conjecture. Then we might start trusting the AI.

replies(3): >>45057276 #>>45061104 #>>45061385 #
1. gnerd00 ◴[] No.45057276[source]
This was true, but then it wasn't... A few years ago the research world had a moment when the machinery could reliably solve multi-step problems (ones requiring intermediate results), and could solve problems in domains it was not specifically trained on. This caused a lot of excitement, and several hundred billion dollars in various investments. Since no one actually knows how all of it works, not even the builders, here we are.
replies(2): >>45058433 #>>45058566 #
2. utyop22 ◴[] No.45058433[source]
"Since no one actually knows how all of it works, not even the builders, here we are."

To me this is the most bizarre part. Have we ever had a technology deployed at this scale without a true understanding of its inner workings?

My fear is that the general public's perception of AI will be damaged, since for most people, LLMs = AI.

replies(2): >>45059437 #>>45059539 #
3. achierius ◴[] No.45058566[source]
Are you sure you're talking about LLMs? These sound more like traditional ML systems like AlphaFold or AlphaProof.
4. riwsky ◴[] No.45059437[source]
Humanity used fire for like a bazillion years before figuring out thermodynamics
5. SillyUsername ◴[] No.45059539[source]
This is a misconception: we absolutely do know how LLMs work; that's how we can write them and publish research papers.

The idea that we don't is tabloid journalism. It arises because the output is (usually) randomised, which is taken to mean, by those who lack the technical chops, that programmers "don't know how it works" because the output is nondeterministic.

This is notwithstanding that we can absolutely reproduce the output by turning off the randomisation (temperature 0).
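
To make the temperature point concrete, here is a toy sketch (numpy only, fake logits in place of a real model) of where the randomisation actually lives: at temperature 0, sampling degenerates into always taking the most likely token, so the output is repeatable.

    import numpy as np

    def sample_token(logits, temperature, rng):
        # Temperature 0 degenerates into greedy argmax: deterministic.
        if temperature == 0:
            return int(np.argmax(logits))
        # Otherwise rescale the logits and sample; this is the only
        # source of randomness, not some mystery inside the weights.
        scaled = np.asarray(logits, dtype=float) / temperature
        probs = np.exp(scaled - scaled.max())
        probs /= probs.sum()
        return int(rng.choice(len(probs), p=probs))

    logits = [2.0, 1.0, 0.5]  # toy next-token scores
    rng = np.random.default_rng(0)
    print([sample_token(logits, 0, rng) for _ in range(5)])    # always [0, 0, 0, 0, 0]
    print([sample_token(logits, 1.0, rng) for _ in range(5)])  # varies run to run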