
321 points distantprovince | 4 comments
1. zer00eyz No.44617467
This is an interesting take.

Because all an LLM is, is a reflection of its input.

Garbage in, garbage out.

If we're going to have this rule about AI, maybe we should have it about... everything. From your mom's last Facebook post, to what influencers say, to this post...

Say less. Do more.

replies(3): >>44617499 #>>44617528 #>>44617550 #
2. echelon No.44617499
Previously there was some requirement for novel synthesis. You at least had to string your thoughts together into some kind of argument.

Now that's no longer the case, and there are lazy or unthoughtful people who simply pass along AI outputs, raw and completely unprocessed, as cognitive work for other human beings to deal with.

3. z3c0 No.44617528
An LLM's output being a reflection of its input would imply determinism, which is the opposite of its value prop. "Garbage in, garbage out" is an adage born from traditional data pipelines. "Anything in; generic slop, possibly garbage, out" is the new status quo.
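A toy sketch of the non-determinism point (not any vendor's actual API): with a sampling temperature above zero, the same prompt, i.e. the same logits, can yield different tokens from run to run, while temperature zero collapses to deterministic greedy decoding.

```python
import math
import random

def sample_next_token(logits, temperature):
    """Pick a token index from raw scores (logits).

    temperature == 0 means greedy decoding: always the argmax, deterministic.
    temperature > 0 means softmax sampling: the same logits can give
    different tokens on different calls.
    """
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    # Temperature-scaled softmax (subtract max for numerical stability).
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(logits)), weights=probs)[0]

logits = [2.0, 1.0, 0.1]  # toy scores for three candidate tokens

greedy = sample_next_token(logits, 0)  # always index 0 for these logits
sampled = {sample_next_token(logits, 1.5) for _ in range(200)}  # several indices
```

The same input producing a set of possible outputs, rather than one fixed output, is what "reflection of its input" glosses over.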
4. majormajor No.44617550
That's not really true at all, at least at the end user level.

You can have a very thoughtful LLM prompt and get a garbage response if the model fails to generate a solid, sound answer to your prompt. Hard questions with verifiable but obscure answers, for instance, where it generates fake citations.

You can have a garbage prompt and get not-garbage output if you are asking in a well-understood area with a well-understood problem.

And the current generation of company-provided LLMs is trained VERY hard to make the answer look non-garbage in all cases, increasing the cognitive load on you to figure out whether it actually is.