
321 points | distantprovince | 1 comment
zer00eyz ◴[] No.44617467[source]
This is an interesting take.

Because all an LLM is, is a reflection of its input.

Garbage in, garbage out.

If we're going to have this rule about AI, maybe we should have it about... everything: from your mom's last Facebook post, to what influencers say, to this post...

Say less. Do more.

replies(3): >>44617499 #>>44617528 #>>44617550 #
1. majormajor ◴[] No.44617550[source]
That's not really true at all, at least at the end user level.

You can write a very thoughtful LLM prompt and still get a garbage response if the model fails to generate a solid, sound answer. Hard questions with verifiable but obscure answers, for instance, where it invents fake citations.

Conversely, you can write a garbage prompt and get non-garbage output if you're asking about a well-understood problem in a well-understood area.

And the current generation of company-provided LLMs is VERY heavily tuned to make the answer look non-garbage in all cases, increasing the cognitive load on you to figure out whether it actually is.