
A non-anthropomorphized view of LLMs

(addxorrol.blogspot.com)
475 points by zdw | 1 comment
1. gcanyon:
In some contexts it's super-important to remember that LLMs are stochastic word generators.
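
To make that concrete, here is a toy sketch of what "stochastic word generator" means: the model emits a probability distribution over possible next tokens, and the sampler draws from it. The vocabulary and logits below are invented for illustration.

    import math
    import random

    vocab = ["the", "cat", "sat", "on", "mat"]  # toy vocabulary
    logits = [2.0, 1.0, 0.5, 0.2, 0.1]          # hypothetical model outputs

    def sample_next(logits, temperature=1.0):
        # Softmax with temperature: higher T flattens the distribution,
        # so the same prompt can yield a different next word on each call.
        scaled = [l / temperature for l in logits]
        m = max(scaled)
        weights = [math.exp(s - m) for s in scaled]
        return random.choices(vocab, weights=weights, k=1)[0]

    print(sample_next(logits, temperature=0.7))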

Everyday use is not (usually) one of those contexts. Prompting an LLM works much better with an anthropomorphized view of the model. It's a useful abstraction, a shortcut that enables a human to reason practically about how to get what they want from the machine.

It's not a perfect metaphor -- as one example, shame isn't much of a factor for LLMs, so shaming them into producing the right answer seems unlikely to be productive (I say "seems" because it's never been my go-to; I haven't actually tried it).

Case in point: the person a few years back who told the LLM that an actual human would die if it didn't produce valid JSON -- that's not a tactic anyone reasoning about gradient descent would naturally reach for.
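
For reference, that style of prompt looks roughly like the sketch below. This is a hypothetical reconstruction, not the original wording; the stakes are pure fiction, but they reportedly nudged the model toward compliant output.

    # Hypothetical reconstruction of a "stakes-raising" prompt; the threat
    # is fictional and exists only to bias the model's output.
    prompt = (
        "Respond with ONLY valid JSON matching this schema: "
        '{"answer": string}. '
        "This output is parsed automatically, and a real person will "
        "die if the JSON is malformed."
    )
    print(prompt)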