
54 points | tudorizer | 1 comment
felineflock No.44371711
It is a new nature of abstraction, not a new level.

UP: It lets us state intent in plain language, specs, or examples. We can ask the model to invent code, tests, docs, diagrams—tasks that previously needed human translation from intention to syntax.

BUT SIDEWAYS: Generation is a probability distribution over tokens. Outputs vary with sampling temperature, seed, context length, and even with identical prompts.
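
A toy sketch of that "probability distribution over tokens", with made-up logits and a made-up vocabulary (nothing here comes from a real model): temperature reshapes the distribution before sampling, and the RNG seed decides which token actually comes out.

    import numpy as np

    def sample_next_token(logits, temperature, rng):
        # Temperature rescales the logits before softmax:
        # low T sharpens the distribution, high T flattens it.
        scaled = logits / max(temperature, 1e-8)
        probs = np.exp(scaled - scaled.max())
        probs /= probs.sum()
        return rng.choice(len(logits), p=probs)

    vocab = ["print", "return", "assert", "raise"]   # toy vocabulary
    logits = np.array([2.1, 1.9, 0.3, -1.0])         # toy model output for one step

    for temperature in (0.2, 1.0):
        for seed in (0, 1):
            rng = np.random.default_rng(seed)
            tok = vocab[sample_next_token(logits, temperature, rng)]
            print(f"T={temperature} seed={seed} -> {tok}")

Same "prompt" (logits), different temperature or seed, different token: that is the sideways part.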

replies(2): >>44403418 #>>44403438 #
dcminter No.44403418
Surely, given an identical prompt with a clean context and the same seed, the outputs will not vary?
replies(2): >>44403454 #>>44404213 #
diggan No.44403454
+ temperature=0.0 would also be needed for reproducible outputs. And even with that, whether it's actually reproducible depends on the model/weights themselves; not all of them are deterministic even when all of those things are held constant. And finally it depends on the implementation of the model architecture as well.
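
As a concrete sketch of those knobs, assuming a local Hugging Face transformers setup and using the small gpt2 checkpoint purely as a stand-in: do_sample=False gives greedy (argmax) decoding, i.e. the temperature-goes-to-zero case, while sampled decoding is where the seed matters. Even then, bit-for-bit agreement between runs still depends on the backend (hardware, kernels, batching), which is the point above.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    prompt = "Reproducibility in LLMs depends on"
    inputs = tok(prompt, return_tensors="pt")

    # Greedy decoding: always take the argmax token, so no sampling randomness.
    greedy = model.generate(**inputs, do_sample=False, max_new_tokens=20)

    # Sampled decoding: temperature > 0, so the seed now matters.
    torch.manual_seed(0)
    sampled = model.generate(**inputs, do_sample=True, temperature=0.8, max_new_tokens=20)

    print(tok.decode(greedy[0], skip_special_tokens=True))
    print(tok.decode(sampled[0], skip_special_tokens=True))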

I think the tricky part is that we tend to assume prompts with similar semantic meaning will give the same outputs (like a human would), while LLMs can give vastly different outputs if you have a single spelling mistake, for example, or used "!" instead of "?"; the effect varies greatly per model.
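
One way to see why a single character matters: the model only conditions on token IDs, and near-identical strings can tokenize quite differently. A small sketch, again assuming the gpt2 tokenizer as an arbitrary example of a BPE tokenizer:

    from transformers import AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")

    for prompt in ("Can you fix this bug?",
                   "Can you fix this bug!",
                   "Can you fix this bugg?"):   # one typo
        ids = tok.encode(prompt)
        print(ids, "->", tok.convert_ids_to_tokens(ids))

A one-token difference shifts the whole sequence the model conditions on, so the continuation can drift; how far it drifts varies per model.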

replies(2): >>44403803 #>>44403989 #
smokel No.44403989
> I think the tricky part is that we tend to think that prompts with similar semantic meaning will give the same outputs (like a human)

Trust me, this response would have been totally different if I were in a different mood.