
54 points tudorizer | 1 comment
felineflock No.44371711
It is a new nature of abstraction, not a new level.

UP: It lets us state intent in plain language, specs, or examples. We can ask the model to invent code, tests, docs, diagrams—tasks that previously needed human translation from intention to syntax.

BUT SIDEWAYS: Generation is a probability distribution over tokens. Outputs vary with sampling temperature, seed, context length, and even with identical prompts.
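That "probability distribution over tokens" can be sketched concretely. Below is a minimal toy sampler (pure standard library, toy logits, not any real model's API): temperature rescales the logits before softmax, and the random seed controls which token the draw lands on. Same seed, same pick; different seed or higher temperature, potentially different picks.

```python
import math
import random

def sample_next_token(logits, temperature=1.0, seed=None):
    """Sample one token index from a toy logit vector.

    Higher temperature flattens the distribution (more varied picks);
    a fixed seed makes the draw reproducible.
    """
    rng = random.Random(seed)
    scaled = [l / temperature for l in logits]
    # Softmax with max-subtraction for numerical stability.
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Inverse-CDF sampling: walk the cumulative distribution.
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

logits = [2.0, 1.0, 0.5]
# Same seed -> same token every time (technically deterministic).
assert sample_next_token(logits, seed=42) == sample_next_token(logits, seed=42)
# Near-zero temperature collapses to argmax (greedy decoding).
assert sample_next_token(logits, temperature=0.01, seed=0) == 0
```

Real inference stacks add further variance sources (batching, floating-point non-associativity across hardware), but the seed/temperature mechanics are the ones the comment refers to.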

replies(2): >>44403418 >>44403438
dcminter No.44403418
Surely, given an identical prompt with a clean context and the same seed, the outputs will not vary?
replies(2): >>44403454 >>44404213
furyofantares No.44404213
You can make these things deterministic for sure, and so you could also store prompts plus model details instead of code if you really wanted to. There are lots of reasons this would be a very poor choice, but you could do it.
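The "store prompts plus model details instead of code" idea amounts to pinning every input to a deterministic generation. A hypothetical sketch (all field names are illustrative, not any real tool's format): record the model identifier, prompt, seed, and sampling settings, and fingerprint the bundle so two runs can be checked for reproducibility.

```python
from dataclasses import dataclass, asdict
import hashlib
import json

@dataclass(frozen=True)
class GenerationSpec:
    """Hypothetical record of everything needed to regenerate the
    same output from a fully deterministic model runtime."""
    model: str        # exact model/version identifier
    prompt: str       # the stored "source" text
    seed: int         # RNG seed for sampling
    temperature: float = 0.0  # 0.0 = greedy decoding

    def fingerprint(self) -> str:
        # Canonical JSON keeps the hash stable across runs.
        payload = json.dumps(asdict(self), sort_keys=True)
        return hashlib.sha256(payload.encode("utf-8")).hexdigest()

spec = GenerationSpec(model="example-model-v1", prompt="write a sort fn", seed=7)
# Identical specs always produce identical fingerprints...
assert spec.fingerprint() == GenerationSpec(
    model="example-model-v1", prompt="write a sort fn", seed=7
).fingerprint()
```

This captures only technical determinism: the fingerprint guarantees the same bytes in, not that a semantically equivalent prompt would yield the same code out.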

I don't think that's how you should think about these things being non-deterministic though.

Let's call that technical determinism, and then introduce a separate concept, practical determinism.

What I'm calling practical determinism is your ability, as the author, to predict (determine) the results. Two different prompts that mean the same thing to me will give different results, and my ability to reason about how changes to my prompt change the output is fuzzy. I can have a rough idea, and I can gain skill in this area, but I can't reach anything like the precision I have when reasoning about the results of code I author myself.