559 points Gricha | 14 comments
1. postalcoder ◴[] No.46233143[source]
One of my favorite personal evals for llms is testing their stability as reviewers.

The basic gist of it is to give the llm some code to review and have it assign a grade multiple times. How much variance is there in the grade?

Then, prompt the same llm to be a "critical" reviewer of the same code multiple times. How much does the average grade shift under the critical framing?

A low variance of grades across many generations and a low delta between "review this code" and "review this code with a critical eye" is a major positive signal for quality.
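
Concretely, the loop is something like this. A rough sketch in Python, not my actual harness; the model name, prompt wording, and grade parsing are all placeholders:

    import re
    import statistics
    from openai import OpenAI

    client = OpenAI()

    # Letter grades mapped onto a 10-point scale so variance is easy to compute.
    GRADE_POINTS = {"A+": 10, "A": 10, "A-": 9, "B+": 8, "B": 7, "B-": 6,
                    "C+": 5, "C": 4, "C-": 3, "D": 2, "F": 1}

    def grade_once(code: str, critical: bool = False) -> int:
        prompt = ("Review this code with a critical eye." if critical
                  else "Review this code.")
        resp = client.chat.completions.create(
            model="gpt-5.1",  # whichever model is being evaluated
            messages=[{"role": "user",
                       "content": f"{prompt} End with 'Grade: <letter>'.\n\n{code}"}],
        )
        match = re.search(r"Grade:\s*([A-F][+-]?)", resp.choices[0].message.content)
        return GRADE_POINTS[match.group(1)]

    def stability(code: str, n: int = 10) -> dict:
        plain = [grade_once(code) for _ in range(n)]
        critical = [grade_once(code, critical=True) for _ in range(n)]
        return {
            "plain_stdev": statistics.stdev(plain),  # low = stable reviewer
            "critical_delta": statistics.mean(plain) - statistics.mean(critical),
        }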

I've found that gpt-5.1 produces remarkably stable evaluations whereas Claude is all over the place. Furthermore, Claude will completely [and comically] change the tenor of its evaluation when asked to be critical whereas gpt-5.1 is directionally the same while tightening the screws.

You could also interpret these results to be a proxy for obsequiousness.

Edit: One major part of the eval i left out is "can an llm converge on an 'A'?" Let's say the llm gives the code a 6/10 (or B-). When you implement its suggestions and then provide the improved code in a new context, does the grade go up? Furthermore, can it eventually give itself an A, and consistently?
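
The convergence check builds on the grade_once/client sketch above; the rewrite prompt is a placeholder, and trusting the reply to be pure code is a simplification:

    def converge(code: str, max_rounds: int = 5) -> list[int]:
        grades = []
        for _ in range(max_rounds):
            grades.append(grade_once(code))       # fresh context each round
            if grades[-1] >= GRADE_POINTS["A"]:   # converged on an 'A'
                break
            resp = client.chat.completions.create(
                model="gpt-5.1",
                messages=[{"role": "user",
                           "content": "Review this code, then output a revised version "
                                      f"that implements your own suggestions:\n\n{code}"}],
            )
            code = resp.choices[0].message.content  # naive: assumes the reply is just code
        return grades  # ideally rises round over round and ends at 10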

It's honestly impressive how good, stable, and convergent gpt-5.1 is. Claude is not great. I have yet to test it on Gemini 3.

replies(4): >>46233792 #>>46233975 #>>46234427 #>>46234966 #
2. guluarte ◴[] No.46233792[source]
My experience with PR reviews is that sometimes it says the PR is perfect with some nitpicks, and other times it says the same PR is trash and needs a lot of work.
replies(1): >>46233901 #
3. ◴[] No.46233901[source]
4. adastra22 ◴[] No.46233975[source]
You mean literally assign a grade, like B+? This is unlikely to work given how token prediction & temperature work. You're going to get a probability distribution in the end that reflects the model's runtime parameters, not the intelligence of the model.
replies(2): >>46234085 #>>46241773 #
5. ◴[] No.46234085[source]
6. OsrsNeedsf2P ◴[] No.46234427[source]
How is this different than testing the temperature?
replies(2): >>46235040 #>>46238450 #
7. lemming ◴[] No.46234966[source]
I agree, I mostly use Claude for writing code, but I always get GPT5 to review it. Like you, I find it astonishingly consistent and useful, especially compared to Claude. I like to reset my context frequently, so I’ll often paste the problems from GPT into Claude, then get it to review those fixes (going around that loop a few times), then reset the context and get it to do a new full review. It’s very reassuring how consistent the results are.
8. smt88 ◴[] No.46235040[source]
It isn't, and it reflects how deeply LLMs are misunderstood, even by technical people
replies(3): >>46241319 #>>46241466 #>>46241736 #
9. itishappy ◴[] No.46238450[source]
How does temperature explain the variance in response to the inclusion of the word "critical"?
10. swid ◴[] No.46241319{3}[source]
It surely is different. If you set the temp to 0 and do the test with slightly different wording, there is no guarantee at all the scores would be consistent.

And if an LLM is consistent, even with a high temp, it can give the same PR the same grade while choosing different words to say it.

The tokens are still chosen from the distribution, so if the model puts enough probability on the same grade, that grade will keep being chosen regardless of the temperature setting.
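
A toy illustration of that point, with made-up logits for the grade token (nothing measured from a real model):

    import math

    def softmax(logits, temp):
        exps = [math.exp(x / temp) for x in logits]
        total = sum(exps)
        return [e / total for e in exps]

    grades = ["A", "B", "C"]
    logits = [1.0, 6.0, 0.5]  # hypothetical: the model strongly "believes" this is a B

    for temp in (0.2, 0.7, 1.0, 1.5):
        probs = dict(zip(grades, softmax(logits, temp)))
        print(temp, {g: round(p, 3) for g, p in probs.items()})

    # P("B") is ~1.0 at temp 0.2, ~0.99 at temp 1.0, and still ~0.94 at temp 1.5,
    # so sampling returns the same grade on almost every run.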

replies(1): >>46243038 #
11. stevenhuang ◴[] No.46241466{3}[source]
The irony is strong here.
12. postalcoder ◴[] No.46241736{3}[source]
gpt-5* reasoning models do not have an adjustable temperature parameter. It seems like we may have a different understanding of these models.

And, like the other commenter said, the temperature may change the distribution of the next token, but the reasoning tends to reel those things in, which is why reasoning models are notoriously poor at creative writing.

You are free to run these experiments for yourself. Perhaps, with your deeper understanding, you'll shed new light on this behavior.

13. postalcoder ◴[] No.46241773[source]
the gpt-5 reasoning models do not have a configurable temperature.

There's a reason why reasoning models are bad for creative writing. The thinking constrains the output.

14. smt88 ◴[] No.46243038{4}[source]
I think you're restating (in a longer and more accurate way) what I understood the original criticism to be: that this grading test isn't testing what it's supposed to, partly because a grade is too few tokens.

The model could "assess" the code qualitatively the same and still give slightly different letter grades.