
674 points peterkshultz | 1 comment
joshvm No.45636243
One really important factor is the grading curve, if used. At my university, I think the goal was to give the average student 60% (a mid 2:1), with some formula for test-score adjustment to compensate for particularly tough papers. The idea is that your score ends up representing your ability relative to the cohort and the specific tests you were given.

https://warwick.ac.uk/fac/sci/physics/current/teach/general/...

There were several courses that were considered easy and were consequently well attended. You had to do significantly better in those classes to get a high grade, versus a low-attendance hard course where 50% on the test was curved up to 75%.

replies(5): >>45636312 #>>45636394 #>>45636437 #>>45636823 #>>45639950 #
airstrike No.45636312
I don't think I'll ever understand/accept the idea of curving grades.
replies(2): >>45636554 #>>45639490 #
buildbot No.45636554
It makes sense when applied across multiple instances of a test: if one cohort does terribly, curve it up; if one does really well, curve it down, relative to the overall distribution of scores.

But yeah, within a single assignment it makes no sense to force a specific distribution. (Maybe people do this because they don't understand the statistics?)
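A minimal sketch of that cross-instance normalization (my own illustration, not any particular university's method): z-score each cohort against itself, then rescale onto the pooled distribution from everyone who sat the test.

```python
import statistics

def normalize(cohort_scores, overall_mean, overall_sd):
    """Map a cohort's scores onto the overall distribution:
    z-score within the cohort, then rescale with the pooled
    mean/sd. A weak cohort is pulled up, a strong one pulled
    down, relative to the full population of test-takers."""
    m = statistics.mean(cohort_scores)
    sd = statistics.stdev(cohort_scores)
    return [overall_mean + overall_sd * (s - m) / sd for s in cohort_scores]

# A cohort averaging 50 against a pooled mean of 65 (sd 10):
normalize([40, 50, 60], overall_mean=65, overall_sd=10)  # -> [55.0, 65.0, 75.0]
```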

replies(1): >>45638564 #
airstrike No.45638564
Even in that case it doesn't make sense. Why should the underperforming cohort be rewarded for doing poorly?
replies(4): >>45639623 #>>45639669 #>>45639925 #>>45641550 #
supersour No.45639925
I think the prior, in the Bayesian sense, is that the two entering cohorts are equally skilled (assuming students were randomly split into sections, rather than different sections drawing from different student bodies). If that holds, performance differences between cohorts on a standardized test are attributable to the professor (maybe one of them didn't cover the right material), so normalization can be justified.

However, if that prior is untrue for any reason whatsoever, normalization penalizes the higher-performing cohort (in a math course, say, a section dominated by engineering students versus one dominated by arts students).

So I guess... it depends.
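That failure mode is easy to see with a toy mean-shift curve (hypothetical sections and numbers, just to make the point): if one cohort is genuinely stronger, forcing both onto the same target erases a real 30-point gap.

```python
import statistics

def curve_to_target(scores, target_mean=60.0):
    # Shift the whole cohort so its mean hits the target.
    shift = target_mean - statistics.mean(scores)
    return [s + shift for s in scores]

# If the equal-skill prior holds, this only removes the professor
# effect. But with a genuinely stronger section:
engineering = [70, 80, 90]   # hypothetical stronger cohort
arts        = [40, 50, 60]   # hypothetical weaker cohort

curve_to_target(engineering)  # -> [50.0, 60.0, 70.0]  (penalized)
curve_to_target(arts)         # -> [50.0, 60.0, 70.0]  (rewarded)
```

Both sections end up with identical curved grades despite very different raw performance.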

replies(1): >>45640427 #
airstrike No.45640427
Right, and if it depends, maybe we just don't do it then?

Intuitively, and in my experience, course content and exams are generally stable over many years, with only minor modifications as they evolve. Different professors can even set nearly identical exams for a given course, precisely to allow for better comparison.