
674 points peterkshultz | 11 comments
joshvm ◴[] No.45636243[source]
One really important factor is the grading curve, if used. At my university, I think the goal was to give the average student 60% (a mid 2:1), with some formula for test score adjustment to compensate for particularly tough papers. The idea is that your score ends up representing your ability with respect to the cohort and the specific tests that you were given.

https://warwick.ac.uk/fac/sci/physics/current/teach/general/...

There were several courses that were considered easy, and as a consequence were well attended. You had to do significantly better in those classes to get a high grade, versus a low-attendance hard course where 50% in the test was curved up to 75%.
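The curving described above (a raw 50% lifted to 75% so the cohort average lands on a target) can be sketched as a simple linear shift. The function name, the cap at 100, and the target mean are illustrative assumptions, not Warwick's actual formula:

```python
def curve(raw_scores, target_mean=60.0):
    """Shift every score by the same amount so the cohort mean
    hits target_mean, capping at 100 (hypothetical formula)."""
    actual_mean = sum(raw_scores) / len(raw_scores)
    shift = target_mean - actual_mean
    return [min(100.0, s + shift) for s in raw_scores]

# A hard paper where the cohort averaged 50 gets lifted by 25 points:
# curve([40, 50, 60], target_mean=75.0) -> [65.0, 75.0, 85.0]
```

Real adjustment formulas often rescale the spread as well as the mean; this sketch only shifts.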

replies(5): >>45636312 #>>45636394 #>>45636437 #>>45636823 #>>45639950 #
airstrike ◴[] No.45636312[source]
I don't think I'll ever understand/accept the idea of curving grades.
replies(2): >>45636554 #>>45639490 #
1. buildbot ◴[] No.45636554[source]
It makes sense when applied across multiple instances of a test: if one cohort does terribly, curve up; if one does really well, curve them down, relative to the overall distribution of scores.
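One common way to do the cross-cohort adjustment described here is a z-score transform: map each cohort's scores onto a shared target distribution, so an unusually low-scoring cohort is pulled up and an unusually high-scoring one pulled down. The function name and target parameters are illustrative, not any particular university's policy:

```python
from statistics import mean, stdev

def normalize_across_cohorts(cohorts, target_mean=60.0, target_sd=10.0):
    """Map each cohort's scores onto a common distribution via z-scores
    (a standard technique; parameters here are made up)."""
    adjusted = {}
    for name, scores in cohorts.items():
        m, sd = mean(scores), stdev(scores)
        adjusted[name] = [target_mean + target_sd * (s - m) / sd
                          for s in scores]
    return adjusted
```

Note this erases mean differences between cohorts by construction, which is exactly the objection raised in the replies.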

But yeah, within a single assignment it makes no sense to force a specific distribution. (Maybe people do this because they don't understand it?)

replies(1): >>45638564 #
2. airstrike ◴[] No.45638564[source]
Even in that case it doesn't make sense. Why should the underperforming cohort be rewarded for doing poorly?
replies(4): >>45639623 #>>45639669 #>>45639925 #>>45641550 #
3. joshvm ◴[] No.45639623[source]
The idea is to identify if there is a particularly easy/hard exam and the average score of the cohort is significantly different to how they perform in other classes. "Doing poorly" is quite hard to define when none of the tests, perhaps outside of the core 1st and 2nd year modules, are standard.
replies(1): >>45640435 #
4. vlovich123 ◴[] No.45639669[source]
Did the cohort do poorly, or were the tests given to that cohort harder than in previous years? Or was the teacher a more difficult grader than others? You're jumping to the conclusion that the cohort was underperforming just because the grades were lower, when other things out of their control could have been involved.
replies(1): >>45640431 #
5. supersour ◴[] No.45639925[source]
I think the prior probability, in the Bayesian sense, is that the two entering cohorts are equally skilled (assuming students were randomly split into two sections, as opposed to different sections being composed of different student bodies). If this were the case, the implication is that performance differences in standardized tests between cohorts are due to the professor (maybe one of the profs didn't cover the right material), so then normalization could be justified.

However, if that prior is untrue for any reason whatsoever, the normalization would penalize higher-performing cohorts (if it were a math course, maybe an engineering-dominated section vs. an arts-dominated cohort).

So I guess.. it depends
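supersour's caveat can be shown with a toy example: if one section is genuinely stronger, curving both to the same target mean erases the real gap. The numbers and target are made up:

```python
from statistics import mean

def curve_to(scores, target=70.0):
    """Shift a cohort's scores so their mean equals target."""
    shift = target - mean(scores)
    return [s + shift for s in scores]

strong = [70, 80, 90]  # e.g. an engineering-heavy section
weak = [50, 60, 70]    # a section that simply scored lower

# Both cohorts come out as [60.0, 70.0, 80.0]: the stronger
# cohort's 20-point advantage vanishes after curving.
```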

replies(1): >>45640427 #
6. airstrike ◴[] No.45640427{3}[source]
Right, and if it depends, maybe we just don't do it then?

Intuitively and in my experience, course content and exams are generally stable over many years, with only minor modifications as they evolve. Even different professors can sometimes have nearly identical exams for a given course, precisely so as to allow for better comparison.

7. airstrike ◴[] No.45640431{3}[source]
Tests are generally almost identical YoY, whereas humans are all very different! I think I'm making the simpler argument here.
replies(1): >>45651305 #
8. airstrike ◴[] No.45640435{3}[source]
Tests can be consistent over time without being a true standard. Student competency can vary far more than test content.
replies(1): >>45644529 #
9. johnnyanmac ◴[] No.45641550[source]
Depends on the rigor. The typical grade school curriculum expects you to keep up and get 80-90% of the content on a first go. Colleges can experiment with a variety of other methods. It's college, so there's no sense of "standardized" content at this point.

For some, there's the idea of pushing a student to their limit and breaking their boundaries. A student getting 50% on a hard course may learn more and overall perform better in their career than if they were an A student in an easy course. Should they be punished because they didn't game the course and try to get the easy one?

And of course, someone getting 80% in such a course is probably truly the cream of the crop which would go unnoticed in an easy course.

10. lan321 ◴[] No.45644529{4}[source]
Not really, since then all students can learn the exam as a template after 2-3 exams leak.

The curving I knew at uni targeted exmatriculating 45% of students by the 3rd semester, and another 40% of those remaining by the end, so grades were adjusted so that X% would fail each exam. Then your target wasn't understanding the material but being better than half of the students taking it. The problems were complicated and time was severely limited, so it wasn't like you could really have a perfect score. Literally 1-2 people would get a perfect score in an exam taken by 1000 people, and many exams had no perfect score at all.

I was one of the exmatriculated, and moving to more standard tests made things much easier, since you can learn templates with no real understanding. For example, an exam with 5 tasks would have a pool of 10 possible tasks, each with 3-4 variations, and after a while the possibilities for variation would become clear, so you could make a good guess at what this semester's slight difference would likely be.

11. vlovich123 ◴[] No.45651305{4}[source]
The university I went to had student run test banks of previous exams that the administration sanctioned. If the following year you get the same question as the previous year, then you’re going to do better than the year that got the first version of that question.

You’re also ignoring the human element of grading particularly in subjective parts of an exam.