
University of Cambridge Cognitive Ability Test

(planning.e-psychometrics.com)
101 points | indigodaddy | 4 comments
hirvi74 ◴[] No.45077200[source]
I still do not understand why we are wasting scientific resources trying to stack rank humans on arbitrarily defined concepts like cognitive ability or intelligence.

After over a century of psychometric research in cognitive abilities and intelligence, what do we have to show for it? Whose life has actually improved for the better? Have the benefits from such research, if any, outweighed the amount of harm that has already been caused?

replies(13): >>45077238 #>>45077239 #>>45077255 #>>45077278 #>>45077284 #>>45077312 #>>45077319 #>>45077343 #>>45077475 #>>45077495 #>>45077558 #>>45077983 #>>45078303 #
rayiner ◴[] No.45078303[source]
It’s not an arbitrary concept. It’s reliably measured and correlated with lots of factors we care about: https://www.vox.com/2016/5/25/11683192/iq-testing-intelligen...

The benefits have been huge. The Chinese realized this a thousand years ago when they invented civil service exams: https://en.wikipedia.org/wiki/Imperial_examination.

replies(1): >>45079988 #
1. hirvi74 ◴[] No.45079988[source]
A lot of things are correlated. Let me know when causation is determined.

Also, your Vox link is paywalled, but I am fairly well versed in some of the data regardless. I keep my own archive of research on this topic, for what it is worth (likely not much).

Any hoot, the correlations, while positive, are nothing to write home about in my opinion. Sure, IQ might have more breadth of predictability, but it definitely lacks depth of predictability compared to more granular, domain-specific models.

For example, IQ is not a better predictor of chess performance than, say, performance in a chess tournament.

replies(2): >>45080068 #>>45080075 #
2. Jensson ◴[] No.45080068[source]
> For example, IQ is not a better predictor of chess performance than, say, performance in a chess tournament.

So we should determine who to give chess lessons to with chess tournaments? That seems pretty dumb.

There are many cases where we don't want to select for current ability but for potential ability, and there a direct skill test like the one you suggest is a much worse predictor than IQ.

replies(1): >>45086435 #
3. rayiner ◴[] No.45080075[source]
The breadth of predictability is why it’s such an effective measure. Most tasks involve many different skills, so it’s helpful to have a single measure that’s correlated with a bunch of different competencies. That’s why we use what are essentially IQ tests in everything from assigning jobs in the military (ASVAB) to selecting lawyers (LSAT). There’s tremendous social value in a single test that can scalably sort through millions of people, even if it’s not the most predictive test for a specific problem domain or a specific individual.

IQ predicts chess performance as well: https://www.sciencedaily.com/releases/2016/09/160913124722.h...

4. hirvi74 ◴[] No.45086435[source]
> So we should determine who to give chess lessons to with chess tournaments? That seems pretty dumb.

By your logic, we could even declare grandmasters based on IQ scores alone without anyone needing to play. Clearly that misses the point of skill assessment.

History also doesn’t support the claim that IQ measures potential ability all that well, in my opinion. Lewis Terman’s study tracked high-IQ children across several decades. Many of those children went on to lead ordinary lives and did not reach noteworthy achievements. Meanwhile, two children who were excluded for not meeting the IQ cutoff went on to win Nobel Prizes. IQ alone does not seem to be a robust predictor of domain mastery.