Taking old, resolved scandals, slapping a coat of culture-war paint on them, and then selling them as new scandals is already a popular MO for state-sponsored propaganda, so we should be extra wary of stories like this being massaged.
Respectfully, that's not accurate.
The article actually shows that DEI considerations were central to the original changes, not just recent framing. The FOIA requests show explicit discussions about "diversity vs performance tradeoffs" from the beginning. The NBCFAE role and the "barrier analysis" were both explicitly focused on diversity outcomes in 2013.
The article provides primary sources (internal FAA documents, recorded messages, investigation reports) showing that racial considerations were explicitly part of the decision-making process from the start. This is documented in real-time communications.
The scandal involved both improper hiring practices (cheating) AND questionable DEI implementation. These aren't mutually exclusive; they're interrelated aspects of the same event.
> Taking old, resolved scandals
In what way do you consider this resolved?
The class action lawsuit hasn't even gone to trial yet (2026).
The FAA is still dealing with controller shortages: facilities are operating understaffed, controllers are working six-day weeks, and training pipelines remain backed up.
The relationship between the FAA and CTI schools remains damaged, and applicant numbers have declined significantly since 2014.
For example, here's an FAA slide from 2013 which explicitly frames DEI as a central consideration ("- How much of a change in job performance is acceptable to achieve what diversity goals?"):
The evidence in this source does not discuss cronyism. I believe you that cronyism could have been relevant to your personal experience, but it's simply false to claim the issue as a whole was unrelated to DEI.
Performance on the AT-SAT is not job performance.
If you have a qualification test that feels useful but also turns out to be highly non-predictive of job performance (as, for example, most college entrance exams turn out to be for college performance), you could change the qualification threshold for the test without any particular expectation of losing job performance.
In fact, it is precisely this logic that led many universities to stop using admissions tests - they just failed to predict actual performance very well at all.
No, but it was the best predictor of job performance and academy pass rate there was.
https://apps.dtic.mil/sti/pdfs/ADA566825.pdf
https://www.faa.gov/sites/faa.gov/files/data_research/resear... (page 41)
There are a fixed number of seats at the ATC academy in OKC, so it's critical to get the highest quality applicants possible to ensure that the pass rate is as high as possible, especially given that the ATC system has been understaffed for decades.
> "The empirically-keyed, response-option scored biodata scale demonstrated incremental validity over the computerized aptitude test battery in predicting scores representing the core technical skills of en route controllers."
I.e., the aptitude test battery is WORSE than the biodata scale.
The second citation you offered merely notes that the AT-SAT battery is a better predictor than the older OPM battery, not that it is the best.
I'd also say at a higher level that both of those papers absolutely reek of the non-reproducibility and low-N problems that plague social and psychological research. I'm not saying they're wrong. They are just not obviously definitive.
You're mistaken, it's the opposite. The first one found that AT-SAT performance was the best measure, with the biodata providing a small enhancement:
> AT-SAT scores accounted for 27% of variance in the criterion measure (β = 0.520, adjusted R² = .271, p < .001). Biodata accounted for an additional 2% of the variance in CBPM (β = 0.134; adjusted ΔR² = 0.016, ΔF = 5.040, p < .05).
> In other words, after taking AT-SAT into account, CBAS accounted for just a bit more of the variance in the criterion measure
Hence, "incremental validity."
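To make the "incremental validity" reading concrete, here's a minimal sketch in Python (numpy only) of a hierarchical regression on made-up data: fit the criterion on the primary predictor alone, then on both predictors, and compare R². The variable names and coefficients are hypothetical, chosen only to mimic the pattern in the quoted paper (a strong primary predictor plus a small increment from the second).

```python
# Hypothetical illustration of incremental validity via hierarchical OLS.
# All names and effect sizes here are made up; they only mirror the pattern
# described in the paper (AT-SAT strong, biodata adding a small delta R^2).
import numpy as np

rng = np.random.default_rng(0)
n = 500
at_sat = rng.normal(size=n)                  # primary predictor
biodata = 0.3 * at_sat + rng.normal(size=n)  # second, weakly correlated predictor
criterion = 0.52 * at_sat + 0.13 * biodata + rng.normal(size=n)

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), np.atleast_2d(X.T).T])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_base = r_squared(at_sat, criterion)                               # step 1
r2_full = r_squared(np.column_stack([at_sat, biodata]), criterion)   # step 2
print(f"R^2 (AT-SAT only): {r2_base:.3f}")
print(f"R^2 (+ biodata):   {r2_full:.3f}")
print(f"Delta R^2:         {r2_full - r2_base:.3f}")
```

Adding a predictor can only raise in-sample R², so the question is never whether ΔR² is positive but whether it's large; a ΔR² of ~0.02 on top of ~0.27 is exactly the "small enhancement" pattern described above.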
> The second citation you offered merely notes that the AT-SAT battery is a better predictor than the older OPM battery, not that is the best.
You're right, and I can't remember which study it was that explicitly said that it was the best measure. I'll post it here if I find it. However, given that each failed applicant costs the FAA hundreds of thousands of dollars, we can safely assume that there was no better measure readily available at the time, or it would have been used instead of the AT-SAT. Currently they use the ATSA instead of the AT-SAT, which is supposed to be a better predictor, and they're planning on replacing the ATSA in a year or two; it's an ongoing problem with ongoing research.
> I'd also say at a higher level that both of those papers absolutely reek of non-reproduceability and low N problems that plague social and psychological research. I'm not saying they're wrong. They are just not obviously definitive.
Given the limited number of controllers, this is going to be an issue in any study you find on the topic. You can only pull so many people off the boards to take these tests, so you're never going to have an enormous sample size.