
308 points tangjurine | 1 comment
Aurornis No.43529859
I'm all for installing air filters in classrooms for a number of reasons, but I also think the extreme results from this study aren't going to hold up to further research.

From the paper:

> To do so, I leverage a unique setting arising from the largest gas leak in United States history, whereby the offending gas company installed air filters in every classroom, office and common area for all schools within five miles of the leak (but not beyond). This variation allows me to compare student achievement in schools receiving air filters relative to those that did not using a spatial regression discontinuity design.

In other words, the paper compared test scores at different schools in different areas across different years and assumed that the only relevant change was the air filters. Anyone who has worked with school kids knows that the variation between classes from year to year can be extreme, as can differences produced by different teachers or even school policies.

Again, I think air filtration is great indoors, but expecting test scores to improve this dramatically is not realistic. This feels like another extremely exaggerated health claim, like the past claims made about fish oil supplements. Fish oil was briefly thought to have extreme positive health benefits based on a number of very small studies like this one, but as sample sizes grew and study quality improved, most of the beneficial effects disappeared.

1. stdbrouw No.43532636
I would also expect the estimated magnitude of the effect to shrink over time, but that's just my general attitude toward these kinds of results. The fact is that the discontinuity design the paper uses already accounts for variation between classes, teachers, schools, and years. The way it works is that an unexpected event that applies to some people but not others is treated as a natural experiment, and then variation between groups before the event is compared to variation between groups after the event. The comparison is never against no variation.
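The before/after, treated/control comparison described above is essentially a difference-in-differences estimate. A toy sketch with simulated data (all numbers invented for illustration, not taken from the paper) shows how fixed group differences and common year-to-year shifts cancel out, leaving only the treatment effect:

```python
import random
from statistics import mean

random.seed(0)

def simulate(n, base, trend, effect):
    # base: school-level mean; trend: year-to-year drift shared by all
    # schools; effect: treatment effect (nonzero only for treated, post-event)
    return [base + trend + effect + random.gauss(0, 5) for _ in range(n)]

n = 200
treated_pre  = simulate(n, base=50, trend=0, effect=0)
treated_post = simulate(n, base=50, trend=2, effect=3)  # assumed true effect = 3
control_pre  = simulate(n, base=55, trend=0, effect=0)
control_post = simulate(n, base=55, trend=2, effect=0)

# Difference-in-differences: change in treated minus change in control.
# The baseline gap between groups (+5) and the common shock (+2) both
# cancel; what remains is an estimate of the treatment effect.
did = (mean(treated_post) - mean(treated_pre)) - (mean(control_post) - mean(control_pre))
print(round(did, 2))  # lands near the assumed effect of 3, up to noise
```

This is why "classes vary a lot from year to year" is not by itself a refutation: variation common to both groups is differenced away, and only variation that coincides with the treatment boundary biases the estimate.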

The smoking gun is really in Table 3 and Table 4, where you can see that the observed effects are compatible with a population effect of 0; alternatively, look at Figure 2 and note that you could draw a straight line (no effect) within the confidence bands. That doesn't mean the effect isn't there, only that there's insufficient evidence that it is, and that we should indeed be very careful about taking the estimates at face value.