896 points | tux3 | 3 comments
jerf | No.43546861
One of my Core Memories when it comes to science, science education, and education in general was in my high school physics class, where we had to do an experiment to determine the gravitational acceleration of Earth. This was done via the following mechanism: roll a ball off of a standard classroom table, use a 1990s wristwatch's stopwatch mechanism to start the clock when the ball rolls off the table, and stop the stopwatch when the ball hits the floor.

Anyone who has ever had a wristwatch of similar vintage should know how hard it is to get anything like precision out of those things. It's a millimeter-sized button with a millimeter of travel, and it can easily take half a second of jabbing at it to get it to trigger. It's for measuring your mile times in minutes, not fall times in fractions of a second.

Naturally, our data was total, utter crap. Any sensible analysis would have produced error bars that, if you treat the problem linearly, would have put zero and negative values inside them. I dutifully crunched the numbers, determined that the gravitational acceleration was something like 6.8 m/s^2, and turned it in.
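For reference, the underlying kinematics are simple: a ball leaving a table of height h falls for t = sqrt(2h/g), so the class was effectively computing g = 2h/t^2. A quick sketch (the table height and timing errors below are invented for illustration, not the commenter's actual data) shows how brutally a few tenths of a second of stopwatch lag punishes that estimate:

```python
# Estimate g from a drop of height h and a measured fall time t:
# h = g * t^2 / 2  =>  g = 2 * h / t^2.
h = 0.80                          # assumed table height in meters
true_t = (2 * h / 9.81) ** 0.5    # ideal fall time, about 0.40 s

for lag in (0.00, 0.05, 0.10):    # extra seconds from jabbing the button
    t = true_t + lag
    g = 2 * h / t ** 2
    print(f"stopwatch lag {lag:.2f} s -> g = {g:.2f} m/s^2")
```

With only 0.1 s of lag on a roughly 0.4 s fall, the estimate drops from 9.81 to about 6.3 m/s^2, right in the neighborhood of the 6.8 reported above.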

Naturally, I got a failing grade, because that's not particularly close, and no matter how many times you are solemnly assured otherwise, you are never graded on whether you did your best and honestly reported what you observed. From grade school on, you are graded on whether or not the grading authority likes the results you got. You might hope that there comes some point in your career where that stops being the case, but as near as I can tell, it literally never does. Right on up to professorships, this is how science really works.

The lesson is taught early and often. It often sort of baffles me when other people are baffled at how often this happens in science, because it more-or-less always happens. Science proceeds despite this, not because of it.

(But jerf, my teacher... Yes, you had a wonderful teacher who not only gave you an A for the equivalent but also called you out in class for your honesty, and, I dunno, flunked everyone who claimed they got the supposed "correct" answer to three significant digits, because that was impossible. There are a few shining lights in the field and I would never dream of denying that. Now tell me how that idealism worked out for you over the next several years.)

1. sobriquet9 | No.43546960
I think if you had shown not only the point estimate but also some measure of uncertainty, like the standard deviation, it should have earned you a passing grade. It's hard to argue that an answer like 6.8 ± 5 is wrong.

Even if you don't yet have formal statistical chops, it should at least be possible to show the cumulative distribution function of the results, which conveys the story better than a single answer with overly optimistic implied precision.
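For instance, with nothing fancier than the standard library, a report with honest uncertainty might look like this (the per-drop estimates are invented to mimic noisy stopwatch data, not taken from the thread):

```python
import statistics

# Hypothetical per-drop estimates of g in m/s^2; invented to
# illustrate the kind of spread bad stopwatch timing produces.
estimates = [4.1, 9.5, 3.2, 11.8, 6.9, 5.0, 8.3, 7.6]

mean = statistics.mean(estimates)
std = statistics.stdev(estimates)   # sample standard deviation

print(f"g = {mean:.1f} +/- {std:.1f} m/s^2")
```

Here that prints g = 7.0 +/- 2.9 m/s^2, and the true 9.81 sits within one standard deviation of the mean, so the honest summary is "consistent with 9.8" rather than a single misleadingly precise point value.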

2. jerf | No.43547018
This was early high school. We didn't have error bars yet; we just took an average. I only used that as a convenient way to describe how erratic our numbers were. If 6.8 was the average, you know we had some low numbers in there. And some nice high ones, too.

You're certainly correct that the true value would have been within our error bars, and one of those good teachers I acknowledge the existence of in my large paragraph, sarcastic as it may be, could conceivably have had us run such a garbage experiment and then shown that, bad as it was, our error bars still contained the correct value for probably all but one student or so. There's some valuable truth in that result too.

Cutting-edge science is often, in some sense, equivalently the result of bodging together a lot of results that in 30 years' hindsight will also be recognized as garbage methodology and experiments; not because the cutting-edge researchers are bad people, but because they were the ones pushing the frontier and building the very tools that later people would use to do the precision experiments. I always try to remember the context of early experiments when reading about them decades later.

It would also have been interesting to combine all the data and see what happened. There's a decent chance the result would have been at least reasonably close to the real value despite all the garbage data, which again would have been an interesting and vivid lesson.
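A sketch of that pooling idea, under the assumption that the timing noise is roughly symmetric (a systematic lag would still bias the result; all numbers below are simulated, not real class data):

```python
import random

random.seed(0)
H = 0.80                            # assumed table height in meters
true_t = (2 * H / 9.81) ** 0.5      # ideal fall time, about 0.40 s

# Simulate the whole class's raw stopwatch readings: 30 students
# x 5 drops each, with ~0.08 s of symmetric human-timing noise.
times = [true_t + random.gauss(0, 0.08) for _ in range(30 * 5)]

# Average the raw times first, then convert once to g; the random
# scatter largely cancels in the mean.
t_bar = sum(times) / len(times)
g_est = 2 * H / t_bar ** 2
print(f"pooled estimate: g = {g_est:.2f} m/s^2")
```

Averaging the individual g estimates directly works less well, since g = 2H/t^2 is nonlinear and a single too-short reading blows its estimate up.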

This is part of the reason this is something that stuck with me. There were so many better things to do than just fail someone for not lying about having gotten the "correct" result. I'm not emotional about anything done to me over 30 years ago, but I'm annoyed in the here and now that this is still endemic to the field and the educational process, and this is some small effort to help push that along to being fixed.

3. throwway120385 | No.43547189
It's honestly kind of bullshit, because the bedrock of a lot of my work is being realistic. If I had such a piece of crap equipment, I would have gladly reported the 6.8 meters per second squared, then turned around and identified all of the problems with my setup, right down to characterizing the lag time on the stopwatch start.

In fact, one of the trickiest problems I ever had to resolve was showing that the reason a piece of equipment couldn't accurately accumulate a volume from a very small flow was the fixed-point decimal place its designers chose. Part of how I did that was by optimizing a measurement device for the compliance of a fixed tube until I got really good, consistent results. Because I knew those numbers were actually really good, the problem came down to how we were doing math in the computer; I then just had to analyze the accumulation and the other math to determine what the accumulated error was. It turned out to be in really good agreement with what the device was doing.
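The fixed-point failure mode described here is easy to reproduce: if each sample's increment is smaller than the accumulator's least significant digit, every addition rounds back down and the total never moves. A toy sketch (the resolution and flow rate are invented for illustration, not the actual device's):

```python
from decimal import ROUND_HALF_UP, Decimal

STEP = Decimal("0.01")        # hypothetical fixed-point resolution, liters
increment = Decimal("0.004")  # per-sample volume from a very small flow

fixed_total = Decimal("0.00")
true_total = Decimal("0.000")
for _ in range(1000):
    true_total += increment
    # The device rounds its running total to the fixed decimal place
    # after every addition, so a sub-resolution increment vanishes.
    fixed_total = (fixed_total + increment).quantize(STEP, ROUND_HALF_UP)

print(true_total)    # 4.000 liters actually delivered
print(fixed_total)   # 0.00 liters accumulated
```

The fix in such cases is to accumulate at a finer internal resolution (or in the raw sample units) and round only when displaying.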

All of that came from our initial recognition that the measured quantity was wrong for some reason.