
247 points by rntn | 1 comment
addoo No.43797584
This doesn’t really surprise me at all. It’s an unrelated field, but part of the reason I got completely disillusioned with research, to the point that I switched out of a program with a thesis, was that I started noticing reproducibility problems in published work. My field is CS/CE; generally papers reference publicly available datasets and can be easily replicated… except I kept finding papers with results I couldn’t recreate. It’s possible I made mistakes (what does a college student know, after all), but usually there were other systemic problems on top of reproducibility. A secondary trait I often noticed was the complete exclusion of [easily intuited] counterexamples because they cut into the paper’s claim.

To my mind there is a nasty pressure that exists in some professions/careers, where publishing becomes essential. Because it's essential, standards are relaxed and barriers lowered, leading to lower-quality work being published. Publishing isn't done in response to genuine discovery or innovation; it's done because boxes need to be checked. Publishers won't change because they benefit from this system, and authors won't change because they're bound to it.

replies(4): >>43797800 >>43798199 >>43799570 >>43802103
svachalek No.43797800
The state of CS papers is truly awful, given that they're uniquely positioned to be 100% reproducible. And yet my experience aligns with yours: they very rarely are.
replies(2): >>43798084 >>43798087
justinnk No.43798084
I can second this; even the availability of code is still a problem. However, I would not say CS results are rarely reproducible, at least going by the few experiences I have had so far, though I have heard of problematic cases from others. I guess it also differs between fields.

I want to note there is hope. Contrary to what the root comment says, some publishers do try to encourage reproducible results. See for example the ACM reproducibility initiative [1]. I have participated in this before and believe it is a really good initiative. Reproducing results can be very labor intensive, though, which adds load to a review system already struggling under a massive flood of papers. And it is also not perfect: most of the time it only ensures that the author-supplied code produces the presented results. Still, I think more such initiatives are healthy. When you really want to ensure the rigor of a presented method, you have to replicate it, e.g., by reimplementing it in a different programming language, which is really its own research endeavor. And there is already a place to publish such results in CS [2]! (although I haven't tried this one). I imagine this may be especially interesting for PhD students just starting out in a new field, as it gives them the opportunity to learn while satisfying the expectation of producing papers.
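To make the "author-supplied code produces the presented results" check concrete, here is a minimal sketch of what a reviewer's reproduction script could look like. Everything in it is hypothetical: the script name, config path, metrics file, and reported numbers are invented for illustration, and this is not the actual ACM procedure, just the kind of comparison such a review boils down to.

    # Hypothetical reproduction check: rerun the authors' experiment
    # and compare its output metrics against the numbers reported in the paper.
    # All file names, metric names, and values here are illustrative assumptions.
    import json
    import subprocess

    REPORTED = {"accuracy": 0.912, "f1": 0.887}  # values claimed in the paper (made up here)
    TOLERANCE = 0.005                            # allowed deviation, e.g. for nondeterministic training

    # Rerun the experiment as documented in the artifact's README (assumed entry point).
    subprocess.run(["python", "run_experiment.py", "--config", "configs/main.yaml"], check=True)

    # Assume the experiment writes its metrics to results/metrics.json.
    with open("results/metrics.json") as f:
        measured = json.load(f)

    for name, claimed in REPORTED.items():
        diff = abs(measured[name] - claimed)
        status = "OK" if diff <= TOLERANCE else "MISMATCH"
        print(f"{name}: reported={claimed:.3f} measured={measured[name]:.3f} [{status}]")

Even a check this thin catches things like missing dependencies, hard-coded paths, or numbers the released code simply does not produce, though it says nothing about whether the method itself is sound.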

[1] https://www.acm.org/publications/policies/artifact-review-an...
[2] https://rescience.github.io