
159 points picture | 5 comments
owlninja ◴[] No.42728486[source]
I guess I'll bite - what am I looking at here?
replies(3): >>42728535 #>>42728569 #>>42728865 #
IshKebab ◴[] No.42728535[source]
Faked scientific results.
replies(1): >>42728866 #
sergiotapia ◴[] No.42728866[source]
what happens to people who do this? are they shunned forever from scientific endeavors? isn't this the ultimate betrayal of what a scientist is supposed to do?
replies(1): >>42729149 #
Palomides ◴[] No.42729149[source]
if caught and it's unignorable, usually they say "oops, we made a minor unintentional mistake while preparing the data for publication, but the conclusion is still totally valid"

generally, no consequences

replies(2): >>42729334 #>>42743257 #
1. dylan604 ◴[] No.42729334[source]
There's a difference between having your results on black plastic cookware be off by several factors due to an "innocent" math mistake vs. deliberately reusing results to mislead people by faking the data.

Most people only remember the initial publication and the noise it makes. The updates/retractions are generally not remembered, which leads to the same "generally, no consequences" — but the details matter.

replies(1): >>42729522 #
2. gus_massa ◴[] No.42729522[source]
The people in the area remember (probably because they wasted 3 months trying to extend/reproduce the result [1]). They may stop citing them.

In my area we have a few research groups that are very trustworthy, and it's safe to try to combine their results with one of our ideas to get a new result. Other groups have a mixed history of dubious results: they don't lie, but they cherry-pick too much, so their results may not be generalizable enough to use as a foundation for our research.

[1] Exact reproductions are difficult to publish, but if you reproduce a result and add a twist, it may be good enough to be published.

replies(1): >>42735870 #
3. rcxdude ◴[] No.42735870[source]
This is a general issue with interpreting scientific papers: the people who specialize in the area will generally have a good idea about the plausibility of the result and the general reputation of the authors, but outsiders often lack that completely, and it's hard to think of a good way to really make that information accessible.

(And I think part of the general blowback against the credibility of science amongst the public is because there's been a big emphasis in popular communication that "peer reviewed paper == credible", which is an important distortion of the real message, "a peer reviewed paper is the minimum bar for credible" — and high-profile cases of incorrect results or fraud are obvious problems with the first statement.)

replies(1): >>42736922 #
4. gus_massa ◴[] No.42736922{3}[source]
I completely agree. When I see a post here, I have no idea if it's from a good journal or a crackpot journal [1]. The impact factor is sometimes useful, but the typical level in each area is very different. (In math, a usual value is about 1, but in biology it's about 5.)

Also, many sites just copy & paste the press release from the university, which often contains a lot of exaggerations, and sometimes they add a few more.

[1] If the journal has too many single author articles, it's a big red flag.

replies(1): >>42737128 #
5. rcxdude ◴[] No.42737128{4}[source]
Yes, I think science communication is also a big part of the problem. It's hard to do right but easy to do wrong, and few journalists care enough or have the resources to do it right (and the end result tends to be less appealing, because there's a lot less certainty involved).