
851 points swyx | 1 comment
dkarl ◴[] No.25827610[source]
I'm kind of concerned that people read this as satire but miss the most important part, namely the absence of concern about the safety and validity of the results. You know, the part where he stuck some statistical software in front of a database populated by a "motley crew" of contractors and wanted doctors to use it as a shortcut for making patient care decisions. The part where he implicitly compares the HTML spit out by his system to peer-reviewed work by professional researchers. The part where he is proud of "beating" a "record" for least discriminating meta-analysis.

Reading this story and talking about his marketing and product development process feels like watching Lovecraft Country and then only talking about the time travel physics of it. There's something real and awful here, hopefully presented in a fictionalized or highly exaggerated form. The people in my social circles who mistrust tech and despise startup culture -- this is exactly how they see us.

replies(4): >>25828178 #>>25828883 #>>25829746 #>>25831567 #
fastball ◴[] No.25828178[source]
But... this is peer-reviewed work by professional researchers[1]. He just tabulated them / made them searchable.

[1] Given the reproducibility crisis, I'm not sure we should trust these much either, but that's a discussion for another time.

replies(3): >>25828823 #>>25829295 #>>25833307 #
dkarl ◴[] No.25833307[source]
They're already searchable. He's claiming to extract usable information from them in a way that can be instantly used by doctors, so that they don't have to read the studies or find an up-to-date summary of the literature by a skilled professional.

He even called what his software did a meta-analysis. My understanding is limited, but a meta-analysis is difficult and laborious to do correctly. You have to design the standards you use to include and exclude studies and examine each study to decide whether including it will improve or harm the quality of your results. For example, maybe a method to evaluate outcomes that used to be common has been discredited. Maybe a study was done under a principal researcher who has been caught fabricating results in other studies. Maybe a study claims to be double-blind, but when you read it carefully, it turns out that it isn't. Maybe a study is well-designed in every way, but it was designed for a slightly different purpose, so the data can't be used the way you want to.

Factors like these result in the exclusion of a lot of studies from a meta-analysis, after painstaking examination by researchers who have the expertise to design and run the studies they're reading. It's scientifically difficult and important work. The author is bragging about not doing this work:

> On July 2, 2018, GlacierMD powered the world's largest depression meta-analysis, using data from 846 trials, beating Cipriani's previous record of 522.