
851 points swyx | 3 comments
nickjj No.25826835
That was a fun read. I wish the author had mentioned how much he was trying to sell the service for. It could have been $59 a month or $599 a month, and with doctors you could well have gotten the same answer.

I'm not a psychologist, but some of the author's quoted text came off as extremely demeaning in written form. If the author happens to read this: did you really say those things directly to them?

For example, Susan (psychologist) was quoted as saying:

> "Oh sure! I mean, I think in many cases I'll just prescribe what I normally do, since I'm comfortable with it. But you know it's possible that sometimes I'll prescribe something different, based on your metastudies."

To which you replied:

> "And that isn't worth something? Prescribing better treatments?"

Imagine walking into the office of someone who spent ~10 years in school and then potentially 20 years practicing their craft as a successful psychologist, and then you waltz in and tell them that what they prescribe is wrong and your automated treatment plan is better.

replies(15): >>25826991 #>>25827042 #>>25827090 #>>25827136 #>>25827163 #>>25827304 #>>25827783 #>>25827796 #>>25828236 #>>25828791 #>>25829250 #>>25829290 #>>25830742 #>>25830838 #>>25832379 #
Gatsky No.25829290
This article was posted here several years ago. The whole premise is bumptious: "I can copy data out of a bunch of papers [which I am in no position to screen for quality or relevance], run a canned 'gold standard' analysis in R [the idea that there is one true way to generate valid data is ridiculous], and then go tell the professionals what they are doing wrong." He even brags that his meta-analysis for depression had more papers than the published one, as if that were a valid metric. The Cipriani meta-analysis he cites was published in February 2018; his was done in July 2018 and had 324 more papers. What explains that difference, other than obviously sloppy methodology? A proper meta-analysis is a lot of work; researchers spend years on one. The whole concept is ill conceived, and the author is too caught up in themselves to realise why.

Meta-analyses are a good idea, but the mere existence of a meta-analysis does not make it a useful undertaking. The literature is polluted with thousands of them. As far as I can see this is mainly because software is available that lets almost anyone run one, and once someone has done a meta-analysis it is much easier to do another, because they have already found all the papers for you. The publication rate of meta-analyses is growing far faster than that of papers overall, and shows some unusual geographic variation (Fig 2) [1].

[1] https://systematicreviewsjournal.biomedcentral.com/articles/...

replies(2): >>25829520 #>>25830586 #
sova No.25829520
Statistically speaking, isn't it sound to throw all the papers into the mulcher and see what comes out the other end? We do use the term "outliers" a lot in statistics, do we not? I understand that the quality might not be up to snuff for some, but won't the law of averages take care of that?
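
Concretely, here's what that bet looks like in a toy NumPy simulation (all numbers invented): averaging does wash out independent noise, but not a shared bias.

    import numpy as np

    rng = np.random.default_rng(0)
    true_effect = 0.30           # pretend this is the drug's real effect size
    n_studies = 500

    # Case 1: studies scatter independently around the truth.
    noisy = rng.normal(true_effect, 0.15, n_studies)
    print(noisy.mean())          # ~0.30 -- the law of averages works here

    # Case 2: 30% of studies share the same upward bias
    # (publication bias, unblinding, industry funding...).
    biased = noisy.copy()
    bad = rng.random(n_studies) < 0.30
    biased[bad] += 0.25
    print(biased.mean())         # ~0.375 -- no amount of averaging fixes this

    # Trimming "outliers" doesn't rescue it either: the biased studies
    # sit inside the bulk of the distribution, not in the tails.
    lo, hi = np.percentile(biased, [5, 95])
    print(biased[(biased > lo) & (biased < hi)].mean())  # still inflated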
replies(5): >>25829647 #>>25829958 #>>25830011 #>>25830684 #>>25831075 #
1. haser_au No.25829958
This assumes all papers are of equal quality, peer review, and accuracy of results, which we know they are not. Some studies should carry more weight than others. As mentioned in a previous comment, there is no 'right' answer, just a variety of ways to allocate weights to papers based on various metrics.
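
As a sketch of what "more weight" means in practice: the textbook approach is inverse-variance weighting, shown below with made-up numbers. The contested part is precisely which weights and which model (fixed vs. random effects) you choose.

    import numpy as np

    # Hypothetical per-study effect sizes and standard errors;
    # bigger, better-run studies have smaller standard errors.
    effects = np.array([0.45, 0.20, 0.33, 0.60, 0.28])
    se      = np.array([0.30, 0.05, 0.10, 0.40, 0.08])

    weights   = 1.0 / se**2                   # inverse-variance weights
    pooled    = (weights * effects).sum() / weights.sum()
    pooled_se = (1.0 / weights.sum()) ** 0.5  # fixed-effect model

    print(f"pooled: {pooled:.3f} +/- {1.96 * pooled_se:.3f}")
    # The two most precise studies dominate; the small noisy ones barely count.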
replies(1): >>25831016 #
2. brabel No.25831016
You're misinterpreting the law of large numbers. What the law says is that if you have a large number of samples, and there is no pervasive bias in them, then any large enough sample will look like any other. ("Large enough" is often much smaller than you'd think: the classic example is election polling, where a few thousand representative voters are enough to predict the outcome for a country with millions of voters.) In the case of this article, that means the conclusions of many papers should converge to the same answer, with outliers standing out as likely "bad" papers.

The only assumption you may reject here is that there's no systematic bias in the papers. Perhaps there is... or perhaps most papers are just very unreliable, in which case there should also be no convergence... but if you find convergence, there's a good chance the result is "real".
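
The election-poll claim is easy to sanity-check with a toy simulation (numbers invented):

    import numpy as np

    rng = np.random.default_rng(1)
    # An "electorate" of 10 million, 52% of whom vote yes.
    electorate = rng.random(10_000_000) < 0.52

    for n in (100, 1_000, 5_000, 50_000):
        poll = rng.choice(electorate, size=n, replace=False)
        print(n, round(poll.mean(), 3))
    # A few thousand unbiased samples already lands within a point or so of
    # 0.52; the error shrinks like 1/sqrt(n), regardless of population size.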

replies(1): >>25832055 #
3. cycomanic No.25832055
But the crucial bit here is the "large" in "large numbers". I expect that even for quite popular drugs the number of studies is maybe in the hundreds, which, depending on the statistics, could be quite a way from large enough. In particular if a significant fraction are crap studies.
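
A back-of-envelope calculation (all parameters invented) makes the point: random error shrinks as studies accumulate, but a shared bias from crap studies does not.

    import numpy as np

    sigma = 0.5       # per-study scatter around the true effect
    p, b = 0.3, 0.25  # 30% crap studies, each biased upward by 0.25

    for k in (30, 100, 300, 1000):
        noise = sigma / np.sqrt(k)  # shrinks with more studies
        bias = p * b                # constant, no matter how many studies
        print(f"k={k:4d}  random error ~{noise:.3f}  bias ~{bias:.3f}")
    # Around k=50 the random error already drops below the bias; after
    # that, piling on more papers stops improving the answer.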