
851 points by swyx | 10 comments
nickjj:
That was a fun read. I wish the author had mentioned how much he was trying to sell the service for. It could have been $59 a month or $599 a month, and with doctors you could potentially expect the same answer.

I'm not a psychologist, but some of the author's quoted text came off as extremely demeaning in written form. If the author happens to read this, did you really say those things directly to them?

For example, Susan (psychologist) was quoted as saying:

> "Oh sure! I mean, I think in many cases I'll just prescribe what I normally do, since I'm comfortable with it. But you know it's possible that sometimes I'll prescribe something different, based on your metastudies."

To which you replied:

> "And that isn't worth something? Prescribing better treatments?"

Imagine someone who spent the last ~10 years in school and then potentially 20 years practicing their craft as a successful psychologist, and then you waltz into their office and tell them that what they prescribe is wrong and that your automated treatment plan is better.

Gatsky:
This article was posted here several years ago. The whole premise is bumptious: "I can copy data out of a bunch of papers [which I am in no position to screen for quality or relevance], run a canned 'gold standard' analysis in R [the idea that there is one true way to generate valid data is ridiculous], and then go tell the professionals what they are doing wrong." He even brags that his meta-analysis for depression had more papers than the published one, as if that were a valid metric. The Cipriani meta-analysis he cites was published in February 2018; his meta-analysis was done in July 2018 and had 324 more papers. What explains this difference, other than obviously sloppy methodology? A proper meta-analysis is a lot of work; researchers spend years on a single one. The whole concept is ill-conceived, and the author is too caught up in himself to even realise why.

Meta-analyses are a good idea, but the mere presence of a meta-analysis does not denote a useful undertaking. The literature is polluted with thousands of them. As far as I can see, this is mainly because there is software available that lets almost anyone do one, and once someone else has done a meta-analysis it is much easier to do another, because they have already found all the papers for you. Meta-analysis publications are growing far faster than the publication rate of papers overall, and show some unusual geographic variation (Fig 2) [1].

[1] https://systematicreviewsjournal.biomedcentral.com/articles/...

sova:
Statistically speaking, isn't it sound to throw all the papers into the mulcher and see what comes out the other end? We do use the term "outliers" a lot in statistics, do we not? I understand that the quality of some might not be up to snuff, but won't the law of averages take care of that?

hn_throwaway_99:
Have you ever used a mulcher to chop up some yard waste, only to accidentally put in some dog shit, and then the whole thing stinks to high heaven?

In all seriousness, with meta-analyses it's still "garbage in, garbage out". It only takes one or a few egregiously bad studies to throw off your results, if those studies have large sample sizes but something fundamentally wrong with their methodology or implementation.
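
To make that concrete, here's a toy simulation (all numbers invented, not taken from the article) of naive fixed-effect pooling in Python, where one huge study with broken methodology drags the combined estimate far from the true effect:

    import numpy as np

    rng = np.random.default_rng(0)
    true_effect = 0.2

    # 20 honest studies of ~50 patients each; per-study variance approximated as 1/n
    effects = [rng.normal(true_effect, 1 / np.sqrt(50)) for _ in range(20)]
    variances = [1 / 50] * 20

    # one huge study (n = 5000) with broken methodology reporting an inflated effect
    effects.append(0.8)
    variances.append(1 / 5000)

    weights = 1 / np.array(variances)  # naive fixed-effect (inverse-variance) weights
    pooled = np.sum(weights * np.array(effects)) / np.sum(weights)
    print(f"pooled estimate: {pooled:.2f}, true effect: {true_effect}")
    # the single n=5000 study outweighs all 20 honest studies combined (weight 5000 vs 1000),
    # so the pooled estimate lands much closer to 0.8 than to 0.2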

haser_au:
This assumes all papers are of equal quality, peer review, and accuracy of results, which we know they are not. Some studies should carry more weight than others. As mentioned in a previous comment, there is no 'right' answer, just a variety of ways to allocate different weights to papers based on various metrics.
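
To illustrate how much that choice matters, here's a rough sketch (effect sizes, variances, and quality scores are all made up) comparing plain inverse-variance weighting with a scheme that down-weights studies judged to be low quality:

    import numpy as np

    effects   = np.array([0.30, 0.10, 0.55, 0.25])   # per-study effect sizes (made up)
    variances = np.array([0.02, 0.05, 0.01, 0.04])   # per-study variances (made up)
    quality   = np.array([1.0, 0.9, 0.3, 0.8])       # subjective 0..1 quality scores

    def pooled(weights):
        """Weighted average of the study effects."""
        return np.sum(weights * effects) / np.sum(weights)

    iv_weights = 1 / variances          # classic inverse-variance weighting
    qa_weights = quality / variances    # same, but down-weighting the weak studies

    print(f"inverse-variance: {pooled(iv_weights):.2f}")
    print(f"quality-adjusted: {pooled(qa_weights):.2f}")
    # the two schemes disagree noticeably, which is the point: the answer depends
    # on modelling choices, not just on the numbers extracted from the papers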

ufmace:
I've dealt with enough types of data that I feel super skeptical that you can just dump numbers from hundreds of studies into some data store programmatically, do statistical calculations, and get valid results. It's very difficult to believe that there aren't a ton of variations in how the data is gathered, filtered, and presented that need to be accounted for before any comparisons can be made. I'm not going to trust the law of averages to negate the effect of completely out-of-whack data when people's health is on the line.

flatline:
You get some numbers and they look good; fine, but at best that's grounds for a proper study, and at worst it's wildly misleading. You can easily fool yourself with statistics, and other people too.

For a good read about studies with solid statistics and bogus results, see [0].

[0] https://slatestarcodex.com/2014/04/28/the-control-group-is-o...

brabel:
You misinterpret the law of large numbers. What the law says is that, assuming there's no pervasive bias in the samples, any large enough sample will look essentially identical to any other (and "large enough" is often much smaller than you think: the classic example is election polling, where a few thousand representative voters are enough to predict the outcome for a country with millions of voters). In the case of this article, that means the conclusions of many papers should converge to the same answer, with outliers flagged as likely "bad" papers.

The only assumption you may reject here is that there's no systematic bias in the papers. Perhaps there is... or perhaps most papers are just very unreliable, in which case there should also be no convergence... but if you find convergence, there's a good chance the result is "real".
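
A toy illustration of that caveat (the data generator here is invented, not real study data): a collection of results with a pervasive shared bias converges just as smoothly as an unbiased one, so convergence by itself can't distinguish "real" from "consistently wrong":

    import numpy as np

    rng = np.random.default_rng(1)
    true_effect, n_results = 0.2, 2000

    unbiased = rng.normal(true_effect, 0.5, n_results)        # no systematic bias
    biased   = rng.normal(true_effect + 0.3, 0.5, n_results)  # pervasive shared bias

    def running_mean(x):
        return np.cumsum(x) / np.arange(1, len(x) + 1)

    for name, x in [("unbiased", unbiased), ("biased  ", biased)]:
        rm = running_mean(x)
        print(name, [float(round(rm[i], 3)) for i in (9, 99, 1999)])
    # both sequences settle down equally well as the sample grows; convergence by
    # itself can't tell you whether you converged on the truth or on a shared bias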

Karrot_Kream:
You mean the Law of Large Numbers (LLN), not the Law of Averages, right? Both the Weak LLN and the Strong LLN presume all samples are independent and identically distributed. If we put a hierarchical model over the data of each paper, we can bind all the data into a single distribution, but assuming that each of these studies is independent is a _long_ shot. The WLLN and SLLN _only_ apply to, roughly, repeated sampling from the same process; their scope is more applicable to things like sensor readings.
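
A small sketch of why that matters (all parameters invented): if each study carries its own latent bias, treating the pooled observations as one i.i.d. sample claims much more certainty than accounting for study-level variation would:

    import numpy as np

    rng = np.random.default_rng(2)
    true_effect, n_studies, n_per_study = 0.2, 30, 100

    # each study has its own latent bias -> observations are not i.i.d. across studies
    study_bias = rng.normal(0.0, 0.3, n_studies)
    data = [rng.normal(true_effect + b, 1.0, n_per_study) for b in study_bias]

    pooled = np.concatenate(data)
    naive_se = pooled.std(ddof=1) / np.sqrt(pooled.size)     # treats all points as one i.i.d. sample

    study_means = np.array([d.mean() for d in data])
    study_se = study_means.std(ddof=1) / np.sqrt(n_studies)  # respects study-level variation

    print(f"naive SE: {naive_se:.3f}   study-level SE: {study_se:.3f}")
    # the naive SE is noticeably smaller (about 3x here), i.e. it claims far more
    # certainty than is warranted once between-study variation is taken into account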

cycomanic:
But the crucial bit here is the "large" in "large numbers". I expect that even for quite popular drugs the number of studies is maybe in the hundreds, which, depending on the statistics, could well be quite a way from large enough, particularly if a significant fraction are crap studies.
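
As a rough illustration (numbers made up): if a fixed fraction of the studies are junk, adding more studies doesn't rescue the pooled average:

    import numpy as np

    rng = np.random.default_rng(3)
    true_effect, n_studies, junk_fraction = 0.2, 300, 0.2

    # honest studies scatter around the true effect; "crap" studies report inflated effects
    honest = rng.normal(true_effect, 0.15, int(n_studies * (1 - junk_fraction)))
    junk   = rng.normal(0.6, 0.15, int(n_studies * junk_fraction))

    pooled = np.concatenate([honest, junk]).mean()
    print(f"pooled over {n_studies} studies: {pooled:.2f} (true effect {true_effect})")
    # the estimate sits around 0.28 no matter how many studies you add: a fixed
    # fraction of bad studies biases the average, and "large numbers" can't fix that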

jacobion:
The Law of Large Numbers is an actual math theorem. The Law of Averages is a non-technical name for various informal reasoning strategies, some fallacious (like the gambler's fallacy), but most of them just types of estimation that are justified by more formal probability theory.

pfdietz:
More generally, see "concentration of measure".

https://en.wikipedia.org/wiki/Concentration_of_measure