
817 points | dynm | 2 comments
mg ◴[] No.43307263[source]
This is great. The author defines their own metrics, runs their own A/B tests, and publishes their interpretation along with the raw data. Imagine a world where all health blogging was like that.

Personally, I have not published any results yet, but I have been doing this type of experiment for 4 years now and have collected 48,874 data points so far. I built a simple system to do it in Vim:

https://www.gibney.org/a_syntax_for_self-tracking

I also built a bunch of tooling to analyze the data.
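To give an idea of what such tooling might look like, here is a minimal sketch in Python (my own illustration; the actual syntax is the one described at the link above, while the date/tag/value format below is just a made-up placeholder):

    # Minimal sketch of parsing a plain-text self-tracking log.
    # The 'date tag value' format is a made-up placeholder, not the
    # actual syntax described at gibney.org.
    from collections import defaultdict
    from pathlib import Path

    def parse_log(path):
        """Yield (date, tag, value) from lines like '2024-03-01 sleep 7.5'."""
        for line in Path(path).read_text().splitlines():
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            date, tag, value = line.split(maxsplit=2)
            yield date, tag, float(value)

    def tally(entries):
        """Count data points per tag, to see what has enough data to analyze."""
        counts = defaultdict(int)
        for _date, tag, _value in entries:
            counts[tag] += 1
        return dict(counts)

    entries = list(parse_log("tracking.txt"))
    print(len(entries), "data points")
    print(tally(entries))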

I think that mankind could greatly benefit from more people doing randomized studies on their own. Especially if we find a way to collectively interpret the data.

So I really applaud the author for conducting this and especially for providing the raw data.

Reading through the article and the comments here on HN, I wish there was more focus on the interpretation of the experiment. Pretty much all comments here seem to be anecdotal.

Let's look at the author's interpretation. Personally, I find that part a bit short.

They calculated 4 p-values and write:

    Technically, I did find two significant results.
I wonder what "Technically" means here. Are there "significant results" that are "better" than just "technically significant results"?
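One charitable reading of "Technically" is the multiple-comparisons caveat: with four p-values each tested at 0.05, the chance of at least one false positive is already sizable even if nothing is going on. A quick back-of-the-envelope sketch in Python (my own illustration, assuming independent tests, not something from the article):

    # Family-wise error rate for 4 independent tests at alpha = 0.05,
    # plus the Bonferroni-corrected per-test threshold.
    alpha, m = 0.05, 4

    fwer = 1 - (1 - alpha) ** m   # P(at least one false positive) if all nulls are true
    bonferroni = alpha / m        # stricter per-test threshold that keeps FWER <= alpha

    print(f"P(>=1 false positive across {m} tests): {fwer:.3f}")  # ~0.185
    print(f"Bonferroni-corrected threshold: {bonferroni:.4f}")    # 0.0125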

Then they continue:

    Of course, I don’t think this
    means I’ve proven theanine is harmful.
So what does it mean? What was the goal of collecting the data? What would the interpretation have been if the data had shown a significant positive effect of Theanine?

It's great that they offer the raw data. I look forward to taking a look at it later today.

replies(14): >>43307304 #>>43307775 #>>43307806 #>>43307937 #>>43308201 #>>43308318 #>>43308320 #>>43308521 #>>43308854 #>>43309271 #>>43310099 #>>43320433 #>>43333903 #>>43380374 #
kortilla ◴[] No.43307806[source]
Well, the issue is that an experiment with 1 person can't prove much, because you can't have a control group.

Too much other stuff is changing in a single person's life that could account for all observed side effects.

You also have latent side effect issues. A person could smoke for 10 years, not smoke for another 10, and then conclude that smoking doesn’t cause cancer. Then they get lung cancer 20 years later.

Excellent data and statistics are not sufficient for a good experiment.

replies(3): >>43307871 #>>43309673 #>>43316721 #
mg ◴[] No.43307871[source]
An experiment with 1 person can very well produce useful data.

It depends on the setup of the experiment.

Imagine an experiment where a person's thumb gets randomly hit with either a hammer or a feather once per day, and they then subjectively rate the experience. After 1000 days of collecting data, I doubt we would wrongly conclude that the hammer treatment leads to the nicer outcome.
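For concreteness, here is a quick simulation sketch in Python of how such a within-person randomized design would be analyzed (my own illustration; the effect sizes and noise level are made up):

    # Sketch of a single-person randomized experiment: each day the
    # treatment (hammer or feather) is chosen at random and a noisy
    # subjective rating (0-10) is recorded. All numbers are made up.
    import random
    import statistics

    random.seed(0)
    ratings = {"hammer": [], "feather": []}

    for _day in range(1000):
        treatment = random.choice(["hammer", "feather"])
        base = 1.0 if treatment == "hammer" else 8.0  # assumed mean rating
        ratings[treatment].append(base + random.gauss(0, 2.0))  # day-to-day noise

    for t, r in ratings.items():
        print(f"{t}: n={len(r)} mean={statistics.fmean(r):.2f} sd={statistics.stdev(r):.2f}")
    # With ~500 days per arm, the gap between the means dwarfs the noise,
    # so even one person's data clearly identifies the nicer treatment.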

The setup of the Theanine experiment that is the basis of this thread looks good at first sight. I have the feeling that the interpretation could use more thought, though.

replies(8): >>43307925 #>>43308004 #>>43308014 #>>43308106 #>>43308117 #>>43308394 #>>43309398 #>>43322506 #
1. tomalbrc ◴[] No.43308004[source]
What a weird take; in 99.999% of cases you don't have such a black/white contrast.
replies(1): >>43308048 #
2. mg ◴[] No.43308048[source]
Sure. But even when you add noise to the described experiment, you get useful data.
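To put a rough number on that: the days needed scale with the square of the noise-to-effect ratio. A back-of-the-envelope power sketch in Python (my own, using the standard two-sample approximation; the effect and noise values are made up):

    # Rough sample size per arm for a two-sample comparison at 5% significance
    # and 80% power: n ~ 2 * (z_alpha + z_beta)^2 * (sigma / delta)^2.
    # The effect sizes (delta) and noise (sigma) below are made-up examples.
    from math import ceil

    z_alpha, z_beta = 1.96, 0.84  # two-sided 5% level, 80% power

    def days_per_arm(delta, sigma):
        return ceil(2 * (z_alpha + z_beta) ** 2 * (sigma / delta) ** 2)

    for delta in (7.0, 1.0, 0.5):
        print(f"effect {delta} vs noise 2.0: ~{days_per_arm(delta, 2.0)} days per arm")
    # A hammer-sized effect needs only a few days; a subtle effect buried in
    # noise needs far more -- which is exactly what a repeatable short-term
    # self-experiment can supply.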

That is the point I am making: Experiments of a single person can be useful.

Critics of single-person experiments usually come up with examples vastly different from the Theanine experiment described here: long-term experiments which are only conducted once. But the Theanine experiment was looking for a short-term effect and can be repeated many times. The hammer experiment I made up is an extreme example of this type of experiment, which lends itself well to being conducted by a single person.

What I am trying to point out is that if you are a skeptic, it would be better to try to find weaknesses in the experiment at hand, rather than making up completely different experiments.