230 points geetee | 5 comments
1. TofuLover ◴[] No.45100241[source]
This reminds me of an article posted on HN only a few days ago: Uncertain<T>[1]. I think a causality graph like this necessarily needs a concept of uncertainty to preserve nuance. I don't know whether that would be practical in terms of compute, but combining traditional NLP techniques with LLM analysis might make it so.

[1] https://github.com/mattt/Uncertain
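To make that concrete, here's a minimal sketch of what an "uncertain" causal edge could look like (Python, purely illustrative; the actual Uncertain<T> package is Swift and its API differs, and the Beta parameters here are invented): the edge strength is a distribution you query for evidence rather than a boolean.

    # Hypothetical sketch, not the Uncertain<T> API: a causal edge whose strength
    # is a belief distribution, so queries return evidence instead of yes/no.
    import random

    class UncertainEdge:
        def __init__(self, cause, effect, alpha, beta):
            # Beta(alpha, beta) models belief in P(effect | cause),
            # e.g. alpha ~ observed co-occurrences, beta ~ observed non-occurrences.
            self.cause, self.effect = cause, effect
            self.alpha, self.beta = alpha, beta

        def sample(self):
            return random.betavariate(self.alpha, self.beta)

        def prob_strength_exceeds(self, threshold, n=10_000):
            # Evidence-style query: "how likely is this edge stronger than threshold?"
            hits = sum(self.sample() > threshold for _ in range(n))
            return hits / n

    edge = UncertainEdge("disease", "death", alpha=3, beta=97)
    print(edge.prob_strength_exceeds(0.05))  # chance the link is more than negligible

One design note: the sampling cost is only paid when a query actually needs the nuance, which is part of why I suspect this could stay practical.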

replies(2): >>45100291 #>>45100428 #
2. 9dev ◴[] No.45100291[source]
Right. The first example on the site shows disease as a cause and death as an effect. That's wrong on several levels: there's no clean binary between healthy and sick, since you're always fighting something off and it only becomes obvious sometimes. And a disease doesn't necessarily lead to death, obviously.
replies(1): >>45100409 #
3. kaashif ◴[] No.45100409[source]
Since you're always going to die, the problem solves itself: the implication is true because the right side is always true, so the left side doesn't matter.
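Spelled out as material implication (just making the joke explicit, not anything the site claims): with D = "has the disease" and X = "dies",

    X \equiv \top \;\;\Rightarrow\;\; (D \to X) \equiv \top \quad \text{for any } D,

so a truth-functional "disease causes death" edge is satisfied no matter what, and tells you nothing causal.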
replies(1): >>45100594 #
4. notrealyme123 ◴[] No.45100428[source]
I get some vibes of fuzzy logic from this project.

Currently a lot of research goes in the direction of distinguishing "data uncertainty" from "measurement uncertainty", or "aleatoric/epistemic" uncertainty.

I found this tutorial (aimed at computer vision, but it transfers) to be very intuitive; it gives a good understanding of how to use those concepts in other fields: https://arxiv.org/abs/1703.04977
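For anyone who wants the aleatoric/epistemic split made concrete, here is a rough sketch of how I read that recipe (PyTorch; my own paraphrase, not the paper's reference code, and the layer sizes and dropout rate are placeholders): the network predicts a mean and a log-variance for the data noise (aleatoric), and sampling with dropout kept on at test time gives a spread across predictions that stands in for model uncertainty (epistemic).

    # Rough sketch of the aleatoric/epistemic recipe (a paraphrase of the idea,
    # not the authors' code); hyperparameters are placeholders.
    import torch
    import torch.nn as nn

    class HeteroscedasticNet(nn.Module):
        def __init__(self, d_in, d_hidden=64):
            super().__init__()
            self.body = nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU(), nn.Dropout(0.1))
            self.mean_head = nn.Linear(d_hidden, 1)
            self.logvar_head = nn.Linear(d_hidden, 1)  # predicted aleatoric (data) noise

        def forward(self, x):
            h = self.body(x)
            return self.mean_head(h), self.logvar_head(h)

    def gaussian_nll(mean, logvar, y):
        # Loss attenuation: points the model flags as noisy contribute less.
        return (0.5 * torch.exp(-logvar) * (y - mean) ** 2 + 0.5 * logvar).mean()

    @torch.no_grad()
    def predict_with_uncertainty(model, x, n_samples=50):
        model.train()  # keep dropout active: MC-dropout samples give epistemic uncertainty
        means, logvars = zip(*(model(x) for _ in range(n_samples)))
        means = torch.stack(means)
        epistemic = means.var(dim=0)                             # spread across samples
        aleatoric = torch.exp(torch.stack(logvars)).mean(dim=0)  # average predicted noise
        return means.mean(dim=0), aleatoric, epistemic

The same split would map naturally onto a causality graph: aleatoric for edges that are genuinely probabilistic, epistemic for edges the model simply hasn't seen enough evidence about.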

5. 9dev ◴[] No.45100594{3}[source]
Then it’s correlation instead of causation and the entire premise of a causation graph is moot.