This reminds me of an article posted on HN only a few days ago: Uncertain<T>[1]. I think a causality graph like this necessarily needs a concept of uncertainty to preserve nuance. I don't know whether this would be practical in terms of compute, but I'd think combining traditional NLP techniques with LLM analysis might make it so?
[1] https://github.com/mattt/Uncertain
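
To make that concrete, here's a minimal sketch of what an uncertainty-carrying causal edge could look like. It's in Swift since the Uncertain library is Swift, but the names here (CausalEdge, strength, sampleEffect) are made up for illustration and are not the library's API:

    // Illustrative sketch only: CausalEdge and its members are hypothetical,
    // not part of mattt/Uncertain. The idea is that an edge carries a
    // probability (however estimated, e.g. NLP + LLM analysis) and is sampled,
    // rather than being treated as a hard boolean fact.
    struct CausalEdge {
        let cause: String
        let effect: String
        let strength: Double   // assumed estimate of P(effect | cause)

        // Sample whether the effect follows the cause on this draw.
        func sampleEffect() -> Bool {
            Double.random(in: 0...1) < strength
        }
    }

    // "disease -> death": raises the probability of the effect, doesn't guarantee it.
    let edge = CausalEdge(cause: "disease", effect: "death", strength: 0.02)

    // Crude Monte Carlo estimate, the same sampling idea Uncertain<T> is built on.
    let trials = 10_000
    let hits = (0..<trials).filter { _ in edge.sampleEffect() }.count
    print("Estimated P(death | disease) ≈ \(Double(hits) / Double(trials))")

That keeps the nuance: "disease causes death" becomes "disease shifts the odds", and downstream queries stay probabilistic instead of collapsing to true/false.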
Right. The first example on the site shows disease as a cause and death as an effect. This is wrong on several levels: there is no such thing as healthy or sick; you're always fighting off something, it just becomes obvious sometimes. Also, a disease doesn't necessarily lead to death, obviously.
Since you're always going to die, the problem is solved: the implication is true because the right side is always true, so the left side doesn't matter.