
229 points | geetee | 1 comment
thicknavyrain (No.45100075):
I know it's a reductive take to point to a single mistake and act like the whole project might be futile (maybe it's a rarity), but this example from their sample is really quite awful if the idea is to give AI better epistemics:

    {
        "causal_relation": {
            "cause": {
                "concept": "vaccines"
            },
            "effect": {
                "concept": "autism"
            }
        }
    },
... seriously? Then again, they do say these are just "causal beliefs" expressed on the internet, but it seems like some stronger filtering of which beliefs to adopt ought to be exercised for any downstream use case.
replies(2): >>45100168, >>45100176
kykat (No.45100176):
In the precision dataset, the sentences that led to this record include:

>> "Even though the article was fraudulent and was retracted, 1 in 4 parents still believe vaccines can cause autism."

>> On 28 February 1998 Horton published a controversial paper by Dr. Andrew Wakefield and 12 co-authors with the title "Ileal-lymphoid-nodular hyperplasia, non-specific colitis, and pervasive developmental disorder in children" suggesting that vaccines could cause autism.

>> He was opposed by vaccine critics, many of whom believe vaccines cause autism, a belief that has been rejected by major medical journals and professional societies.

None of the sentences I've seen actually assert that vaccines cause autism; a simple cue-based filter like the sketch below would already drop this record.
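
A minimal sketch of that kind of filter, assuming each extracted relation carries its supporting sentences under a hypothetical `source_sentences` field (the dataset's actual schema may differ): keep a (cause, effect) pair only if at least one supporting sentence states the relation without a belief-attribution or retraction cue.

    import re

    # Cues marking a causal claim as reported belief rather than assertion.
    # The cue list is illustrative, not exhaustive.
    BELIEF_CUES = re.compile(
        r"\b(believe[sd]?|suggest(?:s|ed|ing)?|claim(?:s|ed)?|"
        r"fraudulent|retracted|rejected|debunked|critics)\b",
        re.IGNORECASE,
    )

    def is_asserted(record):
        """Keep a record only if some supporting sentence states the
        relation without a belief/retraction cue. The "source_sentences"
        field name is an assumption, not the dataset's actual schema."""
        return any(not BELIEF_CUES.search(s)
                   for s in record.get("source_sentences", []))

    record = {
        "causal_relation": {
            "cause": {"concept": "vaccines"},
            "effect": {"concept": "autism"},
        },
        "source_sentences": [
            "1 in 4 parents still believe vaccines can cause autism.",
            "suggesting that vaccines could cause autism",
            "many of whom believe vaccines cause autism",
        ],
    }
    print(is_asserted(record))  # False: every sentence only reports a belief

Lexical cues are crude and would also drop genuine assertions that happen to mention critics or retractions, so a real pipeline would want proper stance detection, but even this would reject the vaccines/autism record built from the quoted sentences above.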