YC: Requests for Startups

(www.ycombinator.com)
514 points by sarimkx | 5 comments
1. webel0 ◴[] No.39373159[source]
(Tangentially related to "A WAY TO END CANCER")

After seeing how my doctor iteratively ordered up different sets of tests for me over the course of a few months, I got to thinking about improving decision trees for blood testing (and maybe others).

However, when I spoke to a (first-year) med student about this, he suggested that doctors actually don't want something like this. I didn't follow the reasoning completely, but it was something along the lines of "we'll always find something."

Would be interested if someone could elaborate on this line of thinking.
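
For illustration, roughly the kind of structure I have in mind (made-up tests, outcomes, and follow-ups, not real clinical logic):

    # Toy sketch only -- the test names, branches, and suggestions are invented.
    # The idea: encode "if result X, order Y next" so the iteration my doctor
    # did by hand over months could be suggested up front.
    decision_tree = {
        "test": "CBC",                      # start with a broad panel
        "branches": {
            "low_hemoglobin": {
                "test": "ferritin",         # chase possible iron deficiency
                "branches": {
                    "low": {"suggest": "iron studies / GI referral"},
                    "normal": {"suggest": "B12 and folate"},
                },
            },
            "normal": {"suggest": "stop, or widen to a metabolic panel"},
        },
    }

    def next_step(node, results):
        # Walk the tree using whatever results are already available.
        while "branches" in node:
            outcome = results.get(node["test"])
            if outcome is None:
                return f"order {node['test']}"
            node = node["branches"][outcome]
        return node["suggest"]

    print(next_step(decision_tree, {}))                         # order CBC
    print(next_step(decision_tree, {"CBC": "low_hemoglobin"}))  # order ferritin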

replies(3): >>39373251 #>>39374089 #>>39374182 #
2. nonethewiser ◴[] No.39373251[source]
Can you elaborate? I'm not following, but it sounds interesting. What problem did you see, and what alternative did you propose? I take it the doctor was performing an inefficient search.
3. OJFord ◴[] No.39374089[source]
I've had similar conversations with my wife, who's a doctor (another example is classifying ECGs as normal or some variety of abnormal rhythm), and it's always some combination of 'yeah, that kinda does happen' (just more manually, lower-tech, or human-driven than we're imagining) and 'we don't want that', like you say.

What they do want afaict is more fundamental, should-be-so-much-easier stuff like case management software that doesn't suck, and like, a chair to sit on while using that computer.

4. learn_more ◴[] No.39374182[source]
I think he was describing the fact that they already operate within a decision framework they understand. Implicit in the results of a particular test is that some prior observation suggested ordering that test in the first place.

If they get results from a test without that compelling observation, they're operating outside their well-established statistical framework and can't confidently evaluate how meaningful the results are.

To me, this doesn't mean the extra information is bad or unhelpful; it's just that they aren't yet calibrated to use it properly.

I've heard this sentiment from medical professionals before and this was my conclusion.
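
To make that concrete (numbers made up, just to illustrate the calibration point): the same positive result means very different things depending on the pre-test probability implied by whatever observation prompted the test.

    # Sketch with assumed numbers, not real clinical data: how the positive
    # predictive value of one hypothetical test (90% sensitive, 95% specific)
    # changes with pre-test probability.
    def positive_predictive_value(prior, sensitivity, specificity):
        # P(disease | positive result) via Bayes' rule
        true_pos = prior * sensitivity
        false_pos = (1 - prior) * (1 - specificity)
        return true_pos / (true_pos + false_pos)

    for prior in (0.001, 0.05, 0.30):
        ppv = positive_predictive_value(prior, 0.90, 0.95)
        print(f"pre-test probability {prior:.1%} -> PPV {ppv:.1%}")
    # ~1.8%, ~48.6%, ~88.5%: without the observation that justified the test,
    # there's no meaningful prior, and the result is hard to interpret.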

replies(1): >>39374984 #
5. webel0 ◴[] No.39374984[source]
That makes sense. Explainability would be a big issue/requirement for any attempted automated decision framework. I don't know that I'd want my doctor ordering tests based on the output of some app without understanding why they're ordering them.