I was thinking of training a convnet to classify photos of moles as normal vs. abnormal. A user could take a photo, upload it to a diagnostic website, and get a diagnosis back.
It doesn’t seem like an overly complex model to develop, and there is plenty of labeled photo data of normal vs. abnormal moles available (e.g., the public ISIC archive).
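As a starting point, something like the transfer-learning sketch below is what I had in mind. This is only a minimal sketch, not a validated medical model: the `moles/train` directory with `normal/` and `abnormal/` subfolders is a hypothetical layout, and the architecture and hyperparameters are illustrative placeholders.

```python
# Minimal sketch: fine-tune a pretrained CNN for binary mole classification.
# Assumes a hypothetical directory layout moles/train/{normal,abnormal}/*.jpg.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Standard ImageNet preprocessing so the pretrained weights remain applicable.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# ImageFolder infers class labels from the subdirectory names.
train_set = datasets.ImageFolder("moles/train", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Start from ImageNet weights; swap the head for a 2-class classifier.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for epoch in range(5):  # epoch count is arbitrary for this sketch
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```

For anything diagnostic you'd of course want to evaluate on a held-out test set, and to look at sensitivity/specificity rather than raw accuracy, since missing a melanoma is far worse than a false alarm.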
I wonder why a product hasn’t been developed where we use image recognition on our phones to actively screen for skin cancer. Seems like a no-brainer.
My guess is that there simply aren’t enough deaths to motivate the work, which is a shame, because dying from melanoma is nasty.