From junkcharts:
The Wall Street Journal hyped a research article about mobile apps that supposedly "detect skin cancer". While the tone of the article is quite balanced, I cringed when the reporter wrote: "the best-performing app accurately identified cancerous moles 98.1% of the time."
A nicely misleading bit of marketing.
Those who have been reading this blog will hopefully wonder immediately: if the app produces few false negatives, does it produce lots of false positives? That is almost guaranteed, because there is a trade-off between those two types of errors.
Another question you might have is: assuming the app tells me the mole is malignant, what is the probability that I have skin cancer? Notice this is the reverse of sensitivity. Sensitivity is the probability that the app tells me the mole is malignant assuming that I have skin cancer.
Sorry to pop the bubble. The so-called positive predictive value is between 33 and 42%. This means that of those people whom the app claims have skin cancer, fewer than half actually do.
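To see how a 98% sensitivity can coexist with a positive predictive value of around 40%, here is a minimal Bayes' rule sketch. The specificity and prevalence below are values I am assuming purely for illustration (chosen to land inside the reported 33-42% range); only the 98.1% sensitivity comes from the article.

```python
# Back-of-the-envelope Bayes' rule check. Only the sensitivity is the
# reported 98.1%; the specificity and prevalence are illustrative
# assumptions, not figures from the study.

sensitivity = 0.981  # P(app flags mole | mole is cancerous) -- reported
specificity = 0.85   # P(app clears mole | mole is benign) -- assumed
prevalence = 0.10    # P(mole is cancerous) among moles tested -- assumed

# PPV = P(cancerous | app flags mole), by Bayes' rule:
true_positives = sensitivity * prevalence
false_positives = (1 - specificity) * (1 - prevalence)
ppv = true_positives / (true_positives + false_positives)

print(f"PPV = {ppv:.1%}")  # ~42.1% with these assumed inputs
```

The point of the sketch: because truly malignant moles are relatively rare among those photographed, even a modest false-positive rate produces more false alarms than true detections, dragging the PPV far below the headline 98%.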
The 98% number is pretty much useless. It's the 40% number we should be worrying about.