It would seem to me that as sensitivity increases, your PPV would decrease since the likelihood of your test getting a false positive increases as sensitivity increases. Is this a true assumption or does the relationship even exist like that?
This was from my reply:
The flaw in your thinking is that the likelihood of getting a false positive test result increases as specificity decreases, but not necessarily as sensitivity increases. If you do the math, you’ll see this is so. For example, try working through 2 scenarios that both have 20% prevalence and 90% specificity, but that have respective sensitivities of 75% and 95% -- you’ll find that the PPV actually goes UP in the scenario with higher sensitivity.
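If you want to check the arithmetic yourself, here is a small Python sketch of that comparison (the ppv helper is just my own shorthand for the standard PPV formula, applied to the numbers in the example above):

# Compare PPV in two scenarios that share 20% prevalence and 90% specificity
# but differ in sensitivity (75% vs. 95%).
def ppv(sensitivity, specificity, prevalence):
    true_pos = sensitivity * prevalence               # affected individuals who test positive
    false_pos = (1 - specificity) * (1 - prevalence)  # unaffected individuals who test positive
    return true_pos / (true_pos + false_pos)

for sens in (0.75, 0.95):
    print(f"sensitivity {sens:.0%}: PPV = {ppv(sens, 0.90, 0.20):.1%}")

# Prints:
# sensitivity 75%: PPV = 65.2%
# sensitivity 95%: PPV = 70.4%

So with specificity and prevalence held constant, raising the sensitivity raises the PPV rather than lowering it.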
If specificity is 100%, then PPV = 100%. Why? Because all unaffected individuals test negative, so any positive results are true positives (an especially desirable trait in a confirmatory test).
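Written out as a formula (Se, Sp, and p here are just my shorthand for sensitivity, specificity, and prevalence):

PPV = \frac{Se \cdot p}{Se \cdot p + (1 - Sp)(1 - p)}

When Sp = 1, the false-positive term (1 - Sp)(1 - p) vanishes, leaving PPV = Se·p / (Se·p) = 1, i.e., 100%.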
Then it occurred to me why the student might have equated an increase in sensitivity with a decrease in specificity, so I added to my response:
The reasoning I gave you earlier refers to a scenario where sensitivity varies but specificity and prevalence remain constant. However, as I talked about in class, if you are using a "cut point" value as the threshold for determining whether a test result is positive or negative, then sensitivity and specificity are inversely related – that is, as sensitivity goes up, specificity goes down, and vice versa.
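To make the cut-point trade-off concrete, here is a small Python sketch with purely hypothetical numbers: a test value averaging 50 in unaffected individuals and 70 in affected individuals, both with a standard deviation of 10, and any result above the cut point called positive.

from statistics import NormalDist

unaffected = NormalDist(mu=50, sigma=10)   # hypothetical test values in unaffected individuals
affected = NormalDist(mu=70, sigma=10)     # hypothetical test values in affected individuals

for cut in (55, 60, 65):
    sensitivity = 1 - affected.cdf(cut)    # affected individuals above the cut point test positive
    specificity = unaffected.cdf(cut)      # unaffected individuals below the cut point test negative
    print(f"cut point {cut}: sensitivity {sensitivity:.0%}, specificity {specificity:.0%}")

# Prints:
# cut point 55: sensitivity 93%, specificity 69%
# cut point 60: sensitivity 84%, specificity 84%
# cut point 65: sensitivity 69%, specificity 93%

Raising the cut point makes the test more specific but less sensitive, and lowering it does the reverse.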
If you're in the VM888 class and you follow the logic of this post, then you'll probably do fine on any questions related to this topic on Friday's exam!