Monday, October 13, 2014

Method Comparison and Medical Decision Limits

In her lecture on Friday, Dr. Flatland talked about (among other things) analytical error, breaking it down into random error (imprecision) and systematic error (bias).  She also discussed the difference between constant and proportional bias when comparing two different methods of measuring the same analyte.  These are certainly important issues to consider when interpreting laboratory data.  Another way to compare methods is error grid analysis, which focuses on the clinical relevance of error.  Sometimes differences between the results of two methods have no practical effect on clinical decision-making; at the other end of the spectrum, such differences can alter clinical decision-making with dangerous consequences; and there are lots of scenarios that fall between the two extremes.  If you're interested in learning more about a specific application of error grid analysis in veterinary medicine, see this article (especially Figure 3):  http://avmajournals.avma.org/doi/pdfplus/10.2460/javma.235.11.1309
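To make the idea concrete, here is a minimal sketch of how an error grid classifies paired results by clinical impact.  The zone boundaries below are entirely hypothetical, chosen only to illustrate the concept; real error grids (like the one in Figure 3 of the article above) draw their zones from actual clinical consequences, not from simple percentage cutoffs.

```python
# A minimal, hypothetical error grid. The relative-error thresholds here
# are illustrative only -- real grids define zones clinically.
def error_grid_zone(reference, test):
    """Classify a paired result by the clinical impact of the disagreement
    between a reference method and a test method."""
    relative_error = abs(test - reference) / reference
    if relative_error <= 0.20:
        return "A"  # no practical effect on clinical decision-making
    elif relative_error <= 0.50:
        return "B"  # decision altered, but with little risk of harm
    else:
        return "C"  # decision altered, with potentially dangerous consequences

# Paired measurements: (reference method, new method)
pairs = [(100, 110), (100, 140), (100, 190)]
for ref, new in pairs:
    print(ref, new, error_grid_zone(ref, new))
```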

Wednesday, October 1, 2014

Sensitivity, Specificity, and Predictive Value

When we covered this topic in class, I emphasized how the prevalence of a disease influences the predictive value of a test for the disease:  i.e., PPV increases as prevalence increases, and NPV increases as prevalence decreases.  More recently, I had a good question from a student about the relationship between a test's sensitivity and its predictive value:

It would seem to me that as sensitivity increases, your PPV would decrease since the likelihood of your test getting a false positive increases as sensitivity increases.  Is this a true assumption or does the relationship even exist like that?

This was from my reply:

The flaw in your thinking is that the likelihood of getting a false positive test result increases as specificity decreases, but not necessarily as sensitivity increases.  If you do the math, you’ll see this is so.  For example, try working through 2 scenarios that both have 20% prevalence and 90% specificity, but that have respective sensitivities of 75% and 95% -- you’ll find that the PPV actually goes UP in the scenario with higher sensitivity.
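If you'd rather let a computer do the arithmetic, here is a short sketch of the two scenarios.  The function just applies the standard definitions of PPV and NPV to a population broken down by prevalence, sensitivity, and specificity:

```python
def predictive_values(prevalence, sensitivity, specificity):
    """Compute PPV and NPV from prevalence, sensitivity, and specificity,
    expressed as fractions of the whole population."""
    tp = prevalence * sensitivity                 # true positives
    fn = prevalence * (1 - sensitivity)           # false negatives
    tn = (1 - prevalence) * specificity           # true negatives
    fp = (1 - prevalence) * (1 - specificity)     # false positives
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return ppv, npv

# The two scenarios from my reply: 20% prevalence, 90% specificity,
# sensitivity of 75% vs. 95%
ppv_low, _ = predictive_values(0.20, 0.75, 0.90)
ppv_high, _ = predictive_values(0.20, 0.95, 0.90)
print(f"PPV at 75% sensitivity: {ppv_low:.1%}")   # about 65.2%
print(f"PPV at 95% sensitivity: {ppv_high:.1%}")  # about 70.4%
```

As promised, PPV goes up, not down, in the scenario with higher sensitivity, because raising sensitivity adds true positives without adding any false positives.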

If specificity is 100%, then PPV = 100%.  Why?  Because all unaffected individuals test neg, so any pos results are true positives (an especially desirable trait in a confirmatory test).

If sensitivity is 100%, then NPV = 100%.  Why?  Because all affected individuals test pos (an especially desirable trait in a screening test), so any neg results are true negatives.
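Both of these special cases fall right out of the definitions.  A quick check, using the same breakdown of the population into true/false positives and negatives:

```python
def predictive_values(prevalence, sensitivity, specificity):
    """PPV and NPV from prevalence, sensitivity, and specificity."""
    tp = prevalence * sensitivity
    fn = prevalence * (1 - sensitivity)
    tn = (1 - prevalence) * specificity
    fp = (1 - prevalence) * (1 - specificity)
    return tp / (tp + fp), tn / (tn + fn)

# 100% specificity -> zero false positives -> PPV = 100%
ppv, _ = predictive_values(0.20, 0.75, 1.00)

# 100% sensitivity -> zero false negatives -> NPV = 100%
_, npv = predictive_values(0.20, 1.00, 0.90)

print(ppv, npv)  # both 1.0, regardless of the prevalence chosen
```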

Then it occurred to me why the student might have equated an increase in sensitivity with a decrease in specificity, so I added to my response:

The reasoning I gave you earlier refers to a scenario where sensitivity varies but specificity and prevalence remain constant.  However, as I talked about in class, if you are using a "cut point" value as the threshold for determining whether a test result is positive or negative, then sensitivity and specificity are inversely related – that is, as sensitivity goes up, specificity goes down, and vice versa.
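You can see the cut-point trade-off by sliding a threshold across two overlapping distributions of values.  The numbers below are made up purely for illustration; the point is only that moving the cut point up trades sensitivity for specificity, and vice versa:

```python
# Hypothetical analyte values (illustrative only): affected animals tend
# to have higher values than unaffected ones, with some overlap.
affected   = [8, 9, 10, 11, 12, 13, 14, 15]
unaffected = [4, 5, 6, 7, 8, 9, 10, 11]

def sens_spec(cut_point):
    """A result at or above the cut point is called positive."""
    sens = sum(v >= cut_point for v in affected) / len(affected)
    spec = sum(v < cut_point for v in unaffected) / len(unaffected)
    return sens, spec

for cp in (7, 9, 11, 13):
    sens, spec = sens_spec(cp)
    print(f"cut point {cp}: sensitivity {sens:.0%}, specificity {spec:.0%}")
```

A low cut point catches every affected animal (high sensitivity) but flags many unaffected ones (low specificity); a high cut point does the reverse.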
   
If you're in the VM888 class and you follow the logic of this post, then you'll probably do fine on any questions related to this topic on Friday's exam!