
Multiple Choice

In evaluating a clinical prediction model, which metric measures its ability to correctly separate individuals with and without the outcome?

Explanation:
This question concerns discrimination: a model's ability to separate individuals who experience the outcome from those who do not. The metric that captures this is the area under the ROC curve (AUC), also called the C-statistic (concordance statistic). It equals the probability that, for a randomly chosen pair consisting of one person with the outcome and one without, the model assigns the higher predicted risk to the person with the outcome. Because it evaluates the ranking of predictions rather than performance at any particular cut-off, the AUC is threshold-independent.

Calibration plots and the calibration slope assess calibration, that is, how well predicted probabilities agree with observed event frequencies, not the model's ability to distinguish between groups. The Brier score measures the overall accuracy of the predicted probabilities, blending calibration and discrimination, so it does not isolate discrimination as cleanly as the AUC does.

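The pairwise interpretation of the AUC can be computed directly: over all (outcome, non-outcome) pairs, count how often the outcome case receives the higher predicted risk, with ties counting half. A minimal sketch in Python, using made-up predicted risks; the function names and data are illustrative, not from any statistics library:

```python
from itertools import product

def pairwise_auc(y_true, y_score):
    """AUC via its probabilistic definition: the fraction of
    (outcome, non-outcome) pairs in which the outcome case has
    the higher predicted risk (ties count as one half)."""
    pos = [s for y, s in zip(y_true, y_score) if y == 1]
    neg = [s for y, s in zip(y_true, y_score) if y == 0]
    concordant = sum(1.0 if p > n else 0.5 if p == n else 0.0
                     for p, n in product(pos, neg))
    return concordant / (len(pos) * len(neg))

def brier_score(y_true, y_score):
    """Mean squared difference between predicted probability and
    the observed 0/1 outcome (lower is better)."""
    return sum((p - y) ** 2 for y, p in zip(y_true, y_score)) / len(y_true)

# Hypothetical data: predicted risks for three people with the
# outcome and three without.
y_true  = [1, 1, 1, 0, 0, 0]
y_score = [0.9, 0.8, 0.4, 0.5, 0.3, 0.2]

print(round(pairwise_auc(y_true, y_score), 3))  # → 0.889 (8 of 9 pairs concordant)
print(round(brier_score(y_true, y_score), 3))   # → 0.132
```

Note the threshold-independence mentioned above: any monotone transformation of the scores (say, doubling them) leaves the ranking, and therefore the AUC, unchanged, while the Brier score does change because it depends on the actual probability values.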
