Which statement describes calibration and discrimination, and how would you improve them?


Multiple Choice

Which statement describes calibration and discrimination, and how would you improve them?

Explanation:

The key idea tested here is that calibration and discrimination describe different aspects of a risk prediction model, and that each is improved in a different way. Calibration is how well the predicted probabilities match the actual observed outcomes across the range of risk: if you predict a 15% risk for a group, about 15% of that group should experience the event. It is typically assessed with calibration plots, calibration-in-the-large, and the calibration slope. Discrimination is the model’s ability to separate those who will have the event from those who will not, i.e., how well higher predicted risks correspond to who actually experiences the outcome; it is usually summarized by the area under the ROC curve (AUC, also called the c-statistic).
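To make the two properties concrete, here is a minimal sketch in plain Python (the data are invented for illustration, not taken from the question): AUC is computed as the proportion of event/non-event pairs in which the event case received the higher predicted risk, and calibration-in-the-large as the gap between the observed event rate and the mean predicted risk.

```python
def auc(y_true, y_prob):
    """Discrimination: probability that a randomly chosen event case
    gets a higher predicted risk than a randomly chosen non-event case
    (ties count as half)."""
    pos = [p for y, p in zip(y_true, y_prob) if y == 1]
    neg = [p for y, p in zip(y_true, y_prob) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def calibration_in_the_large(y_true, y_prob):
    """Calibration-in-the-large: observed event rate minus mean
    predicted risk. Values near zero mean the model is, on average,
    neither over- nor under-predicting."""
    return sum(y_true) / len(y_true) - sum(y_prob) / len(y_prob)

# Invented example: 8 people, predicted risks and observed outcomes
y_true = [0, 0, 1, 0, 1, 1, 0, 1]
y_prob = [0.1, 0.2, 0.8, 0.3, 0.7, 0.9, 0.4, 0.6]
print(auc(y_true, y_prob))                       # → 1.0 (every event outranks every non-event)
print(calibration_in_the_large(y_true, y_prob))  # ≈ 0.0 (mean predicted matches observed rate)
```

Note that the two metrics can disagree: a model that ranks patients perfectly (AUC = 1) can still be badly miscalibrated if all its probabilities are shifted up or down.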

Improving them typically involves recalibration, which aligns predicted probabilities with observed frequencies by adjusting the model’s intercept and slope; adding predictors, which supplies more information and sharpens the model’s ability to distinguish risk levels; and regularization, which reduces overfitting and stabilizes estimates so predictions generalize better to new data. Together these steps make predictions both align with real outcomes and better rank individuals by risk.

The other answer choices mix up the concepts or rely on metrics that measure the wrong property (for instance, treating discrimination as agreement and calibration as separation, or misapplying the AUC and calibration slope), so they do not accurately capture what calibration and discrimination assess or how they are typically improved.
