Which statement about calibration plots and discrimination is true?


Multiple Choice

Which statement about calibration plots and discrimination is true?

Explanation:

In predictive modeling, these are two different ways to assess how well a model performs. Calibration is about how closely the predicted probabilities line up with what actually happens. A calibration plot lets you see this: group individuals by their predicted risk, then check the observed event rate in each group. If the model is well calibrated, the observed outcomes match the predictions across the whole risk range, often shown as points hugging the 45-degree line. Calibration is about agreement between what the model predicts and what actually occurs.
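The grouping step behind a calibration plot can be sketched in a few lines. This is a minimal illustration, not code from any particular package; the function name and equal-size binning scheme are assumptions for the example.

```python
def calibration_bins(pred_probs, outcomes, n_bins=4):
    """Return (mean predicted probability, observed event rate) per risk bin.

    pred_probs: predicted event probabilities in [0, 1]
    outcomes:   observed events, coded 0 or 1
    """
    # Sort individuals by predicted risk, then split into equal-sized groups
    order = sorted(range(len(pred_probs)), key=lambda i: pred_probs[i])
    size = len(order) // n_bins
    points = []
    for b in range(n_bins):
        # Last bin absorbs any remainder so every individual is used
        idx = order[b * size:(b + 1) * size] if b < n_bins - 1 else order[b * size:]
        mean_pred = sum(pred_probs[i] for i in idx) / len(idx)
        obs_rate = sum(outcomes[i] for i in idx) / len(idx)
        points.append((mean_pred, obs_rate))
    return points
```

Plotting these points against the 45-degree diagonal gives the calibration plot: a well-calibrated model yields points close to the line in every risk group.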

Discrimination, on the other hand, is the model’s ability to distinguish those who will experience the event from those who will not. It is a ranking property: individuals who go on to have the event should receive higher predicted risks than those who do not. Discrimination is typically summarized by measures such as the c-statistic or AUC, which reflect how well the model separates cases from non-cases.
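As a sketch of that ranking property, the c-statistic (equivalently, the AUC for a binary outcome) can be computed directly from its definition as a concordance probability over case/non-case pairs. The function name here is illustrative, not from any specific library.

```python
def c_statistic(pred_probs, outcomes):
    """Fraction of case/non-case pairs ranked correctly (ties count 0.5)."""
    cases = [p for p, y in zip(pred_probs, outcomes) if y == 1]
    noncases = [p for p, y in zip(pred_probs, outcomes) if y == 0]
    concordant = 0.0
    for pc in cases:
        for pn in noncases:
            if pc > pn:
                concordant += 1.0   # case ranked above non-case: concordant
            elif pc == pn:
                concordant += 0.5   # tied prediction: half credit
    return concordant / (len(cases) * len(noncases))
```

A value of 1.0 means every case was ranked above every non-case (perfect separation), while 0.5 means the rankings are no better than chance. Note that this says nothing about calibration: multiplying all predictions by 0.1 leaves the c-statistic unchanged.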

The statement correctly captures these ideas: calibration concerns agreement between predicted and observed risk, while discrimination concerns separation of those who experience the event from those who do not. Neither concept requires time-to-event data; both apply to plain binary outcomes as well as to survival data, where time-specific versions of these measures are available. Calibration plots also do not replace external validation, which is still needed to assess how well the model generalizes to new data.
