
Multiple Choice

What is a consequence of multicollinearity among predictors?

Correct answer: It inflates the standard errors of the estimated coefficients.

Explanation:
Multicollinearity arises when predictors are highly correlated, so the information each one adds overlaps with what the others already provide. The estimated coefficients then become unstable from sample to sample and, crucially, their standard errors are inflated. Larger standard errors shrink the t-statistics, which raises p-values and weakens the evidence that any single predictor has a unique effect while the others are held constant.

In practice, the model can still fit the data reasonably well overall, but the individual contributions of correlated predictors are hard to interpret because their estimated effects can trade off against one another. The residual variance, the portion of outcome variability the model leaves unexplained, is not reduced by multicollinearity itself. Likewise, R-squared reflects overall fit and can stay the same or even rise when correlated predictors are added, so it is not a reliable indicator that multicollinearity is present.

The key consequence, then, is the inflation of standard errors and the resulting difficulty in detecting and interpreting unique predictor effects. If it needs to be addressed, options include removing or combining predictors, or using a regularization method such as ridge regression.
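The mechanism is easy to see in a quick simulation. The sketch below (illustrative only, not part of the original explanation, and assuming numpy and statsmodels are installed) fits the same outcome once with two nearly collinear predictors and once with an independent pair, then prints the coefficient standard errors alongside their variance inflation factors (VIFs); the collinear fit should show markedly larger values of both.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)
n = 200

# Two highly correlated predictors vs. two independent ones.
x1 = rng.normal(size=n)
x2_collinear = x1 + rng.normal(scale=0.2, size=n)   # nearly a copy of x1
x2_independent = rng.normal(size=n)

y = 1.0 + 0.5 * x1 + 0.5 * x2_collinear + rng.normal(size=n)

for label, x2 in [("collinear", x2_collinear), ("independent", x2_independent)]:
    X = sm.add_constant(np.column_stack([x1, x2]))
    fit = sm.OLS(y, X).fit()
    # fit.bse holds the coefficient standard errors; VIF quantifies the inflation.
    vifs = [variance_inflation_factor(X, i) for i in range(1, X.shape[1])]
    print(label, "SEs:", np.round(fit.bse[1:], 3), "VIFs:", np.round(vifs, 2))
```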

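On the remediation point, here is a minimal sketch (again my illustration, assuming scikit-learn) of how ridge regression stabilizes the estimates: across bootstrap resamples, the OLS coefficients on two collinear predictors trade off against each other, while the ridge estimates stay close together.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

rng = np.random.default_rng(1)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.2, size=n)        # nearly collinear with x1
X = np.column_stack([x1, x2])
y = 1.0 + 0.5 * x1 + 0.5 * x2 + rng.normal(size=n)

# OLS coefficients vary noticeably across resamples; ridge's L2 penalty
# (its strength set by alpha) keeps the two estimates closer and more stable.
for _ in range(3):
    idx = rng.integers(0, n, size=n)           # bootstrap resample
    ols = LinearRegression().fit(X[idx], y[idx])
    ridge = Ridge(alpha=10.0).fit(X[idx], y[idx])
    print("OLS:", np.round(ols.coef_, 2), " ridge:", np.round(ridge.coef_, 2))
```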
