Model Misspecification and Robustness of Observed-Score Test Equating Using Propensity Scores

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published
Journal publication date: 31/10/2023
Journal: Journal of Educational and Behavioral Statistics
Issue number: 5
Volume: 48
Number of pages: 33
Pages (from-to): 603-635
Publication status: Published
Early online date: 9/05/23
Original language: English

Abstract

This study explores the usefulness of covariates for equating test scores from nonequivalent test groups. The covariates are summarized by an estimated propensity score, which serves as a proxy for latent ability to balance the test groups. The objective is to assess the sensitivity of the equated scores to various misspecifications of the propensity score model. The study assumes a parametric form for the propensity score and evaluates the effects of various misspecification scenarios on equating error. The results, based on both simulated and real testing data, show that (1) omitting an important covariate leads to biased estimates of the equated scores, (2) misspecifying a nonlinear relationship between the covariates and test scores increases the equating standard error in the tails of the score distributions, and (3) the equating estimators are robust to omitting a second-order term and to using an incorrect link function in the propensity score estimation model. The findings demonstrate that auxiliary information is beneficial for test score equating in complex settings, but they also highlight the challenge of making fair comparisons between nonequivalent test groups in the absence of common items. The study identifies scenarios in which equating performance is acceptable and scenarios in which it is problematic, provides practical guidelines, and points to areas for further investigation.
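
The abstract outlines the approach: background covariates are combined into an estimated propensity score, which stands in for latent ability and is used to balance the nonequivalent groups before equating. The sketch below is a minimal, hypothetical illustration of that idea rather than the authors' estimator: it simulates two nonequivalent groups on a 0-40 score scale, fits a logistic regression as the assumed parametric propensity model, and performs a simple equipercentile equating within propensity-score strata. The covariates, score scale, quintile stratification, and sample-size cutoff are all illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# --- Simulate two nonequivalent groups taking different test forms ---
n = 4000
x1 = rng.normal(0.0, 1.0, n)                # e.g., a prior achievement measure
x2 = rng.binomial(1, 0.5, n).astype(float)  # e.g., a background indicator
ability = 0.8 * x1 + 0.4 * x2 + rng.normal(0.0, 0.6, n)

# Group membership depends on the covariates, so the groups are nonequivalent
group = rng.binomial(1, 1 / (1 + np.exp(-(0.5 * x1 + 0.7 * x2 - 0.35))))

# Number-correct scores on a 0-40 scale; form Y is slightly harder than form X
score_x = np.clip(np.round(20 + 5 * ability + rng.normal(0, 2, n)), 0, 40)
score_y = np.clip(np.round(18 + 5 * ability + rng.normal(0, 2, n)), 0, 40)

# --- Step 1: estimate the propensity score from the covariates ---
X = np.column_stack([x1, x2])
ps = LogisticRegression().fit(X, group).predict_proba(X)[:, 1]

# --- Step 2: equipercentile equating within propensity-score strata ---
def equipercentile(xs, ys, grid):
    """Map each grid score on form X to the form-Y score at the same percentile."""
    pr = np.array([np.mean(xs <= g) for g in grid])  # percentile ranks on form X
    return np.quantile(ys, np.clip(pr, 0.0, 1.0))    # matching form-Y quantiles

grid = np.arange(0, 41)
edges = np.quantile(ps, [0.2, 0.4, 0.6, 0.8])
stratum = np.digitize(ps, edges)                     # five propensity-score strata

equated, total_w = np.zeros(len(grid)), 0.0
for s in range(5):
    xs = score_x[(stratum == s) & (group == 0)]      # form-X takers in stratum s
    ys = score_y[(stratum == s) & (group == 1)]      # form-Y takers in stratum s
    if len(xs) < 20 or len(ys) < 20:
        continue                                     # skip sparse strata
    w = np.mean(stratum == s)
    equated += w * equipercentile(xs, ys, grid)
    total_w += w
equated /= total_w

# Equated form-Y equivalents for selected form-X scores
print(dict(zip(grid[::10].tolist(), np.round(equated[::10], 1))))
```

Under this toy setup, refitting the propensity model with x2 deliberately dropped gives a quick way to probe the kind of omitted-covariate bias the abstract describes.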