Thai Ong, Becky Krumm, Margaret Wells, Susan Read, Linda Harris, Andrea Altomare, Miguel Paniagua

Academic Medicine: Volume 99 - Issue 7 - Pages 778-783


This study examined score comparability between in-person and remote proctored administrations of the 2020 Internal Medicine In-Training Examination (IM-ITE) during the COVID-19 pandemic. Analysis of data from 27,115 IM residents revealed statistically significant but educationally nonsignificant differences in predicted scores, with slightly larger variations observed for first-year residents. Overall, performance did not substantially differ between the two testing modalities, supporting the continued use of remote proctoring for the IM-ITE amidst pandemic-related disruptions.

Daniel Jurich, Chunyan Liu

Applied Measurement in Education: Volume 36, Issue 4, Pages 326-339


This study examines strategies for detecting item parameter drift in small-sample equating, which is crucial for maintaining score comparability on high-stakes exams. Results suggest that methods such as mINFIT, mOUTFIT, and Robust-z effectively mitigate the effects of drifting anchor items, while caution is advised with the Logit Difference approach. Recommendations are provided for practitioners managing item parameter drift in small-sample settings.

D. Jurich, S.A. Santen, M. Paniagua, A. Fleming, V. Harnik, A. Pock, A. Swan-Sein, M.A. Barone, M. Daniel

Academic Medicine: Volume 95 - Issue 1 - Pages 111-121


This paper investigates three questions: the effect of a change in the timing of United States Medical Licensing Examination (USMLE) Step 1 on Step 2 Clinical Knowledge (CK) scores, the effect of lag time on Step 2 CK performance, and the relationship of incoming Medical College Admission Test (MCAT) scores to Step 2 CK performance before and after the change.