Research Library Publications
Thai Ong, Becky Krumm, Margaret Wells, Susan Read, Linda Harris, Andrea Altomare, Miguel Paniagua

Academic Medicine: Volume 99 - Issue 7 - Pages 778-783

This study examined score comparability between in-person and remote proctored administrations of the 2020 Internal Medicine In-Training Examination (IM-ITE) during the COVID-19 pandemic. Analysis of data from 27,115 IM residents revealed statistically significant but educationally nonsignificant differences in predicted scores, with slightly larger variations observed for first-year residents. Overall, performance did not substantially differ between the two testing modalities, supporting the continued use of remote proctoring for the IM-ITE amidst pandemic-related disruptions.

Daniel Jurich, Chunyan Liu

Applied Measurement in Education: Volume 36 - Issue 4 - Pages 326-339

This study examines strategies for detecting item parameter drift in small-sample equating, which is crucial for maintaining score comparability in high-stakes exams. Results suggest that methods such as mINFIT, mOUTFIT, and Robust-z effectively mitigate the effects of drifting anchor items, while caution is advised with the Logit Difference approach. Recommendations are provided to help practitioners manage item parameter drift in small-sample settings.
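As an illustration of the Robust-z approach named above (a minimal sketch under common assumptions, not code from the paper), anchor-item difficulty shifts between two administrations can be standardized against their median and interquartile range, with large absolute values flagged as possible drift. The data, the 0.74 x IQR scaling, and the 2.7 cutoff below are illustrative choices, not the authors' settings.

```python
# Hypothetical example: flag drifting anchor items with a robust-z statistic.
# Assumes Rasch difficulty estimates for the same anchors from two administrations.
import numpy as np

def robust_z_flags(b_old, b_new, cutoff=2.7):
    """Return robust-z values and drift flags for anchor-item difficulty shifts."""
    d = np.asarray(b_new, dtype=float) - np.asarray(b_old, dtype=float)
    med = np.median(d)
    iqr = np.percentile(d, 75) - np.percentile(d, 25)
    scale = 0.74 * iqr                      # robust stand-in for the standard deviation
    z = (d - med) / scale
    return z, np.abs(z) > cutoff            # True = candidate drifting anchor

# Item 3 shifts far more than the other anchors and is the only one flagged.
b_old = [-1.20, -0.40, 0.10, 0.55, 1.30, -0.75]
b_new = [-1.18, -0.48, 0.95, 0.65, 1.24, -0.71]
z, flags = robust_z_flags(b_old, b_new)
print(np.round(z, 2), flags)
```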
 

Jonathan D. Rubright, Michael Jodoin, Stephanie Woodward, Michael A. Barone

Academic Medicine: Volume 97 - Issue 5 - Pages 718-722

The purpose of this 2019–2020 study was to statistically identify and qualitatively review USMLE Step 1 exam questions (items) using differential item functioning (DIF) methodology.
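For context only, and not as the authors' procedure, the sketch below shows a Mantel-Haenszel screen, one standard statistical approach to identifying potential DIF: examinees are matched on total score, and a common odds ratio compares correct-response rates for reference- and focal-group examinees within each score stratum. The data, group labels, and matching variable are illustrative assumptions.

```python
# Hypothetical example: Mantel-Haenszel DIF statistic for one dichotomous item.
import numpy as np
from collections import defaultdict

def mantel_haenszel_odds_ratio(item_correct, group, total_score):
    """MH common odds ratio; values near 1.0 suggest little or no DIF.

    item_correct: 0/1 responses to the studied item
    group:        0 = reference group, 1 = focal group
    total_score:  matching variable, e.g. total test score
    """
    strata = defaultdict(lambda: np.zeros((2, 2)))
    for y, g, s in zip(item_correct, group, total_score):
        strata[s][g, 1 - y] += 1            # rows: group, columns: correct / incorrect
    num = den = 0.0
    for t in strata.values():
        n = t.sum()
        if n > 0:
            num += t[0, 0] * t[1, 1] / n    # reference correct * focal incorrect
            den += t[0, 1] * t[1, 0] / n    # reference incorrect * focal correct
    return num / den if den > 0 else float("nan")

# Toy usage with made-up responses for 8 examinees.
resp  = [1, 0, 1, 1, 0, 1, 0, 0]
grp   = [0, 0, 0, 0, 1, 1, 1, 1]
score = [20, 15, 20, 25, 20, 15, 20, 25]
print(mantel_haenszel_odds_ratio(resp, grp, score))
```

In operational practice the odds ratio is often re-expressed on the ETS delta scale and combined with qualitative content review, consistent with the "statistically identify and qualitatively review" framing above.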

L. E. Peterson, J. R. Boulet, B. E. Clauser

Academic Medicine: Volume 95 - Issue 9 - Pages 1396-1403

The objective of this study was to evaluate the associations of all required standardized examinations in medical education with American Board of Family Medicine (ABFM) certification examination scores and eventual ABFM certification.