Victoria Yaneva, Peter Baldwin, Daniel P. Jurich, Kimberly Swygert, Brian E. Clauser

Academic Medicine: Volume 99, Issue 2, Pages 192-197

This report investigates the potential of artificial intelligence (AI) agents, exemplified by ChatGPT, to perform on the United States Medical Licensing Examination (USMLE), following reports of ChatGPT's successful performance on sample items.

Thai Ong, Becky Krumm, Margaret Wells, Susan Read, Linda Harris, Andrea Altomare, Miguel Paniagua

Academic Medicine: Volume 99, Issue 7, Pages 778-783

This study examined score comparability between in-person and remotely proctored administrations of the 2020 Internal Medicine In-Training Examination (IM-ITE) during the COVID-19 pandemic. Analysis of data from 27,115 IM residents revealed statistically significant but educationally nonsignificant differences in predicted scores, with slightly larger variations observed for first-year residents. Overall, performance did not substantially differ between the two testing modalities, supporting the continued use of remote proctoring for the IM-ITE amid pandemic-related disruptions.

Daniel Jurich, Chunyan Liu

Applied Measurement in Education: Volume 36, Issue 4, Pages 326-339

This study examines strategies for detecting item parameter drift in small-sample equating, which is crucial for maintaining score comparability in high-stakes examinations. Results suggest that methods such as mINFIT, mOUTFIT, and Robust-z effectively mitigate the effects of drifting anchor items, while caution is advised with the Logit Difference approach. Recommendations are provided for practitioners managing item parameter drift in small-sample settings.