
RESEARCH LIBRARY

View the latest publications from members of the NBME research team

Posted: | John Norcini, Irina Grabovsky, Michael A. Barone, M. Brownell Anderson, Ravi S. Pandian, Alex J. Mechaber

Academic Medicine: Volume 99 - Issue 3 - p 325-330


This retrospective cohort study investigates the association between United States Medical Licensing Examination (USMLE) scores and patient outcomes across 196,881 hospitalizations in Pennsylvania over 3 years.

Posted: | Victoria Yaneva, Peter Baldwin, Daniel P. Jurich, Kimberly Swygert, Brian E. Clauser

Academic Medicine: Volume 99 - Issue 2 - p 192-197


This report investigates how well artificial intelligence (AI) agents, exemplified by ChatGPT, can perform on the United States Medical Licensing Examination (USMLE), following reports of ChatGPT's successful performance on sample items.

Posted: | Daniel Jurich, Chunyan Liu

Applied Measurement in Education: Volume 36, Issue 4, Pages 326-339


This study examines strategies for detecting item parameter drift in small-sample equating, which is crucial for maintaining score comparability on high-stakes exams. Results suggest that methods such as mINFIT, mOUTFIT, and Robust-z effectively mitigate the effects of drifting anchor items, while caution is advised with the Logit Difference approach. Recommendations are provided to help practitioners manage item parameter drift in small-sample settings.

Posted: | Janet Mee, Ravi Pandian, Justin Wolczynski, Amy Morales, Miguel Paniagua, Polina Harik, Peter Baldwin, Brian E. Clauser

Advances in Health Sciences Education


Recent advances make it feasible to replace multiple-choice questions (MCQs) with short-answer questions (SAQs) in high-stakes assessments, but prior research has often relied on small samples under low-stakes conditions and lacked response-time data. This study assesses difficulty, discrimination, and response time in a large-scale, high-stakes context.

Posted: | Daniel P. Jurich, Matthew J. Madison

Educational Assessment


This study proposes four indices to quantify item influence and distinguishes them from other available item and test measures. We use simulation methods to evaluate each index and provide guidelines for its interpretation, followed by a real-data application that illustrates their use in practice. We also discuss theoretical considerations regarding when influence presents a psychometric concern, as well as practical issues such as how the indices behave when influence imbalance is reduced.

Posted: | King Yiu Suen, Victoria Yaneva, Le An Ha, Janet Mee, Yiyun Zhou, Polina Harik

Proceedings of the 18th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2023), Pages 443-447


This paper presents the ACTA system, which performs automated short-answer grading in the domain of high-stakes medical exams. The system builds on previous work on neural similarity-based grading approaches by applying them to the medical domain and using contrastive learning to optimize the similarity metric.

Posted: | Irina Grabovsky, Jerusha J. Henderek, Ulana A. Luciw-Dubas, Brent Pierce, Soren Campbell, Katherine S. Monroe

Journal of Medical Education and Curricular Development: Volume 10

In-training examinations (ITEs) are a popular teaching tool for certification programs. This study examines the relationship between examinees’ performance on the National Commission for Certification of Anesthesiologist Assistants (NCCAA) ITE and the high-stakes NCCAA Certification Examination.

Posted: | Shana D. Stites, Jonathan D. Rubright, Kristin Harkins, Jason Karlawish

International Journal of Geriatric Psychiatry: Volume 38 - Issue 6, e5939


This observational study examined how awareness of diagnosis predicted changes in cognition and quality of life (QOL) 1 year later in older adults with either normal cognition or a dementia diagnosis.

Posted: | Victoria Yaneva (editor), Matthias von Davier (editor)

Advancing Natural Language Processing in Educational Assessment


This book examines the use of natural language technology in educational testing, measurement, and assessment. Recent developments in natural language processing (NLP) have enabled large-scale educational applications, though scholars and professionals may lack a shared understanding of the strengths and limitations of NLP in assessment as well as the challenges that testing organizations face in implementation. This first-of-its-kind book provides evidence-based practices for the use of NLP-based approaches to automated text and speech scoring, language proficiency assessment, technology-assisted item generation, gamification, learner feedback, and beyond.

Posted: | Victoria Yaneva, Peter Baldwin, Le An Ha, Christopher Runyon

Advancing Natural Language Processing in Educational Assessment: Pages 167-182


This chapter discusses the evolution of natural language processing (NLP) approaches to text representation and how different ways of representing text can be utilized for a relatively understudied task in educational assessment – that of predicting item characteristics from item text.