
RESEARCH LIBRARY

View the latest publications from members of the NBME research team

Showing 1 - 7 of 7 Research Library Publications
Posted by: Karen E. Hauer, Pamela M. Williams, Julie S. Byerley, Jennifer L. Swails, Michael A. Barone

Academic Medicine: Volume 98, Issue 2, Pages 162-170

The US medical education transition from school to residency is resource-intensive. The Coalition for Physician Accountability aims to improve it, emphasizing learner support, diversity, and minimizing conflicts. This study explores key tensions and offers strategies to align the transition with ideal goals, aiding educators and organizations in implementing recommendations.

Posted by: Stanley J. Hamstra, Monica M. Cuddy, Daniel Jurich, Kenji Yamazaki, John Burkhardt, Eric S. Holmboe, Michael A. Barone, Sally A. Santen

Academic Medicine: Volume 96, Issue 9, Pages 1324-1331

This study examines associations between USMLE Step 1 and Step 2 Clinical Knowledge (CK) scores and ACGME emergency medicine (EM) milestone ratings.

Posted by: Z. Cui, C. Liu, Y. He, H. Chen

Journal of Educational Measurement: Volume 55, Issue 4, Pages 582-594

This article proposes and evaluates a new method that implements computerized adaptive testing (CAT) without any restriction on item review. In particular, it evaluates the new method in terms of the accuracy of ability estimates and its robustness against test-manipulation strategies. The study shows that the newly proposed method offers a win-win situation: examinees have full freedom to review and change answers, while the impact of test-manipulation strategies is undermined.
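The abstract above concerns the core CAT loop: estimate the examinee's ability from the responses so far, then administer the unseen item that is most informative at that estimate. The sketch below is only a generic illustration of that machinery under the Rasch model, not the method proposed in the article; the difficulties, the gradient-ascent update, and all function names are illustrative assumptions.

```python
import math

# Illustrative sketch of one CAT cycle under the Rasch model.
# NOT the article's proposed method; all numbers here are assumptions.

def prob_correct(theta, b):
    """Rasch model: probability that an examinee with ability theta
    answers an item of difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def estimate_theta(responses, steps=50, lr=0.1):
    """Crude gradient ascent on the log-likelihood of a list of
    (difficulty, score) pairs, where score is 1 (correct) or 0."""
    theta = 0.0
    for _ in range(steps):
        grad = sum(score - prob_correct(theta, b) for b, score in responses)
        theta += lr * grad
    return theta

def next_item(theta, difficulties, administered):
    """Select the unadministered item with maximum Fisher information
    at the current ability estimate (for the Rasch model, p * (1 - p))."""
    def info(b):
        p = prob_correct(theta, b)
        return p * (1 - p)
    candidates = [b for b in difficulties if b not in administered]
    return max(candidates, key=info)
```

Because `estimate_theta` depends only on the current set of (difficulty, score) pairs, an examinee who revises an earlier answer simply changes one pair and triggers re-estimation; this is one way to see why lifting the review restriction is an estimation question rather than a delivery question, though how to keep such a design robust to manipulation is exactly what the article studies.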

Posted by: S. Tackett, M. Raymond, R. Desai, S. A. Haist, A. Morales, S. Gaglani, S. G. Clyman

Medical Teacher: Volume 40, Issue 8, Pages 838-841

Adaptive learning requires frequent and valid assessments for learners to track progress against their goals. This study determined if multiple-choice questions (MCQs) “crowdsourced” from medical learners could meet the standards of many large-scale testing programs.

Posted by: P.J. Hicks, M.J. Margolis, C.L. Carraccio, B.E. Clauser, K. Donnelly, H.B. Fromme, K.A. Gifford, S.E. Poynter, D.J. Schumacher, A. Schwartz & the PMAC Module 1 Study Group

Medical Teacher: Volume 40, Issue 11, Pages 1143-1150

This study explores a novel milestone-based workplace assessment system implemented in 15 pediatrics residency programs. The system provided web-based multisource feedback and structured clinical observation instruments that could be completed on any computer or mobile device, along with monthly feedback reports that included competency-level scores and recommendations for improvement.

Posted by: I. Kirsch, W. Thorn, M. von Davier

Quality Assurance in Education: Volume 26, Issue 2, Pages 150-152

An introduction to a special issue of Quality Assurance in Education featuring papers based on presentations at a two-day international seminar on managing the quality of data collection in large-scale assessments.

Posted by: M. von Davier

Psychometrika: Volume 83, Pages 847-857 (2018)

Utilizing algorithms to generate items in educational and psychological testing is an active area of research for obvious reasons: test items are predominantly written by humans, in most cases by content experts, who represent a limited and potentially costly resource. Using algorithms instead has the appeal of providing an unlimited resource for this crucial part of assessment development.