Victoria Yaneva, Peter Baldwin, Daniel P. Jurich, Kimberly Swygert, Brian E. Clauser

Academic Medicine: Volume 99 - Issue 2 - p 192-197

This report investigates how well artificial intelligence (AI) agents, exemplified by ChatGPT, can perform on the United States Medical Licensing Examination (USMLE), following reports of ChatGPT's successful performance on sample items.

Andrew A. White, Ann M. King, Angelo E. D’Addario, Karen Berg Brigham, Suzanne Dintzis, Emily E. Fay, Thomas H. Gallagher, Kathleen M. Mazor

JMIR Medical Education: Volume 8 - Issue 2 - e30988

This article compares the reliability of two assessment groups (crowdsourced laypeople and patient advocates) in rating physician error disclosure communication skills using the Video-Based Communication Assessment app.

J. Salt, P. Harik, M. A. Barone

Academic Medicine: March 2019 - Volume 94 - Issue 3 - p 314-316

The United States Medical Licensing Examination Step 2 Clinical Skills (CS) exam uses physician raters to evaluate patient notes written by examinees. In this Invited Commentary, the authors describe the ways in which the Step 2 CS exam could benefit from adopting a computer-assisted scoring approach that combines physician raters’ judgments with computer-generated scores based on natural language processing (NLP).

S. Tackett, M. Raymond, R. Desai, S. A. Haist, A. Morales, S. Gaglani, S. G. Clyman

Medical Teacher: Volume 40 - Issue 8 - p 838-841

 

Adaptive learning requires frequent and valid assessments for learners to track progress against their goals. This study determined if multiple-choice questions (MCQs) “crowdsourced” from medical learners could meet the standards of many large-scale testing programs.