
RESEARCH LIBRARY

View the latest publications from members of the NBME research team

Posted: | Christopher Runyon, Polina Harik, Michael Barone

Diagnosis: Volume 10, Issue 1, Pages 54-60


This op-ed discusses the advantages of leveraging natural language processing (NLP) in the assessment of clinical reasoning. It also provides an overview of INCITE (the Intelligent Clinical Text Evaluator), a scalable NLP-based computer-assisted scoring system developed to measure clinical reasoning ability as assessed in the written documentation portion of the now-discontinued USMLE Step 2 Clinical Skills examination.

Posted: | Hanin Rashid, Christopher Runyon, Jesse Burk-Rafel, Monica M. Cuddy, Liselotte Dyrbye, Katie Arnhart, Ulana Luciw-Dubas, Hilit F. Mechaber, Steve Lieberman, Miguel Paniagua

Academic Medicine: Volume 97, Issue 11S, Page S176


As Step 1 transitions to pass/fail scoring, it is worth considering how score goals affect student wellness. This study examines the relationship between goal score, gender, and students’ self-reported anxiety, stress, and overall distress immediately following their completion of Step 1.

Posted: | Erfan Khalaji, Sukru Eraslan, Yeliz Yesilada, Victoria Yaneva

Behaviour & Information Technology


This study builds on prior work that developed a machine-learning classifier, trained on gaze data from web-related tasks, to detect ASD in adults. Using the same data, we show that a new data pre-processing approach, combined with an exploration of different classification algorithms, yields higher classification accuracy than that prior work.
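The study's actual features, pre-processing steps, and algorithms are not described in this summary, so the sketch below is purely illustrative: it uses hypothetical fixation-style features, min-max scaling as a stand-in pre-processing step, and two simple classifiers (nearest-centroid and 1-nearest-neighbor) to show how a pre-processing change plus an algorithm comparison can be evaluated with leave-one-out accuracy.

```python
# Illustrative sketch only -- feature names, scaling choice, and classifiers
# are stand-ins, not the methods used in the paper.
import math

def min_max_scale(rows):
    """Scale each feature column to the [0, 1] range."""
    cols = list(zip(*rows))
    lo = [min(c) for c in cols]
    hi = [max(c) for c in cols]
    return [[(v - l) / (h - l) if h > l else 0.0
             for v, l, h in zip(r, lo, hi)] for r in rows]

def nearest_centroid(train, labels, x):
    """Predict the label whose class centroid is closest to x."""
    best, best_d = None, float("inf")
    for lab in set(labels):
        pts = [r for r, l in zip(train, labels) if l == lab]
        centroid = [sum(col) / len(pts) for col in zip(*pts)]
        d = math.dist(x, centroid)
        if d < best_d:
            best, best_d = lab, d
    return best

def one_nn(train, labels, x):
    """Predict the label of the single nearest training point."""
    i = min(range(len(train)), key=lambda j: math.dist(x, train[j]))
    return labels[i]

def loo_accuracy(rows, labels, clf):
    """Leave-one-out accuracy of classifier `clf` on (rows, labels)."""
    hits = 0
    for i in range(len(rows)):
        train = rows[:i] + rows[i + 1:]
        tl = labels[:i] + labels[i + 1:]
        hits += clf(train, tl, rows[i]) == labels[i]
    return hits / len(rows)

# Toy gaze-like data: [fixation count, mean fixation duration in ms].
gaze = [[12, 180], [14, 200], [11, 190], [13, 210],
        [25, 320], [27, 300], [24, 310], [26, 330]]
labels = [0, 0, 0, 0, 1, 1, 1, 1]

scaled = min_max_scale(gaze)
results = {name: loo_accuracy(scaled, labels, clf)
           for name, clf in [("nearest-centroid", nearest_centroid),
                             ("1-NN", one_nn)]}
print(results)
```

On this toy data both classifiers separate the groups; with real gaze data the comparison across algorithms is where the accuracy differences would emerge.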

Posted: | Daniel Jurich, Chunyan Liu, Amanda Clauser

Journal of Graduate Medical Education: Volume 14, Issue 3, Pages 353-354


Letter to the editor.

Posted: | Ian Micir, Kimberly Swygert, Jean D'Angelo

Journal of Applied Technology: Volume 23, Special Issue 1, Pages 30-40


The interpretations of test scores in secure, high-stakes environments depend on several assumptions, one of which is that examinee responses to items are independent and that no enemy items appear on the same form. This paper documents the development and implementation of a C#-based application that uses Natural Language Processing (NLP) and Machine Learning (ML) techniques to produce prioritized predictions of item enemy statuses within a large item bank.
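The paper's application is written in C#, and its exact NLP/ML pipeline is not given in this summary. As a purely illustrative stand-in, the sketch below shows the general idea of prioritizing enemy-item candidates: represent each item's text as a simple bag-of-words vector and rank item pairs by cosine similarity so the most similar pairs surface for review first. The item bank and item texts here are invented.

```python
# Illustrative sketch only -- a bag-of-words / cosine-similarity ranking,
# not the paper's actual NLP or ML method.
import math
import re
from collections import Counter
from itertools import combinations

def tokenize(text):
    """Lowercase and split text into simple word tokens."""
    return re.findall(r"[a-z']+", text.lower())

def cosine(a, b):
    """Cosine similarity between two term-frequency Counters."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank_enemy_candidates(items):
    """Return (similarity, id1, id2) triples, most similar pairs first."""
    vecs = {i: Counter(tokenize(t)) for i, t in items.items()}
    pairs = [(cosine(vecs[i], vecs[j]), i, j)
             for i, j in combinations(sorted(vecs), 2)]
    return sorted(pairs, reverse=True)

# Hypothetical item bank: A and B test nearly the same clinical content.
bank = {
    "A": "A 45-year-old man presents with crushing chest pain "
         "radiating to the left arm.",
    "B": "A 47-year-old man has chest pain radiating to his "
         "left arm and shoulder.",
    "C": "A 6-year-old girl presents with a barking cough and "
         "inspiratory stridor.",
}
ranked = rank_enemy_candidates(bank)
print(ranked[0][1:])  # most similar pair, flagged for reviewer attention first
```

A ranked output like this turns an exhaustive pairwise review into a prioritized queue, which is the practical payoff of predicting enemy status at item-bank scale.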