Showing 1 - 10 of 14 Research Library Publications
Michael A. Barone, Jessica L. Bienstock, Elise Lovell, John R. Gimpel, Grant L. Lin, Jennifer Swails, George C. Mejicano

Journal of Graduate Medical Education: Volume 14, Issue 6, Pages 634-638

This article discusses recent recommendations from the Undergraduate Medical Education to Graduate Medical Education Review Committee (UGRC) to address challenges in the UME-GME transition, including its complexity, negative impact on well-being, costs, and inequities.

Christopher Runyon, Polina Harik, Michael Barone

Diagnosis: Volume 10, Issue 1, Pages 54-60

This op-ed discusses the advantages of leveraging natural language processing (NLP) in the assessment of clinical reasoning. It also provides an overview of INCITE, the Intelligent Clinical Text Evaluator, a scalable NLP-based computer-assisted scoring system developed to measure clinical reasoning ability as assessed in the written documentation portion of the now-discontinued USMLE Step 2 Clinical Skills examination.
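
For readers curious what NLP-based scoring of patient notes can look like, here is a minimal, hypothetical sketch of matching note text against rubric concepts; the `score_note` function, rubric, and note below are invented for illustration and do not reflect INCITE's actual algorithm or rubrics.

```python
from difflib import SequenceMatcher

def phrase_similarity(a: str, b: str) -> float:
    """Crude lexical similarity between two phrases (0.0 to 1.0)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def score_note(note: str, rubric: dict[str, list[str]], threshold: float = 0.8) -> int:
    """Count rubric concepts whose surface variants appear, approximately, in the note."""
    words = note.lower().split()
    credited = 0
    for concept, variants in rubric.items():
        found = False
        for variant in variants:
            n = len(variant.split())
            # Slide a window the length of the variant across the note.
            for i in range(max(len(words) - n + 1, 1)):
                window = " ".join(words[i:i + n])
                if phrase_similarity(window, variant) >= threshold:
                    found = True
                    break
            if found:
                break
        credited += found
    return credited

# Hypothetical two-concept rubric and a toy note.
rubric = {
    "chest pain": ["chest pain", "chest discomfort"],
    "shortness of breath": ["shortness of breath", "sob", "dyspnea"],
}
note = "45 yo M presents with chest discomfort and dyspnea on exertion."
print(score_note(note, rubric))  # -> 2
```

Real systems replace the fuzzy string matching above with trained NLP models, but the core task is the same: deciding which rubric concepts a free-text note documents.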

Jennifer L. Swails, Steven Angus, Michael Barone, Jessica Bienstock, Jesse Burk-Rafel, Michelle Roett, Karen E. Hauer

Academic Medicine: Volume 98, Issue 2, Pages 180-187

This article describes the work of the Coalition for Physician Accountability’s Undergraduate Medical Education to Graduate Medical Education Review Committee (UGRC) to apply a quality improvement approach and systems thinking to explore the underlying causes of dysfunction in the undergraduate medical education (UME) to graduate medical education (GME) transition.

Erfan Khalaji, Sukru Eraslan, Yeliz Yesilada, Victoria Yaneva

Behaviour & Information Technology

This study builds upon prior work that developed a machine-learning classifier trained on gaze data from web-related tasks to detect autism spectrum disorder (ASD) in adults. Using the same data, we show that a new data pre-processing approach, combined with an exploration of the performance of different classification algorithms, yields higher classification accuracy than prior work.
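
As a rough, self-contained illustration of the kind of algorithm comparison described above, the snippet below cross-validates a few standard scikit-learn classifiers on synthetic stand-in gaze features; the features, labels, and candidate models are assumptions for demonstration, not the study's data or pipeline.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

# Synthetic stand-in for per-participant gaze features
# (e.g., fixation counts/durations per page element); labels: 1 = ASD.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 12))
y = rng.integers(0, 2, size=60)

candidates = {
    "logreg": LogisticRegression(max_iter=1000),
    "svm": SVC(),
    "rf": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, clf in candidates.items():
    # Scale features, then estimate accuracy with 5-fold cross-validation.
    pipe = make_pipeline(StandardScaler(), clf)
    acc = cross_val_score(pipe, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: {acc:.2f}")
```

The pre-processing step (here, simple standardization) is part of the pipeline so it is refit inside each cross-validation fold, which avoids leaking information from held-out participants.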

Jonathan D. Rubright, Thai Q. Ong, Michael G. Jodoin, David A. Johnson, Michael A. Barone

Academic Medicine: Volume 97, Issue 8, Pages 1219-1225

Since 2012, the United States Medical Licensing Examination (USMLE) has maintained a policy of ≤ 6 attempts on any examination component. The purpose of this study was to empirically examine the appropriateness of the existing USMLE retake policy.

Victoria Yaneva, Janet Mee, Le An Ha, Polina Harik, Michael Jodoin, Alex Mechaber

Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Pages 2880-2886

This paper presents a corpus of 43,985 clinical patient notes (PNs) written by 35,156 examinees during the high-stakes USMLE® Step 2 Clinical Skills examination.

Monica M. Cuddy, Chunyan Liu, Wenli Ouyang, Michael A. Barone, Aaron Young, David A. Johnson

Academic Medicine: June 2022

This study examines the associations between Step 3 scores and subsequent receipt of disciplinary action taken by state medical boards for problematic behavior in practice. It analyzes Step 3 total, Step 3 computer-based case simulation (CCS), and Step 3 multiple-choice question (MCQ) scores.
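
One simple way to probe such score-outcome associations, sketched below on synthetic data, is a logistic regression of the binary disciplinary-action outcome on the component scores; the data-generating assumptions (e.g., that lower CCS scores raise risk) are invented for illustration, and this is not necessarily the study's model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in: standardized Step 3 component scores and a binary
# disciplinary-action outcome (illustrative only; not the study's data).
rng = np.random.default_rng(2)
n = 2000
X = rng.normal(size=(n, 3))            # columns: total, CCS, MCQ
logit = -3.0 - 0.4 * X[:, 1]           # assumed: lower CCS -> higher risk
y = rng.random(n) < 1 / (1 + np.exp(-logit))

model = LogisticRegression().fit(X, y)
print(dict(zip(["total", "ccs", "mcq"], model.coef_[0].round(2))))
```

A negative fitted coefficient on the CCS column would indicate that lower CCS scores are associated with higher odds of the outcome, holding the other scores fixed.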

Daniel Jurich, Chunyan Liu, Amanda Clauser

Journal of Graduate Medical Education: Volume 14, Issue 3, Pages 353-354

Letter to the editor.

Andrew A. White, Ann M. King, Angelo E. D’Addario, Karen Berg Brigham, Suzanne Dintzis, Emily E. Fay, Thomas H. Gallagher, Kathleen M. Mazor

JMIR Medical Education: Volume 8, Issue 2, e30988

This article aims to compare the reliability of two assessment groups (crowdsourced laypeople and patient advocates) in rating physician error disclosure communication skills using the Video-Based Communication Assessment app.

Jonathan D. Rubright, Michael Jodoin, Stephanie Woodward, Michael A. Barone

Academic Medicine: Volume 97, Issue 5, Pages 718-722

The purpose of this 2019–2020 study was to statistically identify and qualitatively review USMLE Step 1 exam questions (items) using differential item functioning (DIF) methodology.
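
For context, DIF analysis asks whether examinees of equal ability but different group membership have different odds of answering an item correctly. Below is a minimal sketch of one standard DIF procedure, the Mantel-Haenszel common odds ratio, run on synthetic data; the function and data are illustrative assumptions, not the study's implementation.

```python
import numpy as np

def mantel_haenszel_dif(item, group, total):
    """Mantel-Haenszel DIF statistic for one dichotomous item.

    item  : 0/1 responses to the studied item
    group : 0 = reference group, 1 = focal group
    total : matching criterion (e.g., total test score strata)
    Returns the MH common odds ratio and the ETS delta (-2.35 * ln OR).
    """
    num = den = 0.0
    for s in np.unique(total):
        m = total == s
        a = np.sum((group[m] == 0) & (item[m] == 1))  # reference, correct
        b = np.sum((group[m] == 0) & (item[m] == 0))  # reference, incorrect
        c = np.sum((group[m] == 1) & (item[m] == 1))  # focal, correct
        d = np.sum((group[m] == 1) & (item[m] == 0))  # focal, incorrect
        n = a + b + c + d
        if n == 0:
            continue
        num += a * d / n
        den += b * c / n
    # For brevity, assumes den > 0 (every stratum contributes some data).
    or_mh = num / den
    return or_mh, -2.35 * np.log(or_mh)

# Synthetic example: 500 examinees, 5 coarse ability strata.
rng = np.random.default_rng(1)
group = rng.integers(0, 2, 500)
item = (rng.random(500) < 0.6).astype(int)
total = rng.integers(0, 5, 500)
or_mh, delta = mantel_haenszel_dif(item, group, total)
print(f"MH odds ratio = {or_mh:.2f}, ETS delta = {delta:.2f}")
```

An odds ratio near 1 (delta near 0) suggests no DIF; flagged items are then reviewed qualitatively, as the study describes, to judge whether the statistical difference reflects construct-irrelevant content.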