Katie L. Arnhart, Monica M. Cuddy, David Johnson, Michael A. Barone, Aaron Young

Academic Medicine, Volume 96, Issue 9, pp. 1319–1323

This study examined the relationship between the number of USMLE attempts and the likelihood of receiving disciplinary action from a state medical board.

Karen E. Hauer, Daniel Jurich, Jonathan Vandergrift, Rebecca S. Lipner, Furman S. McDonald, Kenji Yamazaki, Davoren Chick, Kevin McAllister, Eric S. Holmboe

Academic Medicine, Volume 96, Issue 6, pp. 876–884

This study examines whether milestone ratings submitted by program directors working with clinical competency committees differ by gender for internal medicine residents, and whether women and men scored similarly on subsequent in-training and certification examinations.

Martin G. Tolsgaard, Christy K. Boscardin, Yoon Soo Park, Monica M. Cuddy, Stefanie S. Sebok-Syer

Advances in Health Sciences Education, Volume 25, pp. 1057–1086 (2020)

This critical review explores (1) published applications of data science and machine learning (ML) in the health professions education (HPE) literature and (2) the potential role of data science and ML in shifting theoretical and epistemological perspectives in HPE research and practice.

Kimberly Hu, Patricia J. Hicks, Melissa Margolis, Carol Carraccio, Amanda Osta, Marcia L. Winward, Alan Schwartz

Academic Medicine, Volume 95, Issue 11S, pp. S89–S94

Semiannually, U.S. pediatrics residency programs report resident milestone levels to the Accreditation Council for Graduate Medical Education (ACGME). The Pediatrics Milestones Assessment Collaborative (PMAC) developed workplace-based assessments designed to support two inferences. The authors compared learner and program variance in PMAC scores with that in ACGME milestone ratings.
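
As a loose illustration of comparing learner- and program-level variance (not the authors' actual model; the data, effect sizes, and nesting below are all invented), a nested variance-components fit might look like:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in for milestone-style scores: learners (residents)
# nested in programs, repeated observations per learner. All numbers invented.
rng = np.random.default_rng(0)
rows = []
for p in range(12):                       # hypothetical programs
    prog_eff = rng.normal(0, 0.3)
    for l in range(8):                    # hypothetical residents per program
        learner_eff = rng.normal(0, 0.6)
        for _ in range(6):                # repeated observations per resident
            rows.append({"program": p, "learner": f"{p}-{l}",
                         "score": 3.0 + prog_eff + learner_eff
                                  + rng.normal(0, 0.5)})
df = pd.DataFrame(rows)

# Random intercept per program, plus a nested variance component for
# learners within programs
model = smf.mixedlm("score ~ 1", df, groups="program",
                    vc_formula={"learner": "0 + C(learner)"})
fit = model.fit()

var_prog = fit.cov_re.iloc[0, 0]          # between-program variance
var_learner = fit.vcomp[0]                # between-learner variance
var_resid = fit.scale                     # residual variance
total = var_prog + var_learner + var_resid
print(f"program {var_prog / total:.1%}, learner {var_learner / total:.1%}, "
      f"residual {var_resid / total:.1%}")
```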

F.S. McDonald, D. Jurich, L.M. Duhigg, M. Paniagua, D. Chick, M. Wells, A. Williams, P. Alguire

Academic Medicine, September 2020, Volume 95, Issue 9, pp. 1388–1395

This article aims to assess the correlations among United States Medical Licensing Examination (USMLE) performance, American College of Physicians Internal Medicine In-Training Examination (IM-ITE) performance, American Board of Internal Medicine Internal Medicine Certification Exam (IM-CE) performance, and other medical knowledge and demographic variables.
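
A minimal sketch of the kind of correlation analysis described above, on simulated scores (the score scales and relationships below are invented, not taken from the study):

```python
import numpy as np
import pandas as pd

# Invented scores sharing one underlying "medical knowledge" factor
rng = np.random.default_rng(42)
n = 500
ability = rng.normal(0, 1, n)
scores = pd.DataFrame({
    "usmle":  240 + 15 * ability + rng.normal(0, 8, n),
    "im_ite":  60 + 8 * ability + rng.normal(0, 6, n),
    "im_ce":  500 + 80 * ability + rng.normal(0, 50, n),
})

# Pairwise Pearson correlations among the three examinations
print(scores.corr(method="pearson").round(2))
```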

L. E. Peterson, J. R. Boulet, B. E. Clauser

Academic Medicine, Volume 95, Issue 9, pp. 1396–1403

The objective of this study was to evaluate the associations of all required standardized examinations in medical education with American Board of Family Medicine (ABFM) certification examination scores and eventual ABFM certification.
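
Because eventual certification is a binary outcome, this kind of association is often estimated with logistic regression. A hedged sketch on simulated data (the scores and the relationship below are invented, not the authors' analysis):

```python
import numpy as np
import statsmodels.api as sm

# Simulated prior-exam scores and a binary certification outcome
rng = np.random.default_rng(1)
n = 400
step_score = rng.normal(230, 15, n)                 # hypothetical exam scale
logit = -20.0 + 0.09 * step_score                   # invented relationship
certified = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# Standardize the predictor so the coefficient reads as log-odds per SD
z = (step_score - step_score.mean()) / step_score.std()
X = sm.add_constant(z)
fit = sm.Logit(certified, X).fit(disp=0)
print(fit.params)   # intercept and log-odds change per SD of exam score
```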

V. Yaneva, L. A. Ha, S. Eraslan, Y. Yesilada, R. Mitkov

IEEE Transactions on Neural Systems and Rehabilitation Engineering

The purpose of this study is to test whether visual processing differences between adults with and without high-functioning autism, captured through eye tracking, can be used to detect autism.
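
A rough sketch of the general approach, training a classifier on per-participant gaze features (the features, data, and model choice here are assumptions for illustration, not the authors' pipeline):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical per-participant eye-tracking features
rng = np.random.default_rng(7)
n = 120
X = np.column_stack([
    rng.normal(90, 20, n),     # fixation count
    rng.normal(250, 60, n),    # mean fixation duration (ms)
    rng.normal(4.0, 1.2, n),   # mean saccade amplitude (deg)
])
# Simulated group labels, weakly tied to one feature (1 = autism group)
y = (X[:, 1] + rng.normal(0, 60, n) > 250).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
print(f"cross-validated AUC: {auc:.2f}")
```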

D. Jurich, S.A. Santen, M. Paniagua, A. Fleming, V. Harnik, A. Pock, A. Swan-Sein, M.A. Barone, M. Daniel

Academic Medicine, Volume 95, Issue 1, pp. 111–121

This paper investigates the effect of a change in the timing of United States Medical Licensing Examination (USMLE) Step 1 on Step 2 Clinical Knowledge (CK) scores, the effect of lag time on Step 2 CK performance, and the relationship of incoming Medical College Admission Test (MCAT) scores to Step 2 CK performance pre- and post-change.
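
One plausible way to frame such a pre/post comparison is an ordinary least squares model with a post-change indicator, lag time, and MCAT score as predictors. A sketch on simulated data (all variables and coefficients below are invented, not drawn from the study):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated examinee-level data
rng = np.random.default_rng(3)
n = 600
df = pd.DataFrame({
    "post_change": rng.integers(0, 2, n),    # 1 = after the Step 1 move
    "lag_months": rng.uniform(1, 18, n),     # Step 1 to Step 2 CK lag
    "mcat": rng.normal(510, 6, n),           # incoming MCAT score
})
df["step2ck"] = (235 + 2.0 * df["post_change"] - 0.3 * df["lag_months"]
                 + 0.8 * (df["mcat"] - 510) + rng.normal(0, 10, n))

# OLS with a pre/post indicator, lag time, and MCAT score
fit = smf.ols("step2ck ~ post_change + lag_months + mcat", df).fit()
print(fit.params.round(2))
```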

P.J. Hicks, M.J. Margolis, C.L. Carraccio, B.E. Clauser, K. Donnelly, H.B. Fromme, K.A. Gifford, S.E. Poynter, D.J. Schumacher, A. Schwartz & the PMAC Module 1 Study Group

Medical Teacher, Volume 40, Issue 11, pp. 1143–1150

This study explores a novel milestone-based workplace assessment system implemented in 15 pediatrics residency programs. The system provided web-based multisource feedback and structured clinical observation instruments that could be completed on any computer or mobile device, as well as monthly feedback reports that included competency-level scores and recommendations for improvement.

M. von Davier

Psychometrika, Volume 83, pp. 847–857 (2018)

Utilizing algorithms to generate items in educational and psychological testing is an active area of research for obvious reasons: test items are predominantly written by humans, in most cases by content experts who represent a limited and potentially costly resource. Using algorithms instead has the appeal of providing an unlimited resource for this crucial part of assessment development.
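
For a sense of what algorithmic item generation can mean in its simplest form, here is a minimal template-based sketch (the template and slot fillers are invented, and this is not necessarily the generation approach the paper itself discusses):

```python
import itertools

# A fixed item template with two fillable slots
TEMPLATE = ("A patient presents with {symptom}. Which {category} is the "
            "most likely cause?")

# Hypothetical slot fillers supplied by a content expert
symptoms = ["acute chest pain", "progressive dyspnea"]
categories = ["diagnosis", "underlying condition"]

# Each combination of slot values yields a distinct item stem
for symptom, category in itertools.product(symptoms, categories):
    print(TEMPLATE.format(symptom=symptom, category=category))
```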