Posted: | F.S. McDonald, D. Jurich, L.M. Duhigg, M. Paniagua, D. Chick, M. Wells, A. Williams, P. Alguire

Academic Medicine: September 2020 - Volume 95 - Issue 9 - p 1388-1395

 

This article aims to assess the correlations between United States Medical Licensing Examination (USMLE) performance, American College of Physicians Internal Medicine In-Training Examination (IM-ITE) performance, American Board of Internal Medicine Internal Medicine Certification Exam (IM-CE) performance, and other medical knowledge and demographic variables.

Posted: | M. M. Hammoud, L. M. Foster, M. M. Cuddy, D. B. Swanson, P. M. Wallach

American Journal of Obstetrics and Gynecology, Volume 223, Issue 3, Pages 435.e1-435.e6

 

The purpose of this study was to examine medical student reporting of electronic health record use during the obstetrics and gynecology clerkship.

Posted: | P. Harik, R.A. Feinberg, B.E. Clauser

Integrating Timing Considerations to Improve Testing Practices

 

This chapter addresses a different aspect of the use of timing data: it provides a framework for understanding how an examinee's use of time interacts with time limits to affect both test performance and the validity of inferences based on test scores. It focuses primarily on examinations administered as part of the physician licensure process.

Posted: | D. Jurich, S.A. Santen, M. Paniagua, A. Fleming, V. Harnik, A. Pock, A. Swan-Sein, M.A. Barone, M. Daniel

Academic Medicine: Volume 95 - Issue 1 - p 111-121

 

This paper investigates the effect of a change in the timing of United States Medical Licensing Examination Step 1 on Step 2 Clinical Knowledge (CK) scores, the effect of lag time on Step 2 CK performance, and the relationship of incoming Medical College Admission Test (MCAT) score to Step 2 CK performance before and after the change.

Posted: | E. Knekta, C. Runyon, S. Eddy

CBE—Life Sciences Education Vol. 18, No. 1

 

This article briefly reviews the aspects of validity that researchers should consider when using surveys. It then focuses on factor analysis, a statistical method that can be used to collect an important type of validity evidence.
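The factor analysis mentioned above can be illustrated with a brief, hypothetical sketch: simulated responses to a six-item survey driven by two latent traits, analyzed with scikit-learn's `FactorAnalysis`. The item loadings, trait structure, and sample size here are invented for illustration only and are not drawn from the article.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_respondents = 300

# Simulate two latent traits that drive responses to six survey items.
latent = rng.normal(size=(n_respondents, 2))
loadings = np.array([
    [0.9, 0.0], [0.8, 0.1], [0.7, 0.0],   # items loading on trait 1
    [0.0, 0.9], [0.1, 0.8], [0.0, 0.7],   # items loading on trait 2
])
responses = latent @ loadings.T + rng.normal(scale=0.3,
                                             size=(n_respondents, 6))

# Fit a two-factor model to the simulated item responses.
fa = FactorAnalysis(n_components=2, random_state=0)
fa.fit(responses)

# Each row of components_ is one factor; large absolute loadings show
# which items cluster together -- one form of structural validity evidence.
print(np.round(fa.components_, 2))
```

In practice, researchers would inspect whether items load on the factors the survey was designed around; loadings that cross over or fail to cluster are a signal that the instrument may not measure what it claims to.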

Posted: | P. Harik, B. E. Clauser, I. Grabovsky, P. Baldwin, M. Margolis, D. Bucak, M. Jodoin, W. Walsh, S. Haist

Journal of Educational Measurement: Volume 55, Issue 2, Pages 308-327

 

The widespread move to computerized test delivery has led to the development of new approaches to evaluating how examinees use testing time and to new metrics designed to provide evidence about the extent to which time limits impact performance. Much of the existing research is based on these types of observational metrics; relatively few studies use randomized experiments to evaluate the impact of time limits on scores. Of those studies that do report on randomized experiments, none directly compare the experimental results to evidence from observational metrics to evaluate the extent to which these metrics can sensitively identify conditions in which time constraints actually impact scores. The present study provides such evidence based on data from a medical licensing examination.

Posted: | M. von Davier, J. H. Shin, L. Khorramdel, L. Stankov

Applied Psychological Measurement: Volume 42, Issue 4, Pages 291-306

 

The research presented in this article combines mathematical derivations and empirical results to investigate effects of the nonparametric anchoring vignette approach proposed by King, Murray, Salomon, and Tandon on the reliability and validity of rating data. The anchoring vignette approach aims to correct rating data for response styles to improve comparability across individuals and groups.

Posted: | P.J. Hicks, M.J. Margolis, C.L. Carraccio, B.E. Clauser, K. Donnelly, H.B. Fromme, K.A. Gifford, S.E. Poynter, D.J. Schumacher, A. Schwartz & the PMAC Module 1 Study Group

Medical Teacher: Volume 40 - Issue 11 - p 1143-1150

 

This study explores a novel milestone-based workplace assessment system that was implemented in 15 pediatrics residency programs. The system provided web-based multisource feedback and structured clinical observation instruments that could be completed on any computer or mobile device, as well as monthly feedback reports that included competency-level scores and recommendations for improvement.

Posted: | B. Michalec, M. M. Cuddy, P. Hafferty, M. D. Hanson, S. L. Kanter, D. Littleton, M. A. T. Martimianakis, R. Michaels, F. W. Hafferty

Medical Education: Volume 52, Pages 359-361

 

Focusing specifically on examples set in the context of movement from Bachelor's level undergraduate programmes to enrolment in medical school, this publication argues that a great deal of what happens on college campuses today, curricular and otherwise, is (in)directly driven by the not-so-invisible hand of the medical education enterprise.

Posted: | Monica M. Cuddy, Aaron Young, Andrew Gelman, David B. Swanson, David A. Johnson, Gerard F. Dillon, Brian E. Clauser

The authors examined the extent to which USMLE scores relate to the odds of receiving a disciplinary action from a U.S. state medical board.