
RESEARCH LIBRARY
View the latest publications from members of the NBME research team
Academic Medicine: Volume 96, Issue 6, pp. 876-884
This study examines whether milestone ratings submitted by program directors working with clinical competency committees differ by gender for internal medicine residents, and whether women and men were rated similarly on subsequent in-training and certification examinations.
Academic Medicine: Volume 95, Issue 11S, pp. S89-S94
Semiannually, U.S. pediatrics residency programs report resident milestone levels to the Accreditation Council for Graduate Medical Education (ACGME). The Pediatrics Milestones Assessment Collaborative (PMAC) developed workplace-based assessments addressing two inferences. The authors compared learner and program variance in PMAC scores with ACGME milestones.
Evaluation & the Health Professions: Volume 43, Issue 3, pp. 149-158
This study examines an innovative and practical application of the diagnostic classification model (DCM) framework to health professions educational assessments, using retrospective large-scale assessment data from the basic and clinical sciences: National Board of Medical Examiners Subject Examinations in pathology (n = 2,006) and medicine (n = 2,351).
Academic Medicine: Volume 95, Issue 9, pp. 1388-1395 (September 2020)
This article aims to assess the correlations between United States Medical Licensing Examination (USMLE) performance, American College of Physicians Internal Medicine In-Training Examination (IM-ITE) performance, American Board of Internal Medicine Internal Medicine Certification Exam (IM-CE) performance, and other medical knowledge and demographic variables.
Academic Medicine: Volume 95, Issue 9, pp. 1396-1403
The objective of this study was to evaluate the associations of all required standardized examinations in medical education with American Board of Family Medicine (ABFM) certification examination scores and eventual ABFM certification.
Integrating Timing Considerations to Improve Testing Practices
This chapter addresses a different aspect of the use of timing data: it provides a framework for understanding how an examinee's use of time interacts with time limits to affect both test performance and the validity of inferences made from test scores. It focuses primarily on examinations administered as part of the physician licensure process.
Academic Medicine: Volume 95, Issue 1, pp. 111-121
This paper investigates the effect of a change in United States Medical Licensing Examination Step 1 timing on Step 2 Clinical Knowledge (CK) scores, the effect of lag time on Step 2 CK performance, and the relationship of incoming Medical College Admission Test (MCAT) scores to Step 2 CK performance before and after the change.
CBE—Life Sciences Education: Volume 18, No. 1
This article briefly reviews the aspects of validity that researchers should consider when using surveys. It then focuses on factor analysis, a statistical method that can be used to collect an important type of validity evidence.
Journal of Educational Measurement: Volume 55, Issue 2, pp. 308-327
The widespread move to computerized test delivery has led to new approaches to evaluating how examinees use testing time and to new metrics designed to provide evidence about the extent to which time limits impact performance. Much of the existing research is based on these types of observational metrics; relatively few studies use randomized experiments to evaluate the impact of time limits on scores. Of the studies that do report randomized experiments, none directly compare the experimental results with evidence from observational metrics to evaluate how sensitively those metrics identify conditions in which time constraints actually impact scores. The present study provides such evidence based on data from a medical licensing examination.
Applied Psychological Measurement: Volume 42, Issue 4, pp. 291-306
The research presented in this article combines mathematical derivations and empirical results to investigate effects of the nonparametric anchoring vignette approach proposed by King, Murray, Salomon, and Tandon on the reliability and validity of rating data. The anchoring vignette approach aims to correct rating data for response styles to improve comparability across individuals and groups.