Research Library Publications
Michael A. Barone, Jessica L. Bienstock, Elise Lovell, John R. Gimpel, Grant L. Lin, Jennifer Swails, George C. Mejicano

Journal of Graduate Medical Education: Volume 14, Issue 6, Pages 634-638


This article discusses recent recommendations from the Undergraduate Medical Education-Graduate Medical Education Review Committee (UGRC) to address challenges in the UME-GME transition, including its complexity, negative impact on well-being, costs, and inequities.

Jennifer L. Swails, Steven Angus, Michael Barone, Jessica Bienstock, Jesse Burk-Rafel, Michelle Roett, Karen E. Hauer

Academic Medicine: Volume 98, Issue 2, Pages 180-187


This article describes how the Coalition for Physician Accountability's Undergraduate Medical Education to Graduate Medical Education Review Committee (UGRC) applied a quality improvement approach and systems thinking to explore the underlying causes of dysfunction in the undergraduate medical education (UME) to graduate medical education (GME) transition.

Y.S. Park, A. Morales, L. Ross, M. Paniagua

Evaluation & the Health Professions: Volume: 43 issue: 3, page(s): 149-158


This study examines the innovative and practical application of the diagnostic classification modeling (DCM) framework to health professions educational assessments, using retrospective large-scale assessment data from the basic and clinical sciences: National Board of Medical Examiners Subject Examinations in pathology (n = 2,006) and medicine (n = 2,351).
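As context for readers unfamiliar with DCMs, the sketch below implements the DINA model, one common diagnostic classification model. The abstract does not say which model the study fit, and the Q-matrix, slip, and guess values here are purely illustrative.

```python
import numpy as np

def dina_prob(alpha, Q, slip, guess):
    """P(correct response) per item under the DINA model.

    alpha : (K,) binary attribute-mastery profile for one examinee
    Q     : (J, K) binary Q-matrix mapping items to required attributes
    slip  : (J,) per-item slip probabilities
    guess : (J,) per-item guessing probabilities
    """
    # eta_j = 1 iff the examinee masters every attribute item j requires
    eta = np.all(alpha >= Q, axis=1).astype(float)
    return (1 - slip) ** eta * guess ** (1 - eta)

# Hypothetical example: 3 items, 2 attributes; values are illustrative only.
Q = np.array([[1, 0], [0, 1], [1, 1]])
alpha = np.array([1, 0])                      # masters attribute 1 only
print(dina_prob(alpha, Q, np.full(3, 0.1), np.full(3, 0.2)))
# -> [0.9 0.2 0.2]: high P(correct) only where all required attributes are mastered
```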

M. von Davier, J. H. Shin, L. Khorramdel, L. Stankov

Applied Psychological Measurement: Volume 42, Issue 4, Pages 291-306


The research presented in this article combines mathematical derivations and empirical results to investigate the effects of the nonparametric anchoring vignette approach proposed by King, Murray, Salomon, and Tandon on the reliability and validity of rating data. The anchoring vignette approach aims to correct rating data for response styles and thereby improve comparability across individuals and groups.
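At the core of the nonparametric approach is a simple recoding rule: a respondent's self-rating is re-expressed relative to that same respondent's ratings of the anchoring vignettes, yielding 2J + 1 corrected categories for J vignettes. A minimal sketch, assuming the vignette ratings arrive already ordered from lowest to highest trait level (ties and order violations, which the original method handles with interval-valued codes, are not shown):

```python
def recode_self_assessment(y, z):
    """Nonparametric anchoring-vignette recode (after King, Murray,
    Salomon, and Tandon). Maps a self-rating onto a corrected scale
    of 2 * len(z) + 1 categories.

    y : self-rating on the original response scale
    z : this respondent's vignette ratings, ordered from the vignette
        depicting the lowest trait level to the highest; assumed
        non-decreasing (violations would need interval-valued codes)
    """
    c = 1
    for zj in z:
        if y < zj:
            return c          # strictly below this vignette
        if y == zj:
            return c + 1      # tied with this vignette
        c += 2                # above it: skip the "<" and "=" slots
    return c                  # above the highest vignette

# Two respondents with the same self-rating (3) but different vignette ratings:
print(recode_self_assessment(3, [2, 4]))  # -> 3 (between the two vignettes)
print(recode_self_assessment(3, [3, 5]))  # -> 2 (tied with the low vignette)
```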

Z. Jiang, M.R. Raymond

Applied Psychological Measurement: Volume 42, Issue 8, Pages 595-612


Conventional methods for evaluating the utility of subscores rely on reliability and correlation coefficients. Correlations, however, can overlook a notable source of variability: variation in subtest means, or difficulties. Brennan introduced a reliability index for score profiles based on multivariate generalizability theory, designated G, which is sensitive to variation in subtest difficulty, yet there has been little, if any, research evaluating the properties of this index. A series of simulation experiments, along with analyses of real data, was conducted to investigate G under various conditions of subtest reliability, subtest correlations, and variability in subtest means.
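A rough Monte Carlo sketch of the kind of simulation described, assuming a common formulation of profile reliability as the share of observed within-person profile variance (deviations of each subtest score from the person's own mean) attributable to true scores. This illustrates the general idea only; it is not Brennan's exact multivariate-G estimator, and all settings (means, correlation, reliabilities) are arbitrary choices, not values from the article.

```python
import numpy as np

rng = np.random.default_rng(7)

# Arbitrary simulation settings: 3 subtests with unequal means (difficulty),
# a common intercorrelation, and differing reliabilities.
n_persons = 5000
means = np.array([0.0, 0.3, 0.8])
rho = 0.6
rel = np.array([0.80, 0.75, 0.70])

# Unit-variance correlation matrix for the true subtest scores
cov = np.full((3, 3), rho) + np.diag(1 - np.full(3, rho))
true = rng.multivariate_normal(means, cov, size=n_persons)

# Error SD implied by each subtest's reliability (true variance fixed at 1)
err_sd = np.sqrt((1 - rel) / rel)
obs = true + rng.normal(0.0, err_sd, size=true.shape)

# Within-person deviation profiles: subtest score minus the person's mean;
# variation in subtest means contributes "true" profile signal here.
dev_true = true - true.mean(axis=1, keepdims=True)
dev_obs = obs - obs.mean(axis=1, keepdims=True)

# Profile reliability estimate: true over observed profile variance
G_hat = dev_true.var() / dev_obs.var()
print(f"profile reliability estimate: {G_hat:.3f}")
```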