Posted: | Irina Grabovsky, Jerusha J. Henderek, Ulana A. Luciw-Dubas, Brent Pierce, Soren Campbell, Katherine S. Monroe

Journal of Medical Education and Curricular Development: Volume 10

In-training examinations (ITEs) are a popular teaching tool for certification programs. This study examines the relationship between examinees’ performance on the National Commission for Certification of Anesthesiologist Assistants (NCCAA) ITE and the high-stakes NCCAA Certification Examination.

Posted: | Kimberly Hu, Patricia J. Hicks, Melissa Margolis, Carol Carraccio, Amanda Osta, Marcia L. Winward, Alan Schwartz

Academic Medicine: Volume 95, Issue 11S, Pages S89-S94


Semiannually, U.S. pediatrics residency programs report resident milestone levels to the Accreditation Council for Graduate Medical Education (ACGME). The Pediatrics Milestones Assessment Collaborative (PMAC) developed workplace-based assessments of 2 inferences. The authors compared learner and program variance in PMAC scores with ACGME milestones.

Posted: | B. E. Clauser, M. Kane, J. C. Clauser

Journal of Educational Measurement: Volume 57, Issue 2, Pages 216-229


This article presents two generalizability-theory–based analyses of the proportion of the item variance that contributes to error in the cut score. For one approach, variance components are estimated on the probability (or proportion-correct) scale of the Angoff judgments, and for the other, the judgments are transformed to the theta scale of an item response theory model before estimating the variance components.

Posted: | Z. Jiang, M.R. Raymond

Applied Psychological Measurement: Volume 42, Issue 8, Pages 595-612


Conventional methods for evaluating the utility of subscores rely on reliability and correlation coefficients. However, correlations can overlook a notable source of variability: variation in subtest means/difficulties. Brennan introduced a reliability index for score profiles based on multivariate generalizability theory, designated as G, which is sensitive to variation in subtest difficulty. However, there has been little, if any, research evaluating the properties of this index. Simulation experiments and analyses of real data were conducted to investigate G under various conditions of subtest reliability, subtest correlations, and variability in subtest means.

Posted: | B. Michalec, M. M. Cuddy, P. Hafferty, M. D. Hanson, S. L. Kanter, D. Littleton, M. A. T. Martimianakis, R. Michaels, F. W. Hafferty

Medical Education: Volume 52, Pages 359-361


Focusing specifically on examples set in the context of movement from Bachelor's level undergraduate programmes to enrolment in medical school, this publication argues that a great deal of what happens on college campuses today, curricular and otherwise, is (in)directly driven by the not-so-invisible hand of the medical education enterprise.

Posted: | K. Walsh, P. Harik, K. Mazor, D. Perfetto, M. Anatchkova, C. Biggins, J. Wagner

Medical Care: Volume 55, Issue 4, Pages 436-441 (April 2017)


The objective of this study is to identify modifiable factors that improve the reliability of ratings of severity of health care–associated harm in clinical practice improvement and research.