Research Library Publications
Victoria Yaneva, Peter Baldwin, Le An Ha, Christopher Runyon

Advancing Natural Language Processing in Educational Assessment: Pages 167-182

 

This chapter discusses the evolution of natural language processing (NLP) approaches to text representation and how different ways of representing text can be applied to a relatively understudied task in educational assessment: predicting item characteristics from item text.

Polina Harik, Janet Mee, Christopher Runyon, Brian E. Clauser

Advancing Natural Language Processing in Educational Assessment: Pages 58-73

 

This chapter describes INCITE, an NLP-based system for scoring free-text responses. It emphasizes the importance of context and the system’s intended use and explains how each component of the system contributed to its accuracy.

Kimberly Hu, Patricia J. Hicks, Melissa Margolis, Carol Carraccio, Amanda Osta, Marcia L. Winward, Alan Schwartz

Academic Medicine: Volume 95 - Issue 11S - Pages S89-S94

 

Semiannually, U.S. pediatrics residency programs report resident milestone levels to the Accreditation Council for Graduate Medical Education (ACGME). The Pediatrics Milestones Assessment Collaborative (PMAC) developed workplace-based assessments of two inferences. The authors compared learner and program variance in PMAC scores with ACGME milestones.

F.S. McDonald, D. Jurich, L.M. Duhigg, M. Paniagua, D. Chick, M. Wells, A. Williams, P. Alguire

Academic Medicine: September 2020 - Volume 95 - Issue 9 - Pages 1388-1395

 

This article aims to assess the correlations between United States Medical Licensing Examination (USMLE) performance, American College of Physicians Internal Medicine In-Training Examination (IM-ITE) performance, American Board of Internal Medicine Internal Medicine Certification Exam (IM-CE) performance, and other medical knowledge and demographic variables.

L. E. Peterson, J. R. Boulet, B. E. Clauser

Academic Medicine: Volume 95 - Issue 9 - Pages 1396-1403

 

The objective of this study was to evaluate the associations of all required standardized examinations in medical education with American Board of Family Medicine (ABFM) certification examination scores and eventual ABFM certification.

B. E. Clauser, M. Kane, J. C. Clauser

Journal of Educational Measurement: Volume 57, Issue 2, Pages 216-229

 

This article presents two generalizability-theory-based analyses of the proportion of the item variance that contributes to error in the cut score. In one approach, variance components are estimated on the probability (or proportion-correct) scale of the Angoff judgments; in the other, the judgments are transformed to the theta scale of an item response theory model before the variance components are estimated.

M. von Davier, Y.S. Lee

Springer International Publishing; 2019

 

This handbook provides an overview of major developments around diagnostic classification models (DCMs) with regard to modeling, estimation, model checking, scoring, and applications. It brings together not only the current state of the art, but also the theoretical background and models developed for diagnostic classification.

R.A. Feinberg, D.P. Jurich

On the Cover. Educational Measurement: Issues and Practice, 38: 5-5

 

This graphic reports between-individual information: a vertical line, flanked by dashed lines indicating an error band, spans three plots, allowing students to easily see their score relative to four defined performance categories and, more notably, three relevant score distributions.

J. Salt, P. Harik, M. A. Barone

Academic Medicine: March 2019 - Volume 94 - Issue 3 - Pages 314-316

 

The United States Medical Licensing Examination Step 2 Clinical Skills (CS) exam uses physician raters to evaluate patient notes written by examinees. In this Invited Commentary, the authors describe the ways in which the Step 2 CS exam could benefit from adopting a computer-assisted scoring approach that combines physician raters’ judgments with computer-generated scores based on natural language processing (NLP).

M. von Davier, Y. Cho, T. Pan

Psychometrika: Volume 84 - Pages 147-163 (2019)

 

This paper provides results on a form of adaptive testing that is used frequently in intelligence testing. In these tests, items are presented in order of increasing difficulty, and the presentation is adaptive in the sense that a session is discontinued once a test taker produces a certain number of consecutive incorrect responses; the subsequent (unobserved) responses are commonly scored as wrong.