Research Library Publications
Posted: | Victoria Yaneva (editor), Matthias von Davier (editor)

Advancing Natural Language Processing in Educational Assessment

This book examines the use of natural language technology in educational testing, measurement, and assessment. Recent developments in natural language processing (NLP) have enabled large-scale educational applications, though scholars and professionals may lack a shared understanding of the strengths and limitations of NLP in assessment as well as the challenges that testing organizations face in implementation. This first-of-its-kind book provides evidence-based practices for the use of NLP-based approaches to automated text and speech scoring, language proficiency assessment, technology-assisted item generation, gamification, learner feedback, and beyond.

Posted: | M. G. Jodoin, J. D. Rubright

Educational Measurement: Issues and Practice

This short, invited manuscript focuses on the implications of the widespread disruptions caused by the COVID-19 pandemic for certification and licensure assessment organizations.

Posted: | M.J. Margolis, R.A. Feinberg (eds)

Integrating Timing Considerations to Improve Testing Practices

This book synthesizes a wealth of theory and research on timing issues in assessment into actionable advice for test development, administration, and scoring.

Posted: | D. Jurich

Integrating Timing Considerations to Improve Testing Practices

This chapter presents a historical overview of the testing literature that exemplifies the theoretical and operational evolution of test speededness.

Posted: | B.C. Leventhal, I. Grabovsky

Educational Measurement: Issues and Practice, 39: 30-36

This article proposes the conscious weight method and the subconscious weight method to bring greater objectivity to the standard-setting process. Both methods quantify the relative harm of the negative consequences of false-positive and false-negative misclassifications.

Posted: | J. Salt, P. Harik, M. A. Barone

Academic Medicine: July 2019 - Volume 94 - Issue 7 - p 926-927

A response to concerns regarding potential bias in the application of machine learning (ML) to the scoring of United States Medical Licensing Examination Step 2 Clinical Skills (CS) patient notes (PN).

Posted: | J. Salt, P. Harik, M. A. Barone

Academic Medicine: March 2019 - Volume 94 - Issue 3 - p 314-316

The United States Medical Licensing Examination Step 2 Clinical Skills (CS) exam uses physician raters to evaluate patient notes written by examinees. In this Invited Commentary, the authors describe the ways in which the Step 2 CS exam could benefit from adopting a computer-assisted scoring approach that combines physician raters’ judgments with computer-generated scores based on natural language processing (NLP).

Posted: | M. Paniagua, J. Salt, K. Swygert, M. Barone

Journal of Medical Regulation (2018) 104 (2): 51–57

There have been a number of important stakeholder opinions critical of the Step 2 Clinical Skills Examination (CS) in the United States Medical Licensing Examination (USMLE) licensure sequence. The Resident Program Director (RPD) Awareness survey was conducted to gauge perceptions of current and potential Step 2 CS use, attitudes toward the importance of residents' clinical skills, and awareness of a medical student petition against Step 2 CS. This cross-sectional survey yielded 205 responses from a representative sample of RPDs across specialties, regions, and program sizes.