Posted: | Victoria Yaneva (editor), Matthias von Davier (editor)

Advancing Natural Language Processing in Educational Assessment


This book examines the use of natural language technology in educational testing, measurement, and assessment. Recent developments in natural language processing (NLP) have enabled large-scale educational applications, though scholars and professionals may lack a shared understanding of the strengths and limitations of NLP in assessment as well as the challenges that testing organizations face in implementation. This first-of-its-kind book provides evidence-based practices for the use of NLP-based approaches to automated text and speech scoring, language proficiency assessment, technology-assisted item generation, gamification, learner feedback, and beyond.

Posted: | M.J. Margolis, R.A. Feinberg (eds)

Integrating Timing Considerations to Improve Testing Practices


This book synthesizes a wealth of theory and research on time issues in assessment into actionable advice for test development, administration, and scoring. 

Posted: | D. Jurich

Integrating Timing Considerations to Improve Testing Practices


This chapter presents a historical overview of the testing literature that exemplifies the theoretical and operational evolution of test speededness.

Posted: | M. von Davier, Y.-S. Lee

Springer International Publishing; 2019


This handbook provides an overview of major developments around diagnostic classification models (DCMs) with regard to modeling, estimation, model checking, scoring, and applications. It brings together not only the current state of the art, but also the theoretical background and models developed for diagnostic classification.

Posted: | M.R. Raymond, C. Stevens, S.D. Bucak

Adv in Health Sci Educ 24, 141–150 (2019)


Research suggests that the three-option format is optimal for multiple choice questions (MCQs). This conclusion is supported by numerous studies showing that most distractors (i.e., incorrect answers) are selected by so few examinees that they are essentially nonfunctional. However, nearly all studies have defined a distractor as nonfunctional if it is selected by fewer than 5% of examinees.

Posted: | S. H. Felgoise, R. A. Feinberg, H. B. Stephens, P. Barkhaus, K. Boylan, J. Caress, Z. Simmons

Muscle Nerve, 58: 646-654


The Amyotrophic Lateral Sclerosis (ALS)-Specific Quality of Life instrument and its revised version (ALSSQOL and ALSSQOL-R) have strong psychometric properties and have demonstrated research and clinical utility. This study aimed to develop a short form (ALSSQOL-SF) suitable for limited clinic time and patient stamina.

Posted: | D. Franzen, M. Cuddy, J. S. Ilgen

Journal of Graduate Medical Education: June 2018, Vol. 10, No. 3, pp. 337-338


To create examinations with scores that accurately support their intended interpretation and use in a particular setting, examination writers must clearly define what the test is intended to measure (the construct). Writers must also pay careful attention to how content is sampled, how questions are constructed, and how questions perform in their unique testing contexts.1–3 This Rip Out provides guidance for test developers to ensure that scores from MCQ examinations fit their intended purpose.

Posted: | I. Kirsch, W. Thorn, M. von Davier

Quality Assurance in Education, Vol. 26 No. 2, pp. 150-152


An introduction to a special issue of Quality Assurance in Education featuring papers based on presentations at a two-day international seminar on managing the quality of data collection in large-scale assessments.