Shana Stites, Hannah Cao, Jeanine Gill, Kristin Harkins, Jonathan Rubright, Jason Flatt

Innovation in Aging, Volume 4, Issue Supplement_1, 2020, Pages 696-697


This presentation describes the framework informing our approach and presents results from analyses of gender effects in the Health and Retirement Study, examining gender differences in the associations between education and cognitive measures in older adults.

Martin G. Tolsgaard, Christy K. Boscardin, Yoon Soo Park, Monica M. Cuddy, Stefanie S. Sebok-Syer

Advances in Health Sciences Education, Volume 25, Pages 1057–1086 (2020)


This critical review explores: (1) published applications of data science and machine learning (ML) in the health professions education (HPE) literature and (2) the potential role of data science and ML in shifting theoretical and epistemological perspectives in HPE research and practice.

M. M. Hammoud, L. M. Foster, M. M. Cuddy, D. B. Swanson, P. M. Wallach

American Journal of Obstetrics and Gynecology, Volume 223, Issue 3, Pages 435.e1-435.e6


The purpose of this study was to examine medical student reporting of electronic health record use during the obstetrics and gynecology clerkship.

P. Harik, R. A. Feinberg, B. E. Clauser

Integrating Timing Considerations to Improve Testing Practices


This chapter addresses a different aspect of the use of timing data: it provides a framework for understanding how an examinee's use of time interacts with time limits to affect both test performance and the validity of inferences based on test scores. It focuses primarily on examinations administered as part of the physician licensure process.

D. Jurich

Integrating Timing Considerations to Improve Testing Practices


This chapter presents a historical overview of the testing literature that exemplifies the theoretical and operational evolution of test speededness.

B. E. Clauser, M. Kane, J. C. Clauser

Journal of Educational Measurement, Volume 57, Issue 2, Pages 216-229


This article presents two generalizability-theory–based analyses of the proportion of the item variance that contributes to error in the cut score. For one approach, variance components are estimated on the probability (or proportion-correct) scale of the Angoff judgments; for the other, the judgments are transformed to the theta scale of an item response theory model before the variance components are estimated.
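The probability-scale analysis described above can be sketched roughly as follows. This is a minimal illustration only: it assumes a fully crossed judges × items design with one Angoff judgment per cell, uses standard ANOVA estimators of the variance components, and the function name and the specific error-term decomposition are illustrative rather than taken from the article.

```python
import numpy as np

def angoff_variance_components(ratings):
    """Estimate variance components for a fully crossed judges x items
    matrix of Angoff probability judgments (rows = judges, cols = items)."""
    x = np.asarray(ratings, dtype=float)
    J, I = x.shape
    grand = x.mean()
    judge_means = x.mean(axis=1)
    item_means = x.mean(axis=0)
    # Mean squares for the two-way crossed design, one observation per cell
    ms_j = I * np.sum((judge_means - grand) ** 2) / (J - 1)
    ms_i = J * np.sum((item_means - grand) ** 2) / (I - 1)
    resid = x - judge_means[:, None] - item_means[None, :] + grand
    ms_res = np.sum(resid ** 2) / ((J - 1) * (I - 1))
    # ANOVA estimates of the variance components (negatives truncated to 0)
    var_res = ms_res
    var_j = max((ms_j - ms_res) / I, 0.0)
    var_i = max((ms_i - ms_res) / J, 0.0)
    # Item-related error in the cut score (the grand mean of the judgments):
    # sampling variance contributed by items and the judge-by-item residual
    se_cut_items = np.sqrt(var_i / I + var_res / (I * J))
    return {"judges": var_j, "items": var_i,
            "residual": var_res, "se_cut_items": se_cut_items}
```

The theta-scale variant would apply the same decomposition after mapping each probability judgment onto the theta metric of the fitted IRT model (e.g., via the item's inverse response function), which is omitted here because it depends on the operational model.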