
RESEARCH LIBRARY

View the latest publications from members of the NBME research team

Posted: April 3, 2018 | M. von Davier

Quality Assurance in Education, Vol. 26 No. 2, pp. 243-262

 

Surveys that include skill measures may suffer from additional sources of error compared with surveys containing questionnaires alone; examples include distractions such as noise or interruptions of testing sessions, as well as fatigue or lack of motivation to succeed. This paper reviews statistical tools, based on latent variable modeling approaches extended with explanatory variables, that allow such survey errors to be detected in skill surveys.

Posted: April 1, 2018 | R. A. Feinberg, D. P. Jurich, L. M. Foster

Academic Medicine: April 2018 - Volume 93 - Issue 4 - p 636-641

 

Increasing criticism of maintenance of certification (MOC) examinations has prompted certifying boards to explore alternative assessment formats. The purpose of this study was to examine the effect of allowing test takers to access reference material while completing their MOC Part III standardized examination.

Posted: March 30, 2018 | M. von Davier

Measurement: Interdisciplinary Research and Perspectives, 16:1, 59-70

 

This article critically reviews how diagnostic models have been conceptualized and how they compare with other approaches used in educational measurement. In particular, it revisits certain assumptions that have been taken for granted and used as defining characteristics of diagnostic models, and asks whether these assumptions explain why the models have not achieved the success in operational analyses and large-scale applications that many had hoped for.

Posted: March 27, 2018 | S. D. Stites, J. D. Rubright, J. Karlawish

Alzheimer's & Dementia, 14: 925-932

 

This survey examines the prevalence of beliefs, attitudes, and expectations about Alzheimer's disease dementia among the public and considers how these findings could inform strategies to mitigate stigma.

Posted: March 25, 2018 | B. Michalec, M. M. Cuddy, P. Hafferty, M. D. Hanson, S. L. Kanter, D. Littleton, M. A. T. Martimianakis, R. Michaels, F. W. Hafferty

Med Educ, 52: 359-361

 

Focusing specifically on examples set in the context of movement from Bachelor's level undergraduate programmes to enrolment in medical school, this publication argues that a great deal of what happens on college campuses today, curricular and otherwise, is (in)directly driven by the not‐so‐invisible hand of the medical education enterprise.

Posted: March 12, 2018 | M. von Davier

Psychometrika 83, 847–857 (2018)

 

Utilizing algorithms to generate items in educational and psychological testing is an active area of research for obvious reasons: Test items are predominantly written by humans, in most cases by content experts who represent a limited and potentially costly resource. Using algorithms instead has the appeal of providing an unlimited resource for this crucial part of assessment development.

Posted: February 2, 2018 | R.A. Feinberg, D. Jurich, J. Lord, H. Case, J. Hawley

Journal of Veterinary Medical Education 2018 45:3, 381-387

 

This study uses item response data from the November–December 2014 and April 2015 NAVLE administrations (n = 5,292) to conduct timing analyses comparing performance across several examinee subgroups. The results provide evidence that testing conditions were sufficient for most examinees, thereby supporting the current time limits. For the relatively few examinees who may have been affected, results suggest the cause is not bias in the test but rather poor pacing behavior combined with knowledge deficits.

Posted: January 24, 2018 | J. D. Rubright

Educational Measurement: Issues and Practice, 37: 40-45

 

This simulation study demonstrates that the strength of item dependencies and the location of an examination system's cut points both influence the accuracy (i.e., the sensitivity and specificity) of examinee classifications. Practical implications of these results are discussed in terms of false positive and false negative classifications of test takers.
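As a minimal illustration of the accuracy measures named above (the counts below are hypothetical and not taken from the study), sensitivity and specificity of pass/fail classifications can be computed from a confusion matrix of true mastery status versus classification outcome:

```python
# Illustrative sketch only: sensitivity and specificity of pass/fail
# classifications against examinees' true mastery status.
# The counts are hypothetical, not drawn from the publication.

def classification_accuracy(tp, fn, fp, tn):
    """Return (sensitivity, specificity) from confusion-matrix counts.

    tp: true masters classified as passing
    fn: true masters classified as failing (false negatives)
    fp: true non-masters classified as passing (false positives)
    tn: true non-masters classified as failing
    """
    sensitivity = tp / (tp + fn)  # proportion of true masters who pass
    specificity = tn / (tn + fp)  # proportion of non-masters who fail
    return sensitivity, specificity

sens, spec = classification_accuracy(tp=450, fn=50, fp=30, tn=470)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
# prints "sensitivity=0.90, specificity=0.94"
```

Stronger item dependencies or a cut point placed near the mode of the score distribution would shift these counts, lowering one or both measures.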

Posted: January 12, 2018 | M. R. Raymond

CLEAR Exam Review 2018 27(2): 21-27

 

The purpose of this paper is to suggest an approach to job analysis that addresses broad competencies while maintaining the rigor of traditional job analysis and the specificity of good test blueprints.