Journal of Veterinary Medical Education 2018 45(3): 381-387
This study uses item response data from the November–December 2014 and April 2015 NAVLE administrations (n = 5,292) to conduct timing analyses comparing performance across several examinee subgroups. The results provide evidence that testing conditions were sufficient for most examinees, thereby supporting the current time limits. For the relatively few examinees who may have been impacted, results suggest the cause is not bias in the test but rather the effect of poor pacing behavior combined with knowledge deficits.
Educational Measurement: Issues and Practice, 37: 40-45
This simulation study demonstrates that the strength of item dependencies and the location of an examination system's cut-points both influence the accuracy (i.e., the sensitivity and specificity) of examinee classifications. Practical implications of these results are discussed in terms of false positive and false negative classifications of test takers.
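For readers unfamiliar with the accuracy terms above, sensitivity and specificity can be computed directly from a confusion matrix of pass/fail classification decisions. The counts below are hypothetical and purely illustrative; they are not data from the study.

```python
# Hypothetical confusion-matrix counts for a pass/fail classification
# (illustrative only; not results from the simulation study).
true_pass_classified_pass = 850   # true positives
true_pass_classified_fail = 50    # false negatives
true_fail_classified_fail = 80    # true negatives
true_fail_classified_pass = 20    # false positives

# Sensitivity: proportion of truly passing examinees classified as passing.
sensitivity = true_pass_classified_pass / (
    true_pass_classified_pass + true_pass_classified_fail
)

# Specificity: proportion of truly failing examinees classified as failing.
specificity = true_fail_classified_fail / (
    true_fail_classified_fail + true_fail_classified_pass
)

print(f"sensitivity = {sensitivity:.3f}")  # 850/900
print(f"specificity = {specificity:.3f}")  # 80/100
```

A false positive here corresponds to classifying a truly failing examinee as passing, and a false negative to classifying a truly passing examinee as failing, which is why cut-point location affects both rates.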
CLEAR Exam Review 2018 27(2): 21-27
The purpose of this paper is to suggest an approach to job analysis that addresses broad competencies while maintaining the rigor of traditional job analysis and the specificity of good test blueprints.