Posted: | Erfan Khalaji, Sukru Eraslan, Yeliz Yesilada, Victoria Yaneva

Behaviour & Information Technology

 

This study builds upon prior work that focused on developing a machine-learning classifier trained on gaze data from web-related tasks to detect ASD in adults. Using the same data, we show that a new data pre-processing approach, combined with an exploration of the performance of different classification algorithms, leads to increased classification accuracy compared with prior work.
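As a rough illustration of the kind of algorithm comparison described above (not the authors' actual pipeline), the sketch below cross-validates several off-the-shelf classifiers on placeholder gaze-derived features; the feature set, labels, and classifier choices are assumptions made for the example.

```python
# Hypothetical sketch: comparing classifiers on gaze-derived features with
# cross-validation. Features and labels are simulated placeholders, not the
# study's data or pre-processing approach.
import numpy as np
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 20))    # placeholder gaze features (e.g. fixation counts, durations)
y = rng.integers(0, 2, size=60)  # placeholder labels: 1 = ASD, 0 = neurotypical

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "svm_rbf": SVC(kernel="rbf"),
}
for name, clf in candidates.items():
    pipe = make_pipeline(StandardScaler(), clf)
    scores = cross_val_score(pipe, X, y, cv=cv, scoring="accuracy")
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```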

Posted: | Victoria Yaneva, Brian E. Clauser, Amy Morales, Miguel Paniagua

Advances in Health Sciences Education: Volume 27, pp. 1401–1422

 

Eye-tracking data were collected from 26 students responding to clinical MCQs, and 119 eye-tracking features were then provided as input to a machine-learning model aiming to classify correct and incorrect responses. The predictive power of various feature combinations within the model was evaluated to understand how different feature interactions contribute to the predictions.
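A minimal sketch of how the predictive power of feature combinations might be compared is shown below, assuming hypothetical feature groupings and simulated data; it does not reflect the study's actual 119 features or its model.

```python
# Illustrative sketch (not the study's code): assessing how groups of
# eye-tracking features contribute to predicting correct vs. incorrect
# responses. Feature groups and data are hypothetical placeholders.
from itertools import combinations
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n_items = 300
feature_groups = {                    # hypothetical groupings of eye-tracking features
    "fixation": rng.normal(size=(n_items, 5)),
    "saccade": rng.normal(size=(n_items, 5)),
    "pupil": rng.normal(size=(n_items, 5)),
}
y = rng.integers(0, 2, size=n_items)  # 1 = correct response, 0 = incorrect

for r in (1, 2, 3):
    for combo in combinations(feature_groups, r):
        X = np.hstack([feature_groups[g] for g in combo])
        acc = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5).mean()
        print(f"{'+'.join(combo)}: mean CV accuracy = {acc:.3f}")
```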

Posted: | Sukru Eraslan, Yeliz Yesilada, Victoria Yaneva, Simon Harper

ACM SIGACCESS Accessibility and Computing

 

In this article, we first summarise STA (Scanpath Trend Analysis) with its application in autism detection, and then discuss future directions for this research.

Posted: | V. Yaneva, L. A. Ha, S. Eraslan, Y. Yesilada, R. Mitkov

IEEE Transactions on Neural Systems and Rehabilitation Engineering

 

The purpose of this study is to test whether visual processing differences between adults with and without high-functioning autism captured through eye tracking can be used to detect autism.

Posted: | Z. Cui, C. Liu, Y. He, H. Chen

Journal of Educational Measurement: Volume 55, Issue 4, pp. 582–594

 

This article proposes and evaluates a new method that implements computerized adaptive testing (CAT) without any restriction on item review. In particular, it evaluates the new method in terms of the accuracy of ability estimates and its robustness against test-manipulation strategies. The study shows that the newly proposed method offers a promising win-win: examinees have full freedom to review and change answers, while the impact of test-manipulation strategies is mitigated.
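For context, a generic CAT loop under a 2PL model with maximum-information item selection and EAP ability estimation is sketched below; it does not implement the article's proposed review-friendly method, and all item parameters and responses are simulated assumptions.

```python
# Generic CAT sketch under a 2PL model, for background only; it does NOT
# implement the article's proposed method or its handling of item review.
import numpy as np

rng = np.random.default_rng(2)
a = rng.uniform(0.8, 2.0, size=200)   # simulated discrimination parameters
b = rng.normal(0.0, 1.0, size=200)    # simulated difficulty parameters
true_theta = 0.5

def p_correct(theta, a, b):
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def eap_estimate(responses, items, grid=np.linspace(-4, 4, 81)):
    # EAP ability estimate with a standard-normal prior over a theta grid.
    prior = np.exp(-0.5 * grid**2)
    likelihood = np.ones_like(grid)
    for x, i in zip(responses, items):
        p = p_correct(grid, a[i], b[i])
        likelihood *= p if x else 1.0 - p
    posterior = prior * likelihood
    return float(np.sum(grid * posterior) / np.sum(posterior))

theta, administered, responses = 0.0, [], []
for _ in range(20):
    p = p_correct(theta, a, b)
    info = a**2 * p * (1.0 - p)               # Fisher information at current theta
    info[administered] = -np.inf              # never reuse an administered item
    item = int(np.argmax(info))
    x = rng.random() < p_correct(true_theta, a[item], b[item])  # simulated response
    administered.append(item)
    responses.append(int(x))
    theta = eap_estimate(responses, administered)
print(f"final theta estimate: {theta:.2f} (true: {true_theta})")
```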

Posted: | S. Tackett, M. Raymond, R. Desai, S. A. Haist, A. Morales, S. Gaglani, S. G. Clyman

Medical Teacher: Volume 40, Issue 8, pp. 838–841

 

Adaptive learning requires frequent and valid assessments for learners to track progress against their goals. This study determined whether multiple-choice questions (MCQs) “crowdsourced” from medical learners could meet the standards of many large-scale testing programs.
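By way of illustration, two classical item statistics that large-scale programs commonly examine, difficulty (proportion correct) and point-biserial discrimination, can be computed as in the sketch below; the thresholds and simulated data are assumptions for the example, not the study's criteria.

```python
# Illustrative sketch of classical item statistics often used to screen MCQs:
# difficulty (p-value) and point-biserial discrimination. Data and flagging
# thresholds are invented for illustration.
import numpy as np

rng = np.random.default_rng(3)
scores = rng.integers(0, 2, size=(500, 40))   # simulated 0/1 item scores (examinees x items)
total = scores.sum(axis=1)

for j in range(scores.shape[1]):
    item = scores[:, j]
    difficulty = item.mean()                     # proportion answering the item correctly
    rest = total - item                          # rest-score avoids part-whole inflation
    pt_biserial = np.corrcoef(item, rest)[0, 1]  # item-rest (point-biserial) correlation
    if not (0.3 <= difficulty <= 0.9) or pt_biserial < 0.15:
        print(f"item {j}: p = {difficulty:.2f}, r_pb = {pt_biserial:.2f} (flag for review)")
```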

Posted: | I. Kirsch, W. Thorn, M. von Davier

Quality Assurance in Education, Vol. 26 No. 2, pp. 150-152

 

An introduction to a special issue of Quality Assurance in Education featuring papers based on presentations at a two-day international seminar on managing the quality of data collection in large-scale assessments.

Posted: | M. von Davier

Psychometrika 83, 847–857 (2018)

 

Utilizing algorithms to generate items in educational and psychological testing is an active area of research for obvious reasons: test items are predominantly written by humans, in most cases by content experts who represent a limited and potentially costly resource. Using algorithms instead has the appeal of providing an unlimited resource for this crucial part of assessment development.
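One common approach to automatic item generation is to instantiate an item model (template) with constrained variables; the toy sketch below illustrates that idea with an invented dosage-calculation template and distractor rules, not a method described in the paper.

```python
# Toy sketch of template-based automatic item generation. The template,
# value ranges, and distractor rules are invented for illustration only.
import itertools
import random

TEMPLATE = ("A dose of {dose} mg is ordered for a patient weighing {weight} kg. "
            "What is the dose in mg/kg?")

def generate_items(n, seed=0):
    rng = random.Random(seed)
    pool = list(itertools.product(range(50, 501, 50), range(40, 121, 10)))
    rng.shuffle(pool)                       # sample distinct variable combinations
    items = []
    for dose, weight in pool[:n]:
        key = round(dose / weight, 2)
        distractors = [round(weight / dose, 2),          # inverted ratio
                       round(dose / weight * 2, 2),       # doubling error
                       round(dose * weight / 1000, 2)]    # multiplication error
        items.append({"stem": TEMPLATE.format(dose=dose, weight=weight),
                      "key": key, "distractors": distractors})
    return items

for item in generate_items(3):
    print(item["stem"], "->", item["key"])
```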