Graphical Tools for Item Response Theory Model Assessment
Date
2020-07-13
Type
Thesis
Degree Level
Masters
Abstract
Item response theory (IRT) is widely used in fields such as psychology, education, and health. IRT model assessment is essential because model-data misfit can lead to incorrect inferences and conclusions. There has been extensive work on model assessment for item response theory, but most of the literature concentrates on theoretical methods such as test-statistic procedures for goodness-of-fit. Although graphical diagnostic tools have been explored in the literature, they remain limited and need further development. Hence, our work focuses on graphical diagnostic tools for assessing model fit in IRT contexts. First, we compare observed and expected sum scores through plots. Second, we propose residual diagnostic plots based on randomized quantile residuals (RQR). Finally, we compare a non-parametric model fit with the posited parametric model fit via item characteristic curves (ICC). The first method has long been recognized in the existing literature, while the remaining two are new and proposed in this thesis as a contribution of this research. In each method, we consider both in-sample and out-of-sample prediction. A simulation study was conducted to evaluate and compare the performance of these methods. Our preliminary results indicate that comparing observed vs. expected sum scores fails to detect lack of model fit. For RQR checking, out-of-sample prediction outperforms in-sample prediction in detecting misfit, while non-parametric methods seem promising for assessing a parametric model.
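The abstract's second diagnostic, randomized quantile residuals, can be illustrated for binary IRT responses. The sketch below is not the thesis's own implementation; it is a minimal, self-contained illustration of the standard RQR construction for Bernoulli outcomes, assuming fitted success probabilities are already available from some IRT model. The function name `rqr_bernoulli` and the simulated data are purely illustrative.

```python
# Minimal sketch of randomized quantile residuals (RQR) for Bernoulli
# (0/1) responses, as used for model checking in binary IRT settings.
# Assumption: each response y has a fitted success probability p from
# some already-estimated model; names here are illustrative only.
import random
from statistics import NormalDist

def rqr_bernoulli(y, p, rng=random):
    """RQR for a single 0/1 response y with fitted probability p.

    The Bernoulli CDF jumps at y; we draw u uniformly within the jump
    interval [F(y-), F(y)] and map it through the standard normal
    quantile function.  Under a correctly specified model the
    residuals are (approximately) i.i.d. standard normal.
    """
    lo = 0.0 if y == 0 else 1.0 - p   # F evaluated just below y
    hi = 1.0 - p if y == 0 else 1.0   # F evaluated at y
    u = rng.uniform(lo, hi)           # randomize within the jump
    return NormalDist().inv_cdf(u)

# Quick check: residuals from a well-specified model should look N(0, 1),
# so a normal QQ-plot of them would be close to the identity line.
rng = random.Random(42)
ps = [rng.uniform(0.2, 0.8) for _ in range(5000)]          # fitted probs
ys = [1 if rng.random() < p else 0 for p in ps]            # responses
res = [rqr_bernoulli(y, p, rng) for y, p in zip(ys, ps)]
mean = sum(res) / len(res)
var = sum(r * r for r in res) / len(res)
```

In a diagnostic plot, these residuals would typically be displayed against fitted values or in a normal QQ-plot; systematic departures from N(0, 1) indicate misfit, which is the behavior the thesis's in-sample versus out-of-sample comparison evaluates.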
Keywords
randomized quantile residual (RQR), kernel smoothing ICC, IRT models, model assessment
Degree
Master of Science (M.Sc.)
Department
Mathematics and Statistics
Program
Mathematics