Nominal Response Model
Recently Published Documents

Total documents: 41 (five years: 11)
H-index: 11 (five years: 3)

Assessment, 2021, pp. 107319112110526
Author(s): Steven P. Reise, Anne S. Hubbard, Emily F. Wong, Benjamin D. Schalet, Mark G. Haviland, et al.

As part of a scale development project, we fit a nominal response item response theory model to responses to the Health Care Engagement Measure (HEM). Under the original 5-point response format, categories were not ordered as intended for six of the 23 items. For the remaining items, the category boundary discrimination between Category 0 (not at all true) and Category 1 (a little bit true) was weak, suggesting that the lowest category was uninformative. When the lowest two categories were collapsed, psychometric properties improved greatly. Category boundary discriminations within items, however, varied considerably: higher response category distinctions, such as responding 3 (very true) versus 2 (mostly true), were substantially more discriminating than lower response category distinctions. Implications for HEM scoring and for improving measurement precision at lower levels of the construct are presented, as is the unique role of the nominal response model in category analysis.
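The category analysis described above follows directly from the form of the nominal response model: each category k of an item has a slope a_k and intercept c_k, and the category boundary discrimination between adjacent categories is the difference of their slopes. Below is a minimal sketch in Python; the 5-category parameter values are invented for illustration, not taken from the HEM study.

```python
import numpy as np

def nrm_category_probs(theta, a, c):
    """Nominal response model: P(X = k | theta) for each category k.

    a, c hold one slope and one intercept per response category
    (identified by, e.g., fixing the first category's parameters to 0).
    """
    z = np.outer(theta, a) + c          # shape (n_persons, n_categories)
    z -= z.max(axis=1, keepdims=True)   # subtract row max for numerical stability
    ez = np.exp(z)
    return ez / ez.sum(axis=1, keepdims=True)

# Hypothetical parameters for one 5-category item.
a = np.array([0.0, 0.2, 1.0, 1.9, 3.0])   # category slopes
c = np.array([0.0, 0.5, 0.8, 0.4, -0.6])  # category intercepts

# Category boundary discriminations: differences of adjacent slopes.
# A small value (here a[1] - a[0] = 0.2) marks a weakly discriminating
# boundary, like the 0-vs-1 distinction reported in the abstract.
print("CBDs:", np.diff(a))

probs = nrm_category_probs(np.linspace(-3, 3, 7), a, c)
print(probs.round(3))
```

Collapsing the two lowest categories would amount to summing their probabilities, i.e., refitting with one fewer (a_k, c_k) pair.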


Author(s): John Stewart, Byron Drury, James Wells, Aaron Adair, Rachel Henderson, et al.

2020, pp. 014662162096574
Author(s): Zhonghua Zhang

Researchers have developed a characteristic curve procedure to estimate the parameter scale transformation coefficients in test equating under the nominal response model. In this study, the delta method was applied to derive standard error expressions for the estimates of those scale transformation coefficients. This brief report presents the results of a simulation study that examined the accuracy of the derived formulas and compared this analytical method with the multiple imputation method. The results indicated that the standard errors produced by the delta method were very close to the criterion standard errors, as well as to those yielded by the multiple imputation method, under all simulation conditions.
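For readers unfamiliar with the delta method: if the item parameter estimates have covariance matrix Σ and the scale transformation coefficients are a smooth function g of those estimates, then Cov(g(β̂)) ≈ J Σ Jᵀ, where J is the Jacobian of g at β̂. A generic sketch follows, using a numerical Jacobian; the toy function g and its inputs are hypothetical stand-ins, not the characteristic curve criterion from the paper.

```python
import numpy as np

def delta_method_se(g, beta_hat, cov_beta, eps=1e-5):
    """Delta-method standard errors for g(beta_hat).

    g maps the parameter-estimate vector to the quantities of interest
    (e.g., transformation coefficients A and B); cov_beta is the
    estimated covariance matrix of beta_hat. The Jacobian is
    approximated by central finite differences.
    """
    beta_hat = np.asarray(beta_hat, dtype=float)
    g0 = np.atleast_1d(g(beta_hat))
    jac = np.zeros((g0.size, beta_hat.size))
    for j in range(beta_hat.size):
        step = np.zeros_like(beta_hat)
        step[j] = eps
        jac[:, j] = (np.atleast_1d(g(beta_hat + step))
                     - np.atleast_1d(g(beta_hat - step))) / (2 * eps)
    cov_g = jac @ cov_beta @ jac.T      # J Sigma J^T
    return np.sqrt(np.diag(cov_g))

# Toy example: g returns a hypothetical (A, B) pair from two parameters.
g = lambda b: np.array([b[0] * b[1], b[0] + b[1]])
print(delta_method_se(g, [1.2, 0.8], np.diag([0.01, 0.02])))
```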


2019, Vol 7 (3), pp. 17
Author(s): Martin Storme, Nils Myszkowski, Simon Baron, David Bernard

Assessing job applicants' general mental ability online poses psychometric challenges due to the necessity of having brief but accurate tests. Recent research (Myszkowski & Storme, 2018) suggests that recovering distractor information through nested logit models (NLMs; Suh & Bolt, 2010) increases the reliability of ability estimates in matrix-type reasoning tests. In the present research, we extended this result to a different context (online intelligence testing for recruitment) and to a larger sample (N = 2,949 job applicants). We found that the NLMs outperformed the nominal response model (Bock, 1972) and provided significant reliability gains over their binary logistic counterparts. In line with previous research, the reliability gain was concentrated at low ability levels. Implications and practical recommendations are discussed.
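In a nested logit model of this kind, solving the item is modeled with a binary (2PL) component, and the choice among distractors, conditional on an incorrect response, with a nominal component; this conditional structure is how distractor information is recovered. A rough sketch following the structure in Suh and Bolt (2010), with invented parameter values:

```python
import numpy as np

def nested_logit_probs(theta, a, b, lam, zeta):
    """2PL nested logit model, sketched.

    a, b: 2PL discrimination and difficulty for the correct response.
    lam, zeta: nominal-model slopes and intercepts for the distractors.
    Returns P(correct) and P(choose each distractor).
    """
    # Binary node: probability of answering correctly.
    p_correct = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    # Nominal node: distractor choice, conditional on being incorrect.
    z = lam * theta + zeta
    z = z - z.max()                      # numerical stability
    distractor = np.exp(z) / np.exp(z).sum()
    return p_correct, (1.0 - p_correct) * distractor

# Hypothetical item: one key, three distractors.
p_key, p_dist = nested_logit_probs(theta=-1.0, a=1.3, b=0.2,
                                   lam=np.array([-0.8, 0.1, 0.7]),
                                   zeta=np.array([0.4, 0.0, -0.4]))
print(p_key, p_dist, p_key + p_dist.sum())  # probabilities sum to 1
```

Because the distractor slopes (lam) differentiate low-ability examinees who pick different wrong answers, most of the added information, and hence the reliability gain, accrues at the low end of the ability scale, consistent with the result reported above.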

