A Comparison of the Item Difficulty and Item Discrimination of Multiple-Choice Items Using the "None of the Above" and One Correct Response Options

1987 ◽  
Vol 47 (2) ◽  
pp. 377-383 ◽  
Author(s):  
Nona Tollefson


2020 ◽  
Vol 2 (4) ◽  
pp. 59
Author(s):  
Michael Joseph Wise

The effectiveness of multiple-choice (MC) items depends on the quality of the response options, particularly how well the incorrect options (“distractors”) attract students who have incomplete knowledge. It is often contended that test-writers are unable to devise more than two plausible distractors for most MC items, and that the effort needed to do so is not worthwhile in terms of the items’ psychometric qualities. To test these contentions, I analyzed students’ performance on 545 MC items across six science courses that I have taught over the past decade. Each MC item contained four distractors, and the dataset included more than 19,000 individual responses. All four distractors were deemed plausible in one-third of the items, and three distractors were plausible in another third. Each additional plausible distractor led to an average 13% increase in item difficulty. Moreover, an increase in plausible distractors led to a significant increase in the discriminability of the items, leveling off by the fourth distractor. These results suggest that, at least for teachers writing tests to assess mastery of course content, it may be worthwhile to eschew recent skepticism and continue to attempt to write MC items with three or four distractors.
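As a quick illustration of the classical item statistics this study works with, the sketch below computes item difficulty (proportion answering correctly) and a corrected item-total discrimination index from a toy 0/1 response matrix. The data sizes and variable names are hypothetical, not the study's.

```python
# Minimal sketch of classical item analysis on simulated data (not the
# study's actual dataset or code).
import numpy as np

rng = np.random.default_rng(0)
# Toy data: 200 students x 40 items, scored 1 (correct) / 0 (incorrect).
responses = (rng.random((200, 40)) > 0.4).astype(int)

# Item difficulty in the classical sense: proportion correct (lower = harder).
difficulty = responses.mean(axis=0)

# Corrected item-total correlation: each item against the total score of the
# remaining items, a common discrimination index.
total = responses.sum(axis=1)
discrimination = np.array([
    np.corrcoef(responses[:, j], total - responses[:, j])[0, 1]
    for j in range(responses.shape[1])
])

print(difficulty[:5].round(2), discrimination[:5].round(2))
```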


2019 ◽  
Vol 44 (1) ◽  
pp. 33-48
Author(s):  
Daniel M. Bolt ◽  
Nana Kim ◽  
James Wollack ◽  
Yiqin Pan ◽  
Carol Eckerly ◽  
...  

Discrete-option multiple-choice (DOMC) items differ from traditional multiple-choice (MC) items in the sequential administration of response options (up to the display of the correct option). DOMC can be appealing in computer-based test administrations due to its protection of item security and its potential to reduce testwiseness effects. A psychometric model for DOMC items is proposed that attends to the random positioning of the key across different administrations of the same item, a feature that has been shown to affect DOMC item difficulty. Using two empirical data sets with items administered in both DOMC and MC formats, the variability in key-location effects across both items and persons is considered. The proposed model exploits the capacity of the DOMC format to isolate both (a) distinct sources of item difficulty (i.e., related to the identification of keyed responses versus the ruling out of distractor options) and (b) distinct person proficiencies related to the same two components. Practical implications for the randomized process used to schedule item key location in DOMC test administrations are considered.
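A heavily simplified sketch of the response process the model targets may help. In DOMC, options appear one at a time, so a correct response requires rejecting every distractor shown before the key and then endorsing the key. The toy functions below, with made-up parameters theta_key, theta_rej, and a key-position effect delta, are an assumption-laden stand-in for the authors' model, not its actual specification.

```python
# Toy stand-in for the DOMC response process (not the authors' model): a
# correct response means rejecting each distractor shown before the key,
# then endorsing the key. theta_key and theta_rej play the role of the two
# person proficiencies the paper distinguishes; delta is an assumed
# key-position effect on difficulty.
import math

def logistic(x):
    return 1 / (1 + math.exp(-x))

def p_endorse_key(theta_key, b_key, position, delta):
    # Later key positions assumed harder by delta per slot (toy assumption).
    return logistic(theta_key - (b_key + delta * position))

def p_reject_distractor(theta_rej, b_dist):
    return logistic(theta_rej - b_dist)

def p_correct(theta_key, theta_rej, b_key, b_dists, key_pos, delta=0.2):
    p = 1.0
    for b in b_dists[:key_pos]:          # distractors shown before the key
        p *= p_reject_distractor(theta_rej, b)
    return p * p_endorse_key(theta_key, b_key, key_pos, delta)

# Same toy item with the key in the first vs. the last position:
print(p_correct(0.5, 0.3, 0.0, [-0.5, 0.0, 0.5], key_pos=0))
print(p_correct(0.5, 0.3, 0.0, [-0.5, 0.0, 0.5], key_pos=3))
```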


2020 ◽  
Vol 3 (1) ◽  
pp. 102-113
Author(s):  
Sutami

This research aims to produce a valid and reliable Indonesian-language assessment instrument in the form of higher-order thinking skills (HOTS) test items, and to describe the quality of those items for measuring the HOTS of tenth-grade SMA and SMK students. The study was a research and development project adapted from Borg and Gall's development model, comprising the following steps: research and information collection, planning, early product development, limited tryout, revision of the early product, field tryout, and revision of the final product. The resulting HOTS assessment instrument consists of 40 multiple-choice items and 5 essay items. Based on expert judgment of the material, construction, and language, the instrument was valid and appropriate for use. The reliability coefficients were 0.88 for the multiple-choice items and 0.79 for the essays. The multiple-choice items had an average difficulty of 0.57 (moderate) and an average item discrimination of 0.44 (good), and the distractors functioned well. The essay items had an average difficulty of 0.60 (moderate) and an average item discrimination of 0.45 (good).
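For readers unfamiliar with the reported reliability figures, the sketch below computes Cronbach's alpha, the coefficient behind the 0.88 and 0.79 values, on simulated data. Only the formula is the point; the dataset and names are hypothetical.

```python
# Minimal sketch of Cronbach's alpha on simulated data (not the study's data).
import numpy as np

def cronbach_alpha(scores):
    """scores: persons x items array of item scores."""
    k = scores.shape[1]
    item_var_sum = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

rng = np.random.default_rng(1)
ability = rng.normal(size=300)
# Toy 0/1 responses to 40 items driven by a common ability factor.
mc = ((ability[:, None] + rng.normal(size=(300, 40))) > 0).astype(int)
print(round(cronbach_alpha(mc), 2))
```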


2011 ◽  
Vol 35 (4) ◽  
pp. 396-401 ◽  
Author(s):  
Jonathan D. Kibble ◽  
Teresa Johnson

The purpose of this study was to evaluate whether multiple-choice item difficulty could be predicted either by a subjective judgment by the question author or by applying a learning taxonomy to the items. Eight physiology faculty members teaching an upper-level undergraduate human physiology course consented to participate in the study. The faculty members annotated questions before exams with the descriptors “easy,” “moderate,” or “hard” and classified them according to whether they tested knowledge, comprehension, or application. Overall analysis showed a statistically significant, but relatively low, correlation between the intended item difficulty and actual student scores (ρ = −0.19, P < 0.01), indicating that, as intended item difficulty increased, the resulting student scores on items tended to decrease. Although this expected inverse relationship was detected, faculty members were correct only 48% of the time when estimating difficulty. There was also significant individual variation among faculty members in the ability to predict item difficulty (χ2 = 16.84, P = 0.02). With regard to the cognitive level of items, no significant correlation was found between the item cognitive level and either actual student scores (ρ = −0.09, P = 0.14) or item discrimination (ρ = 0.05, P = 0.42). Despite the inability of faculty members to accurately predict item difficulty, the examinations were of high quality, as evidenced by reliability coefficients (Cronbach's α) of 0.70–0.92, the rejection of only 4 of 300 items in the postexamination review, and a mean item discrimination (point biserial) of 0.37. In conclusion, the effort of assigning annotations describing intended difficulty and cognitive levels to multiple-choice items is of doubtful value in terms of controlling examination difficulty. However, we also report that the process of annotating questions may enhance examination validity and can reveal aspects of the hidden curriculum.
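The two statistics behind the study's headline numbers are easy to reproduce on toy data. The sketch below (hypothetical data, not the study's) computes a Spearman correlation between author-assigned difficulty labels and observed item scores, and a point-biserial discrimination for a single item.

```python
# Hedged sketch on simulated data: Spearman's rho between intended difficulty
# and item scores, and a point-biserial item discrimination.
import numpy as np
from scipy.stats import spearmanr, pointbiserialr

rng = np.random.default_rng(2)

# Per-item intended difficulty labels: 1 = easy, 2 = moderate, 3 = hard.
intended = rng.integers(1, 4, size=300)
# Toy observed item means that drop slightly as intended difficulty rises.
item_mean = 0.9 - 0.05 * intended + rng.normal(0, 0.1, size=300)
rho, p = spearmanr(intended, item_mean)
print(f"rho = {rho:.2f}, p = {p:.4f}")  # expect a modest negative rho

# Point-biserial for one toy item: 0/1 answers against total scores.
answers = rng.integers(0, 2, size=500)
totals = answers * 5 + rng.normal(50, 10, size=500)
print(round(pointbiserialr(answers, totals).correlation, 2))
```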


1968 ◽  
Vol 23 (1) ◽  
pp. 301-302
Author(s):  
Lewis R. Aiken

It is demonstrated that the item discrimination index (d) and an index of the uniformity of the distribution of choices among distractors (U) provide useful information about the effectiveness of distractors on multiple-choice items.
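Aiken's exact formulas are not reproduced here; the sketch below is one plausible operationalization of a uniformity index U, scaled so that U = 1 when wrong answers spread evenly across distractors and U = 0 when they pile onto a single distractor.

```python
# One plausible uniformity index for distractor choices (an illustrative
# operationalization, not Aiken's published formula).
def uniformity(counts):
    """counts: number of examinees choosing each distractor."""
    n, k = sum(counts), len(counts)
    if n == 0 or k < 2:
        return 1.0
    dev = sum(abs(c - n / k) for c in counts)  # total deviation from even spread
    max_dev = 2 * n * (k - 1) / k              # all choices on one distractor
    return 1 - dev / max_dev

print(uniformity([30, 30, 30]))  # 1.0: perfectly even
print(uniformity([90, 0, 0]))    # 0.0: one dominant distractor
```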


Author(s):  
Bettina Hagenmüller

Abstract. The multiple-choice item format is widely used in test construction and large-scale assessment. So far, there has been little research on the impact of the position of the solution among the response options, and the few existing results are inconsistent. Since altering the order of response options would be an easy way to create parallel items for group settings, the influence of the options’ position on item difficulty should be examined. The Linear Logistic Test Model (Fischer, 1972) was used to analyze the data of 829 students aged 8–20 years who worked on general-knowledge items. It was found that the position of the solution among the response options influences item difficulty: items are easiest when the solution is in first place and more difficult when the solution is placed in a middle position or at the end of the set of response options.
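The LLTM's core idea is that item difficulty is a linear combination of item-property effects, here the key's position. The sketch below fakes data with a position effect and recovers it with a rough joint logistic regression over person and property dummies. This is only an illustrative approximation (Fischer's LLTM is properly estimated by conditional maximum likelihood), and all names and numbers are invented.

```python
# Illustrative LLTM-style analysis on invented data (not the study's code).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n_persons, n_items = 300, 30
theta = rng.normal(size=n_persons)
position = rng.integers(0, 3, size=n_items)   # 0 = first, 1 = middle, 2 = last
beta = 0.4 * (position > 0)                   # toy effect: non-first keys harder
p = 1 / (1 + np.exp(-(theta[:, None] - beta[None, :])))
y = (rng.random((n_persons, n_items)) < p).astype(int).ravel()

persons = np.repeat(np.arange(n_persons), n_items)
pos = np.tile(position, n_persons)
X = np.hstack([
    (persons[:, None] == np.arange(n_persons)).astype(float),  # person dummies
    (pos[:, None] == np.array([1, 2])).astype(float),          # first position = baseline
])
# Large C approximates an unpenalized fit.
fit = LogisticRegression(C=1e6, max_iter=2000).fit(X, y)
print(fit.coef_[0][-2:])  # negative coefficients = middle/last keys are harder
```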


2017 ◽  
Vol 25 (4) ◽  
pp. 483-504
Author(s):  
Tsung-han Tsai ◽  
Chang-chih Lin

Due to the crucial role of political knowledge in democratic participation, the measurement of political knowledge has been a major concern in the discipline of political science. Common formats for political knowledge questions include multiple-choice items and open-ended identification questions. The conventional wisdom holds that multiple-choice items induce guessing behavior, which leads to underestimated item-difficulty parameters and biased estimates of political knowledge. This article examines guessing behavior in multiple-choice items and argues that a successful guess requires a certain level of knowledge, conditional on the difficulty of the item. To deal with this issue, we propose a Bayesian IRT guessing model that accommodates the guessing components of item responses. The proposed model is applied to survey data from Taiwan, and the results show that it appropriately describes the guessing components based on respondents’ levels of political knowledge and item characteristics. That is, in general, partially informed respondents are more likely to guess successfully, because well-informed respondents do not need to guess and barely informed ones are easily drawn to attractive distractors. We also examine the gender gap in political knowledge and find that, even when the guessing effect is accounted for, men are more knowledgeable than women about political affairs, which is consistent with the literature.
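To make the "partially informed guessers" claim concrete, the sketch below draws an item response curve in which the guessing floor itself rises with ability. This toy parameterization (g_max, h) is an assumption for illustration, not the authors' Bayesian specification.

```python
# Toy item response curve (illustrative parameterization, not the paper's
# model): guessing success rises with ability, so partially informed
# respondents profit from guessing while barely informed ones do not.
import math

def logistic(x):
    return 1 / (1 + math.exp(-x))

def p_correct(theta, a=1.2, b=0.0, g_max=0.25, h=1.0):
    know = logistic(a * (theta - b))        # knowledge-based success
    guess = g_max * logistic(h * theta)     # guess success rises with ability
    return know + (1 - know) * guess

for theta in (-2, 0, 2):
    print(theta, round(p_correct(theta), 3))
```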


2020 ◽  
pp. 026553222091731
Author(s):  
Franz Holzknecht ◽  
Gareth McCray ◽  
Kathrin Eberharter ◽  
Benjamin Kremmel ◽  
Matthias Zehentner ◽  
...  

Studies from various disciplines have reported that the spatial location of options in relation to processing order affects which option is ultimately chosen. A large number of studies have found a primacy effect, that is, a tendency to prefer the first option. In this paper we report evidence that the position of the key in four-option multiple-choice (MC) listening test items may affect item difficulty and thereby potentially introduce construct-irrelevant variance. Two sets of analyses were undertaken. In Study 1 we explored 30 test takers’ processing via eye-tracking on listening items from the Aptis Test. An unexpected finding concerned the amount of processing given to different response options on the MC questions, depending on their order. Based on this, in Study 2 we looked at the direct effect of key position on item difficulty in a sample of 200 live Aptis items with around 6,000 test takers per item. The results suggest that the spatial location of the key in MC listening tests affects the amount of processing it receives and the item’s difficulty. Given the widespread use of MC tasks in language assessments, these findings seem crucial, particularly for tests that randomize response order: candidates who by chance have many keys in last position might be significantly disadvantaged.
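The Study 2 analysis suggests a simple empirical check: group items by key position and compare proportions correct. The sketch below does this on simulated data with an assumed position penalty; the item counts loosely mirror the reported design (200 items, roughly 6,000 test takers per item), but the effect size is invented.

```python
# Hedged sketch of a difficulty-by-key-position check on simulated data
# (loosely mirroring the Study 2 design, not its actual results).
import numpy as np

rng = np.random.default_rng(4)
key_pos = rng.integers(1, 5, size=200)                  # key position per item
base = rng.normal(0.65, 0.08, size=200)
p_correct = np.clip(base - 0.03 * (key_pos - 1), 0, 1)  # toy position penalty
n = 6000                                                # test takers per item
observed = rng.binomial(n, p_correct) / n

for pos in range(1, 5):
    print(pos, round(observed[key_pos == pos].mean(), 3))
```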

