Did the shift to computer-based testing in PISA 2015 affect reading scores? A View from East Asia

Author(s): Hikaru Komatsu, Jeremy Rappleye
2020, Vol 10 (1), pp. 235-244

Author(s): Elena A. M. Gandini, Tania Horák

Abstract: This contribution reports on the development and piloting of a computer-based version of the test of English as a foreign language produced by the University of Central Lancashire (UCLan), where it is currently used for the admission of international students and the subsequent evaluation of their language progress. Among other benefits, computer-based testing allows for better, individualised feedback to both teachers and students, and it can provide a more authentic test experience in light of the current digital shift that UK universities are undergoing. The qualitative improvement in the feedback available to test-takers and teachers was, for us, a crucial factor. Providing students with personalised feedback, that is, feedback directly linked to their performance, has positive washforward: it allows us to guide their future learning, highlighting the areas they need to work on to improve their language skills and giving them suggestions on how to succeed in academia. Furthermore, explaining the meaning of test results in detail improves transparency and ultimately washback, as teachers can use the more accessible marking criteria, together with information on how their students performed, to review plans and schemes of work for subsequent courses.


PLoS ONE, 2015, Vol 10 (12), pp. e0143616
Author(s): Anja J. Boevé, Rob R. Meijer, Casper J. Albers, Yta Beetsma, Roel J. Bosker

2017, Vol 10 (2), pp. 23
Author(s): Hooshang Khoshsima, Monirosadat Hosseini, Seyyed Morteza Hashemi Toroujeni

The advent of technology has generated growing interest in using computers to convert conventional paper-and-pencil-based testing (henceforth PPT) into computer-based testing (henceforth CBT) in the field of education over recent decades. This steady spread of computers to reshape conventional tests into computerized formats has permeated the language assessment field in recent years. However, enjoying the advantages of computers in language assessment raises concerns about the effects that a computerized mode of testing may have on CBT performance. This study therefore investigated the score comparability of a Vocabulary in Use test taken by 30 Iranian undergraduate students studying at a state university located in the Chabahar region of Iran (CMU), to see whether scores from the two administration modes differed. Two similar tests were administered to the male and female participants on two testing occasions, one in each mode, with a four-week interval. A one-way ANOVA was employed to compare the mean scores, and a Pearson correlation test to examine the relationship between mode preference and performance; the results revealed that the two sets of scores did not differ, and that gender was not a variable affecting performance on CBT. Based on the results, the computerized version of the test can be considered a favorable alternative for state undergraduate students in Iran.
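To make the analysis described in this abstract concrete, the following is a minimal sketch in Python of a one-way ANOVA on scores from two testing modes plus a Pearson correlation between mode preference and performance. The arrays ppt_scores, cbt_scores, and cbt_preference are hypothetical stand-ins, not the study's data, and the distributional assumptions are purely illustrative.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Hypothetical data for 30 students (illustrative only, not the study's data):
    # scores from the paper-based (PPT) and computer-based (CBT) administrations,
    # plus a 1-5 Likert rating of each student's preference for the CBT mode.
    ppt_scores = rng.normal(loc=24.0, scale=4.0, size=30)
    cbt_scores = rng.normal(loc=24.5, scale=4.0, size=30)
    cbt_preference = rng.integers(low=1, high=6, size=30)

    # One-way ANOVA comparing mean scores across the two testing modes;
    # a non-significant p-value is consistent with score comparability.
    f_stat, p_anova = stats.f_oneway(ppt_scores, cbt_scores)
    print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.3f}")

    # Pearson correlation between mode preference and CBT performance;
    # a weak, non-significant r suggests preference did not drive scores.
    r, p_corr = stats.pearsonr(cbt_preference, cbt_scores)
    print(f"Pearson: r = {r:.2f}, p = {p_corr:.3f}")

With two groups, the one-way ANOVA is equivalent to an independent-samples t-test (F = t squared), so either test would support the same comparability conclusion here.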

