Continuous Assessment for Improved Student Outcomes

Author(s):
Daniel Volchok, Maisie Caines, David Graf

WebCT views assessment as an activity that is integral to the full scope of the learning process. A variety of methods and strategies are available to course designers and instructors to assess student performance before, during, and after a course has taken place. WebCT provides three major categories of assessment tools (self-tests, quizzes and surveys, and assignments) and, within these tools, seven types of questions (multiple choice, including true/false; combination multiple choice; matching; calculated; short answer; jumbled sentence; and paragraph). The layout, design, and administration of assessments are flexible through selective release, timed assessments, and the sequencing of questions. Through examples from the WebCT Exemplary Course Project, this chapter reviews the many tools and methods available and describes the assessment, grading, and reporting capabilities of WebCT.

2015, Vol 39 (4), pp. 327-334
Author(s):
Brandon M. Franklin, Lin Xiang, Jason A. Collett, Megan K. Rhoads, Jeffrey L. Osborn

Student populations are diverse, and different types of learners struggle with traditional didactic instruction. Problem-based learning has existed for several decades, but there is still controversy regarding the optimal mode of instruction for ensuring success across all levels of students' past achievement. The present study addressed this problem by dividing students in an upper-level course in animal physiology into three instructional groups: traditional lecture-style instruction (LI), guided problem-based instruction (GPBI), and open problem-based instruction (OPBI). Student performance was measured by three summative assessments, each consisting of 50% multiple-choice and 50% short-answer questions, as well as a final overall course assessment. The study also examined how students with different academic achievement histories performed under each instructional method. When student achievement levels were not considered, the effects of instructional method on student outcomes were modest; OPBI students performed moderately better on short-answer exam questions than both the LI and GPBI groups. High-achieving students showed no difference in performance across instructional methods on any metric examined. Among students with low-achieving academic histories, OPBI students largely outperformed LI students on all metrics (short-answer exam: P < 0.05, d = 1.865; multiple-choice exam: P < 0.05, d = 1.166; final score: P < 0.05, d = 1.265). They also outperformed GPBI students on short-answer exam questions (P < 0.05, d = 1.109) but not on multiple-choice exam questions (P = 0.071, d = 0.716) or final course outcome (P = 0.328, d = 0.513). These findings strongly suggest that typically low-achieving students perform at a higher level under OPBI, provided that proper support systems (formative assessment and scaffolding) are in place to encourage student success.
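For readers unfamiliar with the effect sizes reported above, the following minimal Python sketch computes Cohen's d with a pooled standard deviation for two independent groups. The score lists are hypothetical illustration data, not the study's.

```python
import math

def cohens_d(group_a, group_b):
    """Cohen's d with a pooled standard deviation (two independent groups)."""
    n_a, n_b = len(group_a), len(group_b)
    mean_a = sum(group_a) / n_a
    mean_b = sum(group_b) / n_b
    var_a = sum((x - mean_a) ** 2 for x in group_a) / (n_a - 1)
    var_b = sum((x - mean_b) ** 2 for x in group_b) / (n_b - 1)
    pooled_sd = math.sqrt(((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2))
    return (mean_a - mean_b) / pooled_sd

# Hypothetical exam scores for an OPBI group and an LI group (not the study's data)
opbi = [82, 88, 75, 91, 79, 85]
li = [70, 65, 72, 68, 74, 61]
print(f"d = {cohens_d(opbi, li):.3f}")
```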


Author(s):  
Jacqueline A. Carnegie

Summative evaluation in large first- and second-year undergraduate courses often involves multiple choice question (MCQ) exams in order to provide timely feedback. Several versions of these exams are often prepared via computer-based question scrambling in an effort to deter cheating. An important parameter to consider when preparing multiple exam versions is that they must be equivalent in their assessment of student knowledge. This project investigated a possible influence of correct-answer organization on student answer selection across multiple versions of MCQ exams. The specific question was whether a series of four to five consecutive MCQs in which the same letter represented the correct answer had a detrimental influence on a student's ability to continue selecting the correct answer while moving through that series. Student outcomes from such exams were compared with results from exams containing identical questions but no such series. These findings were supplemented by survey data in which students self-assessed the extent to which they paid attention to the distribution of correct answer choices when writing summative exams, both during their initial answer selection and when transferring their answer letters to the Scantron sheet for scoring. Although more than half of survey respondents indicated that they do make note of answer patterning during exams, and that a series of four to five questions sharing the same correct-answer letter would prompt many of them to take a second look at their answer choices, the student outcome results suggest that MCQ randomization, even when it produces short serial arrays of letter-specific correct answers, does not constitute a distraction capable of adversely influencing student performance.
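The study's central question (how often computer-based scrambling yields a series of four to five same-letter correct answers) can be made concrete with a quick Monte Carlo estimate. The sketch below assumes a 50-question exam with five equally likely answer letters; both parameters are illustrative assumptions, not details from the study.

```python
import random

def has_run(key, run_len=4):
    """True if the answer key contains run_len identical letters in a row."""
    streak = 1
    for prev, cur in zip(key, key[1:]):
        streak = streak + 1 if cur == prev else 1
        if streak >= run_len:
            return True
    return False

def estimate_run_probability(n_questions=50, n_options=5, run_len=4, trials=20_000):
    """Monte Carlo estimate of P(at least one run of run_len same letters)."""
    letters = "ABCDE"[:n_options]
    hits = sum(
        has_run([random.choice(letters) for _ in range(n_questions)], run_len)
        for _ in range(trials)
    )
    return hits / trials

# Rough probability that a uniformly random 50-question answer key
# contains a series of four identical correct-answer letters.
print(f"P(run of 4) ~ {estimate_run_probability():.3f}")
```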


Author(s):
Charanjit Kaur Swaran Singh, Harsharan Kaur Jaswan Singh, Dodi Mulyadi, Eng Tek Ong, Tarsame Singh Masa Singh, ...

The main purpose of this study is to investigate in-service teachers' familiarization with the CEFR-aligned school-based assessment (SBA) in the Malaysian secondary ESL classroom. It also explores teachers' knowledge, understanding, and perceptions of the CEFR-aligned SBA, and examines the implementation of the SBA and the challenges that TESL teachers faced in embracing the CEFR-aligned SBA in their ESL classrooms. An exploratory mixed-method research design was employed. Data were collected by administering a survey to 108 in-service teachers; 12 in-service teachers participated in interviews. The results show that the in-service teachers have a rather good level of familiarity with the CEFR-aligned SBA and a moderate level of awareness and comprehension of it. The in-service teachers are nonetheless aware of the importance of the CEFR-aligned SBA in helping students improve their proficiency, and they exhibit a good understanding of selecting appropriate assessment tools and methods to assess students' learning. They expressed struggles and concerns regarding implementing the CEFR-aligned SBA effectively, including lack of training, sourcing good teaching materials, students' negative attitudes toward the teaching and learning process, student attendance, time constraints, and their own workload. In conclusion, implementation of the CEFR-aligned SBA is crucial because it is a national agenda, and teachers' involvement in executing the assessment is obligatory.


2020
Author(s):
Thomas Puthiaparampil, Md Mizanur Rahman

Background: Multiple choice questions, used in medical school assessments for decades, have many drawbacks: they are hard to construct, allow guessing, encourage test-wiseness, promote rote learning, provide no opportunity for examinees to express ideas, and yield no information about the strengths and weaknesses of candidates. Directly asked and answered questions, such as very short answer questions (VSAQs), are considered a better alternative with several advantages.
Objectives: This study aims to substantiate the superiority of VSAQs through actual tests and feedback from the stakeholders.
Methods: We conducted multiple true-false, one-best-answer, and VSAQ tests in two batches of medical students, compared their scores and the psychometric indices of the tests, and sought opinions from students and academics regarding these assessment methods.
Results: Multiple true-false and one-best-answer test scores were skewed and showed low psychometric performance, whereas the VSAQ tests showed better psychometrics and more balanced student performance. Stakeholder opinion was significantly in favour of VSAQs.
Conclusion and recommendation: This study concludes that the VSAQ is a viable alternative to multiple choice question tests, and it is widely accepted by medical students and academics in the medical faculty.
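The "psychometric indices" compared above typically include classical item statistics. As a minimal sketch, the following Python computes item difficulty and an upper-lower discrimination index from a hypothetical 0/1 score matrix; the 27% grouping rule and the sample data are conventional illustrations, not the study's actual method or data.

```python
def item_statistics(scores):
    """Classical item statistics from a 0/1 score matrix.

    `scores` is a list of per-student lists, one 0/1 entry per item.
    Returns (difficulty, discrimination) per item, where discrimination
    compares the upper and lower 27% groups ranked by total score.
    """
    n_students = len(scores)
    n_items = len(scores[0])
    ranked = sorted(scores, key=sum, reverse=True)
    k = max(1, round(0.27 * n_students))
    upper, lower = ranked[:k], ranked[-k:]
    stats = []
    for i in range(n_items):
        difficulty = sum(s[i] for s in scores) / n_students
        discrimination = (sum(s[i] for s in upper) - sum(s[i] for s in lower)) / k
        stats.append((difficulty, discrimination))
    return stats

# Hypothetical 0/1 responses: 6 students x 3 items (not the study's data)
matrix = [
    [1, 1, 0],
    [1, 0, 0],
    [1, 1, 1],
    [0, 0, 0],
    [1, 1, 1],
    [0, 1, 0],
]
for idx, (p, d) in enumerate(item_statistics(matrix), start=1):
    print(f"item {idx}: difficulty={p:.2f}, discrimination={d:.2f}")
```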


2016, Vol 8, pp. 2
Author(s):
Stephen Lippi

The testing effect is a phenomenon that predicts increased retention of material when individuals are tested on soon-to-be-recalled information (McDaniel, Anderson, Derbish, & Morrisette, 2007). Although this effect is well documented in numerous studies, no study has looked at the impact that computer-based quizzes or online companion tools in a course can have on test performance. In addition to the use of online programs, it is important to understand whether the presentation of different question types can increase or decrease student test performance. Although other pedagogical studies have looked at the effect of question order on student performance (Norman, 1954; Balch, 1989), none has looked at whether exposing students to questions in short answer format (testing free recall) before a multiple choice test (testing recognition memory) leads to increased exam scores. The present study sought to understand how use of an online learning system (MindTap, Cengage) and test format order affect final test scores. Five exams, each worth 150 points and consisting of separate short answer and multiple choice sections, were given to each set of Physiological Psychology students at George Mason University. Results indicate that testing order (whether short-answer or multiple choice sections were taken first) impacts student test performance, and that this effect may be mediated by whether or not an online computer program is required. This research has implications for course organization and selection of test format, which may improve student performance.


2008, Vol 9 (2), pp. 66-70
Author(s):
Jennifer Walz Garrett

School-based speech-language pathologists assess students to establish eligibility, collect baselines for treatment goals, determine progress during intervention, and verify generalization of skills. Selecting appropriate assessment tools and methods can be challenging due to time constraints, agency regulations, and the availability of tests. This article describes legal considerations, types of assessments, and the factors involved in the selection and use of various assessment procedures and tools. In addition, speech-language pathologists will learn to calculate words correct per minute (WCPM) and perform miscue analysis, which can provide additional language and literacy information about a child's educational needs.
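The WCPM calculation mentioned above is simple arithmetic: words correct per minute equals the number of words read minus errors, divided by the reading time in minutes. A minimal sketch with hypothetical numbers:

```python
def words_correct_per_minute(total_words, errors, seconds):
    """WCPM = (words read - errors) / minutes of reading time."""
    minutes = seconds / 60
    return (total_words - errors) / minutes

# Hypothetical reading sample: 187 words, 8 miscues, 90 seconds of reading
print(f"{words_correct_per_minute(187, 8, 90):.1f} WCPM")
```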


2017, Vol 32 (4), pp. 1-17
Author(s):
Dianne Massoudi, SzeKee Koh, Phillip J. Hancock, Lucia Fung

In this paper we investigate the effectiveness of an online learning resource for introductory financial accounting students using a suite of online multiple choice questions (MCQ) for summative and formative purposes. We found that students who actively used the online learning resource achieved better examination performance. Further, we found a positive relationship between formative MCQ and unit content related to challenging financial accounting concepts. However, better examination performance was also linked to other factors, such as prior academic performance, tutorial participation, and demographics, including gender and attending university as an international student. JEL Classifications: I20; M41.
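Relationships like the ones reported here, where examination performance is linked to both MCQ usage and prior achievement, are typically estimated with multiple regression. The sketch below uses synthetic data and illustrative variable names (not the paper's model or dataset) to show the general approach with statsmodels.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic data: exam mark regressed on formative MCQ usage and prior GPA
rng = np.random.default_rng(0)
n = 200
mcq_attempts = rng.poisson(10, n)      # illustrative: formative MCQ attempts
prior_gpa = rng.normal(5.0, 1.0, n)    # illustrative: prior academic performance
exam = 40 + 1.2 * mcq_attempts + 4.0 * prior_gpa + rng.normal(0, 8, n)

# Ordinary least squares with an intercept term
X = sm.add_constant(np.column_stack([mcq_attempts, prior_gpa]))
fit = sm.OLS(exam, X).fit()
print(fit.params)  # intercept, MCQ-usage and prior-GPA coefficients
```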


2020, Vol 11 (1), pp. 237
Author(s):
Abdallah Namoun, Abdullah Alshanqiti

The prediction of student academic performance has drawn considerable attention in education. Although learning outcomes are believed to improve learning and teaching, predicting the attainment of student outcomes remains underexplored. A decade of research conducted between 2010 and November 2020 was surveyed to present a fundamental understanding of the intelligent techniques used for the prediction of student performance, where academic success is strictly measured using student learning outcomes. The electronic bibliographic databases searched include ACM, IEEE Xplore, Google Scholar, Science Direct, Scopus, Springer, and Web of Science. We synthesized and analyzed a total of 62 relevant papers with a focus on three perspectives: (1) the forms in which the learning outcomes are predicted, (2) the predictive analytics models developed to forecast student learning, and (3) the dominant factors impacting student outcomes. Best practices for conducting systematic literature reviews, e.g., PICO and PRISMA, were applied to synthesize and report the main results. The attainment of learning outcomes was measured mainly as performance class standings (i.e., ranks) and achievement scores (i.e., grades). Regression and supervised machine learning models were frequently employed to classify student performance. Finally, student online learning activities, term assessment grades, and student academic emotions were the most evident predictors of learning outcomes. We conclude the survey by highlighting major research challenges and offering recommendations to motivate future work in this field.
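As an illustration of the supervised classification approach the survey found most common, the following sketch trains a logistic regression on synthetic features named after the dominant predictors identified above; the data, threshold, and coefficients are invented for demonstration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic features mirroring the survey's dominant predictors:
# online activity counts, term assessment grades, academic-emotion scores
rng = np.random.default_rng(1)
n = 300
X = np.column_stack([
    rng.poisson(25, n),      # online learning activity events
    rng.normal(65, 12, n),   # term assessment grade
    rng.uniform(1, 5, n),    # self-reported academic-emotion score
])
# Invented pass/fail class standing driven by the three features plus noise
y = (0.3 * X[:, 0] + 0.6 * X[:, 1] + 2 * X[:, 2] + rng.normal(0, 5, n)) > 55

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"accuracy: {accuracy_score(y_test, clf.predict(X_test)):.2f}")
```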


2017, Vol 16 (1), pp. ar7
Author(s):
Xiaoying Xu, Jennifer E. Lewis, Jennifer Loertscher, Vicky Minderhout, Heather L. Tienson

Multiple-choice assessments provide a straightforward way for instructors of large classes to collect data related to student understanding of key concepts at the beginning and end of a course. By tracking student performance over time, instructors receive formative feedback about their teaching and can assess the impact of instructional changes. The evidence of instructional effectiveness can in turn inform future instruction, and vice versa. In this study, we analyzed student responses on an optimized pretest and posttest administered during four different quarters in a large-enrollment biochemistry course. Student performance and the effect of instructional interventions related to three fundamental concepts (hydrogen bonding, bond energy, and pKa) were analyzed. After instructional interventions, a larger proportion of students demonstrated knowledge of these concepts compared with data collected before the interventions. Student responses trended from inconsistent to consistent and from incorrect to correct. The instructional effect was particularly remarkable for hydrogen bonding and bond energy in the three later quarters. This study supports the use of multiple-choice instruments to assess the effectiveness of instructional interventions, especially in large classes, by providing instructors with quick and reliable feedback on student knowledge of each specific fundamental concept.
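One conventional way to summarize the pretest-to-posttest shifts this study tracks is Hake's normalized gain, g = (post − pre) / (100 − pre). The study reports proportions of students rather than this statistic, so the sketch below is an illustrative convention applied to hypothetical percentages, not the paper's analysis.

```python
def normalized_gain(pre_pct, post_pct):
    """Hake's normalized gain: fraction of possible improvement realized."""
    return (post_pct - pre_pct) / (100 - pre_pct)

# Hypothetical pre/post percent-correct per concept (not the study's data)
concepts = {"hydrogen bonding": (42, 71), "bond energy": (35, 66), "pKa": (51, 64)}
for name, (pre, post) in concepts.items():
    print(f"{name}: g = {normalized_gain(pre, post):.2f}")
```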

