validity argument
Recently Published Documents


TOTAL DOCUMENTS: 79 (five years: 32)
H-INDEX: 11 (five years: 1)

2021 ◽  
pp. 295-298
Author(s):  
Carol A. Chapelle ◽  
Peter C. Hauser ◽  
Hye-won Lee ◽  
Christian Rathmann ◽  
Krister Schönström

The use of argument-based validity as a framework for discussing validity issues in spoken and signed second language (L2) assessment reveals many areas of commonality. These include the role of systematic test development practices in the validity argument, the complexity of rating issues, the need to define and assess a construct of functional communication of meaning, and the centrality of test use. Examining these areas of commonality reveals the fundamental similarities in the basic validity issues faced in spoken and signed language assessment. This chapter is a joint discussion of the key validation issues in signed and spoken language assessment raised in Chapters 8.1 and 8.2.


2021 ◽  
Author(s):  
Diep Tran

More than a decade ago, the Vietnamese Government announced an educational reform to enhance the quality of English language education in the country. An important aspect of this reform is the introduction of a localized test of English proficiency covering four language skills: listening, speaking, reading, and writing. This high-stakes English test is developed and administered by only a limited number of institutions in Vietnam. Although the validity of the test is a considerable concern for test-takers and test-score users, it has remained an under-researched area. This study aims to partly address the issue by validating a listening test developed by one of the authorized institutions in Vietnam; in this thesis, the test is referred to as the Locally Created Listening Test (LCLT).

Using the argument-based approach to validation (Kane, 1992, 2013; Chapelle, 2008), this research develops a validity argument for the evaluation, generalization, and explanation inferences of the LCLT. Three studies were carried out to elicit evidence to support these inferences. The first study investigated the statistical characteristics of LCLT test scores, focusing on the evaluation and generalization inferences. The second study shed light on the extent to which test items engaged the target construct. The third study examined whether test-takers' scores on the LCLT correlated well with their scores on an international English test measuring a similar construct. Both the second and third studies were carried out to support the explanation inference.

These three studies did not provide enough evidence to support the validity argument for the LCLT: the test was found to have major flaws that affected the validity of score interpretations. In light of these findings, suggestions are offered for improving future versions of the LCLT. At the same time, the research uncovered the impact of certain text- and task-related factors on test-takers' performance, with practical implications for the assessment of second language listening in general. The results also contribute to the theory and practice of test localization, a relatively new paradigm in language testing and assessment.
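The statistical analyses described above suggest the general shape of such backing. As a purely illustrative sketch (not the thesis's actual analysis), the Python snippet below computes Cronbach's alpha on a simulated item-response matrix, the kind of internal-consistency evidence that backs the evaluation and generalization inferences, and correlates simulated LCLT totals with hypothetical scores on an external test, the kind of criterion evidence that backs the explanation inference. All data, sample sizes, and variable names are invented.

```python
# Illustrative sketch only: hypothetical data, not the LCLT's actual results.
import numpy as np
from scipy.stats import pearsonr

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (examinees x items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)      # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
# Hypothetical dichotomous responses: 200 test-takers, 35 listening items.
ability = rng.normal(size=(200, 1))
responses = (ability + rng.normal(size=(200, 35)) > 0).astype(float)

lclt_scores = responses.sum(axis=1)
# Hypothetical scores on an external test measuring a similar construct.
external_scores = 0.7 * lclt_scores + rng.normal(scale=5.0, size=200)

# Internal consistency: backing for the evaluation/generalization inferences.
print(f"alpha = {cronbach_alpha(responses):.2f}")
# Criterion correlation: backing for the explanation inference.
r, p = pearsonr(lclt_scores, external_scores)
print(f"r = {r:.2f} (p = {p:.3g})")
```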


2021 ◽  
Vol 268 ◽  
pp. 507-513
Author(s):  
Catalina Ortiz ◽  
Francisca Belmar ◽  
Rolando Rebolledo ◽  
Javier Vela ◽  
Caterina Contreras ◽  
...  

2021 ◽  
pp. 32-47
Author(s):  
Michael T. Kane

2021 ◽  
Vol 6 ◽  
Author(s):  
Tia Fechter ◽  
Ting Dai ◽  
Jennifer G. Cromley ◽  
Frank E. Nelson ◽  
Martin Van Boekel ◽  
...  

The Inference-Making and Reasoning in Biology (IMRB) measure is an assessment tool intended to 1) aid university personnel in advising students on course enrollment, 2) identify students in need of interventions to increase their reasoning skills and their likelihood of completing STEM majors, 3) support instructors in determining growth in students’ reasoning skills, and 4) provide a tool for gauging the success of higher-education interventions intended to increase reasoning skills. Validity arguments for these four uses of the IMRB are developed using an argument-based approach, and the work exemplifies the advantages of framing validation studies within a validity argument framework.


2021 ◽  
Vol 6 ◽  
Author(s):  
Peter Yongqi Gu

The embedded and contingent nature of classroom-based formative assessment (CBFA) means that validity in the norm-referenced, summative tradition cannot be understood in exactly the same way for formative assessment. In fact, some scholars (e.g., Gipps, Beyond testing: towards a theory of educational assessment, 1994, Falmer Press, London, UK) have even argued for an entirely different paradigm with an independent set of criteria for its evaluation. Many others have conceptualized the validity of formative assessment in different ways (e.g., Nichols et al., 2009, 28 (3), 14–23; Stobart, Validity in formative assessment, 2012, SAGE Publications Ltd, London, UK; Pellegrino et al., Educ. Psychol., 2016, 51 (1), 59–81). This article outlines a framework for evaluating the argument-based validity of CBFA. In particular, I use Kane (J. Educ. Meas., 2013, 50 (1), 1–73) as a starting point to map out the types of inferences made in CBFA (the interpretation and use argument) and the structure of arguments for the validity of those inferences (the validity argument). It is posited that a coherent and practical framework, together with its suggested list of inferences, warrants, and backings, will help researchers evaluate the usefulness of CBFA. Teachers may also find the framework useful in validating their own CBFA.
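To make the structure of such an argument concrete, here is a minimal sketch (an illustration under stated assumptions, not Gu's or Kane's actual formalism) of how a researcher or teacher might record each inference in a CBFA interpretation and use argument together with its warrant and collected backing, so that unsupported links in the chain are easy to spot. All inference names and example entries below are hypothetical.

```python
# Minimal sketch of a Kane-style inference chain as data; entries are invented.
from dataclasses import dataclass, field

@dataclass
class Inference:
    name: str          # e.g., "evaluation", "generalization", "extrapolation"
    claim: str         # what the inference licenses
    warrant: str       # the rule that authorizes the inferential step
    backing: list[str] = field(default_factory=list)  # evidence gathered so far

    def is_supported(self) -> bool:
        # A link counts as supported once at least some backing is on record.
        return len(self.backing) > 0

# A two-link fragment of a hypothetical CBFA interpretation and use argument.
iua = [
    Inference(
        name="evaluation",
        claim="The observed classroom performance is interpreted appropriately.",
        warrant="Teacher judgments follow criteria aligned with the learning goal.",
        backing=["criteria shared with students", "moderation notes"],
    ),
    Inference(
        name="extrapolation",
        claim="Performance reflects what the learner can do beyond this task.",
        warrant="The task samples the target domain of language use.",
        backing=[],  # evidence still to be collected
    ),
]

for inf in iua:
    status = "supported" if inf.is_supported() else "needs backing"
    print(f"{inf.name}: {status}")
```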

