Critical language assessment literacy of EFL teachers: Scale construction and validation

2022 ◽  
pp. 026553222110570
Author(s):  
Zia Tajeddin ◽  
Mohammad Khatib ◽  
Mohsen Mahdavi

Critical language assessment (CLA) has been addressed in numerous studies. However, the majority of these studies have overlooked the need for a practical framework to measure the CLA dimension of teachers’ language assessment literacy (LAL). This gap prompted us to develop and validate a critical language assessment literacy (CLAL) scale to further underscore the role of CLA principles and their practice as an essential part of teachers’ LAL. In the first phase, a pool of items was generated through a comprehensive review of the related studies. In the quantitative phase, the developed scale was administered to 255 English-as-a-foreign-language (EFL) teachers selected through convenience and snowball sampling. The data were analyzed through exploratory factor analysis for construct validity and Cronbach’s alpha for estimating internal consistency. The results showed that the items loaded on five factors: (a) teachers’ knowledge of assessment objectives, scopes, and types; (b) assessment use consequences; (c) fairness; (d) assessment policies; and (e) national policy and ideology. The scale was found to have a high level of internal consistency and construct validity, which suggests that it has the potential to be useful in assessing language teachers’ CLAL and in raising language teachers’ awareness of CLAL constructs.
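The validation procedure described above (exploratory factor analysis plus Cronbach’s alpha) can be sketched in a few lines of Python. This is a minimal illustration with synthetic data, not the authors’ actual analysis; the item counts and scoring scale are assumptions for demonstration only.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Synthetic Likert-style responses: 255 respondents x 10 items driven by one
# shared latent trait (illustrative only; the real scale has more items).
rng = np.random.default_rng(0)
latent = rng.normal(size=(255, 1))
scores = np.clip(np.round(3 + latent + rng.normal(scale=0.8, size=(255, 10))), 1, 5)

alpha = cronbach_alpha(scores)

# Kaiser criterion: retain factors whose correlation-matrix eigenvalues exceed 1
eigvals = np.linalg.eigvalsh(np.corrcoef(scores, rowvar=False))[::-1]
n_factors = int((eigvals > 1).sum())

print(f"alpha = {alpha:.2f}, factors retained = {n_factors}")
```

With real questionnaire data, the same two quantities (internal consistency and the retained-factor count) are the starting point for interpreting a factor solution such as the five-factor structure reported above.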

2019 ◽  
Vol 21 (2) ◽  
pp. 243-259
Author(s):  
Frank Giraldo ◽  
Daniel Murcia Quintero

Language Assessment Literacy (LAL) research has focused on defining the knowledge, skills, and principles that the stakeholders involved in language assessment activities are required to master. However, there is scarce research on the relationship between LAL and the professional development of language teachers. Therefore, this exploratory action research study examined the impact of a language assessment course on pre-service teachers in a Colombian language teaching programme. Data were collected through questionnaires, interviews, teacher and researcher journals and class observations. The findings show that the course promoted theoretical, technical and operational dimensions in the language assessment design practices of the participants. In addition, it enhanced their LAL and professional development. Consequently, this study contends that the LAL course changed language assessment perceptions radically and encouraged pre-service teachers to design assessments conscientiously, a feature not explicitly stated in LAL research involving this group of stakeholders elsewhere.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Nurdiana Nurdiana

Half of language teachers’ time is spent on assessing students’ performance. Therefore, they should be literate in language assessment: they should know how to construct a good test and which methods are appropriate for assessing their students’ learning. Without assessment literacy, they may not be able to help their students perform at their best. For this reason, the present study examines language teacher assessment literacy and how it has been measured. In addition, suggestions and recommendations for language teachers regarding assessment literacy are discussed. A literature review was employed to conduct this research. Findings suggest that language teachers need more training in language assessment due to their lack of knowledge in this area. Although some of them are assessment literate, they do not put that knowledge into practice in their classrooms. This implies that the training they need could cover how to select appropriate assessments for their students, how to design a test, alternative assessments, and test specifications.


2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Christine Coombe ◽  
Hossein Vafadar ◽  
Hassan Mohebbi

Abstract Recently, we have witnessed growing interest in developing teachers’ language assessment literacy. The ever-increasing demand for and use of assessment products and data by a more varied group of stakeholders than ever before, including newcomers to the field with limited assessment knowledge, along with the knowledge assessors need to possess (Stiggins, Phi Delta Kappa 72:534-539, 1991), drives an ongoing discussion on assessment literacy. The 1990 Standards for Teacher Competence in Educational Assessment of Students (AFT, NCME, & NEA, Educational Measurement: Issues and Practice 9:30-32, 1990) made a considerable contribution to this field of study. Following these Standards, a substantial number of studies, both supportive and critical, have been published on the knowledge base and skills for assessment literacy, assessment goals, the stakeholders, formative assessment and accountability contexts, and measures examining teacher assessment literacy levels. This paper elaborates on the nature of language assessment literacy, its conceptual framework, related studies on assessment literacy, and the various components of teacher assessment literacy and their interrelationships. The discussion, which focuses on what language teachers and testers need to learn, unlearn, and relearn, should deepen understanding of the work of teachers, teacher trainers, professional developers, stakeholders, teacher educators, and educational policymakers. Further, the outcomes of the present paper can open further avenues for research.


2018 ◽  
Vol 20 (1) ◽  
pp. 179-195 ◽  
Author(s):  
Frank Giraldo

Recently, the applied linguistics field has examined the knowledge, skills, and principles needed for assessment, defined as language assessment literacy. Two major issues in language assessment literacy have been addressed but not fully resolved—what exactly language assessment literacy is and how it differs among stakeholders (e.g., students and teachers). This reflective article reviews assessment literacy from general education experts and language education scholars and shows how the meaning of language assessment literacy has expanded. To add to the discussion of this construct, the article focuses on the specific language assessment literacy for language teachers and proposes a core list of assessment knowledge, skills, and principles for these stakeholders.


HOW ◽  
2021 ◽  
Vol 28 (3) ◽  
pp. 78-92
Author(s):  
Frank Giraldo

At some point, language teachers need to be engaged in language assessment in their profession. Because language assessment is such a primary task for teachers, the field of language testing is encouraging research around the knowledge, skills, and principles that are foundational for sound assessment. In this paper, I provide a definition of Language Assessment Literacy (LAL), especially when it comes to teachers, by reviewing existing models. I then discuss ongoing issues in this area and end the paper by offering language teacher educators suggestions for fostering LAL among pre- and in-service teachers. In the article, I argue that, if more LAL initiatives take place, we are collectively raising the status and nature of language assessment and its impact on teachers’ professional development.


2020 ◽  
Vol 22 (1) ◽  
pp. 189-200
Author(s):  
Frank Giraldo

The language assessment literacy of English language teachers has been one of the topics of discussion in the language testing field. In this article, I focus on the need to expand research constructs and methodologies to understand, in depth, the language assessment literacy for these key players in language assessment. I first explain the need to focus on language teachers and examine current challenges in researching language assessment literacy. Then, I reflect on how post-positivist, interpretive research constructs and methodologies can expand and why they should. If this happens, research might yield more valid, useful data to unveil the complexities of language assessment literacy for language teachers. That data can provide valuable feedback to advance teachers’ professional development through language assessment literacy.


2021 ◽  
Vol 3 (1) ◽  
pp. 120-130
Author(s):  
Ildikó Csépes

Language teachers’ assessment knowledge and skills have received considerable attention from language assessment researchers over the past few decades (Davison & Leung, 2009; Hill & McNamara, 2012; Rea-Dickins, 2001; Taylor, 2013). This seems to be linked to the increased professionalism expected of them in classroom-based assessments. However, teachers seem to face a number of challenges, including how large-scale standardized language exams influence their classroom assessment practices. Teachers’ assessment literacy, therefore, needs to be examined in order to explain their assessment decisions. In this paper, we review the concept of (language) assessment literacy, how it has evolved and how it is conceptualized currently. Recent interpretations seem to reflect a multidimensional, dynamic and situated view of (language) assessment literacy. Implications for teacher education are also highlighted by presenting research findings from studies that explored teachers’ and teacher candidates’ assessment literacy in various educational contexts. As a result, we can identify some common patterns in classroom assessment practices as well as context-specific training needs. Finally, we make a recommendation for tackling some of the challenges language teachers are facing in relation to classroom-based assessment in the Hungarian context.


2021 ◽  
Vol 33 (S1) ◽  
pp. 87-88
Author(s):  
J. Antonio Garcia-Casal ◽  
Natacha Coelho de Cunha Guimarães ◽  
Sofía Díaz Mosquera ◽  
María Alvarez Ariza ◽  
Raimundo Mateos Álvarez

Background: The Rowland Universal Dementia Assessment Scale (RUDAS) is a brief cognitive test, appropriate for people with a minimum completed level of education and sensitive to multicultural contexts. It could be a good instrument for cognitive impairment (CI) screening in Primary Health Care (PHC). It comprises the following areas: recent memory, body orientation, praxis, executive functions, and language.
Research Objective: The objective of this study is to assess the construct validity of the RUDAS by analysing its internal consistency and factorial structure.
Method: Internal consistency will be calculated using ordinal Cronbach’s α, which reflects the average inter-item correlation and, as such, increases when correlations between the items increase. Exploratory factor analysis will be used to arrange the variables into domains using principal components extraction; five factors will be extracted, reflecting the neuropsychological areas assessed by the test, and the result will be rotated under the Varimax procedure to ease interpretation. The analysis will also include the Kaiser–Meyer–Olkin measure of sampling adequacy and Bartlett’s test of sphericity. Estimations will be based on Pearson’s correlations between indicators using a principal component analysis and later replicated with a tetrachoric correlation matrix. The variance in the tetrachoric model will be analysed to identify convergent iterations and their explicative power.
Preliminary results of the ongoing study: The RUDAS is being administered to 321 participants older than 65 years, from seven PHC physicians’ consultations in O Grove Health Center. Data collection will be finished by August 2021, and in this poster we will present the final results of the exploratory factor analysis.
Conclusions: We expect the results of the exploratory factor analysis to replicate those of previous construct validity studies of the test, in which explanatory factor weights were between 0.57 and 0.82, and all were above 40%. Confirming that the RUDAS has a strong factor construct with high factor weights and variance ratio, and that a 6-item model is appropriate for measurement, will support its recommendation as a valid screening instrument for PHC.
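The two preconditions for factor analysis named in the Method section, the Kaiser–Meyer–Olkin (KMO) measure of sampling adequacy and Bartlett’s test of sphericity, can be computed directly from the item correlation matrix. The sketch below uses synthetic data standing in for item scores; sample size and item count mirror the study (321 participants, 6 items) but the data themselves are invented for illustration.

```python
import numpy as np
from scipy.stats import chi2

def bartlett_sphericity(data: np.ndarray):
    """Bartlett's test that the correlation matrix is an identity matrix.

    Returns (chi-square statistic, p-value). A small p-value indicates the
    items are correlated enough for factor analysis to be meaningful.
    """
    n, p = data.shape
    R = np.corrcoef(data, rowvar=False)
    stat = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    df = p * (p - 1) / 2
    return stat, chi2.sf(stat, df)

def kmo(data: np.ndarray) -> float:
    """Overall Kaiser-Meyer-Olkin measure of sampling adequacy."""
    R = np.corrcoef(data, rowvar=False)
    inv_R = np.linalg.inv(R)
    # Partial (anti-image) correlations from the inverse correlation matrix
    d = np.sqrt(np.outer(np.diag(inv_R), np.diag(inv_R)))
    partial = -inv_R / d
    off = ~np.eye(R.shape[0], dtype=bool)
    r2 = (R[off] ** 2).sum()
    p2 = (partial[off] ** 2).sum()
    return r2 / (r2 + p2)

# Synthetic scores: 321 respondents x 6 items sharing one latent factor
rng = np.random.default_rng(1)
g = rng.normal(size=(321, 1))
items = g + rng.normal(scale=1.0, size=(321, 6))

stat, pval = bartlett_sphericity(items)
print(f"Bartlett chi2 = {stat:.1f} (p = {pval:.3g}), KMO = {kmo(items):.2f}")
```

By convention, a KMO above roughly 0.6 and a significant Bartlett result are taken to justify proceeding with the factor extraction and Varimax rotation described in the abstract.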

