Validity evidence as a key marker of quality of technical skill assessment in OTL-HNS

2018 ◽  
Vol 128 (10) ◽  
pp. 2296-2300 ◽  
Author(s):  
Mathilde Labbé ◽  
Meredith Young ◽  
Lily H.P. Nguyen


2016 ◽
Vol 6 (5) ◽  
Author(s):  
Nkechi Patricia-Mary Esomonu ◽  
Martins Ndibem Esomonu ◽  
Gbenga Kayode Oluwatoyo

Author(s):  
Jamie S. Switzer

Colleges and universities are adept at teaching students in the academic sense. What is often lacking in a student's education is a thorough grasp of the "real world": how their chosen field actually functions and operates. One way for students to gain an understanding of a particular occupation is to interact with a mentor. Mentors can offer valuable intellectual resources to students (O'Neil & Gomez, 1996). Regardless of the quality of their education, students still need the practical information that can only be provided by a working professional who can give them an awareness of the real world (O'Neil, 2001). A mentor, however, is much more than a professional with unique expertise in a specific vocation. While mentors do provide career knowledge and the means for technical skill development, they offer a myriad of other services: support, encouragement, and guidance. Mentors act as role models, teaching and nurturing students and demonstrating appropriate skills and behaviors. They are also friends to students, providing them with a means to network and find jobs.


2016 ◽  
Vol 2 (3) ◽  
pp. 61-67 ◽  
Author(s):  
Jane Runnacles ◽  
Libby Thomas ◽  
James Korndorffer ◽  
Sonal Arora ◽  
Nick Sevdalis

Introduction: Debriefing is essential to maximise the simulation-based learning experience, but until recently, there was little guidance on an effective paediatric debriefing. A debriefing assessment tool, the Objective Structured Assessment of Debriefing (OSAD), has been developed to measure the quality of feedback in paediatric simulation debriefings. This study gathers and evaluates the validity evidence of OSAD with reference to the contemporary hypothesis-driven approach to validity.

Methods: Expert input on the paediatric OSAD tool from 10 paediatric simulation facilitators provided validity evidence based on content and feasibility (phase 1). Evidence for internal structure validity was sought by examining reliability of scores from video ratings of 35 post-simulation debriefings, and evidence for validity based on relationships to other variables was sought by comparing results with trainee ratings of the same debriefings (phase 2).

Results: Simulation experts' scores were significantly positive regarding the content of OSAD and its instructions. OSAD's feasibility was demonstrated with positive comments regarding clarity and application. Inter-rater reliability was demonstrated with intraclass correlations above 0.45 for 6 of the 7 dimensions of OSAD. The internal consistency of OSAD (Cronbach's α) was 0.78. The Pearson correlation of trainee total score with OSAD total score was 0.82 (p < 0.001), demonstrating validity evidence based on relationships to other variables.

Conclusion: The paediatric OSAD tool provides a structured approach to debriefing that is evidence-based, has multiple sources of validity evidence, and is relevant to end-users. OSAD may be used to improve the quality of debriefing after paediatric simulations.
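
The reliability statistics reported above (per-dimension intraclass correlations and Cronbach's α) can in principle be computed from rating data of this shape with a few lines of R. Below is a minimal sketch using the psych package, with simulated scores standing in for the 35 rated debriefings; the two-rater design, the 1-5 scale, and all object names are illustrative assumptions, not details taken from the study.

# Minimal sketch (assumed: two raters scoring 35 debriefings on the
# 7 OSAD dimensions, 1-5 scale); not the authors' actual analysis code.
library(psych)

set.seed(1)
n_videos <- 35
rater1 <- matrix(sample(1:5, n_videos * 7, replace = TRUE), ncol = 7)
rater2 <- rater1 + sample(-1:1, n_videos * 7, replace = TRUE)  # correlated second rater
rater2 <- pmin(pmax(rater2, 1), 5)                             # keep scores on the 1-5 scale

# Inter-rater reliability: one ICC per OSAD dimension
# (debriefings in rows, raters in columns)
icc_per_dimension <- sapply(1:7, function(d) {
  ICC(cbind(rater1[, d], rater2[, d]))$results["Single_raters_absolute", "ICC"]
})

# Internal consistency (Cronbach's α) across the 7 dimensions,
# using scores averaged over the two raters
mean_scores <- (rater1 + rater2) / 2
alpha(as.data.frame(mean_scores))$total$raw_alpha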


2019 ◽  
Vol 18 (1) ◽  
pp. rm1 ◽  
Author(s):  
Eva Knekta ◽  
Christopher Runyon ◽  
Sarah Eddy

Across all sciences, the quality of measurements is important. Survey measurements are only appropriate for use when researchers have validity evidence within their particular context. Yet, this step is frequently skipped or is not reported in educational research. This article briefly reviews the aspects of validity that researchers should consider when using surveys. It then focuses on factor analysis, a statistical method that can be used to collect an important type of validity evidence. Factor analysis helps researchers explore or confirm the relationships between survey items and identify the total number of dimensions represented on the survey. The essential steps to conduct and interpret a factor analysis are described. This use of factor analysis is illustrated throughout by a validation of Diekman and colleagues’ goal endorsement instrument for use with first-year undergraduate science, technology, engineering, and mathematics students. We provide example data, annotated code, and output for analyses in R, an open-source programming language and software environment for statistical computing. For education researchers using surveys, understanding the theoretical and statistical underpinnings of survey validity is fundamental for implementing rigorous education research.
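
For readers who want a concrete starting point, the core steps described above (choosing the number of factors, fitting the model, inspecting loadings) look roughly like the following in R with the psych package. This is a generic sketch, not the authors' annotated code: survey_items is a hypothetical data frame of Likert-type responses, and the two-factor solution is an assumed example rather than the instrument's established dimensionality.

# Generic exploratory factor analysis sketch with the psych package;
# survey_items is a hypothetical data frame, one column per survey item.
library(psych)

# Step 1: estimate how many factors (dimensions) the items support
fa.parallel(survey_items, fa = "fa")

# Step 2: fit an EFA with that number of factors; an oblique rotation
# (requires the GPArotation package) lets related factors correlate
efa <- fa(survey_items, nfactors = 2, rotate = "oblimin", fm = "ml")

# Step 3: inspect which items load on which factor
print(efa$loadings, cutoff = 0.3)

# Step 4: check internal consistency within each resulting item cluster,
# e.g. alpha(survey_items[, c("item1", "item4", "item7")])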


Author(s):  
Jorge Osma ◽  
Víctor Martínez-Loredo ◽  
Amanda Díaz-García ◽  
Alba Quilez-Orden ◽  
Óscar Peris-Baquero

The lifetime prevalence of emotional disorders in Spain is 4.1% for anxiety and 5.2% for depression, with higher rates among university students. Given the scarcity of screening instruments with adequate psychometric properties, this study aims to explore the validity evidence of the Overall Anxiety Severity and Impairment Scale (OASIS) and the Overall Depression Severity and Impairment Scale (ODSIS). A total of 382 university students from the general population were assessed on anxiety and depressive symptoms, as well as quality of life. The one-dimensional structure of the OASIS and the ODSIS explained 87.53% and 90.60% of the variance, respectively, with excellent internal consistency (α = 0.94 and 0.95) and optimal cut-offs of 4 and 5. Both scales showed significant moderate associations with other measures of anxiety, depression, and quality of life. The OASIS and ODSIS have shown good reliability and sound validity evidence, which recommends their use for the assessment and early detection of anxiety and depressive symptoms, and of associated quality-of-life impairment, in Spanish youth.
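
Cut-off scores such as the 4 and 5 reported here are commonly derived from receiver operating characteristic (ROC) analysis against a reference criterion. A hedged sketch in R with the pROC package follows; oasis_total and has_disorder are hypothetical variable names, and the Youden index is one common selection rule, not necessarily the criterion these authors used.

# Hypothetical ROC-based cut-off selection; oasis_total (scale totals)
# and has_disorder (0/1 reference diagnosis) stand in for real data.
library(pROC)

roc_obj <- roc(response = has_disorder, predictor = oasis_total)
auc(roc_obj)  # overall discriminative ability of the scale

# Youden-optimal threshold, with its sensitivity and specificity
coords(roc_obj, x = "best", best.method = "youden",
       ret = c("threshold", "sensitivity", "specificity"))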


2017 ◽  
Vol 9 (4) ◽  
pp. 473-478 ◽  
Author(s):  
Glenn Rosenbluth ◽  
Natalie J. Burman ◽  
Sumant R. Ranji ◽  
Christy K. Boscardin

Background: Improving the quality of health care and education has become a mandate at all levels within the medical profession. While several published quality improvement (QI) assessment tools exist, all have limitations in addressing the range of QI projects undertaken by learners in undergraduate medical education, graduate medical education, and continuing medical education.

Objective: We developed and validated a tool to assess QI projects with learner engagement across the educational continuum.

Methods: After reviewing existing tools, we interviewed local faculty who taught QI to understand how learners were engaged and what these faculty wanted in an ideal assessment tool. We then developed a list of competencies associated with QI, established items linked to these competencies, revised the items using an iterative process, and collected validity evidence for the tool.

Results: The resulting Multi-Domain Assessment of Quality Improvement Projects (MAQIP) rating tool contains 9 items, with criteria that may be completely fulfilled, partially fulfilled, or not fulfilled. Interrater reliability was 0.77. Untrained local faculty were able to use the tool with minimal guidance.

Conclusions: The MAQIP is a 9-item, user-friendly tool that can be used to assess QI projects at various stages and to provide formative and summative feedback to learners at all levels.
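
With three ordered levels per criterion (not, partially, or completely fulfilled), interrater reliability for a rubric like this is often summarised with a weighted kappa, which gives partial credit for near-misses on an ordinal scale. Below is a small illustrative sketch in R using psych::cohen.kappa on simulated ratings; the two-rater setup and the data are assumptions, not the MAQIP validation dataset.

# Illustrative interrater-agreement check for a 3-level ordinal rubric
# (0 = not, 1 = partially, 2 = completely fulfilled); simulated data.
library(psych)

set.seed(2)
n_projects <- 40
rater_a <- sample(0:2, n_projects, replace = TRUE)
rater_b <- ifelse(runif(n_projects) < 0.8, rater_a,      # raters mostly agree
                  sample(0:2, n_projects, replace = TRUE))

# cohen.kappa reports both unweighted and weighted kappa; the weighted
# value is the one suited to ordered categories
cohen.kappa(cbind(rater_a, rater_b))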

