The Living Codebook: Documenting the Process of Qualitative Data Analysis

2021 ◽  
pp. 004912412098618
Author(s):  
Victoria Reyes ◽  
Elizabeth Bogumil ◽  
Levin Elias Welch

Transparency is once again a central issue of debate across types of qualitative research. Work on how to conduct qualitative data analysis, on the other hand, walks us through the step-by-step process on how to code and understand the data we’ve collected. Although there are a few exceptions, less focus is on transparency regarding decision-making processes in the course of research. In this article, we argue that scholars should create a living codebook, which is a set of tools that documents the data analysis process. It has four parts: (1) a processual database that keeps track of initial codes and a final database for completed codes, (2) a “definitions and key terms” list for conversations about codes, (3) memo-writing, and (4) a difference list explaining the rationale behind unmatched codes. It allows researchers to interrogate taken-for-granted assumptions about what data are focused on, why, and how to analyze it. To that end, the living codebook moves beyond discussions around intercoder reliability to how analytic codes are created, refined, and debated.
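
The codebook the authors describe is essentially a small, versioned data structure. As a purely illustrative sketch (the class and field names below are assumptions, not the authors' own tooling), its four parts might be represented in Python roughly as follows:

```python
# Minimal sketch of a "living codebook"; names are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import date


@dataclass
class CodeEntry:
    name: str
    definition: str                # entry in the "definitions and key terms" list
    status: str = "initial"        # "initial" in the processual database, "final" once completed
    memos: list[str] = field(default_factory=list)              # memo-writing attached to the code
    difference_notes: list[str] = field(default_factory=list)   # rationale behind unmatched codes


@dataclass
class LivingCodebook:
    processual: dict[str, CodeEntry] = field(default_factory=dict)  # initial codes under discussion
    final: dict[str, CodeEntry] = field(default_factory=dict)       # completed codes

    def propose(self, name: str, definition: str) -> CodeEntry:
        """Log a new initial code together with its working definition."""
        entry = CodeEntry(name=name, definition=definition)
        self.processual[name] = entry
        return entry

    def record_difference(self, name: str, rationale: str) -> None:
        """Document why two coders' applications of a code did not match."""
        self.processual[name].difference_notes.append(rationale)

    def finalize(self, name: str, memo: str) -> None:
        """Move a code to the final database, recording the decision in a memo."""
        entry = self.processual.pop(name)
        entry.status = "final"
        entry.memos.append(f"{date.today().isoformat()}: {memo}")
        self.final[name] = entry
```

The point of the sketch is simply that each of the four parts (the processual and final databases, the definitions list, the memos, and the difference list) leaves a written trace of a decision, which is what makes the codebook "living."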


2021 ◽  
Vol 8 (1) ◽  
pp. 57
Author(s):  
Iryani Abdul Halim Choo ◽  
Mohd. Sabrizaa Abd Rashid ◽  
Kartina Alauddin ◽  
Nazrul Helmy Jamaludin

This research aims to identify the Malay principal forms in the roof decorative elements of the Rumah Limas Bumbung Perak (RLBP). Through site observation, the data were collected in the form of images and analysed using the CAQDAS (Computer Assisted Qualitative Data Analysis Software) package ATLAS.ti 8. The research found four Malay principal forms present in the roof decorative elements of the RLBP: Gunungan, Buah Guntung, Lebah Bergantung and Pohon Beringin. The similarity and uniformity in the use of these forms and their meanings are identifiable in the traditional houses of other regions, which indicates a shared understanding of the belief system and practices of craftsmen throughout the Peninsula.


2016 ◽  
Vol 17 (1) ◽  
pp. 20-36 ◽  
Author(s):  
Xiaoli Hong ◽  
Michelle M Falter ◽  
Bob Fecho

In this article we introduce tension as a means for qualitative data analysis based on Mikhail Bakhtin’s dialogical theory. We first explain the foundations of Bakhtin’s theory and show the inevitability of tension in our lives and in qualitative data analysis. We then offer a review of how Bakhtin’s notion of tension has manifested itself in qualitative research, which prompts us to establish a tensional approach to qualitative data analysis. Finally, we outline our framework for a tensional approach to data analysis and offer examples of putting this approach into practice in our own study. Our tensional approach (1) explores key moments of tension; (2) seeks out unease and discomfort; (3) involves the researcher and research participants in ongoing dialogue; and (4) embraces multiple perspectives on a range of tensions during the data analysis process. It encourages uncertainties and questions instead of pursuing certainty of meaning and fixed conclusions.


Author(s):  
Neringa Kalpokaite ◽  
Ivana Radivojevic

Qualitative research is a rich and diverse discipline, yet novice qualitative researchers may struggle to discern how to approach their qualitative data analysis among the plethora of possibilities. This paper presents a foundational model that facilitates a comprehensive yet manageable approach to qualitative data analysis and can be applied within an array of qualitative methodologies. Based on an exhaustive review of the work of expert qualitative methodologists, along with our own experience of teaching qualitative research, this model synthesises commonly used analytic strategies and methods that are accessible to novice qualitative researchers. The foundational model consists of four iterative cycles: the Inspection Cycle, Coding Cycle, Categorisation Cycle, and Modelling Cycle; memo-writing is inherent to the entire analysis process. Our goal is to offer a solid foundation from which novice qualitative researchers may begin familiarising themselves with the craft of qualitative research and continue discovering methods for making sense of qualitative data.
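
Read as a workflow, the model is an iterative loop over the four cycles with memos written at every step. The skeleton below is only a sketch of that loop; the function names, signatures, and fixed number of passes are assumptions for illustration, not part of the authors' model.

```python
# Skeleton of the four iterative cycles; names and the pass count are assumed.
from typing import Callable

def analyse(documents: list[str],
            inspect: Callable[[list[str]], list[str]],                            # Inspection Cycle
            code: Callable[[list[str]], dict[str, list[str]]],                    # Coding Cycle
            categorise: Callable[[dict[str, list[str]]], dict[str, list[str]]],   # Categorisation Cycle
            model: Callable[[dict[str, list[str]]], dict],                        # Modelling Cycle
            passes: int = 3) -> tuple[dict, list[str]]:
    memos: list[str] = []          # memo-writing runs through the whole analysis
    result: dict = {}
    for n in range(1, passes + 1):
        notes = inspect(documents)
        memos.append(f"pass {n}: {len(notes)} inspection notes")
        codes = code(documents)
        memos.append(f"pass {n}: {len(codes)} codes applied")
        categories = categorise(codes)
        memos.append(f"pass {n}: {len(categories)} categories formed")
        result = model(categories)
        memos.append(f"pass {n}: model revised")
    return result, memos
```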


2018 ◽  
Vol 17 (1) ◽  
pp. 160940691878636 ◽  
Author(s):  
Carmel Maher ◽  
Mark Hadfield ◽  
Maggie Hutchings ◽  
Adam de Eyto

Deep and insightful interactions with the data are a prerequisite for qualitative data interpretation, in particular in the generation of grounded theory. The researcher must also employ imaginative insight as they attempt to make sense of the data and generate understanding and theory. Design research is likewise dependent upon the researcher's creative interpretation of the data. To support the research process, designers surround themselves with data, both as a source of empirical information and as inspiration to trigger imaginative insights. Constant interaction with the data is integral to design research methodology. This article explores a design researcher's approach to qualitative data analysis, in particular the use of traditional tools such as colored pens, paper, and sticky notes alongside the CAQDAS software NVivo, and the associated implications for rigor. A design researcher's approach, grounded in a practice that maximizes researcher-data interaction across a variety of learning modalities, ensures the analysis process is rigorous and productive. Reflection on the authors' research analysis process, combined with consultation of the literature, suggests that digital analysis software packages such as NVivo do not fully scaffold the analysis process. They do, however, provide excellent data management and retrieval facilities that support analysis and write-up. This research finds that coding with traditional tools such as colored pens, paper, and sticky notes to support data analysis, combined with digital software packages such as NVivo to support data management, offers a valid and tested analysis method for grounded theory generation. Insights developed from exploring a design researcher's approach may benefit researchers from other disciplines engaged in qualitative analysis.


2018 ◽  
Vol 23 (1) ◽  
pp. 42-55 ◽  
Author(s):  
Abdolghader Assarroudi ◽  
Fatemeh Heshmati Nabavi ◽  
Mohammad Reza Armat ◽  
Abbas Ebadi ◽  
Mojtaba Vaismoradi

Qualitative content analysis comprises conventional, directed, and summative approaches to data analysis. These are used to provide descriptive knowledge and understanding of the phenomenon under study. However, the method underpinning directed qualitative content analysis is insufficiently delineated in the international literature. This paper aims to describe and integrate the process of data analysis in directed qualitative content analysis. Various international databases were used to retrieve articles related to directed qualitative content analysis, and a review of the literature led to the integration and elaboration of a stepwise method of data analysis for this approach. The 16-step method proposed in this paper is a detailed description of the analytical steps to be taken in directed qualitative content analysis, addressing the current gap in the international literature regarding the practical process of qualitative data analysis. An example of “the resuscitation team members' motivation for cardiopulmonary resuscitation,” based on Victor Vroom's expectancy theory, is also presented. The directed qualitative content analysis method proposed here is a reliable, transparent, and comprehensive method for qualitative researchers. It can increase the rigour of qualitative data analysis, make comparison of the findings of different studies possible, and yield practical results.
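
The defining move in the directed approach is that the main categories are fixed in advance by the guiding theory and the data are coded against them, with unmatched material set aside as candidate new codes. The snippet below is a toy illustration of that single move only; the keyword lists for Vroom's three constructs and the example sentence are invented, and the paper's full 16-step procedure is not reproduced here.

```python
# Toy sketch of theory-driven precoding in directed qualitative content analysis.
# Categories and keyword lists are assumed for illustration only.
import re

# Predetermined main categories derived from the guiding theory (Vroom).
THEORY_CATEGORIES: dict[str, list[str]] = {
    "expectancy":      ["effort", "able", "capable", "training"],
    "instrumentality": ["outcome", "reward", "recognition", "leads to"],
    "valence":         ["value", "important", "matters", "meaningful"],
}

def precode(unit: str) -> dict[str, list[str]]:
    """Assign a meaning unit to the predetermined categories it matches;
    anything unmatched is set aside for inductive (emergent) coding."""
    hits = {cat: [kw for kw in kws if re.search(rf"\b{re.escape(kw)}\b", unit, re.I)]
            for cat, kws in THEORY_CATEGORIES.items()}
    matched = {cat: kws for cat, kws in hits.items() if kws}
    return matched if matched else {"uncategorised (candidate new code)": []}

# Invented meaning unit from an interview about CPR team motivation.
print(precode("Extra training made me feel capable of leading the resuscitation."))
# -> {'expectancy': ['capable', 'training']}
```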


2017 ◽  
Vol 1 (1) ◽  
pp. 46
Author(s):  
Pantas Simanjuntak

The research was carried out to analyse a legislative text as a product of translation. It draws on Seidel's model of qualitative data analysis, in which data processing is performed by selecting, identifying, and tabulating. The identified translation techniques and their frequencies include … (2%), compensation (1%), description (2%), discursive creation (5%), generalization (5%), literal translation (10%), modulation (8%), particularization (5%), reduction (5%), completion (4%), and dilation (14%). Meanwhile, the four shift categories occur with the following frequencies: intra-system shifts, 90 (52.02%); unit shifts, 46 (26.59%); structural shifts, 24 (13.88%); and class shifts, 13 (7.51%). The other finding concerns inaccuracy: five phrases were translated inaccurately, resulting in non-equivalent renderings of the source language in the target language.
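
As a quick arithmetic check (a minimal sketch using only the counts reported above), each percentage is the category count divided by the total of 173 identified shifts; the small difference for structural shifts (13.87% here versus the reported 13.88%) is a rounding artefact.

```python
# Recompute the reported shift percentages from the raw counts (total = 173).
shifts = {"Intra-system": 90, "Unit": 46, "Structural": 24, "Class": 13}
total = sum(shifts.values())
for name, count in shifts.items():
    print(f"{name} shifts: {count} ({count / total:.2%})")
# Intra-system shifts: 90 (52.02%)
# Unit shifts: 46 (26.59%)
# Structural shifts: 24 (13.87%)
# Class shifts: 13 (7.51%)
```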


10.5153/sro.1 ◽  
1996 ◽  
Vol 1 (1) ◽  
pp. 80-91 ◽  
Author(s):  
Amanda Coffey ◽  
Beverley Holbrook ◽  
Paul Atkinson

In this paper we address a number of contemporary themes concerning the analysis of qualitative data and the ethnographic representation of social realities. A contrast is drawn. On the one hand, a diversity of representational modes and devices is currently celebrated, in response to various critiques of conventional ethnographic representation. On the other hand, the widespread influence of computer-assisted qualitative data analysis is promoting convergence on a uniform mode of data analysis and representation (often justified with reference to grounded theory). We note the ironic contrast between these two tendencies, the heterodox and the orthodox, in contemporary qualitative research. We go on to suggest that there exist alternatives that reflect both the diversity of representational approaches, and the broader possibilities of contemporary computing. We identify the technical and intellectual possibilities of hypertext software as offering just one such synthesis.


2015 ◽  
pp. 893-908
Author(s):  
Jessica Nina Lester

The purpose of this chapter is to illustrate how Computer-Assisted Qualitative Data Analysis Software (CAQDAS) packages, such as ATLAS.ti or Transana, can be used to support the transcription and data analysis process of large interactional data sets – specifically data analyzed from a discourse analysis perspective. Drawing from a larger ethnographic study, in this chapter the author illustrates how carrying out the transcription and analysis process within a CAQDAS package (in this case, Transana and ATLAS.ti) allows for an increase in transparency within the transcription and data analysis process, while also meeting the particular needs of the discourse analyst. By using one particular case/research study, the author demonstrates how CAQDAS packages might function to support a researcher in generating a more systematic and transparent analytical process, specifically during the early stages of the analysis process. The author gives particular attention to interactional data (i.e., 300 hours of video and audio recordings of therapy sessions) collected in a larger study and demonstrates the potential benefits of working across two CAQDAS packages, specifically Transana and ATLAS.ti, to support both the nuanced transcription process and the larger data analysis process.

