AOIR ETHICS PANEL 2: PLATFORM CHALLENGES

Author(s): Martyna Gliniecka, Joseph Reagle, Nicholas Proferes, Casey Fiesler, Sarah Gilbert, ...

This panel is one of two sessions organized by the AoIR Ethics Working Committee. It collects five papers exploring a broad (but in many ways common) set of ethical dilemmas faced by researchers engaged with specific platforms such as Reddit, Amazon’s Mechanical Turk, and private messaging platforms. These include: a study of people’s online conversations about health matters on Reddit in support of a proposed situated ethics framework for researchers working with publicly available data; an exploration of sourcing practices among Reddit researchers to determine whether their sources could be unmasked and located in Reddit archives; a broader systematic review of over 700 research studies that used Reddit data, assessing the kinds of analysis and methods researchers engage in as well as the ethical considerations that emerge when researching Reddit; a critical examination of the use of Amazon’s Mechanical Turk for academic research; and an investigation into current practices and ethical dilemmas faced when researching closed messaging applications and their users. Taken together, these papers illuminate emerging ethical dilemmas facing researchers investigating novel platforms and user communities, challenges often not fully addressed, if even contemplated, in existing ethical guidelines. These papers are among those under consideration for publication in a special issue of the Journal of Information, Communication and Ethics in Society associated with the AoIR Ethics Working Committee and AoIR2021.

2020, Vol 7 (1), pp. 205316801990118
Author(s): Eric Loepp, Jarrod T. Kelly

Amazon’s Mechanical Turk (MTurk) platform is a popular tool for scholars seeking to recruit a reasonably representative subject pool for academic research at lower cost than contract work through survey research firms. Numerous scholarly inquiries affirm that the MTurk pool is at least as representative as college student samples; however, questions about the validity of MTurk data persist. Amazon classifies all MTurk Workers into two types: (1) “regular” Workers, and (2) more qualified (and more expensive) “master” Workers. In this paper, we evaluate how the choice of Worker type affects research samples, both in their characteristics and in their performance. Our results identify few meaningful differences between master and regular Workers. However, we do find that master Workers are more likely to be female, older, and Republican than regular Workers. Additionally, master Workers have far more experience, having spent twice as much time working on MTurk and having completed over seven times as many assignments. Based on these findings, we recommend that researchers ask for Worker status and the number of assignments completed in order to control for effects related to experience. However, the results imply that budget-conscious scholars will not compromise project integrity by using the wider pool of regular Workers in academic studies.
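The Worker-type distinction above is operationalized at HIT-creation time. Below is a minimal sketch of restricting a HIT to master Workers with the boto3 MTurk client; the HIT parameters and question file are illustrative assumptions, and the Masters qualification type ID should be verified against current AWS documentation before use.

```python
# Minimal sketch: creating a HIT restricted to "master" Workers with boto3.
# Assumes AWS credentials are configured. The qualification type ID below is
# the documented production Masters ID, but verify it before relying on it.
import boto3

mturk = boto3.client("mturk", region_name="us-east-1")  # MTurk lives in us-east-1

MASTERS_QUALIFICATION_ID = "2F1QJWKUDD8XADTFD2Q0G6UTO95ALH"  # production Masters (verify)

hit = mturk.create_hit(
    Title="Short academic survey",                     # illustrative values
    Description="A 10-minute survey for research purposes.",
    Reward="1.50",
    MaxAssignments=100,
    AssignmentDurationInSeconds=1800,
    LifetimeInSeconds=86400,
    Question=open("survey_question.xml").read(),       # hypothetical HTMLQuestion/ExternalQuestion XML
    QualificationRequirements=[
        {
            "QualificationTypeId": MASTERS_QUALIFICATION_ID,
            "Comparator": "Exists",  # drop this entry to open the HIT to regular Workers
        }
    ],
)
print(hit["HIT"]["HITId"])
```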


2017, Vol 30 (1), pp. 111-122
Author(s): Steve Buchheit, Marcus M. Doxey, Troy Pollard, Shane R. Stinson

Multiple social science researchers claim that online data collection, mainly via Amazon's Mechanical Turk (MTurk), has revolutionized the behavioral sciences (Gureckis et al. 2016; Litman, Robinson, and Abberbock 2017). While MTurk-based research has grown exponentially in recent years (Chandler and Shapiro 2016), reasonable concerns have been raised about online research participants' ability to proxy for traditional research participants (Chandler, Mueller, and Paolacci 2014). This paper reviews recent MTurk research and provides further guidance for recruiting samples of MTurk participants from populations of interest to behavioral accounting researchers. First, we provide guidance on the logistics of using MTurk and discuss the potential benefits offered by TurkPrime, a third-party service provider. Second, we discuss ways to overcome challenges related to targeted participant recruiting in an online environment. Finally, we offer suggestions for disclosures that authors may provide about their efforts to attract participants and analyze responses.
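Targeted recruiting of the kind described is commonly implemented with custom qualifications. Below is a minimal sketch using boto3; the qualification name and Worker IDs are hypothetical, and this is one possible approach rather than the paper's own procedure.

```python
# Minimal sketch of targeted recruiting via a custom qualification in boto3:
# prescreen Workers, grant them a qualification, then require it on later HITs.
import boto3

mturk = boto3.client("mturk", region_name="us-east-1")

# 1) Create a qualification marking prescreened Workers (illustrative name).
qual = mturk.create_qualification_type(
    Name="Prescreened: behavioral accounting study",
    Description="Assigned to Workers who passed the screening survey.",
    QualificationTypeStatus="Active",
)
qual_id = qual["QualificationType"]["QualificationTypeId"]

# 2) Grant it to Workers who met the screening criteria.
for worker_id in ["A1EXAMPLEWORKER"]:  # hypothetical IDs from the screener
    mturk.associate_qualification_with_worker(
        QualificationTypeId=qual_id,
        WorkerId=worker_id,
        IntegerValue=1,
        SendNotification=False,
    )

# 3) Follow-up HITs can then require this qualification (Comparator="Exists").
```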


2021, pp. 003435522110142
Author(s): Deniz Aydemir-Döke, James T. Herbert

Microaggressions are daily insults directed at minority individuals, such as people with disabilities (PWD), that communicate messages of exclusion, inferiority, and abnormality. In this study, we developed a new scale, the Ableist Microaggressions Impact Questionnaire (AMIQ), which assesses the ableist microaggression experiences of PWD. Data from 245 PWD were collected using Amazon’s Mechanical Turk (MTurk) platform. An exploratory factor analysis of the 25-item AMIQ revealed a three-factor structure, with internal consistency reliability ranging between .87 and .92. As an economical and psychometrically sound instrument for assessing the impact of disability-related microaggressions, the AMIQ offers promise for rehabilitation counselor research and practice.
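For readers unfamiliar with the analysis pattern, the sketch below reproduces its shape, an exploratory factor analysis with a three-factor solution plus a Cronbach's alpha check, on simulated data using the third-party factor_analyzer package; the item names and data are illustrative, not the AMIQ items.

```python
# Minimal sketch: EFA with a three-factor solution plus Cronbach's alpha.
# Uses simulated Likert-style data; not the AMIQ items or the paper's data.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

rng = np.random.default_rng(0)
items = pd.DataFrame(
    rng.integers(1, 6, size=(245, 25)),                 # 245 respondents, 25 items
    columns=[f"item_{i + 1}" for i in range(25)],       # hypothetical item names
)

fa = FactorAnalyzer(n_factors=3, rotation="oblimin")    # oblique rotation, 3 factors
fa.fit(items)
loadings = pd.DataFrame(fa.loadings_, index=items.columns)

def cronbach_alpha(df: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of items (rows = respondents)."""
    k = df.shape[1]
    item_vars = df.var(axis=0, ddof=1).sum()
    total_var = df.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

print(loadings.round(2))
print(f"alpha (all items): {cronbach_alpha(items):.2f}")
```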


2021, Vol 14 (1)
Author(s): Jon Agley, Yunyu Xiao, Esi E. Thompson, Lilian Golzarri-Arroyo

Objective: This study describes the iterative process of selecting an infographic for use in a large, randomized trial related to trust in science, COVID-19 misinformation, and behavioral intentions for non-pharmaceutical preventive behaviors. Five separate concepts were developed based on underlying subcomponents of ‘trust in science and scientists’ and were turned into infographics by media experts and digital artists. Study participants (n = 100) were recruited from Amazon’s Mechanical Turk and randomized to five different arms. Each arm viewed a different infographic and provided both quantitative (narrative believability scale and trust in science and scientists inventory) and qualitative data to assist the research team in identifying the infographic most likely to be successful in a larger study. Results: Data indicated that all infographics were perceived to be believable, with means ranging from 5.27 to 5.97 on a scale from one to seven. No iatrogenic outcomes were observed for within-group changes in trust in science. Given equivocal believability outcomes, and after examining confidence intervals for data on trust in science and then the qualitative responses, we selected infographic 3, which addressed issues of credibility and consensus by illustrating changing narratives on butter and margarine, as the best candidate for use in the full study.
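The arm-level comparison described (means with confidence intervals on a 1-7 believability scale) follows a standard pattern; a minimal sketch on simulated data is below, with column names and values as illustrative assumptions.

```python
# Minimal sketch: randomize participants to five arms, then report per-arm
# means with 95% confidence intervals. Simulated data, not the study's.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(42)
n = 100
df = pd.DataFrame({
    "arm": rng.permutation(np.repeat(np.arange(1, 6), n // 5)),  # 20 per arm
    "believability": rng.uniform(1, 7, n),                       # 1-7 scale scores
})

for arm, grp in df.groupby("arm"):
    mean = grp["believability"].mean()
    sem = stats.sem(grp["believability"])
    lo, hi = stats.t.interval(0.95, len(grp) - 1, loc=mean, scale=sem)
    print(f"arm {arm}: mean={mean:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```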


2021, pp. 027507402110488
Author(s): Mark Benton

Policing in the United States has a racist history, with negative implications for its legitimacy among African Americans today. Legitimacy is important for policing's effective operations. Community policing may improve policing's legitimacy but is difficult to implement with fidelity and does not address that history. An apology for policing's racist history may work as a legitimizing supplement to community policing. On the other hand, an apology may be interpreted as words without changes in practice. Using a survey vignette experiment on Amazon's Mechanical Turk to sample African Americans, this research tests the legitimizing effect of a supplemental apology for historical police racism during a community policing policy announcement. Statistical findings suggest that supplementing the communication with an apology conferred little to no additional legitimacy on policing among respondents. Qualitative data suggested a rationale: apologies need not signal future equitable behavior or policy implementation, and implementation itself appears crucial for improvements in police legitimacy.


2021
Author(s): David Matthew Sumantry

This thesis investigated accent-based stereotyping and prejudice, a line of research originating with Lambert et al. (1960), by studying perceptions of four accented groups. Participants recruited from Amazon’s Mechanical Turk listened to audio clips in which the speakers had native accents from Toronto, Latin America, Arabic-speaking countries, or India. They then evaluated the speakers on several dimensions based on the Stereotype Content Model (SCM) and the solidarity-status-dynamism (SSD) model, and completed direct measures of prejudice. Speakers were not evaluated differently on measures of prejudice but were stereotyped differently. Participants higher in right-wing ideologies held more negative stereotypes of speakers and demonstrated greater prejudice. Comparing the theoretical models indicated that the more commonly used SCM provides a suitable alternative to the SSD model. Implications for research on accent-based prejudice are discussed.


Aviation, 2021, Vol 25 (3), pp. 220-231
Author(s): Sena Kiliç, Caglar Ucler, Luis Martin-Domingo

Airports operate in a highly competitive and challenging environment; to remain competitive, innovation is imperative. This paper reviews academic research on airport innovation published from 2000 to 2019 and presents key findings. A systematic literature review was conducted of scientific papers indexed in Scopus containing the keywords innovation and airport in the title, abstract, or keywords, consolidating the innovation focus, approach, and degree discussed with respect to innovation areas and territorial focal points. It was found that research on airport innovation is: (i) mainly focused on products/services, (ii) concerned with leveraging ICT (Information and Communication Technology), (iii) implemented ad hoc without a consolidated strategic approach, and (iv) lacking the input of external innovation scholars and specialists.
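The stated search criteria map naturally onto a Scopus query. The sketch below shows one way to express them against Elsevier's Scopus Search API; the API key is a placeholder, and the exact field syntax should be checked against current Scopus documentation.

```python
# Minimal sketch: the described search strategy as a Scopus Search API query.
# The query mirrors the stated criteria (keywords in title, abstract, or
# keywords; published 2000-2019). An institutional API key is assumed.
import requests

QUERY = "TITLE-ABS-KEY(innovation AND airport) AND PUBYEAR > 1999 AND PUBYEAR < 2020"

resp = requests.get(
    "https://api.elsevier.com/content/search/scopus",
    params={"query": QUERY, "count": 25},
    headers={"X-ELS-APIKey": "YOUR_API_KEY"},  # hypothetical placeholder
)
resp.raise_for_status()
for entry in resp.json()["search-results"]["entry"]:
    print(entry.get("dc:title"))
```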

