replication crisis
Recently Published Documents

TOTAL DOCUMENTS: 280 (five years: 198)
H-INDEX: 19 (five years: 8)

2022
Author(s): Bermond Scoggins, Matthew Peter Robertson

The scientific method is predicated on transparency -- yet the pace at which transparent research practices are being adopted by the scientific community is slow. The replication crisis in psychology showed that published findings employing statistical inference are threatened by undetected errors, data manipulation, and data falsification. To mitigate these problems and bolster research credibility, open data and preregistration have increasingly been adopted in the natural and social sciences. While many political science and international relations journals have committed to implementing these reforms, the extent of open science practices is unknown. We bring large-scale text analysis and machine learning classifiers to bear on the question. Using population-level data -- 93,931 articles across the top 160 political science and IR journals between 2010 and 2021 -- we find that approximately 21% of all statistical inference papers have open data, and 5% of all experiments are preregistered. Despite this shortfall, the example of leading journals in the field shows that change is feasible and can be effected quickly.
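The abstract above reports using machine-learning text classifiers to detect open-science practices in article text. As a hedged illustration only (not the authors' actual pipeline; the training snippets and labels below are invented), a minimal bag-of-words Naive Bayes classifier of that general flavour could look like:

```python
# Illustrative sketch: a tiny bag-of-words Naive Bayes classifier that
# flags sentences resembling "open data" statements. Training examples
# are invented for illustration, not drawn from the study's corpus.
import math
from collections import Counter

def tokenize(text):
    return [w.strip(".,;:()").lower() for w in text.split()]

class NaiveBayes:
    def __init__(self):
        self.word_counts = {}        # label -> Counter of word frequencies
        self.doc_counts = Counter()  # label -> number of training docs
        self.vocab = set()

    def fit(self, docs, labels):
        for doc, label in zip(docs, labels):
            self.doc_counts[label] += 1
            counts = self.word_counts.setdefault(label, Counter())
            for w in tokenize(doc):
                counts[w] += 1
                self.vocab.add(w)

    def predict(self, doc):
        total_docs = sum(self.doc_counts.values())
        best, best_lp = None, -math.inf
        for label, counts in self.word_counts.items():
            lp = math.log(self.doc_counts[label] / total_docs)
            n = sum(counts.values())
            for w in tokenize(doc):
                # Laplace smoothing for unseen words
                lp += math.log((counts[w] + 1) / (n + len(self.vocab)))
            if lp > best_lp:
                best, best_lp = label, lp
        return best

train_docs = [
    "replication data are available at the dataverse repository",
    "all data and code are openly available online",
    "we review the historical development of the field",
    "this essay offers a theoretical critique of realism",
]
train_labels = ["open_data", "open_data", "no_data", "no_data"]

clf = NaiveBayes()
clf.fit(train_docs, train_labels)
print(clf.predict("data and replication code are available at the repository"))
# prints "open_data"
```

At population scale, a classifier like this would of course need a large hand-labeled training set and evaluation against held-out articles; the sketch only shows the basic mechanics.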


2022
Author(s): Niklas Schürmann

Neuroscience is facing a replication crisis. Little effort is invested in replication projects, and low power in many studies indicates a potentially poor state of research. To assess the replicability of EEG research, the #EEGManyLabs project aims to reproduce the most influential original EEG studies. A spin-off of the main project will investigate the relationship between frontal alpha asymmetries and psychopathological symptoms, whose predictive qualities have lately been considered controversial. To ensure that preprocessing of EEG data can be conducted automatically (via Automagic), we tested 47 healthy participants in an EEG resting-state paradigm and collected psychopathological measures. We analyzed the reliability and quality of manual and automated preprocessing and performed multiple regressions to investigate the association of frontal alpha asymmetries with depression, worry, trait anxiety, and COVID-19-related worry. We hypothesized comparably good interrater reliability of preprocessing methods and higher data quality in automatically preprocessed data. We expected associations of leftward frontal alpha asymmetries with higher depression and anxiety scores, and significant associations of rightward frontal alpha asymmetries with higher worrying and COVID-19-related worrying. Interrater reliability of preprocessing methods was mostly good, and automatically preprocessed data achieved higher quality scores than manually preprocessed data. We uncovered an association between relative rightward lateralization of alpha power at one electrode pair and depressive symptoms. No further associations of interest emerged. We conclude that Automagic is an appropriate tool for large-scale preprocessing. Findings regarding associations of frontal alpha asymmetries with psychopathology likely stem from sample limitations and shrinking effect sizes.
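The frontal alpha asymmetry index discussed above is conventionally computed as the difference of log-transformed alpha-band power at a homologous right/left frontal electrode pair (e.g. F4/F3). A minimal sketch, with invented power values purely for illustration:

```python
# Sketch of the standard frontal alpha asymmetry (FAA) index:
# ln(alpha power, right electrode) - ln(alpha power, left electrode).
# Positive values indicate more alpha on the right, i.e. relatively
# greater cortical activity on the left (alpha is inversely related
# to activity). Power values below are invented for illustration.
import math

def alpha_asymmetry(power_right, power_left):
    """Return ln(right) - ln(left) for a homologous electrode pair."""
    return math.log(power_right) - math.log(power_left)

# Hypothetical alpha-band power (e.g. in microvolts squared) at F4 and F3
faa = alpha_asymmetry(power_right=4.2, power_left=3.1)
print(round(faa, 3))
# prints 0.304
```

In practice the band power would come from spectral analysis of the resting-state EEG (and the study relates such indices to symptom scores via multiple regression); the sketch only shows the asymmetry computation itself.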


Author(s): Wolfgang I. Schöllhorn, Nikolas Rizzi, Agnė Slapšinskaitė-Dackevičienė, Nuno Leite

This critical review considers the epistemological and historical background of the theoretical construct of motor learning for a more differentiated understanding. More than simply reflecting critically on the models that are used to solve problems—whether they are applied in therapy, physical education, or training practice—this review seeks to respond constructively to the recent discussion caused by the replication crisis in life sciences. To this end, an in-depth review of contemporary motor learning approaches is provided, with a pragmatism-oriented clarification of the researcher’s intentions on fundamentals (what?), subjects (for whom?), time intervals (when?), and purpose (for what?). The complexity in which the processes of movement acquisition, learning, and refinement take place removes their predictable and linear character and therefore, from an applied point of view, invites a great deal of caution when trying to make generalization claims. Particularly when we attempt to understand and study these phenomena in unpredictable and dynamic contexts, it is recommended that scientists and practitioners seek to better understand the central role that the individual and their situatedness plays in the system. In this way, we will be closer to making a meaningful and authentic contribution to the advancement of knowledge, and not merely for the sake of renaming inventions.


2021
Author(s): Jennifer L Beaudry, Matt N Williams, Michael Carl Philipp, Emily Jane Kothe

Background: Understanding students’ naive conceptions about how science works and the norms that guide scientific best practice is important so that teachers can adapt their teaching to students’ existing understandings. Objective: To describe what incoming undergraduate students of psychology believe about reproducibility and open science practices in psychology. Method: International online survey with participants who were about to start their first course in psychology at a university (N = 239). Results: When asked about how research should be done, most students endorsed most (but not all) of ten open science practices. When asked to estimate the proportion of published psychological studies that apply each of those ten open science practices, participants’ estimates tended to average near 50%. Only 18% of participants had heard of the term “replication crisis.” Conclusion: Despite considerable media attention on the replication crisis, few incoming psychology students are familiar with the term. Incoming students nevertheless appear to be sympathetic toward most open science practices, although they may overestimate the prevalence of these practices in psychology. Teaching Implications: Teaching materials aimed at incoming psychology students should not assume pre-existing knowledge about open science or replicability.


2021
Vol 5 (4), pp. 76
Author(s): Satoshi Yazawa, Kikue Sakaguchi, Kazuo Hiraki

Advances in web technology and the widespread use of smartphones and PCs have shown that it is possible to optimize various services using personal data, such as location information and search history. Privacy and legal considerations, however, lead to situations where such data are monopolized by individual services and companies; meanwhile, a replication crisis has been pointed out for laboratory-experiment data, a problem that is challenging to solve given the difficulty of distributing data. To ensure distribution of experimental data while guaranteeing security, an online experiment platform can be a game changer. Current online experiment platforms have not yet addressed data distribution, and it is currently difficult to use the data obtained from one experiment for other purposes. In addition, various devices such as activity meters and consumer-grade electroencephalography headsets are emerging; if a platform is to collect data from such devices and from online tasks, it will hold a large amount of sensitive data, making it even more important to ensure security. We propose GO-E-MON, a service that combines an online experimental environment with a distributed personal data store (PDS), and explain how GO-E-MON realizes the reuse of experimental data with the subject’s consent by connecting to a distributed PDS. We report the results of an experiment in a group-work lecture for university students to verify whether this method works. By building an online experiment environment integrated with a distributed PDS, we demonstrate the possibility of integrating multiple experiments performed by different experimenters -- with the consent of individual subjects -- while solving the security issues.


2021
Author(s): Frank Hillary, Sarah Rajtmajer

Abstract: This critical review discusses evidence for the replication crisis in the clinical neuroscience literature, with a focus on the size of the literature and on how scientific hypotheses are framed and tested. We aim to reinvigorate discussions born from the philosophy of science regarding falsification (see Popper, 1959; 1962), but with the hope of bringing pragmatic application that might give real leverage to attempts to address scientific reproducibility. The surging publication rate has not translated into unparalleled scientific progress, so the current “science-by-volume” approach requires a new perspective for determining scientific ground truths. We describe an example from the network neurosciences in the study of traumatic brain injury, where there has been little effort to refute two prominent hypotheses, leaving a literature without resolution. Based upon this example, we discuss how building strong hypotheses and then designing efforts to falsify them can bring greater precision to the clinical neurosciences. With falsification as the goal, we can harness big data and computational power to identify the fitness of each theory to advance the neurosciences.


2021
Author(s): Anne M. Scheel

Psychology’s replication crisis is typically conceptualised as the insight that the published literature contains a worrying amount of unreplicable, false-positive findings. At the same time, meta-scientific attempts to assess the crisis in more detail have reported substantial difficulty in identifying unambiguous definitions of the scientific claims in published articles and in determining how they are connected to the presented evidence. I argue that most claims in the literature are so critically underspecified that attempts to empirically evaluate them are doomed to failure -- they are not even wrong. Meta-scientists should beware of the flawed assumption that the psychological literature is a collection of well-defined claims. To move beyond the crisis, psychologists must reconsider and rebuild the conceptual basis of their hypotheses before trying to test them.


2021
Author(s): Riccardo Bettin

In the last decade, science has fallen into a replication crisis: studies, when replicated, often do not give the same results as the originals. The difficulty in replicating studies can be due to several reasons, some of which concern the scientific world in general -- such as the current publication system, which encourages incorrect behaviours and questionable research practices by scientists -- and some of which vary between scientific fields. Some fields feel this crisis more than others, and psychology is one of them. Low statistical power and the misuse of statistics in psychology have been reported for a long time; the first to criticize psychologists’ use of power was Cohen, in 1962. This crisis can lead to a loss of trust in psychology and in science in general, so it is important to find solutions to it. Several possible solutions have been proposed. In this work we focus on design analysis, a relatively new approach that can help in the process of getting out of the replication crisis. This analysis consists of calculating power, together with new types of inferential errors, to help researchers better understand the consequences of low power, small sample sizes, and studies started without appropriate planning. Design analysis can be done prospectively and retrospectively, which is different from post-hoc power calculations. The aims of this work are mainly to extend design analysis to the case of differences between independent proportions and to provide an R implementation that can be used by researchers.
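The design analysis described above augments power with new types of inferential errors, commonly the probability of a significant estimate having the wrong sign and the factor by which significant estimates exaggerate the true effect. A hedged simulation sketch for the difference between two independent proportions (the abstract's target case) follows; the true proportions and sample size are illustrative assumptions, not values from the thesis, and the thesis itself provides an R implementation rather than this Python one:

```python
# Sketch of retrospective design analysis for a difference between two
# independent proportions: simulate many studies under assumed true
# proportions, then, among significant results, estimate power, the
# sign-error rate, and the average exaggeration of the effect size.
# All numeric inputs below are illustrative assumptions.
import math
import random

def design_analysis(p1, p2, n, sims=20000, seed=1):
    """Simulate `sims` two-group studies (n per group) with true
    proportions p1, p2, testing with a two-sided z-test at 5%."""
    random.seed(seed)
    true_diff = p1 - p2
    z_crit = 1.959964  # two-sided 5% critical value
    sig_diffs = []     # observed differences that reached significance
    for _ in range(sims):
        x1 = sum(random.random() < p1 for _ in range(n))
        x2 = sum(random.random() < p2 for _ in range(n))
        ph1, ph2 = x1 / n, x2 / n
        pooled = (x1 + x2) / (2 * n)
        se = math.sqrt(2 * pooled * (1 - pooled) / n)
        if se > 0 and abs(ph1 - ph2) / se > z_crit:
            sig_diffs.append(ph1 - ph2)
    power = len(sig_diffs) / sims
    # sign error: significant estimate with the wrong sign
    sign_err = sum(d * true_diff < 0 for d in sig_diffs) / len(sig_diffs)
    # exaggeration: mean |significant estimate| over the true effect
    exagg = sum(abs(d) for d in sig_diffs) / len(sig_diffs) / abs(true_diff)
    return power, sign_err, exagg

power, sign_err, exagg = design_analysis(p1=0.55, p2=0.50, n=100)
print(f"power={power:.2f}  sign error={sign_err:.3f}  exaggeration={exagg:.2f}")
```

With a small true difference and a modest sample like this, the simulation illustrates the abstract's point: power is low, and the significant estimates that do appear substantially exaggerate the true effect.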

