Meta-analysis of the impacts of water management on aquatic communities

2008 ◽  
Vol 65 (3) ◽  
pp. 437-447 ◽  
Author(s):  
Tim J Haxton ◽  
C Scott Findlay

Systematic meta-analyses were conducted on the ecological impacts of water management, including effects of (i) dewatering on macroinvertebrates, (ii) a hypolimnetic release on downstream aquatic fish and macroinvertebrate communities, and (iii) flow modification on fluvial specialists and habitat generalists. Our meta-analysis indicates, in general, that (i) macroinvertebrate abundance is lower in zones or areas that have been dewatered as a result of water fluctuations or low flows (overall effect size, –1.64; 95% confidence intervals (CIs), –2.51, –0.77), (ii) hypolimnetic draws are associated with reduced abundance of aquatic (fish and macroinvertebrate) communities (overall effect size, –0.84; 95% CIs, –1.38, –0.33) and macroinvertebrates (overall effect size, –0.73; 95% CIs, –1.24, –0.22) downstream of a dam, and (iii) altered flows are associated with reduced abundance of fluvial specialists (overall effect size, –0.42; 95% CIs, –0.81, –0.02) but not habitat generalists (overall effect size, –0.14; 95% CIs, –0.61, 0.32). Publication bias is evident in several of the meta-analyses; however, multiple experiments from a single study may be contributing to this bias. Fail-safe Ns suggest that many (>100) studies showing positive or no effects of water management on the selected endpoints would be required to qualitatively change the results of the meta-analysis, which in turn suggests that the conclusions are reasonably robust.
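The fail-safe N cited above is typically computed with Rosenthal's formula: the number of additional null-result studies that would be needed to raise the combined one-tailed p-value above the significance threshold. The sketch below is a minimal illustration of that formula under the usual one-tailed alpha of .05, not the authors' exact procedure, and the z-scores are hypothetical.

```python
import math

def failsafe_n(z_scores, z_alpha=1.645):
    """Rosenthal's fail-safe N: number of unpublished null-result studies
    needed to pull the combined one-tailed p-value above alpha (.05 here)."""
    z_sum = sum(z_scores)
    k = len(z_scores)
    return (z_sum ** 2) / (z_alpha ** 2) - k

# Hypothetical z-scores from six studies
print(round(failsafe_n([2.1, 2.8, 1.9, 3.2, 2.5, 2.0]), 1))  # about 71.7
```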

Author(s):  
John C. Norcross ◽  
Thomas P. Hogan ◽  
Gerald P. Koocher ◽  
Lauren A. Maggio

Assessing and interpreting research reports involves examination of individual studies as well as summaries of many studies. Summaries may be conveyed in narrative reviews or, more typically, in meta-analyses. This chapter reviews how researchers conduct a meta-analysis and report the results, especially by means of forest plots, which incorporate measures of effect size and their confidence intervals. A meta-analysis may also use moderator analyses or meta-regressions to identify important influences on the results. Critical appraisal of a study requires careful attention to the details of the sample used, the independent variable (treatment), dependent variable (outcome measure), the comparison groups, and the relation between the stated conclusions and the actual results. The CONSORT flow diagram provides a context for interpreting the sample and comparison groups. Finally, users must be alert to possible artifacts of publication bias.
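The summary estimate plotted as the diamond at the bottom of a forest plot is usually an inverse-variance weighted combination of the individual effect sizes, with a confidence interval built from the pooled standard error. The sketch below is a minimal fixed-effect version with hypothetical standardized mean differences and variances; real analyses often add random-effects weights and heterogeneity statistics.

```python
import math

def fixed_effect_pool(effects, variances):
    """Inverse-variance (fixed-effect) pooled estimate and 95% CI:
    the numbers behind the summary diamond of a forest plot."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical standardized mean differences and their variances
print(fixed_effect_pool([0.30, 0.45, 0.10, 0.52], [0.02, 0.05, 0.03, 0.04]))
```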


2021 ◽  
pp. 146531252110272
Author(s):  
Despina Koletsi ◽  
Anna Iliadi ◽  
Theodore Eliades

Objective: To evaluate all available evidence on the prediction of rotational tooth movements with aligners. Data sources: Seven databases of published and unpublished literature were searched up to 4 August 2020 for eligible studies. Data selection: Studies were deemed eligible if they included evaluation of rotational tooth movement with any type of aligner, through the comparison of software-based and actually achieved data after patient treatment. Data extraction and data synthesis: Data extraction was done independently and in duplicate, and risk of bias assessment was performed with the QUADAS-2 tool. Random-effects meta-analyses with effect sizes and their 95% confidence intervals (CIs) were performed, and the quality of the evidence was assessed through GRADE. Results: Seven articles were included in the qualitative synthesis, of which three contributed to meta-analyses. Overall results revealed a non-accurate prediction of the outcome for the software-based data, irrespective of the use of attachments or interproximal enamel reduction (IPR). Maxillary canines demonstrated the lowest percentage accuracy for rotational tooth movement (three studies: effect size = 47.9%; 95% CI = 27.2–69.5; P < 0.001), although high levels of heterogeneity were identified (I2: 86.9%; P < 0.001). In contrast, mandibular incisors presented the highest percentage accuracy for predicted rotational movement (two studies: effect size = 70.7%; 95% CI = 58.9–82.5; P < 0.001; I2: 0.0%; P = 0.48). Risk of bias was unclear to low overall, while quality of the evidence ranged from low to moderate. Conclusion: Allowing for all identified caveats, prediction of rotational tooth movements with aligner treatment does not appear accurate, especially for canines. Careful selection of patients and malocclusions for aligner treatment remains challenging.
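The random-effects pooling and the I² heterogeneity statistic reported above can be obtained, in one common approach, with the DerSimonian-Laird estimator of the between-study variance. The sketch below is illustrative only; the per-study accuracy percentages and sampling variances are hypothetical, not data from the included trials.

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects pooling with the DerSimonian-Laird tau^2 estimate
    and the I^2 heterogeneity statistic."""
    w = [1.0 / v for v in variances]
    y_fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), tau2, i2

# Hypothetical per-study accuracy percentages and their variances
print(dersimonian_laird([48.0, 62.0, 41.0], [25.0, 30.0, 20.0]))
```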


Circulation ◽  
2007 ◽  
Vol 116 (suppl_16) ◽  
Author(s):  
George A Diamond ◽  
Sanjay Kaul

Background: A highly publicized meta-analysis of 42 clinical trials comprising 27,844 diabetics ignited a firestorm of controversy by charging that treatment with rosiglitazone was associated with a “…worrisome…” 43% greater risk of myocardial infarction (p=0.03) and a 64% greater risk of cardiovascular death (p=0.06). Objective: The investigators excluded 4 trials from the infarction analysis and 19 trials from the mortality analysis in which no events were observed. We sought to determine if these exclusions biased the results. Methods: We compared the index study to a Bayesian meta-analysis of the entire 42 trials (using odds ratio as the measure of effect size) and to fixed-effects and random-effects analyses with and without a continuity correction that adjusts for values of zero. Results: The odds ratios and confidence intervals for the analyses are summarized in the Table. Odds ratios for infarction ranged from 1.43 to 1.22 and for death from 1.64 to 1.13. Corrected models resulted in substantially smaller odds ratios and narrower confidence intervals than did uncorrected models. Although corrected risks remain elevated, none are statistically significant (p<0.05). Conclusions: Given the fragility of the effect sizes and confidence intervals, the charge that rosiglitazone increases the risk of adverse events is not supported by these additional analyses. The exaggerated values observed in the index study are likely the result of excluding the zero-event trials from analysis. Continuity adjustments mitigate this error and provide more consistent and reliable assessments of true effect size. Transparent sensitivity analyses should therefore be performed over a realistic range of the operative assumptions to verify the stability of such assessments, especially when outcome events are rare. Given the relatively wide confidence intervals, additional data will be required to adjudicate these inconclusive results.
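A continuity correction of the kind compared above usually amounts to adding a small constant (commonly 0.5) to each cell of a 2x2 table that contains a zero, so that zero-event trials still yield a finite odds ratio and contribute to the pooled estimate. The sketch below is a generic illustration with hypothetical counts, not the authors' analysis.

```python
import math

def log_odds_ratio(events_t, n_t, events_c, n_c, correction=0.5):
    """Log odds ratio and its variance from a 2x2 table, with a continuity
    correction applied when any cell is zero so the trial is not dropped."""
    a, b = events_t, n_t - events_t          # treatment: events, non-events
    c, d = events_c, n_c - events_c          # control: events, non-events
    if 0 in (a, b, c, d):
        a, b, c, d = (x + correction for x in (a, b, c, d))
    log_or = math.log((a * d) / (b * c))
    var = 1 / a + 1 / b + 1 / c + 1 / d
    return log_or, var

# Hypothetical zero-event trial: 0/300 events on treatment, 2/280 on control
print(log_odds_ratio(0, 300, 2, 280))
```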


2013 ◽  
Vol 2013 ◽  
pp. 1-9 ◽  
Author(s):  
Liansheng Larry Tang ◽  
Michael Caudy ◽  
Faye Taxman

Multiple meta-analyses may use similar search criteria and focus on the same topic of interest, but they may yield different or sometimes discordant results. The lack of statistical methods for synthesizing these findings makes it challenging to properly interpret the results from multiple meta-analyses, especially when their results are conflicting. In this paper, we first introduce a method to synthesize the meta-analytic results when multiple meta-analyses use the same type of summary effect estimates. When meta-analyses use different types of effect sizes, the meta-analysis results cannot be directly combined. We propose a two-step frequentist procedure to first convert the effect size estimates to the same metric and then summarize them with a weighted mean estimate. Our proposed method offers several advantages over existing methods by Hemming et al. (2012). First, different types of summary effect sizes are considered. Second, our method provides the same overall effect size as conducting a meta-analysis on all individual studies from multiple meta-analyses. We illustrate the application of the proposed methods in two examples and discuss their implications for the field of meta-analysis.
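As a concrete illustration of the two-step idea, the sketch below converts a log odds ratio to the standardized mean difference metric using the common logistic-distribution approximation (d = ln OR x sqrt(3)/pi) and then combines the converted estimates with an inverse-variance weighted mean. The numbers are hypothetical, and this is a sketch of the general approach rather than the authors' implementation.

```python
import math

def logor_to_d(log_or, var_log_or):
    """Convert a log odds ratio (and its variance) to Cohen's d
    via the logistic-distribution approximation."""
    d = log_or * math.sqrt(3) / math.pi
    var_d = var_log_or * 3 / math.pi ** 2
    return d, var_d

def weighted_mean(effects, variances):
    """Inverse-variance weighted mean of estimates already on one metric."""
    w = [1.0 / v for v in variances]
    return sum(wi * e for wi, e in zip(w, effects)) / sum(w)

# Hypothetical: one meta-analysis reports d, another reports a log odds ratio
d1, v1 = 0.35, 0.01
d2, v2 = logor_to_d(0.62, 0.04)           # step 1: convert to the d metric
print(weighted_mean([d1, d2], [v1, v2]))  # step 2: combine with a weighted mean
```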


2021 ◽  
Author(s):  
Megha Joshi ◽  
James E Pustejovsky ◽  
S. Natasha Beretvas

The most common and well-known meta-regression models work under the assumption that there is only one effect size estimate per study and that the estimates are independent. However, meta-analytic reviews of social science research often include multiple effect size estimates per primary study, leading to dependence in the estimates. Some meta-analyses also include multiple studies conducted by the same lab or investigator, creating another potential source of dependence. An increasingly popular method to handle dependence is robust variance estimation (RVE), but this method can result in inflated Type I error rates when the number of studies is small. Small-sample correction methods for RVE have been shown to control Type I error rates adequately but may be overly conservative, especially for tests of multiple-contrast hypotheses. We evaluated an alternative method for handling dependence, cluster wild bootstrapping, which has been examined in the econometrics literature but not in the context of meta-analysis. Results from two simulation studies indicate that cluster wild bootstrapping maintains adequate Type I error rates and provides more power than extant small-sample correction methods, particularly for multiple-contrast hypothesis tests. We recommend using cluster wild bootstrapping to conduct hypothesis tests for meta-analyses with a small number of studies. We have also created an R package that implements such tests.
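In outline, a cluster wild bootstrap test fits the null model, flips the cluster-level residuals with Rademacher weights drawn once per study, refits the model on each resampled outcome, and compares the observed statistic with the resulting bootstrap distribution. The sketch below is a simplified illustration using an ordinary least-squares meta-regression and the raw (non-studentized) slope; it is not the authors' R package, and the inputs are placeholders.

```python
import numpy as np

def cluster_wild_bootstrap_p(y, x, cluster, n_boot=1999, seed=1):
    """Cluster wild bootstrap p-value for H0: slope = 0 in a simple
    effect-size-on-moderator meta-regression with estimates nested in studies.
    Rademacher weights are drawn per study (cluster), not per estimate."""
    y = np.asarray(y, dtype=float)
    x = np.asarray(x, dtype=float)
    cluster = np.asarray(cluster)
    rng = np.random.default_rng(seed)

    X = np.column_stack([np.ones_like(x), x])
    beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
    t_obs = abs(beta_hat[1])

    # Residuals from the null (intercept-only) model: a restricted wild bootstrap
    y_null = np.full_like(y, y.mean())
    resid = y - y_null
    labels = np.unique(cluster)

    exceed = 0
    for _ in range(n_boot):
        eta = rng.choice([-1.0, 1.0], size=len(labels))   # one sign flip per study
        w = eta[np.searchsorted(labels, cluster)]
        y_star = y_null + w * resid
        b_star = np.linalg.lstsq(X, y_star, rcond=None)[0]
        exceed += abs(b_star[1]) >= t_obs
    return (1 + exceed) / (1 + n_boot)
```

Drawing one weight per study rather than per effect size is what preserves the within-study dependence under resampling; studentizing the statistic with a cluster-robust standard error would bring the sketch closer to the procedure evaluated in the paper.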


BMJ Open ◽  
2019 ◽  
Vol 9 (6) ◽  
pp. e024886 ◽  
Author(s):  
Klaus Munkholm ◽  
Asger Sand Paludan-Müller ◽  
Kim Boesen

Objectives: To investigate whether the conclusion of a recent systematic review and network meta-analysis (Cipriani et al) that antidepressants are more efficacious than placebo for adult depression was supported by the evidence. Design: Reanalysis of a systematic review, with meta-analyses. Data sources: 522 trials (116 477 participants) as reported in the systematic review by Cipriani et al and clinical study reports for 19 of these trials. Analysis: We used the Cochrane Handbook’s risk of bias tool and the Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach to evaluate the risk of bias and the certainty of evidence, respectively. The impact of several study characteristics and publication status was estimated using pairwise subgroup meta-analyses. Results: Several methodological limitations in the evidence base of antidepressants were either unrecognised or underestimated in the systematic review by Cipriani et al. The effect size for antidepressants versus placebo on investigator-rated depression symptom scales was higher in trials with a ‘placebo run-in’ study design compared with trials without a placebo run-in design (p=0.05). The effect size of antidepressants was higher in published trials compared with unpublished trials (p<0.0001). The outcome data reported by Cipriani et al differed from the clinical study reports in 12 (63%) of 19 trials. The certainty of the evidence for the placebo-controlled comparisons should be very low according to GRADE due to a high risk of bias, indirectness of the evidence and publication bias. The mean difference between antidepressants and placebo on the 17-item Hamilton depression rating scale (range 0–52 points) was 1.97 points (95% CI 1.74 to 2.21). Conclusions: The evidence does not support definitive conclusions regarding the benefits of antidepressants for depression in adults. It is unclear whether antidepressants are more efficacious than placebo.


Author(s):  
Giuseppina Spano ◽  
Marina D’Este ◽  
Vincenzo Giannico ◽  
Giuseppe Carrus ◽  
Mario Elia ◽  
...  

Recent literature has revealed the positive effect of gardening on human health; however, empirical evidence on the effects of gardening-based programs on psychosocial well-being is scant. This meta-analysis aims to examine the scientific literature on the effect of community gardening or horticultural interventions on a variety of outcomes related to psychosocial well-being, such as social cohesion, networking, social support, and trust. From 383 bibliographic records retrieved (from 1975 to 2019), seven studies with a total of 22 effect sizes were selected on the basis of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Meta-analytic findings on 11 comparisons indicate a positive and moderate effect of horticultural or gardening interventions on psychosocial well-being. Moderation analysis shows a greater effect size in individualistic than in collectivistic cultures. A greater effect size was also observed in studies involving community gardening compared to horticultural intervention. Nevertheless, an effect of publication bias and study heterogeneity was detected. Despite the presence of a large number of qualitative studies on the effect of horticulture/gardening on psychosocial well-being, quantitative studies are lacking. There is a strong need for further high-quality studies on this topic, given that gardening has promising applied implications for human health, the community, and sustainable city management.


1990 ◽  
Vol 24 (3) ◽  
pp. 405-415 ◽  
Author(s):  
Nathaniel McConaghy

Meta-analysis replaced statistical significance with effect size in the hope of resolving controversy concerning evaluation of treatment effects. Statistical significance measured the reliability of the effect of treatment, not its efficacy, and was strongly influenced by the number of subjects investigated. Effect size as originally assessed eliminated this influence, but by standardizing the size of the treatment effect it could distort it. Meta-analyses that combine the results of studies employing different subject types, outcome measures, treatment aims, no-treatment rather than placebo controls, or therapists with varying experience can be misleading. To ensure discussion of these variables, meta-analyses should be used as an aid to, rather than a substitute for, literature review. Since meta-analyses produce contradictory findings, it seems unwise to rely on the conclusions of an individual analysis. Their consistent finding that placebo treatments obtain markedly higher effect sizes than no treatment will, it is hoped, render the use of untreated control groups obsolete.
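The standardization referred to above is typically a standardized mean difference such as Cohen's d, which rescales the raw treatment effect by the pooled standard deviation and therefore, unlike a p-value, does not grow with the number of subjects. A minimal sketch with illustrative numbers:

```python
import math

def cohens_d(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Standardized mean difference: the raw treatment effect divided by the
    pooled standard deviation, so it is not inflated by sample size."""
    sd_pooled = math.sqrt(((n_t - 1) * sd_t ** 2 + (n_c - 1) * sd_c ** 2)
                          / (n_t + n_c - 2))
    return (mean_t - mean_c) / sd_pooled

# A 2-point improvement on a scale with SD 10 gives d = 0.2 whether each arm
# has 20 or 2000 participants, even though only the larger trial would reach
# statistical significance.
print(cohens_d(12, 10, 10, 10, 20, 20), cohens_d(12, 10, 10, 10, 2000, 2000))
```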


2008 ◽  
Vol 13 (1) ◽  
pp. 31-48 ◽  
Author(s):  
Julio Sánchez-Meca ◽  
Fulgencio Marín-Martínez

F1000Research ◽  
2018 ◽  
Vol 7 ◽  
pp. 407 ◽  
Author(s):  
Michael Duggan ◽  
Patrizio Tressoldi

Background: This is an update of the meta-analysis by Mossbridge et al on physiological anticipation preceding seemingly unpredictable stimuli, which reported an overall effect size of 0.21 (95% confidence interval: 0.13-0.29). Methods: Nineteen new peer-reviewed and non-peer-reviewed studies completed from January 2008 to June 2018 were retrieved, describing a total of 27 experiments and 36 associated effect sizes. Results: The overall weighted effect size, estimated with a frequentist multilevel random model, was 0.28 (95% confidence interval: 0.18-0.38); the overall weighted effect size, estimated with a multilevel Bayesian model, was 0.28 (95% credible interval: 0.18-0.38). The weighted mean estimate of the effect size of peer-reviewed studies was higher than that of non-peer-reviewed studies, but with overlapping confidence intervals: peer-reviewed: 0.36 (95% CI: 0.26-0.47); non-peer-reviewed: 0.22 (95% CI: 0.05-0.39). Similarly, the weighted mean estimate of the effect size of preregistered studies was higher than that of non-preregistered studies: preregistered: 0.31 (95% CI: 0.18-0.45); non-preregistered: 0.24 (95% CI: 0.08-0.41). Statistical estimation of publication bias using the Copas selection model suggests that the main findings are not contaminated by publication bias. Conclusions: In summary, with this update, the main findings reported in the meta-analysis by Mossbridge et al are confirmed.

