Reporting Bayesian Results

2020 ◽  
pp. 0193841X2097761
Author(s):  
David Rindskopf

Because of the different philosophy of Bayesian statistics, where parameters are random variables and data are considered fixed, the analysis and presentation of results differ from those of frequentist statistics. Most importantly, the probabilities that a parameter lies in certain regions of the parameter space are crucial quantities in Bayesian statistics that are not calculable (or considered important) in the frequentist approach that underlies much of traditional statistics. In this article, I discuss the implications of these differences for the presentation of the results of Bayesian analyses. In doing so, I present more detailed guidelines than are usually provided and explain the rationale for my suggestions.
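The region probabilities mentioned above are straightforward to report once posterior draws are available; a minimal sketch, assuming hypothetical posterior samples rather than anything from the article:

```python
import numpy as np

# Hypothetical posterior draws for a treatment effect, e.g. from an MCMC sampler.
rng = np.random.default_rng(1)
posterior = rng.normal(loc=0.3, scale=0.15, size=20_000)  # stand-in for real draws

# Quantities commonly reported in a Bayesian write-up:
p_positive = np.mean(posterior > 0)        # P(effect > 0 | data)
p_meaningful = np.mean(posterior > 0.2)    # P(effect exceeds a practical threshold | data)
ci_low, ci_high = np.quantile(posterior, [0.025, 0.975])  # 95% credible interval

print(f"P(effect > 0)   = {p_positive:.3f}")
print(f"P(effect > 0.2) = {p_meaningful:.3f}")
print(f"95% credible interval: ({ci_low:.2f}, {ci_high:.2f})")
```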

Author(s):  
Janet L. Peacock ◽  
Philip J. Peacock

Chapter contents: Bayesian statistics; how Bayesian methods work; prior distributions; likelihood and posterior distributions; summarizing and presenting results; using Bayesian analyses in medicine; software for Bayesian statistics; reading Bayesian analyses in papers; Bayesian methods: a summary. In this chapter we describe the Bayesian approach to statistical analysis in contrast to the frequentist approach. We describe how Bayesian methods work, including a description of prior and posterior distributions. We outline the role and choice of prior distributions and how they are combined with the data collected to provide an updated estimate of the unknown quantity being studied. We include examples of the use of Bayesian methods in medicine, and discuss the pros and cons of the Bayesian approach compared with the frequentist approach. Finally, we give guidance on how to read and interpret Bayesian analyses in the medical literature.
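As a concrete illustration of the prior-plus-data updating the chapter describes, here is a small grid-approximation sketch with hypothetical data (a treatment-response proportion); it is not code from the chapter:

```python
import numpy as np
from scipy import stats

# Prior-to-posterior updating on a grid: unknown quantity is the proportion of
# patients responding to a treatment (hypothetical example).
theta = np.linspace(0.001, 0.999, 999)        # grid of possible proportions
prior = stats.beta.pdf(theta, 2, 2)           # gently informative prior
likelihood = stats.binom.pmf(14, 20, theta)   # data: 14 responders out of 20 patients

dtheta = theta[1] - theta[0]
posterior = prior * likelihood
posterior /= posterior.sum() * dtheta         # normalize to a density on the grid

post_mean = (theta * posterior).sum() * dtheta
cdf = np.cumsum(posterior) * dtheta
ci = (theta[np.searchsorted(cdf, 0.025)], theta[np.searchsorted(cdf, 0.975)])

print(f"posterior mean for the response proportion: {post_mean:.3f}")
print(f"95% credible interval: ({ci[0]:.2f}, {ci[1]:.2f})")
```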


2011 ◽  
Vol 48 (02) ◽  
pp. 527-546 ◽  
Author(s):  
Patrizia Berti ◽  
Irene Crimaldi ◽  
Luca Pratelli ◽  
Pietro Rigo

Let $(X_n)$ be a sequence of integrable real random variables, adapted to a filtration $(\mathcal{G}_n)$. Define $C_n = \sqrt{n}\bigl(\frac{1}{n}\sum_{k=1}^{n} X_k - E(X_{n+1} \mid \mathcal{G}_n)\bigr)$ and $D_n = \sqrt{n}\bigl(E(X_{n+1} \mid \mathcal{G}_n) - Z\bigr)$, where $Z$ is the almost-sure limit of $E(X_{n+1} \mid \mathcal{G}_n)$ (assumed to exist). Conditions for $(C_n, D_n) \to N(0, U) \times N(0, V)$ stably are given, where $U$ and $V$ are certain random variables. In particular, under such conditions, we obtain $\sqrt{n}\bigl(\frac{1}{n}\sum_{k=1}^{n} X_k - Z\bigr) = C_n + D_n \to N(0, U + V)$ stably. This central limit theorem has natural applications to Bayesian statistics and urn problems. The latter are investigated, by paying special attention to multicolor randomly reinforced urns.
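For intuition about the urn applications, a short simulation sketch of a two-colour randomly reinforced urn (illustrative assumptions, not the authors' code) shows the predictable proportion $E(X_{n+1} \mid \mathcal{G}_n)$ and the running mean of the draws approaching the same random limit $Z$:

```python
import numpy as np

# Two-colour randomly reinforced urn (illustrative sketch).
# X_k = 1 if the k-th drawn ball is white, 0 otherwise; E(X_{k+1} | G_k) is the
# current proportion of white balls, which converges almost surely to a random limit Z.
rng = np.random.default_rng(0)
white, black = 1.0, 1.0          # initial composition
n_draws = 10_000
draws = np.empty(n_draws)

for k in range(n_draws):
    p_white = white / (white + black)      # E(X_{k+1} | G_k)
    is_white = rng.random() < p_white
    draws[k] = is_white
    reinforcement = rng.uniform(0.5, 1.5)  # random reinforcement of the drawn colour
    if is_white:
        white += reinforcement
    else:
        black += reinforcement

print("final predictable proportion :", white / (white + black))
print("empirical mean of the draws  :", draws.mean())
# Both quantities settle near the same (random) limit Z; the CLT above quantifies
# how fast the running mean and the predictable proportion approach it.
```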


2017 ◽  
Author(s):  
Mirko Thalmann ◽  
Marcel Niklaus ◽  
Klaus Oberauer

Using mixed-effects models and Bayesian statistics has been advocated by statisticians in recent years. Mixed-effects models allow researchers to adequately account for the structure in the data. Bayesian statistics – in contrast to frequentist statistics – can state the evidence in favor of or against an effect of interest. For frequentist statistical methods, it is known that mixed models can lead to serious over-estimation of evidence in favor of an effect (i.e., an inflated Type-I error rate) when models fail to include individual differences in the effect sizes of predictors ("random slopes") that are actually present in the data. Here, we show through simulation that the same problem exists for Bayesian mixed models. Yet, at present there is no easy-to-use application that allows for the estimation of Bayes factors for mixed models with random slopes on continuous predictors. Here, we close this gap by introducing a new R package called BayesRS. We tested its functionality in four simulation studies, which show that BayesRS offers a reliable and valid tool to compute Bayes factors. BayesRS also allows users to account for correlations between random effects. In a fifth simulation study we show, however, that doing so leads to a slight underestimation of the evidence in favor of an actually present effect. We recommend modeling correlations between random effects only when they are of primary interest and when the sample size is large enough. BayesRS is available at https://cran.r-project.org/web/packages/BayesRS/, and R code for all simulations is available at https://osf.io/nse5x/?view_only=b9a7caccd26a4764a084de3b8d459388
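The random-slopes problem described above can be reproduced without any particular package; the following sketch uses hypothetical simulation settings and simple frequentist tests purely to illustrate the inflation, and is not BayesRS code:

```python
import numpy as np
from scipy import stats

# The true mean slope is zero, but each subject has an individual (random) slope.
rng = np.random.default_rng(42)
n_subj, n_trials, reps, alpha = 30, 20, 500, 0.05
reject_pooled, reject_by_subject = 0, 0

for _ in range(reps):
    x = rng.normal(size=(n_subj, n_trials))
    subj_slope = rng.normal(0.0, 0.5, size=(n_subj, 1))   # random slopes, mean 0
    y = subj_slope * x + rng.normal(0.0, 1.0, size=(n_subj, n_trials))

    # (a) Pooled analysis that ignores individual differences in slopes:
    r = stats.linregress(x.ravel(), y.ravel())
    reject_pooled += r.pvalue < alpha

    # (b) Analysis that respects the subject structure: one slope estimate per
    #     subject, then a one-sample t-test on those estimates.
    slopes = [stats.linregress(x[i], y[i]).slope for i in range(n_subj)]
    reject_by_subject += stats.ttest_1samp(slopes, 0.0).pvalue < alpha

print("Type-I error, slopes ignored  :", reject_pooled / reps)      # well above 0.05
print("Type-I error, slopes modelled :", reject_by_subject / reps)  # close to 0.05
```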


2015 ◽  
Vol 123 (1) ◽  
pp. 101-115 ◽  
Author(s):  
Emine Ozgur Bayman ◽  
Franklin Dexter ◽  
Michael M. Todd

Abstract. Background: Periodic assessment of performance by anesthesiologists is required by The Joint Commission's Ongoing Professional Performance Evaluation program. Methods: The metrics used in this study were the measurement of (1) blood pressure and (2) oxygen saturation (Spo2) either before or less than 5 min after anesthesia induction. Noncompliance was defined as no measurement within this time interval. The authors assessed the frequency of noncompliance using information from 63,913 cases drawn from the anesthesia information management system. To adjust for differences in patient and procedural characteristics, 135 preoperative variables were analyzed with decision trees. The retained covariate for the blood pressure metric was the patient’s age; the retained covariates for the Spo2 metric were American Society of Anesthesiologists physical status, whether the patient was coming from an intensive care unit, and whether induction occurred within 5 min of the start of the scheduled workday. A Bayesian hierarchical model, designed to identify anesthesiologists as “performance outliers” after adjustment for covariates, was developed and compared with frequentist methods. Results: The global incidences of noncompliance (with frequentist 95% CIs) were 5.35% (5.17 to 5.53%) for the blood pressure metric and 1.22% (1.14 to 1.30%) for the Spo2 metric. Using unadjusted rates and frequentist statistics, up to 43% of anesthesiologists would be deemed noncompliant for the blood pressure metric and 70% for the Spo2 metric. Using Bayesian analyses with covariate adjustment, only 2.44% (1.28 to 3.60%) and 0.00% of anesthesiologists would be deemed “noncompliant” for blood pressure and Spo2, respectively. Conclusion: A Bayesian hierarchical multivariate methodology with covariate adjustment is better suited to faculty monitoring than the nonhierarchical frequentist approach.
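The contrast between raw rates and hierarchically shrunken rates that drives these results can be sketched with a simple beta-binomial model; the data, thresholds, and empirical-Bayes prior below are hypothetical, and the covariate adjustment of the actual study is omitted:

```python
import numpy as np
from scipy import stats

# Hypothetical clinician-level data: case counts and noncompliant cases.
rng = np.random.default_rng(7)
n_clinicians = 40
cases = rng.integers(50, 500, size=n_clinicians)
true_rates = rng.beta(2, 35, size=n_clinicians)      # around ~5% noncompliance
noncompliant = rng.binomial(cases, true_rates)

raw_rate = noncompliant / cases
overall = noncompliant.sum() / cases.sum()

# Simple beta-binomial partial pooling: a Beta(a, b) prior on each clinician's rate,
# fixed here by a rough method-of-moments fit to the raw rates (empirical Bayes).
m, v = raw_rate.mean(), raw_rate.var()
common = m * (1 - m) / v - 1
a, b = m * common, (1 - m) * common

# Each clinician's posterior is Beta(a + y, b + n - y); flag an "outlier" only if the
# posterior probability of exceeding, say, twice the overall rate is high.
threshold = 2 * overall
post_prob = 1 - stats.beta.cdf(threshold, a + noncompliant, b + cases - noncompliant)

flag_raw = raw_rate > threshold     # naive rule on raw rates
flag_bayes = post_prob > 0.95       # shrinkage-based rule

# With noisy raw rates, the naive rule typically flags more clinicians.
print("flagged by raw rates         :", flag_raw.sum())
print("flagged by hierarchical rule :", flag_bayes.sum())
```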


2013 ◽  
Vol 28 (02) ◽  
pp. 1340008
Author(s):  
LESZEK ROSZKOWSKI ◽  
ENRICO MARIA SESSOLO ◽  
YUE-LIN SMING TSAI

In this talk we present our recent Bayesian analyses of the Constrained MSSM in which the model's parameter space is constrained by the CMS αT 1.1/fb data at the LHC, the XENON100 dark matter direct detection data, and Fermi-LAT γ-ray data from dwarf spheroidal galaxies (dSphs). We also show that the projected one-year sensitivities for annihilation-induced neutrinos from the Sun in the 86-string configuration of IceCube/DeepCore have the potential to yield additional constraining power on the parameter space of the CMSSM.


2021 ◽  
pp. 165-180
Author(s):  
Timothy E. Essington

The chapter “Bayesian Statistics” gives a brief overview of the Bayesian approach to statistical analysis. It starts off by examining the difference between frequentist statistics and Bayesian statistics. Next, it introduces Bayes’ theorem and explains how the theorem is used in statistics and model selection, with the prosecutor’s fallacy given as a practice example. The chapter then goes on to discuss priors and Bayesian parameter estimation. It concludes with some final thoughts on Bayesian approaches. The chapter does not answer the question “Should ecologists become Bayesian?” However, to the extent that alternative models can be posed as alternative values of parameters, Bayesian parameter estimation can help assign probabilities to those hypotheses.
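For readers who want the prosecutor's fallacy spelled out, a standard Bayes'-theorem version with illustrative numbers (not the chapter's own example) is:

```latex
% Prosecutor's fallacy: P(evidence | innocent) is not P(innocent | evidence).
% Illustrative numbers: a forensic match has probability 1 in 10,000 for an
% innocent person and probability 1 for the guilty one, and the prior pool of
% plausible suspects is 100,000 people.
\[
P(\text{match}\mid\text{innocent}) = 10^{-4},\qquad
P(\text{match}\mid\text{guilty}) = 1,\qquad
P(\text{guilty}) = 10^{-5}.
\]
\[
P(\text{guilty}\mid\text{match})
= \frac{P(\text{match}\mid\text{guilty})\,P(\text{guilty})}
       {P(\text{match}\mid\text{guilty})\,P(\text{guilty})
        + P(\text{match}\mid\text{innocent})\,P(\text{innocent})}
= \frac{10^{-5}}{10^{-5} + 10^{-4}\,(1-10^{-5})}
\approx 0.09 .
\]
% The match is rare for an innocent person, yet the posterior probability of
% guilt is only about 9%, because the prior probability of guilt was tiny.
```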


2018 ◽  
Vol 8 (1) ◽  
pp. 18-29 ◽  
Author(s):  
K. R. Koch

Abstract. The Bayesian approach allows an intuitive way to derive the methods of statistics. Probability is defined as a measure of the plausibility of statements or propositions. Three rules are sufficient to obtain the laws of probability. If the statements refer to the numerical values of variables, the so-called random variables, univariate and multivariate distributions follow. They lead to point estimation, by which unknown quantities, i.e. unknown parameters, are computed from measurements. The unknown parameters are random variables; in traditional statistics, which is not founded on Bayes’ theorem, they are fixed quantities. Bayesian statistics therefore recommends itself for Monte Carlo methods, which generate random variates from given distributions. Monte Carlo methods, of course, can also be applied in traditional statistics. The unknown parameters are introduced as functions of the measurements, and the Monte Carlo methods give the covariance matrix and the expectation of these functions. A confidence region is derived in which the unknown parameters are situated with a given probability. Following a method of traditional statistics, hypotheses are tested by determining whether a value for an unknown parameter lies inside or outside the confidence region. The error propagation of a random vector by the Monte Carlo methods is presented as an application. If the random vector results from a nonlinearly transformed vector, its covariance matrix and its expectation follow from the Monte Carlo estimate. This saves computing a considerable number of derivatives and avoids linearization errors; the Monte Carlo method is therefore efficient. If the functions of the measurements are given by a sum of two or more random vectors with different multivariate distributions, the resulting distribution is generally not known. The Monte Carlo methods are then needed to obtain the covariance matrix and the expectation of the sum.
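A minimal sketch of Monte Carlo error propagation for a nonlinearly transformed random vector, using a hypothetical polar-to-Cartesian example rather than anything from the paper, and compared against the linearized approximation:

```python
import numpy as np

# Monte Carlo error propagation for a nonlinearly transformed random vector
# (illustrative example: polar -> Cartesian conversion of noisy measurements).
rng = np.random.default_rng(3)

mu = np.array([10.0, np.pi / 4])                 # measured distance and angle
Sigma = np.diag([0.05**2, (np.pi / 180)**2])     # their covariance matrix

def transform(x):
    r, phi = x[..., 0], x[..., 1]
    return np.stack([r * np.cos(phi), r * np.sin(phi)], axis=-1)

# Monte Carlo: draw input vectors, push them through the nonlinear function,
# then take the sample mean and sample covariance of the outputs.
samples = rng.multivariate_normal(mu, Sigma, size=200_000)
y = transform(samples)
mc_mean = y.mean(axis=0)
mc_cov = np.cov(y, rowvar=False)

# Linearization for comparison: J Sigma J^T with the Jacobian at the mean.
r, phi = mu
J = np.array([[np.cos(phi), -r * np.sin(phi)],
              [np.sin(phi),  r * np.cos(phi)]])
lin_cov = J @ Sigma @ J.T

print("Monte Carlo mean       :", mc_mean)
print("Monte Carlo covariance :\n", mc_cov)
print("Linearized covariance  :\n", lin_cov)
# The two covariance matrices agree closely here; the Monte Carlo version needs
# no derivatives and remains valid when the transformation is strongly nonlinear.
```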


1999 ◽  
Vol 19 (3) ◽  
pp. 767-807 ◽  
Author(s):  
HANS THUNBERG

It is known that in generic, full unimodal families with a critical point of finite order, there exists a set of positive measure in parameter space such that the corresponding maps have chaotic behaviour. In this paper we prove the corresponding statement for certain families of unimodal maps with a flat critical point. One of the key points is a large-deviation argument for sums of ‘almost’ independent random variables with only finitely many moments.


2019 ◽  
Vol 22 ◽  
Author(s):  
Miguel Ángel García-Pérez

Abstract. Criticism of null hypothesis significance testing, confidence intervals, and frequentist statistics in general has evolved into advocacy of Bayesian analyses with informative priors for strong inference. This paper shows that Bayesian analysis with informative priors is formally equivalent to data falsification because the information carried by the prior can be expressed as the addition of fabricated observations whose statistical characteristics are determined by the parameters of the prior. This property of informative priors makes clear that only the use of non-informative, uniform priors in all types of Bayesian analyses is compatible with standards of research integrity. At the same time, though, Bayesian estimation with uniform priors yields point and interval estimates that are identical or nearly identical to those obtained with frequentist methods. At a qualitative level, frequentist and Bayesian outcomes have different interpretations but they are interchangeable when uniform priors are used. Yet, Bayesian interpretations require either the assumption that population parameters are random variables (which they are not) or an explicit acknowledgment that the posterior distribution (which is thus identical to the likelihood function except for a scale factor) only expresses the researcher’s beliefs and not any information about the parameter of concern.
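The equivalence between an informative prior and added observations is easiest to see in the conjugate binomial case; the identity below is a standard textbook result, given here for concreteness rather than taken from the paper:

```latex
% A Beta(a, b) prior combined with y successes in n trials gives
\[
\theta \mid y \sim \mathrm{Beta}(a + y,\; b + n - y), \qquad
E(\theta \mid y) = \frac{a + y}{a + b + n},
\]
% which is exactly what a uniform Beta(1, 1) prior would give if the data set
% were augmented with a - 1 extra successes and b - 1 extra failures: the prior
% acts like (a - 1) + (b - 1) pseudo-observations. A uniform prior adds none,
% which is the paper's argument for restricting attention to it.
```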

