Identical twins and Bayes' theorem in the 21st century

F1000Research ◽  
2013 ◽  
Vol 2 ◽  
pp. 278
Author(s):  
Valentin Amrhein ◽  
Tobias Roth ◽  
Fränzi Korner-Nievergelt

In a recent article in Science on "Bayes' Theorem in the 21st Century", Bradley Efron uses Bayes' theorem to calculate the probability that twins are identical given that the sonogram shows twin boys. He concludes that Bayesian calculations cannot be uncritically accepted when using uninformative priors. We argue that this conclusion is problematic because Efron's example on identical twins does not use data, hence it is not Bayesian statistics; his priors are not appropriate and are not uninformative; and using the available data point and an uninformative prior actually leads to a reasonable posterior distribution.
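
Efron's calculation can be reproduced in a few lines. The numbers below (a one-in-three population prior for identical twins, equal-probability sex combinations for fraternal twins) are the figures commonly quoted for this example, so treat them as assumptions of the sketch rather than Efron's exact inputs.

```python
# Bayes' theorem for Efron's twins example. The 1/3 population prior for
# identical twins and the likelihoods below are the figures commonly quoted
# for this example; treat them as assumptions of the sketch.

def posterior_identical(prior_identical=1/3):
    """P(identical | sonogram shows twin boys)."""
    # Identical twins are always same-sex, so "both boys" has probability 1/2;
    # fraternal twins have four equally likely sex combinations, so 1/4.
    p_boys_given_identical = 1 / 2
    p_boys_given_fraternal = 1 / 4
    numerator = p_boys_given_identical * prior_identical
    evidence = numerator + p_boys_given_fraternal * (1 - prior_identical)
    return numerator / evidence

print(f"{posterior_identical():.3f}")     # → 0.500 with the 1/3 prior
print(f"{posterior_identical(0.5):.3f}")  # → 0.667 with a flat 50:50 prior
```

Note how the posterior moves with the prior: the same sonogram observation turns a one-third prior into a coin flip, which is exactly why the choice of prior is at issue in this exchange.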

Identical twins and Bayes' theorem in the 21st century

F1000Research ◽  
2015 ◽  
Vol 2 ◽  
pp. 278
Author(s):  
Valentin Amrhein ◽  
Tobias Roth ◽  
Fränzi Korner-Nievergelt

In an article in Science on "Bayes' Theorem in the 21st Century", Bradley Efron uses Bayes' theorem to calculate the probability that twins are identical given that the sonogram shows twin boys. He concludes that Bayesian calculations cannot be uncritically accepted when using uninformative priors. While we agree that the choice of the prior is essential, we argue that the calculations on identical twins give a biased impression of the influence of uninformative priors in Bayesian data analyses.


2017 ◽  
Vol 7 (1) ◽  
pp. 21
Author(s):  
Marco Dall'Aglio ◽  
Theodore P. Hill

It is well known that the classical Bayesian posterior arises naturally as the unique solution of different optimization problems, without the necessity of interpreting data as conditional probabilities and then using Bayes' Theorem. Here it is shown that the Bayesian posterior is also the unique minimax optimizer of the loss of self-information in combining the prior and the likelihood distributions, and is the unique proportional consolidation of the same distributions. These results, direct corollaries of recent results about conflations of probability distributions, further reinforce the use of Bayesian posteriors, and may help partially reconcile some of the differences between classical and Bayesian statistics.


Bayes' Theorem in the 21st Century

Science ◽  
2013 ◽  
Vol 340 (6137) ◽  
pp. 1177-1178 ◽  
Author(s):  
Bradley Efron

2020 ◽  
pp. 0193841X1989562
Author(s):  
David Rindskopf

Bayesian statistics is becoming a popular approach to handling complex statistical modeling. This special issue of Evaluation Review features several Bayesian contributions. In this overview, I present the basics of Bayesian inference. Bayesian statistics is based on the principle that parameters have a distribution of beliefs about them that behaves exactly like a probability distribution. We can use Bayes’ Theorem to update our beliefs about the values of the parameters as new information becomes available. Even better, we can make statements that frequentists do not, such as “the probability that an effect is larger than 0 is .93,” and can interpret (e.g.) 95% intervals as people naturally want: that there is a 95% probability that the parameter is in that interval. I illustrate the basic concepts of Bayesian statistics through a simple example of predicting admissions to a PhD program.
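
As an illustration of the kind of statement described above (not Rindskopf's actual example), the following sketch uses made-up admissions data, 7 admits out of 20 applicants, with a flat prior and a simple grid approximation of the posterior to compute P(theta > 0.5) and a central 95% credible interval.

```python
# Grid approximation of a posterior for an admission probability theta.
# Data (7 admits in 20 applications) and all numbers are hypothetical.
from math import comb

n, k = 20, 7
grid = [i / 1000 for i in range(1, 1000)]
# Flat prior, binomial likelihood: unnormalized posterior on the grid.
unnorm = [comb(n, k) * t**k * (1 - t)**(n - k) for t in grid]
total = sum(unnorm)
post = [w / total for w in unnorm]

# A directly interpretable probability statement about the parameter:
p_gt_half = sum(p for t, p in zip(grid, post) if t > 0.5)

# Central 95% credible interval from the cumulative posterior.
cum, lo, hi = 0.0, None, None
for t, p in zip(grid, post):
    cum += p
    if lo is None and cum >= 0.025:
        lo = t
    if hi is None and cum >= 0.975:
        hi = t

print(f"P(theta > 0.5) = {p_gt_half:.3f}, 95% interval ≈ ({lo:.3f}, {hi:.3f})")
```

With a conjugate setup, the same answers follow in closed form from the Beta(8, 14) posterior; the grid version just keeps the sketch dependency-free.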


Author(s):  
Therese M. Donovan ◽  
Ruth M. Mickey

The purpose of this chapter is to illustrate some of the things that can go wrong in Markov Chain Monte Carlo (MCMC) analysis and to introduce some diagnostic tools that help identify whether the results of such an analysis can be trusted. The goal of a Bayesian MCMC analysis is to estimate the posterior distribution while skipping the integration required in the denominator of Bayes’ Theorem. The MCMC approach does this by breaking the problem into small, bite-sized pieces, allowing the posterior distribution to be built bit by bit. The main challenge, however, is that several things might go wrong in the process. Several diagnostic tests can be applied to ensure that an MCMC analysis provides an adequate estimate of the posterior distribution. Such diagnostics are required of all MCMC analyses and include tuning, burn-in, and thinning.
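
A minimal sketch of two of those steps, burn-in and thinning, applied to a toy Metropolis chain; the target distribution and all settings here are invented for illustration and are not taken from the chapter.

```python
# Toy Metropolis chain targeting a standard normal posterior, followed by
# burn-in and thinning. All settings are illustrative assumptions.
import math
import random

random.seed(1)

def log_post(x):
    # Toy target: standard normal (up to an additive constant).
    return -0.5 * x * x

chain, x = [], 10.0                   # deliberately bad starting value
for _ in range(5000):
    prop = x + random.gauss(0, 1.0)   # tuning: the proposal standard deviation
    if math.log(random.random()) < log_post(prop) - log_post(x):
        x = prop                      # accept; otherwise keep the current value
    chain.append(x)

burned = chain[500:]                  # burn-in: drop early, start-dependent draws
thinned = burned[::10]                # thinning: keep every 10th draw

mean = sum(thinned) / len(thinned)
print(f"posterior mean estimate after burn-in and thinning: {mean:.2f}")
```

A trace plot of `chain` would show the slide from the bad starting value of 10 toward the target's bulk, which is exactly what the burn-in discards.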


2021 ◽  
pp. 165-180
Author(s):  
Timothy E. Essington

The chapter “Bayesian Statistics” gives a brief overview of the Bayesian approach to statistical analysis. It starts off by examining the difference between frequentist statistics and Bayesian statistics. Next, it introduces Bayes’ theorem and explains how the theorem is used in statistics and model selection, with the prosecutor’s fallacy given as a practice example. The chapter then goes on to discuss priors and Bayesian parameter estimation. It concludes with some final thoughts on Bayesian approaches. The chapter does not answer the question “Should ecologists become Bayesian?” However, to the extent that alternative models can be posed as alternative values of parameters, Bayesian parameter estimation can help assign probabilities to those hypotheses.


Author(s):  
Janet L. Peacock ◽  
Philip J. Peacock

Analysis of variance: See One-way analysis of variance (p. 280) and Two-way analysis of variance (p. 412)
Bayes' theorem: A formula that allows the reversal of conditional probabilities (see Bayes' theorem, p. 234)
Bayesian statistics: A statistical approach based on Bayes' theorem, where prior information or beliefs are combined with new data to provide estimates of unknown parameters (see ...


Author(s):  
Bradley E. Alger

This chapter covers the basics of Bayesian statistics, emphasizing the conceptual framework for Bayes’ Theorem. It works through several iterations of the theorem to demonstrate how the same equation is applied in different circumstances, from constructing and updating models to parameter evaluation, to try to establish an intuitive feel for it. The chapter also covers the philosophical underpinnings of Bayesianism and compares them with the frequentist perspective described in Chapter 5. It addresses the question of whether Bayesians are inductivists. Finally, the chapter shows how the Bayesian procedures of model selection and comparison can be pressed into service to allow Bayesian methods to be used in hypothesis testing in essentially the same way that various p-tests are used in the frequentist hypothesis testing framework.


2013 ◽  
Vol 838-841 ◽  
pp. 3300-3304
Author(s):  
Chong Wei ◽  
Lin Xiao ◽  
Chun Fu Shao

In this study we proposed a semi-compensatory model to analyze mode choice behavior. The model formulates the conjunctive rule in a straightforward way and can take into account the probability distribution of the threshold involved in the conjunctive rule. To estimate the parameters of the proposed model, we derived the posterior distribution of the parameters using Bayes' theorem and developed a blocked Metropolis-Hastings algorithm to carry out the estimation based on the posterior distribution. We also employed the data augmentation technique to simplify the estimation procedure. The proposed model was validated using an SP (stated preference) survey dataset, and we compared its performance to that of the logit model.
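
The estimation idea can be illustrated in a much simplified form: one parameter, no blocking, and no data augmentation, using a plain Metropolis sampler (the symmetric-proposal special case of Metropolis-Hastings) for a binary logit choice model. All data and numbers below are invented for illustration.

```python
# Simplified posterior sampling for a one-parameter binary logit choice
# model via a Metropolis sampler. Data are hypothetical stated-preference
# observations: (cost difference of mode 1, chose mode 1?).
import math
import random

random.seed(7)

data = [(-2, 1), (-1, 1), (0, 1), (0, 0), (1, 0), (2, 0), (1, 1), (-1, 0)]

def log_posterior(beta):
    # Flat prior; logit likelihood P(y=1) = 1 / (1 + exp(beta * cost)),
    # so a positive beta means costlier alternatives are chosen less often.
    ll = 0.0
    for cost, y in data:
        p1 = 1.0 / (1.0 + math.exp(beta * cost))
        ll += math.log(p1 if y == 1 else 1.0 - p1)
    return ll

beta, draws = 0.0, []
for _ in range(4000):
    prop = beta + random.gauss(0, 0.5)
    if math.log(random.random()) < log_posterior(prop) - log_posterior(beta):
        beta = prop
    draws.append(beta)

post = draws[1000:]               # discard burn-in
mean_beta = sum(post) / len(post)
print(f"posterior mean of beta: {mean_beta:.2f}")
```

The paper's actual scheme updates blocks of parameters in turn and augments the data with latent threshold draws; this sketch only shows the accept/reject core that both share.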

