A probability distribution over latent causes in the orbitofrontal cortex

2016 ◽  
Author(s):  
Stephanie C.Y. Chan ◽  
Yael Niv ◽  
Kenneth A. Norman

ABSTRACT
The orbitofrontal cortex (OFC) has been implicated in both the representation of “state”, in studies of reinforcement learning and decision making, and also in the representation of “schemas”, in studies of episodic memory. Both of these cognitive constructs require a similar inference about the underlying situation or “latent cause” that generates our observations at any given time. The statistically optimal solution to this inference problem is to use Bayes rule to compute a posterior probability distribution over latent causes. To test whether such a posterior probability distribution is represented in the OFC, we tasked human participants with inferring a probability distribution over four possible latent causes, based on their observations. Using fMRI pattern similarity analyses, we found that BOLD activity in OFC is best explained as representing the (log-transformed) posterior distribution over latent causes. Furthermore, this pattern explained OFC activity better than other task-relevant alternatives such as the most probable latent cause, the most recent observation, or the uncertainty over latent causes.

SIGNIFICANCE STATEMENT
Our world is governed by hidden (latent) causes that we cannot observe, but which generate the observations that we do see. A range of high-level cognitive processes require inference of a probability distribution (or “belief distribution”) over the possible latent causes that might be generating our current observations. This is true for reinforcement learning (where the latent cause comprises the true “state” of the task), and for episodic memory (where memories are believed to be organized by the inferred situation or “schema”). Using fMRI, we show that this belief distribution over latent causes is encoded in patterns of brain activity in the orbitofrontal cortex — an area that has been separately implicated in the representations of both states and schemas.

CONFLICT OF INTEREST
The authors declare no competing financial interests.
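As a hedged illustration of the inference step described here (not the task or analysis code from the study): the sketch below updates a posterior over four hypothetical latent causes with Bayes rule after each observation, then takes the log transform that the pattern-similarity analysis is reported to favour. The likelihood table, prior, and observation sequence are all invented.

```python
import numpy as np

# Hypothetical illustration of Bayes rule over four latent causes.
# The likelihood table and observation sequence are made up; the study's
# actual task structure is not reproduced here.
likelihood = np.array([
    # P(observation | cause) for 3 possible observations, 4 causes
    [0.7, 0.1, 0.1, 0.1],
    [0.2, 0.6, 0.3, 0.1],
    [0.1, 0.3, 0.6, 0.8],
])
prior = np.full(4, 0.25)          # uniform prior over the four causes
observations = [0, 2, 2, 1]       # indices of observed events

posterior = prior.copy()
for obs in observations:
    posterior *= likelihood[obs]  # multiply in the likelihood of this observation
    posterior /= posterior.sum()  # renormalize (Bayes rule)

log_posterior = np.log(posterior) # the log-transformed distribution compared to BOLD patterns
print(posterior, log_posterior)
```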

Author(s):  
Munir S Pathan ◽  
S M Pradhan ◽  
T Palani Selvam

Abstract In this study, the Bayesian probabilistic approach is applied to estimate the actual dose from the personnel monitoring dose records of occupational workers. To implement the Bayesian approach, the probability distribution of the uncertainty in the reported dose as a function of the actual dose is derived. Using this uncertainty distribution of the reported dose and prior knowledge of the dose levels generally observed in a monitoring period, the posterior probability distribution of the actual dose is estimated. The posterior distributions for each monitoring period in a year are then convolved to arrive at the actual annual dose distribution. The estimated actual dose distributions deviate significantly from the reported annual doses, particularly at low annual doses.
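A minimal sketch of this kind of reconstruction, under assumptions not taken from the paper: a lognormal model for the reported-dose uncertainty, an exponential prior on typical period doses, a grid-based posterior for each monitoring period, and a convolution of the period posteriors into an annual distribution. The function period_posterior and all numerical values are hypothetical.

```python
import numpy as np
from scipy import stats, signal

# Hedged sketch (not the authors' code): posterior over the actual dose in one
# monitoring period, assuming a lognormal reported-dose uncertainty model and an
# exponential prior on typical occupational doses. All parameter values are invented.
dose_grid = np.linspace(0.01, 5.0, 500)          # actual dose grid (mSv)

def period_posterior(reported_dose, sigma_log=0.3, prior_mean=0.2):
    # likelihood of the reported dose given each candidate actual dose
    likelihood = stats.lognorm.pdf(reported_dose, s=sigma_log, scale=dose_grid)
    prior = stats.expon.pdf(dose_grid, scale=prior_mean)
    post = likelihood * prior
    return post / post.sum()                     # normalize on the grid

# Convolve the period posteriors to obtain the annual dose distribution.
reported = [0.15, 0.30, 0.10, 0.25]              # hypothetical quarterly reported doses (mSv)
annual = period_posterior(reported[0])
for r in reported[1:]:
    annual = signal.fftconvolve(annual, period_posterior(r))
annual /= annual.sum()                           # renormalize after convolution

# the convolved support starts at 4x the grid origin and keeps the grid spacing
annual_grid = 4 * dose_grid[0] + np.arange(annual.size) * (dose_grid[1] - dose_grid[0])
print("posterior mean annual dose ~", np.sum(annual_grid * annual), "mSv")
```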


2020 ◽  
Vol 09 (04) ◽  
pp. 2050017
Author(s):  
Benjamin D. Donovan ◽  
Randall L. McEntaffer ◽  
Casey T. DeRoo ◽  
James H. Tutt ◽  
Fabien Grisé ◽  
...  

The soft X-ray grating spectrometer on board the Off-plane Grating Rocket Experiment (OGRE) aims to achieve the highest-resolution soft X-ray spectrum of an astrophysical object when it is launched via suborbital rocket. Paramount to the success of the spectrometer is the performance of the [Formula: see text] reflection gratings populating its reflection grating assembly. To test current grating fabrication capabilities, a grating prototype for the payload was fabricated via electron-beam lithography at The Pennsylvania State University’s Materials Research Institute and was subsequently tested for performance at the Max Planck Institute for Extraterrestrial Physics’ PANTER X-ray Test Facility. Bayesian modeling of the resulting data via Markov chain Monte Carlo (MCMC) sampling indicated that the grating achieved the OGRE single-grating resolution requirement of [Formula: see text] at the 94% confidence level. The resulting [Formula: see text] posterior probability distribution suggests, however, that this confidence level is a conservative estimate, since only a finite [Formula: see text] parameter space was sampled and the model could not constrain the upper bound of [Formula: see text] to a finite value. Raytrace simulations of the tested system found that the observed data can be reproduced by a grating performing at [Formula: see text]. It is therefore postulated that the behavior of the obtained [Formula: see text] posterior probability distribution is explained by a finite measurement limit of the test system rather than a finite limit on [Formula: see text]. Implications of these results and improvements to the test setup are discussed.
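A hedged sketch of this style of inference, with a deliberately simplified measurement model and invented numbers (the OGRE analysis is not reproduced, and the actual requirement value is not stated here): a Metropolis sampler over the resolution R = lambda/dlambda, where the measured line width is consistent with the instrumental width alone, so the posterior cannot be bounded from above within the finite sampled range.

```python
import numpy as np

# Hedged, self-contained sketch of the kind of MCMC inference described above
# (not the authors' analysis): infer a grating resolution R from a measured line
# width when the instrumental contribution dominates. All numbers are invented.
rng = np.random.default_rng(0)

lam = 2.5            # nm, hypothetical test wavelength
fwhm_instr = 0.0100  # nm, hypothetical instrument/facility line width
fwhm_meas = 0.0101   # nm, hypothetical measured line width
fwhm_err = 0.0010    # nm, measurement uncertainty

def log_posterior(logR):
    if not (np.log(500.0) < logR < np.log(1e6)):
        return -np.inf                         # flat prior on log R over a finite range,
                                               # mirroring the finite sampled parameter space
    model = np.hypot(fwhm_instr, lam / np.exp(logR))  # widths add in quadrature
    return -0.5 * ((fwhm_meas - model) / fwhm_err) ** 2

# Metropolis sampling in log(R) so the chain explores large R easily.
samples, logR = [], np.log(2000.0)
for _ in range(50_000):
    prop = logR + rng.normal(0.0, 0.3)
    if np.log(rng.uniform()) < log_posterior(prop) - log_posterior(logR):
        logR = prop
    samples.append(np.exp(logR))

R_req = 2000.0  # stand-in threshold; the actual requirement value is not given above
print("P(R > threshold) ~", np.mean(np.array(samples) > R_req))
```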


1992 ◽  
Vol 4 (3) ◽  
pp. 415-447 ◽  
Author(s):  
David J. C. MacKay

Although Bayesian analysis has been in use since Laplace, the Bayesian method of model-comparison has only recently been developed in depth. In this paper, the Bayesian approach to regularization and model-comparison is demonstrated by studying the inference problem of interpolating noisy data. The concepts and methods described are quite general and can be applied to many other data modeling problems. Regularizing constants are set by examining their posterior probability distribution. Alternative regularizers (priors) and alternative basis sets are objectively compared by evaluating the evidence for them. “Occam's razor” is automatically embodied by this process. The way in which Bayes infers the values of regularizing constants and noise levels has an elegant interpretation in terms of the effective number of parameters determined by the data set. This framework is due to Gull and Skilling.
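A hedged sketch of that framework for a linear-in-parameters interpolation model (not MacKay's original code): the evidence is evaluated as a function of the regularizing constant alpha and the noise precision beta, the effective number of parameters gamma is read off from the posterior, and alpha and beta are re-estimated from it. The basis set and data are invented.

```python
import numpy as np

# Hedged sketch of the evidence framework for interpolating noisy data with a
# linear-in-parameters model y = Phi @ w + noise, Gaussian prior w ~ N(0, 1/alpha),
# noise precision beta. Basis choice and data are invented for illustration.
rng = np.random.default_rng(1)
x = np.linspace(0, 1, 30)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, x.size)  # noisy data to interpolate
Phi = np.vander(x, 8, increasing=True)                   # polynomial basis (one possible basis set)
N, M = Phi.shape

def log_evidence(alpha, beta):
    A = alpha * np.eye(M) + beta * Phi.T @ Phi           # posterior precision of the weights
    w_mp = beta * np.linalg.solve(A, Phi.T @ y)          # most probable weights
    E = 0.5 * beta * np.sum((y - Phi @ w_mp) ** 2) + 0.5 * alpha * w_mp @ w_mp
    gamma = M - alpha * np.trace(np.linalg.inv(A))       # effective number of parameters
    logZ = (0.5 * M * np.log(alpha) + 0.5 * N * np.log(beta) - E
            - 0.5 * np.linalg.slogdet(A)[1] - 0.5 * N * np.log(2 * np.pi))
    return logZ, gamma, w_mp

# Re-estimation of the regularizing constant and noise level from the data.
alpha, beta = 1.0, 1.0
for _ in range(50):
    logZ, gamma, w_mp = log_evidence(alpha, beta)
    alpha = gamma / (w_mp @ w_mp)
    beta = (N - gamma) / np.sum((y - Phi @ w_mp) ** 2)
print(f"log evidence = {logZ:.1f}, effective parameters = {gamma:.2f}")
```

Comparing the converged log evidence across alternative basis sets (e.g. different polynomial orders) is what embodies “Occam's razor” in this framework: more flexible bases are penalized unless the data warrant them.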


2013 ◽  
Vol 807-809 ◽  
pp. 1570-1574 ◽  
Author(s):  
Hai Dong Yang ◽  
Dong Guo Shao ◽  
Bi Yu Liu

This paper considers pollution point-source identification for non-shore emissions, the main form of sudden water pollution incident. First, the source traceability of sudden water pollution accidents is cast as a Bayesian estimation problem; second, the posterior probability distribution of the source parameters is derived; third, the marginal posterior probability density is obtained using a new traceability method; finally, the proposed method is compared with Bayesian MCMC in numerical experiments. The conclusions are as follows: the new traceability method reduces the number of iterations, improves recognition accuracy, and noticeably reduces the overall average error; it is also more stable and robust than Bayesian MCMC and can identify the sources of sudden water pollution accidents effectively. It therefore offers a new approach to the difficult traceability problems posed by sudden water pollution accidents.
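A hedged sketch of the Bayesian formulation only (not the authors' new traceability method): an instantaneous point release in a one-dimensional river with advection-dispersion transport, a grid-evaluated joint posterior over the source mass M and location x0, and marginal posteriors obtained by summing over the grid. The function concentration and all parameter values are hypothetical.

```python
import numpy as np

# Hedged sketch: Bayesian source identification for an instantaneous release in a
# 1-D river. The transport model, observation sites, and noise level are invented.
u, D = 0.5, 10.0                     # flow velocity (m/s) and dispersion coefficient (m^2/s)

def concentration(x, t, M, x0):
    # analytical solution for an instantaneous release of mass M (per unit area) at x0, t = 0
    return M / np.sqrt(4 * np.pi * D * t) * np.exp(-(x - x0 - u * t) ** 2 / (4 * D * t))

# synthetic observations from a "true" source, with measurement noise
rng = np.random.default_rng(2)
x_obs, t_obs = np.array([1000.0, 1900.0]), np.array([1800.0, 3600.0])
M_true, x0_true, sigma = 50.0, 100.0, 0.01
c_obs = concentration(x_obs, t_obs, M_true, x0_true) + rng.normal(0, sigma, 2)

# grid posterior with flat priors over plausible ranges of M and x0
M_grid = np.linspace(10, 100, 200)
x0_grid = np.linspace(0, 500, 200)
MM, XX = np.meshgrid(M_grid, x0_grid, indexing="ij")
log_post = np.zeros_like(MM)
for xo, to, co in zip(x_obs, t_obs, c_obs):
    log_post += -0.5 * ((co - concentration(xo, to, MM, XX)) / sigma) ** 2
post = np.exp(log_post - log_post.max())
post /= post.sum()

marginal_M = post.sum(axis=1)        # marginal posterior over the source mass
marginal_x0 = post.sum(axis=0)       # marginal posterior over the source location
print("posterior mean M =", (M_grid * marginal_M).sum(),
      "posterior mean x0 =", (x0_grid * marginal_x0).sum())
```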


2014 ◽  
Vol 10 (S306) ◽  
pp. 273-275
Author(s):  
Pedro T. P. Viana

Abstract Observational data on clusters of galaxies hold relevant information that can be used to determine the relative plausibility of different models for the large-scale evolution of the Universe, or to estimate the joint posterior probability distribution function of the parameters that pertain to each model. Within the next few years, several surveys of the sky will yield large galaxy cluster catalogues. In order to make use of the vast amount of information they will contain, their selection functions will have to be properly understood. We argue that this, as well as the estimation of the full joint posterior probability distribution function of the most relevant cluster properties, can best be achieved in the framework of Bayesian statistics.
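As a hedged illustration of why the selection function enters the posterior (a toy model, not the survey likelihoods discussed here): Poisson counts of clusters in redshift bins, with and without the completeness folded into the expected counts. The function expected_counts, the completeness curve, and the parameter theta are all invented.

```python
import numpy as np

# Toy Poisson likelihood for cluster counts in redshift bins: the expected count is
# the model prediction multiplied by the survey completeness (selection function).
z_bins = np.linspace(0.1, 1.0, 10)

def expected_counts(theta, completeness):
    true_counts = 1000.0 * theta * np.exp(-2.0 * z_bins)   # invented count model
    return true_counts * completeness                       # what the survey would detect

completeness = 1.0 / (1.0 + np.exp((z_bins - 0.6) / 0.1))   # toy selection function, drops with z
rng = np.random.default_rng(3)
observed = rng.poisson(expected_counts(0.8, completeness))  # synthetic catalogue counts

# posterior of theta on a grid (flat prior), with and without the selection function
theta_grid = np.linspace(0.4, 1.2, 400)
def log_post(comp):
    mu = np.array([expected_counts(t, comp) for t in theta_grid])
    return (observed * np.log(mu) - mu).sum(axis=1)         # Poisson log-likelihood up to a constant

for label, comp in [("with selection", completeness), ("ignoring selection", 1.0)]:
    lp = log_post(comp)
    w = np.exp(lp - lp.max())
    print(label, "posterior mean theta =", np.sum(theta_grid * w) / np.sum(w))
```

Ignoring the completeness in this toy setup biases the inferred parameter low, which is the sense in which the selection function must be "properly understood" before the posterior can be trusted.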


2020 ◽  
Author(s):  
Xin Zhang ◽  
Andrew Curtis

In a variety of geoscientific applications we require maps of subsurface properties together with the corresponding maps of uncertainties to assess their reliability. Seismic tomography is a method that is widely used to generate those maps. Since tomography is significantly nonlinear, Monte Carlo sampling methods are often used for this purpose, but they are generally computationally intractable for large data sets and high-dimensional parameter spaces. To extend uncertainty analysis to larger systems, we introduce variational inference methods to conduct seismic tomography. In contrast to Monte Carlo sampling, variational methods solve the Bayesian inference problem as an optimization problem yet still provide fully nonlinear, probabilistic results. This is achieved by minimizing the Kullback-Leibler (KL) divergence between approximate and target probability distributions within a predefined family of probability distributions.

We introduce two variational inference methods: automatic differentiation variational inference (ADVI) and Stein variational gradient descent (SVGD). In ADVI a Gaussian probability distribution is assumed and optimized to approximate the posterior probability distribution. In SVGD a smooth transform is iteratively applied to an initial probability distribution to obtain an approximation to the posterior probability distribution. At each iteration the transform is determined by seeking the steepest-descent direction that minimizes the KL divergence.

We apply the two variational inference methods to 2D travel-time tomography using both synthetic and real data, and compare the results to those obtained from two different Monte Carlo sampling methods: Metropolis-Hastings Markov chain Monte Carlo (MH-McMC) and reversible jump Markov chain Monte Carlo (rj-McMC). The results show that ADVI provides a biased approximation because of its Gaussian assumption, whereas SVGD produces more accurate approximations to the results of MH-McMC. In comparison, rj-McMC produces smoother mean velocity models and lower standard deviations because the parameterization used in rj-McMC (Voronoi cells) imposes prior restrictions on the pixelated form of models: all pixels within each Voronoi cell have identical velocities. This suggests that the results of rj-McMC need to be interpreted in the light of the specific prior information imposed by the parameterization. Both variational methods estimate the posterior distribution at significantly lower computational cost, provided that gradients of parameters with respect to data can be calculated efficiently. We therefore expect that the methods can be applied fruitfully to many other types of geophysical inverse problems.
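As a hedged illustration of the SVGD update described above (a generic toy example, not the authors' tomography code): particles drawn from a broad initial distribution are iteratively transported toward a simple two-dimensional target by the kernelized steepest-descent transform. The names grad_log_p and svgd_step, the target density, and all settings are invented.

```python
import numpy as np

# Hedged sketch of a Stein variational gradient descent step with an RBF kernel:
# each iteration applies a smooth transform to the whole set of particles so that
# their empirical distribution moves toward the target posterior.
rng = np.random.default_rng(4)

def grad_log_p(x):
    # toy correlated-Gaussian "posterior"; a tomography application would replace this
    # with gradients of the data misfit from the forward model
    cov_inv = np.linalg.inv(np.array([[1.0, 0.8], [0.8, 1.0]]))
    return -(x @ cov_inv)

def svgd_step(x, step=0.05):
    diff = x[:, None, :] - x[None, :, :]              # pairwise differences between particles
    sq = (diff ** 2).sum(-1)
    h = np.median(sq) / np.log(len(x) + 1.0)          # median-heuristic kernel bandwidth
    k = np.exp(-sq / h)                                # RBF kernel matrix
    grad_k = (2.0 / h) * k[:, :, None] * diff          # kernel gradients (repulsive term)
    phi = (k @ grad_log_p(x) + grad_k.sum(axis=1)) / len(x)
    return x + step * phi                              # smooth transform applied to all particles

particles = rng.normal(0, 3, size=(100, 2))            # initial distribution
for _ in range(500):
    particles = svgd_step(particles)
print("sample covariance of particles:\n", np.cov(particles.T))
```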


2020 ◽  
Vol 76 (3) ◽  
pp. 238-247 ◽  
Author(s):  
Randy J. Read ◽  
Robert D. Oeffner ◽  
Airlie J. McCoy

The information gained by making a measurement, termed the Kullback–Leibler divergence, assesses how much more precisely the true quantity is known after the measurement was made (the posterior probability distribution) than before (the prior probability distribution). It provides an upper bound for the contribution that an observation can make to the total likelihood score in likelihood-based crystallographic algorithms. This makes information gain a natural criterion for deciding which data can legitimately be omitted from likelihood calculations. Many existing methods use an approximation for the effects of measurement error that breaks down for very weak and poorly measured data. For such methods a different (higher) information threshold is appropriate compared with methods that account well for even large measurement errors. Concerns are raised about a current trend to deposit data that have been corrected for anisotropy, sharpened and pruned without including the original unaltered measurements. If not checked, this trend will have serious consequences for the reuse of deposited data by those who hope to repeat calculations using improved new methods.
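For reference, the quantity in question can be written out directly; a hedged aside with the one-dimensional Gaussian case as a simple special instance (the crystallographic likelihood targets used in the paper are not reproduced here):

```latex
% Information gain (KL divergence) from prior to posterior, and its closed form
% for one-dimensional Gaussian prior and posterior.
D_{\mathrm{KL}}\!\left(p_{\text{post}} \,\|\, p_{\text{prior}}\right)
  = \int p(x \mid \text{obs}) \, \ln \frac{p(x \mid \text{obs})}{p(x)} \, \mathrm{d}x ,
\qquad
D_{\mathrm{KL}} = \ln\frac{\sigma_0}{\sigma_1}
  + \frac{\sigma_1^{2} + (\mu_1 - \mu_0)^{2}}{2\sigma_0^{2}} - \frac{1}{2}
\quad \text{for } p_{\text{prior}} = \mathcal{N}(\mu_0,\sigma_0^{2}),\;
       p_{\text{post}} = \mathcal{N}(\mu_1,\sigma_1^{2}).
```

In this form the role of the threshold is clear: an observation whose posterior barely differs from the prior contributes a KL divergence near zero, and such data are candidates for omission from the likelihood calculation.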


2019 ◽  
Vol 485 (3) ◽  
pp. 4343-4358
Author(s):  
Germán Chaparro-Molano ◽  
Juan Carlos Cuervo ◽  
Oscar Alberto Restrepo Gaitán ◽  
Sergio Torres Arzayús

ABSTRACT We propose the use of robust, Bayesian methods for estimating extragalactic distance errors in multimeasurement catalogues. We seek to improve upon the more commonly used frequentist propagation-of-error methods, as they fail to explain both the scatter between different measurements and the effects of skewness in the metric distance probability distribution. For individual galaxies, the most transparent way to assess the variance of redshift independent distances is to directly sample the posterior probability distribution obtained from the mixture of reported measurements. However, sampling the posterior can be cumbersome for catalogue-wide precision cosmology applications. We compare the performance of frequentist methods versus our proposed measures for estimating the true variance of the metric distance probability distribution. We provide pre-computed distance error data tables for galaxies in three catalogues: NED-D, HyperLEDA, and Cosmicflows-3. Additionally, we develop a Bayesian model that considers systematic and random effects in the estimation of errors for Tully–Fisher (TF) relation derived distances in NED-D. We validate this model with a Bayesian p-value computed using the Freeman–Tukey discrepancy measure as a posterior predictive check. We are then able to predict distance errors for 884 galaxies in the NED-D catalogue and 203 galaxies in the HyperLEDA catalogue that do not report TF distance modulus errors. Our goal is that our estimated and predicted errors are used in catalogue-wide applications that require acknowledging the true variance of extragalactic distance measurements.
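A hedged sketch of the contrast the authors draw, not their catalogue pipeline: several reported distance moduli for one galaxy are combined either by frequentist inverse-variance weighting or by sampling a mixture of the reported measurements, whose scatter and skewness then show up in the metric-distance distribution. All measurement values are invented.

```python
import numpy as np

# Hedged illustration: frequentist error propagation vs. sampling the posterior built
# from a mixture of reported measurements for a single galaxy. Values are invented.
rng = np.random.default_rng(5)
mu = np.array([31.2, 31.6, 30.9, 31.8])     # reported distance moduli (mag)
err = np.array([0.15, 0.20, 0.25, 0.30])    # reported uncertainties (mag)

# frequentist propagation: inverse-variance weighted mean and its formal error
w = 1.0 / err ** 2
mu_w = np.sum(w * mu) / np.sum(w)
err_w = np.sqrt(1.0 / np.sum(w))

# samples from an equal-weight mixture of the reported measurements
comp = rng.integers(0, len(mu), 100_000)
mu_samples = rng.normal(mu[comp], err[comp])
d_samples = 10 ** (0.2 * (mu_samples - 25.0))   # metric distance in Mpc (skewed distribution)

print(f"weighted: mu = {mu_w:.2f} +/- {err_w:.2f}")
print(f"mixture:  mu = {mu_samples.mean():.2f} +/- {mu_samples.std():.2f}")
print(f"distance: median = {np.median(d_samples):.1f} Mpc, std = {d_samples.std():.1f} Mpc, "
      f"mean = {d_samples.mean():.1f} Mpc")
```

The mixture standard deviation captures the scatter between measurements that the formal weighted error ignores, and the transformation to metric distance makes the skewness explicit.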

