Optimal foraging and the information theory of gambling

2018 ◽  
Author(s):  
Roland J. Baddeley ◽  
Nigel R. Franks ◽  
Edmund R. Hunt

Abstract
At a macroscopic level, part of the ant colony life cycle is simple: a colony collects resources; these resources are converted into more ants, and these ants in turn collect more resources. Because more ants collect more resources, this is a multiplicative process, and the expected logarithm of the amount of resources determines how successful the colony will be in the long run. Over 60 years ago, Kelly showed, using information theoretic techniques, that the rate of growth of resources for such a situation is optimised by a strategy of betting in proportion to the probability of payoff. Thus, in the case of ants, the fraction of the colony foraging at a given location should be proportional to the probability that resources will be found there, a result widely applied in the mathematics of gambling. This theoretical optimum generates predictions for which collective ant movement strategies might have evolved. Here, we show how colony-level optimal foraging behaviour can be achieved by mapping movement to Markov chain Monte Carlo (MCMC) methods, specifically Hamiltonian Monte Carlo (HMC). This can be done by the ants following a (noisy) local measurement of the (logarithm of the) resource probability gradient (possibly supplemented with momentum, i.e. a propensity to move in the same direction). This maps the problem of foraging (via the information theory of gambling, stochastic dynamics and techniques employed within Bayesian statistics to efficiently sample from probability distributions) to simple models of ant foraging behaviour. This identification has broad applicability, facilitates the application of information theory approaches to understanding movement ecology, and unifies insights from existing biomechanical, cognitive, random and optimality movement paradigms. At the cost of requiring ants to obtain (noisy) resource gradient information, we show that this model is both efficient and matches a number of characteristics of real ant exploration.
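Kelly's proportional-betting result invoked in the abstract can be checked numerically: under fair odds, the expected log growth rate is maximised exactly when the allocation matches the payoff probabilities. A minimal sketch, with purely illustrative site probabilities (none from the paper):

```python
import math

# Illustrative payoff probabilities for three foraging sites (sum to 1).
p = [0.6, 0.3, 0.1]

def expected_log_growth(f):
    """Expected log growth when fraction f[i] of resources is bet on
    site i at fair odds 1/p[i]: sum_i p_i * log(f_i / p_i)."""
    return sum(pi * math.log(fi / pi) for pi, fi in zip(p, f))

kelly = p                        # bet in proportion to probability
uniform = [1 / 3, 1 / 3, 1 / 3]  # spread foragers evenly
skewed = [0.8, 0.1, 0.1]         # over-commit to the best site

best = expected_log_growth(kelly)  # 0: no log-wealth lost at fair odds
```

Any other allocation f loses log-wealth at a rate equal to the Kullback–Leibler divergence KL(p‖f), which is the information-theoretic core of Kelly's argument.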

2019 ◽  
Vol 16 (157) ◽  
pp. 20190162 ◽  
Author(s):  
Roland J. Baddeley ◽  
Nigel R. Franks ◽  
Edmund R. Hunt



2015 ◽  
Vol 2015 ◽  
pp. 1-8 ◽  
Author(s):  
Juan P. Vargas ◽  
Jair C. Koppe ◽  
Sebastián Pérez ◽  
Juan P. Hurtado

Tunnels, drifts, drives, and other types of underground excavation are very common in mining as well as in the construction of roads, railways, dams, and other civil engineering projects. Planning is essential to the success of tunnel excavation, and construction time is one of the most important factors to be taken into account. This paper proposes a simulation algorithm based on a stochastic numerical method, the Markov chain Monte Carlo method, that can provide the best estimate of the opening excavation times for the classic method of drilling and blasting. Taking account of technical considerations that affect the tunnel excavation cycle, the simulation is developed through a computational algorithm. Using the Markov chain Monte Carlo method, the unit operations involved in the underground excavation cycle are identified and assigned probability distributions that, with random number input, make it possible to simulate the total excavation time. The results obtained with this method are compared with a real case of tunneling excavation. By incorporating variability in the planning, it is possible to determine with greater certainty the ranges over which the execution times of the unit operations fluctuate. In addition, the financial risks associated with planning errors can be reduced and the exploitation of resources maximized.
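The core of the approach can be sketched as a plain Monte Carlo pass over the cycle: assign each unit operation a probability distribution, draw, and sum into a total excavation time (the Markov-chain structure linking cycle states in the paper is omitted here). All operation names, means and deviations below are illustrative assumptions, not values from the paper:

```python
import random
import statistics

random.seed(42)

# Illustrative unit operations of one drill-and-blast cycle,
# as (mean, standard deviation) in hours.
OPERATIONS = {
    "drilling":    (2.0, 0.3),
    "charging":    (1.0, 0.2),
    "blasting":    (0.5, 0.1),
    "ventilation": (0.8, 0.2),
    "mucking":     (2.5, 0.4),
    "support":     (1.5, 0.3),
}

def cycle_time():
    """One random excavation cycle: a random draw per unit operation."""
    return sum(max(0.1, random.gauss(mu, sd)) for mu, sd in OPERATIONS.values())

def simulate(n_cycles, n_trials=2000):
    """Distribution of total time for a tunnel needing n_cycles cycles."""
    return sorted(sum(cycle_time() for _ in range(n_cycles))
                  for _ in range(n_trials))

totals = simulate(n_cycles=50)
mean_time = statistics.mean(totals)
p5, p95 = totals[100], totals[1899]  # a planning range, not a point estimate
```

Reporting the 5th–95th percentile range rather than a single figure is what lets the variability feed into financial-risk planning.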


Author(s):  
Galin L. Jones ◽  
Qian Qin

Markov chain Monte Carlo (MCMC) is an essential set of tools for estimating features of probability distributions commonly encountered in modern applications. For MCMC simulation to produce reliable outcomes, it must generate observations representative of the target distribution and run long enough that the errors of the Monte Carlo estimates are small. We review methods for assessing the reliability of the simulation effort, with an emphasis on those most useful in practically relevant settings. Both strengths and weaknesses of these methods are discussed. The methods are illustrated in several examples and in a detailed case study. Expected final online publication date for the Annual Review of Statistics and Its Application, Volume 9 is March 2022. Please see http://www.annualreviews.org/page/journal/pubdates for revised estimates.
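One family of diagnostics of the kind this review covers is the batch-means Monte Carlo standard error and the effective sample size it implies. A minimal sketch on a synthetic autocorrelated series (an AR(1) process standing in for MCMC output; all settings illustrative):

```python
import math
import random
import statistics

random.seed(0)

# An AR(1) series stands in for autocorrelated MCMC output.
phi, n = 0.9, 20000
chain = [0.0]
for _ in range(n - 1):
    chain.append(phi * chain[-1] + random.gauss(0, 1))

def batch_means_se(x, n_batches=50):
    """Monte Carlo standard error of the mean via non-overlapping
    batch means, which absorbs the chain's autocorrelation."""
    b = len(x) // n_batches
    means = [statistics.mean(x[i * b:(i + 1) * b]) for i in range(n_batches)]
    return statistics.stdev(means) / math.sqrt(n_batches)

naive_se = statistics.stdev(chain) / math.sqrt(len(chain))  # pretends i.i.d.
bm_se = batch_means_se(chain)
# Effective sample size implied by the ratio of the two error estimates.
ess = len(chain) * (naive_se / bm_se) ** 2
```

The gap between the naive and batch-means errors is exactly the kind of reliability signal the review is concerned with: here the 20 000 correlated draws are worth only on the order of a thousand independent ones.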


2014 ◽  
Vol 9 (S307) ◽  
pp. 123-124 ◽  
Author(s):  
Jorge Melnick

Abstract
There are a number of stochastic effects that must be considered when comparing models to observations of starburst clusters: the IMF is never fully populated; the stars can never be strictly coeval; stars rotate and their photometric properties depend on orientation; a significant fraction of massive stars are in interacting binaries; and the extinction varies from star to star. The probability distributions of each of these effects are not a priori known, but must be extracted from the observations. Markov chain Monte Carlo methods appear to provide the best statistical approach. Here I present an example of stochastic age effects upon the upper mass limit of the IMF of the Arches cluster as derived from near-IR photometry.
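As a concrete illustration of the MCMC approach invoked here, a bare random-walk Metropolis sampler can draw a cluster age from a posterior known only up to a constant; the Gaussian target below (mean 2.5 Myr, sd 0.5 Myr) is purely a stand-in, not the Arches posterior:

```python
import math
import random

random.seed(1)

def log_post(age):
    """Unnormalized log-posterior for a cluster age in Myr; a Gaussian
    stands in for whatever the photometry would actually imply."""
    return -0.5 * ((age - 2.5) / 0.5) ** 2

def metropolis(log_p, x0, step, n):
    """Random-walk Metropolis: needs log_p only up to a constant."""
    x, out = x0, []
    for _ in range(n):
        prop = x + random.gauss(0, step)
        if math.log(random.random()) < log_p(prop) - log_p(x):
            x = prop                      # accept; otherwise stay put
        out.append(x)
    return out

samples = metropolis(log_post, x0=1.0, step=0.6, n=20000)
burned = samples[2000:]                   # discard burn-in
post_mean = sum(burned) / len(burned)
post_sd = (sum((s - post_mean) ** 2 for s in burned) / len(burned)) ** 0.5
```

The same loop works unchanged when `log_post` is replaced by a likelihood that folds in the stochastic effects listed in the abstract, since Metropolis never needs the normalising constant.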


2020 ◽  
Vol 52 (2) ◽  
pp. 377-403 ◽  
Author(s):  
Axel Finke ◽  
Arnaud Doucet ◽  
Adam M. Johansen

Abstract
Both sequential Monte Carlo (SMC) methods (a.k.a. ‘particle filters’) and sequential Markov chain Monte Carlo (sequential MCMC) methods constitute classes of algorithms which can be used to approximate expectations with respect to (a sequence of) probability distributions and their normalising constants. While SMC methods sample particles conditionally independently at each time step, sequential MCMC methods sample particles according to a Markov chain Monte Carlo (MCMC) kernel. Introduced over twenty years ago in [6], sequential MCMC methods have attracted renewed interest recently as they empirically outperform SMC methods in some applications. We establish an $\mathbb{L}_r$-inequality (which implies a strong law of large numbers) and a central limit theorem for sequential MCMC methods and provide conditions under which errors can be controlled uniformly in time. In the context of state-space models, we also provide conditions under which sequential MCMC methods can indeed outperform standard SMC methods in terms of asymptotic variance of the corresponding Monte Carlo estimators.
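The SMC baseline discussed above can be made concrete with a minimal bootstrap particle filter on a toy linear-Gaussian state-space model; the model and all parameters are illustrative, and the comment marks the step where a sequential MCMC method would differ:

```python
import math
import random

random.seed(7)

# Toy state-space model (all parameters illustrative):
#   x_t = 0.9 * x_{t-1} + N(0, 1),   y_t = x_t + N(0, 0.5**2)
T, N = 100, 1000
xs, ys = [0.0], []
for _ in range(T):
    xs.append(0.9 * xs[-1] + random.gauss(0, 1))
    ys.append(xs[-1] + random.gauss(0, 0.5))

particles = [random.gauss(0, 1) for _ in range(N)]
estimates = []
for y in ys:
    # SMC step: propagate particles conditionally independently.
    particles = [0.9 * p + random.gauss(0, 1) for p in particles]
    # Weight by the observation likelihood (up to a constant).
    w = [math.exp(-0.5 * ((y - p) / 0.5) ** 2) for p in particles]
    total = sum(w)
    estimates.append(sum(wi * p for wi, p in zip(w, particles)) / total)
    # Multinomial resampling; a sequential MCMC method would instead
    # move the particle system with an MCMC kernel at this point.
    particles = random.choices(particles, weights=w, k=N)

rmse = math.sqrt(sum((e - x) ** 2 for e, x in zip(estimates, xs[1:])) / T)
```

With these settings the filtered estimates track the latent states far more closely than the raw state variability, which is the behaviour whose error the paper's $\mathbb{L}_r$-inequality and central limit theorem quantify for the sequential MCMC variant.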


2019 ◽  
Vol 489 (3) ◽  
pp. 4155-4160 ◽  
Author(s):  
Thomas McClintock ◽  
Eduardo Rozo

ABSTRACT Modern cosmological analyses constrain physical parameters using Markov Chain Monte Carlo (MCMC) or similar sampling techniques. Oftentimes, these techniques are computationally expensive to run and require up to thousands of CPU hours to complete. Here we present a method for reconstructing the log-probability distributions of completed experiments from an existing chain (or any set of posterior samples). The reconstruction is performed using Gaussian process regression for interpolating the log-probability. This allows for easy resampling, importance sampling, marginalization, testing different samplers, investigating chain convergence, and other operations. As an example use case, we reconstruct the posterior distribution of the most recent Planck 2018 analysis. We then resample the posterior, and generate a new chain with 40 times as many points in only 30 min. Our likelihood reconstruction tool is made publicly available online.
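The reconstruction idea can be sketched in one dimension: fit Gaussian process regression (RBF kernel) to pairs of stored samples and log-probabilities from a finished run, then evaluate the interpolated log-posterior anywhere without touching the original likelihood. Everything below is a toy stand-in, not the Planck posterior:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_prob(x):
    """Stand-in for an expensive log-posterior we no longer want to
    evaluate; a standard normal replaces the real likelihood."""
    return -0.5 * np.asarray(x) ** 2

# Posterior samples from a completed run, with stored log-probabilities.
X = rng.normal(size=40)
y = log_prob(X)

def rbf(a, b, ell=1.0):
    """Squared-exponential kernel matrix between point sets a and b."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

# GP regression: the predictive mean interpolates the log-probability.
K = rbf(X, X) + 1e-6 * np.eye(len(X))   # jitter for numerical conditioning
alpha = np.linalg.solve(K, y)

def gp_log_prob(xq):
    return rbf(np.asarray(xq), X) @ alpha

xq = np.linspace(-1.5, 1.5, 7)
err = float(np.max(np.abs(gp_log_prob(xq) - log_prob(xq))))
```

Once `alpha` is stored, resampling, importance sampling or testing a different sampler only needs `gp_log_prob`, not the original likelihood code.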


2001 ◽  
Vol 13 (11) ◽  
pp. 2549-2572 ◽  
Author(s):  
Mark Zlochin ◽  
Yoram Baram

We propose a new Markov Chain Monte Carlo algorithm, which is a generalization of the stochastic dynamics method. The algorithm performs exploration of the state-space using its intrinsic geometric structure, which facilitates efficient sampling of complex distributions. Applied to Bayesian learning in neural networks, our algorithm was found to produce results comparable to the best state-of-the-art method while consuming considerably less time.
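The stochastic dynamics family that this algorithm generalizes can be illustrated with a Metropolis-adjusted Langevin sampler, which drifts along the log-density gradient before adding noise; the geometric generalization would replace the flat (identity) metric used here with a position-dependent one. A minimal sketch with an illustrative Gaussian target:

```python
import math
import random

random.seed(3)

def log_p(x):
    return -0.5 * x * x          # illustrative target: standard normal

def grad_log_p(x):
    return -x

def mala(x0, eps, n):
    """Metropolis-adjusted Langevin: propose a gradient-drift step plus
    noise, then accept/reject to correct the discretization error."""
    x, out = x0, []
    for _ in range(n):
        mu = x + 0.5 * eps * grad_log_p(x)
        prop = mu + math.sqrt(eps) * random.gauss(0, 1)
        mu_rev = prop + 0.5 * eps * grad_log_p(prop)
        lq_fwd = -((prop - mu) ** 2) / (2 * eps)   # log q(prop | x)
        lq_rev = -((x - mu_rev) ** 2) / (2 * eps)  # log q(x | prop)
        if math.log(random.random()) < log_p(prop) - log_p(x) + lq_rev - lq_fwd:
            x = prop
        out.append(x)
    return out

samples = mala(x0=3.0, eps=0.5, n=20000)[1000:]
m = sum(samples) / len(samples)
v = sum((s - m) ** 2 for s in samples) / len(samples)
```

Because the drift term pushes proposals toward high-probability regions, such samplers can explore complex posteriors (e.g. neural-network weights) with far fewer wasted proposals than a blind random walk.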

