Large deviations for longest runs in Markov chains

2020 ◽  
Vol 26 (2) ◽  
pp. 309-314
Author(s):  
Zhenxia Liu ◽  
Yurong Zhu

Abstract: We continue our investigation of general large deviation principles (LDPs) for longest runs. Previously, a general LDP for the longest success run in a sequence of independent Bernoulli trials was derived in [Z. Liu and X. Yang, A general large deviation principle for longest runs, Statist. Probab. Lett. 110 (2016), 128–132]. In the present note, we establish a general LDP for the longest success run in a two-state (success or failure) Markov chain, which recovers the previous result in the aforementioned paper. The main new ingredient is to apply suitable estimates of the distribution function of the longest success run recently established in [Z. Liu and X. Yang, On the longest runs in Markov chains, Probab. Math. Statist. 38 (2018), no. 2, 407–428].
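
As an illustrative aside (ours, not the authors'), the quantity in question is easy to simulate. The sketch below draws a two-state chain and returns its longest success run; the parameter names p01 and p11 (the probabilities of success given that the previous trial failed or succeeded) are our own convention, and the chain is started crudely rather than from stationarity.

```python
import random

def longest_success_run(n, p01, p11, seed=0):
    """Simulate n steps of a two-state chain (0 = failure, 1 = success)
    and return the length of its longest success run.
    p01 = P(success | failure), p11 = P(success | success)."""
    rng = random.Random(seed)
    state = 1 if rng.random() < p01 else 0  # crude start, not stationary
    longest = current = state
    for _ in range(n - 1):
        p = p11 if state == 1 else p01
        state = 1 if rng.random() < p else 0
        current = current + 1 if state == 1 else 0
        longest = max(longest, current)
    return longest
```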

2006 ◽  
Vol 06 (04) ◽  
pp. 487-520 ◽  
Author(s):  
FUQING GAO ◽  
JICHENG LIU

We prove large deviation principles for solutions of small perturbations of SDEs in Hölder norms and Sobolev norms, where the SDEs have non-Markovian coefficients. As an application, we obtain a large deviation principle for solutions of anticipating SDEs in terms of (r, p) capacities on the Wiener space.


1988 ◽  
Vol 25 (1) ◽  
pp. 106-119 ◽  
Author(s):  
Richard Arratia ◽  
Pricilla Morris ◽  
Michael S. Waterman

A derivation of a law of large numbers for the highest-scoring matching subsequence is given. Let X_k, Y_k be i.i.d. letters from a finite alphabet S with common distribution q = (q(i))_{i∈S}, and let v = (v(i))_{i∈S} be a sequence of non-negative real numbers assigned to the letters of S. Using a scoring system similar to that of the game Scrabble, the score of a word w = i_1 ⋯ i_m is defined to be V(w) = v(i_1) + ⋯ + v(i_m). Let V_n denote the value of the highest-scoring matching contiguous subsequence between X_1 X_2 ⋯ X_n and Y_1 Y_2 ⋯ Y_n. In this paper, we show that V_n / (K log n) → 1 a.s., where K ≡ K(q, v). The method employed here involves 'stuttering' the letters to construct a Markov chain and applying previous results for the length of the longest matching subsequence. An explicit form for β ∈ Pr(S), where β(i) denotes the proportion of letter i found in the highest-scoring word, is given. A similar treatment for Markov chains is also included. Implicit in these results is a large deviation result for the additive functional H ≡ Σ_{n<τ} v(X_n), for a Markov chain stopped at the hitting time τ of some state. We give this large deviation result explicitly, for Markov chains in discrete time and in continuous time.
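
For readers who want to experiment, here is a minimal sketch (ours, not from the paper) of computing V_n. Because the letter scores v are non-negative, the classical longest-common-substring dynamic program can carry the score along instead of the length.

```python
import random

def best_common_substring_score(x, y, v):
    """V_n: the highest score V(w) over words w occurring as a contiguous
    substring of both x and y, with non-negative letter scores v[letter].
    O(len(x)*len(y)) dynamic program: s[j] holds the score of the common
    substring ending at the current x position and at y position j."""
    s = [0.0] * (len(y) + 1)
    best = 0.0
    for xi in x:
        prev_diag = 0.0  # s[j-1] from the previous row
        for j, yj in enumerate(y, start=1):
            cur = s[j]
            s[j] = prev_diag + v[xi] if xi == yj else 0.0
            prev_diag = cur
            best = max(best, s[j])
    return best

# Toy usage on random sequences; V_n should grow like K(q, v) * log n.
rng = random.Random(1)
v = {"a": 1.0, "b": 2.0, "c": 3.0, "d": 4.0}
x = rng.choices("abcd", k=500)
y = rng.choices("abcd", k=500)
print(best_common_substring_score(x, y, v))
```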


2010 ◽  
Vol 47 (04) ◽  
pp. 967-975
Author(s):  
Joe Suzuki

In this paper we prove that the stationary distribution of populations in genetic algorithms concentrates on the uniform population with the highest fitness value as the selective pressure goes to ∞ and the mutation probability goes to 0. The sufficient condition obtained is based on the work of Albuquerque and Mazza (2000), who, following Cerf (1998), applied the large deviation principle approach (Freidlin–Wentzell theory) to the Markov chain of genetic algorithms. The sufficient condition is more general than that of Albuquerque and Mazza, and covers a set of parameters which were not found by Cerf.
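
To make the limit concrete, the following toy sketch (our construction, much cruder than the model analyzed by Cerf and by Albuquerque and Mazza) runs one generation of a genetic algorithm with Boltzmann selection at pressure beta and per-bit mutation probability pmut. As beta grows and pmut shrinks, the simulated population spends nearly all its time at the uniform population of maximal fitness.

```python
import math
import random

def ga_step(pop, fitness, beta, pmut, rng):
    """One generation of a toy GA on bitstrings: Boltzmann selection
    (probability proportional to exp(beta * fitness), so large beta means
    strong selective pressure), then independent per-bit mutation."""
    weights = [math.exp(beta * fitness(ind)) for ind in pop]
    selected = rng.choices(pop, weights=weights, k=len(pop))
    return [tuple(b ^ (rng.random() < pmut) for b in ind) for ind in selected]

rng = random.Random(0)
onemax = sum  # fitness = number of ones; all-ones string is optimal
pop = [tuple(rng.randint(0, 1) for _ in range(8)) for _ in range(20)]
for _ in range(200):
    pop = ga_step(pop, onemax, beta=5.0, pmut=0.001, rng=rng)
# With strong pressure and tiny mutation, the population tends to
# concentrate near the all-ones string.
```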


1990 ◽  
Vol 27 (1) ◽  
pp. 44-59 ◽  
Author(s):  
James A. Bucklew ◽  
Peter Ney ◽  
John S. Sadowsky

Importance sampling is a Monte Carlo simulation technique in which the simulation distribution differs from the true underlying distribution. In order to obtain an unbiased Monte Carlo estimate of the desired parameter, simulated events are weighted to reflect their true relative frequency. In this paper, we consider the estimation via simulation of certain large deviations probabilities for time-homogeneous Markov chains. We first demonstrate that when the simulation distribution is also a homogeneous Markov chain, the estimator variance will vanish exponentially as the sample size n tends to ∞. We then prove that the estimator variance is asymptotically minimized by the same exponentially twisted Markov chain which arises in large deviation theory, and furthermore, that this optimum is unique among uniformly recurrent homogeneous Markov chain simulation distributions.
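
The exponential twist is easiest to see in the i.i.d. case, a degenerate homogeneous Markov chain. The sketch below (our illustration, not the paper's construction) estimates P(S_n ≥ na) for a sum of standard normals by sampling from the tilted law N(a, 1), for which the tilt θ = a solves Λ'(θ) = a, and reweighting each sample by the likelihood ratio exp(−θ S_n + n θ²/2).

```python
import math
import random

def twisted_estimate(n, a, m, seed=0):
    """Importance-sampling estimate of P(S_n >= n*a), S_n a sum of n
    i.i.d. N(0,1) variables, using the exponentially twisted law N(a,1).
    Each of the m replications contributes the unbiasing weight
    exp(-theta*S_n + n*theta^2/2) when the rare event occurs."""
    rng = random.Random(seed)
    theta = a  # optimal tilt: cumulant generating function derivative = a
    total = 0.0
    for _ in range(m):
        s = sum(rng.gauss(theta, 1.0) for _ in range(n))
        if s >= n * a:
            total += math.exp(-theta * s + n * theta * theta / 2.0)
    return total / m

# For n = 25, a = 1, the exact value is 1 - Phi(5) ≈ 2.87e-7; naive Monte
# Carlo would need ~10^9 samples, while ~10^4 twisted samples suffice.
print(twisted_estimate(25, 1.0, 10_000))
```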


2014 ◽  
Vol 36 (1) ◽  
pp. 127-141 ◽  
Author(s):  
HUAIBIN LI

We show some level-2 large deviation principles for real and complex one-dimensional maps satisfying a weak form of hyperbolicity. More precisely, we prove a large deviation principle for the distribution of iterated preimages, periodic points, and Birkhoff averages.
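
As a standard illustration of the objects involved (our sketch, not code from the paper), the snippet below computes a Birkhoff average along an orbit of a one-dimensional map; the level-2 LDPs above quantify how unlikely it is for such averages to deviate from their ergodic limit.

```python
def birkhoff_average(f, obs, x0, n):
    """Birkhoff average (1/n) * sum_{k<n} obs(f^k(x0)) along the orbit
    of x0 under the one-dimensional map f."""
    total, x = 0.0, x0
    for _ in range(n):
        total += obs(x)
        x = f(x)
    return total / n

# Example: the full logistic map T(x) = 4x(1-x). For Lebesgue-a.e. x0,
# the average of obs(x) = x converges to 1/2, the mean of the invariant
# density 1 / (pi * sqrt(x(1-x))).
print(birkhoff_average(lambda x: 4 * x * (1 - x), lambda x: x, 0.123, 10**6))
```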


2019 ◽  
Vol 51 (01) ◽  
pp. 136-167 ◽  
Author(s):  
Stephan Eckstein

Abstract: We consider discrete-time Markov chains with Polish state space. The large deviations principle for empirical measures of a Markov chain can equivalently be stated in Laplace principle form, which builds on the convex dual pair of relative entropy (or Kullback–Leibler divergence) and the cumulant generating functional f ↦ ln ∫ exp(f). Following the approach of Lacker (2016) in the independent and identically distributed case, we generalize the Laplace principle to a larger class of convex dual pairs. We present in depth one application arising from this extension, which yields large deviation results and a weak law of large numbers for certain robust Markov chains—similar to Markov set chains—where we model robustness via the first Wasserstein distance. The setting and proof of the extended Laplace principle are based on the weak convergence approach to large deviations by Dupuis and Ellis (2011).
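
The convex dual pair referred to here is the Donsker–Varadhan variational formula R(ν‖μ) = sup_f { ⟨f, ν⟩ − ln ∫ e^f dμ }. The sketch below (ours) checks the duality numerically on a three-point space, where the supremum is attained at f = ln(dν/dμ).

```python
import math

def kl(nu, mu):
    """Relative entropy R(nu || mu) on a finite space."""
    return sum(p * math.log(p / q) for p, q in zip(nu, mu) if p > 0)

def dual_value(f, nu, mu):
    """Donsker-Varadhan dual objective: <f, nu> - ln E_mu[exp(f)]."""
    return (sum(fi * p for fi, p in zip(f, nu))
            - math.log(sum(math.exp(fi) * q for fi, q in zip(f, mu))))

mu = [0.5, 0.3, 0.2]
nu = [0.2, 0.3, 0.5]
# The supremum over f is attained at the log-likelihood ratio:
f_star = [math.log(p / q) for p, q in zip(nu, mu)]
assert abs(dual_value(f_star, nu, mu) - kl(nu, mu)) < 1e-12
```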


1994 ◽  
Vol 7 (3) ◽  
pp. 357-371 ◽  
Author(s):  
Vladimir V. Kalashnikov

Ergodicity, continuity, finite approximations and rare visits of general Markov chains are investigated. The results obtained permit further quantitative analysis of characteristics such as rates of convergence, continuity (measured as a distance between perturbed and non-perturbed characteristics), deviations between Markov chains, accuracy of approximations, and bounds on the distribution function of the first visit time to a chosen subset. The underlying technique embeds the general Markov chain into a wide-sense regenerative process with the help of a splitting construction.


2003 ◽  
Vol 40 (02) ◽  
pp. 346-360 ◽  
Author(s):  
James C. Fu ◽  
Liqun Wang ◽  
W. Y. Wendy Lou

Consider a sequence of outcomes from Markov dependent two-state (success-failure) trials. In this paper, the exact distributions are derived for three longest-run statistics: the longest failure run, longest success run, and the maximum of the two. The method of finite Markov chain imbedding is used to obtain these exact distributions, and their bounds and large deviation approximation are also studied. Numerical comparisons among the exact distributions, bounds, and approximations are provided to illustrate the theoretical results. With some modifications, we show that the results can be easily extended to Markov dependent multistate trials.
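
A minimal sketch of the imbedding idea (our simplification, covering only the longest success run; the paper also treats the longest failure run and the maximum of the two): take the current success-run length as the chain state and make run length k absorbing, so that P(L_n < k) is the non-absorbed mass after n trials. The parameter names pi1, p01, p11 are our own.

```python
def longest_run_cdf(n, k, p01, p11, pi1):
    """P(L_n < k) for the longest success run L_n in n Markov-dependent
    trials, by finite Markov chain imbedding. State r in {0,...,k-1} is
    the current success-run length (r = 0 also means 'last trial failed');
    reaching run length k is absorbing.
    p01 = P(S|F), p11 = P(S|S), pi1 = P(first trial is a success)."""
    if k > n:
        return 1.0
    probs = [0.0] * k          # mass on run lengths 0..k-1 after trial 1
    absorbed = 0.0
    if k == 1:
        absorbed = pi1         # a single success already gives a run of 1
    else:
        probs[1] = pi1
    probs[0] = 1.0 - pi1
    for _ in range(n - 1):
        nxt = [0.0] * k
        # any failure resets the run length to 0
        nxt[0] = (probs[0] * (1 - p01)
                  + sum(probs[r] * (1 - p11) for r in range(1, k)))
        for r in range(k):
            p_succ = p01 if r == 0 else p11
            if r + 1 < k:
                nxt[r + 1] += probs[r] * p_succ
            else:
                absorbed += probs[r] * p_succ  # run reaches length k
        probs = nxt
    return sum(probs)  # = 1 - P(L_n >= k)

# Example: P(longest success run < 5) in 100 trials of a sticky chain.
print(longest_run_cdf(100, 5, p01=0.3, p11=0.7, pi1=0.3))
```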


2012 ◽  
Vol 12 (03) ◽  
pp. 1150022 ◽  
Author(s):  
STEFFEN DEREICH ◽  
GEORGI DIMITROFF

In this paper, stochastic flows driven by Kunita-type stochastic differential equations are studied, focusing on support theorems (ST) and large deviation principles (LDP). We establish a new ST and LDP for Brownian flows with respect to a fine Hölder topology. Our approach is based on recent advances in rough path theory, which is the natural framework for proving ST and LDP. Nevertheless, while rigorous, our presentation steers rather clear of rough path technicalities and is accessible to readers not familiar with them. We view the localized Brownian stochastic flow as a projection of the solution of a rough path differential equation, which implies the ST and LDP. In a second step, the results are generalized to the global flow.

