SPECTRAL ANALYSIS OF HYPOELLIPTIC RANDOM WALKS

2014 ◽  
Vol 14 (3) ◽  
pp. 451-491
Author(s):  
Gilles Lebeau ◽  
Laurent Michel

We study the spectral theory of a reversible Markov chain associated with a hypoelliptic random walk. This random walk depends on a parameter $h\in ]0,h_{0}]$, which is roughly the size of each step of the walk. We prove bounds on the rate of convergence to equilibrium that are uniform with respect to $h$, and we prove convergence, as $h\rightarrow 0$, to the associated hypoelliptic diffusion.
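
As a rough illustration of how the step size $h$ enters such estimates, consider a much simpler, elliptic toy example (not the hypoelliptic setting of the paper): the random walk on the unit circle whose step is uniform on $[-h,h]$. Its Markov operator has eigenfunctions $e^{ik\theta}$ with eigenvalues $\sin(kh)/(kh)$, so the spectral gap is of order $h^{2}$ and equilibration takes on the order of $h^{-2}$ steps. A short numerical sketch (all choices ours):

    import numpy as np

    # Toy elliptic example: random walk on the unit circle with step uniform on [-h, h].
    # The Markov operator has eigenvalues sin(k*h)/(k*h) for integer frequencies k,
    # so the spectral gap 1 - sin(h)/h is approximately h^2 / 6.
    for h in [0.5, 0.25, 0.125]:
        k = np.arange(1, 200)
        eigenvalues = np.sin(k * h) / (k * h)
        spectral_gap = 1.0 - eigenvalues.max()
        print(f"h = {h:6.3f}   gap = {spectral_gap:.5f}   h^2/6 = {h**2 / 6:.5f}")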

10.37236/1284 ◽  
1996 ◽  
Vol 3 (2) ◽  
Author(s):  
Phil Hanlon

Let $B$ be a Ferrers board, i.e., the board obtained by removing the Ferrers diagram of a partition from the top right corner of an $n\times n$ chessboard. We consider a Markov chain on the set $R$ of rook placements on $B$ in which one moves from a placement to any other legal placement obtained by switching the columns in which two rooks sit. We give sharp estimates for the rate of convergence of this Markov chain using spectral methods. As part of this analysis we give a complete combinatorial description of the eigenvalues of the transition matrix for this chain. We show that two extreme cases of this Markov chain correspond to random walks on groups which have been analyzed in the literature. Our estimates for rates of convergence interpolate between those two results.
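
A minimal simulation sketch of one step of this chain, assuming the board is encoded by its row lengths (row i contains columns 0, ..., row_lengths[i]-1) and a placement is stored as a row-to-column map; the names and the example board are ours, not the paper's:

    import numpy as np

    def rook_swap_step(rooks, row_lengths, rng):
        """One step of the rook-swap chain: pick two rooks uniformly at random and
        exchange the columns they occupy, staying put if either new square would
        fall off the Ferrers board."""
        rows = list(rooks)
        i, j = rng.choice(len(rows), size=2, replace=False)
        r1, r2 = rows[i], rows[j]
        c1, c2 = rooks[r1], rooks[r2]
        if c2 < row_lengths[r1] and c1 < row_lengths[r2]:   # both target squares lie on the board
            rooks[r1], rooks[r2] = c2, c1                   # rows and columns stay distinct,
        return rooks                                        # so the placement stays non-attacking

    # Example: staircase-shaped board with row lengths (4, 3, 2, 1), rooks on the anti-diagonal
    row_lengths = [4, 3, 2, 1]
    rooks = {0: 3, 1: 2, 2: 1, 3: 0}
    rng = np.random.default_rng(0)
    for _ in range(10):
        rooks = rook_swap_step(rooks, row_lengths, rng)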


2011 ◽  
Vol 43 (3) ◽  
pp. 782-813 ◽  
Author(s):  
M. Jara ◽  
T. Komorowski

In this paper we consider the scaled limit of a continuous-time random walk (CTRW) based on a Markov chain {X_n, n ≥ 0} and two observables, τ(∙) and V(∙), corresponding to the renewal times and jump sizes. Assuming that these observables belong to the domains of attraction of some stable laws, we give sufficient conditions on the chain that guarantee the existence of the scaled limits for CTRWs. An application of the results to a process that arises in quantum transport theory is provided. The results obtained in this paper generalize earlier results contained in Becker-Kern, Meerschaert and Scheffler (2004) and Meerschaert and Scheffler (2008), and the recent results of Henry and Straka (2011) and Jurlewicz, Kern, Meerschaert and Scheffler (2010), where {X_n, n ≥ 0} is a sequence of independent and identically distributed random variables.
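
For concreteness, here is a simulation sketch of a CTRW in the i.i.d. special case that this paper generalizes (independent heavy-tailed waiting times and ±1 jumps); in the Markov-modulated setting of the paper the i.i.d. draws would be replaced by the observables τ(X_n) and V(X_n) of a chain. The distributional choices below are ours, made only for illustration:

    import numpy as np

    def ctrw_path(n_jumps, alpha, rng):
        """CTRW sample path in the i.i.d. case: Pareto(alpha) waiting times (tail index
        alpha in (0,1), so they lie in the domain of attraction of an alpha-stable law)
        and symmetric +/-1 jumps. Returns the renewal times and the positions."""
        tau = rng.pareto(alpha, size=n_jumps) + 1.0   # waiting times tau_k
        V = rng.choice([-1.0, 1.0], size=n_jumps)     # jump sizes V_k
        return np.cumsum(tau), np.cumsum(V)

    rng = np.random.default_rng(0)
    t, x = ctrw_path(100_000, alpha=0.7, rng=rng)
    s = t[-1] / 2.0
    # position of the CTRW at time s: the value at the last renewal time <= s
    X_s = x[np.searchsorted(t, s, side="right") - 1]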


2010 ◽  
Vol 10 (5&6) ◽  
pp. 509-524
Author(s):  
M. Mc Gettrick

We investigate the quantum versions of a one-dimensional random walk whose corresponding Markov chain is of order 2. This corresponds to the walk having a memory of one previous step. We derive the amplitudes and probabilities for these walks, and point out how they differ from both classical random walks and quantum walks without memory.
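
For reference, the following is a standard sketch of the memoryless one-dimensional coined (Hadamard) quantum walk that such memory walks are contrasted with; the memory-one model itself enlarges the state space with a register recording the previous step and is not reproduced here. Conventions (initial coin state, shift directions) are ours:

    import numpy as np

    T = 50                                   # number of steps
    N = 2 * T + 1                            # positions -T, ..., T
    psi = np.zeros((N, 2), dtype=complex)    # amplitudes indexed by (position, coin)
    psi[T, 0] = 1.0                          # start at the origin with coin |0>
    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard coin

    for _ in range(T):
        psi = psi @ H.T                      # coin toss at every position
        shifted = np.zeros_like(psi)
        shifted[:-1, 0] = psi[1:, 0]         # coin |0> component moves one site to the left
        shifted[1:, 1] = psi[:-1, 1]         # coin |1> component moves one site to the right
        psi = shifted

    prob = (np.abs(psi) ** 2).sum(axis=1)    # position distribution after T steps
    # prob is asymmetric and spreads ballistically in T, unlike the diffusive classical walk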


2020 ◽  
Vol 02 (01) ◽  
pp. 2050004
Author(s):  
Je-Young Choi

Several methods have been developed to solve electrical circuits consisting of resistors and an ideal voltage source. A correspondence with random walks avoids the difficulties caused by choosing directions of currents and signs of potential differences. Starting from the random-walk method, we introduce a reduced transition matrix of the associated Markov chain whose dominant eigenvector alone determines the electric potentials at all nodes of the circuit and the equivalent resistance between the nodes connected to the terminals of the voltage source. Various methods for finding this eigenvector are developed from its definition. A few example circuits are solved to show the usefulness of the present approach.
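
The underlying random-walk correspondence can be sketched directly: with the source node held at potential V0 and the ground at 0, the potential is harmonic at every other node with respect to the walk whose transition probabilities are conductances normalized by their row sums. The sketch below (our own minimal version; it solves the harmonic equations rather than using the paper's reduced transition matrix and its dominant eigenvector) recovers the node potentials and the equivalent resistance:

    import numpy as np

    def solve_circuit(C, source, ground, V0=1.0):
        """Node potentials of a resistor network via the random-walk (harmonic) equations.
        C[i, j] is the conductance between nodes i and j (symmetric, zero diagonal)."""
        n = C.shape[0]
        P = C / C.sum(axis=1, keepdims=True)       # transition matrix of the associated walk
        V = np.zeros(n)
        V[source] = V0                             # boundary conditions: V(source) = V0, V(ground) = 0
        interior = [i for i in range(n) if i not in (source, ground)]
        # harmonic condition V(i) = sum_j P[i, j] V(j) at every interior node
        A = np.eye(len(interior)) - P[np.ix_(interior, interior)]
        b = V0 * P[interior, source]
        V[interior] = np.linalg.solve(A, b)
        I_out = C[source] @ (V0 - V)               # total current leaving the source node
        return V, V0 / I_out                       # potentials and equivalent resistance

    # unit resistors on a square 0-1-2-3 plus a diagonal 0-2; R_eq between 0 and 2 is 0.5
    C = np.zeros((4, 4))
    for i, j in [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]:
        C[i, j] = C[j, i] = 1.0
    potentials, R_eq = solve_circuit(C, source=0, ground=2)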


2009 ◽  
Vol 41 (01) ◽  
pp. 270-291 ◽  
Author(s):  
Hua Zhou ◽  
Kenneth Lange

Suppose that n identical particles evolve according to the same marginal Markov chain. In this setting we study chains such as the Ehrenfest chain that move a prescribed number of randomly chosen particles at each epoch. The product chain constructed by this device inherits its eigenstructure from the marginal chain. There is a further chain derived from the product chain called the composition chain that ignores particle labels and tracks the numbers of particles in the various states. The composition chain in turn inherits its eigenstructure and various properties such as reversibility from the product chain. The equilibrium distribution of the composition chain is multinomial. The current paper proves these facts in the well-known framework of state lumping and identifies the column eigenvectors of the composition chain with the multivariate Krawtchouk polynomials of Griffiths. The advantages of knowing the full spectral decomposition of the composition chain include (a) detailed estimates of the rate of convergence to equilibrium, (b) construction of martingales that allow calculation of the moments of the particle counts, and (c) explicit expressions for mean coalescence times in multi-person random walks. These possibilities are illustrated by applications to Ehrenfest chains, the Hoare and Rahman chain, Kimura's continuous-time chain for DNA evolution, a light bulb chain, and random walks on some specific graphs.
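
A small sanity check of the simplest example mentioned above, the Ehrenfest chain (N particles, a uniformly chosen particle switches urns at each epoch): its eigenvalues are 1 - 2k/N for k = 0, ..., N and its equilibrium distribution is Binomial(N, 1/2), the two-state case of the multinomial equilibrium described in the abstract. This is a numerical sketch, not the paper's Krawtchouk-polynomial machinery:

    import numpy as np
    from math import comb

    N = 10                                    # number of particles
    P = np.zeros((N + 1, N + 1))              # composition chain: k = number of particles in urn 1
    for k in range(N + 1):
        if k > 0:
            P[k, k - 1] = k / N               # a particle from urn 1 is chosen and moved
        if k < N:
            P[k, k + 1] = (N - k) / N         # a particle from urn 2 is chosen and moved

    eigenvalues = np.sort(np.linalg.eigvals(P).real)
    expected = np.sort([1.0 - 2.0 * k / N for k in range(N + 1)])
    pi = np.array([comb(N, k) / 2.0 ** N for k in range(N + 1)])   # Binomial(N, 1/2)

    assert np.allclose(eigenvalues, expected)
    assert np.allclose(pi @ P, pi)            # the binomial law is the equilibrium distribution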


2017 ◽  
Vol 114 (11) ◽  
pp. 2860-2864 ◽  
Author(s):  
Maria Chikina ◽  
Alan Frieze ◽  
Wesley Pegden

We present a statistical test to detect that a presented state of a reversible Markov chain was not chosen from a stationary distribution. In particular, given a value function for the states of the Markov chain, we would like to show rigorously that the presented state is an outlier with respect to the values, by establishing a p value under the null hypothesis that it was chosen from a stationary distribution of the chain. A simple heuristic used in practice is to sample ranks of states from long random trajectories on the Markov chain and compare these with the rank of the presented state; if the presented state is a 0.1% outlier compared with the sampled ranks (its rank is in the bottom 0.1% of sampled ranks), then this observation should correspond to a p value of 0.001. This significance is not rigorous, however, without good bounds on the mixing time of the Markov chain. Our test is the following: Given the presented state in the Markov chain, take a random walk from the presented state for any number of steps. We prove that observing that the presented state is an ε-outlier on the walk is significant at p = √(2ε) under the null hypothesis that the state was chosen from a stationary distribution. We assume nothing about the Markov chain beyond reversibility and show that significance at p ≈ √ε is best possible in general. We illustrate the use of our test with a potential application to the rigorous detection of gerrymandering in Congressional districting.
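
A minimal sketch of the test on a toy reversible chain (the toy chain, the value function and all names are ours): run a T-step trajectory from the presented state, measure the fraction ε of visited states whose value is at least as extreme as the presented state's, and report √(2ε):

    import numpy as np

    def outlier_p_value(step, presented, value, T, rng):
        """sqrt(2*eps) outlier test: eps is the fraction of states on a T-step trajectory
        started at the presented state whose value is at least as extreme (as small)."""
        vals = [value(presented)]
        x = presented
        for _ in range(T):
            x = step(x, rng)
            vals.append(value(x))
        vals = np.asarray(vals)
        eps = np.mean(vals <= vals[0])          # rank of the presented state on the trajectory
        return min(1.0, np.sqrt(2.0 * eps))

    # toy reversible chain: lazy random walk on a 100-cycle, value = distance from node 0
    n = 100
    step = lambda x, rng: (x + rng.choice([-1, 0, 1])) % n
    value = lambda x: min(x, n - x)
    rng = np.random.default_rng(0)
    p = outlier_p_value(step, presented=0, value=value, T=50_000, rng=rng)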


2010 ◽  
Vol 10 (5&6) ◽  
pp. 420-434
Author(s):  
C.-F. Chiang ◽  
D. Nagaj ◽  
P. Wocjan

We present an efficient general method for realizing a quantum walk operator corresponding to an arbitrary sparse classical random walk. Our approach is based on Grover and Rudolph's method for preparing coherent versions of efficiently integrable probability distributions [GroverRudolph]. This method is intended for use in quantum walk algorithms with polynomial speedups, whose complexity is usually measured in terms of how many times we have to apply a step of a quantum walk [Szegedy], compared to the number of necessary classical Markov chain steps. We consider a finer notion of complexity that includes the number of elementary gates it takes to implement each step of the quantum walk with some desired accuracy. The key difference between implementation approaches is that our method scales linearly in the sparsity parameter and poly-logarithmically with the inverse of the desired precision, whereas the best previously known general methods scale either quadratically in the sparsity parameter or polynomially in the inverse precision. Our approach is especially relevant for implementing quantum walks corresponding to classical random walks like those used in the classical algorithms for approximating permanents [Vigoda, Vazirani] and sampling from binary contingency tables [Stefankovi]. In those algorithms, the sparsity parameter grows with the problem size, while maintaining high precision is required.
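
For orientation, the object being implemented is (up to details) Szegedy's walk operator W = S(2Π - I) built from a classical transition matrix P. Below is a dense-matrix construction for a tiny chain, shown only to fix notation; it is exponentially sized and is not the efficient circuit implementation the paper is about. The example matrix P is ours:

    import numpy as np

    def szegedy_walk_operator(P):
        """Dense Szegedy walk operator W = S (2*Pi - I) for a row-stochastic matrix P,
        where Pi projects onto span{|psi_i> = |i> (x) sum_j sqrt(P[i,j]) |j>} and S swaps
        the two registers. For illustration only."""
        n = P.shape[0]
        psi = np.zeros((n * n, n))
        for i in range(n):
            psi[i * n:(i + 1) * n, i] = np.sqrt(P[i])     # column i holds the state |psi_i>
        Pi = psi @ psi.T                                  # projector (the columns are orthonormal)
        R = 2.0 * Pi - np.eye(n * n)                      # reflection about span{|psi_i>}
        S = np.zeros((n * n, n * n))
        for i in range(n):
            for j in range(n):
                S[j * n + i, i * n + j] = 1.0             # swap: |i>|j> -> |j>|i>
        return S @ R

    P = np.array([[0.50, 0.50, 0.00],
                  [0.25, 0.50, 0.25],
                  [0.00, 0.50, 0.50]])
    W = szegedy_walk_operator(P)
    assert np.allclose(W @ W.T, np.eye(9))                # W is unitary (real orthogonal here)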


2019 ◽  
Vol 19 (02) ◽  
pp. 2050023 ◽  
Author(s):  
Paula Cadavid ◽  
Mary Luz Rodiño Montoya ◽  
Pablo M. Rodriguez

Evolution algebras are a new type of non-associative algebras inspired by biological phenomena. A special class of such algebras, called Markov evolution algebras, is strongly related to the theory of discrete-time Markov chains. The benefit of this relation is that many results coming from probability theory may be stated in the context of abstract algebra. In this paper, we explore the connection between evolution algebras, random walks and graphs. More precisely, we study the relationships between the evolution algebra induced by a random walk on a graph and the evolution algebra determined by the same graph. Given that any Markov chain may be seen as a random walk on a graph, we believe that our results may open a new perspective in the study of Markov evolution algebras.
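
A concrete sketch of the object in question (the function names are ours): the evolution algebra induced by the random walk on a graph has natural generators e_1, ..., e_n with e_i * e_i = Σ_j P_ij e_j and e_i * e_j = 0 for i ≠ j, where P is the walk's transition matrix, and the product extends bilinearly:

    import numpy as np

    def random_walk_evolution_algebra(A):
        """Markov evolution algebra induced by the simple random walk on a graph with
        adjacency matrix A: e_i * e_i = sum_j P[i, j] e_j and e_i * e_j = 0 for i != j."""
        P = A / A.sum(axis=1, keepdims=True)        # transition matrix of the random walk

        def multiply(u, v):
            # bilinear extension: (sum_i u_i e_i)(sum_i v_i e_i) = sum_i u_i v_i (e_i * e_i)
            return (u * v) @ P

        return P, multiply

    # path graph on three vertices 0 - 1 - 2
    A = np.array([[0.0, 1.0, 0.0],
                  [1.0, 0.0, 1.0],
                  [0.0, 1.0, 0.0]])
    P, multiply = random_walk_evolution_algebra(A)
    e0 = np.array([1.0, 0.0, 0.0])
    print(multiply(e0, e0))                         # e_0 * e_0 = e_1, i.e. row 0 of P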


2009 ◽  
Vol 2009 ◽  
pp. 1-4 ◽  
Author(s):  
José Luis Palacios

Using classical arguments we derive a formula for the moments of hitting times for an ergodic Markov chain. We apply this formula to the case of simple random walk on trees and show, with an elementary electric argument, that all the moments are natural numbers.
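
The first two moments can be computed directly from the one-step recursions (a sketch of the standard linear-algebra route, not the paper's electric argument): with Q the transition matrix restricted to the non-target states, E[T] = (I - Q)^{-1} 1 and E[T^2] = (I - Q)^{-1}(1 + 2 Q E[T]). The example tree is ours:

    import numpy as np

    def hitting_time_moments(A, target):
        """First and second moments of the hitting time of `target` for the simple random
        walk on the graph with adjacency matrix A, via the one-step recursions."""
        P = A / A.sum(axis=1, keepdims=True)
        others = [i for i in range(A.shape[0]) if i != target]
        Q = P[np.ix_(others, others)]
        I = np.eye(len(others))
        ones = np.ones(len(others))
        m1 = np.linalg.solve(I - Q, ones)                 # E[T], indexed by `others`
        m2 = np.linalg.solve(I - Q, ones + 2.0 * Q @ m1)  # E[T^2]
        return m1, m2

    # a small tree: star with center 0, leaves 1, 2, 3, and an extra vertex 4 attached to 1
    A = np.zeros((5, 5))
    for u, v in [(0, 1), (0, 2), (0, 3), (1, 4)]:
        A[u, v] = A[v, u] = 1.0
    m1, m2 = hitting_time_moments(A, target=0)
    print(m1, m2)     # both vectors consist of natural numbers, as the result predicts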


2008 ◽  
Vol 40 (01) ◽  
pp. 206-228 ◽  
Author(s):  
Alex Iksanov ◽  
Martin Möhle

Let $S_0 := 0$ and $S_k := \xi_1 + \cdots + \xi_k$ for $k \in \mathbb{N} := \{1, 2, \ldots\}$, where $\{\xi_k : k \in \mathbb{N}\}$ are independent copies of a random variable $\xi$ with values in $\mathbb{N}$ and distribution $p_k := \mathrm{P}\{\xi = k\}$, $k \in \mathbb{N}$. We interpret the random walk $\{S_k : k = 0, 1, 2, \ldots\}$ as a particle jumping to the right through integer positions. Fix $n \in \mathbb{N}$ and modify the process by requiring that the particle is bumped back to its current state each time a jump would bring the particle to a state larger than or equal to $n$. This constraint defines an increasing Markov chain $\{R_k^{(n)} : k = 0, 1, 2, \ldots\}$ which never reaches the state $n$. We call this process a random walk with barrier $n$. Let $M_n$ denote the number of jumps of the random walk with barrier $n$. This paper focuses on the asymptotics of $M_n$ as $n$ tends to $\infty$. A key observation is that, under $p_1 > 0$, $\{M_n : n \in \mathbb{N}\}$ satisfies the distributional recursion $M_1 = 0$ and $M_n \stackrel{\mathrm{d}}{=} M_{n - I_n} + 1$ for $n = 2, 3, \ldots$, where $I_n$ is independent of $M_2, \ldots, M_{n-1}$ with distribution $\mathrm{P}\{I_n = k\} = p_k / (p_1 + \cdots + p_{n-1})$, $k \in \{1, \ldots, n-1\}$. Depending on the tail behavior of the distribution of $\xi$, several scalings for $M_n$ and corresponding limiting distributions come into play, including stable distributions and distributions of exponential integrals of subordinators. The methods used in this paper are mainly probabilistic. The key tool is to compare (couple) the number of jumps, $M_n$, with the first time, $N_n$, when the unrestricted random walk $\{S_k : k = 0, 1, \ldots\}$ reaches a state larger than or equal to $n$. The results are applied to derive the asymptotics of the number of collision events (that take place until there is just a single block) for $\beta(a, b)$-coalescent processes with parameters $0 < a < 2$ and $b = 1$.
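
A quick consistency check of the recursion stated above, under the assumption $p_1 > 0$ and with the concrete choice $p_k = 2^{-k}$ (both choices ours): simulate $M_n$ directly by rejecting jumps that would reach the barrier, and compare its empirical mean with the expectation computed from the recursion.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 12
    p = np.array([2.0 ** -k for k in range(1, n)])   # concrete choice p_k = 2^{-k}, so p_1 > 0

    def simulate_M(n, rng):
        """Number of jumps of the random walk with barrier n: proposals xi ~ Geometric(1/2)
        are rejected whenever they would carry the particle to a state >= n."""
        r, m = 0, 0
        while r < n - 1:                 # with p_1 > 0 the particle eventually sits at n - 1
            xi = rng.geometric(0.5)      # P(xi = k) = 2^{-k}, k = 1, 2, ...
            if r + xi <= n - 1:
                r, m = r + xi, m + 1
        return m

    # E[M_n] from the distributional recursion M_1 = 0, M_n = M_{n - I_n} + 1 in distribution
    e = np.zeros(n + 1)
    for j in range(2, n + 1):
        w = p[: j - 1] / p[: j - 1].sum()            # law of I_j on {1, ..., j - 1}
        e[j] = 1.0 + sum(w[k - 1] * e[j - k] for k in range(1, j))

    mean_sim = np.mean([simulate_M(n, rng) for _ in range(20_000)])
    print(mean_sim, e[n])                            # the two values should agree closely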

