Convergence of quasi-stationary to stationary distributions for stochastically monotone Markov processes

1986 · Vol 23 (1) · pp. 215-220
Author(s): Moshe Pollak, David Siegmund

It is shown that if a stochastically monotone Markov process on [0, ∞) with stationary distribution H has its state space truncated by making all states in [B, ∞) absorbing, then the quasi-stationary distribution of the new process converges to H as B → ∞.
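The truncation scheme in the abstract can be illustrated numerically. Below is a sketch (not the authors' construction) for a reflected random walk on the non-negative integers with downward drift, whose stationary distribution H is geometric with ratio p/(1−p). Making all states in [B, ∞) absorbing leaves a substochastic matrix on {0, …, B−1}, whose normalized left Perron eigenvector is the quasi-stationary distribution; the total-variation gap to H shrinks as B grows. All parameters are illustrative.

```python
import numpy as np

def truncated_qsd(p, B):
    """Quasi-stationary distribution of the walk killed at B.

    Transient states are {0, ..., B-1}; the substochastic matrix Q
    simply drops the mass that jumps into the absorbing set [B, inf).
    The QSD is the normalized left Perron eigenvector of Q.
    """
    q = 1.0 - p
    Q = np.zeros((B, B))
    Q[0, 0] = 1.0 - p              # hold at 0 (reflecting boundary)
    Q[0, 1] = p
    for i in range(1, B):
        Q[i, i - 1] = q
        if i + 1 < B:
            Q[i, i + 1] = p        # the jump B-1 -> B is absorption, so it is dropped
    vals, vecs = np.linalg.eig(Q.T)
    j = np.argmax(vals.real)       # Perron root of the substochastic matrix
    v = np.abs(vecs[:, j].real)
    return v / v.sum()

p = 0.3                            # downward drift, so H is geometric
r = p / (1 - p)
H = (1 - r) * r ** np.arange(50)   # stationary distribution of the untruncated walk

for B in (5, 20, 50):
    qsd = truncated_qsd(p, B)
    print(B, np.abs(qsd - H[:B]).sum())
```

The printed gaps decrease with B, matching the convergence statement above.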


1995 · Vol 27 (1) · pp. 120-145
Author(s): Anthony G. Pakes

Under consideration is a continuous-time Markov process with non-negative integer state space and a single absorbing state 0. Let T be the hitting time of zero and suppose P_i(T < ∞) ≡ 1 and (*) lim_{i→∞} P_i(T > t) = 1 for all t > 0. Most known cases satisfy (*). The Markov process has a quasi-stationary distribution iff E_i(e^{εT}) < ∞ for some ε > 0. The published proof of this fact makes crucial use of (*). By means of examples it is shown that (*) can be violated in quite drastic ways without destroying the existence of a quasi-stationary distribution.


1994 · Vol 31 (3) · pp. 626-634
Author(s): James Ledoux, Gerardo Rubino, Bruno Sericola

We characterize the conditions under which a finite absorbing Markov process (in discrete or continuous time) can be transformed into a new aggregated process that preserves the Markov property, whose states are the elements of a given partition of the original state space. A key tool in obtaining this characterization is the quasi-stationary distribution associated with absorbing processes; it allows the absorbing case to be related to the irreducible one. We show how to compute, by a finite algorithm, the set of all initial distributions of the original process that lead to an aggregated homogeneous Markov process. Finally, it is shown that the continuous-time case can always be reduced to the discrete-time one via the uniformization technique.
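The simplest case of the aggregation question above is classical ordinary (strong) lumpability, where the aggregated chain is Markov for every initial distribution: within each block of the partition, all rows of the transition matrix must assign the same total probability to each block. A minimal check of that condition (a special case of what the paper characterizes, not its algorithm; the matrix and partition are illustrative):

```python
import numpy as np

def is_lumpable(P, partition, tol=1e-12):
    """Ordinary lumpability test: within each block of the partition,
    every row of P must give the same aggregate mass to each block."""
    for block in partition:
        rows = P[block]                                   # rows of this block
        agg = np.column_stack([rows[:, b].sum(axis=1)     # mass sent to each block
                               for b in partition])
        if np.ptp(agg, axis=0).max() > tol:               # rows disagree -> not lumpable
            return False
    return True

# A 3-state chain: rows 0 and 1 both send mass 0.7 to {0,1} and 0.3 to {2},
# so the chain is lumpable over the partition {{0,1}, {2}}.
P = np.array([[0.2, 0.5, 0.3],
              [0.4, 0.3, 0.3],
              [0.1, 0.1, 0.8]])
print(is_lumpable(P, [[0, 1], [2]]))
print(is_lumpable(P, [[0, 2], [1]]))
```

For absorbing chains and partition-dependent initial distributions, the paper's finite algorithm goes well beyond this row-wise test.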


1978 · Vol 10 (3) · pp. 570-586
Author(s): James A. Cavender

Let q_n(t) be the conditioned probability of finding a birth-and-death process in state n at time t, given that absorption into state 0 has not occurred by then. A family {q_1(t), q_2(t), · · ·} that is constant in time is a quasi-stationary distribution. If any exist, the quasi-stationary distributions comprise a one-parameter family related to quasi-stationary distributions of finite state-space approximations to the process.
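A finite state-space approximation of the kind mentioned in the abstract can be computed directly: restrict the generator of a birth-and-death process to the transient states 1..N (the death jump 1 → 0 becomes killing, and births from the top state N are switched off), and take the normalized left eigenvector for the least-negative eigenvalue. A sketch with illustrative linear rates, not tied to the paper's specific examples:

```python
import numpy as np

def bd_qsd(birth, death, N):
    """QSD of a linear birth-and-death process (rates k*birth, k*death),
    absorbed at 0, truncated to states 1..N.

    Returns the normalized left eigenvector of the killed generator A
    and the decay rate lambda, where qsd @ A = -lambda * qsd.
    """
    A = np.zeros((N, N))
    for k in range(1, N + 1):
        i = k - 1
        if k < N:                      # births from state N are dropped (truncation)
            A[i, i + 1] = birth * k
            A[i, i] -= birth * k
        A[i, i] -= death * k           # death k -> k-1; for k = 1 this is absorption
        if k > 1:
            A[i, i - 1] = death * k
    vals, vecs = np.linalg.eig(A.T)
    j = np.argmax(vals.real)           # least-negative eigenvalue
    v = np.abs(vecs[:, j].real)
    return v / v.sum(), -vals[j].real

qsd, lam = bd_qsd(1.0, 1.2, 60)
print(lam, qsd[:5])
```

Increasing N and varying the truncation at the top boundary traces out different members of the approximating family discussed in the paper.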


1970 · Vol 7 (2) · pp. 388-399
Author(s): C. K. Cheong

Our main concern in this paper is the convergence, as t → ∞, of the quantities i, j ∈ E; where Pij (t) is the transition probability of a semi-Markov process whose state space E is irreducible but not closed (i.e., escape from E is possible), and rj is the probability of eventual escape from E conditional on the initial state being i. The theorems proved here generalize some results of Seneta and Vere-Jones ([8] and [11]) for Markov processes.


2016 · Vol 195 · pp. 469-495
Author(s): Giacomo Di Gesù, Tony Lelièvre, Dorian Le Peutrec, Boris Nectoux

We are interested in the connection between a metastable continuous state space Markov process (satisfying e.g. the Langevin or overdamped Langevin equation) and a jump Markov process in a discrete state space. More precisely, we use the notion of quasi-stationary distribution within a metastable state for the continuous state space Markov process to parametrize the exit event from the state. This approach is useful to analyze and justify methods which use the jump Markov process underlying a metastable dynamics as a support to efficiently sample the state-to-state dynamics (accelerated dynamics techniques). Moreover, it is possible by this approach to quantify the error on the exit event when the parametrization of the jump Markov model is based on the Eyring–Kramers formula. This therefore provides a mathematical framework to justify the use of transition state theory and the Eyring–Kramers formula to build kinetic Monte Carlo or Markov state models.
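The kind of jump model discussed above can be made concrete with a minimal kinetic Monte Carlo step in which the exit rates from a metastable state take the Eyring–Kramers form (prefactor times exp(−β·barrier)). This is only an illustrative sketch of the general kMC mechanism; the channel names, barriers, and prefactors are invented, and the paper's error analysis is not reproduced here.

```python
import math
import random

def eyring_kramers_rate(prefactor, barrier, beta):
    """Eyring-Kramers / harmonic TST form: nu * exp(-beta * dE)."""
    return prefactor * math.exp(-beta * barrier)

def kmc_exit(rates, rng=random.random):
    """One kinetic Monte Carlo step out of a metastable state:
    choose an exit channel with probability proportional to its rate,
    and draw an exponential waiting time with the total exit rate."""
    total = sum(rates.values())
    u, acc = rng() * total, 0.0
    for channel, r in rates.items():
        acc += r
        if u <= acc:
            break
    dt = -math.log(rng()) / total      # Exp(total) holding time
    return channel, dt

beta = 4.0                             # inverse temperature (illustrative)
rates = {"left":  eyring_kramers_rate(1.0, 0.5, beta),
         "right": eyring_kramers_rate(1.0, 1.0, beta)}
channel, dt = kmc_exit(rates)
print(channel, dt)
```

The quasi-stationary-distribution viewpoint in the paper is what justifies treating the exit event as such a (channel, time) pair with these exponential statistics.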

