Exact aggregation of absorbing Markov processes using the quasi-stationary distribution

1994 ◽  
Vol 31 (3) ◽  
pp. 626-634 ◽  
Author(s):  
James Ledoux ◽  
Gerardo Rubino ◽  
Bruno Sericola

We characterize the conditions under which an absorbing Markovian finite process (in discrete or continuous time) can be transformed into a new aggregated process conserving the Markovian property, whose states are elements of a given partition of the original state space. To obtain this characterization, a key tool is the quasi-stationary distribution associated with absorbing processes. It allows the absorbing case to be related to the irreducible one. We are able to calculate the set of all initial distributions of the starting process leading to an aggregated homogeneous Markov process by means of a finite algorithm. Finally, it is shown that the continuous-time case can always be reduced to the discrete one using the uniformization technique.
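As a rough numerical companion to the two ingredients named above, the sketch below (a made-up three-state example, not the authors' aggregation algorithm) uniformizes an absorbing continuous-time generator restricted to its transient states and computes the quasi-stationary distribution as the normalized left Perron eigenvector of the resulting substochastic matrix.

```python
import numpy as np

# Minimal sketch, not the authors' algorithm: uniformize an absorbing CTMC
# generator restricted to its transient states, then compute the
# quasi-stationary distribution as the normalized left Perron eigenvector of
# the resulting substochastic matrix.  The generator entries are invented.

Q = np.array([[-2.0,  1.0,  0.5],   # transient states only; each row's missing
              [ 1.0, -3.0,  1.0],   # mass is the rate of absorption
              [ 0.5,  0.5, -2.0]])

# Uniformization: P = I + Q / Lambda with Lambda >= max_i |q_ii|
Lam = np.max(-np.diag(Q))
P = np.eye(Q.shape[0]) + Q / Lam          # substochastic: rows sum to < 1

# Quasi-stationary distribution: left eigenvector for the largest eigenvalue of P
vals, vecs = np.linalg.eig(P.T)
qsd = np.abs(vecs[:, np.argmax(vals.real)].real)
qsd /= qsd.sum()
print("quasi-stationary distribution:", qsd)
```

Because P = I + Q/Λ shares its left Perron eigenvector with Q, the distribution obtained from the uniformized (discrete-time) chain is the same as for the original continuous-time process, which is the sense in which the continuous case reduces to the discrete one.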


1995 ◽  
Vol 27 (1) ◽  
pp. 120-145 ◽  
Author(s):  
Anthony G. Pakes

Under consideration is a continuous-time Markov process with non-negative integer state space and a single absorbing state 0. Let T be the hitting time of zero and suppose P_i(T < ∞) ≡ 1 and (*) lim_{i→∞} P_i(T > t) = 1 for all t > 0. Most known cases satisfy (*). The Markov process has a quasi-stationary distribution iff E_i(e^{εT}) < ∞ for some ε > 0. The published proof of this fact makes crucial use of (*). By means of examples it is shown that (*) can be violated in quite drastic ways without destroying the existence of a quasi-stationary distribution.
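For a finite chain the moment condition is easy to examine numerically. The sketch below (a toy three-state example, not from the paper) computes the decay parameter α of the substochastic generator and evaluates E_i(e^{εT}) for an ε below α via the resolvent formula.

```python
import numpy as np

# Toy three-state check (invented rates, not from the paper): for a finite
# absorbing chain with substochastic generator Q on the transient states and
# absorption-rate vector q = -Q @ 1, the hitting time T of state 0 satisfies
#   E_i[exp(eps*T)] = [(-(Q + eps*I))^{-1} q]_i,
# which is finite exactly when eps < alpha, the decay parameter of Q.

Q = np.array([[-3.0,  2.0,  0.0],
              [ 1.0, -2.0,  1.0],
              [ 0.0,  2.0, -2.5]])
q = -Q.sum(axis=1)                          # rates of absorption into state 0

alpha = -np.max(np.linalg.eigvals(Q).real)  # decay parameter
eps = 0.5 * alpha                           # any eps in (0, alpha) works
moments = np.linalg.solve(-(Q + eps * np.eye(3)), q)
print("alpha =", alpha)
print("E_i[exp(eps*T)]:", moments)
```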


1986 ◽  
Vol 23 (1) ◽  
pp. 215-220 ◽  
Author(s):  
Moshe Pollak ◽  
David Siegmund

It is shown that if a stochastically monotone Markov process on [0,∞) with stationary distribution H has its state space truncated by making all states in [B,∞) absorbing, then the quasi-stationary distribution of the new process converges to H as B →∞.
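A quick numerical illustration of this convergence is sketched below, using an M/M/1-type birth-death chain with made-up rates (my own example, not the paper's): truncating at level B and computing the quasi-stationary distribution of the surviving states, the result approaches the geometric stationary distribution H as B grows.

```python
import numpy as np

# Illustration with invented rates: an M/M/1-type birth-death chain with birth
# rate lam < death rate mu is stochastically monotone and has the geometric
# stationary distribution H(i) = (1 - rho) * rho**i.  Making the states >= B
# absorbing, the QSD of the surviving states approaches H as B grows.

lam, mu = 0.6, 1.0
rho = lam / mu

def qsd_truncated(B):
    Q = np.zeros((B, B))                   # generator on surviving states 0..B-1
    for i in range(B):
        if i > 0:
            Q[i, i - 1] = mu
        if i < B - 1:
            Q[i, i + 1] = lam
        Q[i, i] = -(lam + (mu if i > 0 else 0.0))   # births from B-1 are absorbed
    vals, vecs = np.linalg.eig(Q.T)
    v = np.abs(vecs[:, np.argmax(vals.real)].real)
    return v / v.sum()

H = (1 - rho) * rho ** np.arange(10)       # first 10 terms of the stationary law
for B in (10, 20, 40):
    print(B, np.max(np.abs(qsd_truncated(B)[:10] - H)))
```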


1983 ◽  
Vol 20 (1) ◽  
pp. 185-190 ◽  
Author(s):  
Mark Scott ◽  
Dean L. Isaacson

By assuming the proportionality of the intensity functions at each time point for a continuous-time non-homogeneous Markov process, strong ergodicity for the process is determined through strong ergodicity of a related discrete-time Markov process. For processes having proportional intensities, strong ergodicity implies having the limiting matrix L satisfy L · P(s, t) = L, where P(s, t) is the matrix of transition functions.
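A one-line sketch of why proportionality helps (my own notation, assuming all intensities share a common scalar factor): if Q(t) = c(t)Q for a fixed intensity matrix Q and a nonnegative function c, the transition functions factor through a single matrix exponential,

```latex
% Assumed form Q(t) = c(t)\,Q with c(u) \ge 0 and a fixed intensity matrix Q.
P(s,t) = \exp\!\Big(\Big(\int_s^t c(u)\,du\Big)\,Q\Big),
\qquad
L \, P(s,t) = L \quad \text{whenever } L\, e^{\tau Q} = L \text{ for all } \tau \ge 0,
```

so questions about the non-homogeneous process reduce to questions about the single generator Q and a related discrete-time chain derived from it.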


2017 ◽  
Vol 13 (3) ◽  
pp. 7244-7256
Author(s):  
Miłosława Sokół

The matrices of non-homogeneous Markov processes consist of time-dependent functions whose values at each time point form typical intensity matrices. For solving some problems they must be changed into stochastic matrices. A stochastic matrix for a non-homogeneous Markov process consists of time-dependent functions whose values are probabilities and depend on the assumed time period. In this paper formulas for these functions are derived. Although the formula is not simple, it allows proving some theorems for Markov stochastic processes that are well known for homogeneous processes; for non-homogeneous ones the proofs turn out to be shorter.
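The derived formulas themselves are not reproduced here, but the conversion the abstract describes can be sketched numerically: the snippet below (with a made-up two-state Q(t)) integrates the Kolmogorov forward equation dP(s,t)/dt = P(s,t)Q(t), P(s,s) = I, to obtain a stochastic matrix over a chosen time period [s, t].

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch only (invented two-state Q(t), not the paper's formula): a stochastic
# matrix P(s,t) for a non-homogeneous process is obtained by integrating the
# Kolmogorov forward equation  dP(s,t)/dt = P(s,t) Q(t),  P(s,s) = I.

def Q(t):
    a, b = 1.0 + np.sin(t) ** 2, 0.5        # time-dependent intensities (invented)
    return np.array([[-a,  a],
                     [ b, -b]])

def P(s, t, n=2):
    rhs = lambda u, p: (p.reshape(n, n) @ Q(u)).ravel()
    sol = solve_ivp(rhs, (s, t), np.eye(n).ravel(), rtol=1e-10, atol=1e-12)
    return sol.y[:, -1].reshape(n, n)

Pst = P(0.0, 2.0)
print(Pst)
print("row sums:", Pst.sum(axis=1))         # each row sums to 1
```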


2016 ◽  
Vol 195 ◽  
pp. 469-495 ◽  
Author(s):  
Giacomo Di Gesù ◽  
Tony Lelièvre ◽  
Dorian Le Peutrec ◽  
Boris Nectoux

We are interested in the connection between a metastable continuous state space Markov process (satisfying e.g. the Langevin or overdamped Langevin equation) and a jump Markov process in a discrete state space. More precisely, we use the notion of quasi-stationary distribution within a metastable state for the continuous state space Markov process to parametrize the exit event from the state. This approach is useful to analyze and justify methods which use the jump Markov process underlying a metastable dynamics as a support to efficiently sample the state-to-state dynamics (accelerated dynamics techniques). Moreover, it is possible by this approach to quantify the error on the exit event when the parametrization of the jump Markov model is based on the Eyring–Kramers formula. This therefore provides a mathematical framework to justify the use of transition state theory and the Eyring–Kramers formula to build kinetic Monte Carlo or Markov state models.
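As a minimal sketch of how such a jump Markov model is used once parametrized, the toy kinetic Monte Carlo step below draws an exit time and an exit path from Eyring–Kramers-type rates; all prefactors and barriers are invented numbers, and nothing here reproduces the paper's error analysis.

```python
import numpy as np

# Toy kinetic Monte Carlo step (illustrative only; prefactors and barriers are
# invented).  Exit rates from a metastable state take the Eyring-Kramers form
# k_i = nu_i * exp(-dE_i / kT); the exit event is then an exponential holding
# time with rate sum(k) plus a categorical choice of the next state, which is
# how the jump Markov model over the metastable states is sampled.

rng = np.random.default_rng(0)
kT = 0.025                                   # eV, roughly room temperature
nu = np.array([1e13, 5e12, 2e13])            # attempt frequencies (1/s), invented
dE = np.array([0.45, 0.52, 0.60])            # energy barriers (eV), invented

k = nu * np.exp(-dE / kT)                    # Eyring-Kramers rates per exit path
t_exit = rng.exponential(1.0 / k.sum())      # holding time ~ Exp(sum of rates)
path = rng.choice(len(k), p=k / k.sum())     # exit path chosen with prob k_i/sum(k)
print(f"exit after {t_exit:.3e} s via path {path}")
```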


1993 ◽  
Vol 25 (1) ◽  
pp. 82-102
Author(s):  
M. G. Nair ◽  
P. K. Pollett

In a recent paper, van Doorn (1991) explained how quasi-stationary distributions for an absorbing birth-death process could be determined from the transition rates of the process, thus generalizing earlier work of Cavender (1978). In this paper we shall show that many of van Doorn's results can be extended to deal with an arbitrary continuous-time Markov chain over a countable state space, consisting of an irreducible class, C, and an absorbing state, 0, which is accessible from C. Some of our results are extensions of theorems proved for honest chains in Pollett and Vere-Jones (1992). In Section 3 we prove that a probability distribution on C is a quasi-stationary distribution if and only if it is a µ-invariant measure for the transition function, P. We shall also show that if m is a quasi-stationary distribution for P, then a necessary and sufficient condition for m to be µ-invariant for Q is that P satisfies the Kolmogorov forward equations over C. When the remaining forward equations hold, the quasi-stationary distribution must satisfy a set of ‘residual equations' involving the transition rates into the absorbing state. The residual equations allow us to determine the value of µ for which the quasi-stationary distribution is µ-invariant for P. We also prove some more general results giving bounds on the values of µ for which a convergent measure can be a µ-subinvariant and then µ-invariant measure for P. The remainder of the paper is devoted to the question of when a convergent µ-subinvariant measure, m, for Q is a quasi-stationary distribution. Section 4 establishes a necessary and sufficient condition for m to be a quasi-stationary distribution for the minimal chain. In Section 5 we consider ‘single-exit' chains. We derive a necessary and sufficient condition for there to exist a process for which m is a quasi-stationary distribution. Under this condition all such processes can be specified explicitly through their resolvents. The results proved here allow us to conclude that the bounds for µ obtained in Section 3 are, in fact, tight. Finally, in Section 6, we illustrate our results by way of two examples: regular birth-death processes and a pure-birth process with absorption.
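The µ-invariance and residual-equation statements can be checked numerically on a small example. The sketch below (a finite truncated birth-death chain of my own, with a single absorbing state 0) computes a quasi-stationary distribution m as a left eigenvector of the restricted generator and verifies m·Q_C = −µm together with the residual equation µ = m_1·q_{1,0}.

```python
import numpy as np

# Small numerical check on a toy truncated birth-death chain (invented rates):
# a quasi-stationary distribution m on the irreducible class C is mu-invariant
# for the restricted generator Q_C, i.e. m @ Q_C = -mu * m, and with a single
# absorbing state 0 the 'residual equation' reads mu = m_1 * q_{1,0}.

n, lam, mu_d = 30, 0.7, 1.0                 # states 1..n; deaths from state 1 absorb
Q = np.zeros((n, n))
for i in range(n):
    if i > 0:
        Q[i, i - 1] = mu_d
    if i < n - 1:
        Q[i, i + 1] = lam
    Q[i, i] = -(lam + mu_d) if i < n - 1 else -mu_d

vals, vecs = np.linalg.eig(Q.T)
k = np.argmax(vals.real)
m = np.abs(vecs[:, k].real); m /= m.sum()   # quasi-stationary distribution on C
mu = -vals[k].real                          # decay parameter
print("mu-invariant:", np.allclose(m @ Q, -mu * m))
print("residual equation:", mu, "vs", m[0] * mu_d)
```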

