Proportional intensities and strong ergodicity for Markov processes

1983 ◽  
Vol 20 (1) ◽  
pp. 185-190 ◽  
Author(s):  
Mark Scott ◽  
Dean L. Isaacson

By assuming proportionality of the intensity functions at each time point of a continuous-time non-homogeneous Markov process, strong ergodicity of the process is established through strong ergodicity of a related discrete-time Markov process. For processes with proportional intensities, strong ergodicity implies that the limiting matrix L satisfies L · P(s, t) = L, where P(s, t) is the matrix of transition functions.
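A small numerical illustration of the limiting-matrix property (not the paper's construction: the time-homogeneous case below is a special case of proportional intensities, and the generator is a made-up example). The limiting matrix L has identical rows equal to the stationary distribution, and it is absorbed by every transition matrix P(s, t):

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical 3-state generator (each row sums to 0).
Q = np.array([[-2.0,  1.0,  1.0],
              [ 1.0, -3.0,  2.0],
              [ 2.0,  2.0, -4.0]])

# Stationary distribution: pi Q = 0 with pi summing to 1.
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

L = np.outer(np.ones(3), pi)   # limiting matrix: every row is pi
P = expm(Q * 1.7)              # P(s, t) for a homogeneous chain, t - s = 1.7

assert np.allclose(L @ P, L)                       # L . P(s, t) = L
assert np.allclose(expm(Q * 50.0), L, atol=1e-8)   # P(0, t) -> L as t grows
```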


2017 ◽  
Vol 13 (3) ◽  
pp. 7244-7256 ◽  
Author(s):  
Miłosława Sokół

The matrices of non-homogeneous Markov processes consist of time-dependent functions whose values at each time point form typical intensity matrices. For solving some problems these must be converted into stochastic matrices. A stochastic matrix for a non-homogeneous Markov process consists of time-dependent functions whose values are probabilities and depend on the assumed time period. In this paper formulas for these functions are derived. Although the formula is not simple, it makes it possible to prove, for non-homogeneous Markov processes, theorems well known for homogeneous ones, and the proofs turn out to be shorter.
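A sketch of the underlying relationship (my own discretization, not the paper's closed formula): the transition matrix P(s, t) of a non-homogeneous chain solves the forward equation dP/dt = P Q(t) with P(s, s) = I, and a simple Euler product over the interval approximates it; the rows of the result are probabilities summing to 1. The intensity matrix Q(t) below is a made-up example:

```python
import numpy as np

def Q(t):
    """Hypothetical time-dependent intensity matrix (rows sum to 0)."""
    a, b = 1.0 + 0.5 * np.sin(t), 2.0
    return np.array([[-a,  a],
                     [ b, -b]])

def transition_matrix(s, t, steps=20000):
    """Euler approximation of the product integral solving dP/dt = P Q(t)."""
    P = np.eye(2)
    h = (t - s) / steps
    for k in range(steps):
        P = P @ (np.eye(2) + h * Q(s + k * h))
    return P

P = transition_matrix(0.0, 3.0)
assert np.allclose(P.sum(axis=1), 1.0, atol=1e-6)   # stochastic rows
assert (P >= -1e-9).all()                            # entries are probabilities
```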


1973 ◽  
Vol 10 (1) ◽  
pp. 84-99 ◽  
Author(s):  
Richard L. Tweedie

The problem considered is that of estimating the limit probability distribution (equilibrium distribution) π of a denumerable continuous-time Markov process using only the matrix Q of derivatives of transition functions at the origin. We utilise relationships between the limit vector π and invariant measures for the jump-chain of the process (whose transition matrix we write P∗), and apply truncation theorems from Tweedie (1971) to P∗. When Q is regular, we derive algorithms for estimating π from truncations of Q; these extend results in Tweedie (1971), Section 4, from q-bounded processes to arbitrary regular processes. Finally, we show that this method can be extended even to non-regular chains of a certain type.
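A numerical sketch of the truncation idea (an assumption-laden illustration, not Tweedie's exact algorithm): for an M/M/1-type birth-death chain with birth rate 1 and death rate 2, π is geometric, and solving the stationary equations of a finite north-west-corner truncation of Q already recovers it accurately:

```python
import numpy as np

lam, mu = 1.0, 2.0     # lam < mu, so pi_j = (1 - lam/mu) (lam/mu)^j

def truncated_Q(n):
    """n x n north-west corner of Q, closed off into a proper generator."""
    Q = np.zeros((n, n))
    for i in range(n):
        if i + 1 < n:
            Q[i, i + 1] = lam
        if i > 0:
            Q[i, i - 1] = mu
        Q[i, i] = -Q[i].sum()
    return Q

def stationary(Q):
    n = Q.shape[0]
    A = np.vstack([Q.T, np.ones(n)])
    b = np.zeros(n + 1); b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

pi = stationary(truncated_Q(60))
exact = (1 - lam / mu) * (lam / mu) ** np.arange(60)
assert np.allclose(pi[:10], exact[:10], atol=1e-6)
```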


1989 ◽  
Vol 26 (4) ◽  
pp. 744-756 ◽  
Author(s):  
Gerardo Rubino ◽  
Bruno Sericola

Sojourn times of Markov processes in subsets of the finite state space are considered. We give a closed form of the distribution of the nth sojourn time in a given subset of states. The asymptotic behaviour of this distribution as time goes to infinity is analyzed, in both the discrete-time and the continuous-time cases. We consider the usual pseudo-aggregated Markov process canonically constructed from the original one by collapsing the states of each subset of a given partition. The relation between the limits of the moments of the sojourn time distributions in the original Markov process and the moments of the corresponding holding times of the pseudo-aggregated one is also studied.
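A sketch of the discrete-time mechanism behind such closed forms (my notation and a made-up chain, not the paper's result): conditional on the entry distribution into a subset B, the sojourn length in B is phase-type, with tail P(sojourn > k) = α P_BB^k 1, where P_BB is the sub-stochastic block of transitions staying inside B:

```python
import numpy as np

# Hypothetical 4-state discrete-time chain; subset B = {0, 1}.
P = np.array([[0.5, 0.3, 0.1, 0.1],
              [0.2, 0.6, 0.1, 0.1],
              [0.3, 0.3, 0.2, 0.2],
              [0.1, 0.1, 0.4, 0.4]])
B = [0, 1]
P_BB = P[np.ix_(B, B)]               # sub-stochastic block inside B

alpha = np.array([0.5, 0.5])         # assumed entry distribution into B

def tail(k):
    """P(sojourn time in B exceeds k steps)."""
    return alpha @ np.linalg.matrix_power(P_BB, k) @ np.ones(len(B))

# P(sojourn = k + 1) for k = 0, 1, ...; the tail vanishes geometrically.
probs = np.array([tail(k) - tail(k + 1) for k in range(200)])
assert tail(0) == 1.0
assert abs(probs.sum() - 1.0) < 1e-8   # the sojourn ends almost surely
```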


1994 ◽  
Vol 31 (3) ◽  
pp. 626-634 ◽  
Author(s):  
James Ledoux ◽  
Gerardo Rubino ◽  
Bruno Sericola

We characterize the conditions under which a finite absorbing Markov process (in discrete or continuous time) can be transformed into a new aggregated process that conserves the Markov property and whose states are the elements of a given partition of the original state space. To obtain this characterization, a key tool is the quasi-stationary distribution associated with absorbing processes, which allows the absorbing case to be related to the irreducible one. We are able to calculate, by means of a finite algorithm, the set of all initial distributions of the starting process that lead to an aggregated homogeneous Markov process. Finally, it is shown that the continuous-time case can always be reduced to the discrete-time one using the uniformization technique.
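The uniformization step mentioned in the last sentence is standard; a minimal sketch with a made-up generator: choosing Λ ≥ max_i |q_ii| makes P = I + Q/Λ a stochastic matrix, and the continuous-time semigroup is recovered as a Poisson-weighted sum of the powers of this discrete-time chain:

```python
import numpy as np
from scipy.linalg import expm
from math import exp, factorial

# Hypothetical generator (rows sum to 0).
Q = np.array([[-3.0,  2.0,  1.0],
              [ 1.0, -1.0,  0.0],
              [ 2.0,  2.0, -4.0]])

Lam = max(abs(np.diag(Q)))          # uniformization rate
P = np.eye(3) + Q / Lam             # stochastic matrix of the uniformized chain
assert np.allclose(P.sum(axis=1), 1.0) and (P >= 0).all()

t = 0.8
S = sum(exp(-Lam * t) * (Lam * t) ** n / factorial(n)
        * np.linalg.matrix_power(P, n) for n in range(60))
assert np.allclose(S, expm(Q * t), atol=1e-10)   # exp(Qt) = sum_n Pois(n) P^n
```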


1999 ◽  
Vol 36 (1) ◽  
pp. 48-59 ◽  
Author(s):  
George V. Moustakides

Let ξ_0, ξ_1, ξ_2, … be a homogeneous Markov process and let S_n denote the partial sum S_n = θ(ξ_1) + … + θ(ξ_n), where θ(ξ) is a scalar nonlinearity. If N is a stopping time with 𝔼N < ∞ and the Markov process satisfies certain ergodicity properties, we then show that 𝔼S_N = [lim_{n→∞} 𝔼θ(ξ_n)]𝔼N + 𝔼ω(ξ_0) − 𝔼ω(ξ_N). The function ω(ξ) is a well-defined scalar nonlinearity directly related to θ(ξ) through a Poisson integral equation, with the characteristic that ω(ξ) becomes zero in the i.i.d. case. Consequently our result constitutes an extension of Wald's first lemma to the case of Markov processes. We also show that, when 𝔼N → ∞, the correction term is negligible compared to 𝔼N in the sense that 𝔼ω(ξ_0) − 𝔼ω(ξ_N) = o(𝔼N).
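A finite-state sketch of the identity (my own toy chain and nonlinearity; the paper treats general stopping times, whereas here N = n is deterministic and the start is a fixed state x): with ω solving the Poisson equation (I − P)ω = Pθ − θ̄·1, one gets 𝔼_x S_n = θ̄ n + ω(x) − (P^n ω)(x):

```python
import numpy as np

# Hypothetical 3-state chain and scalar nonlinearity theta.
P = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4],
              [0.3, 0.3, 0.4]])
theta = np.array([1.0, -2.0, 5.0])

# Stationary distribution; theta_bar = lim E theta(xi_n).
A = np.vstack([P.T - np.eye(3), np.ones(3)])
pi, *_ = np.linalg.lstsq(A, np.array([0.0, 0.0, 0.0, 1.0]), rcond=None)
theta_bar = pi @ theta

# Poisson equation (I - P) omega = P theta - theta_bar * 1.  The solution is
# unique only up to an additive constant, which cancels in the identity.
omega, *_ = np.linalg.lstsq(np.eye(3) - P, P @ theta - theta_bar, rcond=None)

n, x = 7, 0
E_Sn = sum(np.linalg.matrix_power(P, k) @ theta for k in range(1, n + 1))[x]
correction = omega[x] - (np.linalg.matrix_power(P, n) @ omega)[x]
assert abs(E_Sn - (theta_bar * n + correction)) < 1e-8
```

Note that if the ξ_k were i.i.d., P would have identical rows, Pθ would equal θ̄·1, and ω could be taken to be zero, recovering Wald's first lemma.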

