The calculation of limit probabilities for denumerable Markov processes from infinitesimal properties

1973 ◽  
Vol 10 (1) ◽  
pp. 84-99 ◽  
Author(s):  
Richard L. Tweedie

The problem considered is that of estimating the limit probability distribution (equilibrium distribution) π of a denumerable continuous-time Markov process using only the matrix Q of derivatives of transition functions at the origin. We utilise relationships between the limit vector π and invariant measures for the jump-chain of the process (whose transition matrix we write P∗), and apply truncation theorems from Tweedie (1971) to P∗. When Q is regular, we derive algorithms for estimating π from truncations of Q; these extend results in Tweedie (1971), Section 4, from q-bounded processes to arbitrary regular processes. Finally, we show that this method can be extended even to non-regular chains of a certain type.
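The jump-chain relationship the abstract relies on can be checked numerically. A minimal sketch, assuming an invented 3-state generator Q (not an example from the paper): form the jump-chain matrix P∗ with entries qij/qi off the diagonal, find its invariant measure x, and recover π via πj ∝ xj/qj.

```python
import numpy as np

# Hypothetical 3-state generator matrix Q (rows sum to zero); the entries
# are invented for illustration.
Q = np.array([[-2.0, 1.0, 1.0],
              [ 1.0, -3.0, 2.0],
              [ 2.0, 2.0, -4.0]])

q = -np.diag(Q)                      # exit rates q_i
P_star = Q / q[:, None]              # jump-chain matrix P*
np.fill_diagonal(P_star, 0.0)        # p*_ii = 0 when q_i > 0

# Invariant measure x of P*: solve x (P* - I) = 0 with a sum constraint.
A = (P_star - np.eye(3)).T
b = np.array([0.0, 0.0, 1.0])
A[-1] = 1.0                          # replace last equation by sum(x) = 1
x = np.linalg.solve(A, b)

# Limit distribution: pi_j proportional to x_j / q_j, normalised.
pi = (x / q) / (x / q).sum()
print(np.allclose(pi @ Q, 0.0))      # pi Q = 0, as required of a limit vector
```

For this Q the method gives π = (8/19, 6/19, 5/19); the truncation algorithms of the paper address the harder case where the state space is infinite and only finite corners of Q are available.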


1983 ◽  
Vol 20 (1) ◽ 
pp. 185-190 ◽  
Author(s):  
Mark Scott ◽  
Dean L. Isaacson

By assuming proportionality of the intensity functions at each time point for a continuous-time non-homogeneous Markov process, strong ergodicity of the process is established through strong ergodicity of a related discrete-time Markov process. For processes with proportional intensities, strong ergodicity implies that the limiting matrix L satisfies L · P(s, t) = L, where P(s, t) is the matrix of transition functions.
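The identity L · P(s, t) = L can be verified numerically in a toy proportional-intensity model. A sketch under assumptions: Q(t) = c(t)·Q0 with c(t) = 1 + t, both invented here, so that P(s, t) = exp((∫s..t c(u) du) Q0) since the intensity matrices at different times commute.

```python
import numpy as np

# Assumed example: proportional intensities Q(t) = c(t) * Q0, c(t) = 1 + t.
Q0 = np.array([[-1.0, 1.0],
               [ 2.0, -2.0]])

def expm_via_eig(A):
    # Matrix exponential through eigendecomposition (A assumed diagonalizable).
    w, V = np.linalg.eig(A)
    return (V @ np.diag(np.exp(w)) @ np.linalg.inv(V)).real

def P(s, t):
    # P(s, t) = exp( (integral of c(u) du from s to t) * Q0 ).
    integral = (t + t**2 / 2.0) - (s + s**2 / 2.0)
    return expm_via_eig(integral * Q0)

pi = np.array([2.0 / 3.0, 1.0 / 3.0])   # stationary vector of Q0: pi Q0 = 0
L = np.vstack([pi, pi])                  # limiting matrix with identical rows
print(np.allclose(L @ P(1.0, 4.0), L))   # L . P(s, t) = L
```

Because each row of L is the common stationary vector of Q0, multiplying L by any P(s, t) leaves it unchanged, which is exactly the limiting-matrix property the abstract states.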


1979 ◽  
Vol 11 (2) ◽  
pp. 397-421 ◽  
Author(s):  
M. Yadin ◽  
R. Syski

The matrix of intensities of a Markov process with discrete state space and continuous time parameter undergoes random changes in time in such a way that it stays constant between random instants. The resulting non-Markovian process is analyzed with the help of a supplementary process defined in terms of the variations of the intensity matrix. Several examples are presented.
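A small simulation illustrates the setup; the matrices, switching rate and seed below are invented, and the authors' treatment is analytic, via the supplementary process that makes the pair (state, environment) jointly Markov.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two invented 2-state intensity matrices; the "environment" switches between
# them at random exponential instants, so Q stays constant between switches.
Qs = [np.array([[-1.0, 1.0], [2.0, -2.0]]),
      np.array([[-4.0, 4.0], [1.0, -1.0]])]
ENV_RATE = 0.5                       # rate of the random changes of Q

def simulate(t_end):
    """Return the state at time t_end; (state, env) together are Markov."""
    t, state, env = 0.0, 0, 0
    while True:
        jump_rate = -Qs[env][state, state]
        # Memorylessness lets us redraw both clocks after every event.
        t_jump = rng.exponential(1.0 / jump_rate)
        t_switch = rng.exponential(1.0 / ENV_RATE)
        dt = min(t_jump, t_switch)
        if t + dt >= t_end:
            return state
        t += dt
        if t_switch < t_jump:
            env = 1 - env            # intensity matrix changes
        else:
            state = 1 - state        # ordinary jump of the 2-state chain

freq = np.mean([simulate(10.0) for _ in range(500)])
```

Marginally, the state process alone is non-Markovian, which is why the analysis tracks the intensity-matrix variations as a supplementary component.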


1971 ◽  
Vol 8 (2) ◽  
pp. 423-427 ◽  
Author(s):  
Arne Jensen ◽  
David Kendall

1. Let the (honest) Markov process with transition functions (pij(t)) have transition rates (qij), and suppose that, for some M, supi qi ≤ M (where qi = −qii), so that the matrix Q = (qij) determines a bounded operator on the Banach space l1 by right-multiplication. Then, in the terminology of [8] (pp. 12 and 19), Q will be bounded and ΩF will be a closed restriction of Q with dense domain, so that ΩF = Q; that is, we shall have a process whose associated semigroup has a bounded generator. In these circumstances Theorem 10.3.2 of [2] applies and the matrix Pt = (pij(t)) is given by Pt = exp{tQ}, where exp{·} denotes the function defined by the exponential power-series. We shall be interested here (as in [5] and [9]) in the determination of the limit matrix P∞ = (limt→∞ pij(t)).
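Under the bounded-generator condition, Pt = exp{tQ} can be evaluated directly from the exponential power-series, and P∞ approximated by taking t large. A sketch with an invented 3-state Q (not from the paper):

```python
import numpy as np

# Illustrative bounded-Q example: a 3-state honest generator, rows sum to zero.
Q = np.array([[-2.0, 1.0, 1.0],
              [ 1.0, -3.0, 2.0],
              [ 2.0, 2.0, -4.0]])

def expm_series(A, terms=60):
    # exp(A) via the exponential power-series (adequate for small ||A||).
    out = np.eye(len(A))
    term = np.eye(len(A))
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

# P_t = exp{tQ}; squaring/powering a small-t factor reaches large t stably.
P_large = np.linalg.matrix_power(expm_series(Q * 0.5), 40)   # t = 20
print(np.allclose(P_large[0], P_large[1]))   # rows agree: the limit matrix
```

All rows of P_large coincide (here each row approaches (8/19, 6/19, 5/19)), so P∞ has identical rows equal to the equilibrium distribution, which is the object the paper computes.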


Author(s):  
Franklin Lowenthal ◽  
Massoud Malek

It is well known that a Markov process whose transition matrix is regular approaches a steady-state distribution, or equilibrium distribution. Finding these steady-state probabilities requires the solution of a system of linear homogeneous equations. However, the matrix of this system is singular, so the system has infinitely many solutions. This obstacle is overcome by replacing one of the equations of the linear homogeneous system with the linear non-homogeneous equation that simply expresses the requirement that the steady-state probabilities sum to one. But which equation of the original system should be the one replaced? This brief article demonstrates that any of the equations of the original linear system can be selected for replacement; no matter which one is chosen, the revised linear system will have the same unique solution.
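The replacement technique is easy to demonstrate. A minimal sketch, with an invented regular transition matrix: replace each equation of π(P − I) = 0 in turn by the normalisation Σπj = 1 and confirm that every choice gives the same answer.

```python
import numpy as np

# Invented regular transition matrix, for illustration only.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])

def steady_state(P, replaced):
    # pi (P - I) = 0 is singular; swap equation `replaced` for sum(pi) = 1.
    A = (P - np.eye(len(P))).T
    b = np.zeros(len(P))
    A[replaced] = 1.0
    b[replaced] = 1.0
    return np.linalg.solve(A, b)

# Every choice of replaced equation yields the same steady-state vector.
solutions = [steady_state(P, r) for r in range(3)]
print(all(np.allclose(s, solutions[0]) for s in solutions))
```

For this P the common solution is π = (9/28, 12/28, 7/28); the article's point is that the revised system is nonsingular regardless of which homogeneous equation is discarded.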


Author(s):  
Azam A. Imomov

The paper discusses the continuous-time Markov Branching Process allowing Immigration. We consider a critical case in which the second moment of the offspring law and the first moment of the immigration law are possibly infinite. Assuming that the nonlinear parts of the appropriate generating functions are regularly varying in the sense of Karamata, we prove theorems on the convergence of the transition functions of the process to invariant measures. We deduce the rate of this convergence, provided that the slowly varying factors are with remainder.


1976 ◽  
Vol 24 (1_suppl) ◽  
pp. 135-142
Author(s):  
James Stafford

We propose to treat the growth of Alberta urban areas as an absorbing Markov chain. The transition matrix is derived from the experience of all urban areas in Canada from 1901 to 1951. The matrix is tested using Alberta urban areas from 1931 to 1971 and found to have a fairly good fit. A projection is made to urban areas in Alberta in 1981.
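The projection step can be sketched numerically. All numbers below are invented for illustration (the paper's matrix is estimated from the 1901–1951 Canadian census record): counts of towns in each size class are pushed through the transition matrix once per decade, with the largest class absorbing.

```python
import numpy as np

# Hypothetical size-class chain; states 0 and 1 are transient size classes,
# state 2 is absorbing (a town never leaves the largest class).
P = np.array([[0.8, 0.2, 0.0],
              [0.0, 0.7, 0.3],
              [0.0, 0.0, 1.0]])

counts_1931 = np.array([60.0, 30.0, 10.0])   # towns per class (made up)

# Four decades forward, 1931 -> 1971: one matrix power per decade.
counts_1971 = counts_1931 @ np.linalg.matrix_power(P, 4)
print(counts_1971)   # mass drains from the transient classes into class 2
```

The total number of towns is conserved at each step, while the absorbing class grows monotonically, which is the qualitative behaviour an absorbing-chain projection of urban growth exhibits.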

