Yaglom limits can depend on the starting state

2018 ◽  
Vol 50 (01) ◽  
pp. 1-34
Author(s):  
R. D. Foley ◽  
D. R. McDonald

Abstract: We construct a simple example, surely known to Harry Kesten, of an R-transient Markov chain on a countable state space S ∪ {δ}, where δ is absorbing. The transition matrix K on S is irreducible and strictly substochastic. We determine the Yaglom limit, that is, the limiting conditional behavior given nonabsorption. Each starting state x ∈ S results in a different Yaglom limit. Each Yaglom limit is an R⁻¹-invariant quasi-stationary distribution, where R is the convergence parameter of K. Yaglom limits that depend on the starting state are related to a nontrivial R⁻¹-Martin boundary.
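
As a minimal numerical sketch (our own toy kernel, not the authors' countable construction), the quantity in question is the conditional law Kⁿ(x, ·)/∑y Kⁿ(x, y) of the chain given non-absorption. For the assumed finite, lazy, killed random walk below the limit is the same for every starting state x; the point of the paper's infinite example is precisely that this can fail on a countable space.

```python
import numpy as np

# Toy strictly substochastic kernel K (assumption): a lazy random walk on
# {0, ..., N-1} that is absorbed at delta when it steps outside the range.
N = 50
up, stay, down = 0.3, 0.2, 0.5
K = np.zeros((N, N))
for i in range(N):
    K[i, i] = stay
    if i + 1 < N:
        K[i, i + 1] = up        # stepping above N-1 means absorption at delta
    if i - 1 >= 0:
        K[i, i - 1] = down      # stepping below 0 means absorption at delta

def conditional_law(K, x, n):
    """Law of X_n given survival for n steps, starting from x."""
    row = np.zeros(K.shape[0])
    row[x] = 1.0
    for _ in range(n):
        row = row @ K
        row /= row.sum()        # condition on survival so far
    return row

for x in (0, 10, 40):
    q = conditional_law(K, x, 5000)
    print(f"start {x:2d}: mass on states 0..4 = {q[:5].round(4)}")
```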

2000 ◽  
Vol 14 (1) ◽  
pp. 57-79 ◽  
Author(s):  
Jean-François Dantzer ◽  
Mostafa Haddani ◽  
Philippe Robert

The stability properties of the bandwidth allocation algorithm First Fit are analyzed for some distributions on the sizes of the requests. Fluid limits are used to get the ergodicity results. When there are two possible sizes, the description of the transient behavior involves a finite Markov chain on the exit states of a transient Markov chain on a countable state space. The explicit expression of this exit matrix is given.
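
A hedged sketch of the First Fit rule as we read it (toy capacity, arrival and service models are all assumptions of ours): requests queue in arrival order, and whenever capacity is available the server scans the queue from the front and admits every request that still fits.

```python
import random
from collections import deque

CAPACITY = 10          # total bandwidth (assumption)
SIZES = (3, 7)         # two possible request sizes, as in the two-size case

def first_fit_admit(queue, free):
    """Admit waiting requests in First Fit order; return (admitted sizes, free)."""
    admitted = []
    for req in list(queue):
        if req <= free:
            queue.remove(req)      # removes the oldest request of that size
            free -= req
            admitted.append(req)
    return admitted, free

random.seed(0)
queue, free, in_service = deque(), CAPACITY, []
for t in range(20):
    if random.random() < 0.7:                   # a new request arrives
        queue.append(random.choice(SIZES))
    if in_service and random.random() < 0.5:    # some service completes
        free += in_service.pop()
    admitted, free = first_fit_admit(queue, free)
    in_service.extend(admitted)
    print(f"t={t:2d}  queue={list(queue)}  in service={in_service}  free={free}")
```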


Author(s):  
OMER ANGEL ◽  
YINON SPINKA

Abstract: Consider an ergodic Markov chain on a countable state space for which the return times have exponential tails. We show that the stationary version of any such chain is a finitary factor of an independent and identically distributed (i.i.d.) process. A key step is to show that any stationary renewal process whose jump distribution has exponential tails and is not supported on a proper subgroup of ℤ is a finitary factor of an i.i.d. process.
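
The hypothesis of the theorem is exponential tails for the return times. The following sketch (our own toy chain, a reflected random walk with negative drift, not part of the paper) simply estimates that tail empirically, to make the assumption concrete.

```python
import math
import random

random.seed(1)

def return_time_to_zero(p_up=0.3):
    """Length of one excursion of X_{n+1} = max(X_n + step, 0) away from 0."""
    x, t = 0, 0
    while True:
        x = max(x + (1 if random.random() < p_up else -1), 0)
        t += 1
        if x == 0:
            return t

# Empirical tail of the return time T: log P(T > t) decays roughly linearly in t.
samples = [return_time_to_zero() for _ in range(200_000)]
for t in (5, 10, 20, 40):
    tail = sum(s > t for s in samples) / len(samples)
    if tail > 0:
        print(f"P(T > {t:2d}) ≈ {tail:.5f}   log ≈ {math.log(tail):7.2f}")
```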


1987 ◽  
Vol 24 (02) ◽  
pp. 347-354 ◽  
Author(s):  
Guy Fayolle ◽  
Rudolph Iasnogorodski

In this paper, we present some simple new criteria for the non-ergodicity of a stochastic process (Yn), n ≧ 0, in discrete time, when either the upward or the downward jumps are majorized by i.i.d. random variables. This situation arises in many practical settings, where the (Yn) are functionals of some Markov chain with countable state space. An application to the exponential back-off protocol is described.
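
To illustrate the flavour of such criteria (a toy chain of our own, not the paper's conditions verbatim): a process whose downward jumps are bounded, and whose mean drift stays strictly positive away from a finite set, cannot be ergodic, and a short simulation shows it escaping to infinity.

```python
import random

random.seed(2)

def step(y):
    """Up 2 with probability 0.4, down 1 with probability 0.6 (drift +0.2);
    downward jumps are bounded by 1, hence trivially majorised by i.i.d. variables."""
    return max(y + (2 if random.random() < 0.4 else -1), 0)

y, snapshots = 0, []
for n in range(1, 100_001):
    y = step(y)
    if n % 20_000 == 0:
        snapshots.append((n, y))
print(snapshots)   # Y_n grows roughly like 0.2 * n: no stationary distribution
```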


1973 ◽  
Vol 73 (1) ◽  
pp. 119-138 ◽  
Author(s):  
Gerald S. Goodman ◽  
S. Johansen

1. Summary. We shall consider a non-stationary Markov chain on a countable state space E. The transition probabilities {P(s, t), 0 ≤ s ≤ t < t₀ ≤ ∞} are assumed to be continuous in (s, t), uniformly in the state i ∈ E.
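
For background (our gloss, not quoted from the paper), the two-parameter transition matrices of such a chain satisfy the Chapman–Kolmogorov relations, and the continuity assumption refers to these matrices jointly in (s, t):

```latex
% Chapman--Kolmogorov relations for a non-stationary chain (standard background,
% added as context; notation follows the summary above)
P(s, u) = P(s, t)\, P(t, u), \qquad P(s, s) = I, \qquad 0 \le s \le t \le u < t_0 .
```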


1989 ◽  
Vol 26 (3) ◽  
pp. 643-648 ◽  
Author(s):  
A. I. Zeifman

We consider a non-homogeneous continuous-time Markov chain X(t) with countable state space. Definitions of uniform and strong quasi-ergodicity are introduced. The forward Kolmogorov system for X(t) is considered as a differential equation in the space of sequences l1. Sufficient conditions for uniform quasi-ergodicity are deduced from this equation. We consider conditions of uniform and strong ergodicity in the case of proportional intensities.
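
A sketch of this viewpoint on an assumed finite truncation (rates and truncation level are our own choices): the forward Kolmogorov system p′(t) = p(t)Q(t) is integrated as an ordinary differential equation, and the l1 distance between solutions started from two different initial distributions is monitored; its decay is the kind of behaviour the (quasi-)ergodicity conditions guarantee.

```python
import numpy as np
from scipy.integrate import solve_ivp

N = 20  # truncation of the countable state space (assumption)

def Q(t):
    """Generator with mildly time-varying birth/death intensities (assumed rates)."""
    birth = 1.0 + 0.5 * np.sin(t)
    death = 3.0
    Qm = np.zeros((N, N))
    for i in range(N):
        if i + 1 < N:
            Qm[i, i + 1] = birth
        if i >= 1:
            Qm[i, i - 1] = death
        Qm[i, i] = -Qm[i].sum()
    return Qm

def forward(t, p):
    # forward Kolmogorov system p'(t) = p(t) Q(t), treated as an ODE
    return p @ Q(t)

p0 = np.zeros(N); p0[0] = 1.0        # chain started in state 0
q0 = np.zeros(N); q0[N - 1] = 1.0    # chain started in state N-1
ts = np.linspace(0.0, 20.0, 6)
sol_p = solve_ivp(forward, (0.0, 20.0), p0, t_eval=ts, rtol=1e-8, atol=1e-10)
sol_q = solve_ivp(forward, (0.0, 20.0), q0, t_eval=ts, rtol=1e-8, atol=1e-10)
for k, t in enumerate(ts):
    l1 = np.abs(sol_p.y[:, k] - sol_q.y[:, k]).sum()
    print(f"t = {t:5.1f}   ||p_t - q_t||_1 = {l1:.6f}")
```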


1998 ◽  
Vol 12 (3) ◽  
pp. 387-391
Author(s):  
Jean B. Lasserre

Given a Markov chain on a countable state space, we present a Lyapunov (sufficient) condition for existence of an invariant probability with a geometric tail.
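
A hedged illustration of the kind of condition involved (our own toy chain and Lyapunov function, not the paper's exact criterion): for a reflected random walk with negative drift, the geometric function V(x) = zˣ satisfies a drift inequality KV ≤ λV off a finite set with λ < 1, and the invariant probability indeed has a geometric tail.

```python
import numpy as np

# Assumed toy chain: reflected walk X_{n+1} = max(X_n + xi, 0) with
# P(xi = +1) = p < 1/2, and assumed Lyapunov function V(x) = z**x with z > 1.
p, z = 0.3, 1.3
lam = p * z + (1 - p) / z        # (KV)(x) / V(x) for every x >= 1
print(f"drift factor lam = {lam:.4f}  (< 1, so the geometric drift condition holds)")

# Invariant probability of a large finite truncation, to exhibit the geometric tail.
N = 200
K = np.zeros((N, N))
for x in range(N):
    K[x, min(x + 1, N - 1)] += p
    K[x, max(x - 1, 0)] += 1 - p
pi = np.ones(N) / N
for _ in range(5000):            # power iteration towards the invariant probability
    pi = pi @ K
ratios = (pi[1:12] / pi[:11]).round(4)
print("tail ratios pi(x+1)/pi(x):", ratios, " -> geometric tail with ratio p/(1-p)")
```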


1978 ◽  
Vol 10 (04) ◽  
pp. 764-787
Author(s):  
J. N. McDonald ◽  
N. A. Weiss

At times n = 0, 1, 2, · · · a Poisson number of particles enter each state of a countable state space. The particles then move independently according to the transition law of a Markov chain, until their death which occurs at a random time. Several limit theorems are then proved for various functionals of this infinite particle system. In particular, laws of large numbers and central limit theorems are proved.
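
A small simulation sketch of such an infinite particle system, with assumed toy dynamics (five states, Poisson(2) immigration per state, death probability 0.2, uniform movement): we track the occupancy of one state, which is the sort of functional the limit theorems concern.

```python
import numpy as np

rng = np.random.default_rng(3)
STATES, LAM, DIE = 5, 2.0, 0.2                 # assumptions
P = np.full((STATES, STATES), 1.0 / STATES)    # particles move uniformly at random

counts = np.zeros(STATES, dtype=np.int64)
history = []
for n in range(5000):
    counts += rng.poisson(LAM, size=STATES)          # immigration at time n
    survivors = rng.binomial(counts, 1.0 - DIE)      # each particle survives w.p. 0.8
    moved = np.zeros(STATES, dtype=np.int64)
    for j in range(STATES):                          # redistribute the survivors
        moved += rng.multinomial(survivors[j], P[j])
    counts = moved
    if n >= 1000:                                    # record after a burn-in
        history.append(counts[0])

history = np.array(history)
print(f"state 0 occupancy: mean {history.mean():.3f}, variance {history.var():.3f}")
# Mean and variance nearly coincide, consistent with a Poisson-shaped marginal.
```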


1992 ◽  
Vol 29 (01) ◽  
pp. 21-36 ◽  
Author(s):  
Masaaki Kijima

Let {Xn, n = 0, 1, 2, ···} be a transient Markov chain which, when restricted to the state space 𝒩+ = {1, 2, ···}, is governed by an irreducible, aperiodic and strictly substochastic matrix 𝐏 = (pij), and let pij(n) = P[Xn = j, Xk ∈ 𝒩+ for k = 0, 1, ···, n | X0 = i], i, j ∈ 𝒩+. The prime concern of this paper is conditions for the existence of the limits, qij say, of the ratios qij(n) = pij(n)/∑k∈𝒩+ pik(n) as n → ∞. If these limits exist and form a probability distribution, (qij) is called the quasi-stationary distribution of {Xn} and has considerable practical importance. It will be shown that, under some conditions, if a non-negative non-trivial vector x = (xi) satisfying r xᵀ = xᵀ𝐏 (together with a further condition stated in the paper) exists, where r is the convergence norm of 𝐏, i.e. r = R⁻¹ with R the convergence parameter of 𝐏, and ᵀ denotes transpose, then it is unique, positive elementwise, and the qij(n) necessarily converge to xj as n → ∞. Unlike existing results in the literature, our results can be applied even to the R-null and R-transient cases. Finally, an application to a left-continuous random walk whose governing substochastic matrix is R-transient is discussed to demonstrate the usefulness of our results.
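
A numerical sketch of the objects involved, on an assumed finite truncation (which is necessarily R-positive, so it cannot reproduce the R-null or R-transient phenomena that are the paper's point): left power iteration on a substochastic matrix 𝐏 yields a vector x with r xᵀ = xᵀ𝐏 and an estimate of the convergence norm r = R⁻¹.

```python
import numpy as np

# Assumed truncation: a lazy walk with downward jumps of size one, killed at the
# edges, so its matrix P is strictly substochastic.  Left power iteration
# x <- xP / ||xP||_1 converges to the left Perron vector, and the normalising
# constants converge to the convergence norm r = 1/R.
N = 60
up, stay, down = 0.25, 0.2, 0.55
P = np.zeros((N, N))
for i in range(N):
    P[i, i] = stay
    if i + 1 < N:
        P[i, i + 1] = up         # stepping above N-1 is absorption
    if i - 1 >= 0:
        P[i, i - 1] = down       # stepping below 0 is absorption

x = np.ones(N) / N
for _ in range(30_000):
    y = x @ P
    r, x = y.sum(), y / y.sum()

print(f"estimated convergence norm r = {r:.6f}, so R = 1/r = {1 / r:.4f}")
print("r x = x P elementwise?", np.allclose(r * x, x @ P))
print("first five entries of the normalised x:", x[:5].round(5))
```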


2000 ◽  
Vol 37 (4) ◽  
pp. 1157-1163 ◽  
Author(s):  
F. P. Machado ◽  
S. Yu. Popov

We study a one-dimensional supercritical branching random walk in a non-i.i.d. random environment, where the environment governs both the branching mechanism and the step transitions. This random environment is constructed using a recurrent Markov chain on a finite or countable state space. Criteria of (strong) recurrence and transience are presented for this model.
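
A toy simulation sketch, with parameters chosen by us purely to make the model concrete (it does not implement the recurrence/transience criteria): the environment is generated by a two-state recurrent Markov chain indexed by the sites of ℤ, and at each site it determines both the offspring number and the step distribution.

```python
import random

random.seed(4)

ENV_TRANS = {0: (0.7, 0.3), 1: (0.4, 0.6)}   # chain generating the environment
OFFSPRING = {0: 1, 1: 2}                     # env-dependent branching mechanism
STEP_RIGHT = {0: 0.3, 1: 0.6}                # env-dependent step transition

def environment(n_sites=200):
    """Environment on sites -n_sites..n_sites driven by the Markov chain."""
    env, state = {}, 0
    for x in range(-n_sites, n_sites + 1):
        env[x] = state
        state = 0 if random.random() < ENV_TRANS[state][0] else 1
    return env

env = environment()
particles = [0]                               # one ancestor at the origin
for gen in range(10):
    nxt = []
    for x in particles:
        for _ in range(OFFSPRING[env[x]]):    # branch according to the environment
            step = 1 if random.random() < STEP_RIGHT[env[x]] else -1
            nxt.append(x + step)
    particles = nxt[:2000]                    # population cap to keep the sketch small
print(f"generation 10: {len(particles)} particles, "
      f"positions in [{min(particles)}, {max(particles)}]")
```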


1969 ◽  
Vol 1 (02) ◽  
pp. 123-187 ◽  
Author(s):  
Erhan Çinlar

Consider a stochastic process X(t) (t ≧ 0) taking values in a countable state space, say, {1, 2, 3, …}. To be picturesque we think of X(t) as the state which a particle is in at epoch t. Suppose the particle moves from state to state in such a way that the successive states visited form a Markov chain, and that the particle stays in a given state a random amount of time depending on the state it is in as well as on the state to be visited next. Below is a possible realization of such a process.
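
A short sketch of this construction with assumed kernels (a toy jump chain and holding-time means of our own choosing): the successive states form a Markov chain, and the holding time in a state is drawn from a distribution depending on both the current state and the next one, i.e. a semi-Markov (Markov renewal) process.

```python
import random

random.seed(5)

P = {1: {1: 0.0, 2: 0.5, 3: 0.5},      # embedded jump chain (assumed)
     2: {1: 0.7, 2: 0.0, 3: 0.3},
     3: {1: 0.4, 2: 0.6, 3: 0.0}}
MEAN_HOLD = {(i, j): 0.5 * i + 0.25 * j for i in P for j in P}   # assumed means

def sample_next(i):
    """Draw the next state of the embedded Markov chain from state i."""
    u, acc = random.random(), 0.0
    for j, pij in P[i].items():
        acc += pij
        if u < acc:
            return j
    return j

def simulate(x0=1, horizon=10.0):
    """Return the jump epochs and states of X(t) up to the time horizon."""
    t, x, path = 0.0, x0, [(0.0, x0)]
    while t < horizon:
        nxt = sample_next(x)
        hold = random.expovariate(1.0 / MEAN_HOLD[(x, nxt)])   # depends on (x, nxt)
        t, x = t + hold, nxt
        path.append((round(t, 3), x))
    return path

print(simulate())
```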

