Markov chains with binomial time change

1986 ◽  
Vol 23 (2) ◽  
pp. 519-523
Author(s):  
Kyle Siegrist

The effect of a binomial time change on a given Markov chain is studied. Results are obtained for the hitting times, potential operator, and transience and recurrence properties of the time-changed chain. The limiting behavior is considered as the binomial parameter approaches 0 and the time variable approaches ∞.
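
A binomial time change replaces the deterministic clock n by N_n ~ Binomial(n, p), so the chain observed at tick n has taken a binomial number of real steps. A minimal simulation sketch under that reading (the three-state transition matrix and NumPy usage are illustrative assumptions, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical transition matrix of the underlying chain.
P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.5, 0.3],
              [0.0, 0.4, 0.6]])

def binomial_time_change(P, x0, n, p, rng):
    """Path of X(N_k), k = 0..n, where N_k ~ Binomial(k, p): at each tick
    the chain takes one real step with probability p, else it stays put."""
    x, path = x0, [x0]
    for _ in range(n):
        if rng.random() < p:                     # the binomial clock advances
            x = int(rng.choice(len(P), p=P[x]))  # one step of the base chain
        path.append(x)
    return path

print(binomial_time_change(P, x0=0, n=20, p=0.3, rng=rng))
```

Taking p close to 0 freezes the chain for long stretches, which is the regime whose limiting behavior the abstract refers to.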


1996 ◽  
Vol 33 (3) ◽  
pp. 640-653 ◽  
Author(s):  
Tobias Rydén

An aggregated Markov chain is a Markov chain for which some states cannot be distinguished from each other by the observer. In this paper we consider the identifiability problem for such processes in continuous time, i.e. the problem of determining whether two parameters induce identical laws for the observable process or not. We also study the order of a continuous-time aggregated Markov chain, which is the minimum number of states needed to represent it. In particular, we give a lower bound on the order. As a by-product, we obtain results of this kind also for Markov-modulated Poisson processes, i.e. doubly stochastic Poisson processes whose intensities are directed by continuous-time Markov chains, and phase-type distributions, which are hitting times in finite-state Markov chains.
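
A way to see the identifiability problem concretely: the observer records only a label attached to each state, so two states with the same label are indistinguishable. A minimal sketch that simulates the observable path of a continuous-time aggregated chain (the generator, the labels, and NumPy usage are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical generator of a 3-state CTMC; rows sum to zero.
Q = np.array([[-1.0,  0.7,  0.3],
              [ 0.5, -1.5,  1.0],
              [ 0.2,  0.8, -1.0]])

aggregate = {0: "a", 1: "a", 2: "b"}  # states 0 and 1 are indistinguishable

def observed_path(Q, x0, t_max, rng):
    """Simulate the CTMC and record only the times at which the
    observable (aggregated) label changes."""
    t, x = 0.0, x0
    path = [(0.0, aggregate[x0])]
    while True:
        t += rng.exponential(-1.0 / Q[x, x])   # exponential holding time
        if t >= t_max:
            return path
        jump = Q[x].clip(min=0.0) / -Q[x, x]   # embedded jump-chain row
        x = int(rng.choice(len(Q), p=jump))
        if aggregate[x] != path[-1][1]:        # observer sees label changes only
            path.append((t, aggregate[x]))

print(observed_path(Q, x0=0, t_max=5.0, rng=rng))
```

Two different generators can induce the same law for this observable path; deciding when that happens is the identifiability problem, and the minimal number of states needed to reproduce the law is the order.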


Author(s):  
Xiaoming Duan ◽  
Francesco Bullo

This article surveys recent advancements in strategy design for persistent robotic surveillance tasks, with a focus on stochastic approaches. The problem is how mobile robots can stochastically patrol a graph in an efficient way, where efficiency is defined with respect to relevant underlying performance metrics. We start by reviewing the basics of Markov chains, which are the primary motion models for stochastic robotic surveillance. We then discuss the two main criteria regarding the speed and unpredictability of surveillance strategies. The central objects that appear throughout the treatment are the hitting times of Markov chains, their distributions, and their expectations. We formulate various optimization problems based on the relevant metrics in different scenarios and establish their respective properties.
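
The expected hitting times that organize the survey satisfy a linear system: h_i = 1 + Σ_j P_ij h_j for every non-target node i, with h = 0 at the target. A minimal sketch solving this system on a hypothetical four-node patrol graph (the graph and all names are illustrative assumptions):

```python
import numpy as np

# Hypothetical transition matrix for a 4-node patrol graph.
P = np.array([[0.0, 0.5, 0.5, 0.0],
              [0.5, 0.0, 0.0, 0.5],
              [0.5, 0.0, 0.0, 0.5],
              [0.0, 0.5, 0.5, 0.0]])

def expected_hitting_times(P, target):
    """Solve h_i = 1 + sum_j P[i, j] h_j for i != target, h[target] = 0,
    i.e. (I - Q) h = 1 with Q = P restricted to the non-target states."""
    n = len(P)
    keep = [i for i in range(n) if i != target]
    Q = P[np.ix_(keep, keep)]
    h = np.linalg.solve(np.eye(n - 1) - Q, np.ones(n - 1))
    full = np.zeros(n)
    full[keep] = h
    return full

print(expected_hitting_times(P, target=0))  # mean steps to reach node 0
```

Fast strategies make these quantities small, while unpredictability pulls in the other direction; balancing the two is the tension the survey formalizes.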


2021 ◽  
Vol 58 (1) ◽  
pp. 177-196
Author(s):  
Servet Martínez

We consider a strictly substochastic matrix or a stochastic matrix with absorbing states. By using quasi-stationary distributions we show that there is an associated canonical Markov chain that is built from the resurrected chain, the absorbing states, and the hitting times, together with a random walk on the absorbing states, which is necessary for achieving time stationarity. Based upon the 2-stringing representation of the resurrected chain, we supply a stationary representation of the killed and the absorbed chains. The entropies of these representations have a clear meaning when one identifies the probability measure of natural factors. The balance between the entropies of these representations and the entropy of the canonical chain serves to check the correctness of the whole construction.
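
For a strictly substochastic matrix, a quasi-stationary distribution can be obtained as the normalized left Perron eigenvector, with the Perron eigenvalue acting as the per-step survival rate. A minimal sketch of that standard computation, not of the paper's canonical-chain construction (the two-state matrix is an illustrative assumption):

```python
import numpy as np

# Hypothetical strictly substochastic matrix: each row sums to < 1,
# the deficit being the per-step killing (absorption) probability.
K = np.array([[0.6, 0.3],
              [0.2, 0.5]])

def quasi_stationary(K):
    """Left Perron eigenvector of K, normalized to a probability vector:
    the long-run law of the chain conditioned on survival."""
    vals, vecs = np.linalg.eig(K.T)
    i = np.argmax(vals.real)        # Perron (largest real) eigenvalue
    v = np.abs(vecs[:, i].real)
    return vals[i].real, v / v.sum()

rho, mu = quasi_stationary(K)
print("per-step survival rate:", rho)  # 0.8 for this K
print("quasi-stationary law:", mu)     # (0.5, 0.5) for this K
```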


1990 ◽  
Vol 27 (3) ◽  
pp. 545-556 ◽  
Author(s):  
S. Kalpazidou

The asymptotic behaviour of the sequence $(\mathscr{C}_n(\omega), w_{c,n}(\omega)/n)$ is studied, where $\mathscr{C}_n(\omega)$ is the class of all cycles $c$ occurring along the trajectory $\omega$ of a recurrent strictly stationary Markov chain $(\xi_n)$ until time $n$, and $w_{c,n}(\omega)$ is the number of occurrences of the cycle $c$ until time $n$. This sequence of sample weighted classes converges almost surely to a class of directed weighted cycles $(\mathscr{C}_\infty, w_c)$, which represents the chain $(\xi_n)$ uniquely as a circuit chain, and $w_c$ is given a probabilistic interpretation.
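
The cycles occurring along a trajectory can be generated by a stack construction: visited states are pushed in order, and a return to a state already on the stack closes a cycle, which is popped off and counted. A minimal sketch estimating the empirical cycle weights $w_{c,n}(\omega)/n$ for a hypothetical recurrent chain (this is the standard construction; the paper's precise definition may differ in details):

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(2)

# Hypothetical recurrent chain on 3 states (no self-loops).
P = np.array([[0.0, 0.6, 0.4],
              [0.5, 0.0, 0.5],
              [0.3, 0.7, 0.0]])

def cycle_counts(P, x0, n, rng):
    """Count the cycles completed along a length-n trajectory: a revisit
    to a state on the stack closes a cycle, which is popped and recorded."""
    stack, counts, x = [x0], Counter(), x0
    for _ in range(n):
        x = int(rng.choice(len(P), p=P[x]))
        if x in stack:
            i = stack.index(x)
            counts[tuple(stack[i:])] += 1  # cycle recorded by its states
            del stack[i + 1:]
        else:
            stack.append(x)
    return counts

n = 10_000
for cycle, w in sorted(cycle_counts(P, 0, n, rng).items()):
    print(cycle, w / n)  # empirical weights w_{c,n}/n
```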


2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Nikolaos Halidias

In this note we study the probability of and the mean time to absorption for discrete-time Markov chains. In particular, we are interested in estimating the mean time to absorption when absorption is not certain, and we connect it with some other known results. By computing a suitable probability generating function, we are able to estimate the mean time to absorption when absorption is not certain, and we give some applications concerning the random walk. Furthermore, we investigate the probability that a Markov chain reaches a set $A$ before a set $B$, generalizing this result to a sequence of sets $A_1, A_2, \dots, A_k$.
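
The probability of reaching a set A before a set B is harmonic off A ∪ B: it satisfies u_i = Σ_j P_ij u_j on the free states, with u = 1 on A and u = 0 on B. A minimal sketch solving that system directly, checked against the classical symmetric random walk (illustrative assumptions; this is not the paper's generating-function method):

```python
import numpy as np

def reach_A_before_B(P, A, B):
    """Solve u_i = sum_j P[i, j] u_j off A and B, with u = 1 on A and
    u = 0 on B: the probability of hitting A before B from each state."""
    n = len(P)
    free = [i for i in range(n) if i not in set(A) | set(B)]
    Q = P[np.ix_(free, free)]
    r = P[np.ix_(free, sorted(A))].sum(axis=1)  # one-step entry into A
    u = np.linalg.solve(np.eye(len(free)) - Q, r)
    full = np.zeros(n)
    full[sorted(A)] = 1.0
    full[free] = u
    return full

# Symmetric random walk on {0, ..., 4}, absorbed at both ends.
P = np.zeros((5, 5))
P[0, 0] = P[4, 4] = 1.0
for i in range(1, 4):
    P[i, i - 1] = P[i, i + 1] = 0.5

print(reach_A_before_B(P, A={4}, B={0}))  # gives i/4 from state i
```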


2021 ◽  
Author(s):  
Andrea Marin ◽  
Carla Piazza ◽  
Sabina Rossi

In this paper we deal with the lumpability approach to coping with the state-space explosion problem inherent in computing the stationary performance indices of large stochastic models. The lumpability method is based on a state-aggregation technique and applies to Markov chains exhibiting some structural regularity; moreover, it allows one to compute the exact values of the stationary performance indices efficiently when the model is actually lumpable. The notion of quasi-lumpability is based on the idea that a Markov chain can be altered by relatively small perturbations of the transition rates so that the resulting Markov chain is lumpable; in this case, only upper and lower bounds on the performance indices can be derived. Here we introduce a novel notion of quasi-lumpability, named proportional lumpability, which extends the original definition of lumpability but, unlike the general notion of quasi-lumpability, allows one to derive exact stationary performance indices for the original process. We then introduce the notion of proportional bisimilarity for the terms of the performance process algebra PEPA; proportional bisimilarity induces a proportional lumpability on the underlying continuous-time Markov chains. Finally, we prove some compositionality results and show the applicability of our theory through examples.
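
For ordinary (strong) lumpability the condition is that every state of a block has the same aggregate rate into each other block; proportional lumpability, as the abstract explains, relaxes this up to proportionality constants. A minimal check of the ordinary condition only, not of the paper's proportional variant (the generator and the partition are illustrative assumptions):

```python
import numpy as np

def is_lumpable(Q, partition):
    """Ordinary (strong) lumpability of a CTMC generator Q: within each
    block, every state must have the same total rate into each other block."""
    for block in partition:
        for target in partition:
            if target is block:
                continue
            rates = [Q[i, target].sum() for i in block]
            if not np.allclose(rates, rates[0]):
                return False
    return True

# Hypothetical 3-state generator; rows sum to zero.
Q = np.array([[-2.0,  1.0,  1.0],
              [ 3.0, -4.0,  1.0],
              [ 2.0,  2.0, -4.0]])

print(is_lumpable(Q, [[0, 1], [2]]))  # True: both states send rate 1 to {2}
```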


1963 ◽  
Vol s1-38 (1) ◽  
pp. 359-371 ◽  
Author(s):  
John G. Kemeny ◽  
J. Laurie Snell
