On aggregated Markov processes

1986 ◽  
Vol 23 (01) ◽  
pp. 208-214 ◽  
Author(s):  
Donald R. Fredkin ◽  
John A. Rice

A finite-state Markov process is aggregated into several groups. What can be learned about the underlying process from the aggregated one? We provide some partial answers to this question.
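One reason the answers are only partial is that the aggregated process is in general not Markov, so some information about the underlying chain is irreversibly lost. A minimal Python sketch of this (with an illustrative 3-state chain and grouping, not taken from the paper):

```python
import numpy as np

# A 3-state chain aggregated into two groups, {0, 1} -> A and {2} -> B.
# The transition probabilities are illustrative, not from the paper.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.5, 0.3],
              [0.4, 0.4, 0.2]])
groups = {"A": [0, 1], "B": [2]}

# Stationary distribution: left eigenvector of P for eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi /= pi.sum()

def p_next_B(prev_group):
    """P(y[t+1] = B | y[t] = A, y[t-1] = prev_group) under stationarity,
    where y is the aggregated (group-valued) process."""
    num = den = 0.0
    for i in groups[prev_group]:
        for j in groups["A"]:
            p_ij = pi[i] * P[i, j]
            den += p_ij
            num += p_ij * sum(P[j, k] for k in groups["B"])
    return num / den

pA, pB = p_next_B("A"), p_next_B("B")
# pA != pB: the next group depends on more than the current group,
# so the aggregated process is not itself Markov for this chain.
```

Here the conditional law of the next group depends on the group visited two steps back, which is exactly the non-Markov behaviour aggregation can introduce.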



Author(s):  
Peter J. Sherman

Health condition monitoring often entails detecting changes in the structure of associated random processes. A common trigger for an alarm is the process amplitude exceeding a specified threshold for a certain period of time. A less common trigger is a significant change in the process bandwidth. This latter type of change occurs, for example, in EEG just prior to the onset of an epileptic seizure [Sherman (2008)]. One can monitor the process directly, or one can convert it to a 0/1 process, where 0 denotes ‘within’ and 1 denotes ‘outside of’ a specified tolerance. Such a process goes by many names, among them binary process and Bernoulli process. If the underlying process is a Gauss-Markov process, then the associated 0/1 process becomes a Markov 0/1 process. The main parameters associated with such a process are the following probabilities: (i), (ii), and (iii). In this work we use the variance expressions for the estimators of these probabilities that were reported in Sherman (2011) in order to detect changes with specified false alarm probabilities. We demonstrate their value in detecting bandwidth change via zero-crossing estimates, and detecting amplitude change via threshold excursion estimates.
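The two kinds of estimates mentioned above can be sketched on a simulated AR(1) (Gauss-Markov) process. The parameter values, tolerance, and sample size below are illustrative assumptions, and the variance expressions from Sherman (2011) are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

# AR(1) (Gauss-Markov) process x[t] = a*x[t-1] + e[t], scaled so the
# stationary variance is 1. Parameter values are illustrative.
a, n = 0.9, 200_000
e = rng.normal(scale=np.sqrt(1 - a**2), size=n)
x = np.empty(n)
x[0] = rng.normal()
for t in range(1, n):
    x[t] = a * x[t - 1] + e[t]

# Threshold-excursion (0/1) process: 0 = within tolerance, 1 = outside.
tol = 1.0
b = (np.abs(x) > tol).astype(int)
p_out = b.mean()                    # estimate of P(outside tolerance)
p_stay = b[1:][b[:-1] == 1].mean()  # estimate of P(outside | outside at t-1)

# Zero-crossing rate: for a stationary Gaussian sequence with lag-1
# correlation a, P(sign change between samples) = arccos(a) / pi,
# so a shift in this rate indicates a bandwidth change.
zc_rate = np.mean(x[1:] * x[:-1] < 0)
```

A bandwidth change shifts `zc_rate` away from arccos(a)/π, while an amplitude change shifts `p_out`; alarm thresholds for either would come from the estimator variances discussed in the abstract.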


Author(s):  
Steven N. Evans

We consider a class of measure-valued Markov processes constructed by taking a superprocess over some underlying Markov process and conditioning it to stay alive forever. We obtain two representations of such a process. The first representation is in terms of an “immortal particle” that moves around according to the underlying Markov process and throws off pieces of mass, which then proceed to evolve in the same way that mass evolves for the unconditioned superprocess. As a consequence of this representation, we show that the tail σ-field of the conditioned superprocess is trivial if the tail σ-field of the underlying process is trivial. The second representation is analogous to one obtained by Le Gall in the unconditioned case. It represents the conditioned superprocess in terms of a certain process taking values in the path space of the underlying process. This representation is useful for studying the “transience” and “recurrence” properties of the closed support process.


1989 ◽  
Vol 26 (4) ◽  
pp. 744-756 ◽  
Author(s):  
Gerardo Rubino ◽  
Bruno Sericola

Sojourn times of Markov processes in subsets of the finite state space are considered. We give a closed form of the distribution of the nth sojourn time in a given subset of states. The asymptotic behaviour of this distribution as time goes to infinity is analyzed, in both the discrete-time and continuous-time cases. We consider the usual pseudo-aggregated Markov process canonically constructed from the previous one by collapsing the states of each subset of a given partition. The relation between limits of moments of the sojourn time distributions in the original Markov process and the moments of the corresponding holding times of the pseudo-aggregated one is also studied.
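In the discrete-time case, a sojourn in a subset B has a discrete phase-type distribution: with entry distribution α over B and sub-matrix P_BB, the survival function is P(S > k) = α P_BB^k 1. A small numerical sketch (the chain and subset are illustrative, not from the paper):

```python
import numpy as np

# Illustrative 4-state chain; subset of interest B = {0, 1}.
P = np.array([[0.5, 0.2, 0.2, 0.1],
              [0.1, 0.6, 0.2, 0.1],
              [0.3, 0.1, 0.4, 0.2],
              [0.2, 0.2, 0.3, 0.3]])
B, Bc = [0, 1], [2, 3]
P_BB = P[np.ix_(B, B)]

# Stationary distribution of the full chain.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi /= pi.sum()

# Distribution over B at the first step of a stationary sojourn
# (i.e. conditioned on having just entered B from outside).
alpha = pi[Bc] @ P[np.ix_(Bc, B)]
alpha /= alpha.sum()

# Survival function P(S > k) = alpha @ P_BB^k @ 1 and the mean
# sojourn length alpha @ (I - P_BB)^{-1} @ 1.
ones = np.ones(len(B))
surv = [alpha @ np.linalg.matrix_power(P_BB, k) @ ones for k in range(8)]
mean_S = alpha @ np.linalg.inv(np.eye(len(B)) - P_BB) @ ones
```

Collapsing B to a single state replaces this phase-type law with a geometric holding time, which is why the pseudo-aggregated process generally matches only some moments of the original sojourn times.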




2001 ◽  
Vol 38 (1) ◽  
pp. 195-208 ◽  
Author(s):  
Sophie Bloch-Mercier

We consider a repairable system with a finite state space which evolves in time according to a Markov process as long as it is working. We assume that this system is getting worse and worse while running: if the up-states are ranked according to their degree of increasing degradation, this is expressed by the fact that the Markov process is assumed to be monotone with respect to the reversed hazard rate and to have an upper triangular generator. We study this kind of process and apply the results to derive some properties of the stationary availability of the system. Namely, we show that, if the duration of the repair is independent of its completeness degree, then the more complete the repair, the higher the stationary availability, where the completeness degree of the repair is measured with the reversed hazard rate ordering.
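A toy numerical illustration of the availability comparison, under the simplifying assumption of exponential repair times (so the whole system is a CTMC; the generator below is hypothetical). Degradation moves only toward worse up-states, so the up-part of the generator is upper triangular, and routing the repair to a less degraded state raises the stationary availability:

```python
import numpy as np

# Up states 0 (new), 1, 2 (worn), ordered by increasing degradation;
# state 3 is down. The up-part of Q is upper triangular, as in the
# abstract. All rates are illustrative assumptions.
Q = np.array([[-0.3,  0.2,  0.1,  0.0],
              [ 0.0, -0.4,  0.3,  0.1],
              [ 0.0,  0.0, -0.5,  0.5],
              [ 2.0,  0.0,  0.0, -2.0]])  # complete repair: back to state 0

def stationary(Q):
    """Solve pi @ Q = 0 with pi summing to 1."""
    n = Q.shape[0]
    A = np.vstack([Q.T, np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

pi = stationary(Q)
avail_complete = pi[:3].sum()  # stationary availability, complete repair

# Less complete repair at the same rate: return to degraded state 1.
Q2 = Q.copy()
Q2[3] = [0.0, 2.0, 0.0, -2.0]
avail_partial = stationary(Q2)[:3].sum()
# avail_complete > avail_partial, consistent with the abstract's ordering.
```

This only exhibits the monotonicity on one example; the abstract's result is the general statement, with repair completeness compared in the reversed hazard rate ordering.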


1998 ◽  
Vol 35 (2) ◽ 
pp. 313-324 ◽  
Author(s):  
Bret Larget

A deterministic function of a Markov process is called an aggregated Markov process. We give necessary and sufficient conditions for the equivalence of continuous-time aggregated Markov processes. For both discrete- and continuous-time, we show that any aggregated Markov process which satisfies mild regularity conditions can be directly converted to a canonical representation which is unique for each class of equivalent models, and furthermore, is a minimal parameterization of all that can be identified about the underlying Markov process. Hidden Markov models on finite state spaces may be framed as aggregated Markov processes by expanding the state space and thus also have canonical representations.
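The last claim can be made concrete: an HMM with hidden-transition matrix A and emission matrix E is an aggregated Markov process on pairs (hidden state, symbol), with the aggregation map keeping only the symbol. A sketch with made-up matrices (not from the paper):

```python
import numpy as np

# Hypothetical 2-state HMM with binary emissions.
A = np.array([[0.9, 0.1],
              [0.2, 0.8]])   # hidden-state transition probabilities
E = np.array([[0.7, 0.3],
              [0.1, 0.9]])   # emission probabilities per hidden state

# Expand to a Markov chain on pairs (hidden, symbol):
# P[(i, a) -> (j, b)] = A[i, j] * E[j, b].
n, m = A.shape[0], E.shape[1]
P = np.zeros((n * m, n * m))
for i in range(n):
    for a in range(m):
        for j in range(n):
            for b in range(m):
                P[i * m + a, j * m + b] = A[i, j] * E[j, b]

# The deterministic aggregation map sends state (i, a) to the symbol a;
# the aggregated process is exactly the HMM's observed output.
group = np.array([a for i in range(n) for a in range(m)])
```

Since the expanded chain is an aggregated Markov process, the canonical representation described in the abstract applies to the HMM as well.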






Author(s):  
Uwe Franz

We show how classical Markov processes can be obtained from quantum Lévy processes. It is shown that quantum Lévy processes are quantum Markov processes, and sufficient conditions for restrictions to subalgebras to remain quantum Markov processes are given. A classical Markov process (which has the same time-ordered moments as the quantum process in the vacuum state) exists whenever we can restrict to a commutative subalgebra without losing the quantum Markov property. Several examples, including the Azéma martingale, with explicit calculations are presented. In particular, the action of the generator of the classical Markov processes on polynomials or their moments is calculated using Hopf algebra duality.

