Optimality of the one-step look-ahead stopping times

1977 ◽  
Vol 14 (1) ◽  
pp. 162-169 ◽  
Author(s):  
M. Abdel-Hameed

The optimality of the one-step look-ahead stopping rule is shown to hold under conditions different from those discussed by Chow, Robbins and Siegmund [5]. These results are corollaries of the following theorem: Let {X_n, n = 0, 1, …}, X_0 = x, be a discrete-time homogeneous Markov process with state space (E, ℬ). For any ℬ-measurable function g and α in (0, 1], define A_αg(x) = αE_xg(X_1) − g(x) to be the infinitesimal generator of g. If τ is any stopping time satisfying the condition E_x[α^N g(X_N) I(τ > N)] → 0 as N → ∞, then E_x[α^τ g(X_τ)] = g(x) + E_x[Σ_{n=0}^{τ−1} α^n A_αg(X_n)]. Applications of the results are considered.
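The theorem's conclusion is a discounted, discrete-time analogue of Dynkin's formula, and it can be checked numerically. A minimal Monte Carlo sketch in Python, assuming a hypothetical 3-state chain with an illustrative transition matrix, reward g, and a capped first-passage stopping time (none of these come from the paper):

import numpy as np

rng = np.random.default_rng(0)
P = np.array([[0.5, 0.3, 0.2],      # illustrative transition matrix
              [0.2, 0.5, 0.3],
              [0.3, 0.3, 0.4]])
g = np.array([1.0, 2.0, 5.0])       # illustrative reward function
alpha = 0.9
Ag = alpha * P @ g - g              # A_alpha g(x) = alpha E_x g(X_1) - g(x)

def one_run(x0):
    # tau = first visit to state 2, capped at 200 steps, so the
    # condition E_x[alpha^N g(X_N) I(tau > N)] -> 0 holds trivially
    x, acc, n = x0, 0.0, 0
    while x != 2 and n < 200:
        acc += alpha**n * Ag[x]
        x = rng.choice(3, p=P[x])
        n += 1
    return alpha**n * g[x], acc

lhs, rhs = np.mean([one_run(0) for _ in range(20000)], axis=0)
print(lhs, g[0] + rhs)              # the two estimates should nearly agree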


1999 ◽  
Vol 36 (1) ◽  
pp. 48-59 ◽  
Author(s):  
George V. Moustakides

Let ξ_0, ξ_1, ξ_2, … be a homogeneous Markov process and let S_n denote the partial sum S_n = θ(ξ_1) + … + θ(ξ_n), where θ(ξ) is a scalar nonlinearity. If N is a stopping time with 𝔼N < ∞ and the Markov process satisfies certain ergodicity properties, we show that 𝔼S_N = [lim_{n→∞} 𝔼θ(ξ_n)]𝔼N + 𝔼ω(ξ_0) − 𝔼ω(ξ_N). The function ω(ξ) is a well-defined scalar nonlinearity directly related to θ(ξ) through a Poisson integral equation, with the characteristic that ω(ξ) becomes zero in the i.i.d. case. Consequently, our result constitutes an extension of Wald's first lemma to the case of Markov processes. We also show that, as 𝔼N → ∞, the correction term is negligible compared to 𝔼N, in the sense that 𝔼ω(ξ_0) − 𝔼ω(ξ_N) = o(𝔼N).
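The identity is easy to illustrate numerically. A sketch in Python, assuming a hypothetical two-state chain, a first-passage stopping time, and ω obtained by solving the (here finite-state) Poisson equation (I − P)ω = Pθ − μ; all numbers are illustrative, not from the paper:

import numpy as np

rng = np.random.default_rng(1)
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
theta = np.array([1.0, -2.0])
# stationary pi and the limit mu = lim E theta(xi_n) = pi @ theta
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))]); pi /= pi.sum()
mu = pi @ theta
# one solution of the Poisson equation; the identity uses only differences of omega
omega = np.linalg.lstsq(np.eye(2) - P, P @ theta - mu, rcond=None)[0]

def one_run(x0=1):
    # N = first n >= 1 with xi_n = 0; s accumulates theta(xi_1) + ... + theta(xi_N)
    x, s, n = x0, 0.0, 0
    while True:
        x = rng.choice(2, p=P[x]); n += 1; s += theta[x]
        if x == 0:
            return s, n, omega[x0] - omega[x]

S, N, corr = np.mean([one_run() for _ in range(50000)], axis=0)
print(S, mu * N + corr)             # both sides should nearly agree (about -2)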


1983 ◽  
Vol 20 (1) ◽  
pp. 185-190 ◽  
Author(s):  
Mark Scott ◽  
Dean L. Isaacson

By assuming that the intensity functions of a continuous-time non-homogeneous Markov process are proportional at each time point, strong ergodicity of the process is established through strong ergodicity of a related discrete-time Markov process. For processes having proportional intensities, strong ergodicity implies that the limiting matrix L satisfies L · P(s, t) = L, where P(s, t) is the matrix of transition functions.
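The L · P(s, t) = L property is easy to see numerically when the intensities are proportional, since Q(t) = c(t)Q0 makes the intensity matrices commute and P(s, t) = exp(m(s, t) Q0), where m(s, t) is the integral of c over [s, t]. A Python sketch with an illustrative Q0 and c(t), neither taken from the paper:

import numpy as np
from scipy.linalg import expm
from scipy.integrate import quad

Q0 = np.array([[-1.0, 0.6, 0.4],    # illustrative conservative intensity matrix
               [ 0.5, -0.8, 0.3],
               [ 0.2, 0.7, -0.9]])
c = lambda t: 1.0 + 0.5 * np.sin(t) # positive proportionality factor

def transition(s, t):
    m, _ = quad(c, s, t)            # m(s, t) = integral of c over [s, t]
    return expm(m * Q0)             # valid because Q(u) = c(u) Q0 all commute

# pi solves pi @ Q0 = 0; the limiting matrix L has every row equal to pi
evals, evecs = np.linalg.eig(Q0.T)
pi = np.real(evecs[:, np.argmin(np.abs(evals))]); pi /= pi.sum()
L = np.tile(pi, (3, 1))
print(np.max(np.abs(L @ transition(0.3, 4.1) - L)))   # ~0 up to roundoff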


1988 ◽  
Vol 25 (3) ◽  
pp. 544-552 ◽  
Author(s):  
Masami Yasuda

This paper treats stopping problems on Markov chains in which the OLA (one-step look-ahead) policy is optimal. The associated optimal value can be expressed explicitly by a potential for a charge function: the difference between the immediate reward and the one-step-after reward. As an application to the best choice problem, we obtain the value of three problems: the classical secretary problem, a problem with a refusal probability, and a problem with a random number of objects.
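For the classical secretary problem mentioned above, the optimal (OLA) rule is the familiar threshold rule: pass the first r* − 1 objects, then stop at the first relatively best one. A short Python sketch computing r* and the win probability from the standard formulas (a textbook computation, not code from the paper):

def secretary(n):
    # r* = smallest r with 1/r + 1/(r+1) + ... + 1/(n-1) <= 1
    r_star, tail = 1, 0.0
    for r in range(n - 1, 0, -1):
        tail += 1.0 / r
        if tail > 1.0:
            r_star = r + 1
            break
    if r_star == 1:
        return 1, 1.0 / n
    # win probability of the threshold rule: ((r-1)/n) * sum_{j=r}^{n} 1/(j-1)
    p_win = (r_star - 1) / n * sum(1.0 / (j - 1) for j in range(r_star, n + 1))
    return r_star, p_win

print(secretary(100))   # (38, ~0.371): pass 37 objects, win prob near 1/e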


2015 ◽  
Vol 713-715 ◽  
pp. 760-763 ◽  
Author(s):  
Jia Lei Zhang ◽  
Zhen Lin Jin ◽  
Dong Mei Zhao

We analyzed some reliability problems of the 2UPS+UP mechanism using a continuous-time Markov repairable model in our previous work. Because the inspection and repair of the robot are periodic, a discrete-time Markov repairable model is more appropriate. We first build the discrete-time repairable model and obtain its one-step transition probability matrix. We then solve the steady-state equations to obtain the steady-state availability of the mechanical leg, and by solving the difference equations we obtain the reliability and the mean time to first failure. Finally, we compare these reliability indexes with those of the continuous-time model.
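The two computations the abstract describes have standard discrete-time forms: steady-state availability from the stationary distribution of the one-step matrix, and mean time to first failure from the fundamental matrix of the working (transient) states. A generic Python sketch with an illustrative 3-state matrix (two working states, one failed state); the numbers are not the 2UPS+UP mechanism's actual data:

import numpy as np

P = np.array([[0.90, 0.07, 0.03],    # states 0, 1: working; state 2: failed
              [0.05, 0.85, 0.10],
              [0.60, 0.00, 0.40]])   # repair returns the system to state 0

# steady-state distribution: pi = pi @ P with sum(pi) = 1
A = np.vstack([P.T - np.eye(3), np.ones(3)])
pi = np.linalg.lstsq(A, np.array([0.0, 0.0, 0.0, 1.0]), rcond=None)[0]
availability = pi[0] + pi[1]

# MTTF: make the failed state absorbing; N = (I - T)^(-1) is the
# fundamental matrix of the working-state block T, and its row sums
# give the expected number of periods before first failure
T = P[:2, :2]
mttf = (np.linalg.inv(np.eye(2) - T) @ np.ones(2))[0]   # starting in state 0
print(availability, mttf)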


1969 ◽  
Vol 6 (3) ◽  
pp. 704-707 ◽  
Author(s):  
Thomas L. Vlach ◽  
Ralph L. Disney

The departure process from the GI/G/1 queue is shown to be a semi-Markov process imbedded at departure points with a two-dimensional state space. Transition probabilities for this process are defined and derived from the distributions of the arrival and service processes. The one-step transition probabilities and a stationary distribution are obtained for the imbedded two-dimensional Markov chain.
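A simulation sketch of the imbedded process at departure points, assuming illustrative gamma interarrival and uniform service distributions. At each departure it records the number of customers left behind together with the time since the last arrival, the kind of two-dimensional state the abstract refers to (the count alone is Markov only for Poisson arrivals):

import numpy as np

rng = np.random.default_rng(2)
n = 100000
arr = np.cumsum(rng.gamma(2.0, 1.0, n))   # arrival epochs, mean spacing 2.0
svc = rng.uniform(0.5, 2.5, n)            # service times, mean 1.5 (rho = 0.75)

start = np.empty(n); dep = np.empty(n)
for i in range(n):                        # standard GI/G/1 recursion
    start[i] = arr[i] if i == 0 else max(arr[i], dep[i - 1])
    dep[i] = start[i] + svc[i]

k = np.searchsorted(arr, dep)             # arrivals before each departure
left_behind = k - np.arange(1, n + 1)     # customers left behind at departure
since_arrival = dep - arr[np.minimum(k, n) - 1]   # second state coordinate
print(np.bincount(left_behind[:n // 2])[:6] / (n // 2))  # empirical counts
print(list(zip(left_behind[:3], since_arrival[:3].round(3))))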


1985 ◽  
Vol 8 (3) ◽  
pp. 549-554 ◽  
Author(s):  
Z. Govindarajulu

A simple proof of the exponential boundedness of the stopping time of one-sample sequential probability ratio tests (SPRTs) is obtained.
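The result can be illustrated by Monte Carlo: the tail P(N > n) of an SPRT's stopping time decays geometrically, so successive tail estimates shrink by a roughly constant factor. A Python sketch for a Bernoulli SPRT (H0: p = 0.4 versus H1: p = 0.6, data generated at the drift-free p = 0.5; all choices illustrative):

import numpy as np

rng = np.random.default_rng(3)
llr_1 = np.log(0.6 / 0.4)           # log-likelihood ratio of a success
llr_0 = np.log(0.4 / 0.6)           # ... and of a failure
a, b = np.log(1 / 19), np.log(19)   # Wald boundaries for alpha = beta = 0.05

def stopping_time(p=0.5, cap=2000):
    s, n = 0.0, 0
    while a < s < b and n < cap:
        s += llr_1 if rng.random() < p else llr_0
        n += 1
    return n

N = np.array([stopping_time() for _ in range(20000)])
for n in (50, 100, 150, 200):
    print(n, np.mean(N > n))        # tails shrink by a roughly constant factor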


2003 ◽  
Vol 40 (4) ◽  
pp. 1147-1154 ◽  
Author(s):  
Aiko Kurushima ◽  
Katsunori Ano

This note studies a Poisson arrival selection problem for the full-information case with an unknown intensity λ, which has a Gamma prior density G(r, 1/a), where a > 0 and r is a natural number. For the no-information case with the same setting, the problem is monotone and the one-step look-ahead rule is an optimal stopping rule; in contrast, this note proves that the full-information case is not a monotone stopping problem.
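For background, the G(r, 1/a) prior is conjugate for the unknown Poisson intensity: observing k arrivals in (0, t] updates it to G(r + k, 1/(a + t)). A minimal Python sketch of this update (parameter values illustrative):

# prior G(r, 1/a): shape r, rate a, so the prior mean of lambda is r / a
r, a = 3, 2.0
k, t = 5, 4.0                  # observed: k arrivals by time t
r_post, a_post = r + k, a + t  # posterior is G(r + k, 1/(a + t))
print(r_post / a_post)         # posterior mean of the unknown intensity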

