Bayesian analysis of the two component mixture of inverted exponential distribution under quadratic loss function

2012 ◽  
Vol 7 (9) ◽  
Author(s):  
Muhammad Younas Majeed


Author(s):
Terna Godfrey Ieren ◽  
Adana’a Felix Chama ◽  
Olateju Alao Bamigbala ◽  
Jerry Joel ◽  
Felix M. Kromtit ◽  
...  

The Gompertz inverse exponential distribution is a three-parameter lifetime model with greater flexibility and performance for analyzing real-life data. It has one scale parameter and two shape parameters responsible for the flexibility of the distribution. Despite the importance of parameter estimation in model fitting and application, no particular estimation method has been established as best for any of the three parameters of the Gompertz inverse exponential distribution. This article develops Bayesian estimators for a shape parameter of the Gompertz inverse exponential distribution using two non-informative priors (Jeffreys and uniform) and one informative prior (gamma) under the squared error loss function (SELF), quadratic loss function (QLF), and precautionary loss function (PLF). These results are compared with their maximum likelihood counterparts using Monte Carlo simulations. Our results indicate that Bayesian estimators under the quadratic loss function (QLF), with any of the three prior distributions, provide the smallest mean square error for all sample sizes and parameter values.
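The Bayes estimators under these three loss functions have standard closed forms as functionals of the posterior: the posterior mean for SELF, E[θ⁻¹]/E[θ⁻²] for QLF, and sqrt(E[θ²]) for PLF. A minimal Python sketch of how such estimates could be computed from posterior draws follows; the gamma draws here are only a stand-in posterior sample for illustration, not the paper's actual posterior for the Gompertz inverse exponential shape parameter.

```python
import numpy as np

def bayes_estimates(theta_draws):
    """Bayes estimators of a positive parameter from posterior draws.

    Standard minimizers of posterior expected loss:
      SELF L = (theta - d)^2          ->  d = E[theta]
      QLF  L = ((theta - d)/theta)^2  ->  d = E[1/theta] / E[1/theta^2]
      PLF  L = (theta - d)^2 / d      ->  d = sqrt(E[theta^2])
    """
    t = np.asarray(theta_draws)
    return {
        "SELF": t.mean(),
        "QLF": np.mean(1.0 / t) / np.mean(1.0 / t**2),
        "PLF": np.sqrt(np.mean(t**2)),
    }

# Stand-in posterior draws for a shape parameter (hypothetical values)
rng = np.random.default_rng(1)
draws = rng.gamma(shape=5.0, scale=0.4, size=10_000)
print(bayes_estimates(draws))
```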


2012 ◽  
Vol 190-191 ◽  
pp. 977-981 ◽  
Author(s):  
Xian Bin Wu

This paper presents a Bayesian analysis of zero-failure data with two hyperparameters a and b. The prior distribution of the failure probability p_i is taken to be its conjugate distribution Beta(p_{i-1}, 1, 1, b), with the hyperparameter b uniformly distributed on (1, c). Under the quadratic loss function, for p_i in (p_{i-1}, 1), the E-Bayesian estimate of p_i is derived; for 0 < c < s_i, two conditions (I) and (II) are given, and the relations satisfied by the resulting estimates are established. Properties of the E-Bayesian estimation are given. A simulation example shows that the method is both efficient and easy to operate.
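To illustrate the E-Bayesian idea (averaging the Bayes estimate over a hyperprior on b), here is a minimal Python sketch. It assumes a simplified, untruncated conjugate setup with a Beta(1, b) prior and the posterior mean as the Bayes estimate, since the paper's exact truncated prior and quadratic-loss formulas are not reproduced in the abstract.

```python
import numpy as np
from scipy.integrate import quad

def bayes_zero_failure(n, b):
    # With 0 failures in n trials and a Beta(1, b) prior, the posterior
    # is Beta(1, n + b), so the posterior mean is 1 / (n + b + 1).
    return 1.0 / (n + b + 1.0)

def e_bayes_zero_failure(n, c):
    # E-Bayesian estimate: average the Bayes estimate over the
    # uniform hyperprior b ~ U(1, c).
    val, _ = quad(lambda b: bayes_zero_failure(n, b), 1.0, c)
    return val / (c - 1.0)

print(e_bayes_zero_failure(n=20, c=5.0))
# Closed form for this simplified setup: ln((n + c + 1)/(n + 2)) / (c - 1)
print(np.log((20 + 5.0 + 1) / (20 + 2)) / (5.0 - 1))
```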


2015 ◽  
Vol 38 (2) ◽  
pp. 431-452
Author(s):  
Muhammad Tahir ◽  
Muhammad Aslam

A Bayesian analysis of a 3-component mixture of exponential distributions under a type-I right censoring scheme is considered in this paper. The Bayes estimators and posterior risks for the unknown parameters are derived under the squared error loss function, precautionary loss function, and DeGroot loss function, assuming non-informative (uniform and Jeffreys) priors. The Bayes estimators and posterior risks are viewed as a function of the test termination time. A simulation study is given to highlight and compare the properties of the Bayes estimates.
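As an illustration of the sampling scheme described, the following Python sketch simulates lifetimes from a 3-component exponential mixture and applies type-I right censoring at a fixed test termination time; the mixing weights, rates, and termination time are hypothetical.

```python
import numpy as np

def simulate_mixture_censored(n, weights, rates, t_end, seed=0):
    """Draw n lifetimes from a 3-component exponential mixture and
    apply type-I right censoring at the fixed termination time t_end.

    Returns observed times and an indicator (1 = failure observed
    before t_end, 0 = censored at t_end)."""
    rng = np.random.default_rng(seed)
    comp = rng.choice(len(weights), size=n, p=weights)     # latent component
    life = rng.exponential(1.0 / np.asarray(rates)[comp])  # exponential lifetimes
    observed = np.minimum(life, t_end)
    delta = (life <= t_end).astype(int)
    return observed, delta

# Hypothetical mixture and termination time
times, delta = simulate_mixture_censored(
    n=200, weights=[0.5, 0.3, 0.2], rates=[1.0, 2.0, 4.0], t_end=1.5)
print(delta.mean())  # proportion of uncensored observations
```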


Author(s):  
Elizabeth Cudney ◽  
Bonnie Paris

Using the quadratic loss function is one way to quantify a fundamental value in the provision of health care services: we must provide the best care and best service to every patient, every time. Sole reliance on specification limits leads to a focus on “acceptable” performance rather than “ideal” performance. This paper presents the application of the quadratic loss function to quantify improvement opportunities in the healthcare industry.
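A minimal sketch of the quadratic loss computation this rests on, assuming the standard Taguchi form L(y) = k(y − T)² with k calibrated so the loss at a specification limit equals an assumed cost; the turnaround-time numbers below are hypothetical.

```python
def quadratic_loss(y, target, spec_limit, cost_at_limit):
    """Quadratic (Taguchi-style) loss L(y) = k * (y - target)^2, with k
    chosen so the loss equals cost_at_limit at the specification limit."""
    k = cost_at_limit / (spec_limit - target) ** 2
    return k * (y - target) ** 2

# Hypothetical example: target turnaround of 30 min, spec limit 60 min,
# assumed cost of 100 (arbitrary units) at the limit.
for y in (30, 40, 50, 60):
    print(y, quadratic_loss(y, target=30, spec_limit=60, cost_at_limit=100))
```

Unlike a pass/fail specification check, the loss here grows continuously with deviation from target, so performance at 50 minutes is penalized even though it is still "within spec."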


1997 ◽  
Vol 9 (6) ◽  
pp. 1211-1243 ◽  
Author(s):  
David H. Wolpert

This article presents several additive corrections to the conventional quadratic loss bias-plus-variance formula. One of these corrections is appropriate when the target is not fixed (as in Bayesian analysis) and training sets are averaged over (as in the conventional bias-plus-variance formula). Another additive correction casts conventional fixed-training-set Bayesian analysis directly in terms of bias plus variance. Another correction is appropriate for measuring full generalization error over a test set rather than (as with conventional bias plus variance) error at a single point. Yet another correction can help explain the recent counterintuitive bias-variance decomposition of Friedman for zero-one loss. After presenting these corrections, this article discusses some other loss-function-specific aspects of supervised learning. In particular, it discusses the fact that if the loss function is a metric (e.g., zero-one loss), then there is a bound on the change in generalization error that accompanies changing the algorithm's guess from h1 to h2, a bound that depends only on h1 and h2 and not on the target. This article ends by presenting versions of the bias-plus-variance formula appropriate for logarithmic and quadratic scoring, together with all the additive corrections appropriate to those formulas. All the correction terms presented are a covariance between the learning algorithm and the posterior distribution over targets. Accordingly, in the (very common) contexts in which those terms apply, there is not a "bias-variance trade-off" or a "bias-variance dilemma," as one often hears; rather, there is a bias-variance-covariance trade-off.
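The conventional fixed-target, averaged-over-training-sets decomposition that these corrections extend can be checked numerically. The following Python sketch uses a toy linear learner and verifies that, at a single test point, squared bias plus variance equals mean squared error; the target function, noise level, and sample sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

def fit_predict(train_x, train_y, x0):
    # Toy learner: degree-1 polynomial fit, evaluated at x0.
    return np.polyval(np.polyfit(train_x, train_y, 1), x0)

f = lambda x: np.sin(x)            # fixed target function
x0, sigma, n, trials = 1.0, 0.3, 20, 5000

preds = np.empty(trials)
for t in range(trials):
    xs = rng.uniform(0, np.pi, n)          # fresh training set each trial
    ys = f(xs) + rng.normal(0, sigma, n)   # noisy targets
    preds[t] = fit_predict(xs, ys, x0)

bias2 = (preds.mean() - f(x0)) ** 2
variance = preds.var()
mse = np.mean((preds - f(x0)) ** 2)
print(bias2, variance, bias2 + variance, mse)  # bias^2 + variance == MSE
```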

