Algorithm AS 248: Empirical Distribution Function Goodness-of-Fit Tests

Author(s): Charles S. Davis, Michael A. Stephens

2017 · Vol 40 (2) · pp. 223-241
Author(s): Ehsan Zamanzade, Mahdi Mahdizadeh

This article deals with entropy estimation using ranked set sampling (RSS). Estimators are developed based on the empirical distribution function and on its nonparametric maximum likelihood competitor. The suggested entropy estimators have smaller root mean squared errors than other entropy estimators in the literature. The proposed estimators are then used to construct goodness-of-fit tests for the inverse Gaussian distribution.
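The abstract does not spell out the authors' estimators, so the following is only a minimal sketch of the general idea: draw a ranked set sample (assuming perfect ranking) and apply a classical spacings-based entropy estimator built on the empirical quantile function (Vasicek's estimator). The function names, the set size, and the IG(1, 2) parameters are illustrative choices, not taken from the article.

```python
import numpy as np

def ranked_set_sample(draw, set_size, n_cycles, rng):
    # Perfect-ranking RSS: in each cycle, for judgement rank i = 1..k,
    # draw a set of k units, rank them, and measure only the i-th smallest.
    sample = []
    for _ in range(n_cycles):
        for i in range(set_size):
            s = np.sort(draw(set_size, rng))
            sample.append(s[i])
    return np.asarray(sample)

def vasicek_entropy(x, m=None):
    # Spacings-based entropy estimator on the empirical quantile function:
    # mean over i of log(n * (x_(i+m) - x_(i-m)) / (2m)), indices clipped to [1, n].
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    if m is None:
        m = max(1, int(round(np.sqrt(n))))
    idx = np.arange(n)
    lo = np.clip(idx - m, 0, n - 1)
    hi = np.clip(idx + m, 0, n - 1)
    return np.mean(np.log(n * (x[hi] - x[lo]) / (2.0 * m)))

# Example: entropy estimate from an RSS drawn from an inverse Gaussian model.
rng = np.random.default_rng(0)
draw_ig = lambda size, rng: rng.wald(1.0, 2.0, size)   # illustrative IG(mu=1, lambda=2)
x_rss = ranked_set_sample(draw_ig, set_size=5, n_cycles=20, rng=rng)  # 100 measured units
print(vasicek_entropy(x_rss))
```

The RSS step measures only one order statistic per set, which is what gives RSS-based estimators their efficiency advantage over simple random sampling of the same measured size.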


2003 · Vol 2003 (9) · pp. 587-592
Author(s): Khoan T. Dinh, Nhu T. Nguyen, Truc T. Nguyen

We give a new characterization of inverse Gaussian distributions using the regression of a suitable statistic based on a given random sample. A corollary of this result is a characterization of the inverse Gaussian distribution based on a conditional joint density function of the sample. The application of this corollary as a transformation in the construction of EDF (empirical distribution function) goodness-of-fit tests for inverse Gaussian distributions is also studied.
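The characterization-based transformation itself is not given in the abstract, so the sketch below shows only the generic EDF testing step it feeds into: fit the inverse Gaussian by maximum likelihood, apply the probability-integral transform, and compute an Anderson-Darling statistic. The helper names (ig_cdf, ig_mle, anderson_darling) are illustrative, and this is not the authors' procedure.

```python
import numpy as np
from scipy.stats import norm

def ig_cdf(x, mu, lam):
    # CDF of the inverse Gaussian IG(mu, lambda).
    x = np.asarray(x, dtype=float)
    a = np.sqrt(lam / x)
    return norm.cdf(a * (x / mu - 1.0)) + np.exp(2.0 * lam / mu) * norm.cdf(-a * (x / mu + 1.0))

def ig_mle(x):
    # MLEs: mu_hat = sample mean, lambda_hat = n / sum(1/x_i - 1/mu_hat).
    x = np.asarray(x, dtype=float)
    mu = x.mean()
    lam = len(x) / np.sum(1.0 / x - 1.0 / mu)
    return mu, lam

def anderson_darling(u):
    # A^2 computed from probability-integral-transformed values u in (0, 1).
    u = np.sort(np.asarray(u, dtype=float))
    n = len(u)
    i = np.arange(1, n + 1)
    return -n - np.mean((2 * i - 1) * (np.log(u) + np.log(1.0 - u[::-1])))

# Example: EDF test of inverse Gaussianity on simulated data.
rng = np.random.default_rng(1)
x = rng.wald(1.0, 2.0, 50)               # illustrative IG(1, 2) sample
mu_hat, lam_hat = ig_mle(x)
a2 = anderson_darling(ig_cdf(x, mu_hat, lam_hat))
print(mu_hat, lam_hat, a2)
```

Because mu and lambda are estimated from the same data, A^2 cannot be referred to the standard fully-specified critical values; it needs modified tables or a simulated null distribution, which is the issue taken up in the next abstract.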


2009 · Vol 12 (02) · pp. 157-167
Author(s): Marco Capasso, Lucia Alessi, Matteo Barigozzi, Giorgio Fagiolo

This paper discusses some problems that may arise when approximating, via Monte Carlo simulations, the distributions of goodness-of-fit test statistics based on the empirical distribution function. We argue that failing to re-estimate the unknown parameters on each simulated Monte Carlo sample, and thus not using this information to build the test statistic, may lead to wrong, overly conservative tests. Furthermore, we present some simple examples suggesting that the impact of this possible mistake may be dramatic and does not vanish as the sample size increases.
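A minimal sketch of the point, under assumptions not taken from the paper (a normal location-scale null and the Kolmogorov-Smirnov statistic rather than the authors' examples): the Monte Carlo p-value is computed either the correct way, re-estimating the parameters on each simulated sample, or the incorrect way the paper warns against, reusing the original estimates. Function names are illustrative.

```python
import numpy as np
from scipy.stats import kstest

def ks_stat_fitted(x):
    # KS statistic against a normal with (mu, sigma) re-estimated from x.
    return kstest(x, "norm", args=(x.mean(), x.std(ddof=1))).statistic

def mc_pvalue(x, n_sim=2000, re_estimate=True, seed=None):
    # Monte Carlo p-value for the KS test with estimated parameters.
    # re_estimate=True: re-fit (mu, sigma) on every simulated sample
    # (parametric bootstrap). re_estimate=False: reuse the original
    # estimates, which inflates the simulated statistics and makes the
    # test overly conservative.
    rng = np.random.default_rng(seed)
    n = len(x)
    mu0, sd0 = x.mean(), x.std(ddof=1)
    observed = ks_stat_fitted(x)
    exceed = 0
    for _ in range(n_sim):
        y = rng.normal(mu0, sd0, n)
        if re_estimate:
            stat = ks_stat_fitted(y)
        else:
            stat = kstest(y, "norm", args=(mu0, sd0)).statistic
        exceed += stat >= observed
    return (exceed + 1) / (n_sim + 1)

# The two p-values can differ substantially; only the first is valid.
x = np.random.default_rng(2).normal(size=40)
print(mc_pvalue(x, re_estimate=True, seed=3), mc_pvalue(x, re_estimate=False, seed=3))
```

Skipping the re-estimation step makes the simulated statistics systematically larger than the observed one (which was computed with fitted parameters), so critical values are too large and rejection rates fall below the nominal level, and, per the paper, this distortion does not disappear as the sample size grows.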

