Measurement Error and Our Data

Author(s):  
Harold L. Cole

This chapter introduces the idea that all data are measured with error. It then uses a standard normally distributed error formulation, in a state-and-measurement (state-space) setting, to derive the optimal estimate of the measurement error given a data measurement.
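A one-equation sketch of the standard normal signal-extraction result this kind of derivation rests on (the notation below is an illustrative assumption, not necessarily the chapter's): with state $x \sim N(\mu, \sigma_x^2)$ and measurement $y = x + \varepsilon$, where $\varepsilon \sim N(0, \sigma_\varepsilon^2)$ is independent of $x$, joint normality gives the minimum-mean-squared-error estimate of the measurement error

$$E[\varepsilon \mid y] \;=\; \frac{\sigma_\varepsilon^2}{\sigma_x^2 + \sigma_\varepsilon^2}\,(y - \mu),$$

so the larger the noise share of the measurement's variance, the more of any surprise $y - \mu$ is attributed to measurement error rather than to the underlying state.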

Dose-Response ◽  
2005 ◽  
Vol 3 (4) ◽  
Author(s):  
Kenny S. Crump

Although statistical analyses of epidemiological data usually treat the exposure variable as known without error, estimated exposures in epidemiological studies often involve considerable uncertainty. This paper investigates the theoretical effect of random errors in exposure measurement upon the observed shape of the exposure response. The model assumes that true exposures are log-normally distributed and that multiplicative measurement errors are also log-normally distributed and independent of the true exposures. Under these conditions it is shown that whenever the true exposure response is proportional to exposure raised to a power r, the observed exposure response is proportional to exposure raised to a power K, where K < r. This implies that the observed exposure response exaggerates risk, by arbitrarily large amounts, at sufficiently small exposures. It also follows that a truly linear exposure response will appear to be supra-linear, i.e., a linear function of exposure raised to the K-th power, where K is less than 1.0. These conclusions hold generally under the stated log-normal assumptions whenever there is any amount of measurement error, including, in particular, when the measurement error is unbiased in either the natural or the log scale. Equations are provided that express the observed exposure response in terms of the parameters of the underlying log-normal distribution. A limited investigation suggests that these conclusions do not depend upon the log-normal assumptions but hold more widely. Because of this problem, in addition to other problems in exposure measurement, shapes of exposure responses derived empirically from epidemiological data should be treated very cautiously. In particular, one should be cautious in concluding that the true exposure response is supra-linear on the basis of an observed supra-linear form.
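A minimal Monte Carlo sketch of the attenuation result just described, under the stated log-normal assumptions. All parameter values and the binned-regression estimator below are illustrative choices, not values or methods taken from the paper:

```python
# With log-normal true exposures X and independent multiplicative log-normal
# error U, a true power-law response R = X**r is observed (through Z = X*U)
# as a shallower power law Z**K, with K = r * lam and
# lam = var(ln X) / (var(ln X) + var(ln U)).
import numpy as np

rng = np.random.default_rng(0)
n = 2_000_000
sx, su, r = 1.0, 0.7, 2.0            # SDs of ln X and ln U; true power r

ln_x = rng.normal(0.0, sx, n)        # true exposures (log scale)
ln_u = rng.normal(0.0, su, n)        # multiplicative error (log scale)
ln_z = ln_x + ln_u                   # observed exposures (log scale)
resp = np.exp(r * ln_x)              # true response, proportional to X**r

# Estimate the observed slope K: bin Z by quantile, then regress
# log(mean response per bin) on the mean of ln Z per bin.
edges = np.quantile(ln_z, np.linspace(0.02, 0.98, 25))
idx = np.digitize(ln_z, edges)
log_z_bar, log_r_bar = [], []
for b in range(1, len(edges)):
    m = idx == b
    log_z_bar.append(ln_z[m].mean())
    log_r_bar.append(np.log(resp[m].mean()))
K_hat = np.polyfit(log_z_bar, log_r_bar, 1)[0]

lam = sx**2 / (sx**2 + su**2)
print(f"theoretical K = r*lambda = {r * lam:.3f}, simulated K = {K_hat:.3f}")
# Both fall well below r = 2.0: the observed curve is flattened, so a truly
# quadratic response looks closer to linear, and a truly linear one would
# look supra-linear (K < 1).
```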


2017 ◽  
Vol 2017 ◽  
pp. 1-5 ◽  
Author(s):  
Matthew B. Wolf

The hemoglobin-dilution method (HDM) has been used to estimate changes in vascular volumes in patients because direct measurements with radioisotopes are time-consuming and not practical in many facilities. The HDM requires an assumption of initial blood volume, repeated measurements of plasma hemoglobin concentration, and the calculation of the ratio of hemoglobin measurements. The statistics of these ratio distributions resulting from measurement error are ill-defined even when the errors are normally distributed. This study uses a “Monte Carlo” approach to determine the distribution of these errors. The finding was that these errors could be closely approximated with a log-normal distribution that can be parameterized by a geometric mean (X) and a dispersion factor (S). When the ratio of successive Hb concentrations is used to estimate blood volume, normally distributed hemoglobin measurement errors tend to produce exponentially higher values of X and S as the SD of the measurement error increases. The longer tail of the distribution to the right could produce much greater overestimations than would be expected from the SD values of the measurement error; however, it was found that averaging duplicate and triplicate hemoglobin measurements on a blood sample greatly improved the accuracy.
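A small Monte Carlo sketch in the spirit of the approach described above: two hemoglobin concentrations measured with additive normal error, their ratio summarized by a geometric mean X and dispersion factor S, with replicate averaging. The true concentrations, error SD, and the particular definition of S are illustrative assumptions, not values from the study:

```python
# Ratio-of-measurements error: distribution of Hb0/Hb1 when each Hb value
# carries additive, normally distributed measurement error.
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
hb0_true, hb1_true = 14.0, 12.0      # g/dL, assumed true concentrations
sd_err = 0.5                         # assumed SD of a single measurement

def ratio_stats(replicates: int):
    """Geometric mean X and dispersion factor S of Hb0/Hb1 when each
    concentration is the mean of `replicates` noisy measurements."""
    hb0 = hb0_true + rng.normal(0, sd_err, (n, replicates)).mean(axis=1)
    hb1 = hb1_true + rng.normal(0, sd_err, (n, replicates)).mean(axis=1)
    log_ratio = np.log(hb0 / hb1)
    X = np.exp(log_ratio.mean())     # geometric mean of the ratio
    S = np.exp(log_ratio.std())      # one common dispersion-factor definition
    return X, S

for k in (1, 2, 3):                  # single, duplicate, triplicate samples
    X, S = ratio_stats(k)
    print(f"{k} measurement(s) per sample: X = {X:.4f}, S = {S:.4f}")
# The true ratio is 14/12 ~= 1.1667; averaging replicates pulls X toward it
# and shrinks S, mirroring the accuracy gain reported above.
```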


1999 ◽  
Vol 15 (2) ◽  
pp. 91-98 ◽  
Author(s):  
Lutz F. Hornke

Summary: Item parameters for several hundred items were estimated from empirical data on several thousand subjects. Estimates under the logistic one-parameter (1PL) and two-parameter (2PL) models were evaluated. However, model fit showed that only a subset of items complied sufficiently, and these were assembled into well-fitting item banks. In several simulation studies, 5000 simulated responses were generated in accordance with a computerized adaptive testing (CAT) procedure, along with person parameters. A general reliability of .80, or equivalently a standard error of measurement of .44, was used as the stopping rule to end CAT testing. We also recorded how often each item was used across simulees. Person-parameter estimates from CAT correlated higher than .90 with the simulated true values. For all 1PL-fitting item banks, most simulees needed more than 20 but fewer than 30 items to reach the preset level of measurement error. However, testing based on item banks that complied with the 2PL revealed that, on average, only 10 items were sufficient to end testing at the same measurement-error level. Both results clearly demonstrate the precision and economy of computerized adaptive testing. Empirical evaluations from everyday use will show whether these trends hold up in practice. If so, CAT will become possible and reasonable with some 150 well-calibrated 2PL items.
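A compact sketch of the kind of CAT simulation described above: a 2PL item bank, maximum-information item selection, EAP ability estimation, and the same SEM stopping rule of .44. The bank size (150 items, echoing the abstract's figure), parameter ranges, quadrature grid, and selection rule are illustrative assumptions, not the study's actual design:

```python
# Minimal 2PL computerized adaptive test with an SEM <= 0.44 stopping rule.
import numpy as np

rng = np.random.default_rng(2)
n_items = 150
a = rng.uniform(0.8, 2.0, n_items)   # 2PL discriminations (assumed range)
b = rng.normal(0.0, 1.0, n_items)    # 2PL difficulties (assumed range)
grid = np.linspace(-4, 4, 81)        # quadrature grid for EAP estimation
prior = np.exp(-0.5 * grid**2)       # standard-normal prior (unnormalized)

def p_correct(theta, i):
    """2PL probability of a correct response to item(s) i at ability theta."""
    return 1.0 / (1.0 + np.exp(-a[i] * (theta - b[i])))

def run_cat(theta_true, sem_stop=0.44):
    post = prior / prior.sum()
    available = np.ones(n_items, dtype=bool)
    theta_hat, n_used = 0.0, 0
    while available.any():
        # Pick the unused item with maximum Fisher information at theta_hat.
        p = p_correct(theta_hat, np.arange(n_items))
        info = np.where(available, a**2 * p * (1 - p), -np.inf)
        i = int(np.argmax(info))
        available[i] = False
        n_used += 1
        # Simulate a response and update the posterior over the grid.
        resp = rng.random() < p_correct(theta_true, i)
        pg = p_correct(grid, i)
        post = post * (pg if resp else 1 - pg)
        post = post / post.sum()
        theta_hat = float((grid * post).sum())
        sem = float(np.sqrt(((grid - theta_hat) ** 2 * post).sum()))
        if sem <= sem_stop:
            break
    return theta_hat, n_used

lengths = [run_cat(t)[1] for t in rng.normal(0, 1, 500)]
print(f"median test length at SEM <= 0.44: {np.median(lengths):.0f} items")
# With informative 2PL items, the test typically stops after roughly ten
# items, in the same ballpark as the economy the abstract reports.
```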


1968 ◽  
Vol 78 (2, Pt.1) ◽  
pp. 269-275 ◽  
Author(s):  
Wesley M. DuCharme ◽  
Cameron R. Peterson