An Introduction to Phylogenetically Based Statistical Methods, with a New Method for Confidence Intervals on Ancestral Values

1999 ◽  
Vol 39 (2) ◽  
pp. 374-388 ◽  
Author(s):  
THEODORE GARLAND ◽  
PETER E. MIDFORD ◽  
ANTHONY R. IVES
2018 ◽  
Vol 14 (1) ◽  
pp. 43-50 ◽  
Author(s):  
Anna Fitzpatrick ◽  
Joseph A Stone ◽  
Simon Choppin ◽  
John Kelley

Performance analysis and identifying performance characteristics associated with success are of great importance to players and coaches in any sport. However, while large amounts of data are available within elite tennis, very few players employ an analyst or attempt to exploit the data to enhance their performance; this is partly attributable to the considerable time and complex techniques required to interpret these large datasets. Using data from the 2016 and 2017 French Open tournaments, we tested the agreement between the results of a simple new method for identifying important performance characteristics (the Percentage of matches in which the Winner Outscored the Loser, PWOL) and the results of two standard statistical methods, to establish the validity of the simple method. Spearman’s rank-order correlations between the results of the three methods demonstrated excellent agreement, with all methods identifying the same three performance characteristics (points won at 0–4 rally length, baseline points won and first serve points won) as strongly associated with success. Consequently, we propose that the PWOL method is valid for identifying performance characteristics associated with success in tennis, and is therefore a suitable alternative to more complex statistical methods, as it is simpler to calculate, interpret and contextualise.
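To make the PWOL idea concrete, here is a minimal sketch of how the statistic and its agreement with another ranking might be computed; the match totals, characteristic names and comparison scores below are invented for illustration and are not taken from the French Open datasets.

```python
from scipy.stats import spearmanr

# Hypothetical per-match totals: for each match, the value of each performance
# characteristic for the winner and the loser of that match.
matches = [
    {"winner": {"first_serve_points_won": 40, "baseline_points_won": 30, "aces": 7},
     "loser":  {"first_serve_points_won": 32, "baseline_points_won": 28, "aces": 9}},
    {"winner": {"first_serve_points_won": 35, "baseline_points_won": 22, "aces": 3},
     "loser":  {"first_serve_points_won": 38, "baseline_points_won": 17, "aces": 5}},
    {"winner": {"first_serve_points_won": 44, "baseline_points_won": 26, "aces": 6},
     "loser":  {"first_serve_points_won": 35, "baseline_points_won": 27, "aces": 2}},
]

def pwol(matches, characteristic):
    """Percentage of matches in which the winner outscored the loser on a characteristic."""
    outscored = sum(m["winner"][characteristic] > m["loser"][characteristic] for m in matches)
    return 100.0 * outscored / len(matches)

characteristics = ["first_serve_points_won", "baseline_points_won", "aces"]
pwol_scores = [pwol(matches, c) for c in characteristics]

# Agreement with a standard method would be assessed by rank-correlating the two
# sets of scores; the "effect sizes" below are invented placeholders.
standard_method_scores = [0.62, 0.48, 0.15]
rho, p = spearmanr(pwol_scores, standard_method_scores)
print(pwol_scores, round(rho, 2))
```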


Author(s):  
Karl Schmedders ◽  
Charlotte Snyder ◽  
Ute Schaedel

Wall Street hedge fund manager Kim Meyer is considering investing in an SFA (slate financing arrangement) in Hollywood. Dave Griffith, a Hollywood producer, is pitching for the investment and has conducted a broad analysis of recent movie data to determine the important drivers of a movie’s success. In order to convince Meyer to invest in an SFA, Griffith must anticipate possible questions to maximize his persuasiveness. Students will analyze the factors driving a movie’s revenue using various statistical methods, including calculating point estimates, computing confidence intervals, conducting hypothesis tests, and developing regression models (in which they must both choose the relevant set of independent variables and determine an appropriate functional form for the regression equation). The case also requires the interpretation of the quantitative findings in the context of the application.
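As a rough illustration of the computations the case calls for, the sketch below generates a fake movie dataset and walks through a point estimate, a confidence interval and a regression with a chosen functional form; all variable names and numbers are invented and do not come from the case data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
movies = pd.DataFrame({
    "budget":  rng.uniform(5, 200, n),        # production budget, $M (made up)
    "screens": rng.integers(500, 4000, n),    # opening screens (made up)
    "sequel":  rng.integers(0, 2, n),         # 1 if part of an existing franchise
})
# Generate revenue on the log scale so it is always positive.
log_revenue = (1.5 + 0.6 * np.log(movies["budget"]) + 0.0003 * movies["screens"]
               + 0.3 * movies["sequel"] + rng.normal(0, 0.4, n))
movies["revenue"] = np.exp(log_revenue)

# Point estimate and approximate 95% confidence interval for mean revenue.
mean_rev = movies["revenue"].mean()
se = movies["revenue"].std(ddof=1) / np.sqrt(n)
ci = (mean_rev - 1.96 * se, mean_rev + 1.96 * se)

# Regression: choosing predictors and a functional form (a log-log model here,
# one plausible choice; the case asks students to justify their own).
model = smf.ols("np.log(revenue) ~ np.log(budget) + screens + sequel", data=movies).fit()
print(round(mean_rev, 1), tuple(round(x, 1) for x in ci))
print(model.params.round(3))
```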


2021 ◽  
Author(s):  
Monsurul Hoq ◽  
Susan Donath ◽  
Paul Monagle ◽  
John Carlin

Abstract Background: Reference intervals (RIs), which are used as an assessment tool in laboratory medicine, change with age for most biomarkers in children. Addressing this, RIs that vary continuously with age have been developed using a range of curve-fitting approaches. The choice of statistical method may be important, as different methods may produce substantially different RIs. Hence, we developed a simulation study to investigate the performance of statistical methods for estimating continuous paediatric RIs. Methods: We compared four methods for estimating age-varying RIs. These were Cole’s LMS, the Generalised Additive Model for Location Scale and Shape (GAMLSS), Royston’s method based on fractional polynomials and exponential transformation, and a new method applying quantile regression using power variables in age selected by fractional polynomial regression for the mean. Data were generated using hypothetical true curves based on five biomarkers with varying complexity of association with age (i.e. linear or nonlinear, with constant or nonconstant variation across age), and for four sample sizes (100, 200, 400 and 1000). Root mean square error (RMSE) was used as the primary performance measure for comparison. Results: Regression-based parametric methods performed better in most scenarios. Royston’s method and the new method performed consistently well in all scenarios for sample sizes of at least 400, while the new method had the smallest average RMSE in scenarios with nonconstant variation across age. Conclusions: We recommend methods based on flexible parametric models for estimating continuous paediatric RIs, irrespective of the complexity of the association between biomarkers and age, provided at least 400 samples are available.
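For orientation, the sketch below shows one way an age-varying reference interval can be fitted with quantile regression; the square-root term stands in for the power terms that the paper's new method selects via fractional-polynomial regression, and the data are simulated rather than real paediatric laboratory results.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 400
age = rng.uniform(0, 18, n)                                   # age in years (simulated)
biomarker = 50 + 3 * np.sqrt(age) + rng.normal(0, 2 + 0.3 * age, n)  # nonconstant spread
df = pd.DataFrame({"age": age, "biomarker": biomarker})

# Fit the 2.5th and 97.5th centiles as smooth functions of age; the terms in
# the formula are a simplified placeholder for fractional-polynomial selection.
formula = "biomarker ~ np.sqrt(age) + age"
lower = smf.quantreg(formula, df).fit(q=0.025)
upper = smf.quantreg(formula, df).fit(q=0.975)

# Tabulate the continuous reference interval at a few ages.
ages = pd.DataFrame({"age": np.linspace(0.5, 18, 10)})
ri = pd.DataFrame({
    "age": ages["age"],
    "lower": lower.predict(ages),
    "upper": upper.predict(ages),
})
print(ri.round(1))
```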


2019 ◽  
Vol 12 (1) ◽  
pp. 205979911982651
Author(s):  
Michael Wood

In many fields of research, null hypothesis significance tests and p values are the accepted way of assessing the degree of certainty with which research results can be extrapolated beyond the sample studied. However, there are very serious concerns about the suitability of p values for this purpose. An alternative approach is to cite confidence intervals for a statistic of interest, but this does not directly tell readers how certain a hypothesis is. Here, I suggest how the framework used for confidence intervals could easily be extended to derive confidence levels, or “tentative probabilities,” for hypotheses. I also outline four quick methods for estimating these. This allows researchers to state their confidence in a hypothesis as a direct probability, instead of circuitously by p values referring to a hypothetical null hypothesis—which is usually not even stated explicitly. The inevitable difficulties of statistical inference mean that these probabilities can only be tentative, but probabilities are the natural way to express uncertainties, so, arguably, researchers using statistical methods have an obligation to estimate how probable their hypotheses are by the best available method. Otherwise, misinterpretations will fill the void.
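As a simple illustration of the general idea (not necessarily one of the four quick methods described in the article), the sketch below treats an estimate as approximately normal, recovers its standard error from a 95% confidence interval, and reports the resulting "tentative probability" that the effect is positive; the interval itself is invented.

```python
from scipy.stats import norm

# Hypothetical 95% confidence interval for an effect of interest.
lower, upper = 0.3, 2.1
estimate = (lower + upper) / 2
se = (upper - lower) / (2 * 1.96)        # standard error implied by the interval width

# Tentative probability that the true effect is greater than zero, assuming
# the sampling distribution of the estimate is approximately normal.
p_positive = norm.cdf(estimate / se)
print(round(p_positive, 3))              # close to 1 here, since the CI excludes 0
```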


2004 ◽  
Vol 20 (7) ◽  
pp. 651-665 ◽  
Author(s):  
Michael Perakis ◽  
Evdokia Xekalaki

2017 ◽  
Vol 74 (2) ◽  
pp. 393-407 ◽  
Author(s):  
Ding Ma ◽  
Pedram Hassanzadeh ◽  
Zhiming Kuang

Abstract A linear response function (LRF) that relates the temporal tendency of zonal-mean temperature and zonal wind to their anomalies and external forcing is used to accurately quantify the strength of the eddy–jet feedback associated with the annular mode in an idealized GCM. Following a simple feedback model, the results confirm the presence of a positive eddy–jet feedback in the annular mode dynamics, with a feedback strength of 0.137 day⁻¹ in the idealized GCM. Statistical methods proposed by earlier studies to quantify the feedback strength are evaluated against results from the LRF. It is argued that the mean-state-independent eddy forcing reduces the accuracy of these statistical methods because of the quasi-oscillatory nature of the eddy forcing. Assuming the mean-state-independent eddy forcing is sufficiently weak at the low-frequency limit, a new method is proposed to approximate the feedback strength as the regression coefficient of low-pass-filtered eddy forcing onto the low-pass-filtered annular mode index. When time scales longer than 200 days are used for the low-pass filtering, the new method produces accurate results in the idealized GCM compared to the value calculated from the LRF. The estimated feedback strength in the southern annular mode converges to 0.121 day⁻¹ in reanalysis data using the new method. This work also highlights the significant contribution of medium-scale waves, which have periods less than 2 days, to the annular mode dynamics. Such waves are filtered out if eddy forcing is calculated from daily mean data. The present study provides a framework to quantify the eddy–jet feedback strength in GCMs and reanalysis data.
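The sketch below illustrates the low-pass-filter-and-regress estimator on synthetic daily series; the stand-in "annular mode index" and "eddy forcing", and the 0.13 day⁻¹ feedback built into them, are placeholders rather than GCM or reanalysis output.

```python
import numpy as np
from scipy.signal import butter, filtfilt

rng = np.random.default_rng(2)
ndays = 20000
z = np.cumsum(rng.normal(0, 0.1, ndays))     # stand-in annular mode index (red noise)
m = 0.13 * z + rng.normal(0, 0.5, ndays)     # stand-in eddy forcing with a 0.13/day feedback

def low_pass(x, cutoff_days, fs=1.0, order=4):
    """Butterworth low-pass filter keeping variability with periods longer than cutoff_days."""
    b, a = butter(order, (1.0 / cutoff_days) / (fs / 2.0), btype="low")
    return filtfilt(b, a, x)

# Low-pass filter both series at time scales longer than 200 days.
z_lp = low_pass(z, cutoff_days=200)
m_lp = low_pass(m, cutoff_days=200)

# Feedback strength as the regression coefficient of filtered forcing on filtered index.
b_feedback = np.polyfit(z_lp, m_lp, 1)[0]
print(round(b_feedback, 3))                  # should be close to the built-in 0.13 here
```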


Atmosphere ◽  
2021 ◽  
Vol 12 (12) ◽  
pp. 1623
Author(s):  
Armin Auf der Maur ◽  
Urs Germann

Grossversuch IV is a large and well documented experiment on hail suppression by silver iodide seeding. The original 1986 evaluation remained vague, although it indicated a tendency for seeding to increase hail. Its strategy for dealing with hail-energy distributions that are far from normal was not optimal. The present re-evaluation sticks to the question originally asked and avoids both misleading transformations and unsatisfactory meteorological predictors. The raw data show an increase in hail energy by about a factor of 3 when seeding, which is the opposite of what seeding is supposed to achieve. The probability of obtaining such a result by chance, calculated by permutation and bootstrap techniques applied to the raw data, is below 1%. Confidence intervals were approximated by bootstrapping as well as by a new method called “correlation imposed permutation” (CIP).
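To show the flavour of the resampling approach (not the study's actual data, and not its correlation-imposed-permutation variant), the sketch below runs a simple permutation test and a bootstrap percentile interval for the ratio of mean hail energy in seeded versus unseeded cells, using invented numbers.

```python
import numpy as np

rng = np.random.default_rng(3)
seeded   = np.array([0.2, 1.2, 0.1, 8.5, 0.3, 14.0, 0.4, 2.1])  # hypothetical hail energies
unseeded = np.array([0.1, 0.4, 0.1, 1.8, 0.2, 0.9, 3.0, 0.2])

observed_ratio = seeded.mean() / unseeded.mean()

# Permutation test: how often does a random relabelling of the cells produce
# a seeded/unseeded ratio at least as large as the observed one?
pooled = np.concatenate([seeded, unseeded])
n_seeded, n_perm = len(seeded), 10000
count = 0
for _ in range(n_perm):
    perm = rng.permutation(pooled)
    count += perm[:n_seeded].mean() / perm[n_seeded:].mean() >= observed_ratio
p_value = count / n_perm

# Bootstrap percentile interval for the ratio (resampling each group with replacement).
boot = [
    rng.choice(seeded, len(seeded)).mean() / rng.choice(unseeded, len(unseeded)).mean()
    for _ in range(10000)
]
ci = np.percentile(boot, [2.5, 97.5])
print(round(observed_ratio, 2), p_value, ci.round(2))
```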

