Algorithm for Sample Availability Prediction in a Hospital based Epidemiological Study: Spreadsheet-based Sample Availability Calculator

Author(s):  
Amrit Sudershan ◽  
Parvinder Kumar ◽  
Kanak Mahajan

Abstract Characterising a population's behaviour by sampling is inherently uncertain because of the population's large, dynamic structure and considerable variability. All quantitative sampling approaches aim to draw a representative sample from the population so that results obtained from the sample can be generalised back to the population. The probability of detecting a true effect in a study depends largely on the sample size; small samples yield lower statistical power and therefore a higher risk of missing a meaningful underlying difference. Many online sample size calculators, based on inputs such as population size and allele frequency, indicate how many samples a study requires, but none helps set a threshold for the number of samples that will actually be available from a single hospital in a particular period. This study aims to provide an efficient calculation method for setting such an availability threshold for a specific period of a research study, an important question to answer during study design. We have therefore designed a spreadsheet-based sample availability calculator tool implemented in MS-Excel 2007.
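The abstract does not reproduce the spreadsheet's formula, but the underlying idea can be sketched in a few lines. The sketch below is a minimal Python illustration of such an availability calculation, assuming a simple multiplicative model; the parameter names (monthly_cases, eligibility_rate, consent_rate, study_months) are hypothetical and are not taken from the paper.

```python
# Minimal sketch of a sample-availability threshold calculation (not the
# authors' exact spreadsheet formula, which is not reproduced in the abstract).
# All parameter names and values below are illustrative assumptions.

def expected_available_samples(monthly_cases: float,
                               eligibility_rate: float,
                               consent_rate: float,
                               study_months: int) -> float:
    """Expected number of usable samples from a single hospital over the
    study period, under simple multiplicative assumptions."""
    return monthly_cases * eligibility_rate * consent_rate * study_months


if __name__ == "__main__":
    # Example: 40 cases/month, 60% eligible, 70% consenting, 18-month study.
    available = expected_available_samples(40, 0.60, 0.70, 18)
    required = 300  # sample size from a conventional power calculation
    print(f"Expected available samples: {available:.0f}")
    print("Feasible at this site" if available >= required
          else "Extend the recruitment period or add sites")
```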

Scientifica ◽  
2016 ◽  
Vol 2016 ◽  
pp. 1-5 ◽  
Author(s):  
R. Eric Heidel

Statistical power is the ability to detect a significant effect, given that the effect actually exists in a population. Like most statistical concepts, statistical power tends to induce cognitive dissonance in hepatology researchers. However, planning for statistical power by an a priori sample size calculation is of paramount importance when designing a research study. There are five specific empirical components that make up an a priori sample size calculation: the scale of measurement of the outcome, the research design, the magnitude of the effect size, the variance of the effect size, and the sample size. A framework grounded in the phenomenon of isomorphism, or interdependencies amongst different constructs with similar forms, will be presented to understand the isomorphic effects of decisions made on each of the five aforementioned components of statistical power.
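As an illustration of how these components feed into an a priori calculation, the sketch below solves for the per-group sample size of a two-group comparison of a continuous outcome. The effect size, alpha, and power values are placeholders, and the use of statsmodels is a tooling choice not taken from the article.

```python
# Illustration of an a priori sample size calculation for a two-group
# comparison of a continuous outcome; the numbers are placeholders.
from statsmodels.stats.power import TTestIndPower

# Cohen's d folds together the magnitude of the effect and its variance:
# d = (mean difference) / (pooled standard deviation).
effect_size = 0.5   # medium standardized effect
alpha = 0.05        # two-sided type I error rate
power = 0.80        # desired statistical power (1 - beta)

n_per_group = TTestIndPower().solve_power(effect_size=effect_size,
                                          alpha=alpha,
                                          power=power,
                                          alternative="two-sided")
print(f"Required sample size per group: {n_per_group:.0f}")  # roughly 64
```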


Pharmaceutics ◽  
2020 ◽  
Vol 12 (12) ◽  
pp. 1159
Author(s):  
Zhengguo Xu ◽  
Víctor Mangas-Sanjuán ◽  
Matilde Merino-Sanjuán ◽  
Virginia Merino ◽  
Alfredo García-Arieta

Inter- and intra-batch variability of the quality attributes contribute to the uncertainty for demonstrating equivalent microstructure of post-approval changes and generic/hybrids of semisolid topical products. Selecting a representative sample size to describe accurately the in vitro properties of semisolids and to reach enough statistical power to demonstrate similarity between two semisolid topical products is currently challenging. The objective of this work is to establish the number of batches and units per batch to be compared based on different inter-batch and intra-batch variability to demonstrate equivalence in the physical characteristics of the products that ensure a similar microstructure of the semisolid. This investigation shows that the minimum number of batches to be compared of each product is 3 and the minimum number of units per batch could be 6 in the case of low intra- and inter-batch variability. If the products are not identical, i.e., 2.5–5% differences that are expected due to differences in the manufacturing process or the suppliers of excipients, 12 units and 6 batches are needed. If intra- or inter-batch variability is larger than 10%, the number of batches and/or the number of units needs to be increased. As the interplay between inter- and intra-batch variability is complex, the sample size required for each combination of inter- and intra-batch variability and expected difference between products can be obtained in the attached tables.
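The interplay the authors describe can be explored with a small simulation. The sketch below is not the computation behind the paper's tables: it simulates units nested in batches for two products and estimates the power of a two one-sided tests (TOST) procedure on batch means. The variability values and the ±10% equivalence margin are assumptions.

```python
# Simulation sketch: power to conclude equivalence of a quality attribute
# between two semisolid products, with units nested within batches.
import numpy as np
from statsmodels.stats.weightstats import ttost_ind

rng = np.random.default_rng(0)

def simulate_batch_means(mu, n_batches, n_units, inter_sd, intra_sd):
    """Per-batch means of a quality attribute for one product."""
    batch_levels = rng.normal(mu, inter_sd, n_batches)
    units = rng.normal(batch_levels[:, None], intra_sd, (n_batches, n_units))
    return units.mean(axis=1)

def equivalence_power(n_batches=3, n_units=6, inter_sd=0.05, intra_sd=0.05,
                      true_diff=0.025, margin=0.10, n_sim=2000, alpha=0.05):
    """Fraction of simulated studies in which a TOST on batch means declares
    the two products equivalent (reference mean fixed at 1)."""
    successes = 0
    for _ in range(n_sim):
        ref = simulate_batch_means(1.0, n_batches, n_units, inter_sd, intra_sd)
        test = simulate_batch_means(1.0 + true_diff, n_batches, n_units,
                                    inter_sd, intra_sd)
        p_value, _, _ = ttost_ind(ref, test, -margin, margin)
        successes += p_value < alpha
    return successes / n_sim

print(equivalence_power(n_batches=3, n_units=6))
print(equivalence_power(n_batches=6, n_units=12, true_diff=0.05))
```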


2017 ◽  
Author(s):  
Benjamin O. Turner ◽  
Erick J. Paul ◽  
Michael B. Miller ◽  
Aron K. Barbey

Despite a growing body of research suggesting that task-based functional magnetic resonance imaging (fMRI) studies often suffer from a lack of statistical power due to too-small samples, the proliferation of such underpowered studies continues unabated. Using large independent samples across eleven distinct tasks, we demonstrate the impact of sample size on replicability, assessed at different levels of analysis relevant to fMRI researchers. We find that the degree of replicability for typical sample sizes is modest and that sample sizes much larger than typical (e.g., N = 100) produce results that fall well short of perfectly replicable. Thus, our results join the existing line of work advocating for larger sample sizes. Moreover, because we test sample sizes over a fairly large range and use intuitive metrics of replicability, our hope is that our results are more understandable and convincing to researchers who may have found previous results advocating for larger samples inaccessible.
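A toy version of this kind of subsample-based replicability assessment can be written in a few lines. The sketch below is purely illustrative and is not the authors' fMRI pipeline or their replicability metrics: it draws disjoint subsamples of size N from simulated data and correlates the resulting group-level effect maps.

```python
# Toy sketch of subsample-based replicability: draw two disjoint groups of
# size n from a large simulated dataset, compute voxel-wise group effects,
# and correlate the two effect maps. Purely illustrative.
import numpy as np

rng = np.random.default_rng(1)
n_subjects, n_voxels = 1000, 2000
true_effect = rng.normal(0, 0.2, n_voxels)                 # latent effect map
data = true_effect + rng.normal(0, 1.0, (n_subjects, n_voxels))

def replicability(n, n_draws=50):
    """Mean correlation between effect maps from disjoint subsamples of size n."""
    correlations = []
    for _ in range(n_draws):
        idx = rng.permutation(n_subjects)
        group_a, group_b = data[idx[:n]], data[idx[n:2 * n]]
        correlations.append(np.corrcoef(group_a.mean(axis=0),
                                        group_b.mean(axis=0))[0, 1])
    return np.mean(correlations)

for n in (20, 50, 100, 200):
    print(n, round(replicability(n), 2))   # replicability grows with n
```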


Author(s):  
Les Beach

To test the efficacy of the Personal Orientation Inventory in assessing growth in self-actualization in relation to encounter groups and to provide a more powerful measure of such changes, pre- and posttest data from 3 highly comparable encounter groups (N = 43) were combined for analysis. Results indicated that the Personal Orientation Inventory is a sensitive instrument for assessing personal growth in encounter groups and that a larger total sample size provides more significant results than those reported for small samples (e.g., fewer than 15 participants).


2021 ◽  
pp. 1-8
Author(s):  
Norin Ahmed ◽  
Jessica K. Bone ◽  
Gemma Lewis ◽  
Nick Freemantle ◽  
Catherine J. Harmer ◽  
...  

Abstract
Background: According to the cognitive neuropsychological model, antidepressants reduce symptoms of depression and anxiety by increasing positive relative to negative information processing. Most studies of whether antidepressants alter emotional processing use small samples of healthy individuals, which lead to low statistical power and selection bias and are difficult to generalise to clinical practice. We tested whether the selective serotonin reuptake inhibitor (SSRI) sertraline altered recall of positive and negative information in a large randomised controlled trial (RCT) of patients with depressive symptoms recruited from primary care.
Methods: The PANDA trial was a pragmatic multicentre double-blind RCT comparing sertraline with placebo. Memory for personality descriptors was tested at baseline and 2 and 6 weeks after randomisation using a computerised emotional categorisation task followed by a free recall. We measured the number of positive and negative words correctly recalled (hits). Poisson mixed models were used to analyse longitudinal associations between treatment allocation and hits.
Results: A total of 576 participants (88% of those randomised) completed the recall task at 2 and 6 weeks. We found no evidence that positive or negative hits differed according to treatment allocation at 2 or 6 weeks (adjusted positive hits ratio = 0.97, 95% CI 0.90–1.05, p = 0.52; adjusted negative hits ratio = 0.99, 95% CI 0.90–1.08, p = 0.76).
Conclusions: In the largest individual placebo-controlled trial of an antidepressant not funded by the pharmaceutical industry, we found no evidence that sertraline altered positive or negative recall early in treatment. These findings challenge some assumptions of the cognitive neuropsychological model of antidepressant action.
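For readers who want a concrete picture of this kind of longitudinal count analysis, the sketch below models recall hits by treatment arm and week. The trial itself used Poisson mixed models; as a simpler stand-in, the sketch fits a Poisson GEE with an exchangeable working correlation, and the simulated data and column names (participant, week, arm, positive_hits) are hypothetical.

```python
# Sketch of a longitudinal Poisson analysis of recall hits by treatment arm.
# A Poisson GEE with an exchangeable working correlation stands in here for
# the trial's Poisson mixed models; the data are simulated.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 200
df = pd.DataFrame({
    "participant": np.repeat(np.arange(n), 2),
    "week": np.tile([2, 6], n),
    "arm": np.repeat(rng.integers(0, 2, n), 2),   # 0 = placebo, 1 = sertraline
})
df["positive_hits"] = rng.poisson(6, len(df))     # simulated: no true effect

model = smf.gee("positive_hits ~ C(arm) * C(week)",
                groups="participant",
                data=df,
                family=sm.families.Poisson(),
                cov_struct=sm.cov_struct.Exchangeable())
result = model.fit()
print(result.summary())   # exp(coefficient) for the arm term is the hits ratio
```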


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Florent Le Borgne ◽  
Arthur Chatton ◽  
Maxime Léger ◽  
Rémi Lenain ◽  
Yohann Foucher

Abstract In clinical research, there is a growing interest in the use of propensity score-based methods to estimate causal effects. G-computation (GC) is an alternative because of its high statistical power. Machine learning is also increasingly used because of its possible robustness to model misspecification. In this paper, we aimed to propose an approach that combines machine learning and G-computation when both the outcome and the exposure status are binary and is able to deal with small samples. We evaluated the performances of several methods, including penalized logistic regressions, a neural network, a support vector machine, boosted classification and regression trees, and a super learner through simulations. We proposed six different scenarios characterised by various sample sizes, numbers of covariates and relationships between covariates, exposure statuses, and outcomes. We have also illustrated the application of these methods, in which they were used to estimate the efficacy of barbiturates prescribed during the first 24 h of an episode of intracranial hypertension. In the context of GC, for estimating the individual outcome probabilities in two counterfactual worlds, we reported that the super learner tended to outperform the other approaches in terms of both bias and variance, especially for small sample sizes. The support vector machine performed well, but its mean bias was slightly higher than that of the super learner. In the investigated scenarios, G-computation associated with the super learner was a performant method for drawing causal inferences, even from small sample sizes.
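A minimal G-computation sketch for a binary exposure and binary outcome is shown below: fit an outcome model on exposure plus covariates, predict each subject's outcome probability under both counterfactual exposures, and average the difference. A plain logistic regression stands in for the paper's super learner (a stacking ensemble could be substituted); the simulated data and effect sizes are illustrative only.

```python
# Minimal G-computation sketch for a binary exposure and binary outcome.
# A logistic regression replaces the paper's super learner for brevity.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 150                                            # deliberately small sample
X = rng.normal(size=(n, 4))                        # baseline covariates
a = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))    # exposure depends on X
y = rng.binomial(1, 1 / (1 + np.exp(-(0.5 * a + X[:, 1]))))

design = np.column_stack([a, X])
outcome_model = LogisticRegression(max_iter=1000).fit(design, y)

# Counterfactual predictions: everyone exposed vs. everyone unexposed.
p_exposed = outcome_model.predict_proba(np.column_stack([np.ones(n), X]))[:, 1]
p_unexposed = outcome_model.predict_proba(np.column_stack([np.zeros(n), X]))[:, 1]
print("Marginal risk difference:", round(np.mean(p_exposed - p_unexposed), 3))
```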


2008 ◽  
Vol 4 ◽  
pp. T263-T264
Author(s):  
Steven D. Edland ◽  
Linda K. McEvoy ◽  
Dominic Holland ◽  
John C. Roddey ◽  
Christine Fennema-Notestine ◽  
...  

2011 ◽  
Vol 6 (2) ◽  
pp. 252-277 ◽  
Author(s):  
Stephen T. Ziliak

Abstract Student's exacting theory of errors, both random and real, marked a significant advance over ambiguous reports of plant life and fermentation asserted by chemists from Priestley and Lavoisier down to Pasteur and Johannsen, working at the Carlsberg Laboratory. One reason seems to be that William Sealy Gosset (1876–1937) aka “Student” – he of Student's t-table and test of statistical significance – rejected artificial rules about sample size, experimental design, and the level of significance, and took instead an economic approach to the logic of decisions made under uncertainty. In his job as Apprentice Brewer, Head Experimental Brewer, and finally Head Brewer of Guinness, Student produced small samples of experimental barley, malt, and hops, seeking guidance for industrial quality control and maximum expected profit at the large scale brewery. In the process Student invented or inspired half of modern statistics. This article draws on original archival evidence, shedding light on several core yet neglected aspects of Student's methods, that is, Guinnessometrics, not discussed by Ronald A. Fisher (1890–1962). The focus is on Student's small sample, economic approach to real error minimization, particularly in field and laboratory experiments he conducted on barley and malt, 1904 to 1937. Balanced designs of experiments, he found, are more efficient than random and have higher power to detect large and real treatment differences in a series of repeated and independent experiments. Student's world-class achievement poses a challenge to every science. Should statistical methods – such as the choice of sample size, experimental design, and level of significance – follow the purpose of the experiment, rather than the other way around? (JEL classification codes: C10, C90, C93, L66)


1990 ◽  
Vol 47 (1) ◽  
pp. 2-15 ◽  
Author(s):  
Randall M. Peterman

Ninety-eight percent of recently surveyed papers in fisheries and aquatic sciences that did not reject some null hypothesis (H0) failed to report β, the probability of making a type II error (failing to reject H0 when it should have been rejected), or statistical power (1 – β). However, 52% of those papers drew conclusions as if H0 were true. A false H0 could have been missed because of a low-power experiment, caused by small sample size or large sampling variability. Costs of type II errors can be large (for example, for cases that fail to detect harmful effects of some industrial effluent or a significant effect of fishing on stock depletion). Past statistical power analyses show that abundance estimation techniques usually have high β and that only large effects are detectable. I review relationships among β, power, detectable effect size, sample size, and sampling variability. I show how statistical power analysis can help interpret past results and improve designs of future experiments, impact assessments, and management regulations. I make recommendations for researchers and decision makers, including routine application of power analysis, more cautious management, and reversal of the burden of proof to put it on industry, not management agencies.
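The relationships Peterman reviews can be made concrete with a standard two-sample power calculation: given a sample size and a significance level, one can solve either for power at a given effect size or for the minimum detectable effect at a given power. The sketch below uses statsmodels as a tooling choice; the numbers are placeholders.

```python
# Sketch of the relationships among sample size, power, and detectable
# effect size for a two-sample comparison; the numbers are placeholders.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Power of a low-replication design to detect a medium effect (d = 0.5).
power = analysis.solve_power(effect_size=0.5, nobs1=10, alpha=0.05)
print(f"Power with n = 10 per group: {power:.2f}")            # well below 0.8

# Minimum standardized effect detectable with 80% power at that sample size.
mde = analysis.solve_power(nobs1=10, alpha=0.05, power=0.80)
print(f"Detectable effect size with n = 10 per group: {mde:.2f}")
```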

