Using predicted length of stay to define treatment and model costs in hospitalized adults with serious illness: an evaluation of palliative care

2021, Vol 11 (1)
Author(s): Peter May, Charles Normand, Danielle Noreika, Nevena Skoro, J. Brian Cassel

Abstract
Background: Economic research on hospital palliative care faces major challenges. Observational studies using routine data encounter difficulties because treatment timing is not under investigator control and unobserved patient complexity is endemic. An individual's predicted length of stay (LOS) at admission offers potential advantages in this context.
Methods: We conducted a retrospective cohort study of adults admitted to a large cancer center in the United States between 2009 and 2015. We defined a derivation sample to estimate predicted LOS using baseline factors (N = 16,425) and an analytic sample for our primary analyses (N = 2674), based on diagnosis of a terminal illness and high risk of hospital mortality. We modelled our treatment variable according to the timing of the first palliative care interaction as a function of predicted LOS, and we employed predicted LOS as an additional covariate in regression, as a proxy for complexity alongside diagnosis and comorbidity index. We evaluated models on predictive accuracy in and out of sample, on the Akaike and Bayesian information criteria, and on the precision of the treatment effect estimate.
Results: Adding predicted LOS as a covariate yielded a major improvement in model accuracy: R² increased from 0.14 to 0.23, and the model also performed better on predictive accuracy and on the information criteria. Treatment effect estimates and conclusions were unaffected. Our approach to the treatment variable yielded no substantial improvement in model performance, but post hoc analyses showed an association between the treatment effect estimate and estimated LOS at baseline.
Conclusion: Allocation of scarce palliative care capacity and value-based reimbursement models should take into consideration when and for whom the intervention has the largest impact on treatment choices. An individual's predicted LOS at baseline is useful in this context for accurately predicting costs, and potentially has further benefits in modelling treatment effects.
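The paper's two-stage idea is straightforward to sketch. The toy example below (synthetic data; the baseline factors, coefficients, and model forms are all illustrative assumptions, not the authors' specification) fits a LOS model on a derivation sample, then reuses its predictions as a complexity proxy in a cost regression on an analytic sample:

```python
# Minimal sketch of the two-stage idea (synthetic data; variable names
# and model forms are illustrative assumptions, not the authors' code).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# --- Stage 1: derivation sample -> predict LOS from baseline factors ---
n_deriv = 5000
X_base = rng.normal(size=(n_deriv, 4))            # e.g., age, comorbidity index
true_complexity = X_base @ np.array([1.0, 0.8, 0.5, 0.3])
los = np.exp(1.5 + 0.3 * true_complexity + rng.normal(0, 0.4, n_deriv))
los_model = LinearRegression().fit(X_base, np.log(los))

# --- Stage 2: analytic sample -> cost model with predicted-LOS covariate ---
n_anal = 2000
X_anal = rng.normal(size=(n_anal, 4))
complexity = X_anal @ np.array([1.0, 0.8, 0.5, 0.3])
treated = rng.binomial(1, 0.3, n_anal)            # palliative care interaction
cost = 10_000 + 4_000 * complexity - 2_500 * treated + rng.normal(0, 3_000, n_anal)

pred_los = los_model.predict(X_anal)              # proxy for unobserved complexity

# Compare the cost model without vs with the predicted-LOS covariate
for label, Z in [("without pred LOS", treated.reshape(-1, 1)),
                 ("with pred LOS   ", np.column_stack([treated, pred_los]))]:
    fit = LinearRegression().fit(Z, cost)
    print(f"{label}: R^2 = {fit.score(Z, cost):.2f}, "
          f"treatment coef = {fit.coef_[0]:,.0f}")
```

Even in this toy setting, the covariate version explains substantially more cost variance while leaving the treatment coefficient essentially unchanged, mirroring the pattern reported above.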

2016, Vol 5 (1)
Author(s): Nicole Bohme Carnegie, Rui Wang, Victor De Gruttola

Abstract
An issue that remains challenging in the field of causal inference is how to relax the assumption of no interference between units. Interference occurs when the treatment of one unit can affect the outcome of another, a situation which is likely to arise with outcomes that may depend on social interactions, such as occurrence of infectious disease. Existing methods to accommodate interference largely depend upon an assumption of "partial interference": interference only within identifiable groups but not among them. There remains a considerable need for methods that allow further relaxation of the no-interference assumption. This paper focuses on an estimand that is the difference between the outcome one would observe if the treatment were provided to all clusters and the outcome if treatment were provided to none, referred to as the overall treatment effect. In trials of infectious disease prevention, the randomized treatment effect estimate will be attenuated relative to this overall treatment effect if a fraction of the exposures in the treatment clusters come from individuals who are outside these clusters. This source of interference, contacts sufficient for transmission that are with treated clusters, is potentially measurable. In this manuscript, we leverage epidemic models to infer the way in which a given level of interference affects the incidence of infection in clusters. This leads naturally to an estimator of the overall treatment effect that is easily implemented using existing software.
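A small simulation can make the attenuation mechanism concrete. The sketch below is not the authors' estimator; it is a deterministic SIR-type model with invented parameters in which cluster-level treatment cuts susceptibility and a fraction mu of each person's contacts is drawn from the population at large. As mu grows, the randomized contrast between treated and control clusters falls short of the overall treatment effect:

```python
# Illustrative simulation (not the paper's estimator): a deterministic
# SIR model over clusters showing how between-cluster mixing (mu)
# attenuates the randomized contrast relative to the overall effect.
# All parameter values below are arbitrary assumptions for the sketch.
import numpy as np

def epidemic(treated, mu, beta=0.3, gamma=0.1, eff=0.5, n_clusters=20,
             steps=300):
    """Fraction ever infected per cluster; `treated` is a 0/1 vector."""
    s = np.full(n_clusters, 0.99)                # susceptible fraction
    i = np.full(n_clusters, 0.01)                # infectious fraction
    susc = 1 - eff * treated                     # treatment cuts susceptibility
    for _ in range(steps):
        # force of infection: within-cluster contacts plus a fraction mu
        # of contacts drawn from the whole population (cluster average)
        foi = beta * ((1 - mu) * i + mu * i.mean())
        new = s * susc * (1 - np.exp(-foi))
        s, i = s - new, i + new - gamma * i
    return 1 - s                                 # cumulative incidence

rng = np.random.default_rng(1)
arm = rng.permutation(np.repeat([0, 1], 10))     # randomize 10 of 20 clusters

for mu in (0.0, 0.2, 0.4):
    trial = epidemic(arm, mu)
    randomized = trial[arm == 0].mean() - trial[arm == 1].mean()
    overall = epidemic(np.zeros(20), mu).mean() - epidemic(np.ones(20), mu).mean()
    print(f"mu={mu:.1f}: randomized contrast={randomized:.3f}, "
          f"overall effect={overall:.3f}")
```

At mu = 0 the two quantities coincide; with mixing, control clusters are partially protected by their treated neighbours and the randomized contrast shrinks, which is exactly the attenuation the abstract describes.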


2020, Vol 39, pp. 101865
Author(s): Katherine Riester, Ludwig Kappos, Krzysztof Selmaj, Stacy Lindborg, Ilya Lipkovich, et al.

2019, pp. 004912411985237
Author(s): Roberto V. Penaloza, Mark Berends

To measure "treatment" effects, social science researchers typically rely on nonexperimental data. In education, school and teacher effects on students are often measured through value-added models (VAMs) that are not fully understood. We propose a framework that relates to the education production function in its most flexible form and connects with the basic VAMs without relying on untenable assumptions. We illustrate how, due to measurement error (ME), cross-group imbalances created by nonrandom group assignment cause correlations that drive the bias in the models' treatment-effect estimates. We derive formulas to calculate this bias, use them to rank the models, and show that no model is best in all situations. The workings of the framework and formulas are verified and illustrated via simulation. We also evaluate the performance of latent variable/errors-in-variables models that handle ME, and study the role of extra covariates, including lags of the outcome.
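A toy simulation shows the bias mechanism in miniature. In the sketch below (all parameter values are my own assumptions, not results from the paper), students sort nonrandomly into a "treated" school on true prior achievement, and the pretest used as a control variable is measured with error; as its reliability falls, the covariate-adjusted estimate of the school effect drifts away from the truth:

```python
# Toy simulation (assumptions mine, not the paper's framework):
# nonrandom assignment on true prior achievement plus measurement error
# in the observed pretest biases the covariate-adjusted school effect.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n = 20_000
ability = rng.normal(size=n)                      # true prior achievement
school = (ability + rng.normal(0, 0.5, n) > 0).astype(int)  # nonrandom sorting
true_effect = 0.2
posttest = ability + true_effect * school + rng.normal(0, 0.5, n)

for reliability in (1.0, 0.8, 0.6):
    noise_var = (1 - reliability) / reliability   # gives stated reliability
    pretest = ability + rng.normal(0, np.sqrt(noise_var), n)
    X = np.column_stack([school, pretest])
    est = LinearRegression().fit(X, posttest).coef_[0]
    print(f"pretest reliability {reliability:.1f}: "
          f"estimated effect = {est:.3f} (truth {true_effect})")
```

With a perfectly reliable pretest the estimate recovers the truth; as reliability drops, the pretest undercontrols for ability and the imbalance between schools leaks into the "treatment" coefficient.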


F1000Research, 2018, Vol 7, pp. 610
Author(s): Theodoros Papakonstantinou, Adriani Nikolakopoulou, Gerta Rücker, Anna Chaimani, Guido Schwarzer, et al.

In network meta-analysis, it is important to assess the influence of the limitations or other characteristics of individual studies on the estimates obtained from the network. The percentage contribution matrix, which shows how much each direct treatment effect contributes to each treatment effect estimate from network meta-analysis, is crucial in this context. We use ideas from graph theory to derive the percentage that is contributed by each direct treatment effect. We start with the ‘projection’ matrix in a two-step network meta-analysis model, called the H matrix, which is analogous to the hat matrix in a linear regression model. We develop a method to translate H entries to percentage contributions based on the observation that the rows of H can be interpreted as flow networks, where a stream is defined as the composition of a path and its associated flow. We present an algorithm that identifies the flow of evidence in each path and decomposes it into direct comparisons. To illustrate the methodology, we use two published networks of interventions. The first compares no treatment, quinolone antibiotics, non-quinolone antibiotics and antiseptics for underlying eardrum perforations and the second compares 14 antimanic drugs. We believe that this approach is a useful and novel addition to network meta-analysis methodology, which allows the consistent derivation of the percentage contributions of direct evidence from individual studies to network treatment effects.
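As a minimal worked example, consider a triangle network with treatments A, B, and C. The sketch below uses invented weights, and the stream decomposition is hard-coded for this triangle (the paper's algorithm generalizes it to arbitrary networks); it constructs the H matrix of the two-step aggregate model and reads its first row as a unit flow from A to B:

```python
# Minimal numpy sketch: H ("hat") matrix for a three-treatment network
# (A, B, C) and its reading as a flow network. The stream decomposition
# below only handles this triangle; the paper's algorithm is general.
import numpy as np

# Direct comparisons AB, AC, BC expressed in basic parameters (d_AB, d_AC)
X = np.array([[1.0, 0.0],    # A vs B
              [0.0, 1.0],    # A vs C
              [-1.0, 1.0]])  # B vs C = d_AC - d_AB
W = np.diag(1 / np.array([0.2, 0.1, 0.3]))   # inverse-variance weights (made up)

# Two-step model: network estimates of all comparisons = H @ direct estimates
H = X @ np.linalg.inv(X.T @ W @ X) @ X.T @ W
row = H[0]                                    # row for the A-vs-B estimate
print("H row for AB:", np.round(row, 3))      # coefficients on AB, AC, BC

# Read the row as a unit flow from A to B: |row| gives the flow on each
# edge; in this triangle the only two streams are A->B and A->C->B.
flow_direct = abs(row[0])                     # stream A->B
flow_indirect = abs(row[1])                   # stream A->C->B (= |row[2]|)
total = flow_direct + flow_indirect
print(f"contribution of direct AB evidence:   {flow_direct / total:.1%}")
print(f"contribution of indirect A-C-B route: {flow_indirect / total:.1%}")
```

With these weights the row is (2/3, 1/3, -1/3): the flow leaving A sums to one, and the direct A-B evidence contributes two thirds of the network estimate, the indirect route through C one third.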


2014, Vol 20 (11), pp. 1494-1501
Author(s): J Zhang, E Waubant, G Cutter, JS Wolinsky, D Leppert

Background: The Expanded Disability Status Scale (EDSS) has low sensitivity and reliability for detecting sustained disability progression (SDP) in multiple sclerosis (MS) trials.
Objective: This study evaluated composite disability end points as alternatives to EDSS alone.
Methods: SDP rates were determined using 96-week data from the Olympus trial (rituximab in patients with primary progressive MS). SDP was analyzed using composite disability end points: SDP in EDSS, timed 25-foot walk test (T25FWT), or 9-hole peg test (9HPT) (composite A); SDP in T25FWT or 9HPT (composite B); SDP in EDSS and (T25FWT or 9HPT) (composite C); and SDP in any two of EDSS, T25FWT, and 9HPT (composite D).
Results: Overall agreement between EDSS and the other disability measures in defining SDP was 66%-73%. Composite A showed a treatment effect estimate similar to EDSS alone, with much higher SDP rates. Composites B, C, and D all showed larger treatment effect estimates, with different or similar SDP rates versus EDSS alone. Using composite A (24-week confirmation only), B, C, or D could reduce the sample sizes needed for MS trials.
Conclusion: Composite end points comprising multiple accepted disability measures could be superior to EDSS alone in analyzing disability progression and should be considered in future MS trials.
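To see why larger effect estimates on a composite endpoint translate into smaller trials, a standard two-proportion sample-size calculation is enough. The SDP rates below are hypothetical placeholders, not figures from the Olympus trial:

```python
# Back-of-envelope illustration (rates are hypothetical, not the
# Olympus trial's): a composite endpoint that raises the event rate
# and the relative treatment effect shrinks the required sample size.
from scipy.stats import norm

def n_per_arm(p_ctrl, p_trt, alpha=0.05, power=0.9):
    """Two-proportion sample size per arm (normal approximation)."""
    za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
    p_bar = (p_ctrl + p_trt) / 2
    num = (za * (2 * p_bar * (1 - p_bar)) ** 0.5
           + zb * (p_ctrl * (1 - p_ctrl) + p_trt * (1 - p_trt)) ** 0.5) ** 2
    return num / (p_ctrl - p_trt) ** 2

# Hypothetical 96-week SDP rates (control vs treated)
print("EDSS alone: n/arm =", round(n_per_arm(0.30, 0.24)))
print("composite:  n/arm =", round(n_per_arm(0.45, 0.34)))
```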


2020, Vol 20 (1)
Author(s): Stella Erdmann, Marietta Kirchner, Heiko Götte, Meinhard Kieser

Abstract
Background: Go/no-go decisions after phase II, and the sample size chosen for phase III, are usually based on phase II results (e.g., the phase II treatment effect estimate). Because of the decision rule (only promising phase II results lead to phase III), treatment effect estimates from phase II that initiate a phase III trial commonly overestimate the true treatment effect. Underpowered phase III trials are the consequence, and optimistic findings may then fail to be reproduced, leading to the failure of potentially expensive drug development programs. For some disease areas, reported failure rates are as high as 62.5%.
Methods: We integrate the ideas of multiplicative and additive adjustment of treatment effect estimates after go decisions into a utility-based framework for optimizing drug development programs. The design of a phase II/III program, i.e., the "right amount of adjustment", the allocation of resources to phases II and III in terms of sample size, and the rule applied to decide whether to stop or to proceed with phase III, influences its success considerably. Given specific program characteristics (e.g., fixed and variable per-patient costs for phases II and III, and the probable gain in case of market launch), designs that are optimal with respect to maximal expected utility can be identified by the proposed Bayesian-frequentist approach. The method is illustrated by application to practical examples characteristic of oncological studies.
Results: In general, program set-ups that use an adjusted treatment effect estimate for phase III planning are superior, with respect to maximal expected utility, to the "naïve" set-ups. We therefore recommend considering an adjusted phase II treatment effect estimate for the phase III sample size calculation. However, there is no one-size-fits-all design.
Conclusion: Individual planning is necessary to find the optimal design for a specific drug development program. The optimal choice of design parameters for a given program can be found with our user-friendly R Shiny application and package (both accessible open-source via [1]).
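The core mechanic of multiplicative adjustment is easy to sketch. In the example below, the effect estimate and the discount factors are invented for illustration (the paper selects the adjustment by maximizing expected utility rather than fixing it a priori); shrinking the phase II estimate before the phase III sample size calculation leads to larger, better-powered phase III trials:

```python
# Sketch of the core mechanic (discount factors and effect estimate are
# invented; the paper optimizes the adjustment in a utility framework):
# shrinking the phase II estimate before the phase III sample size
# calculation guards against go-decision selection bias.
from scipy.stats import norm

def n_per_arm(delta, sigma=1.0, alpha=0.025, power=0.9):
    """Per-arm sample size for a normally distributed endpoint."""
    z = norm.ppf(1 - alpha) + norm.ppf(power)
    return 2 * (z * sigma / delta) ** 2

delta_phase2 = 0.35               # estimate that triggered the go decision
for shrink in (1.0, 0.9, 0.75):   # 1.0 = "naïve" planning, <1 = adjusted
    n = n_per_arm(shrink * delta_phase2)
    print(f"multiplicative adjustment {shrink:.2f}: n/arm = {n:.0f}")
```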

