Development of a Real-Time Risk Model (RTRM) for Predicting In-Hospital COVID-19 Mortality

Author(s):  
Daniel Schlauch ◽  
Arielle M. Fisher ◽  
Jessica Correia ◽  
Xiaotong Fu ◽  
Casey Martin ◽  
...  

ABSTRACT

Background: With over 83 million cases and 1.8 million deaths reported worldwide by the end of 2020 for SARS-CoV-2 (COVID-19), there is an urgent need to enhance identification of high-risk populations to properly evaluate therapy effectiveness with real-world evidence and improve outcomes.

Methods: Baseline and daily indicators were evaluated using electronic health records for 46,971 patients hospitalized with COVID-19 from 176 HCA Healthcare-affiliated hospitals, presenting from March to September 2020, to develop a real-time risk model (RTRM) of all-cause, hospitalized mortality. Patient facility, dates-of-care, clinico-demographics, comorbidities, vitals, laboratory markers, and respiratory support findings were aggregated in a logistic regression model.

Findings: The RTRM predicted overall mortality as well as mortality 1, 3, and 7 days in advance with an area under the receiver operating characteristic curve (AUCROC) of 0.905, 0.911, 0.905, and 0.901, respectively, significantly outperforming a combined model of age and daily modified WHO progression scale (all p<0.0001; AUCROC, 0.846, 0.848, 0.850, and 0.852). The RTRM delineated risk at presentation from ongoing risk associated with medical care and showed that mortality rates decreased over time due to both decreased severity and changes in care.

Interpretation: To our knowledge, this study is the largest of its kind to comprehensively evaluate predictors of COVID-19 mortality and to incorporate daily risk measures. The RTRM validates current literature trends in mortality across time and allows direct translation for research and clinical applications.

Research in context

Evidence before this study: Due to the rapidly evolving nature of the COVID-19 pandemic, the body of evidence and published literature was considered prior to study initiation and throughout the course of the study. Although at study initiation there was a growing consensus that age and disease severity at presentation were the greatest contributors to predicting in-hospital mortality, there was less consensus on the key demographics, comorbidities, vitals, and laboratory values. In addition, early on, most potential predictors of in-hospital mortality had been assessed by univariable analysis. In April 2020, a systematic review of prediction studies for COVID-19 revealed that there were only 8 publications on prognosis of hospital mortality. All were deemed to have high potential for bias due to low sample size, model overfitting, vague reporting, and/or insufficient follow-up. Over the duration of the study, in-hospital prediction models were published ranging from simplified scores to machine learning. At least 8 prediction studies published during the course of our own had comparable sample size or extensive multivariable analysis, with the greatest reported prediction accuracy being 74%. Moreover, a report in December 2020 independently validated 4 simple prediction models, with none achieving an AUCROC greater than 0.72. Lastly, an eight-variable score developed by a UK consortium on a comparable sample size demonstrated an AUCROC of 0.77. To our knowledge, however, none to date have modeled daily risk beyond baseline. We frequently assessed World Health Organization (WHO) resources and queried both medRxiv and PubMed with the search terms “COVID”, “prediction”, “hospital” and “mortality” to ensure we were assessing all potential predictors of hospitalized mortality. The last search was performed on January 5, 2021 with the addition of “multi”, “daily”, “real time” or “longitudinal” terms to confirm the novelty of our study. No date restrictions or language filters were applied.

Added value of this study: To our knowledge, this study is the largest and most geographically diverse of its kind to comprehensively evaluate predictors of in-hospital COVID-19 mortality that are available retrospectively in electronic health records and to incorporate longitudinal, daily risk measures to create risk trajectories over the entire hospital stay. Not only does our Real-Time Risk Model (RTRM) validate the current literature, demonstrating reduced mortality over the course of the COVID-19 pandemic and identifying age and WHO severity as the major baseline drivers of mortality, but it also outperforms a model of age and daily WHO score combined, achieving an AUCROC of 0.91 on the test set. Furthermore, because the RTRM delineates risk at baseline from risk over the course of care, it allows more granular interpretation of the impact of various parameters on mortality risk, as demonstrated in the current study using both sex disparity and calendar epochs based on evolving treatment recommendations as proofs-of-principle.

Implications of all the available evidence: The goal of the RTRM was to create a flexible tool that could be used to assess intervention and treatment efficacy in real-world, evidence-based studies as well as, with further development, to provide real-time risk assessment to aid clinical decisions and resourcing. Implications of this work are broad. The depth of the multi-facility, harmonized electronic health record (EHR) dataset, coupled with the transparency we provide in the RTRM results, offers a resource for others to interpret the impact of markers of interest and utilize data relevant to their own studies. The RTRM will allow optimal matching in retrospective cohort studies and provide a more granular endpoint for evaluation of interventions beyond general effectiveness, such as optimal delivery, including dosing and timing, and identification of the population(s) benefiting from an intervention or combination of interventions. In addition, beyond the scope of the current study, the RTRM and its resultant daily risk scores allow flexibility in developing prediction models for other clinical outcomes, such as progression of pulmonary disease, need for invasive mechanical ventilation, and development of sepsis and/or multiorgan failure, all of which could provide a framework for real-time personalized care.
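
As a rough illustration of the modeling approach described above (baseline and daily indicators aggregated in a logistic regression and evaluated by AUCROC), the following is a minimal, hedged sketch. The file name, feature columns, and outcome label are hypothetical placeholders, not the study's actual schema or variables.

```python
# Hedged sketch of a daily-risk logistic model in the spirit of the RTRM.
# Columns (age, who_scale, crp, spo2, ...) are illustrative, not the study's schema.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import GroupShuffleSplit

# One row per patient-day: baseline covariates repeated, daily labs/vitals updated.
df = pd.read_csv("patient_days.csv")          # hypothetical file
features = ["age", "who_scale", "crp", "spo2", "resp_support_level"]
X, y = df[features], df["death_within_7d"]    # label: death within the chosen horizon

# Split by patient so no individual appears in both train and test sets.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, test_idx = next(splitter.split(X, y, groups=df["patient_id"]))

model = LogisticRegression(max_iter=1000)
model.fit(X.iloc[train_idx], y.iloc[train_idx])

auc = roc_auc_score(y.iloc[test_idx], model.predict_proba(X.iloc[test_idx])[:, 1])
print(f"7-day horizon AUCROC: {auc:.3f}")
```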

2021 ◽  
Vol 4 (3) ◽  
pp. e211428
Author(s):  
Colin G. Walsh ◽  
Kevin B. Johnson ◽  
Michael Ripperger ◽  
Sarah Sperry ◽  
Joyce Harris ◽  
...  

Circulation ◽  
2020 ◽  
Vol 142 (Suppl_3) ◽  
Author(s):  
William Ratliff ◽  
Zachary K Wegermann ◽  
Harvey Shi ◽  
Michael Gao ◽  
Mark Sendak ◽ 
...  

Introduction: Early identification of cardiac decompensation remains critical for improved patient outcomes. Digital phenotypes using real-time electronic health record (EHR) data offer an unbiased method to detect decompensation in at-risk individuals. Methods: Phenotypes designed to detect cardiac decompensation and its sequelae were retrospectively evaluated in 108,697 adult patient hospitalizations at a single center from October 2015 to August 2018. The 6 phenotypes included hypotension, end organ dysfunction (EOD), hypoperfusion (concomitant hypotension and EOD), escalating vasoactive medication use (vasoactive meds), respiratory decline, and respiratory intervention. Median time from admission to phenotype development was measured in hours. In-hospital mortality and unanticipated ICU transfers were determined across all phenotypes and phenotype combinations. Results: Prevalence and time to detection varied across all six phenotypes (Table 1), with EOD found most frequently (35.7%) and detected earliest (3.4 h, IQR 0.9-26.2 h). Among individual phenotypes, patients with hypoperfusion had the highest rates of unanticipated ICU transfer (20.62%) and in-hospital mortality (20.99%). Patients meeting at least one phenotype had a 5.90% ICU transfer rate and a 5.04% in-hospital mortality rate, compared to a 2.19% ICU transfer rate and a 0.62% mortality rate for patients meeting zero phenotypes. Among the 41 measured phenotype combinations, patients meeting all 6 phenotypes had the highest rates of unanticipated ICU transfer (28.75%) and in-hospital mortality (36.45%). Conclusions: Digital phenotypes of decompensation using real-world EHR data identify patients at higher risk of unexpected ICU transfer and in-hospital mortality at early time points in the hospitalization. Further studies will evaluate whether implementation of a digital phenotype detection tool can improve care pathways and outcomes.
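
The phenotype logic described above lends itself to a simple tabular implementation. Below is a hedged sketch of how such flags and outcome rates could be computed; the thresholds, column names, and input file are illustrative assumptions, not the study's phenotype definitions.

```python
# Hedged sketch: flagging simple digital phenotypes from encounter-level EHR data
# and summarising outcome rates by phenotype burden. Thresholds are placeholders.
import pandas as pd

enc = pd.read_csv("hospitalizations.csv")   # hypothetical: one row per encounter

enc["hypotension"] = enc["min_sbp"] < 90
enc["end_organ_dysfunction"] = enc["max_creatinine"] > 2.0
enc["hypoperfusion"] = enc["hypotension"] & enc["end_organ_dysfunction"]

pheno_cols = ["hypotension", "end_organ_dysfunction", "hypoperfusion"]
enc["n_phenotypes"] = enc[pheno_cols].sum(axis=1)

# Outcome rates (in %) by number of phenotypes met, mirroring the abstract's comparison.
summary = (enc.groupby("n_phenotypes")[["icu_transfer", "in_hospital_death"]]
              .mean()
              .mul(100)
              .round(2))
print(summary)
```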


2021 ◽  
Vol 13 (11) ◽  
pp. 2179
Author(s):  
Pedro Mateus ◽  
Virgílio B. Mendes ◽  
Sandra M. Plecha

The neutral atmospheric delay is one of the major error sources in Space Geodesy techniques such as Global Navigation Satellite Systems (GNSS), and its modeling for high-accuracy applications can be challenging. Improving the modeling of the atmospheric delays (hydrostatic and non-hydrostatic) also leads to more accurate and precise precipitable water vapor (PWV) estimation, particularly in real-time applications, where models play an important role since numerical weather prediction models cannot be used for real-time processing or forecasting. This study developed an improved version of the Hourly Global Pressure and Temperature (HGPT) model, the HGPT2. It is based on 20 years of ERA5 reanalysis data at full spatial (0.25° × 0.25°) and temporal (1 h) resolution. Apart from surface air temperature, surface pressure, zenith hydrostatic delay, and weighted mean temperature, the updated model also provides information regarding relative humidity, zenith non-hydrostatic delay, and precipitable water vapor. The HGPT2 is based on the time-segmentation concept and uses annual, semi-annual, and quarterly periodicities to calculate the relative humidity anywhere on the Earth’s surface. Data from 282 moisture sensors located close to GNSS stations during 1 year (2020) were used to assess the model coefficients. The HGPT2 meteorological parameters were used to process 35 GNSS sites belonging to the International GNSS Service (IGS) using the GAMIT/GLOBK software package. Results show decreased root-mean-square error (RMSE) and bias values relative to the most widely used zenith delay models, with a significant impact on the height component. The HGPT2 was developed to be applied in the most diverse areas that can significantly benefit from an ERA5 full-resolution model.
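
Since HGPT2 models relative humidity with annual, semi-annual, and quarterly periodicities, a minimal illustration of that idea is a harmonic least-squares fit. The sketch below uses synthetic data and is not the published model or its coefficients.

```python
# Hedged sketch: fitting annual, semi-annual, and quarterly harmonics to a surface
# relative-humidity time series, in the spirit of HGPT2's periodic terms.
import numpy as np

def harmonic_design(t_days):
    """Design matrix: mean term plus annual, semi-annual and quarterly harmonics."""
    cols = [np.ones_like(t_days)]
    for period in (365.25, 182.625, 91.3125):       # periods in days
        w = 2.0 * np.pi * t_days / period
        cols += [np.cos(w), np.sin(w)]
    return np.column_stack(cols)

# Synthetic example: one year of hourly epochs with noisy humidity observations.
t = np.arange(0, 365, 1.0 / 24.0)
rh_obs = 70 + 10 * np.cos(2 * np.pi * t / 365.25) + np.random.normal(0, 3, t.size)

A = harmonic_design(t)
coeffs, *_ = np.linalg.lstsq(A, rh_obs, rcond=None)   # least-squares harmonic coefficients
rh_model = A @ coeffs
print("RMSE of harmonic fit: %.2f %%" % np.sqrt(np.mean((rh_obs - rh_model) ** 2)))
```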


2021 ◽  
Vol 4 (1) ◽  
Author(s):  
James T. H. Teo ◽  
Vlad Dinu ◽  
William Bernal ◽  
Phil Davidson ◽  
Vitaliy Oliynyk ◽  
...  

Abstract: Analyses of search engine and social media feeds have been attempted for infectious disease outbreaks, but have been found to be susceptible to artefactual distortions from health scares or keyword spamming in social media or the public internet. We describe an approach using real-time aggregation of keywords and phrases of freetext from real-time clinician-generated documentation in electronic health records to produce a customisable real-time viral pneumonia signal providing up to 4 days warning for secondary care capacity planning. This low-cost approach is open-source, is locally customisable, is not dependent on any specific electronic health record system and can provide an ensemble of signals if deployed at multiple organisational scales.
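
A minimal sketch of the keyword-aggregation idea is shown below, assuming a hypothetical table of dated free-text notes; the keyword list and schema are illustrative, not the deployed open-source pipeline.

```python
# Hedged sketch: counting viral-pneumonia-related keywords in daily free-text clinical
# notes to produce a simple surveillance signal. Keywords and columns are placeholders.
import re
import pandas as pd

KEYWORDS = ["pneumonia", "dry cough", "fever", "hypoxia", "ground glass"]
pattern = re.compile("|".join(re.escape(k) for k in KEYWORDS), flags=re.IGNORECASE)

notes = pd.read_csv("clinical_notes.csv", parse_dates=["note_date"])  # hypothetical file
notes["keyword_hits"] = notes["note_text"].fillna("").apply(lambda t: len(pattern.findall(t)))

# Daily aggregate signal: total keyword mentions per calendar day.
signal = notes.groupby(notes["note_date"].dt.date)["keyword_hits"].sum()
print(signal.tail(14))   # inspect the most recent two weeks
```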


Risks ◽  
2018 ◽  
Vol 6 (3) ◽  
pp. 85 ◽  
Author(s):  
Mohamed Lkabous ◽  
Jean-François Renaud

In this short paper, we study a VaR-type risk measure introduced by Guérin and Renaud, which is based on cumulative Parisian ruin. We derive some properties of this risk measure and compare it to the risk measures of Trufin et al. and of Loisel and Trufin.
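
For orientation, a hedged sketch of the underlying notion: in the Guérin and Renaud framework, cumulative Parisian ruin with delay r > 0 occurs at the first time the total occupation time of the surplus process below zero exceeds r, and the VaR-type measure is then built from the probability of this event. The LaTeX below records only that ruin-time definition (notation is mine and should be checked against the papers; the exact VaR-type construction is not reproduced here).

```latex
% Hedged sketch of the cumulative Parisian ruin time for a surplus process X_t;
% the delay parameter r > 0 bounds the total time spent below zero.
\[
  \sigma_r \;=\; \inf\Big\{ t > 0 \,:\, \int_0^t \mathbf{1}_{\{X_s < 0\}} \,\mathrm{d}s \;>\; r \Big\},
  \qquad r > 0 .
\]
```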


2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Yaghoub Dabiri ◽  
Alex Van der Velden ◽  
Kevin L. Sack ◽  
Jenny S. Choy ◽  
Julius M. Guccione ◽  
...  

Abstract: An understanding of left ventricle (LV) mechanics is fundamental for designing better preventive, diagnostic, and treatment strategies for improved heart function. Because of the costs of clinical and experimental studies to treat and understand heart function, respectively, in-silico models play an important role. Finite element (FE) models, which have been used to create in-silico LV models for different cardiac health and disease conditions, as well as cardiac device design, are time-consuming and require powerful computational resources, which limits their use when real-time results are needed. As an alternative, we sought to use deep learning (DL) for LV in-silico modeling. We used 80 four-chamber heart FE models to train feed-forward, as well as recurrent neural network (RNN) with long short-term memory (LSTM), models for LV pressure and volume. We used 120 LV-only FE models for training LV stress predictions. The active material properties of the myocardium and time were features for the LV pressure and volume training, and passive material properties and element centroid coordinates were features of the LV stress prediction models. For six test FE models, the DL error for LV volume was 1.599 ± 1.227 ml, and the error for pressure was 1.257 ± 0.488 mmHg; for 20 LV FE test examples, the mean absolute errors were 0.179 ± 0.050 kPa for myofiber stress, 0.049 ± 0.017 kPa for cross-fiber stress, and 0.039 ± 0.011 kPa for shear stress. After training, the DL runtime was on the order of seconds, whereas the equivalent FE runtime was on the order of several hours (pressure and volume) or 20 min (stress). We conclude that using DL, LV in-silico simulations can be provided for applications requiring real-time results.
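
To make the surrogate-modeling idea concrete, here is a hedged sketch of a small LSTM mapping material-property features plus time to LV pressure and volume. Layer sizes, feature counts, and the synthetic training data are illustrative assumptions, not the authors' architecture or dataset.

```python
# Hedged sketch: an LSTM surrogate for time-resolved LV pressure and volume,
# trained on synthetic stand-ins for FE simulation outputs.
import torch
import torch.nn as nn

class LVSurrogate(nn.Module):
    def __init__(self, n_features: int = 4, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_features, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)          # outputs per time step: [pressure, volume]

    def forward(self, x):                          # x: (batch, time, features)
        h, _ = self.lstm(x)
        return self.head(h)

# Synthetic stand-in: 80 simulated "FE models", 100 time points, 4 input features.
x = torch.randn(80, 100, 4)
y = torch.randn(80, 100, 2)
model, loss_fn = LVSurrogate(), nn.MSELoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(5):                                 # a few illustrative training steps
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
print("final training MSE:", float(loss))
```

Once trained, inference on new parameter sets runs in milliseconds to seconds, which is the speed advantage over re-running an FE solve that the abstract highlights.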


Author(s):  
Krishna K Patel ◽  
Suzanne V Arnold ◽  
Paul S Chan ◽  
Yuanyuan Tang ◽  
Yashashwi Pokharel ◽  
...  

Introduction: In SPRINT (Systolic blood PRessure INtervention Trial), non-diabetic patients with hypertension at high cardiovascular risk treated with intensive blood pressure (BP) control (<120 mmHg) had fewer major adverse cardiovascular events (MACE) and all-cause deaths but higher rates of serious adverse events (SAE) compared with patients treated with standard BP control (<140 mmHg). However, the degree of benefit or harm for an individual patient could vary due to heterogeneity in treatment effect. Methods: Using patient-level data from SPRINT, we developed predictive models for benefit (freedom from death or MACE) and harm (increased SAE) to allow for individualized BP treatment goals based on the projected risk-benefit for each patient. Interactions between candidate variables and treatment were evaluated in the models to identify differential treatment effects. We performed 10-fold cross-validation for both models. Results: Among 9361 patients, 8606 (92%) had no MACE or death event (benefit) and 3529 (38%) had a SAE (harm) over a median follow-up of 3.3 years. The benefit model showed good discrimination (c-index = 0.72; cross-validated c-index = 0.72) with treatment interactions of age, sex, and baseline systolic BP (Figure A), with more benefit of intensive BP treatment in patients who are older, male, and have lower baseline SBP. The SAE risk model showed moderate discrimination (c-index = 0.66; cross-validated c-index = 0.65) with a treatment interaction of baseline renal function (Figure B), indicating less harm of intensive treatment in patients with a higher baseline creatinine. The mean predicted absolute benefit of intensive BP treatment was 2.2% ± 2.5% compared with standard treatment, but ranged from 10.7% lower benefit to 17% greater benefit in individual patients. Similarly, the mean predicted absolute harm with intensive treatment was 1.0% ± 1.9%, but ranged from 15.9% less harm to 4.9% more harm. Conclusion: Among non-diabetic patients with hypertension at high cardiovascular risk, we developed prediction models using basic clinical data that can identify patients with a higher likelihood of benefit vs. harm with BP treatment strategies. These models could be used to tailor the treatment approach based on the projected risk and benefit for each unique patient.
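
The key modeling device here is the treatment-by-covariate interaction term evaluated under cross-validation. The sketch below illustrates that pattern on simulated data; the variable names, formula, and data are placeholders, not SPRINT patient-level data, and an AUC is used as a stand-in for the c-index.

```python
# Hedged sketch: a logistic model with treatment interactions and 10-fold
# cross-validated discrimination, echoing the abstract's approach on fake data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "intensive": rng.integers(0, 2, n),      # 1 = intensive BP arm
    "age": rng.normal(68, 9, n),
    "male": rng.integers(0, 2, n),
    "sbp": rng.normal(140, 15, n),
})
logit = -4 + 0.04 * df["age"] - 0.3 * df["intensive"] + 0.01 * df["intensive"] * df["sbp"]
df["event"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))   # synthetic outcome

# Interaction terms let the treatment effect vary with age, sex, and baseline SBP.
formula = "event ~ intensive * (age + male + sbp)"
aucs = []
for tr, te in StratifiedKFold(n_splits=10, shuffle=True, random_state=0).split(df, df["event"]):
    fit = smf.logit(formula, data=df.iloc[tr]).fit(disp=0)
    aucs.append(roc_auc_score(df["event"].iloc[te], fit.predict(df.iloc[te])))
print(f"cross-validated AUC (c-statistic analogue): {np.mean(aucs):.2f}")
```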


2017 ◽  
Vol 28 (1) ◽  
pp. 309-320 ◽  
Author(s):  
Scott Powers ◽  
Valerie McGuire ◽  
Leslie Bernstein ◽  
Alison J Canchola ◽  
Alice S Whittemore

Personal predictive models for disease development play important roles in chronic disease prevention. The performance of these models is evaluated by applying them to the baseline covariates of participants in external cohort studies, with model predictions compared to subjects' subsequent disease incidence. However, the covariate distribution among participants in a validation cohort may differ from that of the population for which the model will be used. Since estimates of predictive model performance depend on the distribution of covariates among the subjects to which it is applied, such differences can cause misleading estimates of model performance in the target population. We propose a method for addressing this problem by weighting the cohort subjects to make their covariate distribution better match that of the target population. Simulations show that the method provides accurate estimates of model performance in the target population, while un-weighted estimates may not. We illustrate the method by applying it to evaluate an ovarian cancer prediction model targeted to US women, using cohort data from participants in the California Teachers Study. The methods can be implemented using open-source code for public use as the R-package RMAP (Risk Model Assessment Package) available at http://stanford.edu/~ggong/rmap/.
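
The reweighting idea can be illustrated with a simple density-ratio weighting scheme; the sketch below is analogous in spirit only and is not the RMAP implementation (all data, the single-covariate setup, and the stand-in risk model are assumptions for illustration).

```python
# Hedged sketch: re-weighting validation-cohort subjects so their covariate
# distribution matches a target population before scoring a risk model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

# Synthetic setup: the validation cohort skews older than the target population.
cohort_age = rng.normal(60, 8, 5000)
target_age = rng.normal(50, 10, 5000)

# Estimate density-ratio weights w(x) ~ P(target | x) / P(cohort | x) with a classifier.
X = np.concatenate([cohort_age, target_age]).reshape(-1, 1)
z = np.concatenate([np.zeros(5000), np.ones(5000)])     # 1 = target population
clf = LogisticRegression().fit(X, z)
p = clf.predict_proba(cohort_age.reshape(-1, 1))[:, 1]
weights = p / (1 - p)

# Apply the weights when estimating the model's discrimination in the cohort.
risk_score = 1 / (1 + np.exp(-(cohort_age - 55) / 10))                  # stand-in model output
event_prob = np.clip(0.05 + 0.002 * (cohort_age - 40), 0.01, 0.99)      # synthetic outcome rate
outcome = rng.binomial(1, event_prob)
print("weighted AUC:", roc_auc_score(outcome, risk_score, sample_weight=weights).round(3))
```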


2018 ◽  
Vol 09 (04) ◽  
pp. 841-848
Author(s):  
Kevin King ◽  
John Quarles ◽  
Vaishnavi Ravi ◽  
Tanvir Chowdhury ◽  
Donia Friday ◽  
...  

Background: Through the Health Information Technology for Economic and Clinical Health Act of 2009, the federal government invested $26 billion in electronic health records (EHRs) to improve physician performance and patient safety; however, these systems have not met expectations. One of the cited issues with EHRs is the human–computer interaction, as exhibited by the excessive number of interactions with the interface, which reduces clinician efficiency. In contrast, real-time location systems (RTLS)—technologies that can track the location of people and objects—have been shown to increase clinician efficiency. RTLS can improve patient flow in part through the optimization of patient verification activities. However, the data collected by RTLS have not been effectively applied to optimize interaction with EHR systems. Objectives: We conducted a pilot study with the intention of improving the human–computer interaction of EHR systems by incorporating a RTLS. The aim of this study is to determine the impact of RTLS on process metrics (i.e., provider time, number of rooms searched to find a patient, and the number of interactions with the computer interface) and the outcome metric of patient identification accuracy. Methods: A pilot study was conducted in a simulated emergency department using a locally developed camera-based RTLS-equipped EHR that detected the proximity of subjects to simulated patients and displayed patient information when subjects entered the exam rooms. Ten volunteers participated in 10 patient encounters with the RTLS activated (RTLS-A) and then deactivated (RTLS-D). Each volunteer was monitored and actions recorded by trained observers. We sought a 50% improvement in time to locate patients, number of rooms searched to locate patients, and the number of mouse clicks necessary to perform those tasks. Results: The time required to locate patients (RTLS-A = 11.9 ± 2.0 seconds vs. RTLS-D = 36.0 ± 5.7 seconds, p < 0.001), rooms searched to find a patient (RTLS-A = 1.0 ± 1.06 vs. RTLS-D = 3.8 ± 0.5, p < 0.001), and number of clicks to access patient data (RTLS-A = 1.0 ± 0.06 vs. RTLS-D = 4.1 ± 0.13, p < 0.001) were significantly reduced with RTLS-A relative to RTLS-D. There was no significant difference between RTLS-A and RTLS-D for patient identification accuracy. Conclusion: This pilot demonstrated in simulation that an EHR equipped with real-time location services improved performance in locating patients and reduced error compared with an EHR without RTLS. Furthermore, RTLS decreased the number of mouse clicks required to access information. This study suggests that EHRs equipped with real-time location services that automate patient location and other repetitive tasks may improve physician efficiency and, ultimately, patient safety.

