Use of Electronic Data to Identify Risk Factors Associated with Clostridium difficile Infection (CDI) and to Develop CDI Risk Scores

2017 ◽  
Vol 4 (suppl_1) ◽  
pp. S403-S403
Author(s):  
Laurie Aukes ◽  
Bruce Fireman ◽  
Edwin Lewis ◽  
Julius Timbol ◽  
John Hansen ◽  
...  

Abstract Background Clostridium difficile is a major cause of severe diarrhea in the U.S. We described characteristics of Kaiser Permanente Northern California (KPNC) members with C. difficile infection (CDI), identified risk factors associated with CDI, and developed risk scores to predict who may develop CDI. Methods Retrospective cohort study of all KPNC members ≥18 years old from May 2011 to July 2014, comparing demographic and clinical characteristics for those with and without lab-confirmed incident CDI. We included CDI risk factors in logistic regression models to estimate the risk of developing future CDI after an Identification Recruitment Date (IRD), a time when an individual might be a good candidate for a C. difficile vaccine clinical trial. Two risk score models were created and cross-validated (70% of the data used for development and 30% for testing). Results During the study period, there were 9,986 CDI cases and 2,230,354 members without CDI. CDI cases tended to be ≥65 years old (59% vs. 21%), female (61% vs. 53%), and white race (70% vs. 53%), with more hospitalizations (42% vs. 3%), emergency room visits (51% vs. 14%), and skilled nursing facility stays (25% vs. 0.6%) in the year prior to CDI compared with members without CDI. At least 10 office visits within the prior year (53% vs. 16%), use of antibiotics in the last 12 weeks (81% vs. 11%), proton pump inhibitors in the last year (36% vs. 7%), and multiple medical conditions within the prior year (e.g., chronic kidney disease, congestive heart failure, and pneumonia) were important risk factors for CDI. Using a hospital discharge event as the IRD, our risk score model yielded excellent performance in predicting the likelihood of developing CDI in the subsequent 31–365 days (C-statistic of 0.851). Using a random date as the IRD, our model also predicted CDI risk in the subsequent 1–30 days (C-statistic 0.658) and 31–365 days (C-statistic 0.722) reasonably well.
Conclusion CDI can be predicted by increasing age, medications, comorbidities and healthcare exposure, particularly ≥10 office visits, hospitalizations, and skilled nursing stays in the prior year and recent antibiotics. Such risk factors can be used to identify high-risk populations for C. difficile vaccine clinical studies. Disclosures H. Yu, Pfizer, Inc.: Employee, Salary; B. Cai, Pfizer, Inc.: Employee, Salary; E. Gonzalez, Pfizer, Inc.: Employee, Salary; J. Lawrence, Pfizer, Inc.: Employee, Salary; N. P. Klein, GSK: Investigator, Grant recipient; sanofi pasteur: Investigator, Grant recipient; Merck & Co: Investigator, Grant recipient; MedImmune: Investigator, Grant recipient; Protein Sciences: Investigator, Grant recipient; Pfizer: Investigator, Grant recipient
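The abstracts above validate their risk models with a C-statistic: for a binary outcome, the probability that a randomly chosen case receives a higher predicted risk than a randomly chosen non-case. A minimal pure-Python sketch of that computation (the function name and toy scores are illustrative, not from the study):

```python
def c_statistic(case_scores, control_scores):
    """Probability that a randomly chosen case receives a higher
    predicted risk than a randomly chosen control (ties count half).
    Equivalent to the area under the ROC curve."""
    wins = ties = 0
    for c in case_scores:
        for k in control_scores:
            if c > k:
                wins += 1
            elif c == k:
                ties += 1
    pairs = len(case_scores) * len(control_scores)
    return (wins + 0.5 * ties) / pairs

# Toy scores: cases mostly rank above controls.
print(c_statistic([0.9, 0.8, 0.7, 0.6], [0.5, 0.4, 0.3, 0.65]))  # → 0.9375
```

A value of 0.5 means the score is no better than chance; the 0.851 reported above indicates strong discrimination.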

Author(s):  
Laurie Aukes ◽  
Bruce Fireman ◽  
Edwin Lewis ◽  
Julius Timbol ◽  
John Hansen ◽  
...  

Abstract Background Clostridioides difficile infection (CDI) is a major cause of severe diarrhea. In this retrospective study, we identified CDI risk factors by comparing demographic and clinical characteristics for Kaiser Permanente Northern California (KPNC) members ≥18 years old with and without lab-confirmed incident CDI. Methods We included these risk factors in logistic regression models to develop two risk scores that predict future CDI after an Index Date for Risk Score Assessment (IDRSA), marking the beginning of a period for which we estimated CDI risk. Results From May 2011 to July 2014, we included 9,986 CDI cases and 2,230,354 members without CDI. CDI cases tended to be older, female, and of white race, and to have more hospitalizations, emergency department and office visits, skilled nursing facility stays, antibiotic and proton pump inhibitor use, and specific comorbidities. Using hospital discharge as the IDRSA, our risk score model yielded excellent performance in predicting the likelihood of developing CDI in the subsequent 31–365 days (C-statistic of 0.848). Using a random date as the IDRSA, our model also predicted CDI risk in the subsequent 31–365 days reasonably well (C-statistic 0.722). Conclusions These results can be used to identify high-risk populations for enrollment in C. difficile vaccine trials and to facilitate study feasibility regarding sample size and time to completion.


2015 ◽  
Vol 36 (12) ◽  
pp. 1409-1416 ◽  
Author(s):  
Sara Y. Tartof ◽  
Gunter K. Rieg ◽  
Rong Wei ◽  
Hung Fu Tseng ◽  
Steven J. Jacobsen ◽  
...  

BACKGROUND Limitations in sample size, overly inclusive antibiotic classes, lack of adjustment of key risk variables, and inadequate assessment of cases contribute to widely ranging estimates of risk factors for Clostridium difficile infection (CDI). OBJECTIVE To incorporate all key CDI risk factors in addition to 27 antibiotic classes into a single comprehensive model. DESIGN Retrospective cohort study. SETTING Kaiser Permanente Southern California. PATIENTS Members of Kaiser Permanente Southern California at least 18 years old admitted to any of its 14 hospitals from January 1, 2011, through December 31, 2012. METHODS Hospital-acquired CDI cases were identified by polymerase chain reaction assay. Exposure to major outpatient antibiotics (10 classes) and those administered during inpatient stays (27 classes) was assessed. Age, sex, self-identified race/ethnicity, Charlson Comorbidity Score, previous hospitalization, transfer from a skilled nursing facility, number of different antibiotic classes, statin use, and proton pump inhibitor use were also assessed. Poisson regression estimated adjusted risk of CDI. RESULTS A total of 401,234 patients with 2,638 cases of incident CDI (0.7%) were detected. The final model demonstrated the highest CDI risk associated with increasing age, exposure to multiple antibiotic classes, and skilled nursing facility transfer. Factors conferring the most reduced CDI risk were inpatient exposure to tetracyclines and first-generation cephalosporins, and outpatient macrolides. CONCLUSIONS Although type and aggregate antibiotic exposure are important, the factors that increase the likelihood of environmental spore acquisition should not be underestimated. Operationally, our findings have implications for antibiotic stewardship efforts and can inform empirical and culture-driven treatment approaches. Infect. Control Hosp. Epidemiol. 2015;36(12):1409–1416
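The Poisson regression used above estimates adjusted rate ratios; before adjustment, the underlying quantity is a crude incidence rate ratio. A brief sketch of that baseline quantity (the numbers are toy values, not study data):

```python
def incidence_rate_ratio(cases_exposed, time_exposed,
                         cases_unexposed, time_unexposed):
    """Crude incidence rate ratio: events per unit person-time in the
    exposed group divided by the same rate in the unexposed group.
    A Poisson regression with a log person-time offset estimates this
    quantity adjusted for covariates (age, comorbidity, antibiotic class)."""
    return (cases_exposed / time_exposed) / (cases_unexposed / time_unexposed)

# Toy values: 20 cases per 1,000 patient-days exposed vs. 5 per 1,000 unexposed.
print(incidence_rate_ratio(20, 1000, 5, 1000))  # → 4.0
```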


2019 ◽  
Vol 6 (Supplement_2) ◽  
pp. S546-S546
Author(s):  
Abhishek Deshpande ◽  
Marya Zilberberg ◽  
Pei-Chun Yu ◽  
Peter Imrey ◽  
Michael Rothberg

Abstract Background Patients with community-acquired pneumonia (CAP) are often prescribed broad-spectrum antibiotics, putting them at risk for developing Clostridium difficile infection (CDI). Previous studies of risk factors for CDI in this population have suffered from small sample sizes. We examined the risk factors for CDI in patients hospitalized with CAP using a large US database. Methods We included adult patients admitted with CAP from 2010 to 2015 to 175 US hospitals participating in Premier and providing administrative and microbiological data. Patients were identified as having CAP if they had a diagnosis of pneumonia, a chest radiograph, and were treated with antimicrobials on day 1 and for ≥3 days. Incident CDI was identified with an ICD-9 diagnosis code (not present on admission) and a positive laboratory test. We used descriptive statistics and mixed multiple logistic regression modeling to mutually adjust and evaluate risk factors previously suggested in the CDI literature. Results Among 148,417 inpatients with pneumonia treated with antibiotics, 789 (0.53%) developed CDI. The median age was 75 years, and 53% were female. Compared with patients with no CDI, those with CDI were older (75 vs. 72 years), had more comorbidities (5 vs. 3), and were more likely to be admitted from a skilled nursing facility (SNF) (15.7% vs. 7.3%) or hospitalized in the past 3 months (11.8% vs. 7.1%) (all comparisons P < 0.001). After multivariable adjustment, factors significantly associated with development of CDI included increasing age, admission from a skilled nursing facility, and receipt of piperacillin/tazobactam, aztreonam, or intravenous vancomycin (Figure 1). Receipt of third-generation cephalosporins or fluoroquinolones was not an independent predictor of CDI. Conclusion In a large US inpatient sample hospitalized for pneumonia and treated with antimicrobials, only 0.53% of the patients developed CDI as defined by an ICD-9 code and positive laboratory test.
Reducing exposure to healthcare facilities and certain high-risk antibiotics may reduce the burden of CDI in patients with CAP. Disclosures All authors: No reported disclosures.


2017 ◽  
Vol 145 (9) ◽  
pp. 1805-1814 ◽  
Author(s):  
X.-M. WANG ◽  
S.-H. YIN ◽  
J. DU ◽  
M.-L. DU ◽  
P.-Y. WANG ◽  
...  

SUMMARY Retreatment of tuberculosis (TB) often fails in China, yet the risk factors associated with the failure remain unclear. To identify risk factors for the treatment failure of retreated pulmonary tuberculosis (PTB) patients, we analyzed the data of 395 retreated PTB patients who received retreatment between July 2009 and July 2011 in China. PTB patients were categorized into ‘success’ and ‘failure’ groups by their treatment outcome. Univariable and multivariable logistic regression were used to evaluate the association between treatment outcome and socio-demographic as well as clinical factors. We also created an optimized risk score model to evaluate the predictive values of these risk factors on treatment failure. Of 395 patients, 99 (25·1%) were diagnosed as retreatment failure. Our results showed that risk factors associated with treatment failure included drug resistance, low education level, low body mass index (<18·5), long duration of previous treatment (>6 months), standard treatment regimen, retreatment type, positive culture result after 2 months of treatment, and the place where the first medicine was taken. An optimized Framingham risk model was then used to calculate the risk scores of these factors. The place where the first medicine was taken (temporary living places) received a score of 6, the highest among all the factors. The predicted probability of treatment failure increases as the risk score increases. Ten out of 359 patients had a risk score >9, which corresponded to an estimated probability of treatment failure >70%. In conclusion, we have identified multiple clinical and socio-demographic factors that are associated with treatment failure of retreated PTB patients. We also created an optimized risk score model that was effective in predicting retreatment failure. These results provide novel insights for the prognosis and improvement of treatment for retreated PTB patients.
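Framingham-style scoring, as used above, typically converts logistic-regression coefficients into integer points by dividing each coefficient by a chosen reference increment and rounding. A sketch of that step (the coefficients and the 0.35 reference increment below are hypothetical, chosen only so that the highest-risk factor lands on the reported score of 6; they are not taken from the paper):

```python
def coefficients_to_points(coefs, reference_increment):
    """Framingham-style point assignment: scale each logistic
    coefficient by a reference increment and round to whole points."""
    return {name: round(beta / reference_increment)
            for name, beta in coefs.items()}

# Hypothetical coefficients for illustration only.
coefs = {
    "drug_resistance": 1.4,
    "low_body_mass_index": 0.7,
    "positive_2_month_culture": 1.1,
    "temporary_living_place": 2.1,
}
points = coefficients_to_points(coefs, reference_increment=0.35)
print(points["temporary_living_place"])  # → 6
```

Summing a patient's points then yields the total risk score whose predicted failure probability rises monotonically, as described above.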


2009 ◽  
Vol 37 (3) ◽  
pp. 392-398 ◽  
Author(s):  
D. A. Story ◽  
M. Fink ◽  
K. Leslie ◽  
P. S. Myles ◽  
S.-J. Yap ◽  
...  

We developed a risk score for 30-day postoperative mortality: the Perioperative Mortality risk score. We used a derivation cohort from a previous study of surgical patients aged 70 years or more at three large metropolitan teaching hospitals, using the significant risk factors for 30-day mortality from multivariate analysis. We summed the risk score for each of six factors creating an overall Perioperative Mortality score. We included 1012 patients and the 30-day mortality was 6%. The three preoperative factors and risk scores were (“three A's”): 1) age, years: 70 to 79=1, 80 to 89=3, 90+=6; 2) ASA physical status: ASA I or II=0, ASA III=3, ASA IV=6, ASA V=15; and 3) preoperative albumin <30 g/l=2.5. The three postoperative factors and risk scores were (“three I's”) 1) unplanned intensive care unit admission =4.0; 2) systemic inflammation =3; and 3) acute renal impairment=2.5. Scores and mortality were: <5=1%, 5 to 9.5=7% and ≥10=26%. We also used a preliminary validation cohort of 256 patients from a regional hospital. The area under the receiver operating characteristic curve (C-statistic) for the derivation cohort was 0.80 (95% CI 0.74 to 0.86) similar to the validation C-statistic: 0.79 (95% CI 0.70 to 0.88), P=0.88. The Hosmer-Lemeshow test (P=0.35) indicated good calibration in the validation cohort. The Perioperative Mortality score is straightforward and may assist progressive risk assessment and management during the perioperative period. Risk associated with surgical complexity and urgency could be added to this baseline patient factor Perioperative Mortality score.
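The point values above are explicit enough to transcribe directly into a small calculator. A sketch of the Perioperative Mortality score as described (the function name and interface are my own; the point values and risk bands are those reported in the abstract):

```python
# Points for ASA physical status I-V, as reported ("three A's").
ASA_POINTS = {1: 0, 2: 0, 3: 3, 4: 6, 5: 15}

def perioperative_mortality_score(age, asa_class, albumin_g_per_l,
                                  unplanned_icu, systemic_inflammation,
                                  acute_renal_impairment):
    """Sum the six reported factors: three preoperative 'A's
    (age, ASA status, albumin) and three postoperative 'I's
    (unplanned ICU admission, inflammation, renal impairment)."""
    score = 0.0
    if age >= 90:
        score += 6
    elif age >= 80:
        score += 3
    elif age >= 70:
        score += 1
    score += ASA_POINTS[asa_class]
    if albumin_g_per_l < 30:
        score += 2.5
    if unplanned_icu:
        score += 4.0
    if systemic_inflammation:
        score += 3.0
    if acute_renal_impairment:
        score += 2.5
    # Reported derivation-cohort 30-day mortality by band:
    # <5 → 1%, 5 to 9.5 → 7%, ≥10 → 26%.
    band = "<5" if score < 5 else ("5-9.5" if score < 10 else ">=10")
    return score, band

# An 82-year-old, ASA III, albumin 28 g/l, with postoperative inflammation.
print(perioperative_mortality_score(82, 3, 28, False, True, False))
# → (11.5, '>=10')
```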


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Kai Saito ◽  
Hitoshi Sugawara ◽  
Tamami Watanabe ◽  
Akira Ishii ◽  
Takahiko Fukuchi

Abstract Risk factors associated with 72-h mortality in patients with extremely high serum aspartate aminotransferase (AST) levels (≥ 3000 U/L) are unknown. This single-centre, retrospective, case-controlled, cross-sectional study obtained data from medical records of adult patients treated at Saitama Medical Center, Japan, from 2005 to 2019. We conducted a multivariate logistic regression analysis adjusting for age, sex, height, weight, body mass index, Brinkman Index, vital signs, biochemical values, updated Charlson Comorbidity Index (CCI) score, CCI components, and underlying causes. A logistic regression model was established using risk factors selected for validity and a higher C-statistic for predicting 72-h mortality. During the 15-year period, 428 patients (133 non-survivors and 295 survivors [cases and controls by survival < 72 and ≥ 72 h, respectively]) with AST levels ≥ 3000 U/L were identified. The 72-h mortality rate was 133/428 (31.1%). The model used for predicting 72-h mortality through the assessment of alkaline phosphatase, creatine kinase, serum sodium, potassium, and phosphorus levels had a C-statistic value of 0.852 (sensitivity and specificity, 76.6%). The main independent risk factors associated with 72-h mortality among patients with AST levels ≥ 3000 U/L included higher serum values of alkaline phosphatase, creatine kinase, sodium, potassium, and phosphorus.


2021 ◽  
Vol 12 ◽  
pp. 215013272110185
Author(s):  
Sanjeev Nanda ◽  
Audry S. Chacin Suarez ◽  
Loren Toussaint ◽  
Ann Vincent ◽  
Karen M. Fischer ◽  
...  

Purpose The purpose of the present study was to investigate body mass index, multi-morbidity, and COVID-19 Risk Score as predictors of severe COVID-19 outcomes. Patients Patients from this study are from a well-characterized patient cohort collected at Mayo Clinic between January 1, 2020 and May 23, 2020, with confirmed COVID-19 diagnosis defined as a positive result on reverse-transcriptase-polymerase-chain-reaction (RT-PCR) assays from nasopharyngeal swab specimens. Measures Demographic and clinical data were extracted from the electronic medical record. The data included: date of birth, gender, ethnicity, race, marital status, medications (active COVID-19 agents), weight and height (from which the body mass index (BMI) was calculated), history of smoking, and comorbid conditions used to calculate the Charlson Comorbidity Index (CCI) and the U.S. Department of Health and Human Services (DHHS) multi-morbidity score. An additional COVID-19 Risk Score was also included. Outcomes included hospital admission, ICU admission, and death. Results Cox proportional hazards models were used to determine the impact on mortality or hospital admission. Age, sex, and race (white/Latino, white/non-Latino, other, did not disclose) were adjusted for in the model. Patients with higher COVID-19 Risk Scores had a significantly higher likelihood of being at least admitted to the hospital (HR = 1.80; 95% CI = 1.30, 2.50; P < .001), or experiencing death or inpatient admission (including ICU admissions) (HR = 1.20; 95% CI = 1.02, 1.42; P = .028). Age was the only statistically significant demographic predictor; obesity was not a significant predictor of any of the outcomes. Conclusion Age and COVID-19 Risk Scores were significant predictors of severe COVID-19 outcomes. Further work should examine the properties of the COVID-19 Risk Factors Scale.


2021 ◽  
pp. 1-14
Author(s):  
Magdalena I. Tolea ◽  
Jaeyeong Heo ◽  
Stephanie Chrisphonte ◽  
James E. Galvin

Background: Although the Cardiovascular Risk Factors, Aging, and Dementia (CAIDE) score is an efficacious dementia-risk scoring system, it was derived using midlife risk factors in a population with low educational attainment that does not reflect today’s US population, and it requires laboratory biomarkers, which are not always available. Objective: Develop and validate a modified CAIDE (mCAIDE) system and test its ability to predict presence, severity, and etiology of cognitive impairment in older adults. Methods: The population consisted of 449 participants in dementia research (N = 230; community sample; 67.9±10.0 years old, 29.6% male, 13.7±4.1 years education) or receiving dementia clinical services (N = 219; clinical sample; 74.3±9.8 years old, 50.2% male, 15.5±2.6 years education). The mCAIDE, which includes self-reported and performance-based rather than blood-derived measures, was developed in the community sample and tested in the independent clinical sample. Validity against the Framingham, Hachinski, and CAIDE risk scores was assessed. Results: Higher mCAIDE quartiles were associated with lower performance on global and domain-specific cognitive tests. Each one-point increase in mCAIDE increased the odds of mild cognitive impairment (MCI) by up to 65%, those of AD by 69%, and those of non-AD dementia by >85%, with the highest scores in cases with vascular etiologies. Being in the highest mCAIDE risk group improved the ability to discriminate dementia from MCI and controls and MCI from controls, with a cut-off of ≥7 points offering the highest sensitivity, specificity, and positive and negative predictive values. Conclusion: The mCAIDE is a robust indicator of cognitive impairment in community-dwelling seniors that can discriminate well between levels of severity, including MCI versus controls.
The mCAIDE may be a valuable tool for case ascertainment in research studies, helping to flag primary care patients for cognitive testing and to identify those in need of lifestyle interventions for symptomatic control.
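Per-point odds ratios from a logistic model, like the up-to-65% increase per mCAIDE point reported above, compound multiplicatively across points. A brief sketch (the 1.65 odds ratio is the abstract's upper-bound figure for MCI; the function name is illustrative):

```python
def cumulative_odds_ratio(or_per_point, points):
    """Odds ratios from a logistic model multiply across unit
    increases, so a per-point OR compounds geometrically over
    a multi-point difference in the score."""
    return or_per_point ** points

# A 3-point difference at an OR of 1.65 per point.
print(round(cumulative_odds_ratio(1.65, 3), 2))  # → 4.49
```

This is why even a modest per-point odds ratio separates the extreme quartiles of a 0-to-high-teens score so sharply.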


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Xin Hui Choo ◽  
Chee Wai Ku ◽  
Yin Bun Cheung ◽  
Keith M. Godfrey ◽  
Yap-Seng Chong ◽  
...  

Abstract Spontaneous miscarriage is one of the most common complications of pregnancy. Even though some risk factors are well documented, there is a paucity of risk scoring tools for the preconception period. In the S-PRESTO cohort study, Asian women attempting to conceive, aged 18–45 years, were recruited. Multivariable logistic regression model coefficients were used to determine risk estimates for age, ethnicity, history of pregnancy loss, body mass index, smoking status, alcohol intake, and dietary supplement intake; from these we derived a risk score ranging from 0 to 17. The outcome was miscarriage before 16 weeks of gestation, determined clinically or via ultrasound. Among 465 included women, 59 had miscarriages and 406 had pregnancy ≥ 16 weeks of gestation. Higher rates of miscarriage were observed at higher risk scores (5.3% at score ≤ 3, 17.0% at score 4–6, 40.0% at score 7–8 and 46.2% at score ≥ 9). Women with scores ≤ 3 were defined as low-risk level (< 10% miscarriage); scores 4–6 as intermediate-risk level (10% to < 40% miscarriage); scores ≥ 7 as high-risk level (≥ 40% miscarriage). The risk score yielded an area under the receiver-operating-characteristic curve of 0.74 (95% confidence interval 0.67, 0.81; p < 0.001). This novel scoring tool allows women to self-evaluate their miscarriage risk level, which facilitates lifestyle changes to optimize modifiable risk factors in the preconception period and reduces the risk of spontaneous miscarriage.
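The score-to-risk-level mapping defined above is a simple threshold rule. A sketch transcribing the reported bands (the function name is illustrative; the thresholds and band labels are those stated in the abstract):

```python
def miscarriage_risk_level(score):
    """Map the 0-17 preconception risk score to the reported bands:
    <=3 low (<10% miscarriage), 4-6 intermediate (10% to <40%),
    >=7 high (>=40%)."""
    if score <= 3:
        return "low"
    if score <= 6:
        return "intermediate"
    return "high"

for s in (2, 5, 8):
    print(s, miscarriage_risk_level(s))
```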

