Assessing Medical Evidence

Author(s): Jacob Stegenga

Medical scientists employ ‘quality assessment tools’ to assess evidence from medical research, especially from randomized trials. These tools are designed to take into account methodological details of studies, including randomization, subject allocation concealment, and other features of studies deemed relevant to minimizing bias. There are dozens of such tools available. They differ widely from each other, and empirical studies show that they have low inter-rater reliability and low inter-tool reliability. This is an instance of a more general problem called here the underdetermination of evidential significance. Disagreements about the quality of evidence can be due to different—but in principle equally good—weightings of the methodological features that constitute quality assessment tools. Thus, the malleability of empirical research in medicine is deep: in addition to the malleability of first-order empirical methods, such as randomized trials, there is malleability in the tools used to evaluate first-order methods.
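The underdetermination point can be made concrete with a toy calculation. The sketch below is purely illustrative (the trials, feature scores, and weights are invented, not drawn from any real tool): two weightings of the same methodological features, neither obviously better than the other, rank the same pair of trials in opposite orders.

```python
# Hypothetical illustration of the underdetermination of evidential
# significance: two defensible weightings of the same methodological
# features yield opposite quality rankings for the same pair of trials.

# Invented feature scores for two hypothetical trials (0 = poor, 1 = adequate).
trials = {
    "Trial A": {"randomization": 1.0, "allocation_concealment": 0.2, "blinding": 0.9},
    "Trial B": {"randomization": 0.5, "allocation_concealment": 0.9, "blinding": 0.4},
}

# Two quality assessment "tools": same features, different (normalized) weights.
tool_1 = {"randomization": 0.6, "allocation_concealment": 0.2, "blinding": 0.2}
tool_2 = {"randomization": 0.2, "allocation_concealment": 0.6, "blinding": 0.2}

def quality_score(trial, weights):
    """Weighted sum of feature scores, as a generic scoring tool might compute."""
    return sum(weights[feature] * score for feature, score in trial.items())

for name, tool in [("tool_1", tool_1), ("tool_2", tool_2)]:
    ranked = sorted(trials, key=lambda t: quality_score(trials[t], tool), reverse=True)
    print(name, "ranks:", ranked)  # tool_1 puts Trial A first; tool_2 puts Trial B first
```

Because nothing in the evidence itself dictates whether randomization or allocation concealment deserves the larger weight, both rankings are, in principle, equally defensible.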

2002, Vol 30 (01), pp. 173-176
Author(s): Jianping Liu, Lise Lotte Kjaergard, Christian Gluud

The quality of randomization in Chinese randomized trials of herbal medicines for hepatitis B was assessed. The search strategy and inclusion criteria were based on the published protocol. One hundred and seventy-six randomized clinical trials (RCTs) involving 20,452 patients with chronic hepatitis B virus (HBV) infection were identified that tested Chinese medicinal herbs; they were published in 49 Chinese journals. Only 10% (18/176) of the studies reported the method by which patients were randomized. Only two reported allocation concealment and were considered adequate. Twenty percent (30/150) of the studies showed imbalance between the two treatment arms at the 0.05 probability level, and 13.3% (20/150) at the 0.01 level. The review suggests that the concept of randomization may be misunderstood and the procedure misused in some of these trials.
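The imbalance check described above can be illustrated with an exact binomial test: under true 1:1 randomization, the number of patients in one arm follows Binomial(n, 0.5), so a very lopsided split is evidence that randomization was not properly performed. This is a minimal sketch of that idea, not the authors' actual procedure, and the 65/35 split is a hypothetical example.

```python
from math import comb

def two_sided_balance_p(n_a, n_b):
    """Exact two-sided binomial test of arm-size balance under 1:1 randomization.

    Sums the probabilities of all splits at least as unlikely as the
    observed one, assuming group sizes follow Binomial(n, 0.5).
    """
    n = n_a + n_b
    total = 2 ** n
    p_obs = comb(n, n_a) / total
    return sum(comb(n, k) for k in range(n + 1) if comb(n, k) / total <= p_obs) / total

# Hypothetical trial of 100 patients split 65 vs 35 between arms.
p = two_sided_balance_p(65, 35)
print(f"p = {p:.4f}")  # a small p suggests the split is unlikely under true 1:1 randomization
```

A split of 52 vs 48, by contrast, gives a large p-value and raises no such suspicion.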


2021, Vol 108 (Supplement_6)
Author(s): G Brown, A Young, R Rymell

Abstract

Aim: MDT discussion is the gold standard for cancer care in the UK. With cancer incidence and the complexity of treatments both increasing, demand for MDT discussion is growing. The need for efficiency, whilst maintaining high standards, is therefore clear. Paper-based MDT quality assessment tools and discussion checklists may represent a practical method of monitoring and improving MDT practice. This review aims to describe and appraise these tools, as well as consider their value to quality improvement.

Method: MEDLINE, Embase and PsycInfo were searched using pre-defined terms. PRISMA methodology was followed throughout. Studies were included if they described the development of a relevant tool/checklist, or if an element of their methodology further informed tool quality assessment. To investigate efficacy, studies using a tool as a method of quality improvement in MDT practice were also included. Study quality was appraised using the COSMIN risk of bias checklist or the Newcastle-Ottawa scale, depending on study type.

Results: The search returned 6,888 results. Seventeen studies were included, and six different tools were identified. Overall, methodological quality in tool development was adequate to very good for the assessed aspects of validity and reliability. Clinician feedback was positive. In one study, the introduction of a discussion checklist improved the MDT's ability to reach a decision from 82.2% to 92.7%. Improvement was also noted in the quality of information presented and the quality of teamwork.

Conclusions: Several tools for assessing and guiding MDT discussions are available. Although limited, current evidence indicates sufficient rigour in their development and their potential for quality improvement.


Author(s): Claudio Luchini, Nicola Veronese, Alessia Nottegar, Jae Il Shin, Giovanni Gentile, ...

2021, Vol 21 (1)
Author(s): Linh Tran, Dao Ngoc Hien Tam, Abdelrahman Elshafay, Thao Dang, Kenji Hirayama, ...

Abstract

Background: Systematic reviews (SRs) and meta-analyses (MAs) are commonly conducted to evaluate and summarize the medical literature. This is especially useful in assessing in vitro studies for consistency. Our study aims to systematically review all available quality assessment (QA) tools employed in SRs/MAs of in vitro studies.

Method: A search of four databases, including PubMed, Scopus, Virtual Health Library and Web of Science, was conducted for the period 2006 to 2020. The available SRs/MAs of in vitro studies were evaluated. The DARE tool was applied to assess the risk of bias of the included articles. Our protocol was developed and uploaded to ResearchGate in June 2016.

Results: Our findings show an increasing trend in the publication of in vitro SRs/MAs from 2007 to 2020. Among the 244 included SRs/MAs, 126 articles (51.6%) had conducted a QA procedure. Overall, 51 QA tools were identified; 26 of them (51%) were developed by the authors specifically for their review, whereas 25 (49%) were pre-constructed tools. SRs/MAs in dentistry frequently used a QA tool developed by the authors themselves, while SRs/MAs on other topics applied a variety of QA tools. Many of the pre-structured tools in these in vitro SRs/MAs were modified from QA tools for in vivo or clinical studies and therefore varied in their criteria.

Conclusion: Many different QA tools currently exist in the literature; however, none covers all critical aspects of in vitro SRs/MAs. A comprehensive guideline is needed to ensure the quality of in vitro SRs/MAs.


2021
Author(s): Elizabeth A. Terhune, Patricia C Heyn, Christi R Piper, Nancy Hadley-Miller

Abstract

Background: Adolescent idiopathic scoliosis (AIS) is a structural lateral spinal curvature of ≥ 10° with rotation. Approximately 2–3% of children in most populations are affected by AIS, and this condition is responsible for approximately $1.1 billion in surgical costs to the U.S. healthcare system. Although a genetic factor for AIS has been demonstrated for decades, with multiple loci identified across populations, treatment options have remained limited to bracing and surgery.

Methods: The databases MEDLINE (via PubMed), Embase, Google Scholar, and Ovid MEDLINE will be searched, limited to articles in English. We will conduct title and abstract, full-text, and data extraction screening through Covidence, followed by data transfer to a custom REDCap database. Quality assessment will be confirmed by multiple reviewers. Studies containing variant-level data (i.e., GWAS, exome sequencing) for AIS subjects and controls will be considered. Outcomes of interest will include presence/absence of AIS, scoliosis curve severity, scoliosis curve progression, and presence/absence of nucleotide-level variants. Analyses will include odds ratios and relative risk assessments, and subgroup analyses (e.g., males vs. females, age groups) may be applied. Quality assessment tools will include GRADE and Q-Genie for genetic studies.

Discussion: In this systematic review we seek to evaluate the quality of genetic evidence for AIS to better inform research efforts and ultimately improve the quality of patient care and diagnosis.

Systematic review registration: PROSPERO registration #CRD42021243253
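The odds-ratio and relative-risk analyses the protocol mentions reduce to simple arithmetic on a 2x2 table of variant status against disease status. The sketch below uses invented counts for a hypothetical variant, purely to show the two calculations; it is not data from the review.

```python
# Odds ratio and relative risk from a 2x2 table.
# Rows: variant carriers / non-carriers; columns: AIS cases / controls.
# All counts below are hypothetical, for illustration only.

def odds_ratio(a, b, c, d):
    """OR = (a/b) / (c/d) = (a*d) / (b*c), where
    a = exposed cases, b = exposed controls,
    c = unexposed cases, d = unexposed controls."""
    return (a * d) / (b * c)

def relative_risk(a, b, c, d):
    """RR = risk among exposed / risk among unexposed
         = (a / (a+b)) / (c / (c+d))."""
    return (a / (a + b)) / (c / (c + d))

# Hypothetical counts: carriers 30 cases / 70 controls,
# non-carriers 20 cases / 180 controls.
or_ = odds_ratio(30, 70, 20, 180)    # (30*180)/(70*20) = 27/7, about 3.86
rr = relative_risk(30, 70, 20, 180)  # (30/100)/(20/200) = 3
print(f"OR = {or_:.2f}, RR = {rr:.2f}")
```

In a case-control design only the odds ratio is directly estimable; the relative risk additionally requires cohort-style sampling, which is one reason protocols list both.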


2009, Vol 25 (4), pp. 479-484
Author(s): Yurong Duan, Jing Li, Changlin Ai, Yaolong Chen, Peixian Chen, ...

Objectives: Clear, transparent, and sufficiently detailed abstracts of journal articles and conference abstracts are important because readers often base their assessment of a trial on such information. There are concerns over the reliability and quality of trials published only in the proceedings of scientific meetings. This study aims to assess the reporting quality of abstracts of randomized trials published in Chinese medical conference proceedings.

Methods: Conference abstracts reporting randomized trials included in the China National Knowledge Infrastructure (CNKI) in 2007 were identified. A revised checklist (based on the CONSORT extension for reporting randomized controlled trials in journal and conference abstracts) was used to assess the reporting quality of these conference abstracts.

Results: A total of 567 conference abstracts of randomized trials were identified. Some aspects were well reported: 94 percent gave the authors' contact details, 83 percent described the trial interventions and 78 percent the control interventions, 62 percent reported participant eligibility criteria, and 66 percent reported the number of participants randomized to each group. Other areas were very poorly reported: only 1 percent identified the study as randomized in the abstract title, 2 percent reported the trial design, and only 7 percent reported on blinding. No details of allocation concealment, trial registration, or funding were reported.

Conclusion: The information given for trials in Chinese conference proceedings is very poor, especially regarding methodological quality, trial registration, and funding source. The quality of conference abstracts for trials should be improved to facilitate understanding of their conduct and validity.


2016, Vol 32 (5), pp. 362-369
Author(s): Anna Mae Scott, Kenneth Bond, Iñaki Gutiérrez-Ibarluzea, Björn Hofmann, Lars Sandman

Objectives: Although consideration of ethical issues is recognized as a crucial part of health technology assessment (HTA), ethics analysis for HTA is generally perceived as methodologically underdeveloped in comparison with other HTA domains. The aims of our study are (i) to identify existing tools for quality assessment of ethics analyses for HTA, (ii) to consider arguments for and against the need for such quality assessment tools, and (iii) to propose a preliminary set of criteria that could be used for assessing the quality of ethics analyses for HTA.

Methods: We systematically reviewed the literature, reviewed HTA organizations' Web sites, and solicited views from thirty-two experts in the field of ethics for HTA.

Results: The database and HTA agency Web site searches yielded 420 references (413 from databases, seven from HTA Web sites). No formal instruments for assessing the quality of ethics analyses for HTA purposes were identified. Thirty-two experts in the field of ethics for HTA from ten countries, brought together at two workshops held in Edmonton (Canada) and Cologne (Germany), confirmed the findings from the literature.

Conclusions: Generating a quality assessment tool for ethics analyses in HTA would confer considerable benefits, including methodological alignment with other areas of HTA, increased transparency and transferability of ethics analyses, and provision of a common language between the various participants in the HTA process. We propose key characteristics of quality assessment tools for this purpose, which can be applied to ethics analyses for HTA.


2021
Author(s): George T F Brown, Hilary L Bekker, Alastair L Young

Abstract

Background: MDT discussion is the gold standard for cancer care in the UK. With the incidence of cancer on the rise, demand for MDT discussion is increasing. The need for efficiency, whilst maintaining high standards, is therefore clear. Paper-based MDT quality assessment tools and discussion checklists may represent a practical method of monitoring and improving MDT practice. This review aims to describe and appraise these tools, as well as consider their value to quality improvement.

Methods: Medline, EMBASE and PsycInfo were searched using pre-defined terms. The PRISMA model was followed throughout. Studies were included if they described the development of a relevant tool, or if an element of their methodology further informed tool quality assessment. To investigate efficacy, studies using a tool as a method of quality improvement in MDT practice were also included. Study quality was appraised using the COSMIN risk of bias checklist or the Newcastle-Ottawa scale, depending on study type.

Results: The search returned 6,888 results. Seventeen studies were included, and six tools were identified in total. Overall, methodological quality in tool development was adequate to very good for the assessed aspects of validity and reliability. Clinician feedback was positive. In one study, the introduction of a discussion checklist improved the MDT's ability to reach a decision from 82.2% to 92.7%. Improvement was also noted in the quality of information presented and the quality of teamwork.

Conclusions: Several tools for assessing and guiding MDTs are available. Although limited, current evidence indicates sufficient rigour in their development and their potential for quality improvement.


2016, Vol 11 (2), p. 149
Author(s): Michelle Maden, Eleanor Kotas

Objective – Systematic reviews are becoming increasingly popular within the Library and Information Science (LIS) domain. This paper has three aims: to review approaches to quality assessment in published LIS systematic reviews in order to assess whether and how LIS reviewers report on quality assessment a priori in systematic reviews, to model the different quality assessment aids used by LIS reviewers, and to explore if and how LIS reviewers report on and incorporate the quality of included studies into the systematic review analysis and conclusions.

Methods – The authors undertook a methodological study of published LIS systematic reviews using a known cohort of published systematic reviews of LIS-related research. Studies were included if they were reported as a "systematic review" in the title, abstract, or methods section. Meta-analyses that did not incorporate a systematic review and studies in which the systematic review was not a main objective were excluded. Two reviewers independently assessed the studies. Data were extracted on the type of synthesis; whether quality assessment was planned and undertaken; the number of reviewers involved in assessing quality; the types of tools or criteria used to assess the quality of the included studies; how quality assessment was assessed and reported in the systematic review; and whether the quality of the included studies was considered in the analysis and conclusions of the review. In order to determine the quality of the reporting and incorporation of quality assessment in LIS systematic reviews, each study was assessed against criteria relating to quality assessment in the PRISMA reporting guidelines for systematic reviews and meta-analyses (Moher, Liberati, Tetzlaff, Altman, & The PRISMA Group, 2009) and the AMSTAR tool (Shea et al., 2007).

Results – Forty studies met the inclusion criteria. The results demonstrate great variation in the breadth, depth, and transparency of the quality assessment process in LIS systematic reviews. Nearly one third of the LIS systematic reviews included in this study did not report on quality assessment in the methods, and fewer than one quarter adequately incorporated quality assessment in the analysis, conclusions, and recommendations. Only nine of the 26 systematic reviews that undertook some form of quality assessment considered, in the analysis, conclusions, and recommendations, how the quality of the included studies affected the validity of the review findings. The large number of different quality assessment tools identified reflects not only the disparate nature of the LIS evidence base (Brettle, 2009) but also a lack of consensus on the criteria by which to assess the quality of LIS research.

Conclusion – Greater clarity, definition, and understanding of the methodology and the concept of "quality" in the systematic review process are required, not only from LIS reviewers but also from journal editors in accepting such studies for publication. Further research and guidance are needed on identifying the best tools and approaches for incorporating considerations of quality in LIS systematic reviews. LIS reviewers need to improve the robustness and transparency with which quality assessment is undertaken and reported in systematic reviews. Above all, LIS reviewers need to be explicit in coming to a conclusion on how the quality of the included studies may affect their review findings.

