Abstract
Background
Use of virtual patient educational tools could fill the current gap in the teaching of clinical reasoning skills. However, there is limited understanding of their effectiveness. The aim of this study was to synthesise the evidence on the effectiveness of virtual patient tools designed to improve undergraduate medical students’ clinical reasoning skills.
Methods
We searched MEDLINE, EMBASE, CINAHL, ERIC, Scopus, Web of Science and PsycINFO from 1990 to October 2020 to identify all experimental articles testing the effectiveness of virtual patient educational tools on medical students’ clinical reasoning skills. The quality of the articles was assessed using an adapted form of the MERSQI and the Newcastle-Ottawa Scale. A narrative synthesis summarised intervention features, how the virtual patient tools were evaluated, and their reported effectiveness.
Results
The search identified 7,290 articles, of which 20 met the inclusion criteria. Average study quality was moderate (M=7.1, SD=2.5), and around a third of the studies (7/20, 35%) did not report any measurement of validity or reliability for their clinical reasoning outcome measure. Eleven articles (11/20, 55%) found a positive effect of virtual patient tools on reasoning, seven (7/20, 35%) reported no significant effect or mixed effects, and two (2/20, 10%) found a significant negative effect. Several domains of clinical reasoning were evaluated. Data gathering, ideas about diagnosis and patient management improved after virtual patient use more often (27/46 analyses, 59%) than knowledge, flexibility in thinking, problem-solving, and critical thinking (4/10 analyses, 40%).
Conclusions
There was some evidence that virtual patient educational tools can improve undergraduate medical students’ clinical reasoning skills, so these tools could effectively complement current teaching, especially where opportunities for face-to-face teaching or other methods are limited. Evaluations that measured more case-specific clinical reasoning domains, such as data gathering, showed more consistent improvement than general measures such as problem-solving. Case-specific measures may be more sensitive to change given the context-dependent nature of clinical reasoning. Consistent use of validated clinical reasoning measures is needed to enable a meta-analysis to estimate effectiveness.