Advice Quality
Recently Published Documents

TOTAL DOCUMENTS: 25 (five years: 7)
H-INDEX: 7 (five years: 1)

2021 ◽  
Vol 4 (1) ◽  
Author(s):  
Susanne Gaube ◽  
Harini Suresh ◽  
Martina Raue ◽  
Alexander Merritt ◽  
Seth J. Berkowitz ◽  
...  

Abstract
Artificial intelligence (AI) models for decision support have been developed for clinical settings such as radiology, but little work has evaluated the potential impact of such systems. In this study, physicians received chest X-rays and diagnostic advice, some of which was inaccurate, and were asked to evaluate advice quality and make diagnoses. All advice was generated by human experts, but some was labeled as coming from an AI system. As a group, radiologists rated advice as lower quality when it appeared to come from an AI system; physicians with less task expertise did not. Diagnostic accuracy was significantly worse when participants received inaccurate advice, regardless of its purported source. This work raises important considerations for how advice, whether AI-generated or not, should be deployed in clinical environments.


2020 ◽  
Vol 14 (2) ◽  
pp. P31-P39
Author(s):  
Nicole S. Wright ◽  
Sudip Bhattacharjee

SUMMARY
When subject matter experts are consulted during an audit, the quality of an expert's advice depends on the expert's ability to fully understand and incorporate client-specific facts. PCAOB inspection reports suggest that auditors neglect to perform the required work to assess the quality of experts' recommendations. This article summarizes a recent study by Wright and Bhattacharjee (2018) examining how the quality of expert advice and the timing of the communication that makes auditors aware an expert will be used affect auditors' judgments. Auditors who knew in advance that an expert would be consulted put forth more effort before receiving the expert's advice, and agreed less with management's position, than auditors who did not know. Upon receiving the advice, aware auditors were also more discerning and accurate than unaware auditors, showing that the timing and communication of consulting decisions affect auditors' assessments of expert advice.


2020 ◽  
Vol 39 (3) ◽  
pp. 349-374
Author(s):  
Kasey A. Foley ◽  
Erina L. MacGeorge ◽  
David L. Brinker ◽  
Yuwei Li ◽  
Yanmengqian Zhou

Antibiotic-resistant infections, fueled by unwarranted antibiotic prescribing, are an increasing threat to public health. Reducing overprescribing and promoting antibiotic stewardship requires managing patients' expectations about, and understanding of, the utility of antibiotics. One hotspot for overprescribing is upper respiratory tract infections, for which the best treatment is often non-antibiotic symptom management. Guided by advice response theory, the current study examines how providers' reason-giving for symptom management advice affected perceptions of advice quality, efficacy for symptom monitoring and management, and satisfaction with care among patients who were not prescribed antibiotics for their upper respiratory tract infections. Transcribed medical visits were coded for symptom management advice reason-giving, and patients completed post-visit surveys. Greater provider elaboration on instructions was independently and positively associated with evaluations of advice quality. Results also indicate several significant interactions between types of reason-giving. Implications of these findings for advice theory and clinical practice are addressed in the discussion.


2020 ◽  
Vol 39 (3) ◽  
pp. 334-348
Author(s):  
Yining Zhou Malloch ◽  
Bo Feng ◽  
Bingqing Wang ◽  
Chelsea Kim

The integrated model of advice giving (IMA) proposes that advising in supportive interactions should be carried out in three sequential moves: emotional support, then problem inquiry and analysis, then advice (EPA). Prior research indicates the utility of this framework for effective advising in supportive interactions. The current project proposed and tested an extended integrated model of advice giving, adding esteem support (S) as a fourth move in the sequence. Two experiments were conducted. Study 1 included 371 participants recruited from Amazon Mechanical Turk. Results showed that the EPAS sequence (emotional support, problem inquiry and analysis, advice, esteem support) did not elicit significantly higher evaluations of advice quality than the EPA sequence or the EPSA sequence (esteem support delivered before the advice). Study 2 replicated Study 1 with 364 college students and found that, compared with the other two sequences, the EPAS sequence did not produce significantly higher evaluations of advice quality or intention to follow advice. Theoretical implications and directions for future research are discussed.


2020 ◽  
Vol 39 (3) ◽  
pp. 397-413 ◽  
Author(s):  
V. Skye Wingate ◽  
Bo Feng ◽  
Chelsea Kim ◽  
Wenjing Pan ◽  
JooYoung Jang

Advice of varying quality can be provided to support seekers online. This study examined whether the type of self-disclosure (demographic vs. self-concept) included in a support-seeking post elicits varying levels of advice quality in support provision. Participants (N = 624) read and responded to an online support-seeking post. Their advice messages were assessed for quality as indexed by the use of reasoning and the sequencing of advice relative to other elements of supportive interactions (emotional support and problem inquiry and analysis). Overall, results suggested that most advice messages were behavior-oriented and did not contain reasoning or additional supportive acts. The type of self-disclosure did not affect advice quality. Theoretical implications and directions for future research are discussed.


10.2196/13534 ◽  
2020 ◽  
Vol 22 (1) ◽  
pp. e13534
Author(s):  
Fatemeh Ameri ◽  
Kathleen Keeling ◽  
Reza Salehnejad

Background: Seeking health information on the internet is very popular, despite the debatable ability of lay users to evaluate information quality and the uneven quality of the information available on the Web. Consulting the internet for health information is pervasive, particularly when other sources are inaccessible because of time, distance, or money constraints, or when sensitive or embarrassing questions are to be explored. Question and answer (Q&A) platforms are Web-based services that provide personalized health advice at the information seeker's request. However, it is not clear how the quality of health advice is ensured on these platforms. Objective: The objective of this study was to identify how platform design affects the quality of Web-based health advice and equal access to health information on the internet. Methods: A total of 900 Q&As were collected from 9 Q&A platforms with different design features. Data on the design features of each platform were generated, and paid physicians evaluated the advice to quantify its quality. Guided by the literature, the design features that affected information quality were identified and recorded for each Q&A platform. The least absolute shrinkage and selection operator (LASSO) and unbiased regression tree methods were used for the analysis. Results: Q&A platform design and health advice quality were related. Expertise of information providers (beta=.48; P=.001), financial incentive (beta=.4; P=.001), external reputation (beta=.28; P=.002), and question quality (beta=.12; P=.001) best predicted health advice quality. Virtual incentives, Web 2.0 mechanisms, and reputation systems were not associated with health advice quality. Conclusions: Access to high-quality health advice on the internet is unequal, skewed toward high-income and high-literacy groups; however, there are ways to generate high-quality health advice for free.
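To make the analysis step concrete, here is a minimal sketch of the LASSO regression described in the Methods: physician-rated advice quality is regressed on standardized platform design features, and uninformative predictors are shrunk toward zero. The feature names and simulated data below are hypothetical stand-ins, not the study's actual dataset.

```python
# Minimal sketch of a LASSO analysis relating platform design
# features to rated advice quality (scikit-learn). Feature names
# and data are hypothetical, not the study's dataset.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

FEATURES = ["expertise", "financial_incentive", "external_reputation",
            "question_quality", "virtual_incentive", "web2_mechanisms",
            "reputation_system"]

# Hypothetical design-feature scores for 900 Q&A exchanges.
X = rng.random((900, len(FEATURES)))
# Hypothetical physician-rated quality, driven by the first four features.
y = (0.48 * X[:, 0] + 0.40 * X[:, 1] + 0.28 * X[:, 2] + 0.12 * X[:, 3]
     + rng.normal(scale=0.1, size=900))

# Standardize features so penalized coefficients are comparable.
X_std = StandardScaler().fit_transform(X)

# Cross-validated LASSO shrinks uninformative coefficients to zero,
# performing variable selection as part of the fit.
model = LassoCV(cv=5).fit(X_std, y)
for name, coef in zip(FEATURES, model.coef_):
    print(f"{name}: {coef:.2f}")
```

With standardized features, the surviving nonzero coefficients play the same role as the beta weights reported in the Results.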


2017 ◽  
Vol 1 (1) ◽  
pp. 21-42 ◽  
Author(s):  
Anestis Fachantidis ◽  
Matthew Taylor ◽  
Ioannis Vlahavas

In this article, we study the transfer learning model of action advice under a budget. We focus on reinforcement learning teachers providing action advice to heterogeneous students playing the game of Pac-Man under a limited advice budget. First, we examine several critical factors affecting advice quality in this setting, such as the teacher's average performance, its variance, and the importance of reward discounting in advising. The experiments show that the best performers are not always the best teachers, and they reveal the non-trivial importance of the coefficient of variation (CV), which relates the variance of a policy's returns to their mean, as a statistic for choosing advice-generating policies. Second, the article studies policy learning for distributing advice under a budget. Whereas most methods in the relevant literature rely on heuristics for advice distribution, we formulate the problem as a learning problem and propose a novel reinforcement learning algorithm capable of learning when to give or withhold advice. The proposed algorithm can advise even without knowledge of the student's intended action and needs significantly less training time than previous learning approaches. Finally, we argue that learning to advise under a budget is an instance of a more general learning problem: Constrained Exploitation Reinforcement Learning.
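As a concrete illustration of the CV statistic mentioned above, the following minimal sketch ranks candidate teacher policies by CV = standard deviation / mean of their episode returns, so that a consistent policy can be preferred over an erratic top performer. The return data are hypothetical, not the paper's Pac-Man results.

```python
# Minimal sketch: ranking candidate teacher policies by the
# coefficient of variation (CV = std / mean) of their episode
# returns. Return data here are hypothetical.
import numpy as np

def coefficient_of_variation(returns):
    """Relate the spread of a policy's returns to its mean return."""
    returns = np.asarray(returns, dtype=float)
    return returns.std(ddof=1) / returns.mean()

# Hypothetical episode returns for three candidate teachers.
teachers = {
    "teacher_a": [2100, 2300, 2250, 2180],  # high mean, low spread
    "teacher_b": [2600, 1200, 3100, 1500],  # highest mean, erratic
    "teacher_c": [1900, 1950, 1880, 1920],  # lower mean, very stable
}

# A lower CV marks a more consistent policy, which may give better
# advice than the policy with the best average performance.
for name, returns in sorted(teachers.items(),
                            key=lambda kv: coefficient_of_variation(kv[1])):
    print(f"{name}: mean={np.mean(returns):.0f}, "
          f"CV={coefficient_of_variation(returns):.3f}")
```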


2017 ◽  
Vol 84 (4) ◽  
pp. 488-509 ◽  
Author(s):  
Lisa M. Guntzviller ◽  
Erina L. MacGeorge ◽  
David L. Brinker