Understanding, Explaining, and Utilizing Medical Artificial Intelligence
Medical artificial intelligence is cost-effective, scalable, and often outperforms human providers. One important barrier to its adoption is the perception that algorithms are a “black box”: people do not subjectively understand how algorithms make medical decisions, and we find that this impairs their willingness to utilize algorithms. We argue that a second barrier is that people also overestimate how well they understand medical decisions made by human healthcare providers. In five pre-registered experiments with convenience and nationally representative samples (N = 2,699), we find that people exhibit such an illusory understanding of human medical decision making (Study 1). This leads people to claim greater understanding of decisions made by human than by algorithmic healthcare providers (Studies 2A-B), which in turn makes people more reluctant to utilize algorithmic providers (Studies 3A-B). Fortunately, we find that asking people to explain the mechanisms underlying medical decision making reduces this illusory gap in subjective understanding (Study 1). Moreover, we test brief interventions that, by increasing subjective understanding of algorithmic decision processes, increase willingness to utilize algorithmic healthcare providers without undermining utilization of human providers (Studies 3A-B). Corroborating these results, a study testing ads on Google for an algorithmic skin cancer detection app shows that interventions that increase subjective understanding of algorithmic decision processes lead to a higher ad click-through rate (Study 4). Our findings show how reluctance to utilize medical algorithms is driven both by the difficulty of understanding algorithms and by an illusory understanding of human medical decision making.