Understanding Human Impressions of Artificial Intelligence
Artificial intelligence increasingly suffuses everyday life. However, people are frequently reluctant to interact with A.I. systems. This reluctance challenges both the deployment of beneficial A.I. technology and the development of deep learning systems that depend on humans for oversight, direction, and training. Previously neglected but fundamental social-cognitive processes guide human interactions with A.I. systems. In five behavioral studies (N = 3,099), warmth and competence feature prominently in participants’ impressions of artificially intelligent systems. Judgments of warmth and competence systematically depend on human-A.I. interdependence. In particular, participants perceive systems that optimize interests aligned with human interests as warmer, and systems that operate independently from human direction as more competent. Finally, a prisoner’s dilemma game shows that warmth and competence judgments predict participants’ willingness to cooperate with a deep learning system. These results demonstrate that intent detection generalizes to interactions with technological actors. Researchers and developers should carefully consider the degree and alignment of interdependence between humans and new artificial intelligence systems.
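To make the cooperation measure concrete, the incentive structure of a one-shot prisoner's dilemma can be sketched as below. This is an illustrative sketch only: the payoff values use the canonical ordering T > R > P > S (temptation, reward, punishment, sucker's payoff), not the actual stakes used in the study.

```python
# Illustrative one-shot prisoner's dilemma payoff matrix.
# Values are the canonical T=5, R=3, P=1, S=0 ordering (an assumption,
# not the paper's actual stakes). Tuples are (human payoff, A.I. payoff).
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),  # mutual cooperation (R, R)
    ("cooperate", "defect"):    (0, 5),  # sucker's payoff vs. temptation (S, T)
    ("defect",    "cooperate"): (5, 0),  # temptation vs. sucker's payoff (T, S)
    ("defect",    "defect"):    (1, 1),  # mutual defection (P, P)
}

def payoff(human_move: str, ai_move: str) -> tuple[int, int]:
    """Return the (human, A.I.) payoffs for one round."""
    return PAYOFFS[(human_move, ai_move)]
```

Defection is individually dominant here, so a participant's choice to cooperate serves as a behavioral index of trust in the A.I. partner.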