Towards Mixed-Initiative Human–Robot Interaction: Assessment of Discriminative Physiological and Behavioral Features for Performance Prediction

Sensors ◽  
2020 ◽  
Vol 20 (1) ◽  
pp. 296 ◽  
Author(s):  
Caroline P. C. Chanel ◽  
Raphaëlle N. Roy ◽  
Frédéric Dehais ◽  
Nicolas Drougard

The design of human–robot interactions is a key challenge for optimizing operational performance. A promising approach is to consider mixed-initiative interactions in which the tasks and authority of each human and artificial agent are dynamically defined according to their current abilities. An important issue for the implementation of mixed-initiative systems is to monitor human performance in order to dynamically drive task allocation between human and artificial agents (i.e., robots). We therefore designed an experimental scenario involving missions in which participants had to cooperate with a robot to fight fires while facing hazards. Two levels of robot automation (manual vs. autonomous) were randomly manipulated to assess their impact on the participants’ performance across missions. Cardiac activity, eye-tracking data, and participants’ actions on the user interface were collected. The participants performed differently enough that we could identify high- and low-score mission groups, which also exhibited different behavioral, cardiac, and ocular patterns. More specifically, our findings indicated that the higher level of automation could be beneficial to low-scoring participants but detrimental to high-scoring ones, and vice versa. In addition, inter-subject single-trial classification results showed that the studied behavioral and physiological features were relevant for predicting mission performance. The highest average balanced accuracy (74%) was reached using the features extracted from all input devices. These results suggest that an adaptive HRI driving system aimed at maximizing performance could analyze such physiological and behavioral markers online to change the level of automation when relevant to the mission purpose.
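For context, balanced accuracy is the mean of per-class recall, and inter-subject single-trial evaluation typically holds out all trials of one participant per fold so the classifier is always tested on an unseen subject. The sketch below illustrates that evaluation scheme with scikit-learn on synthetic stand-in data; the feature layout, dimensions, and labels are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-in data: one row per mission (trial); columns play the role
# of cardiac, ocular, and interface-action features. Label 1 = high-score
# mission, 0 = low-score mission, weakly driven by the first feature.
X = rng.normal(size=(120, 6))
y = (X[:, 0] + rng.normal(scale=1.0, size=120) > 0).astype(int)
groups = np.repeat(np.arange(12), 10)  # 12 participants, 10 missions each

# Inter-subject single-trial evaluation: each fold holds out every trial of
# one participant, so training and test subjects never overlap.
clf = make_pipeline(StandardScaler(), LogisticRegression())
scores = []
for train, test in LeaveOneGroupOut().split(X, y, groups=groups):
    clf.fit(X[train], y[train])
    scores.append(balanced_accuracy_score(y[test], clf.predict(X[test])))

# Balanced accuracy averages per-class recall, so chance level stays at 0.5
# even when high- and low-score missions are imbalanced.
print(f"mean balanced accuracy: {np.mean(scores):.2f}")
```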

2020 ◽  
Author(s):  
Agnieszka Wykowska ◽  
Jairo Pérez-Osorio ◽  
Stefan Kopp

This booklet is a collection of the position statements accepted for the HRI’20 conference workshop “Social Cognition for HRI: Exploring the relationship between mindreading and social attunement in human-robot interaction” (Wykowska, Pérez-Osorio & Kopp, 2020). Unfortunately, due to the rapid unfolding of the novel coronavirus pandemic at the beginning of 2020, the conference, and consequently our workshop, were canceled. In light of these events, we decided to put together the position statements accepted for the workshop. The contributions collected in these pages highlight the role of the attribution of mental states to artificial agents in human-robot interaction, and specifically the quality and presence of the social attunement mechanisms that are known to make human interaction smooth, efficient, and robust. These papers also accentuate the importance of a multidisciplinary approach to advancing the understanding of the factors and consequences of social interactions with artificial agents.


2007 ◽  
Vol 8 (3) ◽  
pp. 391-410 ◽  
Author(s):  
Justine Cassell ◽  
Andrea Tartaro

What is the hallmark of success in human–agent interaction? In animation and robotics, many have concentrated on the looks of the agent — whether the appearance is realistic or lifelike. We present an alternative benchmark that lies in the dyad and not the agent alone: Does the agent’s behavior evoke intersubjectivity from the user? That is, in both conscious and unconscious communication, do users react to behaviorally realistic agents in the same way they react to other humans? Do users appear to attribute similar thoughts and actions? We discuss why we distinguish between appearance and behavior, why we use the benchmark of intersubjectivity, our methodology for applying this benchmark to embodied conversational agents (ECAs), and why we believe this benchmark should be applied to human–robot interaction.


Technologies ◽  
2018 ◽  
Vol 6 (4) ◽  
pp. 119 ◽  
Author(s):  
Konstantinos Tsiakas ◽  
Maria Kyrarini ◽  
Vangelis Karkaletsis ◽  
Fillia Makedon ◽  
Oliver Korn

In this article, we present a taxonomy of Robot-Assisted Training, a growing body of research in Human–Robot Interaction that focuses on how robotic agents and devices can be used to enhance a user’s performance during a cognitive or physical training task. Robot-Assisted Training systems have been successfully deployed to enhance the effects of a training session in various contexts, e.g., rehabilitation systems, educational environments, and vocational settings. The proposed taxonomy suggests a set of categories and parameters that can be used to characterize such systems, considering the current research trends and the needs for the design, development, and evaluation of Robot-Assisted Training systems. To this end, we review recent works and applications in Robot-Assisted Training systems, as well as related taxonomies in Human–Robot Interaction. The goal is to identify and discuss open challenges, highlighting the different aspects of a Robot-Assisted Training system and considering both robot perception and behavior control.


Author(s):  
Mahdi Haghshenas-Jaryani ◽  
Muthu B. J. Wijesundara

This paper presents the development of a framework based on a quasi-statics concept for modeling and analyzing the physical human-robot interaction in soft robotic hand exoskeletons used for rehabilitation and human performance augmentation. The framework provides both forward and inverse quasi-static formulations of the interaction between a soft robotic digit and a human finger, which can be used to calculate angular motions, interaction forces, actuation torques, and stiffness at the human joints. This is achieved by decoupling the dynamics of the soft robotic digit and the human finger, with the same interaction forces acting on both sides. The presented theoretical models were validated by a series of numerical simulations based on a finite element model that replicates the same human-robot interaction. The comparison of the results obtained for the angular motion, interaction forces, and estimated stiffness at the joints indicates the accuracy and effectiveness of the quasi-static models for predicting the physical human-robot interaction.
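To give a flavor of forward and inverse quasi-static formulations, here is a minimal sketch of the equilibrium at a single finger joint, assuming a linear passive joint stiffness and a single contact force with a fixed moment arm; the function names and values are illustrative assumptions, not the paper's full model.

```python
# Quasi-static balance at one joint: at equilibrium, the actuation torque
# equals the passive stiffness moment plus the moment of the human-robot
# interaction force. All parameters below are illustrative assumptions.

def inverse_quasi_static(theta, theta_rest, k_joint, f_interaction, moment_arm):
    """Actuation torque (N*m) needed to hold the joint at angle theta (rad)."""
    stiffness_moment = k_joint * (theta - theta_rest)  # passive joint stiffness
    interaction_moment = f_interaction * moment_arm    # contact-force moment
    return stiffness_moment + interaction_moment

def forward_quasi_static(tau, theta_rest, k_joint, f_interaction, moment_arm):
    """Equilibrium joint angle (rad) reached under actuation torque tau."""
    return theta_rest + (tau - f_interaction * moment_arm) / k_joint

# Example: torque required to hold 0.6 rad against a 2 N fingertip force,
# then the angle recovered from that torque (round trip as a sanity check).
tau = inverse_quasi_static(theta=0.6, theta_rest=0.1, k_joint=0.25,
                           f_interaction=2.0, moment_arm=0.02)
theta = forward_quasi_static(tau, theta_rest=0.1, k_joint=0.25,
                             f_interaction=2.0, moment_arm=0.02)
print(f"torque: {tau:.3f} N*m, recovered angle: {theta:.2f} rad")
```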


AI Magazine ◽  
2015 ◽  
Vol 36 (3) ◽  
pp. 107-112 ◽  
Author(s):  
Adam B. Cohen ◽  
Sonia Chernova ◽  
James Giordano ◽  
Frank Guerin ◽  
Kris Hauser ◽  
...  

The AAAI 2014 Fall Symposium Series was held Thursday through Saturday, November 13–15, at the Westin Arlington Gateway in Arlington, Virginia, adjacent to Washington, DC. The titles of the seven symposia were Artificial Intelligence for Human-Robot Interaction; Energy Market Prediction; Expanding the Boundaries of Health Informatics Using AI; Knowledge, Skill, and Behavior Transfer in Autonomous Robots; Modeling Changing Perspectives: Reconceptualizing Sensorimotor Experiences; Natural Language Access to Big Data; and The Nature of Humans and Machines: A Multidisciplinary Discourse. The highlights of each symposium are presented in this report.


Author(s):  
Shan G. Lakhmani ◽  
Julia L. Wright ◽  
Michael R. Schwartz ◽  
Daniel Barber

Human-robot interaction requires communication; however, what form this communication should take to facilitate effective team performance remains undetermined. One notion is that effective human-agent communication can be achieved by combining transparent information-sharing techniques with specific communication patterns. This study examines how transparency and a robot’s communication patterns interact to affect human performance in a human-robot teaming task. Participants’ performance in a target identification task was affected by the robot’s communication pattern: participants missed more targets when working with a bidirectionally communicating robot than with a unidirectionally communicating one. Furthermore, working with a bidirectionally communicating robot led to fewer correct identifications than working with a unidirectionally communicating robot, but only when the robot provided less transparency information. The implications of these findings for future robot interface designs are discussed.


2019 ◽  
Author(s):  
Jairo Pérez-Osorio ◽  
Agnieszka Wykowska

In our daily lives, we need to predict and understand others’ behaviour in order to navigate through our social environment. Predictions concerning other humans’ behaviour usually refer to their mental states, such as beliefs or intentions. Such a predictive strategy is called adoption of the intentional stance. In this paper, we review literature related to the concept of the intentional stance from the perspectives of philosophy, psychology, human development, culture, and human-robot interaction. We propose that adopting the intentional stance might be a central factor in facilitating social attunement with artificial agents. The paper first reviews the theoretical considerations regarding the intentional stance and examines literature related to the development of the intentional stance across the life span. Subsequently, it discusses cultural norms as grounded in the intentional stance and, finally, focuses on the issue of adopting the intentional stance towards artificial agents, such as humanoid robots. At the dawn of the artificial intelligence era, the question of how (and when) we predict and explain robots’ behaviour by referring to mental states is of high interest. The paper concludes with a discussion of the ethical consequences of adopting the intentional stance towards robots, and sketches future directions in research on this topic.


2019 ◽  
Vol 16 (06) ◽  
pp. 1950028 ◽  
Author(s):  
Stefano Borgo ◽  
Enrico Blanzieri

Robots might not act according to human expectations if they cannot anticipate how people make sense of a situation and what behavior they consider appropriate in given circumstances. In many cases, understanding, expectations, and behavior are constrained, if not driven, by culture, and a robot that knows about human culture could improve the quality of human–robot interaction. Can we share human culture with a robot? Can we provide robots with formal representations of different cultures? In this paper, we discuss the (elusive) notion of culture and propose an approach based on the notion of trait which, we argue, permits us to build formal modules suitable for representing culture (broadly understood) in a robot architecture. We distinguish the types of traits that such modules should contain, namely behavior, knowledge, rule, and interpretation traits, and describe how they could be organized. We identify the interpretation process that maps situations to specific knowledge traits, called scenarios, as a key component of the trait-based culture module. Finally, we describe how culture modules can be integrated into an existing architecture, and discuss three use cases that exemplify the advantages of having a culture module in the robot architecture, highlighting surprising potentialities.
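To make the trait-based organization concrete, here is a schematic sketch of how such a culture module might be laid out in code, with interpretation traits mapping a perceived situation to a scenario (a knowledge trait). The class names and the greeting example are illustrative assumptions, not the authors' formalization.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Trait:
    """A culture trait; kind is one of: behavior, knowledge, rule, interpretation."""
    name: str
    kind: str

@dataclass
class Scenario:
    """A knowledge trait selected by interpreting the current situation."""
    name: str
    expected_behaviors: List[str]

@dataclass
class CultureModule:
    traits: List[Trait] = field(default_factory=list)
    # Interpretation traits: each maps a perceived situation to a scenario.
    interpreters: Dict[str, Callable[[dict], Scenario]] = field(default_factory=dict)

    def interpret(self, situation: dict) -> Scenario:
        """Key step of the trait-based module: situation -> scenario."""
        return self.interpreters[situation["type"]](situation)

# Illustrative use: a greeting situation resolved to a culture-specific scenario.
module = CultureModule(
    traits=[Trait("greeting_norms", "knowledge"), Trait("bow_on_formal", "rule")],
    interpreters={
        "greeting": lambda s: Scenario("formal_greeting", ["bow"])
        if s.get("formality") == "high"
        else Scenario("casual_greeting", ["wave"]),
    },
)
print(module.interpret({"type": "greeting", "formality": "high"}))
```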

