Individual Differences in Human-Robot Interaction in a Military Multitasking Environment

2011 ◽  
Vol 5 (1) ◽  
pp. 83-105 ◽  
Author(s):  
Jessie Y. C. Chen

A military vehicle crew station environment was simulated and a series of three experiments was conducted to examine the workload and performance of the combined position of the gunner and robotics operator in a multitasking environment. The study also evaluated whether aided target recognition (AiTR) capabilities (delivered through tactile and/or visual cuing) for the gunnery task might benefit the concurrent robotics and communication tasks and how the concurrent task performance might be affected when the AiTR was unreliable (i.e., false alarm prone or miss prone). Participants’ spatial ability was consistently found to be a reliable predictor of their targeting task performance as well as their modality preference for the AiTR display. Participants’ attentional control was found to significantly affect the way they interacted with unreliable automated systems.

2007 ◽  
Vol 8 (3) ◽  
pp. 391-410 ◽  
Author(s):  
Justine Cassell ◽  
Andrea Tartaro

What is the hallmark of success in human–agent interaction? In animation and robotics, many have concentrated on the looks of the agent — whether the appearance is realistic or lifelike. We present an alternative benchmark that lies in the dyad and not the agent alone: Does the agent’s behavior evoke intersubjectivity from the user? That is, in both conscious and unconscious communication, do users react to behaviorally realistic agents in the same way they react to other humans? Do users appear to attribute similar thoughts and actions? We discuss why we distinguish between appearance and behavior, why we use the benchmark of intersubjectivity, our methodology for applying this benchmark to embodied conversational agents (ECAs), and why we believe this benchmark should be applied to human–robot interaction.


Information ◽  
2020 ◽  
Vol 11 (2) ◽  
pp. 112 ◽
Author(s):  
Marit Hagens ◽  
Serge Thill

Perfect information about an environment allows a robot to plan its actions optimally, but often requires significant investment in sensors and possibly infrastructure. In applications relevant to human–robot interaction, the environment is by definition dynamic, and events close to the robot may be more relevant than distal ones. This suggests a non-trivial relationship between sensory sophistication on the one hand and task performance on the other. In this paper, we investigate this relationship in a simulated crowd navigation task. We use three different environments with unique characteristics that a crowd-navigating robot might encounter and explore how the robot’s sensor range correlates with performance in the navigation task. We find diminishing returns of increased range in our particular case, suggesting that task performance and sensory sophistication might follow non-trivial relationships and that increased sophistication on the sensor side does not necessarily yield a corresponding increase in performance. Although this result is a simple proof of concept, it illustrates the benefit of exploring the consequences of different hardware designs—rather than merely algorithmic choices—in simulation first. We also find surprisingly good performance in the navigation task, including a low number of collisions with simulated human agents, using a relatively simple A*/NavMesh-based navigation strategy, which suggests that navigation strategies for robots in crowds need not always be sophisticated.
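The planner named above can be illustrated with a minimal grid-based A* sketch. This is not the paper's implementation (which ran on a NavMesh in simulation); the occupancy-grid setup and all names below are illustrative assumptions:

```python
import heapq

def astar(grid, start, goal):
    """A* shortest path on a 4-connected occupancy grid.

    grid[r][c] == 0 means free, 1 means blocked; start/goal are (row, col).
    Returns the path as a list of cells, or None if unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start, [start])]  # (f = g + h, g, node, path so far)
    best_g = {}
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in best_g and best_g[node] <= g:
            continue  # already expanded via a cheaper route
        best_g[node] = g
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nxt = (nr, nc)
                heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None
```

In a crowd-navigation setting, cells occupied by (or near) simulated pedestrians within the robot's sensor range would be marked blocked before each replan, which is one simple way the sensor-range/performance trade-off can be probed.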


Author(s):  
Louise LePage

Abstract Stage plays, theories of theatre, narrative studies, and robotics research can serve to identify, explore, and interrogate theatrical elements that support the effective performance of sociable humanoid robots. Theatre, including its parts of performance, aesthetics, character, and genre, can also reveal features of human–robot interaction key to creating humanoid robots that are likeable rather than uncanny. In particular, this can be achieved by relating Mori's (1970/2012) concept of total appearance to realism. Realism is broader and more subtle in its workings than is generally recognised in its operationalization in studies that focus solely on appearance. For example, it is complicated by genre. A realistic character cast in a detective drama will convey different qualities and expectations than the same character in a dystopian drama or romantic comedy. The implications of realism and genre carry over into real life. As stage performances and robotics studies reveal, likeability depends on creating aesthetically coherent representations of character, where all the parts coalesce to produce a socially identifiable figure demonstrating predictable behaviour.


Author(s):  
Antonio Bicchi ◽  
Michele Bavaro ◽  
Gianluca Boccadamo ◽  
Davide De Carli ◽  
Roberto Filippini ◽  
...  

2018 ◽  
Vol 15 (4) ◽  
pp. 172988141877319 ◽  
Author(s):  
S M Mizanoor Rahman ◽  
Ryojun Ikeura

In the first step, a one-degree-of-freedom power-assist robotic system is developed for lifting lightweight objects. Dynamics for human–robot co-manipulation are derived that include human cognition, for example, weight perception. A novel admittance control scheme is derived using the weight perception–based dynamics. Human subjects lift a small, lightweight object with the power-assist robotic system, and the human–robot interaction and system characteristics are analyzed. A comprehensive scheme is developed to evaluate human–robot interaction and performance, and a constrained optimization algorithm is developed to determine the optimum human–robot interaction and performance. The results show that including weight perception in the control helps achieve optimum human–robot interaction and performance for a set of hard constraints. In the second step, the same optimization algorithm and control scheme are used for lifting a heavy object with a multi-degree-of-freedom power-assist robotic system. The results show that the human–robot interaction and performance for lifting the heavy object are not as good as those for lifting the lightweight object. Weight perception–based intelligent controls, in the forms of model predictive control and vision-based variable admittance control, are then applied for lifting the heavy object. The results show that the intelligent controls enhance human–robot interaction and performance, help achieve optimum human–robot interaction and performance for a set of soft constraints, and produce human–robot interaction and performance similar to those obtained for lifting the lightweight object. The human–robot interaction and performance for lifting the heavy object with power assist are treated as intuitive and natural because they are calibrated against those for lifting the lightweight object. The results also show that the variable admittance control outperforms the model predictive control.
We also propose a method to adjust the variable admittance control for three-degree-of-freedom translational manipulation of heavy objects based on human intent recognition. The results are useful for developing controls for human-friendly, high-performance power-assist robotic systems for heavy-object manipulation in industry.
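The core admittance idea can be sketched generically. This is not the authors' controller; the first-order admittance law, parameter names, and values below are illustrative assumptions. The point of power assist is that the controller renders an apparent (perceived) mass far smaller than the object's real mass, so a modest human force produces an easy lift:

```python
def admittance_step(v, f_human, mass, damping, dt):
    """One Euler step of a 1-DOF admittance law: m * dv/dt + d * v = f.

    The robot measures the human force and moves with the resulting velocity,
    so `mass` here is the *rendered* mass the human perceives, not the real one.
    """
    a = (f_human - damping * v) / mass
    return v + a * dt

def simulate_lift(f_human=10.0, perceived_mass=0.5, damping=2.0, dt=0.01, steps=1000):
    """Velocity response of the assisted object to a constant human force.

    Steady-state velocity approaches f_human / damping regardless of mass;
    a smaller rendered mass just makes the response feel lighter (faster).
    """
    v = 0.0
    for _ in range(steps):
        v = admittance_step(v, f_human, perceived_mass, damping, dt)
    return v
```

A weight perception–based variant would adapt the rendered mass and damping online, e.g. from a vision-based estimate of the object, which is the role the variable admittance control plays in the abstract above.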


AI & Society ◽  
2021 ◽  
Author(s):  
Núria Vallès-Peris ◽  
Miquel Domènech

Abstract In a scenario of growing polarization between the promises and dangers that surround artificial intelligence (AI), how can responsible AI and robotics be introduced in healthcare? In this paper, we develop an ethical–political approach that introduces democratic mechanisms into technological development, which we call “Caring in the In-Between”. Focusing on the multiple possibilities for action that emerge in the realm of uncertainty, we propose an ethical and responsible framework focused on care actions in between fears and hopes. Using the theoretical perspective of Science and Technology Studies and empirical research, “Caring in the In-Between” is based on three movements: the first is a change of focus from the world of promises and dangers to the world of uncertainties; the second is a conceptual shift from a relationship with robotics framed as Human–Robot Interaction to one focused on the network in which the robot is embedded (the “Robot Embedded in a Network”); and the last is an ethical shift from a general normative framework to a discussion of the context of use. Based on these suggestions, “Caring in the In-Between” implies institutional challenges as well as new practices in healthcare systems. It is articulated around three simultaneous processes, each related to practical actions in the “in-between” dimensions considered: monitoring relations and caring processes, through public engagement and institutional changes; including the concerns and priorities of stakeholders, through participatory processes and alternative forms of representation; and making fears and hopes commensurable, through the choice of progressive and reversible actions.


2021 ◽  
Vol 12 (1) ◽  
pp. 423-436 ◽
Author(s):  
Alexander M. Aroyo ◽  
Jan de Bruyne ◽  
Orian Dheu ◽  
Eduard Fosch-Villaronga ◽  
Aleksei Gudkov ◽  
...  

Abstract There is increasing attention given to the concept of trustworthiness for artificial intelligence and robotics. However, trust is highly context-dependent, varies among cultures, and requires reflection on others’ trustworthiness, appraising whether there is enough evidence to conclude that these agents deserve to be trusted. Moreover, little research exists on what happens when too much trust is placed in robots and autonomous systems. Conceptual clarity and a shared framework for approaching overtrust are missing. In this contribution, we offer an overview of pressing topics in the context of overtrust and robots and autonomous systems. Our review mobilizes insights solicited from in-depth conversations from a multidisciplinary workshop on the subject of trust in human–robot interaction (HRI), held at a leading robotics conference in 2020. A broad range of participants brought in their expertise, allowing the formulation of a forward-looking research agenda on overtrust and automation biases in robotics and autonomous systems. Key points include the need for multidisciplinary understandings that are situated in an eco-system perspective, the consideration of adjacent concepts such as deception and anthropomorphization, a connection to ongoing legal discussions through the topic of liability, and a socially embedded understanding of overtrust in education and literacy matters. The article integrates diverse literature and provides a ground for common understanding for overtrust in the context of HRI.


2019 ◽  
Vol 6 (2) ◽  
pp. 103 ◽
Author(s):  
Erik Danford Klein ◽  
Gary Backous ◽  
Thomas M. Schnieders ◽  
Zhonglun Wang ◽  
Richard T. Stone

2020 ◽  
Vol 35 ◽  
Author(s):  
Jaesik Jeong ◽  
Jeehyun Yang ◽  
Jacky Baltes

Abstract The use of robots in the performing arts is increasing. However, it is hard for robots to cope with unexpected circumstances during a performance, and it is almost impossible for them to act fully autonomously in such situations. IROS-HAC is a new challenge in robotics research and a new opportunity for cross-disciplinary collaborative research. In this paper, we describe a practical method for generating different personalities for a robot entertainer. The personalities are created by selecting speech or gestures from a set of options, using roulette wheel selection to favour answers that are more closely aligned with the desired personality. In particular, we focus on a robot magician, as a good magic show includes good interaction with the audience and may also involve other robots and performers. The magician’s varied personalities increased audience immersion and appreciation and maintained the audience’s interest. The magic show was awarded first prize in the competition on a comprehensive evaluation of technology, story, and performance. This paper contains both the research methodology and a critical evaluation of our research.
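Roulette wheel selection, as named above, is standard fitness-proportionate selection. A minimal sketch follows; the idea of scoring each speech/gesture option against a target personality is taken from the abstract, but the function names and the weighting scheme are our own illustrative assumptions:

```python
import random

def roulette_select(options, weights, rng=random):
    """Fitness-proportionate (roulette wheel) selection.

    Each option is chosen with probability weight / sum(weights), so
    options better aligned with the desired personality are favoured
    while lower-scoring ones still occur, preserving variety.
    """
    total = sum(weights)
    pick = rng.uniform(0, total)
    cumulative = 0.0
    for option, weight in zip(options, weights):
        cumulative += weight
        if pick <= cumulative:
            return option
    return options[-1]  # guard against floating-point rounding at the edge
```

For a robot entertainer, `weights` would come from scoring each candidate line or gesture against the desired personality profile, so the same question can draw different answers across shows while staying in character.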


Automation is becoming increasingly pervasive across technological domains. As this trend continues, work must be done to understand how humans interact with automated systems. Individual differences can influence performance during these interactions, particularly as automation becomes more complex and potentially leaves operators out of the loop. Much of the current research investigates the role of working memory in performance across low and high levels of unreliable automation, but there is little work investigating the connection between performance and other high-level cognitive processes such as attentional control. Foroughi et al. (2019) found a positive correlation between attentional control and task performance, but they included only a low-level form of automation. The purpose of this study was to investigate the relationship between attentional control and performance across increasing degrees of unreliable automation. Our results demonstrated a positive correlation between attentional control and performance under high-level unreliable automation.
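The reported result is a bivariate correlation; as a reference for the statistic behind such claims (this is not the authors' analysis code), Pearson's product-moment correlation can be computed as:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length samples.

    Returns a value in [-1, 1]: positive when higher x tends to accompany
    higher y (e.g. higher attentional-control scores with better task scores).
    """
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)
```

In practice, such a correlation would be reported with a significance test and sample size, which this sketch omits.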

