Towards Adaptive and Least-Collaborative-Effort Social Robots

Author(s):  
Dimosthenis Kontogiorgos ◽  
Hannah R. M. Pelikan


Author(s):  
Ronald S. Weinstein ◽  
N. Scott McNutt

The Type I simple cold-block device, described by Bullivant and Ames in 1966, was the product of the first successful effort to simplify the equipment required for sophisticated freeze-cleave techniques. Bullivant, Weinstein, and Someda described the Type II device, a modification of the Type I device developed as a collaborative effort between the Massachusetts General Hospital and the University of Auckland, New Zealand. The modifications reduced specimen contamination and provided controlled specimen warming for heat-etching of fracture faces. We have now tested the Massachusetts General Hospital version of the Type II device (the “Type II-MGH device”) on a wide variety of biological specimens and have established temperature and pressure curves for routine heat-etching with the device.


1992 ◽  
Vol 23 (4) ◽  
pp. 367-368 ◽  
Author(s):  
Jennifer Chisler Borsch ◽  
Ruth Oaks

This article discusses a collaborative effort between a speech-language pathologist and a regular third-grade teacher. The overall goal of the collaboration was to improve the communication skills of students throughout the school. The factors that made the collaboration a success are discussed.


1999 ◽  
Vol 9 (1) ◽  
pp. 1-1

It is my pleasure to introduce this newsletter, the first collaborative effort between Division 1, Language Learning and Education, and Division 9, Hearing and Hearing Disorders in Childhood, to share information we believe affiliates of both divisions will find useful.


2020 ◽  
Author(s):  
Chiara de Jong ◽  
Rinaldo Kühne ◽  
Jochen Peter ◽  
Caroline L. van Straten ◽  
Alex Barco

2016 ◽  
Author(s):  
William Slattery ◽  
Kurtz K. Miller ◽  
Douglas Brown ◽  
D. Mark Jones ◽  
...  

Author(s):  
Alistair M. C. Isaac ◽  
Will Bridewell

It is easy to see that social robots will need the ability to detect and evaluate deceptive speech; otherwise they will be vulnerable to manipulation by malevolent humans. More surprisingly, we argue that effective social robots must also be able to produce deceptive speech. Many forms of technically deceptive speech perform a positive pro-social function, and the social integration of artificial agents will be possible only if they participate in this market of constructive deceit. We demonstrate that a crucial condition for detecting and producing deceptive speech is possession of a theory of mind. Furthermore, strategic reasoning about deception requires identifying a type of goal distinguished by its priority over the norms of conversation, which we call an ulterior motive. We argue that this goal is the appropriate target for ethical evaluation, not the veridicality of speech per se. Consequently, deception-capable robots are compatible with the most prominent programs to ensure that robots behave ethically.
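
To make the abstract's core distinction concrete, the sketch below (a toy model, not the authors' formalism) represents an ulterior motive as a goal whose priority exceeds that of conversational norms and evaluates an utterance by the motive it serves rather than by whether it is true. All class names, example goals, and the CONVERSATIONAL_NORM_PRIORITY threshold are hypothetical illustrations.

# A minimal, illustrative sketch (not the authors' formalism): an "ulterior
# motive" is modelled as a goal whose priority outranks the norms of
# conversation, and utterances are evaluated by the motive they serve rather
# than by their veridicality. All names and numbers here are hypothetical.
from dataclasses import dataclass
from typing import List

# Hypothetical baseline priority the agent assigns to cooperative
# conversational norms; goals ranked above it count as ulterior motives.
CONVERSATIONAL_NORM_PRIORITY = 1.0

@dataclass
class Goal:
    description: str
    priority: float   # higher = more important to the agent
    prosocial: bool   # illustrative ethical label attached to the goal itself

@dataclass
class Utterance:
    text: str
    speaker_believes_true: bool  # a theory-of-mind attribution about the speaker
    serves: Goal                 # the goal the utterance is produced to advance

def ulterior_motives(goals: List[Goal]) -> List[Goal]:
    """Return the goals the agent ranks above the norms of conversation."""
    return [g for g in goals if g.priority > CONVERSATIONAL_NORM_PRIORITY]

def evaluate(utterance: Utterance) -> str:
    """Evaluate an utterance by the motive it serves, not its literal truth."""
    deceptive = not utterance.speaker_believes_true
    if utterance.serves.priority <= CONVERSATIONAL_NORM_PRIORITY:
        return "norm-governed speech (no ulterior motive)"
    if deceptive and utterance.serves.prosocial:
        return "pro-social deceit (e.g., a white lie)"
    if deceptive:
        return "manipulative deceit (ethically problematic motive)"
    return "truthful but strategically motivated speech"

if __name__ == "__main__":
    comfort = Goal("spare the listener's feelings", priority=2.0, prosocial=True)
    swindle = Goal("extract money from the listener", priority=3.0, prosocial=False)
    print([g.description for g in ulterior_motives([comfort, swindle])])
    print(evaluate(Utterance("I love the gift!", speaker_believes_true=False, serves=comfort)))
    print(evaluate(Utterance("This watch is solid gold.", speaker_believes_true=False, serves=swindle)))

In this toy model both example utterances are deceptive, but only the second is flagged as ethically problematic, mirroring the abstract's argument that the ulterior motive, not veridicality, is the proper target of ethical evaluation.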

