An efficient density-based clustering with side information and active learning: A case study for facial expression recognition task

2019 ◽  
Vol 23 (1) ◽  
pp. 227-240 ◽  
Author(s):  
Viet-Vu Vu ◽  
Hong-Quan Do ◽  
Vu-Tuan Dang ◽  
Nang-Toan Do

2010 ◽  
Author(s):  
M. Fischer-Shofty ◽  
S. G. Shamay-Tsoory ◽  
H. Harari ◽  
Y. Levkovitz

2019 ◽  
Vol 9 (11) ◽  
pp. 2218 ◽  
Author(s):  
Maria Grazia Violante ◽  
Federica Marcolin ◽  
Enrico Vezzetti ◽  
Luca Ulrich ◽  
Gianluca Billia ◽  
...  

This study proposes a novel quality function deployment (QFD) design methodology based on customers’ emotions conveyed by facial expressions. Current advances in pattern recognition related to face recognition techniques have fostered cross-fertilization between this context and other fields, such as product design and human-computer interaction. In particular, current technologies for monitoring human emotions have supported the birth of advanced emotional design techniques, whose main focus is to convey users’ emotional feedback into the design of novel products. As quality function deployment aims at transforming the voice of the customer into engineering features of a product, it appears to be an appropriate and promising nest in which to embed users’ emotional feedback with new emotional design methodologies, such as facial expression recognition. Accordingly, the present methodology consists of interviewing the user while acquiring his/her face with a depth camera (providing three-dimensional (3D) data), classifying the face information into different emotions with a support vector machine (SVM) classifier, and assigning weights to customers’ needs based on the detected facial expressions. The proposed method has been applied to a case study in the context of agriculture and validated by a consortium. The approach appears sound and capable of collecting the unconscious feedback of the interviewee.
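The core pipeline step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the 3D landmark descriptors are random stand-ins, and the emotion-to-weight mapping (`EMOTION_WEIGHT`) and the helper `weight_need` are hypothetical names invented here to show how a detected expression could scale a customer-need importance score.

```python
# Minimal sketch (synthetic data): classify face descriptors into emotions
# with an SVM, then weight a customer need by the detected emotion.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
EMOTIONS = ["neutral", "happiness", "disgust"]

# Stand-ins for 3D landmark descriptors extracted from depth-camera scans.
X_train = rng.normal(size=(90, 16))
y_train = rng.integers(0, len(EMOTIONS), size=90)

clf = SVC(kernel="rbf", gamma="scale").fit(X_train, y_train)

# Illustrative mapping from detected emotion to a QFD importance multiplier.
EMOTION_WEIGHT = {"happiness": 1.5, "neutral": 1.0, "disgust": 0.5}

def weight_need(base_importance: float, face_descriptor: np.ndarray) -> float:
    """Scale a customer-need importance by the emotion read from the face."""
    emotion = EMOTIONS[int(clf.predict(face_descriptor.reshape(1, -1))[0])]
    return base_importance * EMOTION_WEIGHT[emotion]

print(weight_need(3.0, rng.normal(size=16)))
```

In practice the training descriptors would come from the depth camera and the weight table from the QFD methodology itself; the sketch only shows where the classifier output enters the needs-weighting step.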


2005 ◽  
Vol 50 (9) ◽  
pp. 525-533 ◽  
Author(s):  
Benoit Bediou ◽  
Pierre Krolak-Salmon ◽  
Mohamed Saoud ◽  
Marie-Anne Henaff ◽  
Michael Burt ◽  
...  

Background: Impaired facial expression recognition in schizophrenia patients contributes to abnormal social functioning and may predict functional outcome in these patients. Facial expression processing involves individual neural networks that have been shown to malfunction in schizophrenia. Whether these patients have a selective deficit in facial expression recognition or a more global impairment in face processing remains controversial. Objective: To investigate whether patients with schizophrenia exhibit a selective impairment in facial emotional expression recognition, compared with patients with major depression and healthy control subjects. Methods: We studied performance on facial expression recognition and facial sex recognition paradigms, using original morphed faces, in a population with schizophrenia (n = 29) and compared their scores with those of depression patients (n = 20) and control subjects (n = 20). Results: Schizophrenia patients achieved lower scores than both other groups in the expression recognition task, particularly in fear and disgust recognition. Sex recognition was unimpaired. Conclusion: Facial expression recognition is impaired in schizophrenia, whereas sex recognition is preserved, which strongly suggests abnormal processing of changeable facial features in this disease. A dysfunction of the top-down retrograde modulation from limbic and paralimbic structures on visual areas is hypothesized.


2019 ◽  
Vol 16 (04) ◽  
pp. 1941002 ◽  
Author(s):  
Jing Li ◽  
Yang Mi ◽  
Gongfa Li ◽  
Zhaojie Ju

Facial expression recognition has been widely used in human-computer interaction (HCI) systems. Over the years, researchers have proposed different feature descriptors, implemented different classification methods, and carried out a number of experiments on various datasets for automatic facial expression recognition. However, most of them used 2D static images or 2D video sequences for the recognition task. The main limitations of 2D-based analysis are problems associated with variations in pose and illumination, which reduce recognition accuracy. An alternative, therefore, is to incorporate depth information acquired by a 3D sensor, because it is invariant to both pose and illumination. In this paper, we present a two-stream convolutional neural network (CNN)-based facial expression recognition system and test it on our own RGB-D facial expression dataset, collected with a Microsoft Kinect for XBOX in unspontaneous scenarios, since the Kinect is an inexpensive and portable device that captures both RGB and depth information. Our fully annotated dataset includes seven expressions (i.e., neutral, sadness, disgust, fear, happiness, anger, and surprise) for 15 subjects (9 males and 6 females) aged from 20 to 25. The two individual CNNs are identical in architecture but do not share parameters. To combine the detection results produced by these two CNNs, we propose a late-fusion approach. The experimental results demonstrate that the proposed two-stream network using RGB-D images is superior to using only RGB images or only depth images.
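The fusion step described above can be illustrated in isolation. In the sketch below the two CNN streams are stood in by fixed logit vectors (the abstract does not specify the fusion weights or network details, so equal weighting and the helper `late_fusion` are assumptions); the point is only that late fusion combines the two streams at the score level, after each network has produced per-class outputs.

```python
# Minimal sketch of late (score-level) fusion for a two-stream classifier:
# each stream's CNN output is reduced to class probabilities, then averaged.
import numpy as np

CLASSES = ["neutral", "sadness", "disgust", "fear", "happiness", "anger", "surprise"]

def softmax(z: np.ndarray) -> np.ndarray:
    e = np.exp(z - z.max())
    return e / e.sum()

def late_fusion(rgb_logits, depth_logits, w_rgb=0.5):
    """Weighted average of the two streams' class probabilities; argmax wins."""
    p = (w_rgb * softmax(np.asarray(rgb_logits, dtype=float))
         + (1.0 - w_rgb) * softmax(np.asarray(depth_logits, dtype=float)))
    return CLASSES[int(np.argmax(p))], p

# Both streams favor index 4 ("happiness"), so the fused prediction agrees.
label, probs = late_fusion([0.1, 0.0, 0.0, 0.0, 2.0, 0.2, 0.1],
                           [0.0, 0.1, 0.0, 0.0, 1.5, 0.3, 0.0])
print(label)  # → happiness
```

Because fusion happens after each stream's forward pass, the two networks can be trained independently and need not share parameters, consistent with the architecture the abstract describes.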


10.2196/13810 ◽  
2020 ◽  
Vol 22 (4) ◽  
pp. e13810 ◽  
Author(s):  
Anish Nag ◽  
Nick Haber ◽  
Catalin Voss ◽  
Serena Tamura ◽  
Jena Daniels ◽  
...  

Background: Several studies have shown that facial attention differs in children with autism. Measuring eye gaze and emotion recognition in children with autism is challenging, as standard clinical assessments must be delivered in clinical settings by a trained clinician. Wearable technologies may be able to bring eye gaze and emotion recognition into natural social interactions and settings. Objective: This study aimed to test (1) the feasibility of tracking gaze using wearable smart glasses during a facial expression recognition task and (2) the ability of these gaze-tracking data, together with facial expression recognition responses, to distinguish children with autism from neurotypical controls (NCs). Methods: We compared the eye gaze and emotion recognition patterns of 16 children with autism spectrum disorder (ASD) and 17 children without ASD via wearable smart glasses fitted with a custom eye tracker. Children identified static facial expressions of images presented on a computer screen along with nonsocial distractors while wearing Google Glass and the eye tracker. Faces were presented in three trials, during one of which children received feedback in the form of the correct classification. We employed hybrid human-labeling and computer vision-enabled methods for pupil tracking and world-gaze translation calibration. We analyzed the impact of gaze and emotion recognition features in a prediction task aiming to distinguish children with ASD from NC participants. Results: Gaze and emotion recognition patterns enabled the training of a classifier that distinguished ASD and NC groups. However, it was unable to significantly outperform other classifiers that used only age and gender features, suggesting that further work is necessary to disentangle these effects.
Conclusions: Although wearable smart glasses show promise in identifying subtle differences in gaze tracking and emotion recognition patterns in children with and without ASD, the present form factor and data do not allow for these differences to be reliably exploited by machine learning systems. Resolving these challenges will be an important step toward continuous tracking of the ASD phenotype.
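The key comparison in the results, a gaze-and-recognition classifier versus an age-and-gender baseline, can be sketched with synthetic data. Everything below is made up for illustration (feature names, distributions, and the use of logistic regression are assumptions; the study does not specify its classifier here); only the 16-versus-17 group sizes come from the abstract.

```python
# Illustrative sketch (synthetic data): does a classifier on gaze/recognition
# features beat one trained only on age and gender demographics?
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n = 33                                  # 16 ASD + 17 NC, as in the study
y = np.array([1] * 16 + [0] * 17)       # 1 = ASD, 0 = neurotypical control

# Hypothetical per-child features with a small group difference built in.
gaze = np.column_stack([
    rng.normal(0.6 - 0.1 * y, 0.05),    # fraction of gaze time on faces
    rng.normal(0.8 - 0.1 * y, 0.05),    # emotion-recognition accuracy
])
demo = np.column_stack([
    rng.normal(8.0, 2.0, n),            # age in years
    rng.integers(0, 2, n),              # gender
])

gaze_score = cross_val_score(LogisticRegression(), gaze, y, cv=3).mean()
demo_score = cross_val_score(LogisticRegression(), demo, y, cv=3).mean()
print(f"gaze+recognition: {gaze_score:.2f}  age+gender baseline: {demo_score:.2f}")
```

The study's negative finding corresponds to the case where the two cross-validated accuracies are not significantly different, which is why the baseline comparison matters before attributing predictive power to the wearable's signals.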


2021 ◽  
pp. 114991
Author(s):  
Miguel Rodríguez Santander ◽  
Juan Hernández Albarracín ◽  
Adín Ramírez Rivera

2021 ◽  
Vol 8 (11) ◽  
Author(s):  
Shota Uono ◽  
Wataru Sato ◽  
Reiko Sawada ◽  
Sayaka Kawakami ◽  
Sayaka Yoshimura ◽  
...  

People with schizophrenia or subclinical schizotypal traits exhibit impaired recognition of facial expressions. However, it remains unclear whether the detection of emotional facial expressions is impaired in people with schizophrenia or high levels of schizotypy. The present study examined whether the detection of emotional facial expressions would be associated with schizotypy in a non-clinical population after controlling for the effects of IQ, age, and sex. Angry or happy faces, or their anti-expressions, were presented among crowds of neutral faces, and participants were asked to respond, as quickly and as accurately as possible, as to whether all of the faces were the same. Anti-expressions contain a degree of visual change equivalent to that of normal emotional facial expressions relative to neutral facial expressions, yet are recognized as neutral expressions. Normal expressions of anger and happiness were detected more rapidly and accurately than their anti-expressions. Additionally, the degree of overall schizotypy was negatively correlated with the effectiveness of detecting normal expressions versus anti-expressions. An emotion-recognition task revealed that the degree of positive schizotypy was negatively correlated with the accuracy of facial expression recognition. These results suggest that people with high levels of schizotypy experienced difficulties detecting and recognizing emotional facial expressions.


2018 ◽  
Vol 52 ◽  
pp. 212-222 ◽  
Author(s):  
Minhaz Uddin Ahmed ◽  
Kim Jin Woo ◽  
Kim Yeong Hyeon ◽  
Md. Rezaul Bashar ◽  
Phill Kyu Rhee

Author(s):  
Laura Alonso-Recio ◽  
Fernando Carvajal ◽  
Carlos Merino ◽  
Juan Manuel Serrano

Social cognition (SC) comprises an array of cognitive and affective abilities such as social perception, theory of mind, empathy, and social behavior. Previous studies have suggested the existence of deficits in several SC abilities in Parkinson disease (PD), although not unanimously. Objective: The aim of this study is to assess the SC construct and to explore its relationship with cognitive state in PD patients. Method: We compare 19 PD patients with cognitive decline, 27 cognitively preserved PD patients, and 29 healthy control (HC) individuals in social perception (static and dynamic emotional facial recognition), theory of mind, empathy, and social behavior tasks. We also assess processing speed, executive functions, memory, language, and visuospatial ability. Results: PD patients with cognitive decline perform worse than the other groups in both facial expression recognition tasks and theory of mind. Cognitively preserved PD patients only score worse than HCs in the static facial expression recognition task. We find several significant correlations between each of the SC deficits and diverse cognitive processes. Conclusions: The results indicate that some components of SC are impaired in PD patients. These problems seem to be related to a global cognitive decline rather than to specific deficits. Considering the importance of these abilities for social interaction, we suggest that SC be included in the assessment protocols in PD.

