Social Cognition and Cognitive Decline in Patients with Parkinson’s Disease

Author(s): Laura Alonso-Recio, Fernando Carvajal, Carlos Merino, Juan Manuel Serrano

Abstract: Social cognition (SC) comprises an array of cognitive and affective abilities such as social perception, theory of mind, empathy, and social behavior. Previous studies have suggested deficits in several SC abilities in Parkinson's disease (PD), although findings have not been unanimous. Objective: The aim of this study is to assess the SC construct and to explore its relationship with cognitive state in PD patients. Method: We compare 19 PD patients with cognitive decline, 27 cognitively preserved PD patients, and 29 healthy control (HC) individuals on social perception (static and dynamic emotional facial recognition), theory of mind, empathy, and social behavior tasks. We also assess processing speed, executive functions, memory, language, and visuospatial ability. Results: PD patients with cognitive decline perform worse than the other groups on both facial expression recognition tasks and on theory of mind. Cognitively preserved PD patients score worse than HCs only on the static facial expression recognition task. We find several significant correlations between each of the SC deficits and diverse cognitive processes. Conclusions: The results indicate that some components of SC are impaired in PD patients. These problems seem to be related to global cognitive decline rather than to specific deficits. Given the importance of these abilities for social interaction, we suggest that SC be included in assessment protocols for PD.

2010
Author(s): M. Fischer-Shofty, S. G. Shamay-Tsoory, H. Harari, Y. Levkovitz

2005
Vol 50 (9), pp. 525-533
Author(s): Benoit Bediou, Pierre Krolak-Salmon, Mohamed Saoud, Marie-Anne Henaff, Michael Burt, ...

Background: Impaired facial expression recognition in schizophrenia patients contributes to abnormal social functioning and may predict functional outcome in these patients. Facial expression processing involves distinct neural networks that have been shown to malfunction in schizophrenia. Whether these patients have a selective deficit in facial expression recognition or a more global impairment in face processing remains controversial. Objective: To investigate whether patients with schizophrenia exhibit a selective impairment in facial emotional expression recognition compared with patients with major depression and healthy control subjects. Methods: We studied performance on facial expression recognition and facial sex recognition paradigms, using original morphed faces, in a population with schizophrenia (n = 29) and compared their scores with those of depression patients (n = 20) and control subjects (n = 20). Results: Schizophrenia patients achieved lower scores than both other groups in the expression recognition task, particularly in fear and disgust recognition. Sex recognition was unimpaired. Conclusion: Facial expression recognition is impaired in schizophrenia, whereas sex recognition is preserved, which strongly suggests abnormal processing of changeable facial features in this disease. A dysfunction of the top-down modulation exerted by limbic and paralimbic structures on visual areas is hypothesized.


2019
Vol 16 (04), pp. 1941002
Author(s): Jing Li, Yang Mi, Gongfa Li, Zhaojie Ju

Facial expression recognition has been widely used in human-computer interaction (HCI) systems. Over the years, researchers have proposed different feature descriptors, implemented different classification methods, and carried out a number of experiments on various datasets for automatic facial expression recognition. However, most of them used 2D static images or 2D video sequences for the recognition task. The main limitation of 2D-based analysis is its sensitivity to variations in pose and illumination, which reduce recognition accuracy. An alternative is therefore to incorporate depth information acquired by a 3D sensor, which is invariant to both pose and illumination. In this paper, we present a two-stream convolutional neural network (CNN)-based facial expression recognition system and test it on our own RGB-D facial expression dataset, collected with a Microsoft Kinect for XBOX in unspontaneous scenarios; the Kinect is an inexpensive, portable device that captures both RGB and depth information. Our fully annotated dataset includes seven expressions (neutral, sadness, disgust, fear, happiness, anger, and surprise) for 15 subjects (9 males and 6 females) aged 20 to 25. The two individual CNNs are identical in architecture but do not share parameters. To combine the detection results produced by the two CNNs, we propose a late fusion approach, as sketched below. The experimental results demonstrate that the proposed two-stream network using RGB-D images is superior to networks using only RGB images or only depth images.
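Below is a minimal PyTorch sketch of the kind of two-stream architecture with late fusion the abstract describes: two CNNs identical in architecture but with separate parameters, one per modality, whose class scores are combined only at the end. The layer sizes, input resolution, and score-averaging fusion rule are illustrative assumptions, not the authors' exact network.

```python
import torch
import torch.nn as nn

class StreamCNN(nn.Module):
    """One stream; the RGB and depth streams share this architecture
    but, as described in the paper, do not share parameters."""
    def __init__(self, in_channels: int, num_classes: int = 7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(4),
        )
        self.classifier = nn.Linear(64 * 4 * 4, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

class TwoStreamFER(nn.Module):
    """Late fusion: each stream scores the expression independently,
    and the scores are combined (here, averaged) only at the end."""
    def __init__(self, num_classes: int = 7):
        super().__init__()
        self.rgb_stream = StreamCNN(in_channels=3, num_classes=num_classes)
        self.depth_stream = StreamCNN(in_channels=1, num_classes=num_classes)

    def forward(self, rgb, depth):
        return 0.5 * (self.rgb_stream(rgb) + self.depth_stream(depth))

# Usage on dummy RGB-D inputs (hypothetical resolution):
model = TwoStreamFER(num_classes=7)
rgb = torch.randn(8, 3, 96, 96)    # batch of RGB crops
depth = torch.randn(8, 1, 96, 96)  # matching depth maps
logits = model(rgb, depth)         # (8, 7) fused class scores
```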


10.2196/13810
2020
Vol 22 (4), pp. e13810
Author(s): Anish Nag, Nick Haber, Catalin Voss, Serena Tamura, Jena Daniels, ...

Background: Several studies have shown that facial attention differs in children with autism. Measuring eye gaze and emotion recognition in children with autism is challenging, as standard clinical assessments must be delivered in clinical settings by a trained clinician. Wearable technologies may be able to bring eye gaze and emotion recognition into natural social interactions and settings. Objective: This study aimed to test (1) the feasibility of tracking gaze using wearable smart glasses during a facial expression recognition task and (2) the ability of these gaze-tracking data, together with facial expression recognition responses, to distinguish children with autism from neurotypical controls (NCs). Methods: We compared the eye gaze and emotion recognition patterns of 16 children with autism spectrum disorder (ASD) and 17 children without ASD via wearable smart glasses fitted with a custom eye tracker. Children identified static facial expressions presented on a computer screen, alongside nonsocial distractors, while wearing Google Glass and the eye tracker. Faces were presented in three trials, during one of which children received feedback in the form of the correct classification. We employed hybrid human-labeling and computer vision-enabled methods for pupil tracking and world-gaze translation calibration. We analyzed the impact of gaze and emotion recognition features in a prediction task aiming to distinguish children with ASD from NC participants, as sketched below. Results: Gaze and emotion recognition patterns enabled the training of a classifier that distinguished the ASD and NC groups. However, it was unable to significantly outperform classifiers that used only age and gender features, suggesting that further work is necessary to disentangle these effects. Conclusions: Although wearable smart glasses show promise in identifying subtle differences in gaze tracking and emotion recognition patterns in children with and without ASD, the present form factor and data do not allow these differences to be reliably exploited by machine learning systems. Resolving these challenges will be an important step toward continuous tracking of the ASD phenotype.
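The prediction task can be illustrated with a small scikit-learn sketch comparing a classifier trained on gaze and emotion recognition features against a baseline using only age and gender. The feature names and toy data below are hypothetical; the study's actual features and models are richer.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 33                                # 16 ASD + 17 NC, as in the study
y = np.array([1] * 16 + [0] * 17)     # 1 = ASD, 0 = neurotypical control

# Hypothetical per-child features: e.g., mean fixation time on faces,
# gaze dispersion, and emotion-recognition accuracy (toy values here).
gaze_emotion = rng.normal(size=(n, 3))
age_gender = np.column_stack([rng.uniform(6, 12, n),   # age in years
                              rng.integers(0, 2, n)])  # gender coded 0/1

full = np.hstack([gaze_emotion, age_gender])
clf = LogisticRegression(max_iter=1000)

acc_full = cross_val_score(clf, full, y, cv=5).mean()
acc_demo = cross_val_score(clf, age_gender, y, cv=5).mean()

# The paper's point: if acc_full does not clearly beat acc_demo, the
# gaze/emotion features are not yet adding reliable signal.
print(f"gaze+emotion+demographics: {acc_full:.2f}, demographics only: {acc_demo:.2f}")
```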


2021
Vol 8 (11)
Author(s): Shota Uono, Wataru Sato, Reiko Sawada, Sayaka Kawakami, Sayaka Yoshimura, ...

People with schizophrenia or subclinical schizotypal traits exhibit impaired recognition of facial expressions. However, it remains unclear whether the detection of emotional facial expressions is impaired in people with schizophrenia or high levels of schizotypy. The present study examined whether the detection of emotional facial expressions is associated with schizotypy in a non-clinical population, after controlling for the effects of IQ, age, and sex. Participants were asked to indicate, as quickly and as accurately as possible, whether all faces in a display were the same, when angry or happy faces or their anti-expressions appeared among crowds of neutral faces. Anti-expressions contain a degree of visual change equivalent to that of normal emotional facial expressions relative to neutral facial expressions, but are recognized as neutral expressions. Normal expressions of anger and happiness were detected more rapidly and accurately than their anti-expressions. Additionally, the degree of overall schizotypy was negatively correlated with the efficiency of detecting normal expressions versus anti-expressions (an analysis style sketched below). An emotion recognition task further revealed that the degree of positive schizotypy was negatively correlated with the accuracy of facial expression recognition. These results suggest that people with high levels of schizotypy have difficulty both detecting and recognizing emotional facial expressions.
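The "controlling for IQ, age, and sex" step can be illustrated as a partial correlation computed by residualization. The data below are synthetic and the variable names hypothetical; this is a sketch of the analysis style, not the study's code.

```python
import numpy as np

def partial_corr(x, y, covars):
    """Pearson correlation of x and y after regressing out covars."""
    design = np.column_stack([np.ones(len(x)), covars])
    # Residuals of x and y after ordinary least squares on the covariates
    rx = x - design @ np.linalg.lstsq(design, x, rcond=None)[0]
    ry = y - design @ np.linalg.lstsq(design, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(1)
n = 80
schizotypy = rng.normal(size=n)                         # questionnaire score
detection_eff = -0.3 * schizotypy + rng.normal(size=n)  # toy negative effect
covariates = np.column_stack([
    rng.normal(100, 15, n),   # IQ
    rng.uniform(18, 40, n),   # age
    rng.integers(0, 2, n),    # sex coded 0/1
])

r = partial_corr(detection_eff, schizotypy, covariates)
print(f"partial r (controlling IQ, age, sex) = {r:.2f}")  # negative, as reported
```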


Author(s): Ioan Buciu, Ioan Nafornita

Human face analysis has attracted a large number of researchers from various fields, such as computer vision, image processing, neurophysiology, and psychology. One particular aspect of human face analysis is the facial expression recognition task. We develop a novel method based on phase congruency for extracting the facial features used in the facial expression classification procedure. Given a set of image samples of humans displaying various expressions, the approach computes the phase congruency map between samples. The analysis is performed in the frequency domain, where the similarity (or dissimilarity) between sample phases is measured to form discriminant features. The experiments were run using samples from two facial expression databases. To assess the method's performance, the technique is compared with state-of-the-art techniques used for classifying facial expressions, such as Principal Component Analysis (PCA), Independent Component Analysis (ICA), Linear Discriminant Analysis (LDA), and Gabor jets; a comparable evaluation pipeline is sketched below. The features extracted by these techniques are further classified using two classifiers: a distance-based classifier and a Support Vector Machine-based classifier. Experiments reveal superior facial expression recognition performance for the proposed approach with respect to the other techniques.
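A minimal scikit-learn sketch of this evaluation pipeline: PCA and LDA stand in as feature extractors (the paper's phase congruency features would take their place), each followed by a distance-based classifier (nearest neighbor) and a linear SVM. The toy data and parameter choices are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier  # distance-based classifier
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.normal(size=(210, 64 * 64))  # 210 flattened face images (toy data)
y = rng.integers(0, 7, size=210)     # 7 expression labels

for ext_name, extractor in [("PCA", PCA(n_components=30)),
                            ("LDA", LinearDiscriminantAnalysis())]:
    for clf_name, clf in [("1-NN", KNeighborsClassifier(n_neighbors=1)),
                          ("SVM", SVC(kernel="linear"))]:
        # Pipeline keeps feature extraction inside each CV fold,
        # avoiding train/test leakage in the comparison.
        pipe = make_pipeline(extractor, clf)
        acc = cross_val_score(pipe, X, y, cv=5).mean()
        print(f"{ext_name} + {clf_name}: {acc:.2f}")
```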


2016
Vol 31 (3), pp. 320-326
Author(s): Amy C Bilderbeck, Lauren Z Atkinson, John R Geddes, Guy M Goodwin, Catherine J Harmer

Objectives: Emotional processing abnormalities have been implicated in bipolar disorder (BD), but studies are typically small and uncontrolled. Here, facial expression recognition was explored in a large, naturalistically recruited cohort of BD patients. Methods: A total of 271 patients with BD completed the facial expression recognition task. The effects of current medication, together with the influence of current mood state and diagnostic subtype, were assessed whilst controlling for the effects of demographic variables. Results: Patients currently receiving treatment with lithium demonstrated significantly poorer accuracy in recognising angry faces, an effect that held in a monotherapy sub-analysis comparing participants on lithium alone with those who were medication-free. Accuracy in recognising angry faces was also lower amongst participants currently taking dopamine antagonists (antipsychotics). Higher levels of current depressive symptoms were linked to poorer accuracy in identifying happy faces. Conclusion: Use of lithium, and possibly dopamine antagonists, may be associated with reduced processing of anger cues in BD. The findings support the existence of mood-congruent negative biases associated with depressive symptoms in BD. Observational cohort studies provide opportunities to explore the substantial effects of demographic, psychometric, and clinical variables on cognitive performance and emotional processing.


2018
Vol 2018, pp. 1-10
Author(s): Xiaoqing Wang, Xiangjun Wang, Yubo Ni

In the facial expression recognition task, a convolutional neural network (CNN) model that performs well when trained on one dataset (the source dataset) usually performs poorly on another dataset (the target dataset), because the feature distribution of the same emotion varies across datasets. To improve the cross-dataset accuracy of the CNN model, we introduce an unsupervised domain adaptation method that is especially suitable for small, unlabelled target datasets. To address the shortage of samples from the target dataset, we train a generative adversarial network (GAN) on the target dataset and use the GAN-generated samples to fine-tune the model pretrained on the source dataset. During fine-tuning, we dynamically assign the unlabelled GAN-generated samples distributed pseudo-labels according to the current prediction probabilities, as sketched below. Our method can be easily applied to any existing CNN. We demonstrate the effectiveness of our method on four facial expression recognition datasets with two CNN structures and obtain encouraging results.
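A minimal PyTorch sketch of the fine-tuning step, as one interpretation of the described procedure: the model's current prediction probabilities on a batch of GAN-generated target images serve as soft ("distributed") pseudo-labels for the next update. The GAN, the source-pretrained model, and the optimizer are assumed to exist already; the names here are illustrative, not the authors' code.

```python
import torch
import torch.nn.functional as F

def finetune_step(model, optimizer, gan_batch):
    """One update on a batch of unlabelled GAN-generated target images."""
    model.eval()
    with torch.no_grad():
        # Soft pseudo-labels: the model's current prediction probabilities
        pseudo = F.softmax(model(gan_batch), dim=1)

    model.train()
    optimizer.zero_grad()
    log_probs = F.log_softmax(model(gan_batch), dim=1)
    # Cross-entropy against the soft pseudo-label distribution; because
    # the pseudo-labels are recomputed every step, they track the model's
    # evolving predictions ("dynamic" assignment).
    loss = -(pseudo * log_probs).sum(dim=1).mean()
    loss.backward()
    optimizer.step()
    return loss.item()
```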

