Image based Emotional State Prediction from Multiparty Audio Conversation

Author(s): Shruti Jaiswal, Ayush Jain, G.C. Nandi
IEEE Access, 2019, Vol 7, pp. 164759-164774

Author(s): Yasemin Bay Ayzeren, Meryem Erbilek, Erbug Celebi

10.2196/24465, 2021, Vol 9 (3), pp. e24465
Author(s): Emese Sükei, Agnes Norbury, M Mercedes Perez-Rodriguez, Pablo M Olmos, Antonio Artés

Background: Mental health disorders affect multiple aspects of patients' lives, including mood, cognition, and behavior. eHealth and mobile health (mHealth) technologies enable rich sets of information to be collected noninvasively, representing a promising opportunity to construct behavioral markers of mental health. Combining such data with self-reported information about psychological symptoms may provide a more comprehensive and contextualized view of a patient's mental state than questionnaire data alone. However, mobile sensed data are usually noisy and incomplete, with significant amounts of missing observations. Therefore, realizing the clinical potential of mHealth tools depends critically on developing methods to cope with such data issues.

Objective: This study aims to present a machine learning-based approach for emotional state prediction that uses passively collected data from mobile phones and wearable devices together with self-reported emotions. The proposed methods must cope with high-dimensional and heterogeneous time-series data with a large percentage of missing observations.

Methods: Passively sensed behavior and self-reported emotional state data from a cohort of 943 individuals (outpatients recruited from community clinics) were available for analysis. All patients had at least 30 days' worth of naturally occurring behavior observations, including information about physical activity, geolocation, sleep, and smartphone app use. These regularly sampled but frequently missing and heterogeneous time series were analyzed with two probabilistic latent variable models for data averaging and feature extraction: a mixture model (MM) and a hidden Markov model (HMM). The extracted features were then combined with a classifier to predict emotional state. A variety of classical machine learning methods and recurrent neural networks were compared. Finally, a personalized Bayesian model was proposed to improve performance by considering the individual differences in the data and applying a different classifier bias term for each patient.

Results: Probabilistic generative models proved to be good preprocessing and feature extraction tools for data with large percentages of missing observations. Models that took into account the posterior probabilities of the MM and HMM latent states outperformed those that did not by more than 20%, suggesting that the underlying behavioral patterns identified were meaningful for individuals' overall emotional state. The best performing generalized models achieved an area under the receiver operating characteristic curve of 0.81 and an area under the precision-recall curve of 0.71 when predicting self-reported emotional valence from behavior in held-out test data. Moreover, the proposed personalized models demonstrated that accounting for individual differences through a simple hierarchical model can substantially improve emotional state prediction performance without relying on previous days' data.

Conclusions: These findings demonstrate the feasibility of designing machine learning models for predicting emotional states from mobile sensing data that are capable of dealing with heterogeneous data with large numbers of missing observations. Such models may represent valuable tools for clinicians to monitor patients' mood states.
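The pipeline the abstract describes (fit a latent variable model to behavior data with missing observations, then feed the posterior probabilities of the latent states to a classifier) can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' code: the synthetic data, the four-component mixture, the mean-imputation step, and the logistic-regression classifier are all assumptions standing in for the paper's richer probabilistic treatment of missingness.

```python
# Sketch: mixture-model posterior features -> valence classifier.
# All data and model choices here are hypothetical stand-ins.
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.mixture import GaussianMixture
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 6))            # 300 patient-days, 6 behavior features
X[rng.random(X.shape) < 0.3] = np.nan    # ~30% missing observations
y = (np.nan_to_num(X[:, 0]) > 0).astype(int)  # synthetic valence labels

# Simple mean imputation (the paper instead handles missingness probabilistically)
X_imp = SimpleImputer(strategy="mean").fit_transform(X)

# Mixture model as feature extractor: posterior state probabilities per day
mm = GaussianMixture(n_components=4, random_state=0).fit(X_imp)
features = mm.predict_proba(X_imp)

# Classifier on the latent-state posteriors; evaluate on a held-out split
clf = LogisticRegression(max_iter=1000).fit(features[:150], y[:150])
auc = roc_auc_score(y[150:], clf.predict_proba(features[150:])[:, 1])
print(round(auc, 2))
```

A per-patient bias term, as in the paper's personalized Bayesian model, would amount to giving each patient their own intercept while sharing the remaining classifier weights across the cohort.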


2017 ◽  
Vol 76 (2) ◽  
pp. 71-79 ◽  
Author(s):  
Hélène Maire ◽  
Renaud Brochard ◽  
Jean-Luc Kop ◽  
Vivien Dioux ◽  
Daniel Zagar

Abstract. This study measured the effect of emotional states on lexical decision task performance and investigated which underlying components (physiological, attentional orienting, executive, lexical, and/or strategic) are affected. We did this by assessing participants’ performance on a lexical decision task, which they completed before and after an emotional state induction task. The sequence effect, usually produced when participants repeat a task, was significantly smaller in participants who had received one of the three emotion inductions (happiness, sadness, embarrassment) than in control group participants (neutral induction). Using the diffusion model (Ratcliff, 1978) to resolve the data into meaningful parameters that correspond to specific psychological components, we found that emotion induction only modulated the parameter reflecting the physiological and/or attentional orienting components, whereas the executive, lexical, and strategic components were not altered. These results suggest that emotional states have an impact on the low-level mechanisms underlying mental chronometric tasks.
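The diffusion model invoked above decomposes response time distributions into interpretable parameters. A minimal simulation sketch (the parameter values are arbitrary assumptions, not estimates from this study) shows how choices and response times arise from a drift rate v, boundary separation a, and a non-decision time Ter, the component often interpreted as covering encoding/orienting and motor stages:

```python
# Minimal drift-diffusion simulation; parameter values are illustrative only.
import numpy as np

def simulate_ddm(v=0.3, a=1.0, z=0.5, Ter=0.3, s=1.0, dt=0.002, n=500, seed=0):
    """Simulate n trials: evidence x starts at z*a and drifts with rate v
    plus Gaussian noise (scale s) until it crosses a (upper) or 0 (lower).
    Returns response times (decision time + non-decision time Ter) and
    a boolean array marking upper-boundary responses."""
    rng = np.random.default_rng(seed)
    rts, upper = [], []
    for _ in range(n):
        x, t = z * a, 0.0
        while 0.0 < x < a:
            x += v * dt + s * np.sqrt(dt) * rng.normal()
            t += dt
        rts.append(t + Ter)
        upper.append(x >= a)
    return np.array(rts), np.array(upper)

rts, correct = simulate_ddm()
print(round(rts.mean(), 3), round(correct.mean(), 3))
```

In this framework, the study's finding amounts to emotion induction shifting a parameter like Ter while leaving v, a, and response biases unchanged.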


2010 ◽  
Vol 24 (1) ◽  
pp. 33-40 ◽  
Author(s):  
Miroslaw Wyczesany ◽  
Jan Kaiser ◽  
Anton M. L. Coenen

The study determined the associations between self-reported ongoing emotional state and EEG patterns. A group of 31 hospitalized patients was enrolled with three types of diagnosis: major depressive disorder, manic episode of bipolar affective disorder, and nonaffective patients. The Thayer ADACL checklist, which yields two subjective dimensions, was used for the assessment of affective state: Energy-Tiredness (ET) and Tension-Calmness (TC). Quantitative analysis of the EEG was based on spectral power and a laterality coefficient (LC). Only the ET scale showed relationships with the laterality coefficient: the high-energy group showed a rightward shift of activity in frontocentral and posterior areas, visible in the alpha and beta ranges, respectively. No effect of ET estimation on prefrontal asymmetry was observed. For the TC scale, an estimation of high tension was related to right prefrontal dominance and right posterior activation in the beta1 band. In addition, a decrease in alpha2 power together with an increase in beta2 power was observed over the entire scalp.
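The abstract does not define the laterality coefficient; a common convention, assumed here, is LC = (P_right - P_left) / (P_right + P_left) on band-limited spectral power from a homologous electrode pair, so that a positive LC indicates relatively more right-hemisphere power. A sketch under that assumption, with synthetic signals and an illustrative alpha band:

```python
# Hypothetical laterality coefficient on alpha-band power; the formula and
# band limits are assumptions, not taken from the paper.
import numpy as np
from scipy.signal import welch

def band_power(signal, fs, lo, hi):
    # Welch periodogram, then sum power in the [lo, hi] Hz band
    f, pxx = welch(signal, fs=fs, nperseg=fs * 2)
    mask = (f >= lo) & (f <= hi)
    return pxx[mask].sum()  # relative units; fine for a ratio

def laterality_coefficient(left, right, fs=250, band=(8.0, 13.0)):
    pl = band_power(left, fs, *band)
    pr = band_power(right, fs, *band)
    return (pr - pl) / (pr + pl)

rng = np.random.default_rng(1)
t = np.arange(0, 10, 1 / 250)  # 10 s at 250 Hz
left = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.normal(size=t.size)
right = 2 * np.sin(2 * np.pi * 10 * t) + 0.5 * rng.normal(size=t.size)
print(round(laterality_coefficient(left, right), 2))  # positive: right dominance
```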


2020 ◽  
pp. 1-17
Author(s):  
Szczepan J. Grzybowski ◽  
Miroslaw Wyczesany ◽  
Jan Kaiser

Abstract. The goal of the study was to explore event-related potential (ERP) differences during the processing of emotional adjectives that were evaluated as congruent or incongruent with the current mood. We hypothesized that the first effects of congruence evaluation would be evidenced during the earliest stages of semantic analysis. Sixty mood adjectives were presented separately for 1,000 ms each during two sessions of mood induction. After each presentation, participants evaluated to what extent the word described their mood. The results pointed to the marking of incongruence between an adjective’s meaning and the current mood during the early attention orientation and semantic access stages (the P150 component time window). This was followed by enhanced processing of congruent words at later stages. As a secondary goal, the study also explored word valence effects and their relation to congruence evaluation. In this regard, no significant effects were observed in the ERPs; however, a negativity bias (enhanced responses to negative adjectives) was noted in the behavioral data (reaction times, RTs), which could correspond to the small differences traced on the late positive potential.


2007 ◽  
Author(s):  
T. N. Reznikova ◽  
I. U. Terent'eva ◽  
N. A. Seliverstova ◽  
V. I. Semivolos
