Recognition of Facial Expressions under Varying Conditions Using Dual-Feature Fusion

2019 ◽  
Vol 2019 ◽  
pp. 1-12 ◽  
Author(s):  
Awais Mahmood ◽  
Shariq Hussain ◽  
Khalid Iqbal ◽  
Wail S. Elkilani

Facial expression recognition plays an important role in communicating the emotions and intentions of human beings. Facial expression recognition in an uncontrolled environment is more difficult than in a controlled environment because of variations in occlusion, illumination, and noise. In this paper, we present a new framework for effective facial expression recognition from real-time facial images. Unlike other methods, which spend considerable time processing the whole face image or dividing it into blocks, our method extracts discriminative features from salient face regions and then combines them with texture and orientation features for a better representation. Furthermore, we reduce the data dimension by selecting only the most discriminative features. The proposed framework provides high recognition accuracy even in the presence of occlusion, illumination changes, and noise. To show the robustness of the proposed framework, we used three publicly available challenging datasets. The experimental results show that the proposed framework outperforms existing techniques, which indicates the considerable potential of combining geometric features with appearance-based features.
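The fuse-then-select pipeline the abstract describes can be sketched in a few lines. The code below is an illustrative sketch, not the authors' implementation: the three arrays are random stand-ins for salient-region (geometric), texture, and orientation descriptors, and per-feature variance ranking is one simple stand-in for a discriminative feature-selection criterion.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the three feature types the abstract names:
# salient-region (geometric), texture, and orientation descriptors.
geometric = rng.normal(size=(100, 68 * 2))   # e.g. 68 landmark coordinates
texture = rng.normal(size=(100, 59))         # e.g. a uniform-LBP histogram
orientation = rng.normal(size=(100, 36))     # e.g. HOG orientation bins

# Fusion by concatenation into one descriptor per image.
fused = np.hstack([geometric, texture, orientation])

# Dimension reduction: keep only the most discriminative features,
# approximated here by ranking features on their variance.
k = 64
top = np.argsort(fused.var(axis=0))[::-1][:k]
reduced = fused[:, np.sort(top)]

print(fused.shape, reduced.shape)  # (100, 231) (100, 64)
```

A supervised criterion (e.g. mutual information with the expression labels) would be the natural replacement for the variance ranking when labels are available.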

2017 ◽  
Vol 120 (3) ◽  
pp. 391-407 ◽  
Author(s):  
Ying Ge ◽  
Xiaofang Zhong ◽  
Wenbo Luo

Internet addiction affects the facial expression recognition of individuals; however, evidence on facial expression recognition across different types of addicts is insufficient. The present study addressed this question by adopting an eye-movement analytical method and focusing on the difference in facial expression recognition between internet-addicted and non-internet-addicted urban left-behind children in China. Sixty 14-year-old Chinese participants performed tasks requiring absolute recognition judgments and relative recognition judgments. The results show that the information-processing mode adopted by the internet-addicted group involved earlier gaze acceleration, longer fixation durations, lower fixation counts, and uniform extraction of pictorial information; the mode of the non-addicted group showed the opposite pattern. Moreover, recognition and processing of negative emotion pictures were relatively complex, and it was especially difficult for urban internet-addicted left-behind children to process negative emotion pictures in the fine judgment and processing stage of difference recognition, as demonstrated by longer fixation durations and insufficient fixation counts.


Fractals ◽  
2002 ◽  
Vol 10 (01) ◽  
pp. 47-52 ◽  
Author(s):  
Takuma Takehara ◽  
Fumio Ochiai ◽  
Naoto Suzuki

Following Mandelbrot's theory of fractals, many shapes and phenomena in nature have been suggested to be fractal. Even animal behavior and human physiological responses can be represented as fractal. Here, we show evidence that the concept of fractals can be applied even to facial expression recognition, one of the most important components of human recognition. Rating data derived from judging morphed facial images were represented in a two-dimensional psychological space by multidimensional scaling of four different scales. The perimeter of the resulting emotion-circumplex structure fluctuated and was judged to have a fractal dimension of 1.18: the smaller the unit of measurement, the longer the measured perimeter of the circumplex. In this study, we provide interdisciplinary evidence of fractality through its application to facial expression recognition.
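The scaling relation in the abstract (smaller measurement unit, longer measured perimeter) is what a fractal-dimension estimator quantifies. Below is a minimal box-counting sketch, a standard estimator related to the divider method; the circle is a synthetic stand-in for the circumplex perimeter, so its estimated dimension should come out close to 1 rather than the paper's 1.18.

```python
import numpy as np

def box_count_dim(points, sizes):
    """Estimate the fractal dimension of a planar curve by box counting."""
    counts = []
    for s in sizes:
        # Count the grid boxes of side s that the curve occupies.
        boxes = set(map(tuple, np.floor(points / s).astype(int)))
        counts.append(len(boxes))
    # The slope of log N(s) versus log(1/s) estimates the dimension.
    slope, _ = np.polyfit(np.log(1 / sizes), np.log(counts), 1)
    return slope

# Synthetic smooth closed curve (a circle) standing in for the
# circumplex perimeter; a smooth curve has dimension close to 1.
t = np.linspace(0, 2 * np.pi, 20000, endpoint=False)
pts = 0.5 + 0.4 * np.column_stack([np.cos(t), np.sin(t)])
sizes = np.array([0.1, 0.05, 0.025, 0.0125, 0.00625])
print(round(box_count_dim(pts, sizes), 2))
```

Running the same estimator on the measured circumplex boundary, rather than a circle, is what would reproduce a dimension above 1.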


Webology ◽  
2020 ◽  
Vol 17 (2) ◽  
pp. 804-816 ◽  
Author(s):  
Elaf J. Al Taee ◽  
Qasim Mohammed Jasim

A facial expression is a visual impression of a person's situation, emotions, cognitive activity, personality, intention, and psychopathology; it plays an active and vital role in the exchange of information and in communication between people. For machines and robots dedicated to communicating with humans, facial expression recognition plays an important role in reading what a person implies, especially in the field of health, so research in this area advances human-robot communication. The topic has been discussed extensively, and progress in deep learning, in particular the proven efficiency of Convolutional Neural Networks (CNNs) in image processing, has led to the use of CNNs for recognizing facial expressions. An automatic Facial Expression Recognition (FER) system must perform detection and localization of faces in a cluttered scene, feature extraction, and classification. In this research, a CNN performs the FER process. The target is to label each facial image with one of the seven emotion categories of the JAFFE database: sad, happy, fear, surprise, anger, disgust, and neutral. We trained CNNs of different depths using gray-scale images from the JAFFE database. The accuracy of the proposed system was 100%.
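As a minimal illustration of the kind of network involved (the paper's actual depths and filter counts are not given here), a small CNN mapping grayscale face images to seven emotion logits might look like this in PyTorch; the 64x64 input size and layer sizes are assumptions, since JAFFE's 256x256 images would typically be resized first.

```python
import torch
from torch import nn

# Illustrative CNN for 7-class facial expression recognition on
# 64x64 grayscale inputs (sizes assumed, not the paper's architecture).
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 7),  # seven emotion categories
)

logits = model(torch.randn(4, 1, 64, 64))  # a batch of 4 face images
print(logits.shape)  # torch.Size([4, 7])
```

Varying the number of convolution/pooling stages gives the "different depths" the abstract mentions; only the input size of the final linear layer needs adjusting.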


2022 ◽  
Vol 8 ◽  
Author(s):  
Niyati Rawal ◽  
Dorothea Koert ◽  
Cigdem Turan ◽  
Kristian Kersting ◽  
Jan Peters ◽  
...  

The ability of a robot to generate appropriate facial expressions is a key aspect of perceived sociability in human-robot interaction. Yet many existing approaches rely on a set of fixed, preprogrammed joint configurations for expression generation. Automating this process offers the potential to scale better to different robot types and a wider range of expressions. To this end, we introduce ExGenNet, a novel deep generative approach for facial expressions on humanoid robots. ExGenNets connect a generator network, which reconstructs simplified facial images from robot joint configurations, with a classifier network for state-of-the-art facial expression recognition. The robots’ joint configurations are optimized for various expressions by backpropagating the loss between the predicted and intended expression through the classification network and the generator network. To improve transfer between human training images and images of different robots, we propose using extracted features in both the classifier and the generator network. Unlike most studies on facial expression generation, ExGenNets can produce multiple configurations for each facial expression and can be transferred between robots. Experimental evaluations on two robots with highly human-like faces, Alfie (Furhat Robot) and the android robot Elenoide, show that ExGenNet can successfully generate sets of joint configurations for predefined facial expressions on both robots. This ability to generate realistic facial expressions was further validated in a pilot study in which the majority of human subjects accurately recognized most of the generated expressions on both robots.
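The optimization loop described above, backpropagating the expression loss through the classifier and generator to update the joint configuration itself, can be sketched as follows. G and C here are small random stand-in networks, not the trained ExGenNet models, and the sizes (12 joints, 7 expressions) are illustrative assumptions.

```python
import torch
from torch import nn

torch.manual_seed(0)
n_joints, n_expr = 12, 7  # illustrative sizes, not from the paper

# Stand-ins: G maps joint configurations to image features,
# C maps features to expression logits. Both stay fixed.
G = nn.Sequential(nn.Linear(n_joints, 64), nn.Tanh(), nn.Linear(64, 32))
C = nn.Sequential(nn.Linear(32, 32), nn.ReLU(), nn.Linear(32, n_expr))
for p in list(G.parameters()) + list(C.parameters()):
    p.requires_grad_(False)  # networks frozen; only the joints move

joints = torch.zeros(1, n_joints, requires_grad=True)
target = torch.tensor([3])  # index of the intended expression
opt = torch.optim.Adam([joints], lr=0.05)
loss_fn = nn.CrossEntropyLoss()

init_loss = loss_fn(C(G(joints)), target).item()
for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(C(G(joints)), target)
    loss.backward()  # gradient flows through C and G into the joints
    opt.step()
final_loss = loss_fn(C(G(joints)), target).item()

print(round(init_loss, 3), round(final_loss, 3))
```

Restarting this loop from different initial joint values is one way such an approach can yield multiple configurations for the same target expression.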

