Face Adaptation—Investigating Nonconfigural Saturation Alterations

i-Perception ◽  
2021 ◽  
Vol 12 (6) ◽  
pp. 204166952110563
Author(s):  
Ronja Mueller ◽  
Sandra Utz ◽  
Claus-Christian Carbon ◽  
Tilo Strobach

Recognizing familiar faces requires a comparison of the incoming perceptual information with mental face representations stored in memory. Mounting evidence indicates that these representations adapt quickly to recently perceived facial changes. This becomes apparent in face adaptation studies, where exposure to a strongly manipulated face alters the perception of subsequent face stimuli: original, non-manipulated face images then appear to be manipulated, while images similar to the adaptor are perceived as “normal.” The face adaptation paradigm is therefore a useful tool for investigating the information stored in facial memory. So far, most face adaptation studies have focused on configural (second-order relational) face information and have largely neglected non-configural face information (i.e., information that does not affect spatial relations within the face), such as color, even though several (non-adaptation) studies have demonstrated the importance of color information in face perception and identification. The present study therefore focuses on adaptation effects for saturation, a non-configural color dimension, and compares the results with previous findings on brightness. The study reveals differences in effect pattern and robustness, indicating that adaptation effects vary considerably even within the same class of non-configural face information.
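The abstract does not spell out how the saturation of the adaptor images was altered; as a rough illustration only, a saturation-manipulated adaptor could be produced along these lines (a minimal sketch using Pillow; the file names and scaling factors are hypothetical):

```python
from PIL import Image, ImageEnhance

# Load a face photograph (hypothetical file name).
face = Image.open("face_original.png").convert("RGB")

# Scale chromatic saturation while leaving geometry untouched;
# a factor < 1 desaturates, a factor > 1 oversaturates the adaptor image.
adaptor_desaturated = ImageEnhance.Color(face).enhance(0.4)
adaptor_oversaturated = ImageEnhance.Color(face).enhance(1.6)

adaptor_desaturated.save("adaptor_low_saturation.png")
adaptor_oversaturated.save("adaptor_high_saturation.png")
```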

2005 ◽  
Vol 272 (1566) ◽  
pp. 897-904 ◽  
Author(s):  
David A Leopold ◽  
Gillian Rhodes ◽  
Kai-Markus Müller ◽  
Linda Jeffery

Several recent demonstrations using visual adaptation have revealed high-level aftereffects for complex patterns including faces. While traditional aftereffects involve perceptual distortion of simple attributes such as orientation or colour that are processed early in the visual cortical hierarchy, face adaptation affects perceived identity and expression, which are thought to be products of higher-order processing. And, unlike most simple aftereffects, those involving faces are robust to changes in scale, position and orientation between the adapting and test stimuli. These differences raise the question of how closely related face aftereffects are to traditional ones. Little is known about the build-up and decay of the face aftereffect, and the similarity of these dynamic processes to traditional aftereffects might provide insight into this relationship. We examined the effect of varying the duration of both the adapting and test stimuli on the magnitude of perceived distortions in face identity. We found that, just as with traditional aftereffects, the identity aftereffect grew logarithmically stronger as a function of adaptation time and exponentially weaker as a function of test duration. Even the subtle aspects of these dynamics, such as the power-law relationship between the adapting and test durations, closely resembled those of other aftereffects. These results were obtained with two different sets of face stimuli that differed greatly in their low-level properties. We postulate that the mechanisms governing these shared dynamics may be dissociable from the responses of feature-selective neurons in the early visual cortex.
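The abstract gives the qualitative form of these dynamics rather than a fitted equation; purely as an illustration, the reported pattern (logarithmic build-up with adaptation time, roughly exponential decay with test duration) can be sketched as follows (parameter values are arbitrary placeholders, not the authors' fits):

```python
import numpy as np

def aftereffect_magnitude(adapt_s, test_s, gain=1.0, decay_s=2.0):
    """Toy model of the identity aftereffect described in the abstract.

    Build-up grows roughly logarithmically with adaptation duration
    (adapt_s, seconds) and falls roughly exponentially with test duration
    (test_s, seconds). Parameters are illustrative only.
    """
    return gain * np.log1p(adapt_s) * np.exp(-test_s / decay_s)

# Example: stronger effect for long adaptation and brief test exposure.
print(aftereffect_magnitude(adapt_s=16.0, test_s=0.2))
print(aftereffect_magnitude(adapt_s=1.0, test_s=1.6))
```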


2019 ◽  
Vol 31 (12) ◽  
pp. 2348-2367
Author(s):  
Tian Han ◽  
Xianglei Xing ◽  
Jiawen Wu ◽  
Ying Nian Wu

A recent Cell paper (Chang & Tsao, 2017) reports an interesting discovery. For the face stimuli generated by a pretrained active appearance model (AAM), the responses of neurons in the areas of the primate brain that are responsible for face recognition exhibit a strong linear relationship with the shape variables and appearance variables of the AAM that generates the face stimuli. In this letter, we show that this behavior can be replicated by a deep generative model, the generator network, that assumes that the observed signals are generated by latent random variables via a top-down convolutional neural network. Specifically, we learn the generator network from the face images generated by a pretrained AAM model using a variational autoencoder, and we show that the inferred latent variables of the learned generator network have a strong linear relationship with the shape and appearance variables of the AAM model that generates the face images. Unlike the AAM model, which has an explicit shape model where the shape variables generate the control points or landmarks, the generator network has no such shape model or shape variables. Yet it can learn the shape knowledge in the sense that some of the latent variables of the learned generator network capture the shape variations in the face images generated by AAM.
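The reported linear relationship can be quantified with an ordinary least-squares readout from the inferred latent codes to the AAM parameters. The sketch below uses synthetic placeholder arrays in place of the real latents and AAM variables (names, shapes, and the train/test split are arbitrary):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic placeholders (one row per AAM-generated face image):
#   z          : latent variables inferred by the learned generator network
#   aam_params : shape and appearance variables of the AAM that made each image
rng = np.random.default_rng(0)
z = rng.normal(size=(2000, 100))
aam_params = z @ rng.normal(size=(100, 50)) + 0.1 * rng.normal(size=(2000, 50))

# Linear readout from inferred latents to AAM variables; a high held-out R^2
# indicates the strong linear relationship described in the abstract.
fit = LinearRegression().fit(z[:1500], aam_params[:1500])
print("held-out R^2:", fit.score(z[1500:], aam_params[1500:]))
```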


2021 ◽  
Vol 12 ◽  
Author(s):  
Anjana Lakshmi ◽  
Bernd Wittenbrink ◽  
Joshua Correll ◽  
Debbie S. Ma

This paper serves three specific goals. First, it reports the development of an Indian Asian face set, to serve as a free resource for psychological research. Second, it examines whether the use of pre-tested U.S.-specific norms for stimulus selection or weighting may introduce experimental confounds in studies involving non-U.S. face stimuli and/or non-U.S. participants. Specifically, it examines whether subjective impressions of the face stimuli are culturally dependent, and the extent to which these impressions reflect social stereotypes and ingroup favoritism. Third, the paper investigates whether differences in face familiarity impact accuracy in identifying face ethnicity. To this end, face images drawn from volunteers in India as well as a subset of Caucasian face images from the Chicago Face Database were presented to Indian and U.S. participants, and rated on a range of measures, such as perceived attractiveness, warmth, and social status. Results show significant differences in the overall valence of ratings of ingroup and outgroup faces. In addition, the impression ratings show minor differentiation along two basic stereotype dimensions, competence and trustworthiness, but not warmth. We also find participants to show significantly greater accuracy in correctly identifying the ethnicity of ingroup faces, relative to outgroup faces. This effect is found to be mediated by ingroup-outgroup differences in perceived group typicality of the target faces. Implications for research on intergroup relations in a cross-cultural context are discussed.
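The mediation analysis mentioned at the end (accuracy differences between ingroup and outgroup faces mediated by perceived group typicality) follows standard regression logic; a minimal sketch with synthetic placeholder data and hypothetical column names is:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic placeholder data: one row per rated target face.
#   ingroup    : 1 if the face belongs to the rater's own group, 0 otherwise
#   typicality : perceived group typicality rating of the face
#   accuracy   : ethnicity-identification accuracy for the face
np.random.seed(0)
df = pd.DataFrame({"ingroup": np.random.binomial(1, 0.5, 400)})
df["typicality"] = 3 + 1.2 * df["ingroup"] + np.random.normal(size=400)
df["accuracy"] = 0.2 * df["ingroup"] + 0.3 * df["typicality"] + np.random.normal(size=400)

# Path a: ingroup status -> perceived typicality.
a = smf.ols("typicality ~ ingroup", data=df).fit()
# Paths b and c': accuracy regressed on typicality and ingroup status.
bc = smf.ols("accuracy ~ typicality + ingroup", data=df).fit()

print("indirect (mediated) effect:", a.params["ingroup"] * bc.params["typicality"])
print("direct effect (c'):", bc.params["ingroup"])
```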


2013 ◽  
Vol 9 (6) ◽  
pp. 20130633 ◽  
Author(s):  
C. E. Lefevre ◽  
M. P. Ewbank ◽  
A. J. Calder ◽  
E. von dem Hagen ◽  
D. I. Perrett

Recently, the importance of skin colour for facial attractiveness has been recognized. In particular, dietary carotenoid-induced skin colour has been proposed as a signal of health and therefore attractiveness. While perceptual results are highly consistent, it is currently not clear whether carotenoid skin colour is preferred because it provides a cue to current health condition in humans or whether it is simply seen as a more aesthetically pleasing colour, independently of skin-specific signalling properties. Here, we addressed this question by comparing attractiveness ratings of faces to corresponding ratings of meaningless scrambled face images matching the colours and contrasts found in the face. We produced sets of face and non-face stimuli with either healthy (high-carotenoid coloration) or unhealthy (low-carotenoid coloration) colour and asked participants for attractiveness ratings. Results showed that, while increased carotenoid coloration significantly improved the attractiveness of faces, there was no equivalent effect on the perception of scrambled images. These findings are consistent with a specific signalling system of current condition through skin coloration in humans and indicate that preferences are not caused by sensory biases in observers.
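The control stimuli are described only as scrambled images matching the colours and contrasts of the source faces; one simple way to construct such a control (a rough sketch, not necessarily the authors' procedure) is to shuffle small blocks of the face image, which preserves the pixel colour distribution while destroying facial structure:

```python
import random
import numpy as np
from PIL import Image

def block_scramble(path, block=16, seed=0):
    """Shuffle non-overlapping blocks of an image.

    The pixel (colour and contrast) distribution is preserved exactly while
    the facial configuration is destroyed. Block size, seed, and file names
    are arbitrary illustrative choices.
    """
    img = np.asarray(Image.open(path).convert("RGB"))
    h = (img.shape[0] // block) * block
    w = (img.shape[1] // block) * block
    img = img[:h, :w]
    blocks = [img[y:y + block, x:x + block]
              for y in range(0, h, block) for x in range(0, w, block)]
    random.Random(seed).shuffle(blocks)
    rows = [np.hstack(blocks[i:i + w // block])
            for i in range(0, len(blocks), w // block)]
    return Image.fromarray(np.vstack(rows))

block_scramble("face_high_carotenoid.png").save("scrambled_control.png")
```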


2021 ◽  
Vol 17 (2) ◽  
pp. 176-192
Author(s):  
Ronja Mueller ◽  
Sandra Utz ◽  
Claus-Christian Carbon ◽  
Tilo Strobach

Inspecting new visual information in a face can affect the perception of subsequently seen faces. In experimental settings, for example, previously seen manipulated versions of a face can clearly bias the participant’s perception of subsequent images: original images are then perceived as manipulated in the opposite direction of the adaptor, while images that are more similar to the adaptor are perceived as normal or natural. These so-called face adaptation effects can be a useful tool for revealing which facial information is processed and stored in facial memory. Most experiments so far have investigated these effects using variants of second-order relational (configural) information (e.g., spatial relations between facial features). However, non-configural face information (e.g., color) has mainly been neglected in face adaptation research, although this type of information plays an important role in face processing. Therefore, we investigated adaptation effects of non-configural face information by employing brightness alterations. Our results provide clear evidence for brightness adaptation effects (Experiment 1). These effects are face-specific to some extent (Experiments 2 and 3) and robust over time (Experiments 4 and 5). They support the assumption that non-configural face information is not only relevant in face perception but also in face retention. Brightness information seems to be stored in memory and thus is even involved in face recognition.


Author(s):  
Tian Han ◽  
Jiawen Wu ◽  
Ying Nian Wu

A recent Cell paper [Chang and Tsao, 2017] reports an interesting discovery. For the face stimuli generated by a pre-trained active appearance model (AAM), the responses of neurons in the areas of the primate brain that are responsible for face recognition exhibit a strong linear relationship with the shape variables and appearance variables of the AAM that generates the face stimuli. In this paper, we show that this behavior can be replicated by a deep generative model called the generator network, which assumes that the observed signals are generated by latent random variables via a top-down convolutional neural network. Specifically, we learn the generator network from the face images generated by a pre-trained AAM model using a variational autoencoder, and we show that the inferred latent variables of the learned generator network have a strong linear relationship with the shape and appearance variables of the AAM model that generates the face images. Unlike the AAM model, which has an explicit shape model where the shape variables generate the control points or landmarks, the generator network has no such shape model or shape variables. Yet the generator network can learn the shape knowledge in the sense that some of the latent variables of the learned generator network capture the shape variations in the face images generated by AAM.


2021 ◽  
pp. 003329412110184
Author(s):  
Paola Surcinelli ◽  
Federica Andrei ◽  
Ornella Montebarocci ◽  
Silvana Grandi

Aim of the research: The literature on emotion recognition from facial expressions shows significant differences in recognition ability depending on the stimulus presented. Indeed, affective information is not distributed uniformly across the face, and recent studies have shown the importance of the mouth and eye regions for correct recognition. However, previous studies mainly used facial expressions presented frontally, and those that used profile views relied on between-subjects designs or on children's faces as stimuli. The present research investigates differences in emotion recognition between faces presented in frontal and in profile view using a within-subjects experimental design. Method: The sample comprised 132 Italian university students (88 female; mean age = 24.27 years, SD = 5.89). Face stimuli displayed both frontally and in profile were selected from the KDEF set. Two emotion-specific recognition accuracy scores, one for frontal and one for profile presentation, were computed from the average of correct responses for each emotional expression. In addition, viewing times and response times (RT) were recorded. Results: Frontally presented facial expressions of fear, anger, and sadness were recognized significantly better than facial expressions of the same emotions in profile, while no differences were found for the other emotions. Longer viewing times were also found when faces expressing fear and anger were presented in profile. In the present study, an impairment in recognition accuracy was observed only for those emotions that rely mostly on the eye regions.
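The two accuracy scores are per-emotion means of correct responses in each viewing condition; with trial-level data in a DataFrame (column names and values below are hypothetical), the computation might look like this:

```python
import pandas as pd

# Hypothetical trial-level data: one row per participant x stimulus.
trials = pd.DataFrame({
    "participant": [1, 1, 1, 1, 2, 2, 2, 2],
    "emotion": ["fear", "fear", "anger", "anger"] * 2,
    "view": ["frontal", "profile"] * 4,
    "correct": [1, 0, 1, 1, 1, 0, 0, 1],
    "rt_ms": [850, 1010, 790, 940, 880, 1120, 900, 860],
})

# Emotion-specific accuracy for frontal vs. profile presentations,
# averaged within participants first and then across participants.
per_participant = (trials
                   .groupby(["participant", "emotion", "view"], as_index=False)
                   .agg(accuracy=("correct", "mean"), mean_rt=("rt_ms", "mean")))
summary = (per_participant
           .groupby(["emotion", "view"], as_index=False)[["accuracy", "mean_rt"]]
           .mean())
print(summary)
```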


2021 ◽  
pp. 1-11
Author(s):  
Suphawimon Phawinee ◽  
Jing-Fang Cai ◽  
Zhe-Yu Guo ◽  
Hao-Ze Zheng ◽  
Guan-Chen Chen

The Internet of Things is considerably increasing the level of convenience at home, and the smart door lock is an entry product for smart homes. This work used a Raspberry Pi, chosen for its low cost, as the main control board to apply face recognition technology to a door lock. Installing a control sensing module via the GPIO expansion function of the Raspberry Pi also improved the antitheft mechanism of the door lock. For ease of use, a mobile application (hereafter, app) was developed for users to upload their face images for processing. The app sends the images to Firebase; the program then downloads the images and crops the faces to build a training set. The face detection system was based on machine learning and used the Haar cascade classifier built into OpenCV. The system used four training methods: a convolutional neural network, VGG-16, VGG-19, and ResNet50. After training, the program could recognize the user's face and open the door lock. A prototype was constructed that could control the door lock and the antitheft system and stream real-time images from the camera to the app.
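The abstract does not include the detection code itself; a minimal sketch of the kind of OpenCV Haar cascade face detection such a system relies on when building its training set is shown below (file paths are hypothetical):

```python
import os
import cv2

# Load OpenCV's bundled frontal-face Haar cascade.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

# Hypothetical image uploaded by the user via the mobile app.
frame = cv2.imread("uploaded_face.jpg")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Detect faces and save each cropped face as a training sample.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
os.makedirs("training_set", exist_ok=True)
for i, (x, y, w, h) in enumerate(faces):
    cv2.imwrite(f"training_set/face_{i}.jpg", frame[y:y + h, x:x + w])
```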


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Takao Fukui ◽  
Mrinmoy Chakrabarty ◽  
Misako Sano ◽  
Ari Tanaka ◽  
Mayuko Suzuki ◽  
...  

Eye movements toward sequentially presented face images with or without gaze cues were recorded to investigate whether individuals with autism spectrum disorder (ASD), in comparison to their typically developing (TD) peers, could prospectively perform the task according to gaze cues. Line-drawn face images were presented sequentially for one second each on a laptop display, with the position of the face shifting from side to side and up and down. In the gaze cue condition, the gaze of the face image was directed to the position where the next face would appear. Although the participants with ASD looked less at the eye area of the face image than their TD peers, they performed comparably smooth gaze shifts toward the gaze cue of the face image in the gaze cue condition. This appropriate gaze shift in the ASD group was more evident in the second half of trials than in the first half, as revealed by the mean proportion of fixation time on the eye area relative to valid gaze data in the early phase (during face image presentation) and by the time to first fixation on the eye area. These results suggest that individuals with ASD may benefit from a short period of trial experience, increasingly making use of the gaze cue.
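The key eye-tracking measure is the proportion of fixation time falling on the eye area while a face is on screen; with fixation-level data (column names, coordinates, and AOI bounds below are hypothetical), it could be computed roughly as follows:

```python
import pandas as pd

# Hypothetical fixation-level data for one trial: one row per fixation
# recorded during the one-second face presentation.
fixations = pd.DataFrame({
    "x": [512, 530, 600, 515],          # gaze position in pixels
    "y": [300, 310, 450, 305],
    "duration_ms": [180, 220, 150, 200],
})

# Hypothetical bounding box of the eye-area AOI on this face image.
eye_aoi = {"x0": 480, "x1": 560, "y0": 280, "y1": 330}

in_aoi = (fixations["x"].between(eye_aoi["x0"], eye_aoi["x1"])
          & fixations["y"].between(eye_aoi["y0"], eye_aoi["y1"]))

# Proportion of valid fixation time spent on the eye area in this trial.
prop_eye = fixations.loc[in_aoi, "duration_ms"].sum() / fixations["duration_ms"].sum()
print(prop_eye)
```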


Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2003 ◽  
Author(s):  
Xiaoliang Zhu ◽  
Shihao Ye ◽  
Liang Zhao ◽  
Zhicheng Dai

The AFEW (Acted Facial Expressions in the Wild) dataset, used in the EmotiW (Emotion Recognition in the Wild) challenge, is a popular benchmark for emotion recognition under real-world constraints, including uneven illumination, head deflection, and varying facial pose. In this paper, we propose a convenient facial expression recognition cascade network comprising spatial feature extraction, hybrid attention, and temporal feature extraction. First, faces are detected in each frame of a video sequence, and the corresponding face ROI (region of interest) is extracted to obtain the face images; the face images in each frame are then aligned based on the positions of the facial feature points. Second, the aligned face images are fed to a residual neural network to extract the spatial features of the corresponding facial expressions, and these spatial features are passed to the hybrid attention module to obtain fused facial expression features. Finally, the fused features are fed to a gated recurrent unit to extract the temporal features of the facial expressions, and the temporal features are passed to a fully connected layer to classify the expressions. Experiments on the CK+ (Extended Cohn-Kanade), Oulu-CASIA, and AFEW datasets yielded recognition accuracy rates of 98.46%, 87.31%, and 53.44%, respectively. These results show that the proposed method is competitive with state-of-the-art approaches and improves accuracy on AFEW by more than 2%, indicating a clear advantage for facial expression recognition in natural environments.
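The abstract describes the cascade only at a high level; a minimal PyTorch sketch of that structure, with a ResNet backbone per frame, a simple attention-weighted fusion standing in for the hybrid attention module, and a GRU over time (all sizes chosen arbitrarily, not the authors' architecture), might look like this:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class CascadeFER(nn.Module):
    """Rough sketch of the spatial -> attention -> temporal cascade."""

    def __init__(self, n_classes=7, hidden=256):
        super().__init__()
        backbone = resnet18(weights=None)          # spatial feature extractor
        backbone.fc = nn.Identity()                # keep 512-d frame features
        self.backbone = backbone
        self.attn = nn.Linear(512, 1)              # simple stand-in for hybrid attention
        self.gru = nn.GRU(512, hidden, batch_first=True)  # temporal features
        self.classifier = nn.Linear(hidden, n_classes)

    def forward(self, clips):                      # clips: (B, T, 3, H, W)
        b, t = clips.shape[:2]
        feats = self.backbone(clips.flatten(0, 1)).view(b, t, 512)
        weights = torch.softmax(self.attn(feats), dim=1)   # attention over frames
        fused = feats * weights                    # attention-reweighted features
        _, h = self.gru(fused)                     # last hidden state
        return self.classifier(h[-1])

logits = CascadeFER()(torch.randn(2, 8, 3, 112, 112))      # shape (2, n_classes)
print(logits.shape)
```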

