dynamic faces
Recently Published Documents

TOTAL DOCUMENTS: 70 (FIVE YEARS: 15)
H-INDEX: 15 (FIVE YEARS: 0)

2022 · Vol. 191 · pp. 107970
Author(s): Yu Zhou, Jia Lin, Guomei Zhou

2022 · Vol. 12 (1)
Author(s): Nihan Alp, Huseyin Ozkan

Abstract: Integrating the spatiotemporal information acquired from the highly dynamic world around us is essential to navigate, reason, and decide properly. Although this is particularly important in a face-to-face conversation, very little research to date has specifically examined the neural correlates of temporal integration in dynamic face perception. Here we present statistically robust observations regarding the brain activations, measured via electroencephalography (EEG), that are specific to temporal integration. To that end, we generate videos of neutral faces of individuals and of non-face objects, modulate the contrast of the even and odd frames at two specific frequencies ($f_1$ and $f_2$) in an interlaced manner, and measure the steady-state visual evoked potential as participants view the videos. We then analyze the intermodulation components (IMs: $nf_1 \pm mf_2$, linear combinations of the fundamentals with integer multipliers), which reflect nonlinear processing and, by design, indicate temporal integration. We show that electrodes around the medial temporal, inferior, and medial frontal areas respond strongly and selectively when viewing dynamic faces, which manifests the essential processes underlying our ability to perceive and understand our social world. The generation of IMs is only possible if even and odd frames are processed in succession and integrated temporally; the strong IMs in our frequency spectrum analysis therefore show that the time between frames (1/60 s) is sufficient for temporal integration.
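To make the intermodulation logic concrete, here is a minimal Python sketch of how the IM frequencies $nf_1 \pm mf_2$ can be enumerated and read off an amplitude spectrum. The fundamentals f1 = 7.5 Hz and f2 = 12 Hz are illustrative placeholders, not the study's actual stimulation frequencies, and the helper names are our own.

import numpy as np

def im_components(f1, f2, max_order=3):
    """Enumerate intermodulation frequencies n*f1 +/- m*f2 for small integer n, m."""
    ims = set()
    for n in range(1, max_order + 1):
        for m in range(1, max_order + 1):
            for f in (n * f1 + m * f2, abs(n * f1 - m * f2)):
                if f > 0:
                    ims.add(round(f, 3))
    return sorted(ims)

def amplitude_at(signal, fs, freq):
    """Amplitude of the FFT bin closest to freq, for a signal sampled at fs Hz."""
    spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    return spectrum[np.argmin(np.abs(freqs - freq))]

# Placeholder fundamentals; the real values come from the stimulus design.
f1, f2 = 7.5, 12.0
print(im_components(f1, f2, max_order=2))  # e.g. 4.5, 16.5, 19.5, 27.0, ...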


2021 · Vol. 2 · pp. 117-122
Author(s): Nurul Hidayati

This study focused on the dynamic process a 26-year-old woman went through in building her resilience. The woman had been verbally and emotionally abused by her stepmother, and that experience had a significant impact on her life. This article's primary purpose is to describe the dynamics of how the woman faces her problems and how she uses the counseling process, both e-counseling and face-to-face support, to transform herself into a resilient and forgiving person. It also takes into account the woman's risk factors and protective factors. This is a case study; data were collected from a single case using in-depth interviews, observation, and psychological assessment, and were analyzed using thematic analysis. The results showed that the counseling process helped the woman grow, raised her ability to cope with her problems, made her more resilient and more forgiving, and released some of her burdens.


2021 · Vol. 21 (9) · pp. 2361
Author(s): Ruoying Zheng, Guomei Zhou

2021 · Vol. 21 (9) · pp. 2600
Author(s): Antoine Coutrot, Astrid Kibleur, Marion Trousselard, Barbara Lefranc, Céline Ramdani, ...

2021 · Vol. 8 (7) · pp. 202159
Author(s): Jonathan Yi, Philip Pärnamets, Andreas Olsson

Responding appropriately to others' facial expressions is key to successful social functioning. Despite the large body of work on face perception and spontaneous responses to static faces, little is known about responses to faces in dynamic, naturalistic situations, and no study has investigated how goal-directed responses to faces are influenced by learning during dyadic interactions. To experimentally model such situations, we developed a novel method based on online integration of electromyography signals from the participants' face (corrugator supercilii and zygomaticus major) during facial expression exchange with dynamic faces displaying happy and angry facial expressions. Fifty-eight participants learned by trial and error to avoid receiving aversive stimulation by either reciprocating (congruent condition) or responding with the opposite expression (incongruent condition) to the expression of the target face. Our results validated our method, showing that participants learned to optimize their facial behaviour, and replicated earlier findings of faster and more accurate responses in congruent versus incongruent conditions. Moreover, participants performed better on trials with smiling, as compared with frowning, faces, suggesting it might be easier to adapt facial responses to positively associated expressions. Finally, we applied drift diffusion and reinforcement learning models to provide a mechanistic explanation for our findings, which helped clarify the decision-making processes underlying our experimental manipulation. Our results introduce a new method for studying learning and decision-making in facial expression exchange, in which facial expression selection must be gradually adapted to both social and non-social reinforcements.
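As a rough illustration of the reinforcement-learning component, the toy Python sketch below uses a simple Rescorla-Wagner/Q-learning update with epsilon-greedy choice to learn which facial response (smile or frown) avoids the aversive outcome for each target expression. This is our own minimal stand-in, not the authors' fitted model; all names and parameter values are assumptions.

import random

ALPHA = 0.1    # learning rate (assumed)
EPSILON = 0.1  # exploration probability (assumed)

# Learned value of each facial response given the target's expression.
q = {("happy", "smile"): 0.0, ("happy", "frown"): 0.0,
     ("angry", "smile"): 0.0, ("angry", "frown"): 0.0}

def choose(expression):
    """Epsilon-greedy choice of facial response to the target's expression."""
    if random.random() < EPSILON:
        return random.choice(["smile", "frown"])
    return max(["smile", "frown"], key=lambda a: q[(expression, a)])

def update(expression, action, reward):
    """One-step Rescorla-Wagner / Q-learning update toward the outcome."""
    key = (expression, action)
    q[key] += ALPHA * (reward - q[key])

# Toy task: reciprocating (the congruent mapping) avoids the aversive outcome.
correct = {"happy": "smile", "angry": "frown"}
for trial in range(500):
    face = random.choice(["happy", "angry"])
    act = choose(face)
    update(face, act, reward=1.0 if act == correct[face] else -1.0)

print(q)  # values of congruent responses converge toward +1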


2021 · Vol. 11 (2) · pp. 231
Author(s): Lisa M. Oakes, Michaela C. DeBolt, Aaron G. Beckner, Annika T. Voss, Lisa M. Cantrell

Research using eye tracking methods has revealed that between 6 and 10 months of age, infants viewing faces begin to shift visual attention from the eye region to the mouth region. Moreover, this shift varies with stimulus characteristics and with infants' experience with faces and languages. The current study examined the eye movements of a racially diverse sample of 98 infants between 7.5 and 10.5 months of age as they viewed movies of White and Asian American women reciting a nursery rhyme (the auditory track of the movies was replaced with music to eliminate the influence of speech on infants' looking behavior). Using an analytic approach inspired by multiverse analysis, several measures of infants' eye gaze were examined to identify patterns that were robust across different analyses. Although infants generally preferred the lower region of the faces, i.e., the region containing the mouth, this preference depended on the stimulus characteristics and was stronger for infants whose typical experience included faces of more races and for infants who were exposed to multiple languages. These results show how we can leverage the richness of eye tracking data with infants to add to our understanding of the factors that influence infants' visual exploration of faces.
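One concrete way to quantify the lower-face (mouth-region) preference described above is the proportion of fixation time spent below the vertical midline of the face. The Python sketch below is a hypothetical illustration of such a measure; the AOI boundaries, coordinate convention, and data are invented, not taken from the study.

import numpy as np

def lower_face_proportion(fix_y, fix_dur, face_top, face_bottom):
    """Proportion of on-face fixation time falling in the lower half of the face.

    fix_y: fixation y-coordinates (pixels, screen origin at the top)
    fix_dur: fixation durations (ms), same length as fix_y
    face_top, face_bottom: vertical extent of the face AOI
    """
    fix_y = np.asarray(fix_y, dtype=float)
    fix_dur = np.asarray(fix_dur, dtype=float)
    midline = (face_top + face_bottom) / 2.0
    on_face = (fix_y >= face_top) & (fix_y <= face_bottom)
    lower = on_face & (fix_y > midline)  # below the midline = mouth region
    total = fix_dur[on_face].sum()
    return fix_dur[lower].sum() / total if total > 0 else np.nan

# Hypothetical fixations on a face spanning y = 100..500 pixels.
print(lower_face_proportion([150, 350, 420, 480], [200, 300, 250, 400], 100, 500))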


2020 · Vol. 10 (1)
Author(s): Marlee M. Vandewouw, EunJung Choi, Christopher Hammill, Paul Arnold, Russell Schachar, ...

Abstract: Autism spectrum disorder (ASD) is classically associated with poor face processing skills, yet evidence suggests that those with obsessive-compulsive disorder (OCD) and attention deficit hyperactivity disorder (ADHD) also have difficulties understanding emotions. We determined the neural underpinnings of dynamic emotional face processing across these three clinical paediatric groups, including developmental trajectories, compared with typically developing (TD) controls. We studied 279 children, 5-19 years of age; 57 were excluded due to excessive motion in fMRI, leaving 222: 87 ASD, 44 ADHD, 42 OCD and 49 TD. Groups were sex- and age-matched. Dynamic faces (happy, angry) and dynamic flowers were presented in 18 pseudo-randomized blocks while fMRI data were collected on a 3T scanner. Group-by-age interactions and group difference contrasts were analysed for faces vs. flowers and for happy vs. angry faces. TD children demonstrated different activity patterns across the four contrasts; these patterns were more limited and distinct for the NDDs. Processing happy and angry faces compared to flowers yielded similar activation in occipital regions in the NDDs compared to TDs. Processing happy compared to angry faces showed an age-by-group interaction in the superior frontal gyrus, increasing with age for ASD and OCD and decreasing for TD. Children with ASD, ADHD and OCD differentiated less between dynamic faces and dynamic flowers, with most of the effects seen in occipital and temporal regions, suggesting that the emotional difficulties shared across NDDs may be partly attributable to shared atypical visual information processing.
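To illustrate what a group-by-age interaction test looks like in practice, here is a minimal Python sketch that fits an ordinary least squares model with an interaction term to hypothetical per-subject contrast values. It is a schematic stand-in for a single voxel or region, not the authors' fMRI pipeline; the simulated data and effect sizes are invented.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-subject contrast values (e.g., a faces > flowers beta).
n = 60
age = rng.uniform(5, 19, n)                # years
group = np.repeat([0, 1], n // 2)          # 0 = TD, 1 = clinical (illustrative)
y = 0.5 + 0.02 * age - 0.04 * group * age + rng.normal(0, 0.1, n)

# Design matrix: intercept, group, age, group-by-age interaction.
X = np.column_stack([np.ones(n), group, age, group * age])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# t-statistic for the interaction term (difference in age slopes between groups).
resid = y - X @ beta
dof = n - X.shape[1]
sigma2 = resid @ resid / dof
cov = sigma2 * np.linalg.inv(X.T @ X)
t_interaction = beta[3] / np.sqrt(cov[3, 3])
print(f"interaction beta={beta[3]:.3f}, t={t_interaction:.2f}")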


Author(s): D P M Fattah, D Kurniasih
