Neurogastronomy as a Tool for Evaluating Emotions and Visual Preferences of Selected Food Served in Different Ways

Foods, 2021, Vol 10 (2), pp. 354
Author(s): Jakub Berčík, Johana Paluchová, Katarína Neomániová

The appearance of food creates certain expectations regarding the harmony of taste, delicacy, and overall quality, which subsequently affects not only intake itself but also many other aspects of the behavior of customers of catering facilities. The main goal of this article is to find out what effect the visual design of food (waffles) prepared from the same ingredients and served in three different ways (on a stone plate, street-food style, and on a classic white plate) has on consumer preferences. In addition to the classic tablet-assisted personal interview (TAPI) tools, biometric methods such as eye tracking and face reading were used to obtain unconscious feedback. During testing, the air quality in the room was monitored with an Extech device, and the influence of the visual design of the food on the perception of its smell was checked. At the end of the paper, we point out the importance of using classical feedback-collection techniques (TAPI) and extending them with measurements of subconscious reactions based on monitoring the eye movements and facial expressions of respondents, which provides a whole new perspective on the perception of visual design and food serving, as well as more effective targeting and use of corporate resources.

PLoS ONE, 2021, Vol 16 (1), pp. e0245777
Author(s): Fanny Poncet, Robert Soussignan, Margaux Jaffiol, Baptiste Gaudelus, Arnaud Leleu, et al.

Recognizing facial expressions of emotions is a fundamental ability for adaptation to the social environment. To date, it remains unclear whether the spatial distribution of eye movements predicts accurate recognition or, on the contrary, confusion in the recognition of facial emotions. In the present study, we asked participants to recognize facial emotions while monitoring their gaze behavior using eye-tracking technology. In Experiment 1a, 40 participants (20 women) performed a classic facial emotion recognition task with a 5-choice procedure (anger, disgust, fear, happiness, sadness). In Experiment 1b, a second group of 40 participants (20 women) was exposed to the same materials and procedure except that they were instructed to say whether (i.e., Yes/No response) the face expressed a specific emotion (e.g., anger), with the five emotion categories tested in distinct blocks. In Experiment 2, two groups of 32 participants performed the same task as in Experiment 1a while exposed to partial facial expressions composed of action units (AUs) present or absent in different parts of the face (top, middle, or bottom). The coding of the AUs produced by the models showed complex facial configurations for most emotional expressions, with several AUs in common. Eye-tracking data indicated that relevant facial actions were actively gazed at by the decoders during both accurate recognition and errors. False recognition was mainly associated with the additional visual exploration of less relevant facial actions in regions containing ambiguous AUs or AUs relevant to other emotional expressions. Finally, the recognition of facial emotions from partial expressions showed that no single facial action was necessary to effectively communicate an emotional state. Rather, the recognition of facial emotions relied on the integration of a complex set of facial cues.


2021
Author(s): Louisa Kulke, Lena Brümmer, Arezoo Pooresmaeili, Annekathrin Schacht

In everyday life, faces with emotional expressions quickly attract attention and eye movements. To study the neural mechanisms of such emotion-driven attention by means of event-related brain potentials (ERPs), tasks that employ covert shifts of attention are commonly used, in which participants need to inhibit natural eye movements towards stimuli. It remains unclear, however, how shifts of attention to emotional faces with and without eye movements differ from each other. The current preregistered study aimed to investigate neural differences between covert and overt emotion-driven attention. We combined eye tracking with measurements of ERPs to compare shifts of attention to faces with happy, angry, or neutral expressions when eye movements were either executed (Go conditions) or withheld (No-go conditions). Happy and angry faces led to larger EPN amplitudes, shorter latencies of the P1 component, and faster saccades, suggesting that emotional expressions significantly affected shifts of attention. Several ERP components (N170, EPN, LPC) were augmented in amplitude when attention was shifted with an eye movement, indicating enhanced neural processing of faces when eye movements had to be executed together with a reallocation of attention. However, the modulation of ERPs by facial expressions did not differ between the Go and No-go conditions, suggesting that emotional content enhances both covert and overt shifts of attention. In summary, our results indicate that overt and covert attention shifts differ but are comparably affected by emotional content.


2018, Vol 122 (4), pp. 1432-1448
Author(s): Charlott Maria Bodenschatz, Anette Kersting, Thomas Suslow

Orientation of gaze toward specific regions of the face, such as the eyes or the mouth, helps to correctly identify the underlying emotion. The present eye-tracking study investigates whether facial features diagnostic of specific emotional facial expressions are processed preferentially, even when presented outside of subjective awareness. Eye movements of 73 healthy individuals were recorded while they completed an affective priming task. Primes (pictures of happy, neutral, sad, angry, and fearful facial expressions) were presented for 50 ms with forward and backward masking. Participants had to evaluate subsequently presented neutral faces. Results of an awareness check indicated that participants were subjectively unaware of the emotional primes. No affective priming effects were observed, but briefly presented emotional facial expressions elicited early eye movements toward diagnostic regions of the face. Participants oriented their gaze more rapidly to the eye region of the neutral mask after a fearful facial expression. After a happy facial expression, participants oriented their gaze more rapidly to the mouth region of the neutral mask. Moreover, participants dwelled longest on the eye region after a fearful facial expression, and the dwell time on the mouth region was longest for happy facial expressions. Our findings support the idea that briefly presented fearful and happy facial expressions trigger an automatic mechanism that is sensitive to the distribution of relevant facial features and facilitates the orientation of gaze toward them.
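Dwell time on a facial region is typically computed by summing the time that gaze samples spend inside an area of interest (AOI). The sketch below is a hypothetical illustration of that metric (not the authors' analysis pipeline); the AOI coordinates, gaze points, and sampling interval are invented for the example:

```python
def dwell_time_ms(gaze_points, region, sample_interval_ms):
    """Sum the time gaze samples spend inside a rectangular AOI.

    gaze_points: iterable of (x, y) screen coordinates.
    region: AOI as (x0, y0, x1, y1) with x0 <= x1 and y0 <= y1.
    sample_interval_ms: duration represented by one gaze sample.
    """
    x0, y0, x1, y1 = region
    inside = sum(1 for x, y in gaze_points
                 if x0 <= x <= x1 and y0 <= y <= y1)
    return inside * sample_interval_ms

# Hypothetical "eye region" AOI in pixels and a short gaze trace
# sampled at 250 Hz (4 ms per sample).
eye_region = (100, 80, 300, 160)
gaze = [(150, 100), (200, 120), (400, 300), (250, 150)]
print(dwell_time_ms(gaze, eye_region, sample_interval_ms=4))  # 12
```

In practice, dwell time is often computed over detected fixations rather than raw samples, but the AOI-membership logic is the same.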


2020
Author(s): David Harris, Mark Wilson, Tim Holmes, Toby de Burgh, Samuel James Vine

Head-mounted eye tracking has been fundamental for developing an understanding of sporting expertise, as the way in which performers sample visual information from the environment is a major determinant of successful performance. There is, however, a long-running tension between the desire to study realistic, in-situ gaze behaviour and the difficulty of acquiring accurate ocular measurements in dynamic and fast-moving sporting tasks. Here, we describe how immersive technologies, such as virtual reality, offer an increasingly compelling approach for conducting eye movement research in sport. The possibility of studying gaze behaviour in representative and realistic environments, but with high levels of experimental control, could enable significant strides forward for eye tracking in sport and improve understanding of how eye movements underpin sporting skills. By providing a rationale for virtual reality as an optimal environment for eye tracking research, as well as outlining practical considerations related to hardware, software, and data analysis, we hope to guide researchers and practitioners in the use of this approach.


2019, Vol 69 (3), pp. 393-400
Author(s): Soichiro Matsuda, Takahide Omori, Joseph P. McCleery, Junichi Yamamoto

Healthcare, 2020, Vol 9 (1), pp. 10
Author(s): Chong-Bin Tsai, Wei-Yu Hung, Wei-Yen Hsu

Optokinetic nystagmus (OKN) is an involuntary eye movement induced by motion of a large proportion of the visual field. It consists of a "slow phase (SP)" with eye movements in the same direction as the movement of the pattern, and a "fast phase (FP)" with saccadic eye movements in the opposite direction. Study of OKN can reveal valuable information in ophthalmology, neurology, and psychology. However, current commercially available high-resolution, research-grade eye trackers are usually expensive. We developed a novel, fast, and effective system combined with a low-cost eye-tracking device to accurately and quantitatively measure OKN eye movements. The experimental results indicate that the proposed method achieves fast and promising results in comparison with several traditional approaches.


2020, pp. 1-27
Author(s): Katja I. Haeuser, Shari Baum, Debra Titone

Comprehending idioms (e.g., bite the bullet) requires that people appreciate their figurative meanings while suppressing literal interpretations of the phrase. While much is known about idioms, an open question is how healthy aging and noncanonical form presentation affect idiom comprehension when the task is to read sentences silently for comprehension. Here, younger and older adults read sentences containing idioms or literal phrases, while we monitored their eye movements. Idioms were presented in a canonical or a noncanonical form (e.g., bite the iron bullet). To assess whether people integrate figurative or literal interpretations of idioms, a disambiguating region that was figuratively or literally biased followed the idiom in each sentence. During early stages of reading, older adults showed facilitation for canonical idioms, suggesting a greater sensitivity to stored idiomatic forms. During later stages of reading, older adults showed slower reading times when canonical idioms were biased toward their literal interpretation, suggesting they were more likely to interpret idioms figuratively on the first pass. In contrast, noncanonical form presentation slowed comprehension of figurative meanings comparably in younger and older participants. We conclude that idioms may be more strongly entrenched in older adults, and that noncanonical form presentation slows comprehension of figurative meanings.


2021
Author(s): Federico Carbone, Philipp Ellmerer, Marcel Ritter, Sabine Spielberger, Philipp Mahlknecht, et al.

2021, Vol 20 (2), pp. 84-96
Author(s): Mitja Ružojčić, Zvonimir Galić, Antun Palanović, Maja Parmač Kovačić, Andreja Bubić

To better understand the process of responding to the Conditional Reasoning Test for Aggression (CRT-A) and its implications for the test's use in personnel selection, we conducted two lab studies in which we compared the test scores and eye movements of participants responding honestly and of participants faking the test. Study 1 showed that, although participants might try to respond differently to the CRT-A while faking, it remains an indirect and unfakeable measure as long as the test's purpose is undisclosed. Study 2 showed that revealing the true purpose of the CRT-A diminishes the test's indirect nature: the test becomes fakeable, solving it requires less attention, and participants direct their gaze more toward response alternatives congruent with the presentational demands.


2021, pp. 1-26
Author(s): Jan-Louis Kruger, Natalia Wisniewska, Sixin Liao

High subtitle speed undoubtedly impacts the viewer experience. However, little is known about how fast subtitles might impact the reading of individual words. This article presents new findings on the effect of subtitle speed on viewers' reading behavior using word-based eye-tracking measures, with specific attention to word skipping and rereading. In multimodal reading situations such as reading subtitles in video, rereading allows people to correct for oculomotor error or comprehension failure during linguistic processing, or to integrate words with elements of the image to build a situation model of the video. However, the opportunity to reread words, to read the majority of the words in a subtitle, and to read subtitles to completion is likely to be compromised when subtitles are too fast. Participants watched videos with subtitles at 12, 20, and 28 characters per second (cps) while their eye movements were recorded. It was found that comprehension declined as speed increased. Eye movement records also showed that faster subtitles resulted in more incomplete reading of subtitles. Furthermore, increased speed also caused fewer words to be reread following both horizontal eye movements (likely resulting in reduced lexical processing) and vertical eye movements (which would likely reduce higher-level comprehension and integration).
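The cps measure used above is simply the character count of a subtitle divided by its on-screen duration. A minimal sketch of the calculation (my own illustration, not the authors' tooling; whether spaces are counted varies by convention, and they are counted here):

```python
def chars_per_second(text: str, duration_s: float) -> float:
    """Subtitle presentation speed: characters (incl. spaces) per second."""
    return len(text) / duration_s

# A 40-character subtitle shown for 2 s reads at 20 cps,
# the middle speed tested in the study.
print(chars_per_second("The quick brown fox jumps over the lazy.", 2.0))  # 20.0
```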

