Stereoscopic facial imaging for pain assessment using rotational offset microlens arrays based structured illumination

2021 ◽  
Vol 9 (1) ◽  
Author(s):  
Jae-Myeong Kwon ◽  
Sung-Pyo Yang ◽  
Ki-Hun Jeong

Conventional pain assessment methods such as patients’ self-reporting restrict the possibility of easy pain monitoring, even though pain plays an important role in clinical practice. Here we report a pain assessment method via a 3D face-reading camera assisted by dot-pattern illumination. The face-reading camera module (FRCM) consists of a stereo camera and a dot projector, which allow the quantitative measurement of facial expression changes without subjective human judgement. The rotational offset microlens arrays (roMLAs) in the dot projector form a uniform, dense dot pattern on a human face. The dot projection facilitates evaluating three-dimensional changes of facial expression by improving 3D reconstruction of non-textured facial surfaces. In addition, the FRCM provides consistent pain ratings from 3D data, regardless of head movement. This pain assessment method can provide a new guideline for precise, real-time, and continuous pain monitoring.
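The depth recovery that the dot pattern enables follows the standard stereo triangulation relation: once a projected dot is matched between the two rectified views, its depth is proportional to the baseline and focal length and inversely proportional to the disparity. A minimal sketch of that step (not the authors' implementation; the function name and toy coordinates are illustrative):

```python
import numpy as np

def dots_to_depth(x_left, x_right, focal_px, baseline_m):
    """Triangulate depth for dot correspondences from a rectified stereo pair.

    x_left, x_right: horizontal pixel coordinates of the same projected dot
    in the left and right images. Depth follows the pinhole stereo relation
    Z = f * B / disparity.
    """
    disparity = np.asarray(x_left, float) - np.asarray(x_right, float)
    if np.any(disparity <= 0):
        raise ValueError("non-positive disparity: bad match or image ordering")
    return focal_px * baseline_m / disparity

# Two dots projected on a surface 0.5 m away (f = 800 px, B = 0.05 m):
z = dots_to_depth([420.0, 300.0], [340.0, 220.0], focal_px=800.0, baseline_m=0.05)
# disparity = 80 px for both, so Z = 800 * 0.05 / 80 = 0.5 m
```

Projecting a dense dot grid gives non-textured skin the correspondences this matching step needs, which is why the roMLA projector improves the 3D reconstruction.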

Author(s):  
Sanjay Kumar Singh ◽  
V. Rastogi ◽  
S. K. Singh

Pain, considered the fifth vital sign, is an important symptom that needs to be adequately assessed in health care. The visual changes reflected on the face of a person in pain may be apparent for only a few seconds and occur instinctively. Tracking these changes is a difficult and time-consuming process in a clinical setting, which motivates researchers and experts from the medical, psychology, and computer fields to conduct interdisciplinary research in capturing facial expressions. This chapter contains a comprehensive review of technologies in the study of facial expression along with their application in pain assessment. The facial expressions of pain in children (0-2 years) and in non-communicative patients need to be recognized, as they are of utmost importance for proper diagnosis. Well-designed computerized methodologies would streamline the process of patient assessment, increasing its accessibility to physicians and improving quality of care.


2019 ◽  
Vol 9 (11) ◽  
pp. 2218 ◽  
Author(s):  
Maria Grazia Violante ◽  
Federica Marcolin ◽  
Enrico Vezzetti ◽  
Luca Ulrich ◽  
Gianluca Billia ◽  
...  

This study proposes a novel quality function deployment (QFD) design methodology based on customers’ emotions conveyed by facial expressions. Current advances in pattern recognition related to face recognition techniques have fostered cross-fertilization between this context and other fields, such as product design and human-computer interaction. In particular, current technologies for monitoring human emotions have supported the birth of advanced emotional design techniques, whose main focus is to convey users’ emotional feedback into the design of novel products. As quality function deployment aims at transforming the voice of customers into engineering features of a product, it appears to be an appropriate and promising nest in which to embed users’ emotional feedback with new emotional design methodologies, such as facial expression recognition. Accordingly, the present methodology consists of interviewing the user and acquiring his/her face with a depth camera (providing three-dimensional (3D) data), clustering the face information into different emotions with a support vector machine classifier, and assigning weights to customers’ needs based on the detected facial expressions. The proposed method has been applied to a case study in the context of agriculture and validated by a consortium. The approach appears sound and capable of collecting the unconscious feedback of the interviewee.
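The classification-then-weighting pipeline can be sketched as follows. This is not the paper's implementation: the feature vectors, class labels, and the weighting rule (fraction of interview frames classified as a positive emotion) are all illustrative assumptions, standing in for the depth-derived face descriptors and QFD weighting the study actually uses.

```python
import numpy as np
from sklearn.svm import SVC

# Toy stand-in for depth-derived face descriptors: each row is a feature
# vector (e.g. 3D landmark distances); labels are emotion classes.
rng = np.random.default_rng(0)
happy = rng.normal(loc=1.0, scale=0.2, size=(20, 4))
neutral = rng.normal(loc=-1.0, scale=0.2, size=(20, 4))
X = np.vstack([happy, neutral])
y = np.array(["happy"] * 20 + ["neutral"] * 20)

# Train the SVM emotion classifier on the labelled descriptors.
clf = SVC(kernel="rbf").fit(X, y)

# Illustrative weighting rule: score a customer need by the fraction of
# interview frames classified as the positive emotion.
frames = rng.normal(loc=1.0, scale=0.2, size=(10, 4))
weight = np.mean(clf.predict(frames) == "happy")
```

In the actual methodology the weight would feed into the QFD matrix alongside the customer's verbal answers, letting the unconscious facial feedback modulate the stated importance of each need.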


2020 ◽  
Vol 10 (2) ◽  
pp. 20190070 ◽  
Author(s):  
Sophie Ketchen ◽  
Arndt Rohwedder ◽  
Sabine Knipp ◽  
Filomena Esteves ◽  
Nina Struve ◽  
...  

The limitations of two-dimensional analysis in three-dimensional (3D) cellular imaging impair the accuracy of research findings in biological studies. Here, we report a novel 3D approach to acquisition, analysis and interpretation of tumour spheroid images. Our research interest in mesenchymal–amoeboid transition led to the development of a workflow incorporating the generation and analysis of 3D data with instant structured illumination microscopy and a new ImageJ plugin.


2012 ◽  
Vol 25 (0) ◽  
pp. 46-47
Author(s):  
Kazumichi Matsumiya

Adaptation to a face belonging to a facial category, such as expression, causes a subsequently presented neutral face to be perceived as belonging to the opposite facial category. This is referred to as the face aftereffect (FAE) (Leopold et al., 2001; Rhodes et al., 2004; Webster et al., 2004). The FAE is generally thought of as being a visual phenomenon. However, recent studies have shown that humans can haptically recognize a face (Kilgour and Lederman, 2002; Lederman et al., 2007). Here, I investigated whether FAEs could occur in haptic perception of faces. Three types of facial expressions (happy, sad and neutral) were generated using computer-graphics software, and three-dimensional masks of these faces were made from epoxy-cured resin for use in the experiments. An adaptation facemask was positioned on the left side of a table in front of the participant, and a test facemask was placed on the right. During adaptation, participants haptically explored the adaptation facemask with their eyes closed for 20 s, after which they haptically explored the test facemask for 5 s. Participants were then requested to classify the test facemask as either happy or sad. The experiment was performed under two adaptation conditions: (1) with adaptation to a happy facemask and (2) with adaptation to a sad facemask. In both cases, the expression of the test facemask was neutral. The results indicate that adaptation to a haptic face that belongs to a specific facial expression causes a subsequently touched neutral face to be perceived as having the opposite facial expression, suggesting that FAEs can be observed in haptic perception of faces.


2006 ◽  
Vol 18 (4) ◽  
pp. 504-510 ◽  
Author(s):  
Minoru Hashimoto ◽  
Daisuke Morooka

We propose robotic facial expression using a curved surface display. An image of the robot’s face is displayed on a curved screen, which forms facial expressions more easily than mechanical approaches. The curved surface gives the face a three-dimensional effect that is not possible with a flat image. The curved surface display consists of a domed screen, a fish-eye lens, and a projector. The face robot has a neck to move the head. We detail the domed display, compensation for image distortion, and the drawing of shadow images indicating the direction of a light source. The facial expression is animated, and the head is moved by the neck mechanism. Experiments confirmed the effectiveness of our proposal.


2018 ◽  
Vol 9 (2) ◽  
pp. 31-38
Author(s):  
Fransisca Adis ◽  
Yohanes Merci Widiastomo

Facial expression is one of the aspects that can deliver a story and a character’s emotion in 3D animation. To achieve that, we need to plan the character’s face from the very beginning of production. At an early stage, the character designer needs to think about the expressions once the character design is done. The rigger needs to create a flexible rig to achieve the design, and the animator can then get a clear picture of how to animate the face. The Facial Action Coding System (FACS), originally developed by Carl-Herman Hjortsjö and adopted by Paul Ekman and Wallace V. Friesen, can be used to identify emotion in a person generally. This paper explains how the writer uses FACS to help design the facial expressions of 3D characters. FACS is used to determine the basic characteristic shapes of the face when showing emotions, compared with actual face references. Keywords: animation, facial expression, non-dialog
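In rigging terms, this workflow amounts to mapping FACS Action Units onto blendshape weights. A minimal sketch of that mapping follows; the AU numbers and their emotion pairings (AU6+AU12 for happiness, AU1+AU15 for sadness) are standard FACS, but the blendshape names and the weighting function are hypothetical, since every rig names its shapes differently.

```python
# Hypothetical mapping from FACS Action Units to rig blendshape names.
AU_TO_BLENDSHAPE = {
    6: "cheekRaiser",          # AU6
    12: "lipCornerPuller",     # AU12 - together with AU6, happiness
    1: "innerBrowRaiser",      # AU1
    15: "lipCornerDepressor",  # AU15 - together with AU1, sadness
}

def expression_to_weights(active_aus, intensity=1.0):
    """Return blendshape weights (0..1) for the active Action Units."""
    weights = {shape: 0.0 for shape in AU_TO_BLENDSHAPE.values()}
    for au in active_aus:
        if au in AU_TO_BLENDSHAPE:
            weights[AU_TO_BLENDSHAPE[au]] = min(1.0, intensity)
    return weights

# A moderate smile: AU6 + AU12 at 80% intensity.
smile = expression_to_weights([6, 12], intensity=0.8)
```

Driving the rig from AU lists like this keeps the animator's vocabulary aligned with the FACS references the designer compared against.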


2004 ◽  
Vol 126 (5) ◽  
pp. 861-870 ◽  
Author(s):  
A. Thakur ◽  
X. Liu ◽  
J. S. Marshall

An experimental and computational study is performed of the wake flow behind a single yawed cylinder and a pair of parallel yawed cylinders placed in tandem. The experiments are performed for a yawed cylinder and a pair of yawed cylinders towed in a tank. Laser-induced fluorescence is used for flow visualization and particle-image velocimetry is used for quantitative velocity and vorticity measurement. Computations are performed using a second-order accurate block-structured finite-volume method with periodic boundary conditions along the cylinder axis. Results are applied to assess the applicability of a quasi-two-dimensional approximation, which assumes that the flow field is the same for any slice of the flow over the cylinder cross section. For a single cylinder, it is found that the cylinder wake vortices approach a quasi-two-dimensional state away from the cylinder upstream end for all cases examined (in which the cylinder yaw angle covers the range 0⩽ϕ⩽60°). Within the upstream region, the vortex orientation is found to be influenced by the tank side-wall boundary condition relative to the cylinder. For the case of two parallel yawed cylinders, vortices shed from the upstream cylinder are found to remain nearly quasi-two-dimensional as they are advected back and reach within about a cylinder diameter from the face of the downstream cylinder. As the vortices advect closer to the cylinder, the vortex cores become highly deformed and wrap around the downstream cylinder face. Three-dimensional perturbations of the upstream vortices are amplified as the vortices impact upon the downstream cylinder, such that during the final stages of vortex impact the quasi-two-dimensional nature of the flow breaks down and the vorticity field for the impacting vortices acquires significant three-dimensional perturbations. Quasi-two-dimensional and fully three-dimensional computational results are compared to assess the accuracy of the quasi-two-dimensional approximation in prediction of drag and lift coefficients of the cylinders.


Sensors ◽  
2021 ◽  
Vol 21 (9) ◽  
pp. 3046
Author(s):  
Shervin Minaee ◽  
Mehdi Minaei ◽  
Amirali Abdolrashidi

Facial expression recognition has been an active area of research over the past few decades, and it is still challenging due to the high intra-class variation. Traditional approaches for this problem rely on hand-crafted features such as SIFT, HOG, and LBP, followed by a classifier trained on a database of images or videos. Most of these works perform reasonably well on datasets of images captured under controlled conditions but fail to perform as well on more challenging datasets with more image variation and partial faces. In recent years, several works have proposed end-to-end frameworks for facial expression recognition using deep learning models. Despite the better performance of these works, there is still much room for improvement. In this work, we propose a deep learning approach based on an attentional convolutional network that is able to focus on important parts of the face and achieves significant improvement over previous models on multiple datasets, including FER-2013, CK+, FERG, and JAFFE. We also use a visualization technique that is able to find important facial regions for detecting different emotions based on the classifier’s output. Through experimental results, we show that different emotions are sensitive to different parts of the face.
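The core idea of an attentional convolutional network, weighting spatial locations of the feature maps before classification so the model focuses on informative regions, can be sketched in a few lines. This is a generic spatial-attention pooling step under assumed shapes, not the paper's architecture; the function name and scoring scheme are illustrative.

```python
import numpy as np

def spatial_attention_pool(features, score_w):
    """Weight feature-map locations by an attention score, then pool.

    features: (C, H, W) conv feature maps; score_w: (C,) learned scoring
    vector. A softmax over per-location scores emphasises informative
    facial regions (e.g. mouth, eyes) before the classifier head.
    """
    C, H, W = features.shape
    scores = np.tensordot(score_w, features, axes=([0], [0]))   # (H, W)
    attn = np.exp(scores - scores.max())                        # stable softmax
    attn /= attn.sum()
    # Attention-weighted sum over all H*W locations -> one descriptor.
    pooled = np.tensordot(features.reshape(C, -1), attn.reshape(-1),
                          axes=([1], [0]))
    return pooled, attn  # (C,) descriptor and (H, W) attention map

feats = np.random.default_rng(0).normal(size=(8, 6, 6))
pooled, attn = spatial_attention_pool(feats, np.ones(8))
```

The attention map `attn` is also what the visualization technique would render as a heatmap, showing which facial regions drove each emotion prediction.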

