3D Face
Recently Published Documents

TOTAL DOCUMENTS: 1619 (five years: 278)
H-INDEX: 52 (five years: 9)

2021
Author(s): Wuyuan Xie, Zhaonian Kuang, Miaohui Wang

2021 · Vol 8 (2) · pp. 239-256
Author(s): Xiaoxing Zeng, Zhelun Wu, Xiaojiang Peng, Yu Qiao

Abstract: Recent years have witnessed significant progress in image-based 3D face reconstruction using deep convolutional neural networks. However, current reconstruction methods often perform poorly in self-occluded regions and can produce inaccurate correspondences between a 2D input image and a 3D face template, hindering their use in real applications. To address these problems, we propose a deep shape reconstruction and texture completion network, SRTC-Net, which jointly reconstructs 3D facial geometry and completes texture with correspondences from a single input face image. In SRTC-Net, we leverage the geometric cues from the completed 3D texture to reconstruct detailed structures of 3D shapes. The SRTC-Net pipeline has three stages. The first introduces a correspondence network that identifies pixel-wise correspondences between the input 2D image and a 3D template model and transfers the input 2D image to a U-V texture map. We then complete the invisible and occluded areas of the U-V texture map using an inpainting network. To obtain the 3D facial geometry, a shape network predicts a coarse shape (U-V position map) from the face segmented by the correspondence network, and the coarse shape is then refined by regressing a U-V displacement map from the completed U-V texture map in a pixel-to-pixel way. We evaluate our method on 3D reconstruction tasks as well as face frontalization and pose-invariant face recognition tasks, using both in-the-lab datasets (MICC, MultiPIE) and an in-the-wild dataset (CFP). The qualitative and quantitative results demonstrate the effectiveness of our method in inferring 3D facial geometry and complete texture; it outperforms or is comparable to the state of the art.
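Below is a minimal PyTorch-style sketch of the three-stage pipeline this abstract describes (correspondence, texture inpainting, then coarse shape plus displacement refinement). The module definitions, names, and tensor shapes are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Placeholder sub-network standing in for the real encoder-decoder stages.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(),
    )

class SRTCNetSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.correspondence_net = conv_block(3, 4)  # image -> U-V texture (3 ch) + face mask (1 ch)
        self.inpainting_net = conv_block(3, 3)      # completes occluded U-V texture regions
        self.shape_net = conv_block(3, 3)           # segmented face -> coarse U-V position map
        self.refine_net = conv_block(3, 3)          # completed texture -> U-V displacement map

    def forward(self, image):
        # Stage 1: pixel-wise correspondence yields a partial U-V texture map and a face mask.
        corr = self.correspondence_net(image)
        uv_texture, face_mask = corr[:, :3], torch.sigmoid(corr[:, 3:])
        # Stage 2: inpaint the invisible / self-occluded areas of the U-V texture map.
        completed_texture = self.inpainting_net(uv_texture)
        # Stage 3: coarse shape (U-V position map) from the segmented face, refined
        # pixel-to-pixel by a displacement map regressed from the completed texture.
        coarse_position_map = self.shape_net(image * face_mask)
        displacement_map = self.refine_net(completed_texture)
        return coarse_position_map + displacement_map, completed_texture

# Toy usage: a single 256x256 RGB face image.
model = SRTCNetSketch()
geometry, texture = model(torch.randn(1, 3, 256, 256))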


2021
Author(s): Walid Hariri, Marwa Zaabi

Abstract: 3D face recognition (FR) has been successfully performed using convolutional neural networks (CNNs), which have demonstrated impressive results in diverse computer vision and image classification tasks. Training CNNs, however, requires estimating millions of parameters, which demands high-performance computing capacity and storage. To deal with this issue, we propose an efficient method based on the quantization of residual features extracted from a pre-trained ResNet-50 model. The method starts by describing each 3D face using a convolutional feature extraction block, and then applies the Bag-of-Features (BoF) paradigm to learn deep neural networks (we call this Deep BoF). To do so, we apply Radial Basis Function (RBF) neurons to quantize the deep features extracted from the last convolutional layers. An SVM classifier is then applied to classify faces according to their quantized term vectors. The obtained model is lightweight compared to classical CNNs and can classify arbitrarily sized images. Experimental results on the FRGCv2 and Bosphorus datasets show the strength of our method compared to state-of-the-art methods.
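The quantization step this abstract outlines can be sketched roughly as follows: local deep features (e.g. from the last convolutional layers of a pre-trained ResNet-50) are soft-assigned to RBF "codewords", pooled into a fixed-length term vector regardless of input size, and classified with an SVM. The codebook size, gamma value, and use of k-means to seed the RBF centers are assumptions for illustration, not details taken from the paper.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def rbf_term_vector(conv_features, centers, gamma=1.0):
    # conv_features: (N, D) local descriptors taken from the last convolutional layers.
    # centers: (K, D) RBF neuron centers acting as the codebook.
    dists = ((conv_features[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    # Subtract the per-row minimum before exponentiating for numerical stability.
    memberships = np.exp(-gamma * (dists - dists.min(axis=1, keepdims=True)))
    memberships /= memberships.sum(axis=1, keepdims=True)   # soft assignment per descriptor
    return memberships.mean(axis=0)                          # fixed-length vector, any input size

# Toy usage with random "deep features" standing in for ResNet-50 activations
# (e.g. a 7x7 spatial grid of 2048-dimensional descriptors per face).
rng = np.random.default_rng(0)
train_feats = [rng.normal(size=(49, 2048)) for _ in range(20)]
labels = rng.integers(0, 2, size=20)

codebook = KMeans(n_clusters=16, n_init=10, random_state=0).fit(np.vstack(train_feats)).cluster_centers_
X = np.array([rbf_term_vector(f, codebook, gamma=1e-3) for f in train_feats])
clf = SVC(kernel="linear").fit(X, labels)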


2021
Author(s): Dilovan Asaad Zebari, Araz Rajab Abrahim, Dheyaa Ahmed Ibrahim, Gheyath M. Othman, Falah Y. H. Ahmed

2021
Author(s): Souhir Sghaier, Sabrine Hamdi, Anis Ammar, Chokri Souani
