3D Modeling and Animation
Published by IGI Global
ISBN: 9781591402992, 9781931777995
Total documents: 22 · H-index: 2

2011, pp. 130-174
Author(s): Burak Ozer, Tiehan Lv, Wayne Wolf

This chapter focuses on real-time processing techniques for reconstructing visual information from multiple views and analyzing it for human detection and for gesture and activity recognition. It reviews the main components of three-dimensional visual processing and multi-camera visual analysis, namely the projection of three-dimensional models onto two-dimensional images and three-dimensional visual reconstruction from multiple images. It discusses the real-time aspects of these techniques and shows how they affect software and hardware architectures. Furthermore, the authors present their multiple-camera system to investigate the relationship between activity recognition algorithms and the architectures required to perform these tasks in real time. The chapter describes the proposed activity recognition method, which consists of a distributed algorithm and a data fusion scheme for two- and three-dimensional visual analysis, respectively. The authors analyze the data independencies available in this algorithm and discuss potential architectures for exploiting the parallelism these independencies afford.
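As a minimal sketch of the two geometric operations the review is built around, the snippet below projects a 3D point into an image through a given 3x4 camera matrix and linearly triangulates a point back from two calibrated views. The function names are illustrative and the camera matrices are assumed given; the chapter's actual real-time pipeline is considerably more elaborate.

```python
import numpy as np

def project(P, X):
    """Project a 3D point X (3,) through a 3x4 camera matrix P."""
    x = P @ np.append(X, 1.0)   # homogeneous projection
    return x[:2] / x[2]         # perspective divide

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point seen in two calibrated views."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)     # null-space solution, up to scale
    X = Vt[-1]
    return X[:3] / X[3]             # back to inhomogeneous coordinates
```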


2011, pp. 341-375
Author(s): Nikos Karatzoulis, Costas T. Davarakis, Dimitrios Tzovaras

This chapter presents a number of promising applications and provides an overview of recent developments and techniques in the analysis and synthesis of the human body. The ability to model and recognize humans and their activities by vision is key for a machine to interact intelligently and effortlessly with a human-inhabited environment. The chapter analyzes the techniques and technologies currently available for hand and body modeling and animation, and presents recent results of synthesis and analysis techniques for the human body reported by R&D projects worldwide. Technical details are provided for each R&D project, and the results are discussed and evaluated.


2011, pp. 317-340
Author(s): Zhen Wen, Pengyu Hong, Jilin Tu, Thomas S. Huang

This chapter presents a unified framework for machine-learning-based facial deformation modeling, analysis and synthesis. It enables flexible, robust face motion analysis and natural synthesis, based on a compact face motion model learned from motion capture data. This model, called Motion Units (MUs), captures the characteristics of real facial motion. The MU space can be used to constrain noisy low-level motion estimation for robust facial motion analysis. For synthesis, a face model can be deformed by adjusting the weights of the MUs. The weights can also be used as visual features to learn an audio-to-visual mapping with neural networks for real-time, speech-driven 3D face animation. Moreover, the framework includes parts-based MUs to account for local facial motion, and an interpolation scheme to adapt MUs to arbitrary face geometry and mesh topology. Experiments show that natural face animation and robust non-rigid face tracking can be achieved within this framework.
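The core of the MU idea, synthesis as a weighted sum of learned motion bases and analysis as a projection onto that basis, can be sketched as follows. Array shapes and function names are assumptions for illustration, not the chapter's actual code.

```python
import numpy as np

def deform(neutral, mus, weights):
    """Synthesis: deform a neutral face by a weighted sum of Motion Units.
    neutral: (V, 3) vertices; mus: (K, V, 3) learned MUs; weights: (K,)."""
    return neutral + np.tensordot(weights, mus, axes=1)

def fit_weights(observed, neutral, mus):
    """Analysis: least-squares projection of observed (possibly noisy)
    motion onto the MU space, which constrains low-level estimation."""
    B = mus.reshape(mus.shape[0], -1).T       # (3V, K) basis matrix
    d = (observed - neutral).ravel()          # observed displacement field
    w, *_ = np.linalg.lstsq(B, d, rcond=None)
    return w
```

The same weight vector returned by `fit_weights` is what could serve as the visual feature for the audio-to-visual neural network mapping described above.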


2011, pp. 175-200
Author(s): Kostas Karpouzis, Amaryllis Raouzaiou, Athanasios Drosopoulos, Spiros Ioannou, Themis Balomenos, ...

This chapter presents a holistic approach to emotion modeling and analysis and its applications in Man-Machine Interaction. Starting from a symbolic representation of human emotions, based on their expression through facial expressions and hand gestures, we show that it is possible to transform quantitative feature information extracted from video sequences into an estimation of a user's emotional state. While these features could serve simple representation purposes, in our approach they are used to provide feedback on the user's emotional state, with the aim of enabling next-generation interfaces that can recognize the emotional states of their users.
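Purely as a toy illustration of turning quantitative features into an emotional-state estimate (the chapter's own estimator is richer, and all values below are hypothetical), a nearest-prototype scheme might look like this:

```python
import numpy as np

# Hypothetical per-emotion prototype feature vectors; in practice these
# would be derived from facial-expression and hand-gesture features.
PROTOTYPES = {
    "joy":     np.array([0.8, 0.6, 0.1]),
    "anger":   np.array([-0.5, -0.7, 0.9]),
    "neutral": np.array([0.0, 0.0, 0.0]),
}

def estimate_emotion(features):
    """Return the emotion whose prototype is nearest to the observed
    quantitative feature vector extracted from a video sequence."""
    return min(PROTOTYPES,
               key=lambda e: np.linalg.norm(features - PROTOTYPES[e]))
```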


2011, pp. 27-69
Author(s): Marius Preda, Ioan A. Salomie, Françoise Preteux, Gauthier Lafruit

Besides being one of the well-known audio/video coding frameworks, MPEG-4 provides additional coding tools dedicated to virtual character animation. The motivation for addressing virtual character definition and animation within MPEG-4 is first presented. Then, it is shown how MPEG-4 Amendment 1 offers an appropriate framework for virtual character animation: the virtual human animation stream is presented and discussed in terms of its generic representation and additional functionalities. The biomechanical properties, modeled by means of a character skeleton that defines each bone's influence on a skin region, as well as local spatial deformations simulating muscles, are supported by specific nodes. Animating the virtual character consists of instantiating bone transformations and muscle control curves. Interpolation techniques, inverse kinematics, the discrete cosine transform and arithmetic encoding make it possible to produce a highly compressed animation stream. Within a dedicated modeling approach, the so-called MeshGrid, we show how the bone- and muscle-based animation mechanism is applied to deform the 3D space around a humanoid.
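At heart, the skeleton-driven part of this mechanism blends per-bone transformations weighted by each bone's influence on the skin. A minimal linear-blend-skinning sketch follows; shapes and names are assumptions, and MPEG-4's bone-based animation adds muscle control curves and stream compression on top of this.

```python
import numpy as np

def skin(vertices, bone_transforms, weights):
    """Linear blend skinning: pose each skin vertex by the bones that
    influence it.
    vertices:        (V, 3) rest-pose skin vertices
    bone_transforms: (B, 4, 4) current bone transformation matrices
    weights:         (V, B) bone-influence weights, each row summing to 1
    """
    hom = np.hstack([vertices, np.ones((len(vertices), 1))])   # (V, 4)
    posed = np.einsum('bij,vj->vbi', bone_transforms, hom)     # per-bone poses
    return np.einsum('vb,vbi->vi', weights, posed)[:, :3]      # blended result
```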


2011, pp. 1-26
Author(s): Angel Sappa, Sotiris Malassiotis

This chapter presents a survey of the most recent vision-based human body modeling techniques. It includes sections covering 3D human body coding standards, motion tracking, recognition and applications. Short summaries of various techniques, including their advantages and disadvantages, are provided. Although the work focuses on computer vision, some references from computer graphics are also given. Since no single method is valid for all applications, this chapter aims to give an overview of current techniques to help in selecting the most suitable method for a given problem.


2011, pp. 266-294
Author(s): Gregor A. Kalberer, Pascal Müller, Luc Van Gool

Realistic face animation is a difficult problem, and this difficulty is hampering further breakthroughs in several high-tech domains, such as special effects in movies, the use of 3D face models in communications, the use of avatars and likenesses in virtual reality, and the production of games with more subtle scenarios. This work attempts to improve on the current state of the art in face animation, especially for the creation of highly realistic lip and speech-related motions. To that end, 3D face models are used, and speech-related 3D face motion is learned from examples using the latest technology. The chapter thus subscribes to the burgeoning field of image-based modeling and widens its scope to include animation. The exploitation of detailed 3D motion sequences is quite unique, narrowing the gap between modeling and animation. From measured 3D face deformations around the mouth area, typical motions are extracted for different “visemes”. Visemes are the basic motion patterns observed in speech and are comparable to the phonemes of auditory speech. The visemes are studied in sufficient detail to also cover natural variations and differences between individuals. Furthermore, the transitions between visemes are analyzed in terms of co-articulation effects, i.e., the visual blending of visemes required for fluent, natural speech. The work presented in this chapter also encompasses the animation of faces for which no visemes have been observed and extracted: the “transplantation” of visemes to novel faces, for which no viseme data have been recorded and only a static 3D model is available, allows such faces to be animated without an extensive learning procedure for each individual.
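As a rough sketch of co-articulation as visual blending, neighbouring viseme deformation fields around a time t can be mixed with a soft temporal window. The Gaussian window, the data layout and all names here are assumptions for illustration; the chapter's learned co-articulation model is more sophisticated.

```python
import numpy as np

def blend_visemes(neutral, visemes, timeline, t, width=0.08):
    """Blend viseme displacement fields near time t to mimic co-articulation.
    neutral:  (V, 3) neutral mouth-region vertices
    visemes:  dict viseme_id -> (V, 3) displacement field
    timeline: list of (time_sec, viseme_id) pairs from phoneme alignment
    width:    co-articulation window in seconds (a tuning assumption)
    """
    w = np.array([np.exp(-0.5 * ((t - ti) / width) ** 2)
                  for ti, _ in timeline])
    w /= w.sum()                                   # normalized blend weights
    offset = sum(wi * visemes[vid]
                 for wi, (_, vid) in zip(w, timeline))
    return neutral + offset
```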


2011, pp. 70-129
Author(s): B. J. Lei, E. A. Hendriks, Aggelos K. Katsaggelos

This chapter presents an extensive overview of passive camera calibration techniques. Starting with a detailed introduction to and mathematical description of the imaging process of an off-the-shelf camera, it reviews existing passive calibration approaches in order of increasing complexity. All algorithms are presented in enough detail to be directly applicable. For completeness, a brief account of self-calibration is also provided. In addition, two typical applications of passive camera calibration, face model reconstruction and telepresence, are presented and experimentally evaluated. This chapter is intended to serve as a standard reference: researchers in the various fields in which passive camera calibration is actively or potentially of interest can use it to identify the techniques suitable for their applications.
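For concreteness, here is a minimal sketch of the classical linear (DLT) step at the core of many passive calibration methods: estimating a 3x4 camera matrix from at least six known 3D-2D correspondences on a calibration object. Practical methods refine this nonlinearly and model lens distortion; this sketch shows only the linear core.

```python
import numpy as np

def dlt_calibrate(X, x):
    """Estimate a camera matrix P (up to scale) from N >= 6 correspondences.
    X: (N, 3) known world points; x: (N, 2) their measured image points."""
    rows = []
    for (Xw, Yw, Zw), (u, v) in zip(X, x):
        Xh = [Xw, Yw, Zw, 1.0]
        rows.append([0.0] * 4 + [-c for c in Xh] + [v * c for c in Xh])
        rows.append(Xh + [0.0] * 4 + [-u * c for c in Xh])
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    return Vt[-1].reshape(3, 4)    # smallest-singular-vector solution
```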


2011, pp. 201-234
Author(s): Ana C. Andres del Valle, Jean-Luc Dugelay

This chapter presents a state-of-the-art compilation on facial motion and expression analysis. Its core is the description and comparison of methods currently being developed and tested to generate face animation from monocular static images and/or video sequences. These methods fall into three major groups: “those that retrieve emotion information,” “those that obtain parameters related to the face animation synthesis used,” and “those that use explicit face synthesis during the image analysis.” A general overview of the processing fundamentals involved in facial analysis is also provided. Readers will gain a clear understanding of ongoing research in facial expression and motion analysis on monocular images, and will easily find the right references to detailed descriptions of all mentioned methods.


2011, pp. 295-316
Author(s): Markus Kampmann, Liang Zhang

This chapter introduces a complete framework for automatically adapting a 3D face model to a human face for visual communication applications such as video conferencing or video telephony. First, facial features are estimated in a facial image. Then, the 3D face model is adapted using the estimated features. The framework is scalable with respect to complexity, offering two modes. In the low complexity mode, only eye and mouth features are estimated and the low complexity face model Candide is adapted. In the high complexity mode, a more detailed face model is adapted using eye and mouth features, eyebrow and nose features, and chin and cheek contours. Experimental results with natural videophone sequences show that this framework enables automatic 3D face model adaptation with high accuracy.
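A hedged sketch of the kind of alignment the low complexity mode requires: fitting a 2D similarity transform (scale, rotation, translation) that maps the generic model's eye and mouth landmarks onto the features estimated in the image. The function name and shapes are assumptions, and the actual framework goes further by adapting the Candide mesh itself rather than applying only a rigid transform.

```python
import numpy as np

def fit_similarity(model_pts, image_pts):
    """Procrustes fit: find s, R, t with image_pts ~ s * R @ model_pts + t.
    model_pts: (N, 2) eye/mouth landmarks of the generic face model
    image_pts: (N, 2) corresponding features estimated in the image"""
    mu_m, mu_i = model_pts.mean(axis=0), image_pts.mean(axis=0)
    A, B = model_pts - mu_m, image_pts - mu_i      # centered point sets
    U, S, Vt = np.linalg.svd(A.T @ B)              # cross-covariance SVD
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflection
    D = np.array([1.0, d])
    R = (U @ (D[:, None] * Vt)).T                  # optimal rotation
    s = (S * D).sum() / (A ** 2).sum()             # optimal scale
    t = mu_i - s * (R @ mu_m)
    return s, R, t
```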

