Rigid 3D Registration of Pre-operative Information for Semi-Autonomous Surgery

Author(s): Nicola Piccinelli, Andrea Roberti, Eleonora Tagliabue, Francesco Setti, Gernot Kronreif, et al.
Author(s): Daniele Gibelli, Andrea Palamenghi, Pasquale Poppa, Chiarella Sforza, Cristina Cattaneo, et al.

Abstract: Personal identification of the living from video surveillance systems usually relies on 2D images; the potential of three-dimensional facial models for personal identification through 3D-3D comparison still needs to be verified. This study aims to test the reliability of a protocol for 3D-3D registration of facial models, potentially useful for personal identification. Fifty male subjects aged between 18 and 45 years were randomly chosen from a database of 3D facial models acquired through stereophotogrammetry. Two acquisitions were available for each subject; the 3D facial models were then registered onto models belonging to the same and to different individuals according to the least point-to-point distance over the entire facial surface, for a total of 50 matches and 50 mismatches. The root mean square (RMS) point-to-point distance between the two models was then calculated with the VAM® software. Intra- and inter-observer errors were assessed through the relative technical error of measurement (rTEM). Statistically significant differences between matches and mismatches were assessed through the Mann–Whitney test (p < 0.05). For both intra- and inter-observer repeatability, rTEM was between 2.2% and 5.2%. The average RMS point-to-point distance was 0.50 ± 0.28 mm in matches and 2.62 ± 0.56 mm in mismatches (p < 0.01). An RMS threshold of 1.50 mm distinguished matches from mismatches in 100% of cases. This study improves on existing 3D-3D superimposition methods and confirms the substantial advantages that 3D facial analysis can bring to personal identification of the living.
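The match/mismatch decision described above reduces to computing an RMS point-to-point distance between two registered surfaces and comparing it against the 1.50 mm threshold. The sketch below (Python, using NumPy and SciPy) illustrates that computation on already-registered point clouds; the function names and the nearest-neighbour formulation are illustrative assumptions, since the original study performed the measurement in the VAM® software.

import numpy as np
from scipy.spatial import cKDTree

def rms_point_to_point(model_a: np.ndarray, model_b: np.ndarray) -> float:
    """RMS of nearest-neighbour distances from the vertices of model_a to model_b.

    Both inputs are (N, 3) arrays of vertex coordinates in millimetres and are
    assumed to be already rigidly registered (e.g. by an ICP-style alignment).
    """
    tree = cKDTree(model_b)
    distances, _ = tree.query(model_a)  # closest point on model_b for each vertex of model_a
    return float(np.sqrt(np.mean(distances ** 2)))

def is_match(model_a: np.ndarray, model_b: np.ndarray, threshold_mm: float = 1.50) -> bool:
    """Classify the comparison as a match (same subject) using the RMS threshold."""
    return rms_point_to_point(model_a, model_b) < threshold_mm

In this sketch a distance below the threshold is read as a match; in the study the same 1.50 mm criterion separated all 50 matches from all 50 mismatches.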


2021, Vol. 69, pp. 101957
Author(s): Rewa R. Sood, Wei Shao, Christian Kunder, Nikola C. Teslovich, Jeffrey B. Wang, et al.

1994, Vol. 14(5), pp. 749-762
Author(s): Jean-François Mangin, Vincent Frouin, Isabelle Bloch, Bernard Bendriem, Jaime Lopez-Krahe

We propose a fully unsupervised methodology dedicated to the fast registration of positron emission tomography (PET) and magnetic resonance images of the brain. First, discrete representations of the surfaces of interest (head or brain surface) are automatically extracted from both images. Then, a shape-independent surface-matching algorithm yields a rigid body transformation that allows the transfer of information between the two modalities. A three-dimensional (3D) extension of the chamfer-matching principle forms the core of this surface-matching algorithm. The optimal transformation is inferred by minimizing a quadratic generalized distance between the discrete surfaces, taking into account between-modality differences in the localization of the segmented surfaces. The minimization is performed efficiently thanks to the precomputation of a 3D distance map. Validation studies using a dedicated brain-shaped phantom showed that the maximum registration error was of the order of the PET pixel size (2 mm) for the wide variety of tested configurations. The software is routinely used today in a clinical context by the physicians of the Service Hospitalier Frédéric Joliot (>150 registrations performed). The entire registration process requires ∼5 min on a conventional workstation.
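As a rough illustration of the chamfer-matching idea summarized above, the sketch below precomputes a 3D distance map from one segmented surface and then searches for the rigid transform of the other surface's points that minimizes the mean squared distance read from that map. The Euclidean distance transform, the Powell optimizer, and all function and variable names are illustrative assumptions, not the authors' actual implementation.

import numpy as np
from scipy.ndimage import distance_transform_edt, map_coordinates
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

def chamfer_register(surface_mask: np.ndarray, moving_points: np.ndarray) -> np.ndarray:
    """Rigidly align moving_points (K, 3, in voxel coordinates) to the surface
    encoded by the binary volume surface_mask (non-zero on surface voxels)."""
    # Precomputed 3D distance map: distance from every voxel to the nearest surface voxel.
    dist_map = distance_transform_edt(~surface_mask.astype(bool))

    def cost(params: np.ndarray) -> float:
        # params = 3 Euler angles (radians) followed by 3 translations (voxels).
        rot = Rotation.from_euler("xyz", params[:3]).as_matrix()
        moved = moving_points @ rot.T + params[3:]
        # Trilinear interpolation of the distance map at the transformed points.
        d = map_coordinates(dist_map, moved.T, order=1, mode="nearest")
        return float(np.mean(d ** 2))  # quadratic chamfer-style criterion

    # Derivative-free search over the six rigid-body parameters.
    result = minimize(cost, x0=np.zeros(6), method="Powell")
    return result.x

Precomputing the distance map turns each cost evaluation into one interpolation per surface point, which is the efficiency gain the abstract attributes to the distance-map precomputation.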


2009
Author(s): C. S. Rajapakse, M. J. Wald, J. Magland, X. H. Zhang, X. S. Liu, et al.

Spine, 2021, Vol. Publish Ahead of Print
Author(s): Benyu Tang, Haoqun Yao, Shaobai Wang, Yanlong Zhong, Kai Cao, et al.
