Non-rigid registration of point clouds using landmarks and stochastic neighbor embedding

2021 ◽  
Vol 30 (03) ◽  
Author(s):  
Amar Maharjan ◽  
Xiaohui Yuan ◽  
Qiang Lu ◽  
Yuqi Fan ◽  
Tian Chen
Author(s):  
Liliane Rodrigues de Almeida ◽  
Gilson Antonio Giraldi ◽  
Marcelo Bernardes Vieira

2018 ◽  
Vol 34 (6-8) ◽  
pp. 1021-1030 ◽  
Author(s):  
Enkhbayar Altantsetseg ◽  
Oyundolgor Khorloo ◽  
Kouichi Konno

2017 ◽  
Vol 17 (01) ◽  
pp. 1750006 ◽  
Author(s):  
Luciano W. X. Cejnog ◽  
Fernando A. A. Yamada ◽  
Marcelo Bernardes Vieira

This work aims to enhance a classic method for rigid registration, the Iterative Closest Point (ICP) algorithm, by modifying the closest-point search to take into account approximate information about local geometry in combination with the Euclidean distance used originally. To this end, a preprocessing stage is applied in which the local geometry is encoded in second-order orientation tensors. We define the CTSF, a similarity factor between tensors. Our method varies the weight between the CTSF and the Euclidean distance in order to establish correspondences. Quantitative tests were performed on point clouds with different geometric features, with variable levels of additive noise and outliers, and in partial-overlap situations. Results show that the proposed modification increases the convergence probability of the method for larger angular displacements, making it comparable to state-of-the-art techniques.
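A minimal sketch of the modified closest-point search described above, assuming a plain trace-normalized covariance as the orientation tensor and a Frobenius-norm dissimilarity in place of the paper's exact CTSF definition (not reproduced in the abstract); `alpha` plays the role of the varying weight:

```python
import numpy as np

def orientation_tensors(points, k=8):
    """Encode local geometry as second-order orientation tensors
    (here: trace-normalized covariance of the k nearest neighbors)."""
    tensors = np.empty((len(points), 3, 3))
    for i, p in enumerate(points):
        d = np.linalg.norm(points - p, axis=1)
        nbrs = points[np.argsort(d)[:k]]       # k nearest (includes p itself)
        c = nbrs - nbrs.mean(axis=0)
        t = c.T @ c
        tensors[i] = t / np.trace(t)
    return tensors

def weighted_match(src, tgt, src_t, tgt_t, alpha):
    """Correspondence search mixing a tensor dissimilarity with the
    Euclidean distance; alpha in [0, 1] is the varying weight."""
    eu = np.linalg.norm(src[:, None, :] - tgt[None, :, :], axis=2)
    ts = np.linalg.norm(src_t[:, None] - tgt_t[None, :], axis=(2, 3))
    eu = eu / eu.max() if eu.max() > 0 else eu  # normalize both terms
    ts = ts / ts.max() if ts.max() > 0 else ts
    return (alpha * ts + (1.0 - alpha) * eu).argmin(axis=1)
```

With `alpha = 0` this degenerates to the usual nearest-Euclidean-neighbor search of classic ICP; increasing `alpha` lets geometrically similar regions attract correspondences even when they are not the spatially closest.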


2018 ◽  
Vol 9 (2) ◽  
pp. 1
Author(s):  
Fernando Akio Yamada ◽  
Gilson Antonio Giraldi ◽  
Marcelo Bernardes Vieira ◽  
Liliane Rodrigues Almeida ◽  
Antonio Lopes Apolinário Jr.

Pairwise rigid registration aims to find the rigid transformation that best registers two surfaces represented by point clouds. This work presents a comparison among seven algorithms with different strategies for tackling rigid registration tasks. We focus on the frame-to-frame problem, in which the point clouds are extracted from a video sequence with depth information, generating partially overlapping 3D data. We use both point clouds and RGB-D video streams in the experimental results. The former are considered under different viewpoints, with the addition of a case study simulating missing data. Since the ground-truth rotation is provided, we discuss four different metrics to measure the rotation error in this case. Among the seven considered techniques, Sparse ICP and Sparse ICP-CTSF outperform the other five in the point cloud registration experiments when incomplete data are not considered. However, the evaluation under missing data indicates that these methods are sensitive to this problem and favors ICP-CTSF in such situations. In the tests with video sequences, the depth information is segmented in a first step to extract the target region. Next, the registration algorithms are applied, and the average root mean squared error, rotation error, and translation error are computed. In addition, we analyze the robustness of the algorithms against spatial and temporal sampling rates. We conclude from the experiments using depth video sequences that ICP-CTSF is the best technique for frame-to-frame registration.
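The abstract does not name the four rotation-error metrics it discusses; for illustration, two measures commonly used when the ground-truth rotation is available are the geodesic angle and the chordal (Frobenius) distance, sketched below:

```python
import numpy as np

def geodesic_angle(R_est, R_gt):
    """Geodesic rotation error: angle (radians) of the residual
    rotation R_est @ R_gt.T, via trace(R) = 1 + 2*cos(angle)."""
    c = (np.trace(R_est @ R_gt.T) - 1.0) / 2.0
    return np.arccos(np.clip(c, -1.0, 1.0))  # clip guards round-off

def chordal_error(R_est, R_gt):
    """Chordal rotation error: Frobenius norm of the matrix difference,
    related to the geodesic angle by ||.||_F = 2*sqrt(2)*sin(angle/2)."""
    return np.linalg.norm(R_est - R_gt)
```

Both are monotonically related for rotations, but the chordal distance avoids the `arccos` and is cheaper to compute inside an evaluation loop.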


PLoS ONE ◽  
2021 ◽  
Vol 16 (8) ◽  
pp. e0256340
Author(s):  
David Schunck ◽  
Federico Magistri ◽  
Radu Alexandru Rosu ◽  
André Cornelißen ◽  
Nived Chebrolu ◽  
...  

Understanding the growth and development of individual plants is of central importance in modern agriculture, crop breeding, and crop science. To this end, using 3D data for plant analysis has gained attention over recent years. High-resolution point clouds offer the potential to derive a variety of plant traits, such as plant height and biomass, as well as the number and size of relevant plant organs. Periodically scanning the plants even allows for spatio-temporal growth analysis. However, highly accurate 3D point clouds of plants recorded at different growth stages are rare, and acquiring this kind of data is costly. Moreover, advanced plant analysis methods from machine learning require annotated training data and thus entail intensive manual labor before any analysis can be performed. To address these issues, we present with this dataset paper a multi-temporal dataset featuring high-resolution registered point clouds of maize and tomato plants, which we manually labeled for computer vision tasks such as instance segmentation and 3D reconstruction, providing approximately 260 million labeled 3D points. To highlight the usability of the data and to provide baselines for other researchers, we show a variety of applications ranging from point cloud segmentation to non-rigid registration and surface reconstruction. We believe that our dataset will help to develop new algorithms to advance research in plant phenotyping, 3D reconstruction, non-rigid registration, and deep learning on raw point clouds. The dataset is freely accessible at https://www.ipb.uni-bonn.de/data/pheno4d/.


2017 ◽  
Vol 39 (6) ◽  
pp. 1713-1728 ◽  
Author(s):  
Li Yan ◽  
Junxiang Tan ◽  
Hua Liu ◽  
Hong Xie ◽  
Changjun Chen

2017 ◽  
Vol 17 (04) ◽  
pp. 1750021
Author(s):  
F. A. A. Yamada ◽  
L. W. X. Cejnog ◽  
M. B. Vieira ◽  
R. L. S. da Silva

In the pairwise rigid registration problem, we need to find a rigid transformation that aligns two point clouds. The classical and most common solution is the Iterative Closest Point (ICP) algorithm. However, ICP and many of its variants require that the point clouds be already coarsely aligned. We present in this paper a method named Shape-based Weighting Covariance Iterative Closest Point (SWC-ICP), which improves the likelihood of correctly aligning two point clouds regardless of the initial pose, even when they only partially overlap or in the presence of noise and outliers. It benefits from the local geometry of the points, encoded in second-order orientation tensors, to provide a second correspondence set to the ICP. The cross-covariance matrix computed from this set is combined with the usual cross-covariance matrix, following a heuristic strategy. In order to compare our method with some recent approaches, we present a detailed evaluation protocol for rigid registration. Results show that SWC-ICP is among the best of the compared methods, performing better in situations of wide angular displacement of noisy point clouds.
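A sketch of the covariance-combination idea, assuming a simple linear blend in place of the paper's heuristic (which the abstract does not detail); the SVD recovery step is the standard Kabsch solution used inside ICP:

```python
import numpy as np

def rigid_from_covariance(H, mu_src, mu_tgt):
    """Recover (R, t) mapping source onto target from a 3x3
    cross-covariance H = sum_i (q_i - mu_tgt)(p_i - mu_src)^T
    (the standard Kabsch/SVD step of ICP)."""
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(U @ Vt))  # guard against reflections
    R = U @ np.diag([1.0, 1.0, d]) @ Vt
    t = mu_tgt - R @ mu_src
    return R, t

def blended_covariance(src, tgt_euclid, tgt_shape, w):
    """Blend the cross-covariance of Euclidean-closest correspondences
    with that of shape-based (tensor) correspondences; w in [0, 1]."""
    src_c = src - src.mean(axis=0)
    H_eu = (tgt_euclid - tgt_euclid.mean(axis=0)).T @ src_c
    H_sh = (tgt_shape - tgt_shape.mean(axis=0)).T @ src_c
    return (1.0 - w) * H_eu + w * H_sh
```

The shape-based correspondence set contributes a second covariance term that can pull the rotation estimate toward geometrically consistent matches when the Euclidean-nearest matches are wrong, e.g. under wide angular displacement.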


2021 ◽  
Vol 13 (23) ◽  
pp. 4755
Author(s):  
Saishang Zhong ◽  
Mingqiang Guo ◽  
Ruina Lv ◽  
Jianguo Chen ◽  
Zhong Xie ◽  
...  

Rigid registration of 3D indoor scenes is a fundamental yet vital task in various fields, including remote sensing (e.g., 3D reconstruction of indoor scenes), photogrammetric measurement, geometric modeling, etc. Nevertheless, state-of-the-art registration approaches still have shortcomings when dealing with low-quality indoor scene point clouds derived from consumer-grade RGB-D sensors. The major challenge is accurately extracting correspondences between a pair of low-quality point clouds that contain considerable noise, outliers, or weak texture features. To solve this problem, we present a point cloud registration framework based on RGB-D information. First, we propose a point-normal filter that effectively removes noise while maintaining sharp geometric features and smooth transition regions. Second, we design a correspondence extraction scheme based on a novel descriptor encoding textural and geometric information, which can robustly establish dense correspondences between a pair of low-quality point clouds. Finally, we propose a point-to-plane registration technique with a nonconvex regularizer, which further diminishes the influence of false correspondences and produces an exact rigid transformation between a pair of point clouds. Extensive experimental results demonstrate that, compared to existing state-of-the-art techniques, our registration framework performs excellently both visually and numerically, especially on low-quality indoor scenes.
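The RGB-D descriptor and the nonconvex regularizer are specific to this paper; the sketch below shows only the standard linearized point-to-plane step that such a framework builds on, with hypothetical names, given already-matched point pairs and target normals:

```python
import numpy as np

def point_to_plane_step(src, tgt, normals):
    """One linearized point-to-plane step.

    Minimizes sum_i (n_i . (p_i + r x p_i + t - q_i))^2 over a small
    rotation vector r and a translation t, given matched points
    (p_i, q_i) and target normals n_i.
    """
    # Residual: (p_i x n_i) . r + n_i . t - n_i . (q_i - p_i)
    A = np.hstack([np.cross(src, normals), normals])  # (N, 6) rows [p x n, n]
    b = np.einsum('ij,ij->i', normals, tgt - src)     # (N,)  n . (q - p)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[:3], x[3:]                               # r (rotation), t
```

A full solver iterates this step, re-matching correspondences each round; the paper's nonconvex regularizer would additionally down-weight residuals from false matches, which plain least squares does not.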

