FAST PROBABILISTIC FUSION OF 3D POINT CLOUDS VIA OCCUPANCY GRIDS FOR SCENE CLASSIFICATION

Author(s):  
Andreas Kuhn ◽  
Hai Huang ◽  
Martin Drauschke ◽  
Helmut Mayer

High resolution consumer cameras on Unmanned Aerial Vehicles (UAVs) allow for cheap acquisition of highly detailed images, e.g., of urban regions. Via image registration by means of Structure from Motion (SfM) and Multi View Stereo (MVS) the automatic generation of huge amounts of 3D points with a relative accuracy in the centimeter range is possible. Applications such as semantic classification have a need for accurate 3D point clouds, but do not benefit from an extremely high resolution/density. In this paper, we, therefore, propose a fast fusion of high resolution 3D point clouds based on occupancy grids. The result is used for semantic classification. In contrast to state-of-the-art classification methods, we accept a certain percentage of outliers, arguing that they can be considered in the classification process when a per point belief is determined in the fusion process. To this end, we employ an octree-based fusion which allows for the derivation of outlier probabilities. The probabilities give a belief for every 3D point, which is essential for the semantic classification to consider measurement noise. For an example point cloud with half a billion 3D points (cf. Figure 1), we show that our method can reduce runtime as well as improve classification accuracy and offers high scalability for large datasets.
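The occupancy-grid idea above can be sketched in a few lines: quantize points into voxels, fuse each occupied voxel to its centroid, and turn the per-voxel observation count into a crude outlier belief. This is an illustrative sketch, not the authors' octree implementation; the voxel size and the saturation count are hypothetical parameters.

```python
import numpy as np

def fuse_occupancy(points, voxel=0.1, saturation=10):
    """Return one fused point per occupied voxel plus a belief in [0, 1]."""
    keys = np.floor(points / voxel).astype(np.int64)          # voxel indices
    uniq, inv, counts = np.unique(keys, axis=0,
                                  return_inverse=True, return_counts=True)
    inv = inv.ravel()
    # The centroid of the points inside each voxel acts as the fused point.
    sums = np.zeros((len(uniq), 3))
    np.add.at(sums, inv, points)
    centroids = sums / counts[:, None]
    # Voxels supported by many measurements are unlikely to contain outliers.
    belief = np.clip(counts / saturation, 0.0, 1.0)
    return centroids, belief

pts = np.random.rand(100000, 3) * 5.0
fused, belief = fuse_occupancy(pts)
print(fused.shape, belief.min(), belief.max())
```

A downstream classifier can then weight each fused point by its belief instead of discarding outliers outright, which is the paper's core argument.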


Author(s):  
W. Barragán ◽  
A. Campos ◽  
G. Sanchez

The objective of this research is the automatic generation of buildings in areas of interest. The research used high-resolution vertical aerial photographs and a LIDAR point cloud processed with radiometric and geometric digital techniques. The methodology uses known building heights, various segmentation algorithms and spectral band combinations. The overall effectiveness of the algorithm on the test data is 97.2%.


Author(s):  
Robert Niederheiser ◽  
Martin Mokroš ◽  
Julia Lange ◽  
Helene Petschko ◽  
Günther Prasicek ◽  
...  

Terrestrial photogrammetry nowadays offers a reasonably cheap, intuitive and effective approach to 3D modelling. However, the choice of which sensor and which software to use is not straightforward and needs consideration, as it affects the resulting 3D point cloud and its derivatives.

We compare five different sensors as well as four different state-of-the-art software packages for a single application, the modelling of a vegetated rock face. The five sensors represent different resolutions, sensor sizes and price segments. The software packages used are: (1) Agisoft PhotoScan Pro (1.16), (2) Pix4D (2.0.89), (3) a combination of Visual SFM (V0.5.22) and SURE (1.2.0.286), and (4) MicMac (1.0). We took photos of the rock face from identical positions with all sensors and then compared the results of the different software packages regarding ease of workflow, visual appeal, and similarity and quality of the point clouds.

While PhotoScan and Pix4D offer the most user-friendly workflows, they are also “black-box” programmes giving only little insight into their processing; unsatisfying results can only be addressed by modifying settings within a module. The combined workflow of Visual SFM, SURE and CloudCompare is just as simple but requires more user interaction. MicMac turned out to be the most challenging software as it is less user-friendly, but it offers the most possibilities to influence the processing workflow. The resulting point clouds of PhotoScan and MicMac are the most visually appealing.



2021 ◽  
Vol 7 (5) ◽  
pp. 80
Author(s):  
Ahmet Firintepe ◽  
Carolin Vey ◽  
Stylianos Asteriadis ◽  
Alain Pagani ◽  
Didier Stricker

In this paper, we propose two novel AR glasses pose estimation algorithms that work from single infrared images by using 3D point clouds as an intermediate representation. Our first approach, “PointsToRotation”, is based on a Deep Neural Network alone, whereas our second approach, “PointsToPose”, is a hybrid model combining Deep Learning and a voting-based mechanism. Our methods utilize a point cloud estimator, trained on multi-view infrared images in a semi-supervised manner, which generates point clouds from a single image. We generate a point cloud dataset with this estimator using the HMDPose dataset, which consists of multi-view infrared images of various AR glasses with the corresponding 6-DoF poses. In comparison to another point cloud-based 6-DoF pose estimation method, CloudPose, we achieve an error reduction of around 50%. Compared to a state-of-the-art image-based method, we reduce the pose estimation error by around 96%.


Author(s):  
A. Tabkha ◽  
R. Hajji ◽  
R. Billen ◽  
F. Poux

Abstract. The raw nature of point clouds is an important challenge for their direct exploitation in architecture, engineering and construction applications. In particular, their lack of semantics hinders their utility in automatic workflows (Poux, 2019). In addition, the volume and the irregular structure of point clouds make it difficult to classify datasets directly and efficiently, especially when compared to state-of-the-art 2D raster classification. Recently, with advances in deep learning models such as convolutional neural networks (CNNs), the performance of image-based classification of remote sensing scenes has improved considerably (Chen et al., 2018; Cheng et al., 2017). In this research, we examine a simple and innovative approach that represents large 3D point clouds through multiple 2D projections to leverage learning approaches based on 2D images. In other words, the approach proposes an automatic process for extracting 360° panoramas and enhancing them so that raster data can be leveraged for domain-based semantic enrichment. A rigorous characterization is essential for point cloud classification, especially given the very large variety of 3D point cloud application domains. In order to test the adequacy of the method and its potential for generalization, several tests were performed on different datasets. The developed semantic augmentation algorithm uses only the attributes X, Y, Z and camera positions as inputs.
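The 360° projection step can be illustrated with an equirectangular mapping: each point's azimuth and elevation relative to a camera position index a panorama pixel, and a simple depth buffer keeps the closest return. This is a hedged sketch of the general idea only; the resolution and the closest-point rule are assumptions, not the authors' rendering pipeline.

```python
import numpy as np

def panorama(points, cam, width=1024, height=512):
    """Project XYZ points into an equirectangular range panorama seen from cam."""
    rel = points - cam
    r = np.linalg.norm(rel, axis=1)
    azimuth = np.arctan2(rel[:, 1], rel[:, 0])                # [-pi, pi]
    elevation = np.arcsin(rel[:, 2] / np.maximum(r, 1e-9))    # [-pi/2, pi/2]
    u = ((azimuth + np.pi) / (2 * np.pi) * (width - 1)).astype(int)
    v = ((np.pi / 2 - elevation) / np.pi * (height - 1)).astype(int)
    img = np.full((height, width), np.inf)
    # Keep the closest point per pixel (a minimal z-buffer).
    np.minimum.at(img, (v, u), r)
    return img

pts = np.random.rand(50000, 3) * 10 - 5
img = panorama(pts, cam=np.zeros(3))
print(img.shape)
```

The resulting raster can then be fed to any 2D classifier, and pixel labels mapped back to the contributing 3D points.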


Author(s):  
J. Wolf ◽  
R. Richter ◽  
S. Discher ◽  
J. Döllner

Abstract. In this work, we present an approach that uses an established image-recognition convolutional neural network for the semantic classification of two-dimensional objects found in mobile mapping 3D point cloud scans of road environments, namely manhole covers and road markings. We show that the approach is capable of classifying these objects and that it can be applied efficiently to large datasets. Top-down view images are rendered from the point cloud and classified by a U-Net implementation. The results are integrated into the point cloud by setting an additional semantic attribute, and shape files can be computed from the classified points.
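The render-classify-reintegrate loop described above can be sketched as follows. The U-Net is replaced here by a placeholder thresholding function, and the pixel size and attribute handling are assumptions for illustration only, not the authors' implementation.

```python
import numpy as np

def topdown_raster(points, intensity, pixel=0.05):
    """Rasterize a point cloud into a top-down intensity image."""
    origin = points[:, :2].min(axis=0)
    ij = ((points[:, :2] - origin) / pixel).astype(int)
    h, w = ij[:, 1].max() + 1, ij[:, 0].max() + 1
    img = np.zeros((h, w))
    np.maximum.at(img, (ij[:, 1], ij[:, 0]), intensity)  # brightest return wins
    return img, ij

def classify(img):
    # Stand-in for the U-Net forward pass; real segmentation happens here.
    return (img > 0.5).astype(np.uint8)                  # e.g. 1 = road marking

pts = np.random.rand(20000, 3)
inten = np.random.rand(20000)
img, ij = topdown_raster(pts, inten)
labels = classify(img)
semantic = labels[ij[:, 1], ij[:, 0]]                    # per-point attribute
print(img.shape, semantic.shape)
```

Writing the per-pixel labels back through the same point-to-pixel index is what turns the 2D result into the additional semantic attribute on the cloud.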


Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1228
Author(s):  
Ting On Chan ◽  
Linyuan Xia ◽  
Yimin Chen ◽  
Wei Lang ◽  
Tingting Chen ◽  
...  

Ancient pagodas are often popular tourist attractions in many oriental countries due to their unique historical backgrounds. They are usually polygonal structures comprising multiple floors separated by eaves. In this paper, we propose a new method to investigate both the rotational and reflectional symmetry of such polygonal pagodas by developing novel geometric models fitted to 3D point clouds obtained from photogrammetric reconstruction. The geometric model consists of multiple polygonal pyramid/prism models sharing a common central axis. The method was verified on four datasets collected with an unmanned aerial vehicle (UAV) and a hand-held digital camera. The results indicate that the models fit the pagodas’ point clouds accurately. The symmetry was assessed by rotating and reflecting the pagodas’ point clouds after completely leveling them using the estimated central axes. The results show RMSEs of 5.04 cm and 5.20 cm from perfect (theoretical) rotational and reflectional symmetry, respectively, indicating that the examined pagodas are highly symmetric, both rotationally and reflectionally. The concept presented in the paper not only works for polygonal pagodas, but can also be readily adapted to other pagoda-like objects such as transmission towers.
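The rotational-symmetry check can be illustrated as follows: rotate the leveled cloud by 2π/n about the central axis and measure the nearest-neighbour RMSE against the original. This sketch assumes the cloud is already leveled so the central axis is the z-axis (as in the paper's leveling step); it is not the authors' model-fitting code, and the brute-force neighbour search stands in for a k-d tree.

```python
import numpy as np

def rotational_symmetry_rmse(points, n_fold):
    """RMSE between a cloud and its copy rotated by 2*pi/n_fold about z."""
    theta = 2 * np.pi / n_fold
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    rotated = points @ rot.T
    # Nearest-neighbour distance from each rotated point to the original
    # cloud (brute force; large clouds would use a spatial index).
    d = np.linalg.norm(rotated[:, None, :] - points[None, :, :], axis=2)
    nn = d.min(axis=1)
    return np.sqrt(np.mean(nn ** 2))

# A perfectly 8-fold-symmetric ring of points deviates by (numerically) zero.
angles = np.linspace(0, 2 * np.pi, 8, endpoint=False)
ring = np.c_[np.cos(angles), np.sin(angles), np.zeros(8)]
rmse = rotational_symmetry_rmse(ring, n_fold=8)
print(rmse)
```

The reflectional check works analogously with a mirror matrix across a vertical symmetry plane instead of the rotation.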


Geosciences ◽  
2021 ◽  
Vol 11 (2) ◽  
pp. 75
Author(s):  
Dario Carrea ◽  
Antonio Abellan ◽  
Marc-Henri Derron ◽  
Neal Gauvin ◽  
Michel Jaboyedoff

The use of 3D point clouds to improve the understanding of natural phenomena is currently applied in natural hazard investigations, including the quantification of rockfall activity. However, 3D point cloud treatment is typically accomplished using nondedicated (and not optimal) software. To fill this gap, we present an open-source, rockfall-specific package in an object-oriented toolbox developed in the MATLAB® environment. The proposed package offers a complete and semiautomatic 3D solution that spans extraction, identification and volume estimation of rockfall sources using state-of-the-art methods and newly implemented algorithms. To illustrate the capabilities of this package, we acquired a series of high-quality point clouds in a pilot study area, the La Cornalle cliff (West Switzerland), obtained robust volume estimations at different volumetric scales, and derived rockfall magnitude–frequency distributions, which assisted in the assessment of rockfall activity and long-term erosion rates. An outcome of the case study shows the influence of the volume computation on the magnitude–frequency distribution and the ensuing interpretation of the erosion process.
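A magnitude–frequency distribution of the kind derived above is commonly summarized by a power-law exponent fitted in log-log space to the survival function of the event volumes. The sketch below (in Python rather than the package's MATLAB®, purely for illustration) assumes the usual form N(V ≥ v) ∝ v^(−b); the fitting method and the synthetic data are not from the paper.

```python
import numpy as np

def magnitude_frequency_b(volumes):
    """Power-law exponent b of the cumulative magnitude-frequency curve."""
    v = np.sort(np.asarray(volumes))
    # Cumulative count of events with volume >= v (empirical survival function).
    n_ge = len(v) - np.arange(len(v))
    # A straight-line fit in log-log space gives the power-law exponent.
    slope, intercept = np.polyfit(np.log10(v), np.log10(n_ge), 1)
    return -slope

rng = np.random.default_rng(0)
vols = rng.pareto(1.0, 5000) + 1.0   # synthetic volumes with b close to 1
b = magnitude_frequency_b(vols)
print(round(b, 2))
```

Because the fitted b-value depends directly on the individual volume estimates, any bias in the volume computation propagates into the distribution, which is the sensitivity the case study highlights.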

