AIRBORNE LIDAR POINT CLOUD CLASSIFICATION BASED ON MULTILEVEL POINT CLUSTER FEATURES

Author(s):  
Y. Gao ◽  
M. C. Li

Abstract. Airborne Light Detection And Ranging (LiDAR) has become an important means for the efficient and high-precision acquisition of 3D spatial data of large scenes, with important application value in digital cities and location-based services. The classification and identification of point clouds is the basis of these applications and remains a difficult, actively studied problem in geographic information science. The difficulty of LiDAR point cloud classification in large-scale urban scenes is twofold. On the one hand, an urban-scene LiDAR point cloud contains rich and complex objects of many types, with varied shapes, complex structures, and mutual occlusion, resulting in substantial data loss. On the other hand, the LiDAR scanner is far from the urban objects, and some of them, such as cars and pedestrians, are in motion during scanning, which introduces a certain degree of noise and uneven point density. Addressing these characteristics of urban-scene LiDAR point clouds, this paper implements a LiDAR point cloud classification method based on a saliency dictionary and a Latent Dirichlet Allocation (LDA) model. The method uses the label information of the training data and the label source of each dictionary item to construct a saliency dictionary learning model in sparse coding, which expresses the features of a point set more accurately. A multi-path AdaBoost classifier is then applied to the multilevel point-set features, and the point cloud is classified in a supervised manner. The experimental results show that the feature set extracted by the method, combined with the multi-path classifier, can significantly improve point cloud classification accuracy in complex urban scenes.
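As a rough illustration of this type of pipeline (not the authors' implementation), the Python sketch below encodes pre-extracted point-set features with plain dictionary learning, standing in for the label-aware saliency dictionary, and classifies the resulting sparse codes with AdaBoost. All data, shapes, and parameter values are placeholders.

```python
# Hypothetical sketch: ordinary dictionary learning stands in for the paper's
# saliency (label-aware) dictionary; point-set features are assumed given.
import numpy as np
from sklearn.decomposition import DictionaryLearning
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 32))    # per-point-set features (placeholder)
y_train = rng.integers(0, 3, size=200)  # class labels (placeholder)

# Learn an overcomplete dictionary and encode each point-set feature sparsely.
dico = DictionaryLearning(n_components=64, alpha=1.0, max_iter=200, random_state=0)
codes_train = dico.fit_transform(X_train)

# One AdaBoost classifier on the sparse codes (the paper uses a multi-path variant).
clf = AdaBoostClassifier(n_estimators=100, random_state=0)
clf.fit(codes_train, y_train)

X_test = rng.normal(size=(20, 32))
codes_test = dico.transform(X_test)
print(clf.predict(codes_test))
```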

2019 ◽  
Vol 11 (23) ◽  
pp. 2846 ◽  
Author(s):  
Tong ◽  
Li ◽  
Zhang ◽  
Chen ◽  
Zhang ◽  
...  

Accurate and effective classification of lidar point clouds with discriminative feature expression is a challenging task for scene understanding. In order to improve the accuracy and the robustness of point cloud classification based on single point features, we propose a novel point set multi-level aggregation feature extraction and fusion method based on multi-scale max pooling and latent Dirichlet allocation (LDA). To this end, in the hierarchical point set feature extraction, point sets of different levels and sizes are first adaptively generated through multi-level clustering. Then, a more effective sparse representation is implemented by locality-constrained linear coding (LLC) based on single point features, which contributes to the extraction of discriminative individual point set features. Next, the local point set features are extracted by combining the max pooling method with a multi-scale pyramid structure constructed from the point coordinates within each point set. The global and the local features of the point sets are effectively expressed by fusing the multi-scale max pooling features with the global features constructed by the point set LLC-LDA model. The point clouds are then classified using the point set multi-level aggregation features. Our experiments on two scenes of airborne laser scanning (ALS) point clouds, a mobile laser scanning (MLS) scene point cloud, and a terrestrial laser scanning (TLS) scene point cloud demonstrate the effectiveness of the proposed point set multi-level aggregation features for point cloud classification, and the proposed method outperforms other related and compared algorithms.
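A minimal sketch of the general idea, under simplifying assumptions: point sets are formed by k-means clustering, a hard codeword assignment stands in for LLC, and each point set is described by fusing max-pooled codes with LDA topic proportions. The data and parameter choices below are illustrative only.

```python
# Illustrative sketch (not the authors' exact pipeline).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)
points = rng.normal(size=(5000, 3))        # x, y, z coordinates (placeholder)
point_feats = rng.normal(size=(5000, 16))  # per-point descriptors (placeholder)

# One level of point sets via clustering on coordinates.
sets = KMeans(n_clusters=50, n_init=10, random_state=0).fit_predict(points)

# Codebook over per-point descriptors; hard assignment stands in for LLC.
codebook = KMeans(n_clusters=32, n_init=10, random_state=0).fit(point_feats)
words = codebook.predict(point_feats)
codes = np.eye(32)[words]                  # one-hot code per point

set_features, bows = [], []
for s in range(50):
    mask = sets == s
    set_features.append(codes[mask].max(axis=0))         # max pooling per point set
    bows.append(np.bincount(words[mask], minlength=32))   # word counts for LDA
bows = np.asarray(bows)

# Global topic features per point set, fused with the pooled codes.
topics = LatentDirichletAllocation(n_components=8, random_state=0).fit_transform(bows)
fused = np.hstack([np.asarray(set_features), topics])
print(fused.shape)                         # (50, 40): pooled codes + topic proportions
```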


2020 ◽  
Vol 47 (11) ◽  
pp. 1110002
Author(s):  
雷相达 Lei Xiangda ◽  
王宏涛 Wang Hongtao ◽  
赵宗泽 Zhao Zongze

2019 ◽  
Vol 27 (7) ◽  
pp. 1601-1612
Author(s):  
赵 传 ZHAO Chuan ◽  
张保明 ZHANG Bao-ming ◽  
余东行 YU Dong-hang ◽  
郭海涛 GUO Hai-tao ◽  
卢 俊 LU Jun

2020 ◽  
Vol 12 (14) ◽  
pp. 2181
Author(s):  
Hangbin Wu ◽  
Huimin Yang ◽  
Shengyu Huang ◽  
Doudou Zeng ◽  
Chun Liu ◽  
...  

Existing deep learning methods for point cloud classification rely on abundant labeled samples for training. However, classification tasks are diverse, and not all tasks have enough labeled samples for training. In this paper, a novel point cloud classification method for indoor components using few labeled samples is proposed to address the dependence of deep learning classification methods on abundant labeled training samples. The method is composed of four parts: mixing samples, feature extraction, dimensionality reduction, and semantic classification. First, the few labeled point clouds are mixed with unlabeled point clouds. Next, high-dimensional features of the mixed samples are extracted using a deep learning framework. Subsequently, a nonlinear manifold learning method is used to embed the mixed features into a low-dimensional space. Finally, the few labeled point clouds in each cluster are identified, and semantic labels are assigned to the unlabeled point clouds in the same cluster by a neighborhood search strategy. The validity and versatility of the proposed method were validated by different experiments and compared with three state-of-the-art deep learning methods. Our method uses fewer than 30 labeled point clouds to achieve an accuracy 1.89–19.67% higher than that of the existing methods. More importantly, the experimental results suggest that this method is suitable not only for single-attribute indoor scenarios but also for comprehensive, complex indoor scenarios.
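The sketch below illustrates the overall strategy rather than the paper's exact pipeline: placeholder deep features of mixed labeled and unlabeled samples are embedded with t-SNE (one possible nonlinear manifold learning method), clustered, and each cluster's few known labels are propagated to its unlabeled members.

```python
# Rough sketch under assumed placeholder features; the deep feature extractor
# and the neighborhood search details of the paper are not reproduced here.
import numpy as np
from sklearn.manifold import TSNE
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
feats = rng.normal(size=(500, 128))          # deep features of mixed samples (placeholder)
labels = np.full(500, -1)                    # -1 = unlabeled
labels[:25] = rng.integers(0, 5, size=25)    # fewer than 30 labeled samples

# Nonlinear embedding into a low-dimensional space, then clustering.
emb = TSNE(n_components=2, random_state=0).fit_transform(feats)
clusters = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(emb)

# Pass each cluster's majority known label to its unlabeled members.
pred = labels.copy()
for c in range(5):
    members = np.where(clusters == c)[0]
    known = labels[members][labels[members] >= 0]
    if known.size:
        pred[members] = np.bincount(known).argmax()
print(np.unique(pred))
```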


2019 ◽  
Vol 9 (5) ◽  
pp. 951 ◽  
Author(s):  
Yong Li ◽  
Guofeng Tong ◽  
Xiance Du ◽  
Xiang Yang ◽  
Jianjun Zhang ◽  
...  

3D point cloud classification has wide applications in the field of scene understanding. Point cloud classification based on points can more accurately segment the boundary region between adjacent objects. In this paper, a point cloud classification algorithm based on single-point multilevel feature fusion and pyramid neighborhood optimization is proposed for Airborne Laser Scanning (ALS) point clouds. First, the proposed algorithm determines the neighborhood region of each point, after which the features of each single point are extracted. For the characteristics of the ALS point cloud, two new feature descriptors are proposed, i.e., a normal angle distribution histogram and a latitude sampling histogram. Following this, multilevel features of a single point are constructed from multiple resolutions of the point cloud and multiple neighborhood spaces. Next, the features are used to train a Support Vector Machine with a Gaussian kernel function, and the points are classified by the trained model. Finally, a classification result optimization method based on a multi-scale pyramid neighborhood constructed from a multi-resolution point cloud is applied. In the experiments, the algorithm is tested on a public dataset. The experimental results show that the proposed algorithm can effectively classify large-scale ALS point clouds. Compared with the existing algorithms, the proposed algorithm has better classification performance.
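A hedged sketch of one of the described ingredients: per-point normals estimated from local PCA, a normal-angle histogram over each point's neighborhood as a simple descriptor, and an RBF-kernel SVM. The array sizes, neighborhood size, and bin count are illustrative, not the paper's settings.

```python
# Sketch only: the latitude sampling histogram, multilevel construction,
# and pyramid optimization of the paper are not reproduced here.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.svm import SVC

rng = np.random.default_rng(0)
points = rng.normal(size=(1000, 3))
labels = rng.integers(0, 3, size=1000)       # placeholder ground truth

k = 20
nbrs = NearestNeighbors(n_neighbors=k).fit(points)
_, idx = nbrs.kneighbors(points)

# Angle between each point's PCA normal and the vertical axis.
angles = np.empty(len(points))
for i, neigh in enumerate(idx):
    local = points[neigh] - points[neigh].mean(axis=0)
    _, _, vt = np.linalg.svd(local, full_matrices=False)
    angles[i] = np.arccos(abs(vt[-1][2]))    # vt[-1] = smallest-variance direction

# Descriptor: histogram of the neighbors' normal angles.
descriptors = np.array([
    np.histogram(angles[neigh], bins=8, range=(0, np.pi / 2), density=True)[0]
    for neigh in idx
])

clf = SVC(kernel="rbf", gamma="scale").fit(descriptors, labels)
print(clf.predict(descriptors[:5]))
```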


2018 ◽  
Vol 10 (8) ◽  
pp. 1192 ◽  
Author(s):  
Chen-Chieh Feng ◽  
Zhou Guo

Automating the classification of point clouds capturing urban scenes is critical for supporting applications that demand three-dimensional (3D) models. Achieving this goal, however, is met with challenges because of the varying densities of the point clouds and the complexity of the 3D data. In order to increase the level of automation in point cloud classification, this study proposes a segment-based parameter learning method that incorporates a two-dimensional (2D) land cover map, in which a strategy of fusing the 2D land cover map and the 3D points is first adopted to create labelled samples, and a formalized procedure is then implemented to automatically learn the following parameters of point cloud classification: the optimal scale of the neighborhood for segmentation, the optimal feature set, and the trained classifier. It comprises four main steps, namely: (1) point cloud segmentation; (2) sample selection; (3) optimal feature set selection; and (4) point cloud classification. Three datasets containing point cloud data were used in this study to validate the efficiency of the proposed method. The first two datasets cover two areas of the National University of Singapore (NUS) campus, while the third dataset is a widely used benchmark point cloud dataset of Oakland, Pennsylvania. The classification parameters were learned from the first dataset, consisting of terrestrial laser-scanning data and a 2D land cover map, and were subsequently used to classify both of the NUS datasets. The evaluation of the classification results showed overall accuracies of 94.07% and 91.13%, respectively, indicating that the transfer of the knowledge learned from one dataset to another was satisfactory. The classification of the Oakland dataset achieved an overall accuracy of 97.08%, which further verified the transferability of the proposed approach. An experiment on point-based classification was also conducted on the first dataset and the result was compared to that of the segment-based classification. The evaluation revealed that the overall accuracy of the segment-based classification is indeed higher than that of the point-based classification, demonstrating the advantage of segment-based approaches.
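A minimal sketch of the label-transfer idea behind the sample-selection step, assuming a georeferenced land-cover raster aligned with the point cloud: each 3D point inherits the class of the raster cell beneath its (x, y) position. The grid origin, cell size, and class codes are placeholders, not the study's data.

```python
# Hypothetical helper: assign 2D land-cover classes to 3D points by cell lookup.
import numpy as np

def label_points_from_landcover(points_xyz, landcover, origin, cell_size):
    """landcover: 2D array of class codes; origin: (x0, y0) of cell [0, 0]."""
    cols = ((points_xyz[:, 0] - origin[0]) / cell_size).astype(int)
    rows = ((points_xyz[:, 1] - origin[1]) / cell_size).astype(int)
    inside = (rows >= 0) & (rows < landcover.shape[0]) & \
             (cols >= 0) & (cols < landcover.shape[1])
    labels = np.full(len(points_xyz), -1)            # -1 = outside the map extent
    labels[inside] = landcover[rows[inside], cols[inside]]
    return labels

rng = np.random.default_rng(0)
pts = rng.uniform(0, 100, size=(1000, 3))            # placeholder point cloud
lc = rng.integers(0, 4, size=(100, 100))             # placeholder land-cover grid
labels = label_points_from_landcover(pts, lc, origin=(0.0, 0.0), cell_size=1.0)
print(np.bincount(labels + 1))                       # label counts, including "outside"
```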


Author(s):  
E. Barnefske ◽  
H. Sternberg

Abstract. Point clouds give a very detailed and sometimes very accurate representation of the geometry of captured objects. In surveying, point clouds captured with laser scanners or camera systems are an intermediate result that must be processed further. Often the point cloud has to be divided into regions of similar types (object classes) for the next processing steps. These classifications are very time-consuming and cost-intensive compared to acquisition. In order to automate this processing step, convolutional neural networks (ConvNets), which take over the classification task, are investigated in detail. In addition to the network architecture, the classification performance of a ConvNet depends on the training data with which the task is learned. This paper presents and evaluates the point cloud classification tool (PCCT) developed at HCU Hamburg. With the PCCT, large point cloud collections can be semi-automatically classified. Furthermore, the influence of erroneous points in three-dimensional point clouds is investigated. The PointNet network architecture is used for this investigation.
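For orientation, a stripped-down PointNet-style classifier is sketched below (a shared per-point MLP followed by order-invariant max pooling); it omits the input and feature transform networks of the full PointNet and is not the PCCT implementation.

```python
# Minimal PointNet-style classifier sketch in PyTorch; sizes are illustrative.
import torch
import torch.nn as nn

class TinyPointNet(nn.Module):
    def __init__(self, num_classes=8):
        super().__init__()
        self.point_mlp = nn.Sequential(          # shared MLP applied to every point
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
            nn.Conv1d(128, 1024, 1), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Linear(1024, 256), nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, xyz):                      # xyz: (batch, 3, num_points)
        per_point = self.point_mlp(xyz)
        global_feat = per_point.max(dim=2).values    # permutation-invariant pooling
        return self.head(global_feat)

model = TinyPointNet()
logits = model(torch.randn(4, 3, 1024))          # 4 clouds of 1024 points each
print(logits.shape)                              # torch.Size([4, 8])
```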


Author(s):  
Wenju Wang ◽  
Tao Wang ◽  
Yu Cai

Abstract. Classifying 3D point clouds is an important and challenging task in computer vision. Currently, classification methods using multiple views lose characteristic or detail information during the representation or processing of the views. For this reason, we propose a multi-view attention-convolution pooling network framework for 3D point cloud classification tasks. This framework uses Res2Net to extract features from multiple 2D views. Our attention-convolution pooling method finds more of the information in the input data that is relevant to the current output, effectively alleviating the loss of feature information caused by the view representation and the loss of detail information during dimensionality reduction. Finally, we obtain the class probability distribution of the model to be classified using a fully connected layer and the softmax function. The experimental results show that our framework achieves higher classification accuracy and better performance than other contemporary methods on the ModelNet40 dataset.
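The sketch below shows one simple way to realize attention-based pooling over per-view features, as a stand-in for the paper's attention-convolution pooling; in the paper the view features would come from Res2Net, whereas here they are random placeholders.

```python
# Illustrative attention pooling over multi-view features (PyTorch).
import torch
import torch.nn as nn

class ViewAttentionPool(nn.Module):
    def __init__(self, feat_dim=512, num_classes=40):
        super().__init__()
        self.score = nn.Linear(feat_dim, 1)       # one attention score per view
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, view_feats):                # (batch, num_views, feat_dim)
        weights = torch.softmax(self.score(view_feats), dim=1)
        pooled = (weights * view_feats).sum(dim=1)    # weighted fusion of the views
        return self.classifier(pooled)

model = ViewAttentionPool()
logits = model(torch.randn(2, 12, 512))           # 2 shapes, 12 rendered views each
print(logits.shape)                               # torch.Size([2, 40])
```

Weighted fusion of this kind keeps information from every view instead of discarding all but the maximum response, which is the loss the abstract describes.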


Author(s):  
T. Shinohara ◽  
H. Xiu ◽  
M. Matsuoka

Abstract. This study introduces a novel image-to-3D-point-cloud translation method with a conditional generative adversarial network that creates large-scale 3D point clouds. It can generate point clouds, like those observed via airborne LiDAR, from aerial images in a supervised manner. The network is composed of an encoder that produces latent features of the input images, a generator that translates the latent features into fake point clouds, and a discriminator that classifies point clouds as real or fake. The encoder is a pre-trained ResNet; to overcome the difficulty of generating 3D point clouds of an outdoor scene, we use a FoldingNet with the features from the ResNet. After a fixed number of iterations, our generator can produce fake point clouds that correspond to the input image. Experimental results show that our network can learn and generate such point clouds using the data from the 2018 IEEE GRSS Data Fusion Contest.
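A rough sketch of a FoldingNet-style decoder, which concatenates a fixed 2D grid with the image latent code and folds it into 3D with shared MLPs; the ResNet encoder, the discriminator, and the adversarial training loop are omitted, and all layer sizes are illustrative.

```python
# Hedged sketch of the generator's folding step only (PyTorch).
import torch
import torch.nn as nn

class FoldingDecoder(nn.Module):
    def __init__(self, latent_dim=512, grid_size=45):
        super().__init__()
        u = torch.linspace(-1, 1, grid_size)
        grid = torch.stack(torch.meshgrid(u, u, indexing="ij"), dim=-1).reshape(-1, 2)
        self.register_buffer("grid", grid)                 # fixed 2D grid (grid_size^2, 2)
        self.fold1 = nn.Sequential(
            nn.Linear(latent_dim + 2, 512), nn.ReLU(),
            nn.Linear(512, 3),
        )
        self.fold2 = nn.Sequential(
            nn.Linear(latent_dim + 3, 512), nn.ReLU(),
            nn.Linear(512, 3),
        )

    def forward(self, latent):                             # latent: (batch, latent_dim)
        n = self.grid.shape[0]
        codes = latent.unsqueeze(1).expand(-1, n, -1)      # repeat the code per grid point
        grid = self.grid.expand(latent.shape[0], -1, -1)
        mid = self.fold1(torch.cat([codes, grid], dim=-1))     # first folding: 2D -> 3D
        return self.fold2(torch.cat([codes, mid], dim=-1))     # second folding refines the surface

decoder = FoldingDecoder()
fake_points = decoder(torch.randn(2, 512))                 # latent codes from an image encoder
print(fake_points.shape)                                   # torch.Size([2, 2025, 3])
```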

