Reconstruction of polygonal prisms from point-clouds of engineering facilities

2016 ◽  
Vol 3 (4) ◽  
pp. 322-329 ◽  
Author(s):  
Akisato Chida ◽  
Hiroshi Masuda

Abstract: The advent of high-performance terrestrial laser scanners has made it possible to capture dense point-clouds of engineering facilities. Acquiring 3D shapes of engineering facilities is useful for supporting maintenance and repair tasks. In this paper, we discuss methods for reconstructing box shapes and polygonal prisms from large-scale point-clouds. Since many faces may be partly occluded by other objects in engineering plants, we estimate candidate box shapes and polygonal prisms and verify their compatibility with the measured point-clouds. We evaluate our method using actual point-clouds of engineering plants.
Highlights: This paper proposes a point-based reconstruction method for boxes and polygonal prisms in engineering plants. Many faces may be partly occluded by other objects in such plants. In our method, candidate shapes are estimated and then verified against their compatibility with the measured point-clouds. In our experiments, our method achieved high precision and recall rates.
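The hypothesize-and-verify idea in this abstract — accept a candidate face only if enough measured points actually support it — can be sketched as a simple inlier test. This is a minimal illustration, not the authors' implementation; the function name and threshold are hypothetical.

```python
import numpy as np

def plane_support(points, normal, d, tol=0.01):
    """Fraction of points lying within `tol` of the plane n.x + d = 0.

    A candidate face of a box or prism is kept only if its support
    ratio is high enough, which tolerates partial occlusion because
    occluded regions simply contribute no points either way.
    """
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    dist = np.abs(points @ n + d)
    return float(np.mean(dist < tol))

# Synthetic example: 200 points on the plane z = 0 plus 50 random outliers.
rng = np.random.default_rng(0)
on_plane = np.column_stack([rng.uniform(0, 1, (200, 2)), np.zeros(200)])
outliers = rng.uniform(0, 1, (50, 3))
cloud = np.vstack([on_plane, outliers])

ratio = plane_support(cloud, normal=[0, 0, 1], d=0.0, tol=0.01)
```

A full pipeline would run such a test for every candidate face of an estimated box or prism and accept the shape only when all visible faces are supported.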

Author(s):  
Hiroki Okamoto ◽  
Hiroshi Masuda

In this paper, we discuss methods for efficiently rendering stereoscopic scenes of large-scale point-clouds on inexpensive VR systems. Terrestrial laser scanners have improved significantly in recent years, and they can capture tens of millions of points in a short time from large fields such as engineering plants. If 3D stereoscopic scenes of large-scale point-clouds could be rendered easily on inexpensive devices, they could be used even in casual product-development phases. However, it is difficult to render such a huge number of points on common PCs, because VR systems require high frame rates to avoid VR sickness. To solve this problem, we introduce an efficient culling method for large-scale point-clouds. In our method, all points are projected onto angle-space panoramic images whose axes are the azimuth and elevation angles of the head direction. Occluded and redundant points are then eliminated according to the resolution of the device. Once visible points are selected, they can be rendered at high frame rates. Visible points are updated when the user stays at a certain position to observe target objects. Since points are processed in image space, preprocessing is very fast. In our experiments, our method rendered stereoscopic views of large-scale point-clouds at high frame rates.
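The angle-space culling step described above — project every point into an azimuth/elevation panoramic grid around the viewpoint and keep only the nearest point per pixel — can be sketched as follows. This is an illustrative sketch under assumed bin counts, not the paper's implementation.

```python
import numpy as np

def cull_points(points, viewpoint, az_bins=360, el_bins=180):
    """Keep only the nearest point per (azimuth, elevation) pixel.

    Points that fall into the same angular pixel as a nearer point
    are occluded or redundant at the device's resolution, so they
    are discarded before rendering.
    """
    v = points - viewpoint
    r = np.linalg.norm(v, axis=1)
    az = np.arctan2(v[:, 1], v[:, 0])             # azimuth in [-pi, pi]
    el = np.arcsin(np.clip(v[:, 2] / r, -1, 1))   # elevation in [-pi/2, pi/2]
    ai = np.clip(((az + np.pi) / (2 * np.pi) * az_bins).astype(int), 0, az_bins - 1)
    ei = np.clip(((el + np.pi / 2) / np.pi * el_bins).astype(int), 0, el_bins - 1)
    pix = ai * el_bins + ei
    # Sort by range so the first occurrence per pixel is the nearest point.
    order = np.argsort(r)
    _, first = np.unique(pix[order], return_index=True)
    return points[order[first]]

# Two points on the same ray plus one on another ray: the farther
# point on the shared ray is culled.
pts = np.array([[1.0, 0.0, 0.0], [2.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
kept = cull_points(pts, viewpoint=np.zeros(3))
```

Because the visibility test is a per-pixel depth comparison rather than a geometric occlusion query, the preprocessing cost stays linear in the number of points.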


2012 ◽  
Vol 523-524 ◽  
pp. 333-338
Author(s):  
Hiroshi Masuda ◽  
Ryo Matsuoka ◽  
Yuji Abe

Engineering facilities can be digitized as large-scale point-clouds using state-of-the-art mid-range laser scanners. To utilize the captured data in CAD systems, it is important to convert point-clouds into parametric surfaces. In this paper, we describe a method to robustly extract cylindrical and planar faces. Edges and silhouette lines must be generated to construct bounded faces, but unfortunately points on silhouettes are very noisy when captured with mid-range laser scanners. Our method applies region growing in a spherical space, which improves robustness. In addition, we enhance the region growing so that surface regions can be propagated to disconnected points using multiple overlapping point-clouds.
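The core of the region-growing step — start from a seed point and greedily absorb neighbours whose normals agree with the seed's surface — can be sketched for the planar case. A brute-force radius search stands in for the spherical-space neighbourhood structure of the paper; the function name and tolerances are hypothetical.

```python
import numpy as np

def grow_planar_region(points, normals, seed, angle_tol=0.15, radius=0.2):
    """Greedy region growing for a planar patch.

    A neighbour joins the region when its normal deviates from the
    seed normal by less than `angle_tol` radians; growth then
    continues from the newly added point.
    """
    n_seed = normals[seed] / np.linalg.norm(normals[seed])
    in_region = np.zeros(len(points), dtype=bool)
    in_region[seed] = True
    frontier = [seed]
    while frontier:
        i = frontier.pop()
        d = np.linalg.norm(points - points[i], axis=1)
        for j in np.where((d < radius) & ~in_region)[0]:
            nj = normals[j] / np.linalg.norm(normals[j])
            if np.arccos(np.clip(abs(nj @ n_seed), -1.0, 1.0)) < angle_tol:
                in_region[j] = True
                frontier.append(j)
    return np.where(in_region)[0]

# Toy data: a 5x5 planar patch plus a few nearby points whose normals
# point sideways; the latter must be rejected despite being close.
gx, gy = np.meshgrid(np.arange(5) * 0.1, np.arange(5) * 0.1)
plane_pts = np.column_stack([gx.ravel(), gy.ravel(), np.zeros(25)])
plane_nrm = np.tile([0.0, 0.0, 1.0], (25, 1))
odd_pts = plane_pts[:5] + [0.05, 0.05, 0.0]
odd_nrm = np.tile([1.0, 0.0, 0.0], (5, 1))
pts = np.vstack([plane_pts, odd_pts])
nrm = np.vstack([plane_nrm, odd_nrm])
region = grow_planar_region(pts, nrm, seed=0)
```

The cylindrical case works analogously, with the normal test replaced by a compatibility test against the estimated axis and radius.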


2016 ◽  
Vol 10 (2) ◽  
pp. 163-171 ◽  
Author(s):  
Takuma Watanabe ◽  
Takeru Niwa ◽  
Hiroshi Masuda

We propose a registration method for aligning short-range point-clouds captured using a portable laser scanner (PLS) with a large-scale point-cloud captured using a terrestrial laser scanner (TLS). Because a PLS covers only a very limited region, it often fails to provide sufficient features for registration. In our method, the system analyzes the large-scale point-cloud captured with the TLS and indicates candidate regions to be measured with the PLS. When the user measures a suggested region, the system aligns the captured short-range point-cloud with the large-scale point-cloud. Our experiments show that the method can adequately align point-clouds captured using a TLS and a PLS.
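Once correspondences between the short-range and large-scale clouds are established, the final alignment reduces to a rigid least-squares fit. The Kabsch/SVD solution below is a standard building block for that step, shown as a sketch; it is not the paper's full region-suggestion pipeline.

```python
import numpy as np

def kabsch(src, dst):
    """Best-fit rotation R and translation t minimising ||R @ src_i + t - dst_i||.

    src and dst are (N, 3) arrays of corresponding points. The SVD of
    the cross-covariance matrix yields the optimal rotation; the sign
    correction guards against reflections.
    """
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

# Recover a known rigid motion from noise-free correspondences.
rng = np.random.default_rng(1)
src = rng.normal(size=(10, 3))
ang = 0.3
R_true = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                   [np.sin(ang),  np.cos(ang), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -1.0, 2.0])
dst = src @ R_true.T + t_true
R, t = kabsch(src, dst)
```

In practice this fit would be iterated with correspondence re-estimation (as in ICP) after the coarse alignment provided by the suggested region.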


2018 ◽  
Vol 12 (3) ◽  
pp. 327-327
Author(s):  
Hiroshi Masuda ◽  
Hiroaki Date

Recently, terrestrial laser scanners have improved significantly in terms of accuracy, measurement distance, measurement speed, and resolution. They enable us to capture dense 3D point clouds of large-scale objects and fields, such as factories, engineering plants, large equipment, and transport ships. In addition, the mobile mapping system, a vehicle equipped with laser scanners and GPS receivers, can capture large-scale point clouds from a wide range of roads, buildings, and roadside objects. Large-scale point clouds are useful in a variety of applications, such as renovation and maintenance of facilities, engineering simulation, asset management, and 3D mapping. To realize these applications, new techniques must be developed for processing large-scale point clouds. So far, point processing has been studied mainly for relatively small objects in the fields of computer-aided design and computer graphics. In recent years, however, the application areas of point clouds have expanded beyond these conventional domains to manufacturing, civil engineering, construction, transportation, forestry, and so on, because state-of-the-art laser scanners can represent large objects and fields as dense point clouds. We believe it is very important to discuss new techniques and applications related to large-scale point clouds beyond the boundaries of traditional academic fields.
This special issue addresses the latest research advances in large-scale point cloud processing. It covers a wide area of point processing, including shape reconstruction, geometry processing, object recognition, registration, visualization, and applications. The papers will help readers explore and share knowledge and experience in these technologies and development techniques.
All papers were refereed through careful peer review.
We would like to express our sincere appreciation to the authors for their submissions and to the reviewers for their invaluable efforts for ensuring the success of this special issue.


2017 ◽  
Vol 55 (9) ◽  
pp. 4839-4854 ◽  
Author(s):  
Yangbin Lin ◽  
Cheng Wang ◽  
Bili Chen ◽  
Dawei Zai ◽  
Jonathan Li

2022 ◽  
Vol 193 ◽  
pp. 106653
Author(s):  
Hejun Wei ◽  
Enyong Xu ◽  
Jinlai Zhang ◽  
Yanmei Meng ◽  
Jin Wei ◽  
...  

2019 ◽  
Vol 93 (3) ◽  
pp. 411-429 ◽  
Author(s):  
Maria Immacolata Marzulli ◽  
Pasi Raumonen ◽  
Roberto Greco ◽  
Manuela Persia ◽  
Patrizia Tartarino

Abstract: Methods for the three-dimensional (3D) reconstruction of forest trees have been suggested for data from both active and passive sensors. Laser scanner technologies have become popular in recent years, despite their high costs. With the improvement of photogrammetric algorithms (e.g. structure from motion, SfM), photographs have become a new low-cost source of 3D point clouds. In this study, we use images captured by a smartphone camera to calculate dense point clouds of a forest plot using SfM. Eighteen point clouds were produced by varying the densification parameters (Image scale, Point density, Minimum number of matches) in order to investigate their influence on the quality of the point clouds produced. To estimate diameter at breast height (d.b.h.) and stem volumes, we developed an automatic method that extracts the stems from the point cloud and then models them with cylinders. The results show that Image scale is the most influential parameter for identifying and extracting trees from the point clouds. The best cylinder-modelling performance, compared against field data, had an RMSE of 1.9 cm for d.b.h. and 0.094 m3 for volume. Thus, for forest management and planning purposes, our photogrammetric and modelling methods can be used to measure d.b.h., stem volume and possibly other forest inventory metrics rapidly and without felling trees. The proposed methodology significantly reduces working time in the field, using 'non-professional' instruments and automating estimates of dendrometric parameters.
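The cylinder-modelling step that yields d.b.h. reduces, for a single stem cross-section at 1.3 m height, to fitting a circle to the slice points. The algebraic (Kasa) least-squares fit below sketches that computation; it is a hypothetical helper, not the authors' cylinder-modelling code.

```python
import numpy as np

def fit_circle(xy):
    """Algebraic (Kasa) least-squares circle fit to 2-D points.

    Rewrites (x - cx)^2 + (y - cy)^2 = r^2 as a linear system
    2*cx*x + 2*cy*y + c = x^2 + y^2 with c = r^2 - cx^2 - cy^2,
    then solves for (cx, cy, c) in the least-squares sense.
    """
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x**2 + y**2
    cx, cy, c = np.linalg.lstsq(A, b, rcond=None)[0]
    r = np.sqrt(c + cx**2 + cy**2)
    return (cx, cy), r

# Stem slice at breast height: points on a circle of radius 0.12 m.
theta = np.linspace(0, 2 * np.pi, 60, endpoint=False)
slice_pts = np.column_stack([0.5 + 0.12 * np.cos(theta),
                             1.0 + 0.12 * np.sin(theta)])
center, radius = fit_circle(slice_pts)
dbh = 2 * radius   # diameter at breast height in metres
```

Stacking such fits along the stem axis and summing the resulting cylinder volumes gives the stem-volume estimate reported in the abstract.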


Author(s):  
Z. Li ◽  
W. Zhang ◽  
J. Shan

Abstract. Building models are conventionally reconstructed from building roof points via planar segmentation, followed by a topology graph that groups the planes together. Roof edges and vertices are then represented mathematically by intersecting the segmented planes. Technically, such a solution is based on sequential local fitting, i.e., the entire data of one building do not participate simultaneously in determining the building model. As a consequence, the solution lacks topological integrity and geometric rigor. Fundamentally different from this traditional approach, we propose a holistic parametric reconstruction method, which takes the entire point cloud of one building into consideration simultaneously. In our work, building models are reconstructed from predefined parametric (roof) primitives. We first use a well-designed deep neural network to segment and identify primitives in the given building point clouds. A holistic optimization strategy is then introduced to simultaneously determine the parameters of each segmented primitive. In the last step, the optimal parameters are used to generate a watertight building model in CityGML format. The airborne LiDAR dataset RoofN3D, with predefined roof types, is used for our tests. PointNet++ applied to the entire dataset achieves an accuracy of 83% for primitive classification. For a subset of 910 buildings in RoofN3D, the holistic approach is then used to determine the parameters of the primitives and reconstruct the buildings. The achieved overall reconstruction quality is 0.08 m point-to-surface distance, or 0.7 times the RMSE of the input LiDAR points. This study demonstrates the efficiency and capability of the proposed approach and its potential to handle large-scale urban point clouds.
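The point-to-surface distance used above as the reconstruction quality measure can be sketched, for planar roof primitives, as the RMSE of each point's distance to its nearest reconstructed plane. This is an illustrative sketch of the metric only; the function name and plane representation are assumptions.

```python
import numpy as np

def point_surface_rmse(points, planes):
    """RMSE of each point's distance to its nearest reconstructed plane.

    Each plane is given as (unit normal n, offset d) with the surface
    defined by n.x + d = 0; a faceted model would additionally clip
    each plane to its face polygon, which is omitted here.
    """
    dists = np.stack([np.abs(points @ n + d) for n, d in planes])
    nearest = dists.min(axis=0)
    return float(np.sqrt(np.mean(nearest**2)))

# Two points near two horizontal planes (z = 0 and z = 1), each 0.1 m off.
pts = np.array([[0.0, 0.0, 0.1], [0.0, 0.0, 0.9]])
planes = [(np.array([0.0, 0.0, 1.0]), 0.0),    # plane z = 0
          (np.array([0.0, 0.0, 1.0]), -1.0)]   # plane z = 1
rmse = point_surface_rmse(pts, planes)
```

Reporting this RMSE relative to the RMSE of the input LiDAR points, as the abstract does, normalises the quality measure against sensor noise.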

