BRUTE FORCE MATCHING BETWEEN CAMERA SHOTS AND SYNTHETIC IMAGES FROM POINT CLOUDS

Author(s):  
R. Boerner ◽  
M. Kröhnert

3D point clouds, acquired by state-of-the-art terrestrial laser scanning (TLS) techniques, provide spatial information with accuracies of up to several millimetres. Unfortunately, common TLS data contains no spectral information about the covered scene. However, the matching of TLS data with images is important for monoplotting purposes and point cloud colouration. Well-established methods solve this issue by matching close-range images with point cloud data, either by mounting optical camera systems on top of laser scanners or by using ground control points.

The approach addressed in this paper aims at matching 2D image and 3D point cloud data from a freely moving camera within an environment covered by a large 3D point cloud, e.g. a 3D city model. The free movement of the camera particularly benefits augmented reality applications and real-time measurements. To this end, a so-called real image, captured by a smartphone camera, has to be matched with a so-called synthetic image, which consists of 3D point cloud data reverse-projected to a synthetic projection centre whose exterior orientation parameters match those of the real image, assuming an ideal, distortion-free camera.
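
The rendering of such a synthetic image can be sketched as a simple point projection. The following is a minimal sketch under the paper's stated assumption of an ideal, distortion-free camera; the function name, parameters, and z-buffer handling are illustrative choices, not the authors' implementation.

```python
# A minimal sketch, assuming an ideal distortion-free pinhole camera: 3D points are
# transformed into the camera frame given by the exterior orientation (R, C), projected
# onto the image plane, and the nearest point per pixel is kept via a simple z-buffer.
import numpy as np

def render_synthetic_image(points, values, R, C, f, width, height):
    """points: Nx3 world coordinates; values: N per-point intensities/colours;
    R: 3x3 rotation (world -> camera); C: projection centre; f: focal length in pixels."""
    cam = (R @ (points - C).T).T                 # world -> camera frame
    in_front = cam[:, 2] > 0                     # keep only points in front of the camera
    cam, values = cam[in_front], values[in_front]

    # Central projection onto the image plane, principal point at the image centre.
    u = (f * cam[:, 0] / cam[:, 2] + width / 2).astype(int)
    v = (f * cam[:, 1] / cam[:, 2] + height / 2).astype(int)
    valid = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    u, v, z, val = u[valid], v[valid], cam[valid, 2], values[valid]

    image = np.zeros((height, width), dtype=values.dtype)
    zbuffer = np.full((height, width), np.inf)
    for ui, vi, zi, vali in zip(u, v, z, val):   # nearest point wins per pixel
        if zi < zbuffer[vi, ui]:
            zbuffer[vi, ui] = zi
            image[vi, ui] = vali
    return image
```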


2010 ◽  
Vol 22 (2) ◽  
pp. 158-166 ◽  
Author(s):  
Taro Suzuki ◽  
Yoshiharu Amano ◽  
Takumi Hashizume

This paper describes outdoor localization for a mobile robot using a laser scanner and three-dimensional (3D) point cloud data. A Mobile Mapping System (MMS) measures outdoor 3D point clouds easily and precisely. The full six-dimensional state of the mobile robot is estimated by combining dead reckoning with the 3D point cloud data: the two-dimensional (2D) position and orientation are extended to 3D using the 3D point clouds, assuming that the mobile robot remains in continuous contact with the road surface. Our approach applies a particle filter to correct the position error, using a laser measurement model evaluated in 3D point cloud space. Field experiments were conducted to evaluate the accuracy of our proposal, and the results confirm that a localization precision of 0.2 m (RMS) is achievable with the proposed method.
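
The cycle of dead-reckoning prediction, point-cloud-based correction, and resampling can be sketched as follows. This is a schematic reduction to a planar (x, y, yaw) state for brevity; the k-d tree lookup into the point cloud map and the Gaussian weighting are illustrative assumptions, not the authors' exact laser measurement model.

```python
# Schematic particle filter step: dead-reckoning prediction with process noise,
# a correction weighting each particle by scan-to-map agreement, and resampling.
import numpy as np
from scipy.spatial import cKDTree

def particle_filter_step(particles, odom_delta, scan_xy, map_tree, sigma=0.2):
    """particles: Nx3 array of (x, y, yaw); odom_delta: (dx, dy, dyaw) in the robot frame;
    scan_xy: Mx2 laser points in the robot frame; map_tree: cKDTree over the 2D
    projection of the 3D point cloud map."""
    particles = particles.copy()
    n = len(particles)

    # Prediction: apply dead reckoning with additive process noise.
    noise = np.random.normal(scale=[0.05, 0.05, 0.01], size=(n, 3))
    c, s = np.cos(particles[:, 2]), np.sin(particles[:, 2])
    particles[:, 0] += c * odom_delta[0] - s * odom_delta[1] + noise[:, 0]
    particles[:, 1] += s * odom_delta[0] + c * odom_delta[1] + noise[:, 1]
    particles[:, 2] += odom_delta[2] + noise[:, 2]

    # Correction: weight each particle by how well the scan agrees with the map.
    weights = np.empty(n)
    for i, (x, y, yaw) in enumerate(particles):
        rot = np.array([[np.cos(yaw), -np.sin(yaw)], [np.sin(yaw), np.cos(yaw)]])
        world = scan_xy @ rot.T + [x, y]
        d, _ = map_tree.query(world)             # distance to the nearest map point
        weights[i] = np.exp(-0.5 * np.mean(d ** 2) / sigma ** 2)
    weights /= weights.sum()

    # Resampling (multinomial, to keep the sketch short).
    return particles[np.random.choice(n, size=n, p=weights)]
```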


2019 ◽  
Vol 8 (10) ◽  
pp. 460
Author(s):  
Gracchi ◽  
Gigli ◽  
Noël ◽  
Jaboyedoff ◽  
Madiai ◽  
...  

In this paper, a MATLAB tool for the automatic detection of the best locations to install a wireless sensor network (WSN) is presented. The implemented code works directly on high-resolution 3D point clouds and aims to help in positioning sensors that are part of a network requiring inter-visibility, namely, a clear line of sight (LOS). Indeed, with the development of LiDAR and Structure from Motion technologies, there is an opportunity to use 3D point cloud data directly for visibility analyses, bypassing many disadvantages of traditional modelling and analysis methods. The algorithm determines the optimal deployment of devices following two main criteria: inter-visibility (using a modified version of the Hidden Point Removal operator) and inter-distance. Furthermore, an option to prioritize significant areas is provided. The proposed method was first validated on an artificial 3D model, and then on a landslide 3D point cloud acquired from terrestrial laser scanning for the real positioning of an ultrawide-band WSN already installed in 2016. The comparison between the collected data and data acquired by the WSN installed following traditional patterns demonstrates the tool's ability to optimally deploy a WSN requiring inter-visibility.
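
The visibility criterion can be sketched with the classic Hidden Point Removal operator (spherical flip followed by a convex hull) applied directly to the point cloud; note that the paper uses a modified version of this operator, and the radius factor below is an assumed parameter.

```python
# Compact sketch of the Hidden Point Removal operator: flip the cloud about a sphere
# centred on the viewpoint, then take the convex hull; hull points are the visible ones.
import numpy as np
from scipy.spatial import ConvexHull

def hpr_visible_indices(points, viewpoint, radius_factor=100.0):
    """Return indices of the Nx3 cloud `points` that are visible from `viewpoint`
    (which must not coincide with a cloud point)."""
    p = points - viewpoint                               # centre the cloud on the viewpoint
    norms = np.linalg.norm(p, axis=1, keepdims=True)
    radius = norms.max() * radius_factor
    flipped = p + 2.0 * (radius - norms) * p / norms     # spherical flip transform
    hull = ConvexHull(np.vstack([flipped, np.zeros(3)])) # include the viewpoint (origin)
    return hull.vertices[hull.vertices < len(points)]    # hull vertices, minus the viewpoint
```

Two candidate sensor locations can then be treated as inter-visible when each lies within a small tolerance of a point marked visible from the other, while the inter-distance criterion simply thresholds their Euclidean separation.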


Author(s):  
C. Beil ◽  
T. Kutzner ◽  
B. Schwab ◽  
B. Willenborg ◽  
A. Gawronski ◽  
...  

Abstract. A range of different and increasingly accessible acquisition methods, the possibility for frequent data updates of large areas, and a simple data structure are some of the reasons for the popularity of three-dimensional (3D) point cloud data. While there are multiple techniques for segmenting and classifying point clouds, the capabilities of common data formats such as LAS for providing semantic information are mostly limited to assigning points to a certain category (classification). However, several fields of application, such as digital urban twins used for simulations and analyses, require more detailed semantic knowledge. This can be provided by semantic 3D city models containing hierarchically structured semantic and spatial information. Although semantic models are often reconstructed from point clouds, they are usually geometrically less accurate due to generalization processes. First, point cloud data structures and formats are discussed with respect to their semantic capabilities. Then, a new approach for integrating point clouds with semantic 3D city models is presented, combining the respective advantages of both data types. In addition to elaborate (and established) semantic concepts for several thematic areas, the new version 3.0 of the international Open Geospatial Consortium (OGC) standard CityGML also provides a PointCloud module. In this paper, a scheme is shown for how CityGML 3.0 can be used to provide semantic structures for point clouds (stored directly or in a separate LAS file). Methods and metrics to automatically assign points to corresponding Level of Detail (LoD) 2 or LoD3 models are presented. Subsequently, dataset examples implementing these concepts are provided for download.
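
An illustrative point-to-surface assignment metric is sketched below, assuming each LoD2/LoD3 boundary surface is available as a planar polygon with a gml:id: a point is assigned to the closest surface whose plane it lies near and whose footprint it roughly projects into. The distance threshold and the bounding-box footprint test are simplifying assumptions standing in for the metrics described in the paper.

```python
# Assign each point to the nearest planar boundary surface of a city model, if any.
import numpy as np

def assign_points_to_surfaces(points, surfaces, max_dist=0.1):
    """points: Nx3 array; surfaces: list of dicts with keys 'id' (gml:id of the
    boundary surface) and 'polygon' (Mx3 vertex array). Returns an array of
    surface ids ('' for unassigned points)."""
    labels = np.full(len(points), '', dtype=object)
    best = np.full(len(points), np.inf)
    for surf in surfaces:
        poly = np.asarray(surf['polygon'])
        centroid = poly.mean(axis=0)
        normal = np.linalg.svd(poly - centroid)[2][-1]     # plane normal via SVD
        dist = np.abs((points - centroid) @ normal)        # point-to-plane distance
        # Crude footprint test: the projection onto the plane must stay inside
        # the polygon's (slightly padded) bounding box.
        proj = points - np.outer((points - centroid) @ normal, normal)
        inside = np.all((proj >= poly.min(axis=0) - max_dist) &
                        (proj <= poly.max(axis=0) + max_dist), axis=1)
        hit = (dist < max_dist) & inside & (dist < best)
        labels[hit] = surf['id']
        best[hit] = dist[hit]
    return labels
```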


Sensors ◽  
2020 ◽  
Vol 20 (16) ◽  
pp. 4594
Author(s):  
Ting On Chan ◽  
Linyuan Xia ◽  
Derek D. Lichti ◽  
Yeran Sun ◽  
Jun Wang ◽  
...  

Pipe elbow joints exist in almost every piping system, supporting many important applications such as clean water supply. However, spatial information about the elbow joints is rarely extracted and analyzed from observations such as point cloud data obtained from laser scanning, due to the lack of a complete geometric model that can be applied to different types of joints. In this paper, we propose a novel geometric model and several model adaptions for typical elbow joints, including the 90° and 45° types, which facilitates the use of 3D point clouds of the elbow joints collected from laser scanning. The model comprises translational, rotational, and dimensional parameters, which can be used not only for monitoring the joints' geometry but also for other applications such as point cloud registration. Both simulated and real datasets were used to verify the model, and two applications derived from the proposed model (point cloud registration and mounting bracket detection) are shown. The results of the geometric fitting of the simulated datasets suggest that the model can accurately recover the geometry of the joint, with very low translational (0.3 mm) and rotational (0.064°) errors when ±0.02 m random errors were introduced to the coordinates of a simulated 90° joint (with a diameter of 0.2 m). The fitting of the real datasets suggests that the accuracy of the diameter estimate reaches 97.2%. The joint-based registration accuracy reaches sub-decimeter and sub-degree levels for the translational and rotational parameters, respectively.
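
A minimal least-squares sketch of fitting a simplified elbow model to scanned points is given below, exploiting the fact that a 90° joint is geometrically close to a quarter torus. The parameterisation (centre, two rotation angles, bend radius R, pipe radius r) is an illustrative stand-in for the full model proposed in the paper.

```python
# Fit a torus-like elbow model to an Nx3 point cloud with scipy's least_squares.
import numpy as np
from scipy.optimize import least_squares

def torus_residuals(params, pts):
    cx, cy, cz, ax, ay, R, r = params
    # Rotate the points into the torus frame (rotations about x, then y).
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(ax), -np.sin(ax)],
                   [0, np.sin(ax),  np.cos(ax)]])
    Ry = np.array([[ np.cos(ay), 0, np.sin(ay)],
                   [0, 1, 0],
                   [-np.sin(ay), 0, np.cos(ay)]])
    p = (pts - [cx, cy, cz]) @ (Ry @ Rx).T
    # Signed distance of each point from the torus surface (bend radius R, pipe radius r).
    radial = np.hypot(p[:, 0], p[:, 1]) - R
    return np.hypot(radial, p[:, 2]) - r

def fit_elbow(points, R0=0.3, r0=0.1):
    x0 = np.r_[points.mean(axis=0), 0.0, 0.0, R0, r0]   # initial guess
    sol = least_squares(torus_residuals, x0, args=(points,))
    return sol.x   # [cx, cy, cz, ax, ay, R, r]
```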


Aerospace ◽  
2018 ◽  
Vol 5 (3) ◽  
pp. 94 ◽  
Author(s):  
Hriday Bavle ◽  
Jose Sanchez-Lopez ◽  
Paloma Puente ◽  
Alejandro Rodriguez-Ramos ◽  
Carlos Sampedro ◽  
...  

This paper presents a fast and robust approach for estimating the flight altitude of multirotor Unmanned Aerial Vehicles (UAVs) using 3D point cloud sensors in cluttered, unstructured, and dynamic indoor environments. The objective is to present a flight altitude estimation algorithm that replaces conventional sensors such as laser altimeters, barometers, or accelerometers, which have several limitations when used individually. Our proposed algorithm includes two stages: in the first stage, a fast clustering of the measured 3D point cloud data is performed, along with the segmentation of the clustered data into horizontal planes. In the second stage, these segmented horizontal planes are mapped based on their vertical distance with respect to the point cloud sensor frame of reference, in order to provide a robust flight altitude estimate even in the presence of several static as well as dynamic ground obstacles. We validate our approach using the IROS 2011 Kinect dataset available in the literature, estimating the altitude of the RGB-D camera using the provided 3D point clouds. We further validate our approach using a point cloud sensor on board a UAV, by means of several autonomous real flights, closing its altitude control loop using the flight altitude estimated by our proposed method, in the presence of several different static as well as dynamic ground obstacles. In addition, the implementation of our approach has been integrated into our open-source software framework for aerial robotics called Aerostack.
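
An illustrative reduction of the two-stage idea: group points into horizontal planes by clustering their vertical coordinate in the sensor frame, then take the vertical distance to the dominant plane below the vehicle as the altitude estimate. The bin width and the "most populated bin" rule are assumptions, not the authors' exact segmentation and mapping steps.

```python
# Estimate flight altitude as the vertical distance to the dominant horizontal
# plane detected below the sensor.
import numpy as np

def estimate_altitude(points_sensor_frame, bin_width=0.05):
    """points_sensor_frame: Nx3 points with the z axis pointing up, sensor at the origin."""
    z = points_sensor_frame[:, 2]
    below = z[z < 0]                          # candidate ground/obstacle returns below the UAV
    if below.size == 0:
        return None                           # nothing below the sensor: no estimate
    # Histogram the heights; each well-populated bin approximates one horizontal plane.
    edges = np.arange(below.min(), below.max() + 2 * bin_width, bin_width)
    counts, edges = np.histogram(below, bins=edges)
    dominant = np.argmax(counts)              # the plane supported by the most points
    plane_z = 0.5 * (edges[dominant] + edges[dominant + 1])
    return -plane_z                           # height of the sensor above that plane
```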


Author(s):  
Y. Hori ◽  
T. Ogawa

The implementation of laser scanning in the field of archaeology provides us with an entirely new dimension in research and surveying. It allows us to digitally recreate individual objects, or entire cities, using millions of three-dimensional points grouped together in what is referred to as "point clouds". In addition, visualizations of the point cloud data, which can be used in the final reports of archaeologists and architects, are usually produced as JPG or TIFF files. Beyond visualization, the re-examination of older data and new surveys of Roman building construction, applying remote-sensing technology for precise and detailed measurements, yield new information that may lead to revising drawings of ancient buildings which had been adduced as evidence without any consideration of their degree of accuracy, and can ultimately enable new research on ancient buildings. We used laser scanners in the field because of their speed, comprehensive coverage, accuracy, and flexibility of data manipulation. Therefore, we "skipped" much of the post-processing and focused on images created from the metadata, simply aligned using a tool that extends an automatic feature-matching algorithm and a popular renderer that can provide graphic results.


2021 ◽  
Vol 10 (9) ◽  
pp. 617
Author(s):  
Su Yang ◽  
Miaole Hou ◽  
Ahmed Shaker ◽  
Songnian Li

The digital documentation of cultural relics plays an important role in archiving, protection, and management. In the field of cultural heritage, three-dimensional (3D) point cloud data is effective at expressing complex geometric structures and the geometric details on the surface of cultural relics, but it lacks semantic information. To elaborate the geometric information of cultural relics and add meaningful semantic information, we propose a modeling and processing method for smart point clouds of cultural relics with complex geometries. An information modeling framework for complex geometric cultural relics was designed based on the concept of smart point clouds, in which 3D point cloud data are organized along the time dimension and across different spatial scales indicating different geometric details. The proposed model allows smart point clouds, or any subset thereof, to be linked with semantic information or related documents. As such, this novel information modeling framework not only describes the complex geometric structure of cultural relics and the geometric details on their surfaces, but also carries rich semantic information and can even be associated with documents. A case study of the Dazu Thousand-Hand Bodhisattva Statue, which is characterized by a variety of complex geometries, reveals that our proposed framework is capable of modeling and processing the statue with excellent applicability and expansibility. This work provides insights into the sustainable development of cultural heritage protection globally.
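
A schematic data structure illustrating the "smart point cloud" idea sketched above: point subsets organised by acquisition epoch and spatial scale, each of which can be linked to semantic attributes or external documents. Class and field names, as well as the example values, are illustrative assumptions, not the schema defined in the paper.

```python
# Hierarchical container linking point cloud subsets with semantics and documents.
from dataclasses import dataclass, field
from typing import Dict, List
import numpy as np

@dataclass
class SmartPointCloudNode:
    name: str                                  # e.g. a statue, a hand, a finger
    epoch: str                                 # acquisition time (time dimension)
    scale: int                                 # spatial scale / level of geometric detail
    points: np.ndarray                         # Mx3 subset of the full point cloud
    semantics: Dict[str, str] = field(default_factory=dict)   # attribute-value pairs
    documents: List[str] = field(default_factory=list)        # linked reports, images, ...
    children: List["SmartPointCloudNode"] = field(default_factory=list)

# Usage sketch: a coarse, statue-level node holding a finer, annotated subset.
statue = SmartPointCloudNode("Thousand-Hand Bodhisattva", "epoch_1", 0, np.zeros((0, 3)))
hand = SmartPointCloudNode("hand_042", "epoch_1", 2, np.zeros((0, 3)),
                           semantics={"component": "hand", "material": "stone"},
                           documents=["condition_report.pdf"])  # hypothetical document name
statue.children.append(hand)
```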

