Scene Wireframes Sketching for Drones

Proceedings ◽  
2018 ◽  
Vol 2 (18) ◽  
pp. 1193
Author(s):  
Roi Santos ◽  
Xose Pardo ◽  
Xose Fdez-Vidal

The increasing use of autonomous UAVs inside buildings and around human-made structures demands new, accurate, and comprehensive representations of their operating environments. Most 3D scene abstraction methods rely on invariant feature point matching; however, the resulting sparse 3D point clouds often fail to concisely represent the structure of the environment. Likewise, line clouds built from short, redundant segments with inaccurate directions limit the understanding of scenes with poor texture or with texture that resembles a repetitive pattern. The presented approach is based on observation and representation models built on straight line segments, which resemble the boundaries of urban indoor and outdoor environments. The goal of this work is a complete line-matching method that complements state-of-the-art approaches for 3D representation of poorly textured environments for future autonomous UAVs.
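As a rough illustration of the segment-based matching idea (the function names and tolerances below are assumptions for this sketch, not the authors' method), two straight segments can be compared by direction and proximity:

```python
import numpy as np

def segment_direction(seg):
    """Unit direction vector of a segment given as ((x1, y1), (x2, y2))."""
    p, q = np.asarray(seg, dtype=float)
    d = q - p
    return d / np.linalg.norm(d)

def segments_match(seg_a, seg_b, angle_tol_deg=5.0, dist_tol=10.0):
    """Crude match test: near-parallel directions and nearby midpoints."""
    da, db = segment_direction(seg_a), segment_direction(seg_b)
    # abs() makes the test orientation-agnostic (a segment has no preferred end)
    cos_angle = abs(np.dot(da, db))
    angle_ok = cos_angle >= np.cos(np.radians(angle_tol_deg))
    mid_a = np.mean(np.asarray(seg_a, dtype=float), axis=0)
    mid_b = np.mean(np.asarray(seg_b, dtype=float), axis=0)
    return angle_ok and np.linalg.norm(mid_a - mid_b) <= dist_tol

a = ((0, 0), (100, 0))
b = ((2, 1), (98, 2))        # almost parallel and close by
c = ((0, 0), (0, 100))       # perpendicular
print(segments_match(a, b))  # True
print(segments_match(a, c))  # False
```

A full pipeline would of course also enforce epipolar consistency across views before triangulating matched segments into 3D.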

2020 ◽  
Vol 12 (3) ◽  
pp. 543 ◽  
Author(s):  
Małgorzata Jarząbek-Rychard ◽  
Dong Lin ◽  
Hans-Gerd Maas

Targeted energy management and control is becoming an increasing concern in the building sector. Automatic analyses of thermal data, which minimize the subjectivity of the assessment and allow for large-scale inspections, are therefore of high interest. In this study, we propose an approach for the supervised extraction of façade openings (windows and doors) from photogrammetric 3D point clouds attributed with RGB and thermal infrared (TIR) information. The novelty of the proposed approach lies in the combination of thermal information with other available data characteristics for a classification performed directly in 3D space. Images acquired in the visible and thermal infrared spectra serve as input data for camera pose estimation and the reconstruction of 3D scene geometry. To investigate the relevance of different information types to the classification performance, a Random Forest algorithm is applied to various sets of computed features. The best feature combination is then used as input for a Conditional Random Field, which enables us to incorporate contextual information and consider the interaction between points. An evaluation executed at the per-point level shows that fusing all available information types, together with context, allows us to extract objects with 90% completeness and 95% correctness. The respective assessment at the per-object level shows 97% completeness and 88% accuracy.
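The Random Forest step can be sketched on synthetic per-point features; the feature layout `[R, G, B, TIR, height]` and the class statistics below are illustrative assumptions, not the study's data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Synthetic per-point features: [R, G, B, TIR, height]. Openings (label 1)
# are modeled as warmer (heat loss) and darker than the wall (label 0).
wall = np.column_stack([
    rng.normal(0.7, 0.1, n), rng.normal(0.7, 0.1, n), rng.normal(0.7, 0.1, n),
    rng.normal(0.3, 0.1, n), rng.uniform(0, 10, n),
])
opening = np.column_stack([
    rng.normal(0.3, 0.1, n), rng.normal(0.3, 0.1, n), rng.normal(0.3, 0.1, n),
    rng.normal(0.7, 0.1, n), rng.uniform(0, 10, n),
])
X = np.vstack([wall, opening])
y = np.concatenate([np.zeros(n), np.ones(n)])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
print(f"per-point accuracy: {accuracy:.2f}")
```

The paper then passes the best-performing feature set to a Conditional Random Field, which is what adds the contextual smoothing a per-point classifier lacks.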


Author(s):  
Andreas Kuhn ◽  
Hai Huang ◽  
Martin Drauschke ◽  
Helmut Mayer

High-resolution consumer cameras on Unmanned Aerial Vehicles (UAVs) allow for the cheap acquisition of highly detailed images, e.g., of urban regions. Via image registration by means of Structure from Motion (SfM) and Multi-View Stereo (MVS), the automatic generation of huge numbers of 3D points with a relative accuracy in the centimeter range is possible. Applications such as semantic classification need accurate 3D point clouds but do not benefit from an extremely high resolution/density. In this paper, we therefore propose a fast fusion of high-resolution 3D point clouds based on occupancy grids, and use the result for semantic classification. In contrast to state-of-the-art classification methods, we accept a certain percentage of outliers, arguing that they can be considered in the classification process when a per-point belief is determined during fusion. To this end, we employ an octree-based fusion that allows for the derivation of outlier probabilities. These probabilities give a belief for every 3D point, which is essential for the semantic classification to account for measurement noise. For an example point cloud with half a billion 3D points (cf. Figure 1), we show that our method reduces runtime as well as improves classification accuracy, and offers high scalability for large datasets.
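A simplified variant of the fusion idea (the octree is replaced here by a flat voxel hash grid, and `min_support` is an assumed parameter) shows how a per-point belief can be derived from observation counts instead of hard outlier rejection:

```python
import numpy as np
from collections import Counter

def fuse_with_belief(points, voxel_size=0.1, min_support=3):
    """Snap points to a voxel grid; a point's belief grows with the number
    of observations falling into its voxel, so isolated outliers receive a
    low belief instead of being discarded outright."""
    keys = [tuple(k) for k in np.floor(points / voxel_size).astype(int)]
    support = Counter(keys)
    # Belief in [0, 1]: saturates once a voxel reaches min_support observations.
    belief = np.array([min(support[k] / min_support, 1.0) for k in keys])
    return belief

rng = np.random.default_rng(1)
surface = rng.normal(0.0, 0.01, size=(500, 3))   # densely observed region
outliers = rng.uniform(5.0, 50.0, size=(5, 3))   # scattered spurious points
points = np.vstack([surface, outliers])
belief = fuse_with_belief(points)
print(belief[:500].mean(), belief[500:].mean())  # high vs. low belief
```

The paper's octree additionally adapts resolution to local density, which a fixed voxel size cannot do; the per-point belief is what feeds the downstream classifier.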


2019 ◽  
Vol 39 (2-3) ◽  
pp. 339-355 ◽  
Author(s):  
Renaud Dubé ◽  
Andrei Cramariuc ◽  
Daniel Dugas ◽  
Hannes Sommer ◽  
Marcin Dymczyk ◽  
...  

Precisely estimating a robot’s pose in a prior, global map is a fundamental capability for mobile robotics, e.g., autonomous driving or exploration in disaster zones. This task, however, remains challenging in unstructured, dynamic environments, where local features are not discriminative enough and global scene descriptors only provide coarse information. We therefore present SegMap: a map representation solution for localization and mapping based on the extraction of segments in 3D point clouds. Working at the level of segments offers increased invariance to viewpoint and local structural changes, and facilitates real-time processing of large-scale 3D data. SegMap exploits a single compact data-driven descriptor for performing multiple tasks: global localization, 3D dense map reconstruction, and semantic information extraction. The performance of SegMap is evaluated in multiple urban driving and search and rescue experiments. We show that the learned SegMap descriptor has superior segment retrieval capabilities compared with state-of-the-art handcrafted descriptors. As a consequence, we achieve a higher localization accuracy and a 6% increase in recall over state-of-the-art handcrafted descriptors. These segment-based localizations allow us to reduce the open-loop odometry drift by up to 50%. SegMap is available as open source, along with easy-to-run demonstrations.
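At its core, segment retrieval with a compact descriptor reduces to nearest-neighbour search in descriptor space. The sketch below uses random 64-D vectors as stand-ins for learned SegMap descriptors (the dimensionality and distance metric are assumptions for illustration):

```python
import numpy as np

def retrieve_segments(query_desc, map_descs, k=3):
    """Return indices of the k map segments whose descriptors are closest
    (Euclidean distance) to the query descriptor."""
    dists = np.linalg.norm(map_descs - query_desc, axis=1)
    return np.argsort(dists)[:k]

rng = np.random.default_rng(0)
map_descs = rng.normal(size=(100, 64))               # hypothetical 64-D descriptors
query = map_descs[42] + rng.normal(0.01, 0.01, 64)   # noisy re-observation
print(retrieve_segments(query, map_descs, k=3))      # index 42 ranks first
```

In a full localization pipeline, the retrieved candidate segments would then be geometrically verified, e.g., with a consistency check across several matches, before accepting a global pose.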


Author(s):  
Y. Ninsalam ◽  
R. Qin ◽  
J. Rekittke

In our study we use 3D scene understanding to detect the discharge of domestic solid waste along an urban river. Solid waste found along the Ciliwung River in the neighbourhoods of Bukit Duri and Kampung Melayu may be attributed to households. This is in part due to inadequate municipal waste infrastructure and services, which has caused those living along the river to rely upon it for waste disposal. However, there has been little research on the prevalence of household waste along the river. Our aim is to develop a methodology that deploys a low-cost sensor to identify point-source discharge of solid waste using image classification methods. To demonstrate this, we describe the following five-step method: 1) a strip of GoPro images is captured and photogrammetrically processed for dense point cloud generation; 2) a depth map for each image is generated through backward projection of the point clouds; 3) a supervised image classification method based on a Random Forest classifier is applied to the view-dependent red, green, blue and depth (RGB-D) data; 4) point-discharge locations of solid waste are then mapped by projecting the classified images onto the 3D point clouds; 5) the landscape elements are classified into five types: vegetation, human settlement, soil, water and solid waste. While this work is still ongoing, the initial results demonstrate that it is possible to perform quantitative studies that may help reveal and estimate the amount of waste present along the river bank.
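Step 2 above, backward projection of the point cloud into a per-image depth map, can be sketched as a minimal z-buffer (the intrinsics `K` and image size are toy values, not the study's calibration):

```python
import numpy as np

def render_depth(points_cam, K, width, height):
    """Project camera-frame 3D points through intrinsics K, keeping the
    nearest depth per pixel (a minimal z-buffer)."""
    depth = np.full((height, width), np.inf)
    z = points_cam[:, 2]
    valid = z > 0                               # only points in front of the camera
    uvw = (K @ points_cam[valid].T).T           # homogeneous pixel coordinates
    u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)
    v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)
    zv = z[valid]
    inside = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    for ui, vi, zi in zip(u[inside], v[inside], zv[inside]):
        depth[vi, ui] = min(depth[vi, ui], zi)  # keep the closest surface
    return depth

K = np.array([[100.0, 0, 32], [0, 100.0, 32], [0, 0, 1]])  # toy intrinsics
pts = np.array([[0.0, 0.0, 2.0], [0.0, 0.0, 5.0]])  # two points on one ray
depth = render_depth(pts, K, 64, 64)
print(depth[32, 32])  # 2.0 — the nearer point wins
```

The resulting depth channel is what turns the RGB imagery into the RGB-D input consumed by the Random Forest classifier in step 3.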


Author(s):  
Robert Niederheiser ◽  
Martin Mokroš ◽  
Julia Lange ◽  
Helene Petschko ◽  
Günther Prasicek ◽  
...  

Terrestrial photogrammetry nowadays offers a reasonably cheap, intuitive and effective approach to 3D modelling. However, the important choice of which sensor and which software to use is not straightforward and needs consideration, as it will affect the resulting 3D point cloud and its derivatives.

We compare five different sensors as well as four different state-of-the-art software packages for a single application, the modelling of a vegetated rock face. The five sensors represent different resolutions, sensor sizes and price segments. The software packages used are: (1) Agisoft PhotoScan Pro (1.16), (2) Pix4D (2.0.89), (3) a combination of Visual SFM (V0.5.22) and SURE (1.2.0.286), and (4) MicMac (1.0). We took photos of a vegetated rock face from identical positions with all sensors. Then we compared the results of the different software packages regarding the ease of the workflow, visual appeal, similarity and quality of the point cloud.

While PhotoScan and Pix4D offer the most user-friendly workflows, they are also “black-box” programmes giving only little insight into their processing. Unsatisfying results may only be addressed by modifying settings within a module. The combined workflow of Visual SFM, SURE and CloudCompare is just as simple but requires more user interaction. MicMac turned out to be the most challenging software, as it is less user-friendly. However, MicMac offers the most possibilities to influence the processing workflow. The resulting point clouds of PhotoScan and MicMac are the most appealing.


2015 ◽  
Vol 2015 ◽  
pp. 1-9 ◽  
Author(s):  
Qi Peng ◽  
Lifen Tu ◽  
Kaibing Zhang ◽  
Sidong Zhong

An effective automatic 3D reconstruction method using a portable four-camera photographic measurement system (PFCPMS) is proposed. By taking advantage of the complementary stereo information from the four cameras, a fast and highly accurate feature point matching algorithm is developed for 3D reconstruction. Specifically, we first utilize a projection method to obtain a large number of dense feature points. Then a reduction and clustering treatment is applied to simplify the Delaunay triangulation process and reconstruct a 3D model for each scene. In addition, a 3D model stitching approach is proposed to further mitigate the limited field of view of image-based methods. The experimental results, tested on Cave 172 of the Mogao Grottoes, indicate that the proposed method effectively reconstructs a 3D scene with a low-cost four-camera photographic measurement system.
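The reduction-and-clustering step before Delaunay triangulation can be sketched with a simple grid clustering (the cell size is an assumed parameter, and the example works in 2D for brevity; the paper's treatment operates on its dense feature points):

```python
import numpy as np
from scipy.spatial import Delaunay

def reduce_points(points, cell=0.5):
    """Cluster points on a regular grid and keep one centroid per cell,
    thinning a dense cloud before triangulation."""
    keys = np.floor(points / cell).astype(int)
    centroids = {}
    for k, p in zip(map(tuple, keys), points):
        centroids.setdefault(k, []).append(p)
    return np.array([np.mean(ps, axis=0) for ps in centroids.values()])

rng = np.random.default_rng(0)
dense = rng.uniform(0, 5, size=(5000, 2))    # dense 2D feature points
sparse = reduce_points(dense, cell=0.5)      # one representative per cell
tri = Delaunay(sparse)                       # triangulate the reduced set
print(len(dense), "->", len(sparse), "points,", len(tri.simplices), "triangles")
```

Thinning first keeps the triangulation small and avoids the sliver triangles that near-duplicate points tend to produce.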


2021 ◽  
Vol 7 (5) ◽  
pp. 80
Author(s):  
Ahmet Firintepe ◽  
Carolin Vey ◽  
Stylianos Asteriadis ◽  
Alain Pagani ◽  
Didier Stricker

In this paper, we propose two novel AR glasses pose estimation algorithms that operate on single infrared images, using 3D point clouds as an intermediate representation. Our first approach, “PointsToRotation”, is based on a Deep Neural Network alone, whereas our second approach, “PointsToPose”, is a hybrid model combining Deep Learning with a voting-based mechanism. Our methods utilize a point cloud estimator, which we trained on multi-view infrared images in a semi-supervised manner, generating point clouds from a single image. We generate a point cloud dataset with our point cloud estimator using the HMDPose dataset, which consists of multi-view infrared images of various AR glasses with the corresponding 6-DoF poses. In comparison to another point-cloud-based 6-DoF pose estimation method, CloudPose, we achieve an error reduction of around 50%. Compared to a state-of-the-art image-based method, we reduce the pose estimation error by around 96%.
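Although the paper's estimators are learned, the underlying task of recovering a 6-DoF pose from matched 3D points can be illustrated with the classical Kabsch solution (all data below are synthetic, not the HMDPose dataset):

```python
import numpy as np

def estimate_pose(src, dst):
    """Kabsch/Procrustes: least-squares rotation R and translation t such
    that dst ≈ src @ R.T + t, from matched 3D point sets."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)       # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

rng = np.random.default_rng(0)
points = rng.normal(size=(50, 3))             # hypothetical glasses keypoints
angle = np.radians(30)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
t_true = np.array([0.1, -0.2, 0.3])
moved = points @ R_true.T + t_true
R_est, t_est = estimate_pose(points, moved)
print(np.allclose(R_est, R_true), np.allclose(t_est, t_true))  # True True
```

A voting-based scheme like the paper's "PointsToPose" can be read as aggregating many such local pose hypotheses into one robust estimate.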


Author(s):  
Andreas Kuhn ◽  
Hai Huang ◽  
Martin Drauschke ◽  
Helmut Mayer

High resolution consumer cameras on Unmanned Aerial Vehicles (UAVs) allow for cheap acquisition of highly detailed images, e.g., of urban regions. Via image registration by means of Structure from Motion (SfM) and Multi View Stereo (MVS) the automatic generation of huge amounts of 3D points with a relative accuracy in the centimeter range is possible. Applications such as semantic classification have a need for accurate 3D point clouds, but do not benefit from an extremely high resolution/density. In this paper, we, therefore, propose a fast fusion of high resolution 3D point clouds based on occupancy grids. The result is used for semantic classification. In contrast to state-of-the-art classification methods, we accept a certain percentage of outliers, arguing that they can be considered in the classification process when a per point belief is determined in the fusion process. To this end, we employ an octree-based fusion which allows for the derivation of outlier probabilities. The probabilities give a belief for every 3D point, which is essential for the semantic classification to consider measurement noise. For an example point cloud with half a billion 3D points (cf. Figure 1), we show that our method can reduce runtime as well as improve classification accuracy and offers high scalability for large datasets.

