Topologically Consistent Reconstruction for Complex Indoor Structures from Point Clouds

2021 ◽  
Vol 13 (19) ◽  
pp. 3844
Author(s):  
Mengchi Ai ◽  
Zhixin Li ◽  
Jie Shan

Indoor structures are composed of ceilings, walls and floors that need to be modeled for a variety of applications. This paper proposes an approach to reconstructing models of indoor structures in complex environments. First, semantic pre-processing, including segmentation and occlusion construction, is applied to segment the input point clouds into semantic patches of structural primitives with uniform density. Then, a primitive extraction method with boundary detection is introduced to approximate both the mathematical surface and the boundary of each patch. Finally, constraint-based model reconstruction is applied to obtain the final, topologically consistent structural model. Under this framework, both geometric and structural constraints are considered in a holistic manner to ensure topological regularity. Experiments were carried out with both synthetic and real-world datasets. The proposed method achieved an overall reconstruction quality of approximately 4.60 cm root mean square error (RMSE) and 94.10% Intersection over Union (IoU) with respect to the input point cloud. The method can be applied to the structural reconstruction of various complex indoor environments.
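For planar structural primitives (walls, floors, ceilings), the surface-approximation step described above reduces to a least-squares plane fit to each segmented patch. A minimal sketch of such a fit via SVD follows (NumPy; the function name `fit_plane` is our own, illustrative, not the authors' implementation):

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit to an Nx3 patch of points via SVD.

    Returns (centroid, unit normal); the plane is the set of x with
    dot(normal, x - centroid) = 0. The right singular vector with the
    smallest singular value of the centred points is the normal.
    """
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    return centroid, normal

# Example: noisy samples of a horizontal floor patch (z ~ 0)
rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 5.0, size=(500, 3))
pts[:, 2] = 0.01 * rng.standard_normal(500)  # 1 cm noise in height
c, n = fit_plane(pts)
print(abs(n[2]) > 0.99)  # True: recovered normal points along z
```

The patch boundary would then be traced in the 2D coordinates of this fitted plane.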

Author(s):  
J. Yan ◽  
N. Grasso ◽  
S. Zlatanova ◽  
R. C. Braggaar ◽  
D. B. Marx

Three-dimensional modelling plays a vital role in indoor 3D tracking, navigation, guidance and emergency evacuation. Reconstruction of indoor 3D models is still problematic, in part, because indoor spaces provide challenges less-documented than their outdoor counterparts. Challenges include obstacles curtailing image and point cloud capture, restricted accessibility and a wide array of indoor objects, each with unique semantics. Reconstruction of indoor environments can be achieved through a photogrammetric approach, e.g. by using image frames, aligned using recurring corresponding image points (CIP) to build coloured point clouds. Our experiments were conducted by flying a QUAV in three indoor environments and later reconstructing 3D models which were analysed under different conditions. Point clouds and meshes were created using Agisoft PhotoScan Professional. We concentrated on flight paths from two vantage points: 1) safety and security while flying indoors and 2) data collection needed for reconstruction of 3D models. We surmised that the main challenges in providing safe flight paths are related to the physical configuration of indoor environments, privacy issues, the presence of people and light conditions. We observed that the quality of recorded video used for 3D reconstruction has a high dependency on surface materials, wall textures and object types being reconstructed. Our results show that 3D indoor reconstruction predicated on video capture using a QUAV is indeed feasible, but close attention should be paid to flight paths and conditions ultimately influencing the quality of 3D models. Moreover, it should be decided in advance which objects need to be reconstructed, e.g. bare rooms or detailed furniture.


Sensors ◽  
2019 ◽  
Vol 19 (3) ◽  
pp. 533 ◽  
Author(s):  
Shengjun Tang ◽  
Yunjie Zhang ◽  
You Li ◽  
Zhilu Yuan ◽  
Yankun Wang ◽  
...  

Semantically rich indoor models are increasingly used throughout a facility’s life cycle for different applications. With the decreasing price of 3D sensors, it is convenient to acquire point cloud data from consumer-level scanners. However, most existing methods for 3D indoor reconstruction from point clouds involve a tedious manual or interactive process due to line-of-sight occlusions and complex space structures. Using the multiple types of data obtained by RGB-D devices, this paper proposes a fast and automatic method for reconstructing semantically rich indoor 3D building models from low-quality RGB-D sequences. Our method is capable of identifying and modelling the main structural components of indoor environments, such as spaces, walls, floors, ceilings, windows, and doors, from the RGB-D datasets. The method includes space division and extraction, opening extraction, and global optimization. For space division and extraction, rather than distinguishing room spaces based on the detected wall planes, we interactively define the start-stop position for each functional space (e.g., room, corridor, kitchen) during scanning. Then, an interior-element filtering algorithm is proposed for wall component extraction, and a boundary generation algorithm is used for space layout determination. For opening extraction, we propose a new noise-robust method for opening generation based on the properties of convex hulls, octree structures, Euclidean clusters and the camera trajectory, designed to cope with the inevitable occlusions in data collected in indoor environments. A global optimization approach for planes is designed to eliminate the inconsistency of planes sharing the same global plane, and to maintain plausible connectivity between the walls and the relationships between the walls and openings. The final model is stored according to the CityGML 3.0 standard.
Our approach allows for the robust generation of semantically rich 3D indoor models and has strong applicability and reconstruction power for complex real-world datasets.
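One building block named above, Euclidean clustering, groups points whose neighbours lie within a distance tolerance. A simplified flood-fill sketch with a k-d tree follows (SciPy; a generic stand-in for the clustering step, not the authors' implementation, and the tolerance value is an assumed figure):

```python
import numpy as np
from scipy.spatial import cKDTree

def euclidean_cluster(points, tol=0.05):
    """Label connected components of points under a distance threshold.

    Two points belong to the same cluster if they are linked by a chain
    of neighbours each closer than `tol` (metres, assumed value).
    """
    tree = cKDTree(points)
    labels = np.full(len(points), -1, dtype=int)
    cluster_id = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue                      # already assigned
        labels[seed] = cluster_id
        frontier = [seed]
        while frontier:                   # flood fill from the seed
            idx = frontier.pop()
            for nb in tree.query_ball_point(points[idx], tol):
                if labels[nb] == -1:
                    labels[nb] = cluster_id
                    frontier.append(nb)
        cluster_id += 1
    return labels

# Two well-separated blobs of points -> two clusters
a = np.random.default_rng(1).normal(0.0, 0.01, (50, 3))
b = a + np.array([1.0, 0.0, 0.0])
labels = euclidean_cluster(np.vstack([a, b]))
print(len(set(labels)))  # 2
```

In opening extraction, such clusters would separate candidate door/window regions before the convex-hull and trajectory checks.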


Author(s):  
G. Jozkow ◽  
P. Wieczorek ◽  
M. Karpina ◽  
A. Walicka ◽  
A. Borkowski

The Velodyne HDL-32E laser scanner is increasingly used as the main mapping sensor in small commercial UASs. However, there is still little information about the actual accuracy of point clouds collected with such UASs. This work empirically evaluates that accuracy. The assessment covered four aspects: the impact of the sensors on theoretical point cloud accuracy, trajectory reconstruction quality, and internal and absolute point cloud accuracies. Theoretical point cloud accuracy was evaluated by calculating the 3D position error given the known errors of the onboard sensors. The quality of trajectory reconstruction was assessed by comparing position and attitude differences between the forward and reverse EKF solutions. Internal and absolute accuracies were evaluated by fitting planes to 8 point cloud samples extracted for planar surfaces. In addition, the absolute accuracy was also determined by calculating 3D point distances between the LiDAR UAS and reference TLS point clouds. The test data consisted of point clouds collected in two separate flights performed over the same area. The experiments showed that in the tested UAS, the trajectory reconstruction, especially the attitude, has a significant impact on point cloud accuracy. The estimated absolute accuracy of the point clouds collected during both test flights was better than 10 cm; thus, the investigated UAS fits the mapping-grade category.
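The finding that attitude errors dominate can be seen in a first-order error budget: an angular error contributes a position error that grows linearly with range, while positioning and ranging errors do not. A minimal sketch under assumed sigma values (not those of the tested system):

```python
import math

def point_error_3d(range_m, sigma_pos=0.03, sigma_range=0.02,
                   sigma_att_deg=0.05):
    """First-order 3D point error budget for a LiDAR UAS (illustrative).

    Combines platform positioning error, laser ranging error, and an
    attitude error whose effect scales linearly with range. All sigma
    values are assumed figures for illustration only.
    """
    att_m = range_m * math.radians(sigma_att_deg)  # lever-arm effect
    return math.sqrt(sigma_pos**2 + sigma_range**2 + att_m**2)

# At 30 m range the attitude term (~2.6 cm) already dominates ranging.
print(round(point_error_3d(30.0), 3))  # 0.045
```

This is why the abstract singles out attitude reconstruction quality as the decisive factor at typical mapping ranges.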


Author(s):  
M. Maboudi ◽  
D. Bánhidi ◽  
M. Gerke

Up-to-date and reliable 3D information of indoor environments is a prerequisite for many location-based services. One possibility for capturing the necessary 3D data is to use Mobile Mapping Systems (MMSs), which rely for instance on SLAM (simultaneous localization and mapping). In most indoor environments, MMSs are by far faster than classic static systems. Moreover, they may deliver point clouds with a higher degree of completeness. In this paper, the geometric quality of point clouds from a state-of-the-art MMS (Viametris iMS3D) is investigated. In order to quantify the quality of the iMS3D MMS, four different evaluation strategies, namely cloud-to-cloud, point-to-plane, target-to-target and model-based evaluation, are employed. We conclude that in our experiments the measurement accuracies are better than 1 cm and the precision of the point clouds is better than 3 cm. For indoor mapping applications requiring accuracy of a few centimeters, the system offers a very fast solution. Moreover, by the nature of current SLAM-based approaches, the trajectory loop should be closed, but in some practical situations closing the local trajectory loop might not always be possible. Our observations reveal that performing continuous repeated scanning can decrease the destructive effect of local unclosed loops.
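The first of the four evaluation strategies, cloud-to-cloud comparison, is typically computed as nearest-neighbour distances from the test cloud to a reference cloud. A minimal k-d tree sketch (SciPy; illustrative, not the evaluation software used in the paper):

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud_rmse(test_cloud, reference_cloud):
    """RMS of nearest-neighbour distances from each test point to the
    reference cloud: a simple cloud-to-cloud quality measure."""
    distances, _ = cKDTree(reference_cloud).query(test_cloud)
    return float(np.sqrt(np.mean(distances**2)))

# Reference cloud plus a uniform 5 mm-per-axis offset as the test cloud.
ref = np.random.default_rng(2).uniform(0.0, 1.0, (2000, 3))
test = ref + 0.005
print(cloud_to_cloud_rmse(test, ref) < 0.01)  # True: sub-centimetre
```

Point-to-plane and model-based checks refine this by measuring distances to fitted planes or to a reference model rather than to raw points, which removes the bias from point-sampling density.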


2020 ◽  
Vol 2020 (1) ◽  
pp. 74-77
Author(s):  
Simone Bianco ◽  
Luigi Celona ◽  
Flavio Piccoli

In this work we propose a method for single image dehazing that exploits a physical model to recover the haze-free image by estimating the atmospheric scattering parameters. Cycle consistency is used to further improve the reconstruction quality of local structures and objects in the scene. Experimental results on four real and synthetic hazy image datasets show the effectiveness of the proposed method in terms of two commonly used full-reference image quality metrics.
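The physical model referred to is the standard atmospheric scattering model, I = J·t + A·(1 − t), where A is the airlight and t the transmission map. Once those parameters are estimated, the haze-free image J is recovered by inverting the model; a minimal sketch (NumPy; the parameters are given directly here rather than estimated, and `t_min` is an assumed stabilisation constant):

```python
import numpy as np

def dehaze(image, airlight, transmission, t_min=0.1):
    """Invert the atmospheric scattering model I = J*t + A*(1 - t).

    `transmission` is clipped below at `t_min` to avoid amplifying
    noise where the haze is dense (a common heuristic).
    """
    t = np.clip(transmission, t_min, 1.0)[..., None]
    return (image - airlight) / t + airlight

# Synthesise a hazy image from a known clear one, then invert exactly.
rng = np.random.default_rng(3)
clear = rng.uniform(0.0, 1.0, (4, 4, 3))
A = np.array([0.8, 0.8, 0.8])          # grey airlight
t = np.full((4, 4), 0.6)               # uniform transmission
hazy = clear * t[..., None] + A * (1 - t[..., None])
print(np.allclose(dehaze(hazy, A, t), clear))  # True
```

In the paper's setting A and t come from the learned estimator, so reconstruction quality hinges on how well they are predicted, which is what the cycle-consistency term regularises.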


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Jia Hao ◽  
Yan Wang ◽  
Kui Zhou ◽  
Xiaochang Yu ◽  
Yiting Yu

The design of micropolarizer array (MPA) patterns in the Fourier domain provides an efficient approach to reconstructing and investigating polarization information. Inspired by Alenin’s works, in this paper we propose an improved design model covering both 2 × N MPAs and other original MPAs, from which an entirely new class of MPA patterns is suggested. The performance of the new patterns is evaluated through Fourier domain analysis and numerical simulations, compared with the existing MPAs. In particular, we analyze in detail the reconstruction accuracy of the first three Stokes parameters and the degree of linear polarization (DoLP). The experimental results confirm that the 2 × 2 × 2 MPA provides the highest reconstruction quality of s0, s1, s2 and DoLP in terms of quantitative measures and visual quality, while the 3 × 3 diagonal MPA achieves the best results in the case of single-snapshot systems. The extended model and the new diagonal MPAs show great potential for division of focal plane (DoFP) polarization imaging applications.
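The quantities being reconstructed follow textbook relations: a quartet of analyser intensities at 0°, 45°, 90° and 135° (as sampled by a 2 × 2 MPA super-pixel) gives the first three Stokes parameters and the DoLP. A minimal sketch of those relations (NumPy; this is the standard formula, not the paper's Fourier-domain reconstruction):

```python
import numpy as np

def stokes_from_quad(i0, i45, i90, i135):
    """Linear Stokes parameters from 0/45/90/135 deg analyser intensities.

    Uses I(theta) = (s0 + s1*cos(2*theta) + s2*sin(2*theta)) / 2, so
    s0 = I0 + I90, s1 = I0 - I90, s2 = I45 - I135, and
    DoLP = sqrt(s1^2 + s2^2) / s0.
    """
    s0 = i0 + i90
    s1 = i0 - i90
    s2 = i45 - i135
    dolp = np.hypot(s1, s2) / s0
    return s0, s1, s2, dolp

# Fully linearly polarised light at 30 deg: I(theta) = cos^2(theta - 30deg)
ang = np.radians([0.0, 45.0, 90.0, 135.0])
meas = np.cos(ang - np.radians(30.0)) ** 2
s0, s1, s2, dolp = stokes_from_quad(*meas)
print(round(float(dolp), 6))  # 1.0 for fully polarised input
```

The paper's contribution concerns how the analyser angles are arranged spatially so that these quantities can be demodulated from a single snapshot with minimal channel crosstalk in the Fourier domain.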


2021 ◽  
Vol 13 (5) ◽  
pp. 2945
Author(s):  
Isabel del Arco ◽  
Òscar Flores ◽  
Anabel Ramos-Pla

A quantitative study was conducted in order to examine, from the perspective of university students, the relationship between the quality perceived (QP) during the period of confinement caused by the SARS-CoV-2 virus and the variables teaching plan (PL), material resources (MR), interaction processes (IN), and the affective–emotional component (EM). An online questionnaire was designed and directed to students from 20 universities in Spain, with a total participation of 893 individuals. The results indicate that the students’ perception of the quality of online teaching is directly associated with the material resources provided by the professors and with professor–student interactions. However, this perception is not directly affected by the planning or by the emotional state created by the unprecedented situation of confinement. Among the conclusions, we highlight the need for universities to apply models of support and tutoring, especially for students in their first years at university, to develop competences such as autonomy, digital competence, and self-regulation, and the need for a change of approach by students and professors based on the new normality we are currently experiencing.


2021 ◽  
Vol 11 (12) ◽  
pp. 5503
Author(s):  
Munkhjargal Gochoo ◽  
Syeda Amna Rizwan ◽  
Yazeed Yasin Ghadi ◽  
Ahmad Jalal ◽  
Kibum Kim

Automatic head tracking and counting using depth imagery has various practical applications in security, logistics, queue management, space utilization and visitor counting. However, no currently available system can clearly distinguish between a human head and other objects in order to track and count people accurately. For this reason, we propose a novel system that can track people by monitoring their heads and shoulders in complex environments and also count the number of people entering and exiting the scene. Our system is split into six phases. First, preprocessing converts videos of a scene into frames and removes the background from the video frames. Second, heads are detected using the Hough Circular Gradient Transform, and shoulders are detected by HOG-based symmetry methods. Third, three robust features, namely fused joint HOG-LBP, energy-based point clouds and fused intra-inter trajectories, are extracted. Fourth, the Apriori-Association algorithm is implemented to select the best features. Fifth, deep learning is used for accurate people tracking. Finally, heads are counted using cross-line judgment. The system was tested on three benchmark datasets, the PCDS dataset, the MICC people counting dataset and the GOTPD dataset, achieving counting accuracies of 98.40%, 98%, and 99%, respectively.
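The final phase, cross-line judgment, counts a person whenever a tracked head crosses a virtual line, with the crossing direction deciding entry versus exit. A minimal sketch of that counting step (illustrative only, not the authors' implementation; the data layout is an assumption):

```python
def cross_line_count(tracks, line_y):
    """Count entries and exits by cross-line judgment.

    `tracks` maps a track id to the sequence of head y-coordinates
    over time; a downward crossing of `line_y` counts as an entry,
    an upward crossing as an exit (directions chosen arbitrarily here).
    """
    entered = exited = 0
    for ys in tracks.values():
        for prev, cur in zip(ys, ys[1:]):
            if prev < line_y <= cur:      # crossed going down
                entered += 1
            elif prev >= line_y > cur:    # crossed going up
                exited += 1
    return entered, exited

tracks = {
    1: [10, 40, 80, 120],   # crosses y = 100 downward: one entry
    2: [150, 110, 90, 60],  # crosses y = 100 upward: one exit
    3: [10, 20, 30],        # never reaches the line
}
print(cross_line_count(tracks, 100))  # (1, 1)
```

Counting on crossings rather than on per-frame detections makes the tally robust to a head being detected in many consecutive frames.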


Sensors ◽  
2021 ◽  
Vol 21 (1) ◽  
pp. 230
Author(s):  
Xiangwei Dang ◽  
Zheng Rong ◽  
Xingdong Liang

Accurate localization and reliable mapping are essential for the autonomous navigation of robots. As one of the core technologies for autonomous navigation, Simultaneous Localization and Mapping (SLAM) has attracted widespread attention in recent decades. Based on vision or LiDAR sensors, great efforts have been devoted to achieving real-time SLAM that can support a robot’s state estimation. However, most mature SLAM methods work under the assumption that the environment is static, and in dynamic environments they yield degraded performance or even fail. In this paper, we first quantitatively evaluate the performance of state-of-the-art LiDAR-based SLAM methods, taking into account different patterns of moving objects in the environment. Through semi-physical simulation, we observed that the shape, size, and distribution of moving objects can all significantly impact the performance of SLAM, and we obtained instructive results from a quantitative comparison between LOAM and LeGO-LOAM. Secondly, based on this investigation, we propose EMO, a novel approach to eliminating moving objects for SLAM by fusing LiDAR and mmW-radar, aimed at improving the accuracy and robustness of state estimation. The method fully exploits the complementary characteristics of the two sensors to fuse sensor information at two different resolutions. Moving objects are efficiently detected by the radar via the Doppler effect, accurately segmented and localized by the LiDAR, and then filtered out from the point clouds through data association, accurately synchronized in time and space. Finally, the point clouds representing the static environment are used as the input to SLAM. The proposed approach is evaluated through experiments using both semi-physical simulation and real-world datasets.
The results demonstrate the effectiveness of the method at improving SLAM performance in accuracy (at least a 30% decrease in absolute position error) and robustness in dynamic environments.
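The radar-flags-movers, LiDAR-removes-points idea can be sketched in a few lines (NumPy; a simplified stand-in for the EMO fusion, with the Doppler threshold and association radius as assumed values and the detection format our own):

```python
import numpy as np

def filter_moving_points(lidar_pts, radar_detections, v_thresh=0.3,
                         assoc_radius=1.0):
    """Remove LiDAR points spatially associated with radar detections
    whose absolute Doppler speed exceeds `v_thresh` (m/s, assumed).

    `radar_detections` is a list of dicts with keys "xy" (ground-plane
    position) and "doppler" (radial speed); `lidar_pts` is Nx3.
    """
    movers = [d["xy"] for d in radar_detections
              if abs(d["doppler"]) > v_thresh]
    if not movers:
        return lidar_pts
    keep = np.ones(len(lidar_pts), dtype=bool)
    for m in movers:  # drop points within the association radius
        dist = np.linalg.norm(lidar_pts[:, :2] - np.asarray(m), axis=1)
        keep &= dist > assoc_radius
    return lidar_pts[keep]

pts = np.array([[5.0, 0.0, 0.2], [5.2, 0.1, 0.5], [20.0, 3.0, 0.0]])
radar = [{"xy": (5.1, 0.0), "doppler": 2.5},   # a moving object
         {"xy": (20.0, 3.0), "doppler": 0.0}]  # static clutter
print(len(filter_moving_points(pts, radar)))  # 1
```

The full method additionally segments the mover in the LiDAR cloud and synchronises the two sensors in time and space before association; this sketch shows only the gating logic.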


2014 ◽  
Vol 615 ◽  
pp. 9-14 ◽  
Author(s):  
Claudio Bernal ◽  
Beatriz de Agustina ◽  
Marta María Marín ◽  
Ana Maria Camacho

Some manufacturers of 3D digitizing systems are developing and marketing more accurate, faster and more affordable fringe projection systems based on blue light technology. The aim of the present work is to determine the quality and accuracy of the data provided by the LED structured light scanner Comet L3D (Steinbichler). The quality and accuracy of the point cloud produced by the scanner are determined by measuring a number of gauge blocks of different sizes. The accuracy range of the scanner has been established through multiple digitizations, showing its dependence on factors such as the characteristics of the object and the scanning procedure. Although many factors have an influence, the accuracies announced by the manufacturer were achieved under optimal conditions, and it was noted that the quality of the point clouds (density, noise, dispersion of points) provided by this system is higher than that obtained with laser technology devices.

