mesh models
Recently Published Documents

TOTAL DOCUMENTS: 259 (FIVE YEARS: 58)
H-INDEX: 18 (FIVE YEARS: 2)

Author(s): U. G. Sefercik, T. Kavzoglu, M. Nazar, C. Atalay, M. Madak

Abstract. Recent improvements in game engines have increased interest in virtual reality (VR) technologies, which engage users in an artificial environment, and have led to the adoption of VR systems for displaying geospatial data. Because of the ongoing COVID-19 pandemic, and the resulting necessity to stay at home, VR tours have become very popular. In this paper, we create a three-dimensional (3D) virtual tour of the Gebze Technical University (GTU) Southern Campus by transferring high-resolution unmanned aerial vehicle (UAV) data into a virtual domain. UAV data is preferred in various applications because of its high spatial resolution, low cost and fast processing time. In this application, the study area was captured in UAV flights of different modes and altitudes, with a minimum ground sampling distance (GSD) of 2.18 cm, using a 20 MP digital camera. The UAV data was processed in the Structure from Motion (SfM)-based photogrammetric evaluation software Agisoft Metashape, and high-quality 3D textured mesh models were generated. Image orientation was completed using an optimal number of ground control points (GCPs), and the geometric accuracy was calculated as ±8 mm (~0.4 pixels). To create the VR tour, the UAV-based mesh models were transferred into the Unity game engine and optimized by applying occlusion culling and space subdivision algorithms. To improve the visualization, 3D object models such as trees, lighting poles and arbours were positioned in the VR scene. Finally, textual metadata about the buildings and a player with a first-person camera were added for an informative VR experience.
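As a rough arithmetic check of the figures in this abstract, the relation between a metric accuracy and its pixel equivalent via the ground sampling distance can be sketched. Only the 2.18 cm GSD and the ±8 mm accuracy come from the text; the sensor parameters in the test values are hypothetical illustrations of the GSD formula:

```python
# Sketch: express a metric geometric accuracy in pixel units via the ground
# sampling distance (GSD). Only the 2.18 cm GSD and the +/- 8 mm accuracy are
# taken from the abstract; everything else is a generic illustration.

def gsd_m(sensor_pixel_size_m: float, altitude_m: float, focal_length_m: float) -> float:
    """Ground sampling distance: ground footprint of one image pixel (metres)."""
    return sensor_pixel_size_m * altitude_m / focal_length_m

def accuracy_in_pixels(accuracy_m: float, gsd: float) -> float:
    """Express a metric accuracy as a fraction of one ground pixel."""
    return accuracy_m / gsd

gsd = 0.0218  # 2.18 cm GSD, as reported
acc = 0.008   # +/- 8 mm geometric accuracy, as reported
print(round(accuracy_in_pixels(acc, gsd), 2))  # ~0.37 px, i.e. the ~0.4 px in the text
```

The ±8 mm figure divided by the 2.18 cm GSD indeed lands near the 0.4-pixel value the abstract reports.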


2021, Vol. 14 (1), pp. 6
Author(s): Roberto de Lima-Hernandez, Maarten Vergauwen

Interest in computer-aided heritage reconstruction has grown in recent years thanks to the maturity of sophisticated computer vision techniques. Concretely, feature-based matching methods have been used to reassemble heritage assets, yielding plausible results for data that contains enough salient points for matching. However, they fail to register ancient artifacts that have badly deteriorated over the years, in particular monochromatic, incomplete data such as eroded 3D sunk-relief decorations, damaged drawings, and ancient inscriptions. The main issue lies in the lack of regions of interest and the poor quality of the data, which prevent feature-based algorithms from estimating distinctive descriptors. This paper addresses the reassembly of damaged decorations by deploying a Generative Adversarial Network (GAN) to predict the continuing decoration traces of broken heritage fragments. By extending the texture information of broken counterpart fragments, it is demonstrated that registration methods can find mutual characteristics that allow an accurate optimal rigid transformation to be estimated for fragment alignment. This work steps away from feature-based approaches, employing Mutual Information (MI) as a similarity metric to estimate an alignment transformation. Moreover, high-resolution geometry and imagery are combined to cope with the fragility and severe damage of heritage fragments. The test data consists of a set of ancient Egyptian decorated broken fragments recorded through 3D remote sensing techniques: structured-light scanning for mesh model creation, as well as orthophotos upon which digital drawings are created. Even though this study is restricted to Egyptian artifacts, the workflow can be applied to reconstruct other types of decoration patterns in the cultural heritage domain.
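The mutual-information metric mentioned above can be illustrated with a minimal sketch. This is not the paper's registration pipeline, only the textbook histogram-based MI computation it relies on, applied here to toy one-dimensional "images" of binned intensities:

```python
# Sketch: mutual information (MI) as an intensity-based similarity metric,
# useful when feature descriptors cannot be estimated. Toy 1-D sequences of
# discrete intensity bins stand in for binned 2-D image patches.
import math
from collections import Counter

def mutual_information(a, b):
    """MI (in nats) between two equal-length sequences of binned intensities."""
    assert len(a) == len(b)
    n = len(a)
    pa, pb = Counter(a), Counter(b)       # marginal histograms
    pab = Counter(zip(a, b))              # joint histogram
    mi = 0.0
    for (x, y), c in pab.items():
        pxy = c / n
        # pxy * log(pxy / (p(x) * p(y))), with marginals expressed as counts
        mi += pxy * math.log(pxy * n * n / (pa[x] * pb[y]))
    return mi

img = [0, 0, 1, 1, 2, 2, 3, 3]
print(mutual_information(img, img))            # maximal: equals the entropy of img
print(mutual_information(img, [0] * len(img))) # 0.0: a constant image is uninformative
```

Maximizing such a score over candidate rigid transformations is what lets intensity-based registration work where salient-point matching fails.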


2021, Vol. 13 (21), pp. 4434
Author(s): Chunhui Zhao, Chi Zhang, Yiming Yan, Nan Su

A novel framework for 3D reconstruction of buildings from a single off-nadir satellite image is proposed in this paper. Compared with traditional remote sensing reconstruction methods that use multiple images, recovering 3D information from a single image reduces the input-data demands of the reconstruction task. This addresses regions where remote sensing resources are scarce and the multiple images required by traditional reconstruction methods cannot be acquired. However, it is difficult to reconstruct a 3D model with a complete shape and accurate scale from a single image: the geometric constraints are insufficient, as view angle, building size, and spatial resolution differ among remote sensing images. To solve this problem, the proposed reconstruction framework consists of two convolutional neural networks: the Scale-Occupancy-Network (Scale-ONet) and the model scale optimization network (Optim-Net). From a single off-nadir satellite image, Scale-ONet generates watertight mesh models with the exact shape and a rough scale of the buildings, while Optim-Net reduces the scale error of these mesh models. Finally, the complete reconstructed scene is recovered by model-image matching. Profiting from the well-designed networks, our framework is robust to input images with different view angles, building sizes, and spatial resolutions. Experimental results show that an ideal reconstruction accuracy can be obtained for both the model shape and the scale of buildings.
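The occupancy-based representation behind networks of this family can be sketched in miniature. The `occupancy` function below is a hypothetical stand-in for a trained network, not the paper's model: a shape is encoded as a function over 3D points, the watertight surface is its level set, and a separately optimized global scale plays the role that Optim-Net's correction does:

```python
# Sketch of the occupancy-function idea: a toy, hand-written occupancy for an
# axis-aligned cube (a real network would *predict* this value per query point),
# plus a global scale factor applied to query points, standing in for the
# scale-optimization step. All names here are illustrative assumptions.

def occupancy(p, half_extent=1.0):
    """Toy occupancy of a cube centred at the origin: 1.0 inside, 0.0 outside."""
    x, y, z = p
    return 1.0 if max(abs(x), abs(y), abs(z)) <= half_extent else 0.0

def apply_scale(p, s):
    """Query a model scaled by factor s by rescaling the point instead."""
    return tuple(c / s for c in p)

# A point outside the unit cube falls inside once the model is scaled up 2x:
print(occupancy((1.5, 0.0, 0.0)))                 # 0.0
print(occupancy(apply_scale((1.5, 0.0, 0.0), 2))) # 1.0
```

Extracting the level set of such a function (e.g. with marching cubes) is what yields a watertight mesh, which is why occupancy models avoid the open-surface artifacts of depth-map fusion.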


2021, pp. 110773
Author(s): Patrick Zulian, Philipp Schädle, Liudmila Karagyaur, Maria G.C. Nestola

2021, Vol. 19 (3), pp. 470-480
Author(s): Shouxin Chen, Ming Chen, Shenglian Lu

2021, Vol. 13 (19), pp. 3801
Author(s): Yunsheng Zhang, Chi Zhang, Siyang Chen, Xueye Chen

Three-dimensional (3D) building façade model reconstruction is of great significance for urban applications and real-world visualization. This paper presents a newly developed method for automatically generating a 3D regular building façade model from a photogrammetric mesh model. To this end, contours are tracked on the irregular triangulation, and the local contour tree method, based on topological relationships, is employed to represent the topological structure of the photogrammetric mesh model. Subsequently, segmented contour groups are found by analyzing the topological relationships of the contours, and the original mesh model is divided into components from bottom to top through an iterative process. After that, each component is iteratively and robustly abstracted into cuboids. Finally, the parameters of each cuboid are adjusted to fit the original mesh model closely, and a lightweight polygonal mesh model is derived from the adjusted cuboids. Typical buildings and a whole scene of photogrammetric mesh models are used to assess the proposed method quantitatively and qualitatively. The results reveal that the proposed method can derive a regular façade model from a photogrammetric mesh model with reasonable accuracy.
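The cuboid-abstraction step can be reduced to its simplest form for illustration: fitting an axis-aligned bounding cuboid to each component's vertices. This is only the initialization; the paper additionally adjusts the cuboid parameters to fit the original mesh, which this sketch omits:

```python
# Sketch: abstract a mesh component into a cuboid by computing its axis-aligned
# bounding box from the component's vertex set. A minimal stand-in for the
# paper's cuboid fitting; the subsequent parameter adjustment is not shown.

def fit_cuboid(vertices):
    """Axis-aligned cuboid (min corner, max corner) enclosing a vertex set."""
    xs, ys, zs = zip(*vertices)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

# Hypothetical component vertices (x, y, z), e.g. one storey of a façade:
component = [(0.1, 0.0, 0.0), (2.0, 0.0, 0.0), (2.0, 1.0, 3.0), (0.0, 1.2, 3.1)]
lo, hi = fit_cuboid(component)
print(lo, hi)  # (0.0, 0.0, 0.0) (2.0, 1.2, 3.1)
```

Replacing each component by a handful of cuboid faces is what makes the output model lightweight compared with the dense photogrammetric triangulation.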


2021
Author(s): Ying Li, Yue Yu, Bowen Li, Yue Yang

Author(s): E. K. Stathopoulou, S. Rigon, R. Battisti, F. Remondino

Abstract. Mesh models generated by multi-view stereo (MVS) algorithms often fail to adequately represent the sharp, natural edge details of a scene. The harsh depth discontinuities of edge regions are a challenging case for dense reconstruction, while vertex displacement during mesh refinement frequently leads to smoothed edges that do not coincide with the fine details of the scene. Meanwhile, 3D edges have long been used for scene representation, particularly for man-made built environments, which are dominated by regular planar and linear structures. Indeed, 3D edge detection and matching are commonly exploited either to constrain camera pose estimation, to generate an abstract representation of the most salient parts of the scene, or to support mesh reconstruction. In this work, we jointly use 3D edge extraction and MVS mesh generation to promote the preservation of edge detail in the final result. Salient 3D edges of the scene are reconstructed with state-of-the-art algorithms and integrated into the dense point cloud to support the mesh triangulation step. Experiments on benchmark dataset sequences are performed, using metric and appearance-based measures, to evaluate our hypothesis.
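The integration step described above, injecting reconstructed 3D edge samples into the dense point cloud before triangulation, can be sketched minimally. The voxel cell size and the precedence rule here are hypothetical choices, not the paper's; the point is only that edge samples must be merged without duplicating nearby dense points:

```python
# Sketch: merge reconstructed 3-D edge samples into a dense point cloud so the
# subsequent triangulation is constrained to pass near sharp edges. Points are
# deduplicated on a coarse voxel grid (illustrative 1 cm cell); where a dense
# point and an edge point share a voxel, the edge point is kept.

def merge_points(dense, edges, cell=0.01):
    """Union of dense and edge points; edge points win inside a shared voxel."""
    def key(p):
        return tuple(round(c / cell) for c in p)
    merged = {key(p): p for p in dense}
    merged.update({key(p): p for p in edges})  # edge samples take precedence
    return list(merged.values())

dense = [(0.0, 0.0, 0.0), (0.102, 0.0, 0.0)]
edges = [(0.1, 0.0, 0.0), (0.2, 0.0, 0.0)]
print(len(merge_points(dense, edges)))  # 3: (0.102, ...) and (0.1, ...) share a voxel
```

Letting edge samples replace coincident dense points biases the triangulation's vertices toward the reconstructed edges, which is the intended constraint.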

