Semantic Labelling
Recently Published Documents

Total documents: 42 (last five years: 6)
H-index: 8 (last five years: 1)

Author(s): I. Farmakis, D. Bonneau, D. J. Hutchinson, N. Vlachopoulos

Abstract. Computer vision applications have been gaining ground in remote sensing and the geosciences for automated terrain classification and semantic labelling. The continuous and rapid development of monitoring techniques and improvements in sensor spatial resolution have increased the demand for new approaches to remote sensing data analysis. For semantic labelling of 2D (or 2.5D) image representations of rock slope terrain, Object-Based Image Analysis (OBIA) has been shown to provide efficient and accurate identification of landslide hazards. However, the application of such object-based approaches to 3D point cloud analysis is still under development for geospatial data. In engineering geology, which deals with complex rural landscapes, analysis frequently needs to be conducted solely on 3D geometrical information while accounting for multiple scales simultaneously. In this study, the primary segmentation step of the object-based model is applied to a TLS-derived point cloud collected at a landslide-active rock slope. The 3D point cloud segmentation methodology proposed here builds on the principles of the Fractal Net Evolution Approach (FNEA). The objective is to provide a geometry-based point cloud segmentation framework that preserves the 3D character of the data throughout the process and favours multi-scale analysis. The segmentation is performed on supervoxels derived from purely geometrical local descriptors computed directly from the TLS point clouds, and forms the basis for the subsequent steps towards an efficient Object-Based Point cloud Analysis (OBPA) framework for rock slope stability assessment, in which semantic meaning is added to the data through a homogenization process.
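To illustrate the kind of geometry-only local descriptors such a segmentation can start from, the sketch below computes per-point eigenvalue features (linearity, planarity, sphericity) from local covariance and clusters position plus descriptors into over-segmented, geometrically homogeneous patches. This is a minimal stand-in for the supervoxel step, not the authors' FNEA-based implementation; the neighbourhood size, descriptor weighting and cluster count are illustrative assumptions.

```python
# Minimal sketch: geometry-only descriptors for supervoxel-like over-segmentation of a point cloud.
import numpy as np
from scipy.spatial import cKDTree
from sklearn.cluster import MiniBatchKMeans

def eigen_descriptors(points, k=30):
    """Per-point linearity/planarity/sphericity from the local covariance eigenvalues."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    desc = np.empty((len(points), 3))
    for i, nbrs in enumerate(idx):
        cov = np.cov(points[nbrs].T)
        w = np.sort(np.linalg.eigvalsh(cov))[::-1]      # lambda1 >= lambda2 >= lambda3
        w = np.maximum(w, 1e-12)
        desc[i] = [(w[0] - w[1]) / w[0],                 # linearity
                   (w[1] - w[2]) / w[0],                 # planarity
                   w[2] / w[0]]                          # sphericity
    return desc

def supervoxel_like_segments(points, n_segments=500, geom_weight=5.0, k=30):
    """Cluster on position + weighted geometric descriptors to obtain small,
    geometrically homogeneous patches (a stand-in for the supervoxel step)."""
    feats = np.hstack([points, geom_weight * eigen_descriptors(points, k)])
    return MiniBatchKMeans(n_clusters=n_segments).fit_predict(feats)

if __name__ == "__main__":
    pts = np.random.rand(20000, 3)       # placeholder for a TLS-derived point cloud
    labels = supervoxel_like_segments(pts, n_segments=200)
    print("segments:", labels.max() + 1)
```

The resulting patches could then be merged by descriptor similarity, in the spirit of FNEA's pairwise region merging, before semantic meaning is attached to each object.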


Author(s): Zoe Landgraf, Fabian Falck, Michael Bloesch, Stefan Leutenegger, Andrew J. Davison


2020, Vol. 143, pp. 113053
Author(s): Daniel Ayala, Agustín Borrego, Inma Hernández, David Ruiz

This chapter describes the proposed semantic-based process mining and analysis framework (SPMaAF) and the main components applied for its integration and full implementation. The conceptual method of analysis and the way the framework is designed in this book are explained in detail. The chapter also shows that the quality improvement of the derived process models results from employing process mining techniques that encode the envisaged system with three rudimentary building blocks, namely semantic labelling (annotation), semantic representation (ontology), and semantic reasoning (reasoner).
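The three building blocks named above can be illustrated with a small, self-contained sketch: event-log entries are annotated with ontology concepts, the ontology supplies the class hierarchy, and a subclass-closure query stands in for the reasoning step. This is not the SPMaAF implementation; the namespace, class names and event identifiers are invented for illustration.

```python
# Minimal sketch of semantic labelling (annotation), representation (ontology) and reasoning.
from rdflib import Graph, Namespace, RDF, RDFS, Literal

EX = Namespace("http://example.org/process#")
g = Graph()
g.bind("ex", EX)

# 1) Semantic representation: a tiny ontology of activity concepts.
g.add((EX.ReviewActivity, RDFS.subClassOf, EX.Activity))
g.add((EX.ApproveActivity, RDFS.subClassOf, EX.ReviewActivity))

# 2) Semantic labelling: annotate a raw event-log entry with an ontology concept.
g.add((EX.evt_42, RDF.type, EX.ApproveActivity))
g.add((EX.evt_42, RDFS.label, Literal("approve purchase order")))

# 3) Semantic reasoning: the subclass closure lets a query over the general concept
#    also retrieve events annotated with its specialisations.
q = """
SELECT ?evt WHERE {
  ?evt a ?cls .
  ?cls rdfs:subClassOf* ex:ReviewActivity .
}
"""
for row in g.query(q, initNs={"ex": EX, "rdfs": RDFS}):
    print("event classified under ReviewActivity:", row.evt)
```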


2019, Vol. 128 (2), pp. 319-335
Author(s): Armin Mustafa, Adrian Hilton

Abstract. Simultaneous, semantically coherent, object-based long-term 4D scene flow estimation, co-segmentation and reconstruction is proposed, exploiting the coherence in semantic class labels both spatially, between views at a single time instant, and temporally, between widely spaced time instants of dynamic objects with similar shape and appearance. In this paper we propose a framework for spatially and temporally coherent semantic 4D scene flow of general dynamic scenes from multiple-view videos captured with a network of static or moving cameras. Semantic coherence results in improved 4D scene flow estimation, segmentation and reconstruction for complex dynamic scenes. Semantic tracklets are introduced to robustly initialize the scene flow in the joint estimation and to enforce temporal coherence in 4D flow, semantic labelling and reconstruction between widely spaced instances of dynamic objects. Tracklets of dynamic objects enable unsupervised learning of long-term flow, appearance and shape priors that are exploited in semantically coherent 4D scene flow estimation, co-segmentation and reconstruction. Comprehensive performance evaluation against state-of-the-art techniques on challenging indoor and outdoor sequences with hand-held moving cameras shows improved accuracy in 4D scene flow, segmentation, temporally coherent semantic labelling, and reconstruction of dynamic scenes.
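A highly simplified sketch of the "semantic tracklet" idea is shown below: per-frame object observations are linked across widely spaced frames only when their semantic class matches and an appearance descriptor is similar. This is not the paper's joint estimation; the data layout, similarity measure and threshold are illustrative assumptions.

```python
# Minimal sketch: linking detections across frames by semantic class + appearance similarity.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def link_tracklets(frames, sim_thresh=0.8):
    """frames: list of lists of dicts {'label': str, 'appearance': np.ndarray}.
    Returns tracklets as lists of (frame_idx, detection_idx) pairs."""
    tracklets = []
    for t, dets in enumerate(frames):
        for d_idx, det in enumerate(dets):
            matched = None
            for tr in tracklets:
                last_t, last_d = tr[-1]
                prev = frames[last_t][last_d]
                # Link only same-class observations with similar appearance.
                if prev["label"] == det["label"] and \
                   cosine(prev["appearance"], det["appearance"]) > sim_thresh:
                    matched = tr
                    break
            if matched is not None:
                matched.append((t, d_idx))
            else:
                tracklets.append([(t, d_idx)])   # start a new tracklet
    return tracklets
```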


2019, Vol. 83, pp. 57-68
Author(s): Daniel Ayala, Inma Hernández, David Ruiz, Miguel Toro

Author(s): L. Yan, W. Xia

Abstract. 2D texture measures cannot fully describe the texture of 3D objects because they only consider the intensity distribution within a 2D image region, whereas in the real world object intensities are distributed over 3D surfaces. This paper proposes a modified three-dimensional grey-level co-occurrence matrix (3D-GLCM); the original 3D-GLCM was introduced for volumetric data and cannot be applied directly to spectral images with a digital surface model because the data are sparse in the direction perpendicular to the image plane. Spectral and geometric features combined with no texture, with 2D-GLCM, and with 3D-GLCM were fed into a random forest classifier and compared on the ISPRS 2D semantic labelling challenge dataset; the overall accuracy of the combination containing 3D-GLCM improved by 2.4% and 1.3% over the combinations without textures and with 2D-GLCM, respectively.
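For orientation, the sketch below computes standard 2D-GLCM texture features per image patch and feeds them to a random forest, i.e. the kind of "2D-GLCM" baseline the paper compares against; the modified 3D-GLCM is the paper's own contribution and is not reproduced here. The patch size, grey-level quantisation, chosen properties and toy labels are illustrative assumptions.

```python
# Minimal baseline sketch: 2D-GLCM texture features per patch + random forest classification.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.ensemble import RandomForestClassifier

def glcm_features(patch, levels=32):
    """Contrast/homogeneity/energy/correlation from a quantised 2D patch."""
    q = (patch / max(patch.max(), 1) * (levels - 1)).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# Toy data standing in for labelled terrain patches from a semantic labelling dataset.
rng = np.random.default_rng(0)
patches = rng.integers(0, 255, size=(100, 32, 32)).astype(float)
labels = rng.integers(0, 3, size=100)     # e.g. building / vegetation / ground

X = np.vstack([glcm_features(p) for p in patches])
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```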


Author(s): E.-K. Stathopoulou, F. Remondino

Abstract. Automatic semantic segmentation of images is becoming a very prominent research field, with many promising and reliable solutions already available. Labelled images used as input to the photogrammetric pipeline have enormous potential to improve 3D reconstruction results. To support this argument, in this work we discuss the contribution of image semantic labelling to image-based 3D reconstruction in photogrammetry. We experiment with semantic information in various steps, from feature matching to dense 3D reconstruction. Labelling in 2D is considered an easier task in terms of data availability and algorithm maturity. However, since semantic labelling of all the images involved in the reconstruction can be a costly, laborious and time-consuming task, we propose to use a deep learning architecture to automatically generate semantically segmented images. To this end, we have trained a Convolutional Neural Network (CNN) on historic building façade images, a dataset that will be further enriched in the future. The first results of this study are promising, showing improved quality of the 3D reconstruction and the possibility to transfer the labelling results from 2D to 3D.
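One simple way labelled images can enter the photogrammetric workflow, sketched below, is to restrict feature detection and matching to pixels of a chosen semantic class (e.g. façade) using a per-image label map such as a CNN would produce. This is not the authors' pipeline; the class id, file names and choice of SIFT are assumptions for illustration.

```python
# Minimal sketch: semantically masked feature detection and matching with OpenCV.
import cv2
import numpy as np

FACADE_CLASS = 1   # assumed id of the "facade" class in the label map

def masked_features(image_path, label_path):
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    labels = cv2.imread(label_path, cv2.IMREAD_GRAYSCALE)
    mask = (labels == FACADE_CLASS).astype(np.uint8) * 255   # OpenCV mask: 0 or 255
    sift = cv2.SIFT_create()
    return sift.detectAndCompute(img, mask)                   # keypoints only on facades

kp1, des1 = masked_features("view1.jpg", "view1_labels.png")
kp2, des2 = masked_features("view2.jpg", "view2_labels.png")

# Matching is confined to the masked class, so spurious correspondences on sky or
# vegetation are excluded before they can degrade the 3D reconstruction.
matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
matches = matcher.match(des1, des2)
print("semantically filtered matches:", len(matches))
```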

