A new locally-adaptive classification method LAGMA for large-scale land cover mapping using remote-sensing data

2014 ◽  
Vol 5 (1) ◽  
pp. 55-64 ◽  
Author(s):  
S.A. Bartalev ◽  
V.A. Egorov ◽  
E.A. Loupian ◽  
S.A. Khvostikov

2021 ◽
Author(s):  
Melanie Brandmeier ◽  
Eya Cherif

Degradation of large forest areas such as the Brazilian Amazon due to logging and fires can increase the human footprint far beyond deforestation. Monitoring and quantifying such changes on a large scale has been addressed by several research groups (e.g. Souza et al. 2013) using freely available remote sensing data such as the Landsat archive. However, fully automatic large-scale land cover/land use mapping remains one of the great challenges in remote sensing. One problem is the availability of reliable "ground truth" labels for training supervised learning algorithms. For the Amazon area, several land cover maps with 22 classes are available from the MapBiomas project; these were derived by semi-automatic classification and verified by extensive fieldwork (Project MapBiomas). The labels cannot be considered real ground truth, as they were themselves derived from Landsat data, but they can still be used for weakly supervised training of deep-learning models that have the potential to improve predictions on the higher-resolution data now available. The term weakly supervised learning was originally coined by Zhou (2017) and refers to the attempt to construct predictive models from incomplete, inexact and/or inaccurate labels, as is often the case in remote sensing. To this end, we investigate advanced deep-learning strategies on Sentinel-1 time series and Sentinel-2 optical data to improve large-scale automatic mapping and monitoring of land cover changes in the Amazon area. Sentinel-1 data has the advantage of being insensitive to the cloud cover that often hinders optical remote sensing in the tropics.

We propose new architectures adapted to the particularities of remote sensing data (S1 time series and multispectral S2 data) and compare their performance to state-of-the-art models. Results using only spectral data were very promising, with overall test accuracies of 77.9% for Unet and 74.7% for a DeepLab implementation with a ResNet50 backbone, and F1 measures of 43.2% and 44.2%, respectively. On the other hand, preliminary results for new architectures leveraging the multi-temporal aspect of SAR data have improved the quality of mapping, particularly for agricultural classes. For instance, our newly designed network AtrousDeepForestM2 achieves quantitative performance similar to DeepLab (F1 of 58.1% vs. 62.1%) while producing better qualitative land cover maps.

To make our approach scalable and feasible for others, we integrate the trained models into a geoprocessing tool in ArcGIS that can also be deployed in a cloud environment and offers a variety of post-processing options to the user.

Souza, C. M., et al. (2013). "Ten-Year Landsat Classification of Deforestation and Forest Degradation in the Brazilian Amazon." Remote Sensing 5(11): 5493-5513.

Zhou, Z.-H. (2017). "A Brief Introduction to Weakly Supervised Learning." National Science Review 5(1): 44-53.

Project MapBiomas - Collection 4.1 of the Brazilian Land Cover & Use Map Series, accessed January 2020 through the link: https://mapbiomas.org/colecoes-mapbiomas?cama_set_language=en
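The overall accuracy and F1 measures quoted above compare a predicted label map against the (weak) reference labels pixel by pixel. A minimal sketch with synthetic labels (the class count and noise rate are illustrative, not the study's data):

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

# Hypothetical flattened label maps: reference labels (e.g. from a weakly
# supervising land cover product) vs. a model prediction that agrees with
# the reference roughly 80% of the time and guesses otherwise.
rng = np.random.default_rng(42)
ref = rng.integers(0, 5, size=10_000)
pred = np.where(rng.random(10_000) < 0.8, ref, rng.integers(0, 5, size=10_000))

oa = accuracy_score(ref, pred)             # overall accuracy: fraction correct
f1 = f1_score(ref, pred, average="macro")  # class-averaged F1, sensitive to
                                           # rare classes that OA can hide
```

Macro-averaged F1 explains why the abstract can report a high overall accuracy (77.9%) together with a much lower F1 (43.2%): rare classes pull the class average down without affecting OA much.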


2006 ◽  
Author(s):  
H. S. Lim ◽  
M. Z. MatJafri ◽  
K. Abdullah ◽  
N. M. Saleh ◽  
C. J. Wong ◽  
...  

2021 ◽  
Vol 10 (8) ◽  
pp. 533
Author(s):  
Bin Hu ◽  
Yongyang Xu ◽  
Xiao Huang ◽  
Qimin Cheng ◽  
Qing Ding ◽  
...  

Accurate land cover mapping is important for urban planning and management, and remote sensing data have been widely applied to it. However, obtaining a land cover classification from optical remote sensing data alone is difficult due to spectral confusion. To reduce the confusion between dark impervious surfaces and water, Sentinel-1A Synthetic Aperture Radar (SAR) data are synergistically combined with Sentinel-2B Multispectral Instrument (MSI) data. The novel support vector machine with composite kernels (SVM-CK) approach, which can exploit spatial information, is proposed to process the combination of Sentinel-2B MSI and Sentinel-1A SAR data. The classification based on the fusion of Sentinel-2B and Sentinel-1A data yields an overall accuracy (OA) of 92.12% with a kappa coefficient (KA) of 0.89, superior to the classification results obtained from Sentinel-2B MSI imagery and Sentinel-1A SAR imagery separately. The results indicate that adding Sentinel-1A SAR data to Sentinel-2B MSI data can improve classification performance by reducing the confusion between built-up areas and water. This study shows that land cover classification can be improved by fusing Sentinel-2B and Sentinel-1A imagery.


Author(s):  
Á. Barsi ◽  
Zs. Kugler ◽  
I. László ◽  
Gy. Szabó ◽  
H. M. Abdulmutalib

The technological developments in remote sensing (RS) during the past decade have contributed to a significant increase in the size of the data user community, and data quality issues in remote sensing have therefore grown in importance, particularly in the era of Big Earth data. Dozens of available sensors, hundreds of sophisticated data processing techniques and countless software tools assist the processing of RS data and contribute to a major increase in applications and users. In the past decades, the scientific and technological community in the spatial data domain focused on evaluating data quality elements computed for the point, line and area geometry of vector and raster data. Stakeholders in data production commonly use standardised parameters to characterise the quality of their datasets. Yet these efforts to estimate quality have not reached the general end-user community, which runs heterogeneous applications and tends to assume that its spatial data are error-free and best fitted to the specification standards. Non-specialist users have very limited knowledge of how well spatial data meet their needs. These parameters form the external quality dimensions, which imply that the same data system can be of different quality to different users. The large volume of observed information carries uncertainty at a level that can degrade the reliability of applications.

Based on a prior paper by the authors (in cooperation within the Remote Sensing Data Quality working group of ISPRS), which established a taxonomy of data quality dimensions in the GIS and remote sensing domains, this paper focuses on measures of uncertainty across the remote sensing data lifecycle, with emphasis on land cover mapping. We introduce how the quality of various combinations of data and procedures can be summarised and how services fit users' needs. The paper gives a theoretical overview of the issue, evaluates selected practice-oriented approaches, and discusses widely used metrics such as the Root Mean Squared Error (RMSE) and the confusion matrix. The authors present the data quality features of well-defined and poorly defined objects. The central part of the study is land cover mapping: its accuracy management model is described, along with the relevance and uncertainty measures of the quality dimensions that influence it. The theory is supported by a case study in which remote sensing technology supports the area-based agricultural subsidies of the European Union in the Hungarian administration.
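The two metrics named above measure different quality dimensions: the confusion matrix captures thematic (classification) quality, while RMSE captures a continuous error such as positional accuracy. A small sketch with invented validation samples:

```python
import numpy as np

# Hypothetical reference vs. classified labels for a small validation sample.
ref = np.array([0, 0, 1, 1, 1, 2, 2, 2, 2, 0])
cls = np.array([0, 1, 1, 1, 2, 2, 2, 2, 0, 0])

classes = np.unique(ref)
cm = np.zeros((len(classes), len(classes)), dtype=int)
for r, c in zip(ref, cls):
    cm[r, c] += 1  # rows: reference class, columns: mapped class

overall_accuracy = np.trace(cm) / cm.sum()
producers = np.diag(cm) / cm.sum(axis=1)  # per-class view of omission errors
users = np.diag(cm) / cm.sum(axis=0)      # per-class view of commission errors

# RMSE for a continuous quality dimension, e.g. positional error in metres.
measured = np.array([1.2, 0.8, 1.5])
true = np.array([1.0, 1.0, 1.3])
rmse = np.sqrt(np.mean((measured - true) ** 2))
```

Producer's and user's accuracy illustrate the external-quality point made above: the same map can serve one user well and another poorly, depending on which class's omission or commission errors matter to them.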


Author(s):  
M. Schmitt ◽  
J. Prexl ◽  
P. Ebel ◽  
L. Liebel ◽  
X. X. Zhu

Abstract. Fully automatic large-scale land cover mapping is one of the core challenges addressed by the remote sensing community. Usually, the basis of this task is formed by (supervised) machine learning models. However, in spite of recent growth in the availability of satellite observations, accurate training data remain comparably scarce. On the other hand, numerous global land cover products exist and can often be accessed free of charge. Unfortunately, these maps are typically of much lower resolution than modern satellite imagery. Besides, they always come with a significant amount of noise, as they cannot be considered ground truth but are the products of previous (semi-)automatic prediction tasks. Therefore, this paper seeks to make a case for the application of weakly supervised learning strategies to get the most out of the available data sources and achieve progress in high-resolution large-scale land cover mapping. Challenges and opportunities are discussed on the basis of the SEN12MS dataset, for which some baseline results are also shown. These baselines indicate that there is still a lot of potential for dedicated approaches designed to deal with remote-sensing-specific forms of weak supervision.
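One remote-sensing-specific form of weak supervision mentioned above is the resolution gap between modern imagery and the supervising land cover product. A sketch of one possible way to bridge it: aggregate fine-grained predictions to the coarse label grid by majority vote before scoring them, so the noisy map supervises only at the scale it actually resolves. Block size and class count are assumptions for illustration, not SEN12MS parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
fine = rng.integers(0, 4, size=(64, 64))  # per-pixel predicted class codes
block = 8  # one coarse label cell covers an 8x8 patch of fine pixels

coarse_pred = np.zeros((64 // block, 64 // block), dtype=int)
for i in range(coarse_pred.shape[0]):
    for j in range(coarse_pred.shape[1]):
        patch = fine[i * block:(i + 1) * block, j * block:(j + 1) * block]
        coarse_pred[i, j] = np.bincount(patch.ravel()).argmax()  # majority vote

# coarse_pred can now be compared against the low-resolution product's labels.
```

In a training loop, the same idea would apply to aggregating predicted class probabilities rather than hard labels, keeping the loss differentiable.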

