Method of Improving Instance Segmentation for Very High Resolution Remote Sensing Imagery Using Deep Learning

Author(s): Volodymyr Hnatushenko, Vadym Zhernovyi
2018, Vol 10 (1), pp. 144
Author(s): Yongyang Xu, Liang Wu, Zhong Xie, Zhanlong Chen
2020, Vol 12 (15), pp. 2426
Author(s): Alin-Ionuț Pleșoianu, Mihai-Sorin Stupariu, Ionuț Șandric, Ileana Pătru-Stupariu, Lucian Drăguț

Traditional methods for individual tree-crown (ITC) detection (image classification, segmentation, template matching, etc.) applied to very high-resolution remote sensing imagery have been shown to struggle across disparate landscape types or image resolutions due to scale problems and information complexity. Deep learning promised to overcome these shortcomings thanks to its superior performance and versatility, proven by reported detection rates of ~90%. However, such models still find their limits in transferability across study areas because of different tree conditions (e.g., isolated trees vs. compact forests) and/or resolutions of the input data. This study introduces a highly replicable deep learning ensemble design for ITC detection and species classification based on the established single shot detector (SSD) model. The ensemble design varies the input data fed to the SSD models and couples this with a voting strategy over the output predictions. Very high-resolution unmanned aerial vehicle (UAV) imagery, aerial remote sensing imagery, and elevation data are used in different combinations to test the performance of the ensemble models in three study sites with highly contrasting spatial patterns. The results show that ensemble models perform better than any single SSD model, regardless of the local tree conditions or image resolution. The detection performance and the accuracy rates improved by 3–18% with as few as two participant single models, regardless of the study site. However, when more than two models were included, the performance of the ensemble models improved only slightly or even dropped.
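A voting strategy over single-model detections can be sketched as follows. This is an illustrative reading of the abstract, not the study's code: the box format (x1, y1, x2, y2), the IoU threshold, and the minimum vote count are all assumptions.

```python
# Hypothetical sketch of a detection-level voting ensemble: a box is kept
# only if at least `min_votes` of the single SSD models predict an
# overlapping box (IoU above a threshold) with the same class label.

def iou(a, b):
    """Intersection-over-union of two boxes in (x1, y1, x2, y2) format."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def vote_ensemble(model_outputs, iou_thr=0.5, min_votes=2):
    """model_outputs: one list of (box, label) detections per SSD model.
    Returns detections supported by at least `min_votes` models,
    de-duplicated so each agreed-upon tree crown appears once."""
    kept = []
    for i, dets in enumerate(model_outputs):
        for box, label in dets:
            # Count this model's own prediction plus every other model
            # that predicts an overlapping box with the same species.
            votes = 1
            for j, other in enumerate(model_outputs):
                if j != i and any(lbl == label and iou(box, b) >= iou_thr
                                  for b, lbl in other):
                    votes += 1
            already = any(lbl == label and iou(box, b) >= iou_thr
                          for b, lbl in kept)
            if votes >= min_votes and not already:
                kept.append((box, label))
    return kept
```

With two models agreeing on an "oak" crown and one stray single-model "pine" box, only the agreed detection survives.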


2016, Vol 8 (10), pp. 814
Author(s): ZhiYong Lv, Haiqing He, Jón Benediktsson, Hong Huang

2020, Vol 12 (18), pp. 2985
Author(s): Yeneng Lin, Dongyun Xu, Nan Wang, Zhou Shi, Qiuxiao Chen

Automatic road extraction from very high-resolution remote sensing images has become a popular topic in a wide range of fields. Convolutional neural networks are often used for this purpose. However, many network models fail to achieve satisfactory extraction results because roads in images are elongated and vary in size. To improve the accuracy of road extraction, this paper proposes a deep learning model based on the structure of Deeplab v3. It incorporates a squeeze-and-excitation (SE) module to weight different feature channels, and performs multi-scale upsampling to preserve and fuse shallow and deep information. To address the imbalance of road samples in images, different loss functions and backbone network modules are tested during the model's training. Compared with cross-entropy, Dice loss improves the performance of the model during training and prediction. The SE module is superior to ResNeXt and ResNet in improving the integrity of the extracted roads. Experimental results obtained on the Massachusetts Roads Dataset show that the proposed model (Nested SE-Deeplab) improves F1-score by 2.4% and Intersection over Union by 2.0% compared with FC-DenseNet. The proposed model also achieves better segmentation accuracy in road extraction than other mainstream deep learning models, including Deeplab v3, SegNet, and UNet.
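The two ingredients the abstract names can be sketched in isolation. This is a minimal NumPy illustration, not the paper's implementation: the array shapes, the SE reduction ratio, and the weight matrices are assumptions for demonstration.

```python
import numpy as np

def se_block(x, w1, w2):
    """Squeeze-and-excitation channel reweighting for a feature map
    x of shape (C, H, W). w1: (C//r, C) and w2: (C, C//r) are the two
    fully connected layers (reduction ratio r is a design choice)."""
    z = x.mean(axis=(1, 2))                 # squeeze: global average pool -> (C,)
    s = np.maximum(w1 @ z, 0.0)             # excite: FC + ReLU -> (C//r,)
    s = 1.0 / (1.0 + np.exp(-(w2 @ s)))     # excite: FC + sigmoid -> (C,)
    return x * s[:, None, None]             # scale: reweight each channel

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss for a binary road mask; pred, target in [0, 1].
    Unlike per-pixel cross-entropy, it is driven by mask overlap, which
    helps when road pixels are a small fraction of the image."""
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```

With zero-initialised SE weights the sigmoid outputs 0.5, so every channel is simply halved; a perfect mask prediction gives a Dice loss near 0, a fully missed mask a loss near 1.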

