The Influence Measures of Light Intensity on Machine Learning for Semantic Segmentation

Author(s):  
Cheng-Hsien Chen ◽  
Yeong-Kang Lai
2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Rajat Garg ◽  
Anil Kumar ◽  
Nikunj Bansal ◽  
Manish Prateek ◽  
Shashi Kumar

Abstract. Urban area mapping is an important application of remote sensing which aims at estimating both land cover and changes in land cover within urban areas. A major challenge faced while analyzing Synthetic Aperture Radar (SAR) based remote sensing data is that highly vegetated urban areas and oriented urban targets appear very similar to actual vegetation. This similarity leads to misclassification of urban areas as forest cover. The present work is a precursor study for the dual-frequency L- and S-band NASA-ISRO Synthetic Aperture Radar (NISAR) mission and aims at minimizing the misclassification of such highly vegetated and oriented urban targets into the vegetation class with the help of deep learning. In this study, three machine learning algorithms, Random Forest (RF), K-Nearest Neighbour (KNN), and Support Vector Machine (SVM), have been implemented along with a deep learning model, DeepLabv3+, for semantic segmentation of Polarimetric SAR (PolSAR) data. It is a general perception that a large dataset is required for the successful implementation of any deep learning model, but in the field of SAR-based remote sensing a major issue is the unavailability of a large benchmark labeled dataset for training deep learning algorithms from scratch. In the current work, it has been shown that the pre-trained deep learning model DeepLabv3+ outperforms the machine learning algorithms on the land use and land cover (LULC) classification task even with a small dataset, using transfer learning. The highest pixel accuracy of 87.78% and overall pixel accuracy of 85.65% have been achieved with DeepLabv3+; Random Forest performs best among the machine learning algorithms with an overall pixel accuracy of 77.91%, while SVM and KNN trail with overall accuracies of 77.01% and 76.47% respectively. The highest precision of 0.9228 is recorded for the urban class in the semantic segmentation task with DeepLabv3+, while the machine learning algorithms SVM and RF give comparable results with precisions of 0.8977 and 0.8958 respectively.
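The transfer-learning setup the abstract describes can be illustrated with a short sketch. This is a hypothetical example, not the authors' code: torchvision ships DeepLabv3 (without the "+" decoder), used here as a stand-in, and the class count and the mapping of PolSAR features to three input channels are assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models.segmentation import deeplabv3_resnet101

NUM_CLASSES = 4  # hypothetical LULC classes, e.g. urban / vegetation / water / bare soil

# Load a model pre-trained on natural images, then replace only the final
# 1x1 classification layer so it predicts our LULC classes.
model = deeplabv3_resnet101(weights="DEFAULT")
model.classifier[4] = nn.Conv2d(256, NUM_CLASSES, kernel_size=1)

# Freeze the backbone: with a small dataset, only the new head is trained.
for p in model.backbone.parameters():
    p.requires_grad = False

optimizer = torch.optim.Adam((p for p in model.parameters() if p.requires_grad), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(images, masks):
    """images: (B, 3, H, W) float tensor; masks: (B, H, W) long tensor of class ids."""
    model.train()
    optimizer.zero_grad()
    logits = model(images)["out"]      # (B, NUM_CLASSES, H, W)
    loss = criterion(logits, masks)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Freezing the backbone and fine-tuning only the head is the usual way to make a large pre-trained segmentation model usable on a small labeled dataset, which is the point the abstract makes.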


2021 ◽  
pp. 100057
Author(s):  
Peiran Li ◽  
Haoran Zhang ◽  
Zhiling Guo ◽  
Suxing Lyu ◽  
Jinyu Chen ◽  
...  

Author(s):  
X.-F. Xing ◽  
M. A. Mostafavi ◽  
G. Edwards ◽  
N. Sabo

<p><strong>Abstract.</strong> Automatic semantic segmentation of point clouds observed in a 3D complex urban scene is a challenging issue. Semantic segmentation of urban scenes based on machine learning algorithm requires appropriate features to distinguish objects from mobile terrestrial and airborne LiDAR point clouds in point level. In this paper, we propose a pointwise semantic segmentation method based on our proposed features derived from Difference of Normal and the features “directional height above” that compare height difference between a given point and neighbors in eight directions in addition to the features based on normal estimation. Random forest classifier is chosen to classify points in mobile terrestrial and airborne LiDAR point clouds. The results obtained from our experiments show that the proposed features are effective for semantic segmentation of mobile terrestrial and airborne LiDAR point clouds, especially for vegetation, building and ground classes in an airborne LiDAR point clouds in urban areas.</p>


2020 ◽  
Author(s):  
Binayak Ghosh ◽  
Mahdi Motagh ◽  
Mahmud Haghshenas Haghighi ◽  
Setareh Maghsudi

&lt;p&gt;&lt;span xml:lang=&quot;EN-US&quot; data-contrast=&quot;auto&quot;&gt;&lt;span&gt;Synthetic Aperture Radar (SAR) observations are widely used in emergency response for flood mapping and monitoring. Emergency responders frequently request satellite-based crisis information for flood monitoring to target the often-limited resources and to prioritize response actions throughout a disaster situation. Flood mapping algorithms are usually based on automatic thresholding algorithms for the initialization of the classification process in SAR amplitude data. These thresholding processes like Otsu thresholding, histogram leveling etc., are followed by clustering techniques like K-means, ISODATA for segmentation of water and non-water areas. These methods are capable of extracting the flood extent if there is a significant contrast between water and non-water areas in the SAR data. However, the classification result may be related to overestimations if non-water areas have a similar low backscatter as open water surfaces and also, these backscatter values differentiate from VV and VH polarizations. Our method aims at improving existing satellite-based emergency mapping methods by incorporating systematically acquired Sentinel-1A/B SAR data at high spatial (20m) and temporal (3-5 days) resolution. Our method involves a supervised learning method for flood detection by leveraging SAR intensity and interferometric coherence as well as polarimetry information.&amp;#160;&lt;/span&gt;&lt;/span&gt;&lt;span xml:lang=&quot;EN-US&quot; data-contrast=&quot;auto&quot;&gt;&lt;span&gt;It uses multi-temporal intensity and coherence conjunctively to extract flood information of varying flooded landscapes. By incorporating multitemporal satellite imagery, our method allows for rapid and accurate post-disaster damage assessment and can be used for better coordination of medium- and long-term financial assistance programs for affected areas. In this paper, we present a strategy using machine learning for semantic segmentation of the flood map, which extracts the&amp;#160;&lt;/span&gt;&lt;/span&gt;&lt;span xml:lang=&quot;EN-US&quot; data-contrast=&quot;auto&quot;&gt;&lt;span&gt;spatio&lt;/span&gt;&lt;/span&gt;&lt;span xml:lang=&quot;EN-US&quot; data-contrast=&quot;auto&quot;&gt;&lt;span&gt;-temporal information from the SAR images having both&amp;#160;&lt;/span&gt;&lt;/span&gt;&lt;span xml:lang=&quot;EN-US&quot; data-contrast=&quot;auto&quot;&gt;&lt;span&gt;intensity&lt;/span&gt;&lt;/span&gt;&lt;span xml:lang=&quot;EN-US&quot; data-contrast=&quot;auto&quot;&gt;&lt;span&gt;&amp;#160;as well coherence bands. The flood maps produced by the fusion of intensity and coherence are validated against state-of-the art methods for producing flood maps.&lt;/span&gt;&lt;/span&gt;&lt;span&gt;&amp;#160;&lt;/span&gt;&lt;/p&gt;


Author(s):  
M. Lu ◽  
L. Groeneveld ◽  
D. Karssenberg ◽  
S. Ji ◽  
R. Jentink ◽  
...  

Abstract. Spatiotemporal geomorphological mapping of intertidal areas is essential for understanding system dynamics and provides information for ecological conservation and management. Mapping the geomorphology of intertidal areas is very challenging, mainly because spectral differences are often relatively small while transitions between geomorphological units are often gradual. The intertidal areas are also highly dynamic. Considerable challenges are to distinguish between different types of tidal flats, specifically low and high dynamic shoal flats, sandy and silty low dynamic flats, and mega-ripple areas. In this study, we harness machine learning methods for automated geomorphological mapping and compare models using features calculated with classical Object-Based Image Analysis (OBIA) against end-to-end deep convolutional neural networks that derive features directly from imagery. This study is expected to provide an in-depth understanding of the features that contribute to tidal area classification and to greatly improve automation and prediction accuracy. We emphasise model interpretability and knowledge mining. By comparing and combining object-based and deep learning-based models, this study contributes to the development and integration of both methodology domains for semantic segmentation.
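The object-based branch of such a comparison can be sketched in a few lines. This is a generic illustration under stated assumptions, not the authors' workflow: SLIC stands in for the segmentation step, and per-object mean/standard-deviation statistics stand in for the OBIA feature set.

```python
import numpy as np
from skimage.segmentation import slic
from sklearn.ensemble import RandomForestClassifier

def object_features(image, n_segments=500):
    """Segment an (H, W, C) image into objects and compute simple per-object
    spectral statistics. Returns the feature matrix and the label map."""
    segments = slic(image, n_segments=n_segments, start_label=0)
    feats = []
    for s in range(segments.max() + 1):
        pix = image[segments == s]                       # (n_pix, C)
        feats.append(np.concatenate([pix.mean(axis=0), pix.std(axis=0)]))
    return np.array(feats), segments

# Hypothetical usage with per-object class labels `y`:
# X, segments = object_features(image)
# clf = RandomForestClassifier(n_estimators=200).fit(X, y)
# class_map = clf.predict(X)[segments]   # map object predictions back to pixels
```

The deep learning branch replaces these hand-built features with a convolutional network trained end to end on the imagery, which is the contrast the study investigates.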


Sensors ◽  
2019 ◽  
Vol 20 (1) ◽  
pp. 230 ◽  
Author(s):  
Ahmed Rady ◽  
Joel Fischer ◽  
Stuart Reeves ◽  
Brian Logan ◽  
Nicholas James Watson

Food allergens present a significant health risk to the human population, so their presence must be monitored and controlled within food production environments. This is especially important for powdered food, which can contain nearly all known food allergens. Manufacturing is experiencing the fourth industrial revolution (Industry 4.0): the use of digital technologies, such as sensors, the Internet of Things (IoT), artificial intelligence, and cloud computing, to improve the productivity, efficiency, and safety of manufacturing processes. This work studied the potential of small low-cost sensors and machine learning to identify different powdered foods which naturally contain allergens. The research utilised a near-infrared (NIR) sensor, and measurements were performed on over 50 different powdered food materials. The work focussed on several measurement and data processing parameters which must be determined when using these sensors: sensor light intensity, the height between the sensor and the food sample, and the most suitable spectral pre-processing method. Of all the methods studied, the K-nearest neighbour and linear discriminant analysis machine learning methods had the highest classification prediction accuracy for identifying samples containing allergens. The height between the sensor and the sample had a greater effect than the sensor light intensity, and the classification models performed much better when the sensor was positioned closer to the sample at the highest light intensity. The spectral pre-processing methods with the largest positive impact on classification prediction accuracy were standard normal variate (SNV) and multiplicative scattering correction (MSC). With the optimal combination of sensor height, light intensity, and spectral pre-processing, a classification prediction accuracy of 100% could be achieved, making the technique suitable for use within production environments.
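The two pre-processing methods the study singles out have standard definitions, sketched below from those definitions rather than from the paper's code.

```python
import numpy as np

def snv(spectra):
    """Standard normal variate: centre and scale each spectrum individually.
    spectra: (n_samples, n_wavelengths) array."""
    mu = spectra.mean(axis=1, keepdims=True)
    sd = spectra.std(axis=1, keepdims=True)
    return (spectra - mu) / sd

def msc(spectra, reference=None):
    """Multiplicative scattering correction: fit each spectrum against a
    reference (by default the mean spectrum) and remove slope and offset."""
    ref = spectra.mean(axis=0) if reference is None else reference
    corrected = np.empty_like(spectra)
    for i, s in enumerate(spectra):
        slope, intercept = np.polyfit(ref, s, deg=1)   # s ≈ slope * ref + intercept
        corrected[i] = (s - intercept) / slope
    return corrected
```

Both corrections remove sample-to-sample scatter effects, which is plausibly why they help when the sensor height, and hence the scattering path, varies between measurements.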


2020 ◽  
Vol 12 (21) ◽  
pp. 3555
Author(s):  
Manu Tom ◽  
Rajanie Prabha ◽  
Tianyu Wu ◽  
Emmanuel Baltsavias ◽  
Laura Leal-Taixé ◽  
...  

Continuous observation of climate indicators, such as trends in lake freezing, is important to understand the dynamics of the local and global climate system. Consequently, lake ice has been included among the Essential Climate Variables (ECVs) of the Global Climate Observing System (GCOS), and there is a need to set up operational monitoring capabilities. Multi-temporal satellite images and publicly available webcam streams are among the viable data sources capable of monitoring lake ice. In this work we investigate machine learning-based image analysis as a tool to determine the spatio-temporal extent of ice on Swiss Alpine lakes as well as the ice-on and ice-off dates, from both multispectral optical satellite images (VIIRS and MODIS) and RGB webcam images. We model lake ice monitoring as a pixel-wise semantic segmentation problem, i.e., each pixel on the lake surface is classified to obtain a spatially explicit map of ice cover. We show experimentally that the proposed system produces consistently good results when tested on data from multiple winters and lakes. Our satellite-based method obtains mean Intersection-over-Union (mIoU) scores >93% for both sensors. It also generalises well across lakes and winters, with mIoU scores >78% and >80% respectively. On average, our webcam approach achieves mIoU values of ≈87% and generalisation scores of ≈71% and ≈69% across different cameras and winters respectively. Additionally, we generate and make available a new benchmark dataset of webcam images (Photi-LakeIce), which includes data from two winters and three cameras.
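For reference, mean Intersection-over-Union, the score reported throughout this abstract, can be computed as below; this is the generic definition, not the authors' evaluation code.

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """pred, target: integer label arrays of the same shape."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:                 # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))
```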

