Hyperspectral and Multispectral Remote Sensing Image Fusion Based on Endmember Spatial Information

2020 ◽  
Vol 12 (6) ◽  
pp. 1009
Author(s):  
Xiaoxiao Feng ◽  
Luxiao He ◽  
Qimin Cheng ◽  
Xiaoyi Long ◽  
Yuxin Yuan

Hyperspectral (HS) images usually have high spectral resolution and low spatial resolution (LSR), whereas multispectral (MS) images have high spatial resolution (HSR) and low spectral resolution. HS–MS image fusion technology can combine the advantages of both, which is beneficial for accurate feature classification. Nevertheless, heterogeneous sensors always introduce temporal differences between the LSR-HS and HSR-MS images in real cases, which means that classical fusion methods cannot obtain effective results. To address this problem, we present a fusion method based on spectral unmixing and an image mask. Considering the difference between the two images, we first extract the endmembers and their corresponding positions from the invariant regions of the LSR-HS images. We can then derive the endmembers of the HSR-MS images based on the theory that HSR-MS and LSR-HS images are, respectively, the spectral and spatial degradations of HSR-HS images. The fused image is obtained from the two resulting matrices. A series of experimental results on simulated and real datasets substantiated the effectiveness of our method both quantitatively and visually.
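Unmixing-based fusion of this kind rests on the linear mixing model, in which each pixel spectrum is a non-negative combination of endmember spectra. A minimal sketch using non-negative least squares (the endmember matrix and abundances below are toy values, not the authors' data or code):

```python
import numpy as np
from scipy.optimize import nnls

def unmix_pixel(pixel, E):
    """Solve pixel = E @ a with a >= 0 via non-negative least squares."""
    a, _residual = nnls(E, pixel)
    return a

# toy example: 4 spectral bands, 2 endmembers (columns of E)
E = np.array([[0.9, 0.1],
              [0.8, 0.2],
              [0.2, 0.7],
              [0.1, 0.9]])
true_abundances = np.array([0.3, 0.7])
pixel = E @ true_abundances           # noiseless mixed pixel
abundances = unmix_pixel(pixel, E)    # recovers [0.3, 0.7]
```

In a full fusion pipeline, the abundances estimated from the HSR-MS image would be combined with the endmember spectra extracted from the LSR-HS image to form the HSR-HS product.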

Author(s):  
Dr.Vani. K ◽  
Anto. A. Micheal

This paper is an attempt to combine a high-resolution panchromatic lunar image with a low-resolution multispectral lunar image to produce a composite image using a wavelet approach. Many sensors provide image data about the lunar surface. The spatial and spectral resolutions are unique to each sensor, which limits the extraction of information about the lunar surface. The high-resolution panchromatic lunar image has high spatial resolution but low spectral resolution; the low-resolution multispectral image has low spatial resolution but high spectral resolution. Extracting features such as craters, crater morphology, rilles and regolith surfaces from a multispectral image with low spatial resolution may not yield satisfactory results. A sensor with high spatial resolution can provide better information when fused with one of high spectral resolution. The fused images lead to enhanced crater mapping and mineral mapping of the lunar surface. Since wavelet-based fusion preserves the spectral content needed for mineral mapping, image fusion has been performed using a wavelet approach.
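Wavelet-based pansharpening typically keeps the approximation (low-frequency) subband of the multispectral band, which carries the spectral content, and injects the detail subbands of the panchromatic image. A self-contained sketch using a single-level Haar transform (a simplification; the paper's actual wavelet family and decomposition depth are not specified here):

```python
import numpy as np

def haar2d(img):
    """Single-level 2-D Haar transform: returns (LL, (LH, HL, HH))."""
    a = (img[0::2] + img[1::2]) / 2.0      # row averages
    d = (img[0::2] - img[1::2]) / 2.0      # row differences
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, (LH, HL, HH)

def ihaar2d(LL, details):
    """Inverse of haar2d."""
    LH, HL, HH = details
    a = np.empty((LL.shape[0], LL.shape[1] * 2))
    a[:, 0::2] = LL + LH
    a[:, 1::2] = LL - LH
    d = np.empty_like(a)
    d[:, 0::2] = HL + HH
    d[:, 1::2] = HL - HH
    out = np.empty((a.shape[0] * 2, a.shape[1]))
    out[0::2] = a + d
    out[1::2] = a - d
    return out

def wavelet_fuse(ms_band, pan):
    """Keep the MS approximation (spectral content), inject PAN detail."""
    ll_ms, _ = haar2d(ms_band)
    _, det_pan = haar2d(pan)
    return ihaar2d(ll_ms, det_pan)
```

Because only the detail subbands are replaced, the fused band's coarse radiometry, and hence its spectral behaviour across bands, stays close to the original multispectral data.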


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Hoover Rueda-Chacon ◽  
Fernando Rojas ◽  
Henry Arguello

Abstract: Spectral image fusion techniques combine the detailed spatial information of a multispectral (MS) image and the rich spectral information of a hyperspectral (HS) image into a high-spatial- and high-spectral-resolution image. Due to the data deluge entailed by such images, new imaging modalities have exploited their intrinsic correlations so that a computational algorithm can fuse them from a few multiplexed linear projections. The latter has been coined compressive spectral image fusion. State-of-the-art research has focused mainly on the algorithmic part, simulating instrumentation characteristics and assuming independently registered sensors to conduct compressed MS and HS imaging. In this manuscript, we report the construction of a unified computational imaging framework that includes a proof-of-concept optical testbed to simultaneously acquire MS and HS compressed projections, and an alternating direction method of multipliers (ADMM) algorithm to reconstruct high-spatial- and high-spectral-resolution images from the fused compressed measurements. The testbed employs a digital micro-mirror device (DMD) to encode and split the input light towards two compressive imaging arms, which collect MS and HS measurements, respectively. This strategy entails full light-throughput sensing, since no light is thrown away by the coding process. Further, different resolutions can be dynamically tested by binning the DMD and sensor pixels. Real spectral responses and optical characteristics of the employed equipment are obtained through a per-pixel point spread function calibration approach to enable accurate compressed image fusion. The proposed framework is demonstrated through real experiments within the visible spectral range using as few as 5% of the data.
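The full-throughput claim follows from the DMD acting as a binary splitter: micro-mirrors tilted one way direct light to one arm, and the complementary pattern directs the rest to the other arm. A toy sketch of this energy-preserving split (illustrative only, not the testbed's actual forward model):

```python
import numpy as np

rng = np.random.default_rng(0)
scene = rng.random((8, 8))                  # toy spatial scene intensity
code = rng.integers(0, 2, scene.shape)      # binary DMD mirror pattern

arm_ms = scene * code                       # light reflected toward the MS arm
arm_hs = scene * (1 - code)                 # complementary light to the HS arm

# no photon is discarded: the two coded arms tile the full scene energy
assert np.allclose(arm_ms + arm_hs, scene)
```

Each arm then applies its own spatial/spectral integration before detection, and the ADMM solver inverts the joint measurement model.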


2019 ◽  
Vol 11 (19) ◽  
pp. 2203 ◽  
Author(s):  
He ◽  
Li ◽  
Yuan ◽  
Li ◽  
Shen

The quality of remotely sensed images is usually determined by their spatial resolution, spectral resolution, and coverage. However, due to limitations in the sensor hardware, the spectral resolution, spatial resolution, and swath width of the coverage are mutually constrained. Remote sensing image fusion aims at overcoming these constraints in order to combine the useful information in the different images. However, the traditional spatial–spectral fusion approach uses data with the same swath width covering the same area, and only considers the mutual constraint between spectral resolution and spatial resolution. To simultaneously solve the image fusion problems of swath width, spatial resolution, and spectral resolution, this paper introduces a method with multi-scale feature extraction and residual learning with recurrent expanding. To examine the sensitivity of the convolution operation to different image variables under different swath widths, we designed sensitivity experiments on the coverage ratio and the offset position. We also performed simulated and real experiments with Sentinel-2 data, simulating the different swath widths, to verify the effectiveness of the proposed framework.
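Residual learning in fusion networks of this kind regresses only the detail that is missing from an upsampled low-resolution input, rather than the full image. A toy sketch of the principle (nearest-neighbour upsampling stands in for the learned upscaler; this is not the paper's architecture):

```python
import numpy as np

def box_upsample(lr, factor=2):
    """Nearest-neighbour upsampling; a stand-in for the learned upscaler."""
    return np.repeat(np.repeat(lr, factor, axis=0), factor, axis=1)

hr = np.arange(16.0).reshape(4, 4)        # target high-resolution band
lr = hr[::2, ::2]                         # its low-resolution observation

# the network is trained to regress the residual hr - upsample(lr);
# with a perfect residual prediction, fusion recovers hr exactly
residual_target = hr - box_upsample(lr)
fused = box_upsample(lr) + residual_target
```

Learning the residual rather than the image itself keeps the regression target small and zero-centred, which generally eases training.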


2021 ◽  
Vol 13 (9) ◽  
pp. 1693
Author(s):  
Anushree Badola ◽  
Santosh K. Panda ◽  
Dar A. Roberts ◽  
Christine F. Waigl ◽  
Uma S. Bhatt ◽  
...  

Alaska has witnessed a significant increase in wildfire events in recent decades, which has been linked to drier and warmer summers. Forest fuel maps play a vital role in wildfire management and risk assessment. Freely available multispectral datasets are widely used for land use and land cover mapping, but they have limited utility for fuel mapping due to their coarse spectral resolution. Hyperspectral datasets have a high spectral resolution, ideal for detailed fuel mapping, but they are limited and expensive to acquire. This study simulates hyperspectral data from Sentinel-2 multispectral data using the spectral response function of the Airborne Visible/Infrared Imaging Spectrometer-Next Generation (AVIRIS-NG) sensor, and normalized ground spectra of gravel, birch, and spruce. We used the Uniform Pattern Decomposition Method (UPDM) for spectral unmixing, a sensor-independent method in which each pixel is expressed as the linear sum of standard reference spectra. The simulated hyperspectral data have the spectral characteristics of AVIRIS-NG and the reflectance properties of Sentinel-2 data. We validated the simulated spectra by visually and statistically comparing them with real AVIRIS-NG data, and observed a high correlation between the spectra of tree classes collected from AVIRIS-NG and the simulated hyperspectral data. Upon performing species-level classification, we achieved a classification accuracy of 89% for the simulated hyperspectral data, better than the accuracy of the Sentinel-2 data (77.8%). We generated a fuel map from the simulated hyperspectral image using the Random Forest classifier. Our study demonstrates that low-cost and high-quality hyperspectral data can be generated from Sentinel-2 data using UPDM for improved land cover and vegetation mapping in the boreal forest.
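The simulation step can be pictured as projecting reference spectra through the target sensor's spectral response functions (SRFs), decomposing each multispectral pixel on the projected library, and reconstructing at hyperspectral resolution. The sketch below uses toy Gaussian spectra and SRFs, not the AVIRIS-NG responses or the UPDM reference set:

```python
import numpy as np

# toy reference library: 3 standard spectra sampled at 50 "hyperspectral" bands
wl = np.linspace(400, 1000, 50)
lib_hs = np.stack([np.exp(-((wl - 500) / 80) ** 2),
                   np.exp(-((wl - 650) / 60) ** 2),
                   np.exp(-((wl - 850) / 90) ** 2)])      # shape (3, 50)

# toy multispectral sensor: 5 broad Gaussian SRFs, each normalized to sum 1
centers = np.linspace(450, 950, 5)
srf = np.exp(-((wl[None] - centers[:, None]) / 60) ** 2)
srf /= srf.sum(axis=1, keepdims=True)                     # shape (5, 50)

lib_ms = lib_hs @ srf.T                 # the library as seen by the MS sensor

# a pixel observed by the MS sensor, mixed from the standard spectra
coef_true = np.array([0.2, 0.5, 0.3])
pixel_ms = coef_true @ lib_ms

# decompose the MS pixel on the MS library, then reconstruct at HS resolution
coef, *_ = np.linalg.lstsq(lib_ms.T, pixel_ms, rcond=None)
pixel_hs = coef @ lib_hs                # simulated hyperspectral pixel (50 bands)
```

The key point is that the decomposition coefficients are sensor-independent, so they can be applied to the high-resolution library to synthesize the hyperspectral spectrum.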


2019 ◽  
Vol 11 (17) ◽  
pp. 2007 ◽  
Author(s):  
Changhui Jiang ◽  
Yuwei Chen ◽  
Haohao Wu ◽  
Wei Li ◽  
Hui Zhou ◽  
...  

Non-contact and active extraction of vegetation or plant parameters using hyperspectral information is a prospective research direction in the remote sensing community. Hyperspectral LiDAR (HSL) is an instrument capable of actively acquiring spectral and spatial information, which could mitigate the influence of environmental illumination on spectral information collection. However, HSL usually has limited spectral resolution and coverage, which are vital for vegetation parameter extraction. In this paper, to broaden the HSL spectral range and increase the spectral resolution, an Acousto-optical Tunable Filter based Hyperspectral LiDAR (AOTF-HSL) with 10 nm spectral resolution, consecutively covering 500–1000 nm, was designed. The AOTF-HSL was employed and evaluated for vegetation parameter extraction. “Red Edge” parameters of four different plants with green and yellow leaves were extracted in lab experiments to evaluate the HSL's capacity for vegetation parameter extraction. The experiments were composed of two parts. First, the first-order derivative of the spectral reflectance was employed to extract the “Red Edge” position (REP), “Red Edge” slope (RES) and “Red Edge” area (REA) of the green and yellow leaves. The results were compared with reference values from a standard SVC© HR-1024 spectrometer for validation. The differences between the HSL and SVC results for green leaves were minor, which supports the notion that the HSL is practical for extracting these parameters as an active method. Second, two further REP extraction methods, Linear Four-point Interpolation technology (LFPIT) and Linear Extrapolation technology (LET), were utilized to further evaluate the use of the AOTF-HSL spectral profile for determining the REP value.
The differences between the green-leaf REP results extracted by the three methods were all below 10%, and some of them were below 1%, which further demonstrates that spectral data collected by an HSL with this spectral range and resolution is applicable for “Red Edge” parameter extraction.
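The first-derivative extraction described above can be sketched directly: differentiate the reflectance spectrum and locate the wavelength of maximum slope within the red edge window. The reflectance curve below is a synthetic sigmoid, not measured HSL data, and the 680–760 nm window is an assumed convention:

```python
import numpy as np

# synthetic leaf reflectance: sigmoid rise across the red edge (~700 nm)
wl = np.arange(500, 1001, 10)                    # 10 nm steps, 500-1000 nm
refl = 0.05 + 0.45 / (1 + np.exp(-(wl - 700) / 15))

d = np.gradient(refl, wl)                        # first-order derivative spectrum
window = (wl >= 680) & (wl <= 760)               # red edge search window
rep = wl[window][np.argmax(d[window])]           # REP: wavelength of maximum slope
res = d[window].max()                            # RES: the maximum slope itself
rea = refl[wl == 760][0] - refl[wl == 680][0]    # REA: integral of d over the window
```

For this symmetric synthetic curve the REP lands at 700 nm; on real spectra the derivative is noisier, which is why interpolation-based methods such as LFPIT and LET are compared against it.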


2020 ◽  
Vol 12 (23) ◽  
pp. 3979
Author(s):  
Shuwei Hou ◽  
Wenfang Sun ◽  
Baolong Guo ◽  
Cheng Li ◽  
Xiaobo Li ◽  
...  

Many spatiotemporal image fusion methods in remote sensing have been developed to blend images of high spatial resolution with images of high temporal resolution, to address the trade-off between the spatial and temporal resolution of a single sensor. Yet none of the existing spatiotemporal fusion methods considers how the varying temporal changes between different pixels affect the performance of the fusion results; to develop an improved fusion method, these temporal changes need to be integrated into one framework. Adaptive-SFSDAF extends SFSDAF, an existing method that incorporates sub-pixel class fraction change information into Flexible Spatiotemporal DAta Fusion (FSDAF), by performing spectral unmixing adaptively, which greatly improves the efficiency of the algorithm. Accordingly, the main contributions of the proposed adaptive-SFSDAF method are twofold. The first is to detect outliers of temporal change in the image during the period between the origin and prediction dates, as these pixels are the most difficult to estimate and strongly affect the performance of spatiotemporal fusion methods. The other primary contribution is to establish an adaptive unmixing strategy according to a guided mask map, thus effectively eliminating a great number of insignificant unmixed pixels. The proposed method is compared with the state-of-the-art Flexible Spatiotemporal DAta Fusion (FSDAF), SFSDAF, FIT-FC, and Unmixing-Based Data Fusion (UBDF) methods, and the fusion accuracy is evaluated both quantitatively and visually. The experimental results show that adaptive-SFSDAF achieves an outstanding balance between computational efficiency and the accuracy of the fusion results.
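The guided mask idea can be illustrated with a simple outlier test on the coarse-resolution temporal difference: only pixels whose change exceeds a threshold are passed to the expensive unmixing step. The mean-plus-k-sigma rule here is an assumption for illustration; the paper's exact criterion may differ:

```python
import numpy as np

def change_mask(coarse_t0, coarse_t1, k=2.0):
    """Flag pixels whose temporal change is an outlier: |delta| > mean + k*std.
    Only flagged pixels would be sent to the costly unmixing step."""
    delta = np.abs(coarse_t1 - coarse_t0)
    return delta > delta.mean() + k * delta.std()

t0 = np.zeros(100)
t1 = np.zeros(100)
t1[42] = 10.0                     # one pixel changed strongly between dates
mask = change_mask(t0, t1)        # True only at the strongly changed pixel
```

Skipping unmixing for the unflagged majority is what yields the reported efficiency gain while keeping accuracy on the hard, strongly changed pixels.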


1987 ◽  
Vol 127 ◽  
pp. 417-418
Author(s):  
J. Bland ◽  
K. Taylor ◽  
P. D. Atherton

The TAURUS Imaging Fabry-Perot System (Taylor & Atherton 1980) has been used with the IPCS at the AAT to observe the ionized gas within NGC 5128 (Cen A) at [NII]λ6548 and Hα. Seven independent (x, y, λ) data cubes were obtained along the dust lane at high spectral resolution (30 km/s FWHM) and at a spatial resolution limited by the seeing (~1″). From these data, maps of the kinematics and intensities of the ionized gas were derived over a 420″ by 300″ region. The maps are the most complete to date for this object, comprising 17,500 and 5300 fitted spectra in Hα and [NII]λ6548, respectively. The dust lane system is found to be well understood in terms of a differentially rotating disc of gas and dust which is warped both along and perpendicular to the line of sight.


2018 ◽  
Vol 215 ◽  
pp. 01002
Author(s):  
Yuhendra ◽  
Minarni

Image fusion is a useful tool for integrating low spatial resolution multispectral (MS) images with a high spatial resolution panchromatic (PAN) image, thus producing a high-resolution multispectral image for better understanding of the observed earth surface. The main aim of the research was to evaluate the effectiveness of different image fusion methods when filtering methods are added for speckle suppression in synthetic aperture radar (SAR) images. The quality of the filtered fused images was assessed by statistical parameters, namely the mean, standard deviation, bias, universal image quality index (UIQI) and root mean squared error (RMSE). In order to test the robustness of the image quality, speckle noise was intentionally added to the fused image and suppressed with a Gamma MAP filter. In the comparison tests, the Gram-Schmidt (GS) method showed better results for good colour reproduction than high-pass filtering (HPF). On the other hand, GS and wavelet intensity-hue-saturation (W-IHS) preserved the colour of the original image well for Landsat TM data.
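The UIQI and RMSE metrics used in the quality assessment have standard closed forms; a compact implementation (following the usual global Wang-Bovik definition of UIQI, which may differ from the paper's windowed variant):

```python
import numpy as np

def rmse(x, y):
    """Root mean squared error between two images."""
    return np.sqrt(np.mean((x - y) ** 2))

def uiqi(x, y):
    """Universal Image Quality Index (global form): combines correlation,
    luminance and contrast terms; equals 1.0 for identical images."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 4 * cov * mx * my / ((vx + vy) * (mx ** 2 + my ** 2))
```

RMSE measures radiometric distortion in absolute units, while UIQI is bounded by 1 and penalizes loss of correlation, luminance shift, and contrast change jointly.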


2014 ◽  
Vol 9 (S307) ◽  
pp. 297-300 ◽  
Author(s):  
Th. Rivinius ◽  
W.J. de Wit ◽  
Z. Demers ◽  
A. Quirrenbach ◽  

Abstract: OHANA is an interferometric snapshot survey of the gaseous circumstellar environments of hot stars, carried out by the VLTI group at the Paranal observatory. It aims to characterize the mass-loss dynamics (winds/disks) at unexplored spatial scales for many stars. The survey employs the unique combination of AMBER's high spectral resolution with the unmatched spatial resolution provided by the VLTI. Because the central OBA-type star is spatially unresolved, with roughly neutral colour terms, these gaseous environments are among the easiest objects to observe with AMBER, yet the extent and kinematics of the line emission regions are of high astrophysical interest.


2012 ◽  
Vol 263-266 ◽  
pp. 416-420 ◽  
Author(s):  
Xiao Qing Luo ◽  
Xiao Jun Wu

Enhancing spectral fusion quality is one of the most significant targets in the field of remote sensing image fusion. In this paper, a statistical-model-based fusion method is proposed; it improves on existing methods for fusing remote sensing images within the framework of Principal Component Analysis (PCA) and wavelet-decomposition-based image fusion. PCA is applied to the source images. In order to retain the entropy information of the data, we select the principal component axes based on entropy contribution (ECA). The first entropy component and the panchromatic image (PAN) undergo a multiresolution decomposition using the wavelet transform. The low-frequency subband is fused by a weighted aggregation approach and the high-frequency subband by the statistical model. The high-resolution multispectral image is then obtained by an inverse wavelet and ECA transform. The experimental results demonstrate that the proposed method retains both the spectral and the spatial information when fusing PAN and multispectral (MS) images.
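The PCA stage and an entropy-based axis score can be sketched as an eigendecomposition of the band covariance followed by a histogram-entropy measure per component. The ranking rule here is an assumption for illustration; the paper's exact ECA criterion is not reproduced:

```python
import numpy as np

def pca_components(bands):
    """bands: (n_bands, H, W) image stack -> principal-component images,
    ordered by decreasing variance."""
    n, H, W = bands.shape
    X = bands.reshape(n, -1).astype(float)
    X -= X.mean(axis=1, keepdims=True)        # center each band
    cov = X @ X.T / X.shape[1]                # band covariance matrix
    w, V = np.linalg.eigh(cov)                # eigenvalues in ascending order
    pcs = V[:, ::-1].T @ X                    # project, highest variance first
    return pcs.reshape(n, H, W)

def shannon_entropy(img, bins=64):
    """Histogram entropy in bits; a candidate score for ranking axes
    by information (entropy) contribution."""
    counts, _ = np.histogram(img, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return -(p * np.log2(p)).sum()
```

Under an ECA-style selection, the component maximizing the entropy score, rather than the variance, would be chosen for the subsequent wavelet fusion with the PAN image.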

