Research on fusion of GF-6 imagery and quality evaluation

2020 ◽  
Vol 165 ◽  
pp. 03016
Author(s):  
Guo Liu ◽  
Yizhe Wang ◽  
Li Guo ◽  
Cuifeng Ma

Gaofen-6 (GF-6) offers wide coverage, multiple resolutions, and multiple bands. Image fusion is a key step in high-resolution remote sensing applications. Beijing Daxing International Airport was selected as the experimental area, and four image fusion methods, HPF, NND, Gram-Schmidt (GS), and Pansharp, were applied to the panchromatic and multispectral imagery. The results demonstrate that Pansharp is the best algorithm for preserving both image information and spectral fidelity in GF-6 data: it balances color preservation with enhancement of spatial detail and can meet most fusion needs. HPF's color retention is inferior to Pansharp's. The NND result has relatively high contrast, which may make local areas too bright and cause loss of texture. The GS algorithm has lower information entropy and average gradient; compared with the other three algorithms, it performs worse at expressing spatial detail and texture. These conclusions provide a key reference for scientific research and engineering applications using GF-6 satellite imagery.

2021 ◽  
Vol 290 ◽  
pp. 02009
Author(s):  
YiZhe Wang ◽  
Guo Liu ◽  
Bai Xue ◽  
Li Guo ◽  
XueLi Zhang

Gaofen-6 (GF-6) offers wide coverage, multiple resolutions, and multiple bands, and can provide richer information for remote sensing interpretation. Image fusion is a key step in high-resolution remote sensing applications. The Guangdong-Hong Kong-Macao Greater Bay Area was selected as the experimental area, and four image fusion methods, HPF (high-pass filter), NND (nearest-neighbor diffusion), GS (Gram-Schmidt), and Pansharp, were applied to the panchromatic and multispectral imagery. To evaluate the results, the four fusion products were first assessed visually; then the mean, standard deviation, entropy, average gradient, correlation coefficient, and spectral distortion were computed for quantitative evaluation. The results demonstrate that the Pansharp and GS algorithms achieve the best overall evaluation of the GF-6 fusion effect, balancing color fidelity with enhancement of spatial detail, and can meet most fusion requirements. For land areas, the Pansharp method can be selected; the GS method better reflects water-body information and is, among the four algorithms, the most suitable for enhancing water-body information in GF-6 imagery.
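The quantitative indices listed above have standard definitions; a minimal NumPy sketch for single-band 2-D arrays (function names and the histogram binning are our own choices, not the papers') might look like:

```python
import numpy as np

def entropy(img, bins=256):
    """Shannon entropy of the grey-level histogram, in bits."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 255))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def average_gradient(img):
    """Mean magnitude of horizontal/vertical differences; larger = sharper."""
    img = img.astype(np.float64)
    gx = np.diff(img, axis=1)[:-1, :]   # trim so gx and gy align
    gy = np.diff(img, axis=0)[:, :-1]
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2)))

def correlation_coefficient(a, b):
    """Pearson correlation between two bands (spectral consistency)."""
    a = a.ravel().astype(np.float64)
    b = b.ravel().astype(np.float64)
    return float(np.corrcoef(a, b)[0, 1])

def spectral_distortion(fused, reference):
    """Mean absolute radiometric difference per pixel."""
    return float(np.mean(np.abs(fused.astype(np.float64) -
                                reference.astype(np.float64))))
```

A fused product with higher entropy and average gradient but a correlation coefficient near 1 and low spectral distortion is, by these indices, both sharper and spectrally faithful.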


Sensors ◽  
2020 ◽  
Vol 20 (10) ◽  
pp. 2764 ◽  
Author(s):  
Xiaojun Li ◽  
Haowen Yan ◽  
Weiying Xie ◽  
Lu Kang ◽  
Yi Tian

Pulse-coupled neural networks (PCNNs) and their modified models are suitable for multi-focus and medical image fusion tasks. Unfortunately, PCNNs are difficult to apply directly to multispectral image fusion, especially when spectral fidelity is considered. A key problem is that most PCNN-based fusion methods focus on a selection mechanism, either in the spatial domain or in the transform domain, rather than a details-injection mechanism, which is of utmost importance in multispectral image fusion. Thus, a novel pansharpening PCNN model for multispectral image fusion is proposed. The new model is designed to achieve spectral fidelity in terms of human visual perception. Experimental results on different kinds of datasets show the suitability of the proposed model for pansharpening.
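For readers unfamiliar with the underlying model, here is a minimal sketch of the classic PCNN iteration that such methods build on: one neuron per pixel, linked to its 3x3 neighbourhood. The parameter values are purely illustrative, and this is not the paper's customized pansharpening model.

```python
import numpy as np

def neighbour_sum(Y):
    """Sum of the 8 neighbours of each pixel (circular padding via roll)."""
    s = np.zeros_like(Y, dtype=np.float64)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            s += np.roll(np.roll(Y, dy, axis=0), dx, axis=1)
    return s

def pcnn(S, steps=10, aF=0.1, aL=1.0, aT=0.5,
         VF=0.5, VL=0.2, VT=20.0, beta=0.1):
    """Run a PCNN on stimulus S (normalised image); return firing counts."""
    F = np.zeros_like(S, dtype=np.float64)   # feeding input
    L = np.zeros_like(S, dtype=np.float64)   # linking input
    Y = np.zeros_like(S, dtype=np.float64)   # pulse output
    T = np.ones_like(S, dtype=np.float64)    # dynamic threshold
    fire_count = np.zeros_like(S, dtype=np.float64)
    for _ in range(steps):
        link = neighbour_sum(Y)
        F = np.exp(-aF) * F + VF * link + S
        L = np.exp(-aL) * L + VL * link
        U = F * (1.0 + beta * L)             # internal activity
        Y = (U > T).astype(np.float64)       # fire where activity beats threshold
        T = np.exp(-aT) * T + VT * Y         # threshold rises after firing, then decays
        fire_count += Y
    return fire_count
```

The firing-count map is what fusion rules typically compare across source images; the paper's contribution is replacing that selection mechanism with a details-injection mechanism.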


2019 ◽  
Vol 9 (17) ◽  
pp. 3612
Author(s):  
Liao ◽  
Chen ◽  
Mo

As the focal length of an optical lens in a conventional camera is limited, it is usually arduous to obtain an image in which each object is focused. This problem can be solved by multi-focus image fusion. In this paper, we propose an entirely new multi-focus image fusion method based on decision map and sparse representation (DMSR). First, we obtained a decision map by analyzing low-scale images with sparse representation, measuring the effective clarity level, and using spatial frequency methods to process uncertain areas. Subsequently, the transitional area around the focus boundary was determined by the decision map, and we implemented the transitional area fusion based on sparse representation. The experimental results show that the proposed method is superior to the other five fusion methods, both in terms of visual effect and quantitative evaluation.
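The spatial-frequency focus measure mentioned above has a standard form: the RMS of row-wise and column-wise pixel differences, with sharper (better-focused) blocks scoring higher. A small sketch, with the normalization as our own choice:

```python
import numpy as np

def spatial_frequency(block):
    """SF = sqrt(RF^2 + CF^2): RMS of row- and column-wise differences."""
    b = block.astype(np.float64)
    rf2 = np.mean(np.diff(b, axis=1) ** 2)  # row frequency (horizontal detail)
    cf2 = np.mean(np.diff(b, axis=0) ** 2)  # column frequency (vertical detail)
    return float(np.sqrt(rf2 + cf2))
```

In an uncertain region, the source block with the larger SF would be taken as the in-focus one.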


2011 ◽  
Vol 467-469 ◽  
pp. 1092-1096 ◽  
Author(s):  
Guang Ming Zhang ◽  
Zhi Ming Cui

Graph cuts are an increasingly important tool for solving energy-minimization problems in computer vision and other fields, while the beamlet transform, a time-frequency and multiresolution analysis tool, is often used in image processing, especially image fusion. By analyzing the characteristics of DSA medical images, this paper proposes a novel DSA image fusion method that combines the beamlet transform with graph-cut theory. First, the image is decomposed by the beamlet transform to obtain the coefficients of the different subbands. Then an energy function based on graph-cut theory is constructed to adjust the weights of these coefficients and obtain an optimal fusion objective. Finally, the inverse beamlet transform reconstructs a synthesized DSA image that contains more complete and accurate detail of the blood vessels. In comparison, the method performs better than other traditional fusion methods.


2013 ◽  
Vol 448-453 ◽  
pp. 3621-3624 ◽  
Author(s):  
Ming Jing Li ◽  
Yu Bing Dong ◽  
Xiao Li Wang

Non-multi-scale image fusion methods take the original images as the object of study and fuse them directly with various fusion rules, without decomposing or transforming the source images; they can therefore also be called simple multi-sensor image fusion methods. Their advantages are low computational complexity and a simple principle, and they are currently the most widely used image fusion methods. The basic principle is to select, pixel by pixel across the source images, the larger grey value, the smaller grey value, or a weighted average, and combine these into a new image. Simple pixel-level fusion thus mainly includes averaging or weighted averaging of grey values, selecting the larger grey value, and selecting the smaller grey value. This paper introduces the basic principle of the fusion process in detail and summarizes current pixel-level fusion algorithms. Simulation results illustrate the proposed fusion scheme. In practice, the fusion algorithm is selected according to the imaging characteristics to be retained.
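The three elementary rules described above can be sketched directly in NumPy for co-registered, single-band source images of equal shape:

```python
import numpy as np

def fuse_weighted(a, b, w=0.5):
    """Weighted average of corresponding pixels (w = weight of image a)."""
    return w * a.astype(np.float64) + (1.0 - w) * b.astype(np.float64)

def fuse_max(a, b):
    """Select the larger grey value at each pixel."""
    return np.maximum(a, b)

def fuse_min(a, b):
    """Select the smaller grey value at each pixel."""
    return np.minimum(a, b)
```

As the abstract notes, the choice between them depends on which imaging characteristic should be retained: selecting the maximum preserves bright targets, the minimum preserves dark ones, and averaging suppresses noise at the cost of contrast.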


Author(s):  
Zhiguang Yang ◽  
Youping Chen ◽  
Zhuliang Le ◽  
Yong Ma

Abstract In this paper, a novel multi-exposure image fusion method based on generative adversarial networks, termed GANFuse, is presented. Conventional multi-exposure image fusion methods improve their fusion performance by designing sophisticated activity-level measurements and fusion rules, but have limited success in complex fusion tasks. Inspired by the recent FusionGAN, which first utilized generative adversarial networks (GANs) to fuse infrared and visible images with promising performance, we improve its architecture and customize it for the task of extreme-exposure image fusion. Specifically, to keep the content of both extreme-exposure images in the fused image, we increase the number of discriminators, each differentiating between the fused image and one of the extreme-exposure images, while a generator network is trained to produce the fused image. Through this adversarial relationship between the generator and the discriminators, the fused image retains more information from the extreme-exposure image pair, yielding better fusion performance. In addition, the proposed method is an end-to-end, unsupervised learning model, which avoids hand-crafted features and does not require ground-truth images for training. We conduct qualitative and quantitative experiments on a public dataset; the results show that the proposed model outperforms existing multi-exposure image fusion methods in both visual effect and evaluation metrics.


2014 ◽  
Vol 2014 ◽  
pp. 1-10 ◽  
Author(s):  
Yong Yang ◽  
Wenjuan Zheng ◽  
Shuying Huang

The aim of multifocus image fusion is to fuse images taken of the same scene with different focuses to obtain a resultant image with all objects in focus. In this paper, a novel multifocus image fusion method based on the human visual system (HVS) and a back-propagation (BP) neural network is presented. First, three features that reflect the clarity of a pixel are extracted and used to train a BP neural network to determine which pixel is clearer. The clearer pixels are then used to construct an initial fused image. Thirdly, the focused regions are detected by measuring the similarity between the source images and the initial fused image, followed by morphological opening and closing operations. Finally, the final fused image is obtained by applying a fusion rule to those focused regions. Experimental results show that the proposed method provides better performance and outperforms several popular existing fusion methods in terms of both objective and subjective evaluation.


2020 ◽  
Vol 11 (1) ◽  
pp. 288
Author(s):  
Xiaochen Lu ◽  
Dezheng Yang ◽  
Fengde Jia ◽  
Yifeng Zhao

In this paper, a detail-injection method based on coupled convolutional neural networks (CNNs) is proposed for hyperspectral (HS) and multispectral (MS) image fusion, with the goal of enhancing the spatial resolution of HS images. Owing to the excellent spectral fidelity of the detail-injection model and the spatial-spectral feature-exploration ability of CNNs, the proposed method uses a pair of CNNs as feature extractors and learns details from the HS and MS images individually. By appending an additional convolutional layer, the extracted features of the two images are concatenated to predict the missing details of the anticipated HS image. Experiments on simulated and real HS and MS data show that, compared with some state-of-the-art HS and MS image fusion methods, the proposed method achieves better fusion results, provides excellent spectrum-preservation ability, and is easy to implement.
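The detail-injection principle that the coupled network learns can be illustrated in its classical, non-learned form: high-frequency detail extracted from the high-resolution image is added to an upsampled low-resolution band. The 3x3 box filter and unit gain below are illustrative choices, not the paper's learned model.

```python
import numpy as np

def box_blur3(img):
    """3x3 mean filter with edge replication (a simple low-pass)."""
    p = np.pad(img.astype(np.float64), 1, mode="edge")
    out = np.zeros(img.shape, dtype=np.float64)
    for dy in range(3):
        for dx in range(3):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / 9.0

def inject_details(hs_band_up, hr_image, gain=1.0):
    """fused = upsampled HS band + gain * highpass(HR image)."""
    details = hr_image.astype(np.float64) - box_blur3(hr_image)
    return hs_band_up.astype(np.float64) + gain * details
```

In the paper, the hand-picked low-pass filter and gain are replaced by features learned jointly from the HS and MS inputs, which is what gives the method its spectral fidelity.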


Author(s):  
Javier Medina ◽  
Nelson Vera ◽  
Erika Upegui

Image fusion provides users with detailed information about urban and rural environments, which is useful for applications such as urban planning and management when higher-spatial-resolution images are not available. There are different image fusion methods. This paper implements, evaluates, and compares six satellite image fusion methods, namely the wavelet 2D-M transform, Gram-Schmidt, high-frequency modulation, the high-pass filter (HPF) transform, simple mean value, and PCA. An Ikonos image (panchromatic, PAN, and multispectral, MULTI) covering the northwest of Bogotá (Colombia) is used to generate six fused images: MULTI(Wavelet 2D-M), MULTI(G-S), MULTI(MHF), MULTI(HPF), MULTI(SMV), and MULTI(PCA). To assess the efficiency of the six image fusion methods, the resulting images were evaluated in terms of both spatial and spectral quality using four metrics: the correlation index, the erreur relative globale adimensionnelle de synthèse (ERGAS), the relative average spectral error (RASE), and the Q index. The best results were obtained for the MULTI(SMV) image, which exhibited a spectral correlation higher than 0.85, a Q index of 0.84, and the best spectral-assessment scores according to ERGAS and RASE, 4.36% and 17.39% respectively.
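ERGAS and RASE, reported above, have standard closed forms (lower is better for both). A NumPy sketch for bands-first arrays of shape (bands, H, W), where `ratio` is the PAN-to-MS pixel-size ratio (e.g. 1/4 for Ikonos):

```python
import numpy as np

def band_rmse(fused, ref):
    """Per-band root-mean-square error between fused and reference images."""
    diff = fused.astype(np.float64) - ref.astype(np.float64)
    return np.sqrt(np.mean(diff ** 2, axis=(1, 2)))

def ergas(fused, ref, ratio):
    """ERGAS = 100 * ratio * sqrt(mean_k (RMSE_k / mean_k(ref))^2)."""
    rmse = band_rmse(fused, ref)
    mu = ref.astype(np.float64).mean(axis=(1, 2))
    return float(100.0 * ratio * np.sqrt(np.mean((rmse / mu) ** 2)))

def rase(fused, ref):
    """RASE, as a percentage of the global reference mean."""
    rmse = band_rmse(fused, ref)
    M = ref.astype(np.float64).mean()
    return float(100.0 / M * np.sqrt(np.mean(rmse ** 2)))
```

Both indices are zero for a perfect fusion and grow with per-band radiometric error, which is why the paper reads its 4.36% ERGAS and 17.39% RASE as the best spectral scores among the six methods.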

