Multi-Focus Image Fusion Method Based on Multi-Scale Decomposition of Information Complementary

Entropy ◽  
2021 ◽  
Vol 23 (10) ◽  
pp. 1362
Author(s):  
Hui Wan ◽  
Xianlun Tang ◽  
Zhiqin Zhu ◽  
Weisheng Li

Multi-focus image fusion is an important method for combining the focused parts of several source multi-focus images into a single fully focused image. The key to multi-focus image fusion currently lies in accurately detecting the focused regions, especially when the source images captured by cameras exhibit anisotropic blur and misregistration. This paper proposes a new multi-focus image fusion method based on the multi-scale decomposition of complementary information. Firstly, the method uses two structurally complementary groups of large-scale and small-scale decomposition schemes to perform a two-scale, double-layer singular value decomposition on each image, obtaining low-frequency and high-frequency components. Then, the low-frequency components are fused by a rule that integrates local image energy with edge energy. The high-frequency components are fused by a parameter-adaptive pulse-coupled neural network (PA-PCNN) model, and according to the feature information contained in each decomposition layer of the high-frequency components, different detail features are selected as the external stimulus input of the PA-PCNN. Finally, from the two structurally complementary decompositions of the source images and the fusion of the high- and low-frequency components, two initial decision maps with complementary information are obtained. The final fusion decision map is obtained by refining the initial decision maps, completing the image fusion. In addition, the proposed method is compared with 10 state-of-the-art approaches to verify its effectiveness. The experimental results show that the proposed method distinguishes focused from non-focused areas more accurately, for both pre-registered and unregistered images, and its subjective and objective evaluation indicators are slightly better than those of the existing methods.
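The decision-map step at the heart of this pipeline can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's method: a squared-gradient focus measure and an integral-image box filter stand in for the SVD-based decomposition and the energy/edge-energy rules, and the refinement stage is omitted.

```python
import numpy as np

def local_energy(img, radius=2):
    # Squared-gradient focus measure, box-filtered with an integral image.
    # (A coarse stand-in for the paper's SVD-based local/edge energy.)
    gy, gx = np.gradient(img.astype(float))
    e = gx ** 2 + gy ** 2
    pad = np.pad(e, radius, mode="edge")
    s = np.zeros((pad.shape[0] + 1, pad.shape[1] + 1))
    s[1:, 1:] = pad.cumsum(axis=0).cumsum(axis=1)
    k = 2 * radius + 1
    # Windowed sum via four integral-image lookups per pixel.
    return s[k:, k:] - s[:-k, k:] - s[k:, :-k] + s[:-k, :-k]

def fuse_by_decision_map(a, b, radius=2):
    # Each pixel is taken from whichever source image is locally sharper.
    mask = local_energy(a, radius) >= local_energy(b, radius)
    return np.where(mask, a, b), mask
```

In the paper, each of the two structurally complementary decompositions yields such a map, and the final decision map comes from refining and merging the pair.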

2021 ◽  
pp. 1-20
Author(s):  
Yun Wang ◽  
Xin Jin ◽  
Jie Yang ◽  
Qian Jiang ◽  
Yue Tang ◽  
...  

Multi-focus image fusion is a technique that integrates the focused areas in a pair or set of source images of the same scene into a fully focused image. Inspired by transfer learning, this paper proposes a novel color multi-focus image fusion method based on deep learning. First, the color multi-focus source images are fed into the VGG-19 network, and the parameters of its convolutional layers are transferred to a neural network containing multiple convolutional layers and skip-connection structures for feature extraction. Second, initial decision maps are generated from the feature maps reconstructed by a deconvolution module. Third, the initial decision maps are refined to obtain second decision maps, based on which the source images are fused into initial fused images. Finally, the final fused image is produced by comparing the Q^ABF metrics of the initial fused images. The experimental results show that the proposed method effectively improves the segmentation of focused and unfocused areas in the source images, and the generated fused images are superior in both subjective and objective metrics to most comparison methods.
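The final selection step, keeping whichever initial fused image scores higher on a quality metric, can be sketched as follows. The `gradient_preservation` score below is a hypothetical, much-simplified stand-in for the Q^ABF metric, which additionally weights per-pixel edge-strength and orientation similarity.

```python
import numpy as np

def gradient_preservation(fused, a, b):
    # Fraction of the strongest source gradient energy preserved in the
    # fused image (a rough proxy for Q^ABF, not the published formula).
    def grad_mag(x):
        gy, gx = np.gradient(x.astype(float))
        return np.hypot(gx, gy)
    gf, ga, gb = grad_mag(fused), grad_mag(a), grad_mag(b)
    ref = np.maximum(ga, gb)  # strongest edge offered by either source
    return np.minimum(gf, ref).sum() / (ref.sum() + 1e-12)

def pick_best(candidates, a, b):
    # Keep the candidate fused image with the highest score.
    scores = [gradient_preservation(c, a, b) for c in candidates]
    return candidates[int(np.argmax(scores))]
```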


2014 ◽  
Vol 530-531 ◽  
pp. 394-402
Author(s):  
Ze Tao Jiang ◽  
Li Wen Zhang ◽  
Le Zhou

Image fusion commonly suffers from blurred edges and sparse texture. To address this problem, this study proposes an image fusion method based on the combination of the lifting wavelet transform and median filtering, with different fusion rules for the two frequency bands. For the low-frequency coefficients, the coefficients are convolved and squared to enhance the edges of the fused image, and the detail information of the original images is then extracted by measuring regional characteristics. For the high-frequency coefficients, the components are first denoised with a median filter, and a fusion rule based on neighborhood spatial frequency and consistency verification is then applied to fuse the detail sub-images. Compared with the weighted-average and regional-energy methods, the experimental results show that the proposed method retains the most edge and texture information. The method alleviates blurred edges and sparse texture to a certain degree and has strong practical value in image fusion.
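A one-level lifting-scheme Haar decomposition and a small median filter for the detail band, the two building blocks this abstract combines, might be sketched as follows. This is a 1-D illustration only; the paper works on 2-D sub-bands and layers its own fusion rules on top.

```python
import numpy as np

def lifting_haar(x):
    # One level of the Haar wavelet via the lifting scheme:
    # split -> predict (detail) -> update (approximation).
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    d = odd - even          # predict step: detail coefficients
    a = even + d / 2        # update step: approximation coefficients
    return a, d

def inverse_lifting_haar(a, d):
    # Undo the lifting steps in reverse order, then interleave.
    even = a - d / 2
    odd = d + even
    x = np.empty(even.size + odd.size)
    x[0::2], x[1::2] = even, odd
    return x

def median3(d):
    # 3-tap median filter for denoising the detail band, as the
    # abstract applies to the high-frequency coefficients.
    padded = np.pad(d, 1, mode="edge")
    stack = np.stack([padded[:-2], padded[1:-1], padded[2:]])
    return np.median(stack, axis=0)
```

Lifting guarantees perfect reconstruction by construction, which is why the detail band can be filtered independently before the inverse transform.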


2013 ◽  
Vol 457-458 ◽  
pp. 1097-1101
Author(s):  
Jun Yong Ma ◽  
Sheng Wei Zhang ◽  
Cai Bing Yue

An image fusion method based on fuzzy regional characteristics is proposed in this paper. After multi-resolution decomposition of an image, k-means clustering is first applied to the low-frequency components of each layer, partitioning the low-frequency image into an important region, a sub-important region, and a background region. Then, all regions of the image are fuzzified, and fusion strategies are determined according to their fuzzy membership degrees. Finally, the fusion result is obtained by reconstruction from the multiresolution image representation. Experimental results and fusion quality assessments show the effectiveness of the proposed fusion method.
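The k-means partition of the low-frequency band into three regions can be illustrated with a tiny 1-D k-means on intensities. The deterministic initialisation below is an assumption for reproducibility; the paper does not specify one.

```python
import numpy as np

def kmeans_1d(values, k=3, iters=20):
    # Minimal k-means on intensities; the paper clusters the low-frequency
    # band into important / sub-important / background regions (k = 3).
    values = np.asarray(values, dtype=float)
    # Assumed initialisation: centers spread evenly over the intensity range.
    centers = np.linspace(values.min(), values.max(), k)
    for _ in range(iters):
        # Assign each value to its nearest center.
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        # Move each center to the mean of its assigned values.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels, centers
```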


Author(s):  
Liu Bin ◽  
Jiaxiong Peng

In this paper, an image fusion method is presented based on a new class of wavelet: a non-separable wavelet with compact support, linear phase, orthogonality, and dilation matrix [Formula: see text]. We first construct a non-separable wavelet filter bank and use these filters to decompose the images involved into wavelet pyramids. The following fusion algorithm is then applied: for the low-frequency part, the average value is selected as the new pixel value. For the three high-frequency parts of each level, the standard deviation over a 3×3 window in each high-frequency sub-image is computed as the activity measure. If the standard deviation of a window is larger than that of the corresponding 3×3 window in the other high-frequency sub-image, the center pixel value of the window with the larger weighted area energy is selected; otherwise, a weighted value of the two pixels is computed. A new fused image is then reconstructed. The performance of the method is evaluated using entropy, cross-entropy, fusion symmetry, root mean square error, and peak signal-to-noise ratio. The experimental results show that the performance of the proposed non-separable wavelet fusion method is very close to that of the Haar separable wavelet fusion method.
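The 3×3 standard-deviation activity measure and the winner-takes-all coefficient selection might look as follows in NumPy. This is simplified: the abstract's weighted-area-energy tie-break and weighted-averaging branch are omitted.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def fuse_highpass_by_std(ha, hb):
    # Activity measure: standard deviation over each 3x3 window.
    # The coefficient whose neighbourhood is more "active" wins.
    pa = np.pad(ha, 1, mode="edge")
    pb = np.pad(hb, 1, mode="edge")
    sa = sliding_window_view(pa, (3, 3)).std(axis=(-2, -1))
    sb = sliding_window_view(pb, (3, 3)).std(axis=(-2, -1))
    return np.where(sa >= sb, ha, hb)
```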


2010 ◽  
Vol 108-111 ◽  
pp. 730-735
Author(s):  
Shu Ying Huang ◽  
Yong Yang

Image fusion has become an important and powerful technique for image analysis and computer vision. This paper presents a novel multiresolution image fusion method based on the wavelet transform combined with an effective fusion scheme. The main contribution of this research is a selection scheme that, by considering the physical meaning of the wavelet coefficients, treats the coefficients in different bands in different ways: it selects the coefficients in the high-frequency bands by a wavelet-entropy-based strategy, and the coefficients in the low-frequency band by a variance-based strategy. The performance of the proposed fusion method is compared with several existing fusion techniques. Comparison results show that the proposed method can effectively fuse the images with less error.
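One plausible reading of the wavelet-entropy strategy is the Shannon entropy of the coefficient-magnitude histogram; the sketch below follows that assumption (the `bins=16` choice is arbitrary), paired with the variance-based rule for the low-frequency band.

```python
import numpy as np

def coefficient_entropy(coeffs, bins=16):
    # Shannon entropy of the magnitude distribution of the coefficients:
    # one plausible reading of "wavelet entropy", not the paper's formula.
    hist, _ = np.histogram(np.abs(coeffs), bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def fuse_lowpass_by_variance(la, lb):
    # Variance-based rule for the low-frequency band: keep the band
    # with greater spread (more retained contrast).
    return la if la.var() >= lb.var() else lb
```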


2017 ◽  
Vol 2017 ◽  
pp. 1-13 ◽  
Author(s):  
Chenhui Qiu ◽  
Yuanyuan Wang ◽  
Huan Zhang ◽  
Shunren Xia

Multimodal image fusion techniques can integrate the information from different medical images into a single informative image that is more suitable for joint diagnosis, preoperative planning, intraoperative guidance, and interventional treatment. Fusing CT images with different MR modalities is studied in this paper. Firstly, the CT and MR images are both transformed into the nonsubsampled shearlet transform (NSST) domain, yielding low-frequency and high-frequency components. The high-frequency components are then merged using the absolute-maximum rule, while the low-frequency components are merged by a sparse representation (SR)-based approach; a dynamic group sparsity recovery (DGSR) algorithm is proposed to improve the performance of the SR-based approach. Finally, the fused image is obtained by performing the inverse NSST on the merged components. The proposed fusion method is tested on a number of clinical CT and MR images and compared with several popular image fusion methods. The experimental results demonstrate that the proposed method provides better fusion results in terms of subjective quality and objective evaluation.
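The absolute-maximum rule for the high-frequency components is essentially a one-liner in NumPy; the SR/DGSR fusion of the low-frequency band is far more involved and is not reproduced here.

```python
import numpy as np

def abs_max_rule(ca, cb):
    # Per position, keep the high-frequency coefficient with the
    # larger magnitude (the absolute-maximum fusion rule).
    return np.where(np.abs(ca) >= np.abs(cb), ca, cb)
```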


Author(s):  
Cheng Zhao ◽  
Yongdong Huang

The rolling guidance filter (RGF) smooths texture while preserving edges, and the non-subsampled shearlet transform (NSST) offers translation invariance and directional selectivity; based on these properties, a new infrared and visible image fusion method is proposed. Firstly, the rolling guidance filter is used to decompose the infrared and visible images into base and detail layers. Then, the NSST is applied to the base layer to obtain high-frequency and low-frequency coefficients. The low-frequency coefficients are fused using a visual saliency map as the fusion rule, while the high-frequency sub-band coefficients are fused using gradient domain guided filtering (GDGF) and an improved sum of Laplacians. Finally, the detail layers are fused using a rule that combines phase congruency and gradient domain guided filtering. As a result, the proposed method not only extracts the infrared targets but also fully preserves the background information of the visible images. Experimental results indicate that our method achieves superior performance compared with other fusion methods in both subjective and objective assessments.


Author(s):  
Yahui Zhu ◽  
Li Gao

To overcome the shortcomings of traditional image fusion algorithms based on the multiscale transform, an infrared and visible image fusion method based on compound decomposition and intuitionistic fuzzy sets is proposed. Firstly, the non-subsampled contourlet transform is used to decompose the source image into low-frequency and high-frequency coefficients. The latent low-rank representation model is then used to decompose the low-frequency coefficients into basic sub-bands and salient sub-bands, with the visual saliency map taken as the weighting coefficient. The weighted sum of the low-frequency basic sub-bands and the maximum absolute value of the low-frequency salient sub-bands serve as the two fusion rules, and their results are superimposed to obtain the low-frequency fusion coefficients. Intuitionistic fuzzy entropy is used as the fusion rule to measure the texture and edge information of the high-frequency coefficients. Finally, the fused infrared-visible image is obtained with the inverse non-subsampled contourlet transform. Comparisons on objective and subjective evaluations of several sets of fused images show that our method effectively preserves the edge information and rich content of the source images, producing better visual quality and objective evaluations than other image fusion methods.


2019 ◽  
Vol 14 (7) ◽  
pp. 658-666
Author(s):  
Kai-jian Xia ◽  
Jian-qiang Wang ◽  
Jian Cai

Background: Lung cancer is one of the most common malignant tumors. Successful diagnosis of lung cancer depends on the accuracy of the images obtained from medical imaging modalities. Objective: The fusion of CT and PET combines the complementary and redundant information of both images and can increase the ease of perception. Since existing fusion methods are not perfect enough and the fusion effect remains to be improved, this paper proposes a novel method called adaptive PET/CT fusion for lung cancer in the Piella framework. Methods: The algorithm first adopts the dual-tree complex wavelet transform (DTCWT) to decompose the PET and CT images into different components. In accordance with the characteristics of the low-frequency and high-frequency components and the features of PET and CT images, five membership functions are used in combination to determine the fusion weights for the low-frequency components. To fuse the high-frequency components, the energy difference of the decomposition coefficients is selected as the match measure and the local energy as the activity measure; a decision factor is also determined for the high-frequency components. Results: The proposed method is compared with several pixel-level spatial-domain image fusion algorithms. The experimental results show that the proposed algorithm is feasible and effective. Conclusion: The proposed algorithm better retains and highlights the edge and texture information of lesions in the fused image.
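How a membership function can become a per-pixel fusion weight for the low-frequency components can be sketched as follows. The sigmoid and its parameters below are hypothetical; the paper combines five membership functions tuned to PET/CT characteristics that the abstract does not specify.

```python
import numpy as np

def sigmoid_membership(x, center, width):
    # One hypothetical membership function; the paper combines five,
    # with shapes and parameters not given in the abstract.
    return 1.0 / (1.0 + np.exp(-(x - center) / width))

def fuse_lowfreq(ct_low, pet_low, center=0.5, width=0.1):
    # Per-pixel fusion weight from a membership function of the PET
    # activity: high activity favours PET, low activity favours CT.
    w = sigmoid_membership(pet_low, center, width)
    return w * pet_low + (1.0 - w) * ct_low
```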

