Single image haze removal based on the improved atmospheric scattering model

2017 ◽  
Vol 260 ◽  
pp. 180-191 ◽  
Author(s):  
Mingye Ju ◽  
Zhenfei Gu ◽  
Dengyin Zhang


2018 ◽
Vol 32 (34-36) ◽ 
pp. 1840086 ◽  
Author(s):  
Ruxi Xiang ◽  
Feng Wu

In this paper, we propose a novel and effective single-image haze removal method. The method first computes the dark channel of the estimated radiance image by decomposing the dark channel of the hazy input image, and then estimates the transmission map of the input image. Finally, the scene radiance image is restored using the classical atmospheric scattering model. Experimental results show that the proposed method outperforms He et al.’s method in terms of haze removal.
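For reference, the classical building blocks this abstract refers to are well known from He et al.’s dark channel prior work: the dark channel is a patch-wise channel minimum, and the scene radiance is recovered by inverting the atmospheric scattering model I(x) = J(x)t(x) + A(1 − t(x)). The sketch below (plain NumPy/SciPy, not the authors’ code; the paper’s decomposition step is not reproduced) illustrates only these two standard pieces.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    """Patch-wise minimum over color channels and a local window (He et al.)."""
    return minimum_filter(img.min(axis=2), size=patch)

def restore_radiance(img, t, A, t0=0.1):
    """Invert the classical model I = J*t + A*(1 - t) to recover the radiance J."""
    t = np.clip(t, t0, 1.0)[..., None]   # lower-bound t to avoid amplifying noise
    return np.clip((img - A) / t + A, 0.0, 1.0)

# Hypothetical usage: img is an HxWx3 float image in [0, 1], A is a length-3
# ambient light vector, and t is an HxW transmission map from any estimator.
```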


2018 ◽  
Vol 51 (17) ◽  
pp. 211-216
Author(s):  
Huang Dewei ◽  
Wang Weixing ◽  
Lu Jianqiang ◽  
Chen Kexin

2017 ◽  
Vol 31 (19-21) ◽  
pp. 1740037 ◽  
Author(s):  
Xifang Zhu ◽  
Ruxi Xiang ◽  
Feng Wu ◽  
Xiaoyan Jiang

To improve image quality and compensate for the deficiencies of individual haze removal methods, we present a novel fusion method. By analyzing the darkness channel of each method, we construct an effective darkness channel model that takes the correlation information among the darkness channels into account. This model is used to estimate the transmission map of the input image, which is then refined by a modified guided filter to further improve image quality. Finally, the radiance image is restored by combining the monochrome atmospheric scattering model. Experimental results show that the proposed method not only effectively removes haze from the image but also outperforms other haze removal methods.
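The refinement step relies on a modified guided filter; since the modification is not spelled out here, the sketch below shows the standard grayscale guided filter of He et al. as a reference point, applied to a raw transmission map with the input image as the guide.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=30, eps=1e-3):
    """Standard grayscale guided filter (He et al.): edge-preserving smoothing of
    `src` (e.g. a raw transmission map) steered by `guide` (the input image)."""
    size = 2 * radius + 1
    mean_g = uniform_filter(guide, size)
    mean_s = uniform_filter(src, size)
    var_g = uniform_filter(guide * guide, size) - mean_g * mean_g
    cov_gs = uniform_filter(guide * src, size) - mean_g * mean_s
    a = cov_gs / (var_g + eps)                   # local linear coefficients
    b = mean_s - a * mean_g
    return uniform_filter(a, size) * guide + uniform_filter(b, size)
```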


2017 ◽  
Vol 2017 ◽  
pp. 1-17 ◽  
Author(s):  
Zhenfei Gu ◽  
Mingye Ju ◽  
Dengyin Zhang

Outdoor images captured in bad weather are prone to yield poor visibility, which is a fatal problem for most computer vision applications. The majority of existing dehazing methods rely on an atmospheric scattering model and therefore share a common limitation; that is, the model is only valid when the atmosphere is homogeneous. In this paper, we propose an improved atmospheric scattering model to overcome this inherent limitation. By adopting the proposed model, a corresponding dehazing method is also presented. In this method, we first create a haze density distribution map of a hazy image, which enables us to segment the hazy image into scenes according to the haze density similarity. Then, in order to improve the atmospheric light estimation accuracy, we define an effective weight assignment function to locate a candidate scene based on the scene segmentation results and therefore avoid most potential errors. Next, we propose a simple but powerful prior named the average saturation prior (ASP), which is a statistic of extensive high-definition outdoor images. Using this prior combined with the improved atmospheric scattering model, we can directly estimate the scene atmospheric scattering coefficient and restore the scene albedo. The experimental results verify that our model is physically valid, and the proposed method outperforms several state-of-the-art single image dehazing methods in terms of both robustness and effectiveness.
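The paper’s improved model and the exact form of the average saturation prior are not reproduced here. As a hedged illustration of the standard quantities involved, the sketch below computes the mean HSV-style saturation of an image (the kind of statistic the ASP is built from) and the homogeneous-atmosphere transmission t(x) = exp(−βd(x)), whose scattering coefficient β the method estimates per scene.

```python
import numpy as np

def average_saturation(rgb):
    """Mean HSV-style saturation of an RGB image in [0, 1]; the ASP described in
    the abstract is a statistic of this kind gathered over clear outdoor images."""
    mx = rgb.max(axis=2)
    mn = rgb.min(axis=2)
    sat = (mx - mn) / np.maximum(mx, 1e-6)       # S = (max - min) / max
    return float(sat.mean())

def transmission_from_beta(depth, beta):
    """Homogeneous-atmosphere transmission t(x) = exp(-beta * d(x)); beta is the
    scene scattering coefficient the method estimates via the ASP."""
    return np.exp(-beta * depth)
```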


2017 ◽  
Vol 31 (19-21) ◽  
pp. 1740038
Author(s):  
Ruxi Xiang ◽  
Xifang Zhu ◽  
Feng Wu

In this paper, we propose a novel two-step method, Haze Removal based on Two Steps (HRTS), which noticeably improves image qualities such as color and visibility that are degraded by haze. The method consists of two steps: a preprocessing step that decomposes the input image to reduce the influence of ambient light, and a haze removal step that restores the scene radiance. We first reduce the effect of ambient light by decomposing the hazy image, estimate the transmission map from the decomposition result, and then refine it with a modified guided filter. Finally, the monochrome atmospheric scattering model is used to restore the radiance image. Experimental results on realistic scenes show that, compared with other existing haze removal methods, the proposed method effectively removes haze and noticeably improves the color and visibility of the image.
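The paper’s decomposition step is only summarized above, so the sketch below should be read as an illustration of the two-step shape rather than HRTS itself: step 1 estimates the ambient light (here with the common brightest-dark-channel-pixels heuristic standing in for the decomposition) and a raw transmission map, and step 2 refines it with the guided filter and restores the radiance. It reuses the dark_channel, guided_filter, and restore_radiance sketches given earlier.

```python
import numpy as np

def estimate_ambient_light(img, dark, top_frac=0.001):
    """Common heuristic: average the input over the brightest dark-channel pixels."""
    n = max(1, int(top_frac * dark.size))
    idx = np.argpartition(dark.ravel(), -n)[-n:]
    return img.reshape(-1, 3)[idx].mean(axis=0)

def dehaze_two_step(img, omega=0.95):
    """Illustrative two-step pipeline: (1) estimate ambient light and a raw
    transmission map, (2) refine it and restore the radiance.  Relies on the
    dark_channel, guided_filter and restore_radiance sketches defined earlier."""
    dark = dark_channel(img)
    A = estimate_ambient_light(img, dark)
    t_raw = 1.0 - omega * dark_channel(img / np.maximum(A, 1e-6))
    t = guided_filter(img.mean(axis=2), t_raw, radius=30, eps=1e-3)
    return restore_radiance(img, t, A)
```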


Author(s):  
Hongyuan Zhu ◽  
Xi Peng ◽  
Vijay Chandrasekhar ◽  
Liyuan Li ◽  
Joo-Hwee Lim

Single image dehazing has been a classic topic in computer vision for years. Motivated by the atmospheric scattering model, the key to satisfactory single image dehazing lies in the estimation of two physical parameters, i.e., the global atmospheric light and the transmission coefficient. Most existing methods employ a two-step pipeline to estimate these two parameters with heuristics that accumulate errors and compromise dehazing quality. Inspired by differentiable programming, we re-formulate the atmospheric scattering model into a novel generative adversarial network (DehazeGAN). Such a reformulation and adversarial learning allow the two parameters to be learned simultaneously and automatically from data by optimizing the final dehazing performance, so that clean images with faithful colors and structures are directly produced. Moreover, our reformulation also greatly improves the GAN’s interpretability and quality for single image dehazing. To the best of our knowledge, our method is one of the first works to explore the connection among generative adversarial models, image dehazing, and differentiable programming, which advances the theory and applications of these areas. Extensive experiments on synthetic and realistic data show that our method outperforms state-of-the-art methods in terms of PSNR, SSIM, and subjective visual quality.
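As an illustration of the reformulation idea (a sketch under stated assumptions, not the authors’ DehazeGAN architecture), the PyTorch module below predicts the transmission map and the global atmospheric light with two small hypothetical heads and combines them through a differentiable physical layer that inverts I = J·t + A(1 − t); the discriminator and adversarial training loop are omitted.

```python
import torch
import torch.nn as nn

class PhysicsGenerator(nn.Module):
    """Toy generator in the spirit of the reformulation (not the authors' code):
    one head predicts the transmission map t, another the global atmospheric
    light A, and the output inverts I = J*t + A*(1 - t) in a differentiable way."""

    def __init__(self, ch=16):
        super().__init__()
        self.t_head = nn.Sequential(             # per-pixel transmission in (0, 1)
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 1, 3, padding=1), nn.Sigmoid())
        self.a_head = nn.Sequential(             # global atmospheric light in (0, 1)
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(3, ch, 1), nn.ReLU(),
            nn.Conv2d(ch, 3, 1), nn.Sigmoid())

    def forward(self, hazy):                     # hazy: N x 3 x H x W in [0, 1]
        t = self.t_head(hazy).clamp(min=0.1)     # N x 1 x H x W
        a = self.a_head(hazy)                    # N x 3 x 1 x 1
        clean = (hazy - a) / t + a               # differentiable physical layer
        return clean.clamp(0.0, 1.0), t, a
```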

