Artifact-Free Single Image Defogging

Atmosphere ◽  
2021 ◽  
Vol 12 (5) ◽  
pp. 577
Author(s):  
Gabriele Graffieti ◽  
Davide Maltoni

In this paper, we present a novel defogging technique, named CurL-Defog, with the aim of minimizing the insertion of artifacts while maintaining good contrast restoration and visibility enhancement. Many learning-based defogging approaches rely on paired data, where fog is artificially added to clear images; this usually provides good results on mildly fogged images but is not effective for difficult cases. On the other hand, the models trained with real data can produce visually impressive results, but unwanted artifacts are often present. We propose a curriculum learning strategy and an enhanced CycleGAN model to reduce the number of produced artifacts, where both synthetic and real data are used in the training procedure. We also introduce a new metric, called HArD (Hazy Artifact Detector), to numerically quantify the number of artifacts in the defogged images, thus avoiding the tedious and subjective manual inspection of the results. HArD is then combined with other defogging indicators to produce a solid metric that is not deceived by the presence of artifacts. The proposed approach compares favorably with state-of-the-art techniques on both real and synthetic datasets.
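
As a rough illustration of the curriculum idea described above (a minimal sketch, not the authors' code: the difficulty scores, stage boundaries, and data layout are all hypothetical), the loop below feeds easy synthetic fog pairs first and lets unpaired real images join only in later epochs:

# Minimal curriculum-learning schedule sketch (hypothetical names throughout).

def curriculum_schedule(synthetic, real, n_epochs):
    """Yield (epoch, sample) pairs, easiest synthetic samples first."""
    synthetic = sorted(synthetic, key=lambda s: s["fog_density"])
    for epoch in range(n_epochs):
        frac = (epoch + 1) / n_epochs                       # progress in (0, 1]
        pool = synthetic[:max(1, int(len(synthetic) * min(1.0, 2 * frac)))]
        if frac > 0.5:                                      # real data joins late
            pool = pool + real
        for sample in pool:
            yield epoch, sample

if __name__ == "__main__":
    syn = [{"fog_density": d, "paired": True} for d in (0.2, 0.5, 0.9)]
    real = [{"fog_density": None, "paired": False}]
    for epoch, sample in curriculum_schedule(syn, real, n_epochs=4):
        print(epoch, sample)

In the paper itself, the enhanced CycleGAN model is what consumes these samples; a schedule like this only controls the order and mixture of synthetic and real data.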

2021 ◽  
Vol 22 (1) ◽  
Author(s):  
João Lobo ◽  
Rui Henriques ◽  
Sara C. Madeira

Background: Three-way data have gained popularity due to their increasing capacity to describe inherently multivariate and temporal events, such as biological responses, social interactions along time, urban dynamics, or complex geophysical phenomena. Triclustering, the subspace clustering of three-way data, enables the discovery of patterns corresponding to data subspaces (triclusters) with values correlated across the three dimensions (observations × features × contexts). With an increasing number of algorithms being proposed, effectively comparing them with the state of the art is paramount. These comparisons are usually performed on real data without a known ground truth, which limits the assessment. In this context, we propose G-Tric, a synthetic data generator allowing the creation of synthetic datasets with configurable properties and the possibility to plant triclusters. The generator is prepared to create datasets resembling real three-way data from biomedical and social domains, with the additional advantage of providing the ground truth (the triclustering solution) as output. Results: G-Tric can replicate real-world datasets and create new ones that match researchers' needs across several properties, including data type (numeric or symbolic), dimensions, and background distribution. Users can tune the patterns and structure that characterize the planted triclusters (subspaces) and how they interact (overlapping). Data quality can also be controlled by defining the amount of missing values, noise, or errors. Furthermore, a benchmark of datasets resembling real data is made available, together with the corresponding triclustering solutions (planted triclusters) and generating parameters. Conclusions: Triclustering evaluation using G-Tric makes it possible to combine intrinsic and extrinsic metrics, yielding more reliable comparisons of solutions. A set of predefined datasets, mimicking widely used three-way data and exploring crucial properties, was generated and made available, highlighting G-Tric's potential to advance the triclustering state of the art by easing the evaluation of new triclustering approaches.
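
For intuition about what planting a tricluster means (a schematic only; G-Tric's actual parameterization of patterns, overlap, and quality is far richer), the sketch below embeds a near-constant subspace into a random observations × features × contexts tensor:

import numpy as np

rng = np.random.default_rng(0)

# Background: 50 observations x 20 features x 8 contexts, N(0, 1) noise.
data = rng.normal(size=(50, 20, 8))

# Plant a near-constant tricluster on chosen index sets of each dimension.
obs, feats, ctxs = [3, 7, 11, 19], [2, 5, 9], [1, 4]
data[np.ix_(obs, feats, ctxs)] = 4.0 + rng.normal(
    scale=0.1, size=(len(obs), len(feats), len(ctxs)))

# The ground-truth solution is simply the planted index sets.
ground_truth = {"observations": obs, "features": feats, "contexts": ctxs}
print(data[np.ix_(obs, feats, ctxs)].mean())   # ~4.0, far above background

A triclustering algorithm run on data can then be scored extrinsically by how well its recovered subspaces match ground_truth.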


Sensors ◽  
2021 ◽  
Vol 21 (11) ◽  
pp. 3896
Author(s):  
Dat Ngo ◽  
Gi-Dong Lee ◽  
Bongsoon Kang

Haze is a term widely used in image processing to refer to natural and human-activity-emitted aerosols. It causes light scattering and absorption, which reduce the visibility of captured images. This reduction hinders the proper operation of many photographic and computer-vision applications, such as object recognition/localization. Accordingly, haze removal, also known as image dehazing or defogging, is an apposite solution. However, existing dehazing algorithms remove haze unconditionally, even when haze is only occasionally present, so an approach for estimating haze density is in high demand. This paper therefore proposes a model, termed the haziness degree evaluator, that predicts haze density from a single image without reference to a corresponding haze-free image, an existing georeferenced digital terrain model, or training on a significant amount of data. The proposed model quantifies haze density by optimizing an objective function comprising three haze-relevant features identified through correlation and computational analysis. This objective function is formulated to maximize the image's saturation, brightness, and sharpness while minimizing the dark channel. Additionally, this study describes three applications of the proposed model: hazy/haze-free image classification, dehazing performance assessment, and single image dehazing. Extensive experiments on both real and synthetic datasets demonstrate its efficacy in these applications.
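
The feature side of such an objective can be sketched as follows (an illustrative re-implementation of the ingredients the abstract names, not the authors' code; the patch size and the placeholder weights are arbitrary choices, and the signs follow the abstract's stated criterion):

import numpy as np
from scipy.ndimage import minimum_filter, laplace

def haze_features(img):
    """img: float RGB array in [0, 1], shape (H, W, 3)."""
    # Dark channel: per-pixel channel minimum, then a local minimum filter.
    dark = minimum_filter(img.min(axis=2), size=15)
    # HSV-style brightness (value) and saturation.
    value = img.max(axis=2)
    sat = np.where(value > 0, (value - img.min(axis=2)) / np.maximum(value, 1e-6), 0)
    # Sharpness proxy: mean absolute Laplacian of the luminance.
    lum = img @ np.array([0.299, 0.587, 0.114])
    return dark.mean(), sat.mean(), value.mean(), np.abs(laplace(lum)).mean()

def clearness_score(img, w=(1.0, 1.0, 1.0, 1.0)):
    """Abstract's criterion: clear images have high saturation, brightness,
    and sharpness and a low dark channel (weights are placeholders)."""
    dark, sat, value, sharp = haze_features(img)
    return w[1] * sat + w[2] * value + w[3] * sharp - w[0] * dark

print(clearness_score(np.random.default_rng(0).uniform(size=(64, 64, 3))))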


2021 ◽  
Vol 2021 ◽  
pp. 1-8
Author(s):  
SungMin Suh ◽  
Yongeun Park ◽  
KyoungMin Ko ◽  
SeongMin Yang ◽  
Jaehyeong Ahn ◽  
...  

In the recent era of AI, instance segmentation has significantly advanced boundary and object detection, especially in diverse fields such as biological and environmental research. Despite this progress, edge detection between adjacent objects (e.g., organism cells) remains intractable, because homogeneous and heterogeneous objects are prone to being mingled in a single image. To cope with this challenge, we propose the weighted Mask R-CNN, designed to effectively separate overlapped objects by assigning extra weights to adjacent boundaries. For numerical study, a range of experiments is performed on simulated data and real data (Microcystis, one of the most common algae genera, and cell membrane images). Notably, the weighted Mask R-CNN outperforms the standard Mask R-CNN: the experiments show on average 92.5% precision and 96.4% recall on the algae data and 94.5% precision and 98.6% recall on the cell membrane data. Consequently, we find that the majority of sample boundaries in real and simulated data are precisely segmented even amid object mixtures.
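
One common way to realize extra weights on adjacent boundaries (a generic sketch in the spirit of the paper, not its implementation; the map below is the classic U-Net-style weighting) is a per-pixel weight that peaks where two instances nearly touch, multiplied into the mask loss:

import numpy as np
from scipy.ndimage import distance_transform_edt

def boundary_weight_map(masks, w0=10.0, sigma=5.0):
    """masks: list of binary (H, W) instance masks (at least two).
    Weights peak in the gaps between nearby instances."""
    dists = np.stack([distance_transform_edt(1 - m) for m in masks])
    dists.sort(axis=0)
    d1, d2 = dists[0], dists[1]          # nearest and second-nearest instance
    return 1.0 + w0 * np.exp(-((d1 + d2) ** 2) / (2 * sigma ** 2))

def weighted_mask_loss(pred, target, weights, eps=1e-7):
    """Per-pixel binary cross-entropy scaled by the boundary weights."""
    pred = np.clip(pred, eps, 1 - eps)
    bce = -(target * np.log(pred) + (1 - target) * np.log(1 - pred))
    return float((weights * bce).mean())

m1 = np.zeros((32, 32), int)
m1[4:15, 4:15] = 1
m2 = np.zeros((32, 32), int)
m2[4:15, 17:28] = 1
w = boundary_weight_map([m1, m2])
print(w.max())   # largest weights sit in the 2-pixel gap between the squares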


Author(s):  
Maggie Makar ◽  
Adith Swaminathan ◽  
Emre Kıcıman

The potential for using machine learning algorithms as a tool for suggesting optimal interventions has fueled significant interest in developing methods for estimating heterogeneous or individual treatment effects (ITEs) from observational data. While several methods for estimating ITEs have been recently suggested, these methods assume no constraints on the availability of data at the time of deployment or test time. This assumption is unrealistic in settings where data acquisition is a significant part of the analysis pipeline, meaning data about a test case has to be collected in order to predict the ITE. In this work, we present Data Efficient Individual Treatment Effect Estimation (DEITEE), a method which exploits the idea that adjusting for confounding, and hence collecting information about confounders, is not necessary at test time. DEITEE allows the development of rich models that exploit all variables at train time but identifies a minimal set of variables required to estimate the ITE at test time. Using 77 semi-synthetic datasets with varying data generating processes, we show that DEITEE achieves significant reductions in the number of variables required at test time with little to no loss in accuracy. Using real data, we demonstrate the utility of our approach in helping soon-to-be mothers make planning and lifestyle decisions that will impact newborn health.
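
The underlying idea, that confounding adjustment matters at train time but not at test time, can be caricatured in a few lines (a toy distillation sketch with made-up variables, not the DEITEE algorithm): fit rich outcome models on all covariates, then regress the resulting ITE estimates onto a cheap test-time subset:

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 10))                        # all covariates, train time
t = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))     # treatment confounded by X0
y = X[:, 1] * t + X[:, 0] + rng.normal(scale=0.1, size=n)   # true ITE = X1

# Train time: T-learner over all variables, adjusting for confounding.
m1 = GradientBoostingRegressor().fit(X[t == 1], y[t == 1])
m0 = GradientBoostingRegressor().fit(X[t == 0], y[t == 0])
ite_hat = m1.predict(X) - m0.predict(X)             # pseudo-ITE labels

# Test time: only the effect modifier X1 needs to be collected.
reduced = LinearRegression().fit(X[:, [1]], ite_hat)
print(reduced.coef_)   # ~[1.0]: the true effect modifier is recovered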


Information ◽  
2020 ◽  
Vol 11 (2) ◽  
pp. 63
Author(s):  
Benjamin Guedj ◽  
Bhargav Srinivasa Desikan

We propose a new supervised learning algorithm for classification and regression problems where two or more preliminary predictors are available. We introduce KernelCobra, a non-linear learning strategy for combining an arbitrary number of initial predictors. KernelCobra builds on the COBRA algorithm introduced by Biau et al. (2016), which combined estimators based on a notion of proximity of predictions on the training data. While the COBRA algorithm used a binary threshold to declare which training data were close and to be used, we generalise this idea by using a kernel to better encapsulate the proximity information. Such a smoothing kernel provides more representative weights to each of the training points which are used to build the aggregate and final predictor, and KernelCobra systematically outperforms the COBRA algorithm. While COBRA is intended for regression, KernelCobra deals with classification and regression. KernelCobra is included as part of the open source Python package Pycobra (0.2.4 and onward), introduced by Srinivasa Desikan (2018). Numerical experiments were undertaken to assess the performance (in terms of pure prediction and computational complexity) of KernelCobra on real-life and synthetic datasets.
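
The kernel-smoothed aggregation itself fits in a dozen lines (an illustrative re-implementation of the idea with a Gaussian kernel and an arbitrary bandwidth; the maintained version is the one shipped in Pycobra):

import numpy as np

def kernelcobra_predict(train_preds, train_y, query_preds, bandwidth=0.5):
    """train_preds: (n, M) base-learner predictions on the n training points.
    query_preds: (M,) predictions of the same M learners at the query point.
    Training responses are averaged with kernel weights measuring how close
    each training point's predictions are to the query's predictions."""
    d2 = ((train_preds - query_preds) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2 * bandwidth ** 2))           # smooth proximity, not 0/1
    return float((w * train_y).sum() / w.sum()) if w.sum() > 0 else float(train_y.mean())

# Toy use: two base learners, one noisy and one biased.
rng = np.random.default_rng(1)
x = rng.uniform(-2, 2, size=200)
y = np.sin(x)
preds = np.stack([y + rng.normal(scale=0.3, size=200), y + 0.2], axis=1)
query = np.array([np.sin(0.5) + 0.1, np.sin(0.5) + 0.2])
print(kernelcobra_predict(preds, y, query))          # close to sin(0.5) ~ 0.479

Replacing the Gaussian weights with a 0/1 indicator of d2 being below a threshold recovers the original COBRA rule, which is exactly the generalisation the abstract describes.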


2019 ◽  
Vol 11 (23) ◽  
pp. 2857
Author(s):  
Xiaoyu Dong ◽  
Zhihong Xi ◽  
Xu Sun ◽  
Lianru Gao

Image super-resolution (SR) reconstruction plays a key role in meeting the increasing demand for remote sensing imaging applications with high spatial resolution requirements. Although many SR methods have been proposed over the last few years, further research is needed to adapt SR to the complex spatial distribution of remote sensing images and the diverse spatial scales of ground objects. In this paper, a novel multi-perception attention network (MPSR) is developed whose performance exceeds that of many existing state-of-the-art models. By incorporating the proposed enhanced residual block (ERB) and residual channel attention group (RCAG), MPSR can super-resolve low-resolution remote sensing images via multi-perception learning and multi-level adaptive weighted fusion of information. Moreover, a pre-training and transfer learning strategy is introduced, which improves SR performance and stabilizes the training procedure. Experimental comparisons are conducted against 13 state-of-the-art methods on a remote sensing dataset and benchmark natural image sets. The proposed model proves its excellence by both objective criteria and subjective visual inspection.
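
The channel-attention ingredient can be sketched in numpy (a generic squeeze-and-excitation-style block for orientation; the layer shapes, reduction ratio, and stand-in convolution are not the MPSR specification):

import numpy as np

def channel_attention(feat, w1, w2):
    """feat: (C, H, W); w1: (C//r, C); w2: (C, C//r).
    Squeeze to per-channel statistics, excite with a small MLP, rescale."""
    squeeze = feat.mean(axis=(1, 2))             # global average pool -> (C,)
    hidden = np.maximum(w1 @ squeeze, 0)         # ReLU
    gate = 1 / (1 + np.exp(-(w2 @ hidden)))      # sigmoid channel weights
    return feat * gate[:, None, None]

def residual_channel_attention_block(feat, conv, w1, w2):
    """Residual connection wrapped around conv + channel attention."""
    return feat + channel_attention(np.maximum(conv(feat), 0), w1, w2)

C, r = 16, 4
rng = np.random.default_rng(0)
f = rng.normal(size=(C, 8, 8))
identity_conv = lambda x: x                      # stand-in for a learned 3x3 conv
out = residual_channel_attention_block(
    f, identity_conv, rng.normal(size=(C // r, C)), rng.normal(size=(C, C // r)))
print(out.shape)                                 # (16, 8, 8)

Stacking several such blocks and fusing their outputs with learned weights is the flavor of multi-level adaptive fusion the abstract refers to.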


2018 ◽  
Vol 28 (5) ◽  
pp. 1508-1522
Author(s):  
Qianya Qi ◽  
Li Yan ◽  
Lili Tian

In testing for differentially expressed genes between tumor and healthy tissues, data are usually collected in paired form. However, incomplete paired data often occur. While extensive statistical research exists for paired data with incompleteness in both arms, hardly any recent work can be found on paired data with incompleteness in a single arm. This paper aims to fill this gap by proposing new methods, namely P-value pooling methods and a nonparametric combination test. Simulation studies are conducted to investigate the performance of the proposed methods in terms of type I error and power at small to moderate sample sizes. A real data set from The Cancer Genome Atlas (TCGA) breast cancer study is analyzed using the proposed methods.
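
As a concrete instance of P-value pooling for single-arm incompleteness (a standard weighted-Stouffer construction shown for orientation; the paper's pooling rules and nonparametric combination test may differ, and the dependence induced by reusing the healthy arm is ignored here for simplicity), one can combine a paired test on complete pairs with a two-sample test on the unpaired tumor samples:

import numpy as np
from scipy import stats

def pooled_pvalue(paired_tumor, paired_healthy, tumor_only):
    """Paired test on complete pairs + two-sample test on the extra
    tumor-arm samples, combined with sqrt(sample-size) Stouffer weights."""
    p1 = stats.ttest_rel(paired_tumor, paired_healthy, alternative="greater").pvalue
    p2 = stats.ttest_ind(tumor_only, paired_healthy, alternative="greater").pvalue
    z1, z2 = stats.norm.isf(p1), stats.norm.isf(p2)
    w1, w2 = np.sqrt(len(paired_tumor)), np.sqrt(len(tumor_only))
    z = (w1 * z1 + w2 * z2) / np.sqrt(w1 ** 2 + w2 ** 2)
    return stats.norm.sf(z)

rng = np.random.default_rng(0)
healthy = rng.normal(0, 1, 30)
tumor_paired = healthy + rng.normal(0.8, 1, 30)   # up-regulated expression
tumor_only = rng.normal(0.8, 1, 12)               # pairs lost in one arm
print(pooled_pvalue(tumor_paired, healthy, tumor_only))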


Author(s):  
Lumin Liu

Removing undesired reflections from a single image is in demand in computational photography. Reflection removal methods have become progressively more effective thanks to the fast development of deep neural networks. However, current methods usually leave salient reflection residues due to the challenge of recognizing diverse reflection patterns. In this paper, we present a one-stage, end-to-end reflection removal framework that considers both low-level information correlation and efficient feature separation. Our approach employs the criss-cross attention mechanism to extract low-level features and to efficiently enhance contextual correlation. To thoroughly remove reflection residues from the background image, we penalize similar texture features by contrasting the parallel feature separation networks, so that unrelated textures in the background image are progressively separated during model training. Experiments on both real-world and synthetic datasets show that our approach reaches state-of-the-art performance both quantitatively and qualitatively.
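
The criss-cross attention pattern restricts each position to attend over its own row and column, which a naive loop makes explicit (a didactic sketch only: real implementations are vectorized, learn query/key/value projections, and mask the doubly counted center position):

import numpy as np

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

def criss_cross_attention(q, k, v):
    """q, k: (Ck, H, W); v: (Cv, H, W). Each position attends to the
    H + W positions on its row and column rather than all H * W."""
    _, H, W = q.shape
    out = np.zeros_like(v)
    for i in range(H):
        for j in range(W):
            keys = np.concatenate([k[:, i, :], k[:, :, j]], axis=1)   # (Ck, W+H)
            vals = np.concatenate([v[:, i, :], v[:, :, j]], axis=1)   # (Cv, W+H)
            attn = softmax(q[:, i, j] @ keys)                         # (W+H,)
            out[:, i, j] = vals @ attn
    return out

rng = np.random.default_rng(0)
f = rng.normal(size=(8, 16, 16))
print(criss_cross_attention(f, f, f).shape)   # (8, 16, 16)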


Author(s):  
Sen Deng ◽  
Yidan Feng ◽  
Mingqiang Wei ◽  
Haoran Xie ◽  
Yiping Chen ◽  
...  

We present a novel direction-aware feature-level frequency decomposition network for single image deraining. Compared with existing solutions, the proposed network has three compelling characteristics. First, unlike previous algorithms, we perform frequency decomposition at the feature level instead of the image level, allowing both low-frequency maps containing structures and high-frequency maps containing details to be continuously refined during training. Second, we establish communication channels between the low-frequency and high-frequency maps to interactively capture structures from the high-frequency maps and add them back to the low-frequency maps and, simultaneously, to extract details from the low-frequency maps and send them back to the high-frequency maps, thereby removing rain streaks while preserving delicate features of the input image. Third, unlike existing algorithms that use convolutional filters consistent in all directions, we propose a direction-aware filter that captures the orientation of rain streaks, purging them from the input images more effectively and thoroughly. We extensively evaluate the proposed approach on three representative datasets, and the experimental results corroborate that it consistently outperforms state-of-the-art deraining algorithms.
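
Feature-level frequency decomposition can be pictured with a simple smoothing split (a schematic, not the proposed network: a box filter stands in for the learned low-pass branch, and the exchange step is a toy version of the communication channels):

import numpy as np
from scipy.ndimage import uniform_filter

def split_frequencies(feat, size=5):
    """feat: (C, H, W). Low-frequency part via per-channel smoothing;
    the high-frequency residual carries details and rain-streak energy."""
    low = np.stack([uniform_filter(c, size=size) for c in feat])
    return low, feat - low

def exchange(low, high, alpha=0.1):
    """Each branch receives a small additive correction from the other
    before further refinement, mimicking the interactive communication."""
    return low + alpha * high, high + alpha * low

rng = np.random.default_rng(0)
f = rng.normal(size=(4, 32, 32))
low, high = split_frequencies(f)
print(np.allclose(low + high, f))   # True: exact decomposition before exchange
low, high = exchange(low, high)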


2019 ◽  
Vol 29 (1) ◽  
pp. 282-292
Author(s):  
Tsung-Shan Tsou

We introduce a robust likelihood approach to inference about marginal distributional characteristics of paired data without modeling correlation/joint probabilities. The method is reproducible in that it applies to paired settings of various sizes. The virtue of the new strategy is elucidated by testing marginal homogeneity in the paired-triplet scenario. We use simulations and real data analysis to demonstrate the merit of our robust likelihood methodology.

