Green Fluorescent Protein and Phase-Contrast Image Fusion via Generative Adversarial Networks

2019 ◽  
Vol 2019 ◽  
pp. 1-11 ◽  
Author(s):  
Wei Tang ◽  
Yu Liu ◽  
Chao Zhang ◽  
Juan Cheng ◽  
Hu Peng ◽  
...  

In the field of cell and molecular biology, green fluorescent protein (GFP) images provide functional information embodying the molecular distribution of biological cells, while phase-contrast images preserve structural information at high resolution. Fusing GFP and phase-contrast images is of high significance to the study of subcellular localization, protein function analysis, and gene expression. This paper proposes a novel algorithm to fuse these two types of biological images via generative adversarial networks (GANs), carefully taking their respective characteristics into account. The fusion problem is modelled as an adversarial game between a generator and a discriminator. The generator aims to create a fused image that simultaneously extracts the functional information from the GFP image and the structural information from the phase-contrast image. The discriminator aims to further improve the overall similarity between the fused image and the phase-contrast image. Experimental results demonstrate that the proposed method outperforms several representative and state-of-the-art image fusion methods in terms of both visual quality and objective evaluation.
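The abstract does not give the generator's loss function, but the stated goals (keep GFP intensities, keep phase-contrast structure) suggest a content term of roughly the following form. This is a hypothetical sketch in NumPy; the function name, the gradient-based structure term, and the weight `xi` are assumptions, not the paper's actual formulation.

```python
import numpy as np

def generator_content_loss(fused, gfp, phase, xi=5.0):
    """Hypothetical content term of the generator loss: stay close to the
    GFP intensities (functional information) while matching the gradients
    of the phase-contrast image (structural information)."""
    # Intensity fidelity to the GFP image.
    intensity_term = np.mean((fused - gfp) ** 2)

    # Gradient fidelity to the phase-contrast image (finite differences).
    def grad(img):
        return np.diff(img, axis=1), np.diff(img, axis=0)

    fx, fy = grad(fused)
    px, py = grad(phase)
    gradient_term = np.mean((fx - px) ** 2) + np.mean((fy - py) ** 2)

    return intensity_term + xi * gradient_term
```

In a GAN setting, a term of this kind would be combined with the adversarial loss coming from the discriminator's judgement of the fused image.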

2013 ◽  
Vol 2013 ◽  
pp. 1-10 ◽  
Author(s):  
Peng Feng ◽  
Jing Wang ◽  
Biao Wei ◽  
Deling Mi

A hybrid multiscale, multilevel image fusion algorithm for green fluorescent protein (GFP) images and phase-contrast images of Arabidopsis cells is proposed in this paper. Combining the intensity-hue-saturation (IHS) transform with the sharp frequency localization contourlet transform (SFL-CT), the algorithm applies different fusion strategies to different detail subbands, including a neighborhood consistency measurement (NCM) that adaptively balances the color background against the gray structure. Two kinds of neighborhood classes based on an empirical model are also taken into consideration. Visual information fidelity (VIF) is introduced as an objective criterion to evaluate the fused image. Experimental results on 117 groups of Arabidopsis cell images from the John Innes Center show that the new algorithm not only preserves the details of the original images well but also improves the visibility of the fused image, demonstrating the superiority of the novel method over traditional ones.
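The IHS step of such a pipeline can be illustrated with a minimal intensity-substitution sketch: the intensity channel of the color GFP image is replaced by the grayscale structural image while hue and saturation are preserved. This omits the SFL-CT subband fusion and NCM weighting entirely, and the function name and `eps` guard are assumptions for illustration.

```python
import numpy as np

def ihs_fuse(rgb, gray, eps=1e-8):
    """Simplified IHS-substitution fusion: swap the intensity channel of
    a color image (values in [0, 1]) for a grayscale structural image,
    keeping the hue/saturation ratios of the original channels."""
    intensity = rgb.mean(axis=2)            # I = (R + G + B) / 3
    scale = gray / (intensity + eps)        # per-pixel substitution ratio
    fused = rgb * scale[..., None]          # rescale all three channels
    return np.clip(fused, 0.0, 1.0)
```

When the grayscale input equals the image's own intensity channel, the fusion is (up to `eps`) an identity, which is a convenient sanity check.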


1996 ◽  
Author(s):  
Jesper Glueckstad ◽  
Haruyoshi Toyoda ◽  
Narihiro Yoshida ◽  
Tamiki Takemori ◽  
Tsutomu Hara

Author(s):  
A. Peterzol ◽  
J. Berthier ◽  
P. Duvauchelle ◽  
C. Ferrero ◽  
D. Babot

2013 ◽  
Vol 30 (2) ◽  
pp. 129-132
Author(s):  
Jinchuan Guo ◽  
Qinlao Yang ◽  
Bin Zhou ◽  
Hanben Niu

Author(s):  
Zhiguang Yang ◽  
Youping Chen ◽  
Zhuliang Le ◽  
Yong Ma

Abstract In this paper, a novel multi-exposure image fusion method based on generative adversarial networks (termed GANFuse) is presented. Conventional multi-exposure image fusion methods improve their fusion performance by designing sophisticated activity-level measurements and fusion rules. However, these methods have had limited success in complex fusion tasks. Inspired by the recent FusionGAN, which first utilized generative adversarial networks (GANs) to fuse infrared and visible images and achieved promising performance, we improve its architecture and customize it for the task of extreme-exposure image fusion. Specifically, in order to keep the content of the extreme-exposure image pair in the fused image, we increase the number of discriminators, each differentiating between the fused image and one of the extreme-exposure inputs, while a generator network is trained to generate the fused images. Through the adversarial relationship between the generator and the discriminators, the fused image comes to contain more information from the extreme-exposure image pair, and this relationship thus yields better fusion performance. In addition, the proposed method is an end-to-end, unsupervised learning model, which avoids designing hand-crafted features and does not require ground-truth images for training. We conduct qualitative and quantitative experiments on a public dataset, and the results show that the proposed model achieves better fusion than existing multi-exposure image fusion methods in both visual effect and evaluation metrics.
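The two-discriminator idea can be sketched as a generator objective with one adversarial term per extreme-exposure input plus a content term. This is a hypothetical NumPy illustration, not GANFuse's actual loss: the least-squares adversarial form, the L1 content term, and the weight `lam` are all assumptions.

```python
import numpy as np

def ganfuse_generator_loss(d_under, d_over, fused, under, over, lam=100.0):
    """Hypothetical generator objective with two discriminators, one for
    the under-exposed and one for the over-exposed input: adversarial
    terms push the fused image to look 'real' to both discriminators,
    and an L1 content term keeps information from both inputs."""
    # Least-squares adversarial terms (target label 1 = "real").
    adv = (np.mean((d_under(fused) - 1.0) ** 2)
           + np.mean((d_over(fused) - 1.0) ** 2))
    # L1 content fidelity to both extreme-exposure inputs.
    content = np.mean(np.abs(fused - under)) + np.mean(np.abs(fused - over))
    return adv + lam * content
```

In training, each discriminator would separately be updated to distinguish the fused image from its own exposure input, creating the adversarial game the abstract describes.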


2011 ◽  
Vol 56 (3) ◽  
pp. 515-534 ◽  
Author(s):  
Marcus J Kitchen ◽  
David M Paganin ◽  
Kentaro Uesugi ◽  
Beth J Allison ◽  
Robert A Lewis ◽  
...  
