Empirical Analysis of Deep Convolutional Generative Adversarial Network for Ultrasound Image Synthesis

2021 ◽  
Vol 15 (1) ◽  
pp. 71-77
Author(s):  
Dheeraj Kumar ◽  
Mayuri A. Mehta ◽  
Indranath Chatterjee

Introduction: Recent research on Generative Adversarial Networks (GANs) in the biomedical field has proven their effectiveness in generating synthetic images of different modalities. Ultrasound imaging is one of the primary imaging modalities for diagnosis in the medical domain. In this paper, we present an empirical analysis of the state-of-the-art Deep Convolutional Generative Adversarial Network (DCGAN) for generating synthetic ultrasound images. Aims: This work aims to explore the use of deep convolutional generative adversarial networks for the synthesis of ultrasound images and to leverage their capabilities. Background: Ultrasound imaging plays a vital role in healthcare for timely diagnosis and treatment. Increasing interest in automated medical image analysis for precise diagnosis has expanded the demand for a large number of ultrasound images. Generative adversarial networks have proven beneficial for increasing the size of datasets by generating synthetic images. Objective: Our main purpose in generating synthetic ultrasound images is to produce a sufficient number of ultrasound images with varying representations of a disease. Methods: DCGAN has been used to generate synthetic ultrasound images. It is trained on two ultrasound image datasets, namely the common carotid artery dataset and the nerve dataset, which are publicly available from Signal Processing Lab and Kaggle, respectively. Results: Results show that good-quality synthetic ultrasound images are generated within 100 epochs of training of DCGAN. The quality of the synthetic ultrasound images is evaluated using Mean Squared Error (MSE), Peak Signal-to-Noise Ratio (PSNR), and Structural Similarity Index Measure (SSIM). We also present visual representations of slices of the generated images for qualitative comparison. Conclusion: Our empirical analysis reveals that synthetic ultrasound image generation using DCGAN is an efficient approach.
Other: In future work, we plan to compare the quality of images generated through other adversarial methods, such as conditional GAN and progressive GAN.
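The MSE and PSNR metrics used to score the synthetic images above are straightforward to compute; a minimal NumPy sketch, assuming 8-bit grayscale images (the array sizes and values here are illustrative, not from the paper's datasets):

```python
import numpy as np

def mse(a, b):
    # Mean squared error between two images of identical shape.
    return float(np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2))

def psnr(a, b, max_val=255.0):
    # Peak signal-to-noise ratio in dB; higher means the images are closer.
    err = mse(a, b)
    if err == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / err)

real = np.full((64, 64), 120, dtype=np.uint8)   # stand-in "real" slice
fake = real.copy()
fake[:, :32] += 5                                # simulate a small synthesis error
print(mse(real, fake))            # 12.5
print(round(psnr(real, fake), 2)) # 37.16
```

SSIM is more involved (it compares local luminance, contrast, and structure over sliding windows), so in practice a library implementation such as scikit-image's `structural_similarity` is typically used rather than hand-rolling it.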

Author(s):  
Khaled ELKarazle ◽  
Valliappan Raman ◽  
Patrick Then

Age estimation models can be employed in many applications, including soft biometrics, content access control, targeted advertising, and many more. However, as some facial images are taken in unconstrained conditions, their quality degrades, which results in the loss of several essential ageing features. This study investigates how introducing a new layer of data processing based on a super-resolution generative adversarial network (SRGAN) model can influence the accuracy of age estimation by enhancing the quality of both the training and testing samples. Additionally, we introduce a novel convolutional neural network (CNN) classifier to distinguish between several age classes. We train one of our classifiers on a reconstructed version of the original dataset and compare its performance with an identical classifier trained on the original version of the same dataset. Our findings reveal that the classifier trained on the reconstructed dataset produces better classification accuracy, opening the door for more research into building data-centric machine learning systems.


2020 ◽  
Vol 10 (13) ◽  
pp. 4528
Author(s):  
Je-Yeol Lee ◽  
Sang-Il Choi 

In this paper, we propose a new network model that uses variational learning to improve the learning stability of generative adversarial networks (GANs). The proposed method can be easily applied to improve the learning stability of GAN-based models developed for various purposes, given that the variational autoencoder (VAE) is used as a secondary network while the basic GAN structure is maintained. When the gradient of the generator vanishes during GAN training, the proposed method receives gradient information from the decoder of the VAE, which maintains a stable gradient, so that the learning processes of the generator and discriminator are not halted. Experimental results on the MNIST and CelebA datasets verify that the proposed method improves the learning stability of the networks by overcoming the vanishing gradient problem of the generator, while maintaining the excellent data quality of conventional GAN-based generative models.


Author(s):  
Kaizheng Chen ◽  
Yaping Dai ◽  
Zhiyang Jia ◽  
Kaoru Hirota

In this paper, a Spinning Detail Perceptual Generative Adversarial Network (SDP-GAN) is proposed for single-image de-raining. The proposed method adopts the Generative Adversarial Network (GAN) framework and consists of two networks: the rain-streak generative network G and the discriminative network D. To reduce background interference, we propose a rain-streak generative network that not only focuses on the high-frequency detail map of the rainy image, but also directly reduces the mapping range from input to output. To further improve the perceptual quality of the generated images, we modify the perceptual loss by extracting high-level features from the discriminative network D rather than from pre-trained networks. Furthermore, we introduce a new training procedure based on the notion of self-spinning to improve the final de-raining performance. Extensive experiments on synthetic and real-world datasets demonstrate that the proposed method achieves significant improvements over recent state-of-the-art methods.
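The high-frequency detail map the generator focuses on is conventionally the residual between an image and a smoothed "base" layer; a minimal sketch of that decomposition, using a simple box blur as the (assumed) smoothing filter:

```python
import numpy as np

def box_blur(img, k=3):
    # Simple mean filter: the low-frequency "base" layer of the image.
    pad = k // 2
    padded = np.pad(img.astype(np.float64), pad, mode="edge")
    out = np.zeros(img.shape, dtype=np.float64)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

def detail_map(img, k=3):
    # High-frequency residual: the input minus its smoothed base layer.
    return img.astype(np.float64) - box_blur(img, k)

img = np.zeros((8, 8))
img[:, 4] = 100.0          # a bright vertical "streak"
d = detail_map(img)
# The streak dominates the detail map; flat regions stay near zero.
print(round(abs(d).max(), 2))  # 66.67
```

Real de-raining pipelines typically use a guided or bilateral filter for the base layer; the box blur here is only a stand-in to show the detail-map idea.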


Author(s):  
A. Courtial ◽  
G. Touya ◽  
X. Zhang

Abstract. This article presents how a generative adversarial network (GAN) can be employed to produce a generalised map that combines several cartographic themes in the dense context of urban areas. We use as input detailed buildings, roads, and rivers from topographic datasets produced by the French national mapping agency (IGN), and we expect as output of the GAN a legible map of these elements at a target scale of 1:50,000. This level of detail requires reducing the amount of information while preserving patterns; covering dense inner-city blocks with a single polygon is also necessary because these blocks cannot be represented with enlarged individual buildings. The target map has a style similar to the topographic map produced by IGN. This experiment succeeded in producing image tiles that look like legible maps. It also highlights the impact of data and representation choices on the quality of the predicted images, and the challenge of learning geographic relationships.
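Tile-based training of this kind starts by cutting the rasterised source map into fixed-size patches; a minimal sketch, where the 256-pixel tile size and the mock raster dimensions are illustrative assumptions, not values from the article:

```python
import numpy as np

def to_tiles(raster, tile=256):
    # Split a 2D raster into non-overlapping tile x tile patches,
    # dropping any partial tiles at the right/bottom edges.
    h, w = raster.shape
    tiles = []
    for i in range(0, h - tile + 1, tile):
        for j in range(0, w - tile + 1, tile):
            tiles.append(raster[i:i + tile, j:j + tile])
    return tiles

raster = np.arange(512 * 768).reshape(512, 768)  # a mock rasterised map sheet
tiles = to_tiles(raster)
print(len(tiles))      # 2 rows x 3 cols = 6 tiles
print(tiles[0].shape)  # (256, 256)
```

In an image-to-image translation setup, the same tiling is applied to both the input themes and the target map so that corresponding patches form training pairs.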


Author(s):  
Chaoyue Wang ◽  
Chaohui Wang ◽  
Chang Xu ◽  
Dacheng Tao

In this paper, we propose Tag Disentangled Generative Adversarial Networks (TD-GAN), a principled framework for re-rendering new images of an object of interest from a single image of it by specifying multiple scene properties (such as viewpoint, illumination, expression, etc.). The whole framework consists of a disentangling network, a generative network, a tag mapping net, and a discriminative network, which are trained jointly on a given set of images that are completely/partially tagged (i.e., the supervised/semi-supervised setting). Given an input image, the disentangling network extracts disentangled and interpretable representations, which are then used to generate images by the generative network. To boost the quality of the disentangled representations, the tag mapping net is integrated to exploit the consistency between an image and its tags. Furthermore, the discriminative network is introduced to implement the adversarial training strategy for generating more realistic images. Experiments on two challenging datasets demonstrate the state-of-the-art performance of the proposed framework on the problem of interest.


2017 ◽  
Author(s):  
Benjamin Sanchez-Lengeling ◽  
Carlos Outeiral ◽  
Gabriel L. Guimaraes ◽  
Alan Aspuru-Guzik

Molecular discovery seeks to generate chemical species tailored to very specific needs. In this paper, we present ORGANIC, a framework based on Objective-Reinforced Generative Adversarial Networks (ORGAN), capable of producing a distribution over molecular space that matches a certain set of desirable metrics. This methodology combines two successful techniques from the machine learning community: a Generative Adversarial Network (GAN), to create non-repetitive, sensible molecular species, and Reinforcement Learning (RL), to bias this generative distribution towards certain attributes. We explore several applications, from optimization of random physicochemical properties to candidates for drug discovery and organic photovoltaic material design.
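The objective-reinforced idea amounts to blending the adversarial realism score with a domain metric in the RL reward. A minimal sketch of that reward mixing, where the mixing weight and the two scores are illustrative assumptions (both assumed scaled to [0, 1]):

```python
def blended_reward(d_score, metric_score, lam=0.5):
    # Convex combination of the discriminator's realism score and a
    # desired chemical metric; lam tunes how strongly generation is
    # biased towards the objective versus plain realism.
    return lam * metric_score + (1 - lam) * d_score

# A molecule the discriminator finds plausible (0.8) but whose
# target property score is mediocre (0.4):
r = blended_reward(d_score=0.8, metric_score=0.4, lam=0.5)
print(round(r, 2))  # 0.6
```

With `lam = 0` this reduces to ordinary adversarial training; with `lam = 1` the generator optimises the metric alone and realism is no longer rewarded.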


2021 ◽  
Vol 11 (15) ◽  
pp. 7034
Author(s):  
Hee-Deok Yang

Artificial intelligence technologies and vision systems are used in various devices, such as automotive navigation systems, object-tracking systems, and intelligent closed-circuit televisions. In particular, outdoor vision systems have been applied across numerous fields of analysis. Despite their widespread use, current systems work well only under good weather conditions; they cannot account for inclement conditions such as rain, fog, mist, and snow. Images captured under inclement conditions degrade the performance of vision systems. Vision systems need to detect, recognize, and remove noise caused by rain, snow, and mist to boost the performance of the algorithms employed in image processing. Several studies have targeted the removal of noise resulting from inclement conditions. We focused on eliminating the effects of raindrops on images captured with outdoor vision systems in which the camera was exposed to rain. An attentive generative adversarial network (ATTGAN) was used to remove raindrops from the images. This network was composed of two parts: an attentive-recurrent network and a contextual autoencoder. The ATTGAN generated an attention map to detect rain droplets. A de-rained image was generated by increasing the number of attentive-recurrent network layers. We increased the number of attentive-recurrent network layers to prevent gradient sparsity, making the generation process more stable while still allowing the network to converge. The experimental results confirmed that the extended ATTGAN could effectively remove various types of raindrops from images.


Author(s):  
Lingyu Yan ◽  
Jiarun Fu ◽  
Chunzhi Wang ◽  
Zhiwei Ye ◽  
Hongwei Chen ◽  
...  

Abstract With the development of image recognition technology, the face, body shape, and other factors have been widely used as identification labels, providing great convenience in our daily life. However, image recognition has much higher requirements for image conditions than traditional identification methods such as a password. Therefore, image enhancement plays an important role in the analysis of noisy images, among which low-light images are the focus of our research. In this paper, a low-light image enhancement method based on an enhancement-network-optimised Generative Adversarial Network (GAN) is proposed. The proposed method first applies the enhancement network so that the generator maps the input image to a similar image in a new space, then constructs and minimises a loss function to train the discriminator, which compares the image generated by the generator with the real image. We implemented the proposed method on two image datasets (DPED, LOL) and compared it with both traditional image enhancement methods and deep learning approaches. Experiments showed that images enhanced by our proposed network have higher PSNR and SSIM and relatively good overall perceptual quality, demonstrating the effectiveness of the method for low-illumination image enhancement.


Author(s):  
Johannes Haubold ◽  
René Hosch ◽  
Lale Umutlu ◽  
Axel Wetter ◽  
Patrizia Haubold ◽  
...  

Abstract Objectives To reduce the dose of intravenous iodine-based contrast media (ICM) in CT through virtual contrast-enhanced images using generative adversarial networks. Methods Dual-energy CTs in the arterial phase of 85 patients were randomly split into an 80/20 train/test collective. Four different generative adversarial networks (GANs) were trained on image pairs, each comprising one image with virtually reduced ICM and the original full-ICM CT slice, testing two input formats (2D and 2.5D) and two reduced ICM dose levels (−50% and −80%). The amount of intravenous ICM was reduced by creating virtual non-contrast series using dual-energy CT and adding the corresponding percentage of the iodine map. The evaluation was based on different scores (L1 loss, SSIM, PSNR, FID) that assess image quality and similarity. Additionally, a visual Turing test (VTT) with three radiologists was used to assess similarity and pathological consistency. Results The −80% models reach an SSIM of > 98%, PSNR of > 48, L1 of between 7.5 and 8, and an FID of between 1.6 and 1.7. In comparison, the −50% models reach an SSIM of > 99%, PSNR of > 51, L1 of between 6.0 and 6.1, and an FID of between 0.8 and 0.95. For the crucial question of pathological consistency, only the −50% ICM reduction networks achieved 100% consistency, which is required for clinical use. Conclusions The required amount of ICM for CT can be reduced by 50% while maintaining image quality and diagnostic accuracy using GANs. Further phantom studies and animal experiments are required to confirm these initial results. Key Points • The amount of contrast media required for CT can be reduced by 50% using generative adversarial networks. • Not only the image quality but especially the pathological consistency must be evaluated to assess safety. • Too pronounced a contrast media reduction (−80%) could compromise pathological consistency in our collective.
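The dose-reduction input described above is simple per-voxel arithmetic: a virtual non-contrast (VNC) series plus a fraction of the dual-energy iodine map. A minimal sketch, where the HU values are made-up numbers for illustration only:

```python
import numpy as np

def reduced_icm_slice(vnc, iodine_map, fraction):
    # Virtual non-contrast series plus the chosen fraction of the
    # dual-energy iodine map, e.g. fraction=0.5 for a -50% dose.
    return vnc + fraction * iodine_map

vnc = np.array([[40.0, 45.0], [50.0, 42.0]])      # HU without contrast
iodine = np.array([[120.0, 0.0], [60.0, 200.0]])  # HU from iodine alone
half_dose = reduced_icm_slice(vnc, iodine, 0.5)
print(half_dose)  # adds half the iodine signal back onto the VNC
```

The GAN is then trained to map such a reduced-dose slice back to the original full-ICM slice, so at inference time a genuinely lower injected dose can be virtually boosted.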


Author(s):  
Huilin Zhou ◽  
Huimin Zheng ◽  
Qiegen Liu ◽  
Jian Liu ◽  
Yuhao Wang

Abstract Electromagnetic inverse-scattering problems (ISPs) are concerned with determining the properties of an unknown object from measured scattered fields. ISPs are often highly nonlinear, making them very difficult to solve. In addition, the images reconstructed by different optimization methods are distorted, which leads to inaccurate reconstruction results. To alleviate these issues, we propose a new linear-model solution, LM-GAN, inspired by generative adversarial networks (GANs). Two sub-networks are trained alternately in the adversarial framework: a linear deep iterative network acting as the generative network captures the spatial distribution of the data, and a discriminative network estimates the probability that a sample came from the training data. Numerical results validate that LM-GAN has admirable fidelity and accuracy when reconstructing complex scatterers.

