image translation
Recently Published Documents


TOTAL DOCUMENTS

546
(FIVE YEARS 471)

H-INDEX

19
(FIVE YEARS 12)

Author(s):  
Ning Zhang ◽  
Jun Xiang ◽  
Jingan Wang ◽  
Ruru Pan ◽  
Weidong Gao
Keyword(s):  

Author(s):  
Qi Mao ◽  
Hung-Yu Tseng ◽  
Hsin-Ying Lee ◽  
Jia-Bin Huang ◽  
Siwei Ma ◽  
...  
Keyword(s):  

2022 ◽  
Vol 130 (3) ◽  
pp. 1-16
Author(s):  
Jong-In Choi ◽  
Soo-Kyun Kim ◽  
Shin-Jin Kang

Author(s):  
Wei Wang ◽  
Xinhua Yu ◽  
Bo Fang ◽  
Dianna-Yue Zhao ◽  
Yongyong Chen ◽  
...  

2022 ◽  
pp. 98-110
Author(s):  
Md Fazle Rabby ◽  
Md Abdullah Al Momin ◽  
Xiali Hei

Generative adversarial networks (GANs) have become a heavily studied research topic in computer vision, especially for image synthesis and image-to-image translation. Many variants of generative networks exist, and different GANs suit different applications. In this chapter, the authors investigated conditional generative adversarial networks for generating fake images, such as handwritten signatures. The authors demonstrated an implementation of conditional generative adversarial networks that can generate fake handwritten signatures according to a condition vector tailored by humans.
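The conditioning mechanism described above can be sketched minimally: a conditional GAN generator receives the latent noise and the human-specified condition vector jointly, typically by concatenation. The latent size, number of classes, and one-hot encoding below are illustrative assumptions, not details from the chapter.

```python
import random

def one_hot(label, num_classes):
    """Encode a class label (e.g. which signer to imitate) as a one-hot condition vector."""
    v = [0.0] * num_classes
    v[label] = 1.0
    return v

def generator_input(z, condition):
    """A cGAN generator conditions its output by consuming noise and
    condition together, most simply via concatenation."""
    return z + condition  # list concatenation

random.seed(0)
z = [random.gauss(0, 1) for _ in range(4)]  # latent noise vector
c = one_hot(2, num_classes=5)               # hypothetical condition: class 2 of 5
x = generator_input(z, c)
print(len(x))  # 9 = 4 latent dims + 5 condition dims
```

In a real implementation the concatenated vector would be fed to the generator network; the same condition vector is usually also shown to the discriminator so it can judge whether a sample matches its condition.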


2022 ◽  
Vol 14 (1) ◽  
pp. 190
Author(s):  
Yuxiang Cai ◽  
Yingchun Yang ◽  
Qiyi Zheng ◽  
Zhengwei Shen ◽  
Yongheng Shang ◽  
...  

When segmenting massive amounts of remote sensing images collected from different satellites or geographic locations (cities), pre-trained deep learning models cannot always produce satisfactory predictions. To deal with this issue, domain adaptation has been widely utilized to enhance the generalization ability of segmentation models. Most existing domain adaptation methods based on image-to-image translation first transfer the source images to pseudo-target images and then adapt the classifier from the source domain to the target domain. However, these unidirectional methods suffer from two limitations: (1) they do not consider the inverse procedure and therefore cannot fully exploit the information from the other domain, which is also beneficial, as confirmed by our experiments; (2) they may fail in cases where transferring the source images to pseudo-target images is difficult. In this paper, to solve these problems, we propose BiFDANet, a novel framework for unsupervised bidirectional domain adaptation in the semantic segmentation of remote sensing images. It optimizes the segmentation models in two opposite directions. In the source-to-target direction, BiFDANet learns to transfer the source images to pseudo-target images and adapts the classifier to the target domain. In the opposite direction, BiFDANet transfers the target images to pseudo-source images and optimizes the source classifier. At the test stage, we make the best of the source classifier and the target classifier, which complement each other through a simple linear combination, further improving the performance of BiFDANet. Furthermore, we propose a new bidirectional semantic consistency loss that maintains semantic consistency during the bidirectional image-to-image translation process.
Experiments on two datasets comprising satellite images and aerial images demonstrate the superiority of our method over existing unidirectional methods.
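The test-time fusion of the two classifiers can be sketched as a per-class linear combination of their score vectors. The abstract only states that a simple linear combination is used, so the weight `lam` and the example scores below are illustrative assumptions.

```python
def combine_predictions(p_source, p_target, lam=0.5):
    """Linearly combine per-class scores from the source classifier
    (applied to the target image translated into the source domain)
    and the target classifier (applied to the original target image)."""
    assert len(p_source) == len(p_target)
    return [lam * s + (1.0 - lam) * t for s, t in zip(p_source, p_target)]

# Hypothetical per-class scores for one pixel from each classifier:
p_src = [0.7, 0.2, 0.1]
p_tgt = [0.5, 0.4, 0.1]
fused = combine_predictions(p_src, p_tgt, lam=0.5)
print(fused)
```

In a segmentation setting this combination would be applied per pixel before taking the arg-max over classes.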


2021 ◽  
Vol 29 (4) ◽  
Author(s):  
Dejan Štepec ◽  
Danijel Skočaj

Detection of visual anomalies refers to the problem of finding patterns in imaging data that do not conform to the expected visual appearance, and it is widely studied across many domains. Due to the nature of anomaly occurrences and the underlying generating processes, anomalies are hard to characterize and labelled data are difficult to obtain. Obtaining labelled data is especially difficult in biomedical applications, where only trained domain experts can provide labels, which are often diverse and highly complex. Recently presented approaches for unsupervised detection of visual anomalies eliminate the need for labelled data and demonstrate promising results in domains where anomalous samples deviate significantly from the normal appearance. Despite these promising results, the performance of such approaches still lags behind supervised approaches and does not provide a universal solution. In this work, we present an image-to-image translation-based framework that significantly surpasses the performance of existing unsupervised methods and approaches the performance of supervised methods in the challenging domain of cancerous region detection in histology imagery.
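Translation-based anomaly detectors of this kind typically score an image by the residual between the input and its translated reconstruction: a model trained only on normal data reconstructs normal regions well and anomalous regions poorly. A minimal sketch, assuming a mean-absolute-difference score (the paper's exact scoring function is not given here):

```python
def anomaly_score(image, reconstruction):
    """Mean absolute residual between an input image and its translated
    reconstruction; higher values suggest anomalous content."""
    diffs = [abs(a - b)
             for row_a, row_b in zip(image, reconstruction)
             for a, b in zip(row_a, row_b)]
    return sum(diffs) / len(diffs)

# Toy 2x2 grayscale patches (values in [0, 1]):
patch        = [[0.5, 0.5], [0.5, 0.5]]
recon_normal = [[0.5, 0.5], [0.5, 0.5]]   # faithful reconstruction
recon_anom   = [[0.5, 0.5], [0.5, 0.9]]   # one poorly reconstructed pixel
print(anomaly_score(patch, recon_normal))  # 0.0
```

A threshold on this score (chosen on validation data) would then separate normal from anomalous samples or regions.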


2021 ◽  
Author(s):  
Li-Yu Chen ◽  
I-Chao Shen ◽  
Bing-Yu Chen
Keyword(s):  

Electronics ◽  
2021 ◽  
Vol 10 (24) ◽  
pp. 3066
Author(s):  
Do-Yeon Hwang ◽  
Seok-Hwan Choi ◽  
Jinmyeong Shin ◽  
Moonkyu Kim ◽  
Yoon-Ho Choi

In this paper, we propose a new deep learning-based image translation method that predicts and generates post-surgery images from images taken before hair transplant surgery. Because existing image translation models use a naive strategy that trains on the whole distribution of the translation, models that take the original image as input convert not only the hair transplant surgery region, which is the region of interest (ROI) for image translation, but also the other image regions outside the ROI. To solve this problem, we propose a novel generative adversarial network (GAN)-based ROI image translation method that converts only the ROI and leaves the non-ROI regions of the image unchanged. Specifically, by performing image translation and image segmentation independently, the proposed method generates predictive images from the distribution of post-surgery images and specifies the ROI to be used in the generated images. In addition, by applying an ensemble method to image segmentation, we obtain a more robust method that compensates for the shortcomings of individual image segmentation models. Experimental results on a real medical image dataset (1394 images before and 896 images after hair transplantation) used to train the GAN model show that the proposed GAN-based ROI image translation method outperformed other GAN-based image translation methods by, on average, 23% in SSIM (Structural Similarity Index Measure), 452% in IoU (Intersection over Union), and 42% in FID (Frechet Inception Distance). Furthermore, the proposed ensemble method not only improves ROI detection performance but also shows consistent performance in generating better predictive images from preoperative images taken from diverse angles.
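The ROI-restricted translation described above amounts to a per-pixel composite: generated pixels are kept only where the segmentation mask marks the ROI, and original pixels are kept everywhere else. The binary mask and pixel values below are illustrative, not taken from the paper.

```python
def composite_roi(original, generated, mask):
    """Per-pixel composite: take the generated (post-surgery) pixel where
    the segmentation mask marks the ROI (mask == 1), otherwise keep the
    original pixel."""
    return [[g if m else o
             for o, g, m in zip(o_row, g_row, m_row)]
            for o_row, g_row, m_row in zip(original, generated, mask)]

original  = [[10, 10], [10, 10]]   # toy image before surgery
generated = [[99, 99], [99, 99]]   # toy GAN prediction after surgery
mask      = [[1, 0], [0, 0]]       # ROI covers only the top-left pixel
print(composite_roi(original, generated, mask))  # [[99, 10], [10, 10]]
```

An ensemble of segmentation models, as the abstract describes, would produce the mask, for example by majority vote over the individual models' per-pixel predictions.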

