image separation
Recently Published Documents

Total documents: 188 (five years: 28)
H-index: 20 (five years: 3)

2021 ◽  
Vol 11 (20) ◽  
pp. 9416
Author(s):  
Fei Jia ◽  
Jindong Xu ◽  
Xiao Sun ◽  
Yongli Ma ◽  
Mengying Ni

To solve the challenge of single-channel blind image separation (BIS) caused by the lack of prior knowledge during the separation process, we propose a BIS method based on cascaded generative adversarial networks (GANs). To ensure that the proposed method performs well in different scenarios and to address the problem of an insufficient number of training samples, a synthesis network is added to the separation network. The method is composed of two GANs: a U-shaped GAN (UGAN), which learns image synthesis, and a pixel-to-attention GAN (PAGAN), which learns image separation. The two networks jointly complete the task of image separation. UGAN uses the unpaired mixed image and the unmixed image to learn the mixing style, thereby generating images with "true" mixing characteristics, which addresses the shortage of training samples for the PAGAN. A self-attention mechanism is added to the PAGAN to quickly extract important features from the image data. The experimental results show that the proposed method achieves good results on both synthetic image datasets and real remote sensing image datasets. Moreover, it can be used for image separation in different scenarios that lack prior knowledge and training samples.
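
As a rough illustration of the self-attention idea mentioned in this abstract, the sketch below shows a SAGAN-style attention block in PyTorch. The actual PAGAN attention layer is not specified here, so the layer shapes and the learned blending weight are assumptions rather than the authors' design.

```python
import torch
import torch.nn as nn

class SelfAttention2d(nn.Module):
    """SAGAN-style self-attention over feature maps (illustrative sketch only;
    the PAGAN attention block may differ in detail)."""
    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, 1)
        self.key   = nn.Conv2d(channels, channels // 8, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned blending weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).view(b, -1, h * w).permute(0, 2, 1)   # B x HW x C'
        k = self.key(x).view(b, -1, h * w)                       # B x C' x HW
        attn = torch.softmax(torch.bmm(q, k), dim=-1)            # B x HW x HW
        v = self.value(x).view(b, -1, h * w)                     # B x C x HW
        out = torch.bmm(v, attn.permute(0, 2, 1)).view(b, c, h, w)
        return self.gamma * out + x                              # residual blend
```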


Author(s):  
J. Samuel Manoharan

Forgeries have recently become more prevalent in society as a result of improvements in media generation technologies. Modern technology makes it possible to create a forged version of a single image obtained from a social network in real time. Forgery detection algorithms have been created for a variety of areas; however, they quickly become obsolete as new attack types emerge. This paper presents a unique image forgery detection strategy based on deep learning algorithms. The proposed approach employs a convolutional neural network (CNN) to produce histogram representations from input RGB color images, which are then utilized to detect image forgeries. With the image separation method and copy-move detection applications in mind, the proposed CNN is combined with an intelligent approach and histogram mapping. It is used to detect fake or true images at the initial stage of the proposed work, and it is specially designed to perform feature extraction for image layer separation with the help of the CNN model. To capture spatial and histogram information together with the likelihood of presence, we use vectors in our dynamic capsule networks to detect the forgery kernels from reference images. The proposed research work efficiently integrates this intelligence with a feature engineering approach; both are well known and effective in the identification of forged images. The performance metrics such as accuracy, recall, precision, and half total error rate (HTER) are computed and tabulated with graph plots.
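
To illustrate the histogram representation the abstract describes as the CNN's input, here is a minimal sketch that turns an RGB image into per-channel normalized histograms. The bin count and normalization are assumptions, not the paper's settings.

```python
import numpy as np

def rgb_histogram_features(image, bins=64):
    """Per-channel intensity histograms as a compact representation for a CNN
    (illustrative sketch; bins=64 and density normalization are assumptions)."""
    feats = []
    for ch in range(3):
        hist, _ = np.histogram(image[..., ch], bins=bins,
                               range=(0, 256), density=True)
        feats.append(hist.astype(np.float32))
    return np.stack(feats)  # shape: (3, bins)
```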


Author(s):  
Ofer M Springer ◽  
Eran O Ofek

Lensed quasars and supernovae can be used to study galaxies' gravitational potential and measure cosmological parameters. The typical image separation of objects lensed by galaxies is of the order of 0.5″. Finding the ones with small separations and measuring their time delays using ground-based observations are challenging. We suggest a new method to identify lensed quasars and simultaneously measure their time delays using seeing-limited synoptic observations in which the lensed quasar images and the lensing galaxy are unresolved. We show that, using the light curve of the combined flux and the astrometric measurements of the center-of-light position of the lensed images, the lensed nature of a quasar can be identified and its time delay can be measured. We provide the analytic formalism to do so, taking into account the measurement errors and the fact that the power spectra of quasar light curves are red. We demonstrate our method on simulated data, while its implementation on real data will be presented in future papers. Our simulations suggest that, under reasonable assumptions, the new method has the potential to detect unresolved lensed quasars and measure their time delays, even when the image separation is about 0.2″, or the flux ratio between the faintest and brightest images is as low as 0.05. Python and MATLAB implementations are provided. In a companion paper, we present a method for measuring the time delay using the combined flux observations. This method may be useful in cases in which the astrometric information is not relevant (e.g., reverberation mapping).
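
A toy simulation can illustrate the two observables the method relies on: the combined flux and the flux-weighted center-of-light position of the unresolved images. The light-curve shape, delay, and flux ratio below are arbitrary placeholders and do not reproduce the paper's red-noise formalism or analytic treatment.

```python
import numpy as np

def center_of_light(f1, f2, x1, x2):
    """Flux-weighted center-of-light of two unresolved lensed images
    (illustrative of the astrometric signal; inputs are hypothetical)."""
    return (f1 * x1 + f2 * x2) / (f1 + f2)

# Toy example: image 2 is a delayed, scaled copy of image 1's light curve.
t = np.arange(0.0, 200.0, 1.0)                  # time in days
f1 = 1.0 + 0.1 * np.sin(2 * np.pi * t / 50.0)   # toy light curve (not a red-noise model)
delay, flux_ratio = 20.0, 0.5                   # assumed values for illustration
f2 = flux_ratio * np.interp(t - delay, t, f1)
x1, x2 = -0.1, 0.1                              # image positions (arcsec): 0.2" separation

combined_flux = f1 + f2                          # what unresolved photometry measures
astrometric_wobble = center_of_light(f1, f2, x1, x2)  # what astrometry measures
```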


Author(s):  
Ashik Shiby

By definition, a currency is an agreed-upon medium of exchange, the national currency being the legal tender issued by the controlling authority. Throughout history, issuers have faced one common threat: counterfeiting. In recent years, fake banknotes have been printed, resulting in significant losses and damage to society. It therefore becomes necessary to build a tool for verifying money. This research project proposes a way to examine counterfeit banknotes circulating in our country through their images. After an image is selected, it is pre-processed: the acquired image is cropped, smoothed, and adjusted, and then converted to grayscale. After conversion, image separation is applied, features are extracted and reduced, and finally the note in the picture is classified as real or fake. Counterfeit money has been a major problem in the market. Currency counting machines are available in banks and other trading venues to check financial authenticity, but most people do not have access to such systems, which is why there is a need for counterfeit detection software that ordinary people can use. The proposed framework uses image processing to determine whether money is real or counterfeit. The research project is built entirely in the Python programming language and includes methods such as grayscale conversion, edge detection, segmentation, etc.
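
Since the abstract names the pipeline steps and states that the project is written in Python, the sketch below strings those steps together with OpenCV. The filter sizes and thresholds are assumptions for illustration, not the paper's settings.

```python
import cv2

def preprocess_banknote(path):
    """Pre-processing pipeline sketched from the abstract: grayscale conversion,
    smoothing, edge detection, and a simple threshold standing in for the
    separation step (parameter values are assumptions)."""
    image = cv2.imread(path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)        # grayscale conversion
    smoothed = cv2.GaussianBlur(gray, (5, 5), 0)           # smoothing / adjustment
    edges = cv2.Canny(smoothed, 50, 150)                    # edge detection
    _, mask = cv2.threshold(smoothed, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # separation mask
    return gray, edges, mask
```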


2021 ◽  
Vol 32 (3) ◽  
pp. 339-355
Author(s):  
M. Jyothirmayi ◽  
S. Sethu Selvi ◽  
P. A. Dinesh

2021 ◽  
Author(s):  
Aryan Khodabandeh

X-ray Computed Tomography (CT) scans, while useful, emit harmful radiation which is why low-dose image acquisition is desired. However, noise corruption in these cases is a difficult obstacle. CT image denoising is a challenging topic because of the difficulty in modeling noise. In this study, we propose taking an image decomposition approach to removing noise from low-dose CT images. We model the image as the superposition of a structure layer and a noise layer. Total Variation (TV) minimization is used to learn two dictionaries to represent each layer independently, and sparse coding is used to separate them. Finally, an iterative post-processing stage is introduced that uses image-adapted curvelet dictionaries to recover blurred edges. Our results demonstrate that image separation is a viable alternative to the classic K-SVD denoising method.
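
The two-layer separation idea can be sketched with generic sparse coding over two learned dictionaries. The snippet below is a scikit-learn stand-in under that assumption; it does not reproduce the authors' TV-regularized dictionary learning or the curvelet post-processing stage.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, SparseCoder
from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d

def separate_layers(noisy, structure_examples, noise_examples, patch=8, atoms=64, k=4):
    """Separate a noisy CT image into structure and noise layers by sparse
    coding over two dictionaries (illustrative stand-in, not the paper's method)."""
    def learn_dict(images):
        patches = np.concatenate(
            [extract_patches_2d(im, (patch, patch), max_patches=2000) for im in images])
        patches = patches.reshape(len(patches), -1)
        patches -= patches.mean(axis=1, keepdims=True)
        return MiniBatchDictionaryLearning(n_components=atoms, alpha=1.0).fit(patches).components_

    d_struct = learn_dict(structure_examples)   # dictionary for the structure layer
    d_noise = learn_dict(noise_examples)        # dictionary for the noise layer
    d_joint = np.vstack([d_struct, d_noise])

    patches = extract_patches_2d(noisy, (patch, patch))
    flat = patches.reshape(len(patches), -1)
    means = flat.mean(axis=1, keepdims=True)
    codes = SparseCoder(dictionary=d_joint, transform_algorithm='omp',
                        transform_n_nonzero_coefs=k).transform(flat - means)

    # Each layer is rebuilt only from its own sub-dictionary's coefficients.
    struct_patches = (codes[:, :atoms] @ d_struct + means).reshape(patches.shape)
    noise_patches = (codes[:, atoms:] @ d_noise).reshape(patches.shape)
    return (reconstruct_from_patches_2d(struct_patches, noisy.shape),
            reconstruct_from_patches_2d(noise_patches, noisy.shape))
```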


Sensors ◽  
2021 ◽  
Vol 21 (5) ◽  
pp. 1873
Author(s):  
Xiao Xiao ◽  
Fan Yang ◽  
Amir Sadovnik

Blur detection, which aims to separate the blurred and clear regions of an image, is widely used in many important computer vision tasks such as object detection, semantic segmentation, and face recognition, and has attracted increasing attention from researchers and industry in recent years. To improve the quality of the image separation, many researchers have devoted enormous effort to extracting features from images at various scales. However, how to extract blur features and fuse these features synchronously remains a major challenge. In this paper, we regard blur detection as an image segmentation problem. Inspired by the success of the U-net architecture for image segmentation, we propose a multi-scale dilated convolutional neural network called MSDU-net. In this model, we design a group of multi-scale feature extractors with dilated convolutions to extract texture information at different scales at the same time. The U-shape architecture of the MSDU-net fuses the different-scale texture features and the generated semantic features to support the image segmentation task. We conduct extensive experiments on two classic public benchmark datasets and show that the MSDU-net outperforms other state-of-the-art blur detection approaches.
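
A minimal PyTorch sketch of the multi-scale dilated feature extractor idea follows: parallel dilated convolutions capture texture at several receptive-field sizes and a 1x1 convolution fuses them. The dilation rates and channel widths are assumptions, not the MSDU-net configuration.

```python
import torch
import torch.nn as nn

class MultiScaleDilatedBlock(nn.Module):
    """Parallel dilated convolutions with a 1x1 fusion layer (illustrative
    sketch of a multi-scale extractor; hyperparameters are assumptions)."""
    def __init__(self, in_ch, out_ch, dilations=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=d, dilation=d),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True))
            for d in dilations])
        self.fuse = nn.Conv2d(out_ch * len(dilations), out_ch, kernel_size=1)

    def forward(self, x):
        # Extract features at all scales in parallel, then fuse channel-wise.
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))
```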

