Computational Photography
Recently Published Documents


TOTAL DOCUMENTS: 128 (five years: 27)

H-INDEX: 7 (five years: 1)

Author(s):  
Lumin Liu

Removing undesired reflections from a single image is in demand for computational photography. Reflection removal methods have become increasingly effective thanks to the rapid development of deep neural networks. However, current reflection removal methods usually leave salient reflection residues, owing to the challenge of recognizing diverse reflection patterns. In this paper, we present a one-stage, end-to-end reflection removal framework that considers both low-level information correlation and efficient feature separation. Our approach employs the criss-cross attention mechanism to extract low-level features and to efficiently enhance contextual correlation. To thoroughly remove reflection residues from the background image, we penalize similar texture features by contrasting parallel feature separation networks, so that unrelated textures in the background image can be progressively separated during model training. Experiments on both real-world and synthetic datasets show that our approach reaches state-of-the-art performance both quantitatively and qualitatively.
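
As a loose illustration of the attention component named above, the following NumPy sketch computes criss-cross attention over a toy feature map: each position attends only to the positions in its own row and column, which propagates context along both axes at a fraction of the cost of full self-attention. The paper's actual module runs inside a deep network with learned query/key/value projections; the shapes, the missing projections, and the double-counted centre pixel here are simplifications, not details from the paper.

```python
import numpy as np

def criss_cross_attention(feat):
    """Toy criss-cross attention over a (H, W, C) feature map.

    Each output position is a softmax-weighted sum over the features in
    its row and column (scaled dot-product scores against its own
    feature as the query). No learned projections; illustration only.
    """
    H, W, C = feat.shape
    out = np.empty_like(feat)
    for i in range(H):
        for j in range(W):
            q = feat[i, j]                             # query at (i, j)
            keys = np.concatenate([feat[i, :, :],      # W row positions
                                   feat[:, j, :]])     # H column positions
            scores = keys @ q / np.sqrt(C)             # scaled dot products
            weights = np.exp(scores - scores.max())
            weights /= weights.sum()
            out[i, j] = weights @ keys                 # weighted sum of values
    return out

# toy usage: an 8x8 feature map with 4 channels
feat = np.random.rand(8, 8, 4).astype(np.float32)
enhanced = criss_cross_attention(feat)
print(enhanced.shape)  # (8, 8, 4)
```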


2021 ◽  
Author(s):  
Vivek Ramakrishnan ◽  
D. J. Pete

Combining images with different exposure settings is of prime importance in the field of computational photography. Both transform-domain and filtering-based approaches can fuse multiple exposure images to obtain a well-exposed image. We propose a Discrete Cosine Transform (DCT)-based approach for fusing multiple exposure images. The input image stack is processed in the transform domain by an averaging operation, and the inverse transform of the averaged coefficients yields the fused image. Our experimental observations lead us to the conjecture that the DCT coefficients act as indicators of well-exposedness, contrast, and saturation, the measures used in the traditional exposure fusion approach, and that the averaging corresponds to assigning equal weights to the DCT coefficients in this non-parametric, non-pyramidal approach to fusing the multiple-exposure stack.
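
The pipeline described above is simple enough to sketch directly: forward DCT of each exposure, equal-weight averaging of the coefficients, inverse DCT. This minimal version uses SciPy's dctn/idctn; the grayscale inputs, image sizes, and synthetic exposure stack are assumptions for illustration, not details from the paper.

```python
import numpy as np
from scipy.fft import dctn, idctn

def fuse_exposures_dct(stack):
    """Fuse a multi-exposure stack by averaging 2-D DCT coefficients.

    stack: list of grayscale images (H, W) in [0, 1] at different
    exposures. Averaging in the transform domain assigns equal weight
    to every coefficient, as in the non-parametric approach above.
    """
    coeffs = [dctn(img, norm='ortho') for img in stack]  # forward 2-D DCT
    fused_coeffs = np.mean(coeffs, axis=0)               # equal-weight average
    return idctn(fused_coeffs, norm='ortho')             # back to image domain

# toy usage: three synthetic exposures of the same scene
base = np.clip(np.random.rand(64, 64), 0, 1)
stack = [np.clip(base * g, 0, 1) for g in (0.5, 1.0, 2.0)]
fused = fuse_exposures_dct(stack)
print(fused.shape)  # (64, 64)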


2021 ◽  
Vol 7 (1) ◽  
pp. 571-604
Author(s):  
Mauricio Delbracio ◽  
Damien Kelly ◽  
Michael S. Brown ◽  
Peyman Milanfar

The first mobile camera phone was sold only 20 years ago, when taking pictures with one's phone was an oddity, and sharing pictures online was unheard of. Today, the smartphone is more camera than phone. How did this happen? This transformation was enabled by advances in computational photography—the science and engineering of making great images from small-form-factor, mobile cameras. Modern algorithmic and computing advances, including machine learning, have changed the rules of photography, bringing to it new modes of capture, postprocessing, storage, and sharing. In this review, we give a brief history of mobile computational photography and describe some of the key technological components, including burst photography, noise reduction, and super-resolution. At each step, we can draw naive parallels to the human visual system.
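
As a hedged illustration of why the burst capture mentioned above reduces noise, the sketch below merges an already-aligned burst by simple averaging: for N frames with independent zero-mean noise of standard deviation sigma, the merged frame's noise drops to roughly sigma divided by the square root of N. Production burst pipelines add frame alignment and robust per-pixel weighting, both omitted here.

```python
import numpy as np

def merge_burst(burst):
    """Average an aligned burst of noisy frames (perfect alignment assumed)."""
    return np.mean(np.stack(burst), axis=0)

# toy usage: 8 noisy captures of the same static scene
clean = np.tile(np.linspace(0, 1, 64), (64, 1))
burst = [clean + np.random.normal(0, 0.1, clean.shape) for _ in range(8)]
merged = merge_burst(burst)
print(np.std(burst[0] - clean), np.std(merged - clean))  # ~0.1 vs ~0.035
```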


2021 ◽  
Author(s):  
ANDO Shizutoshi

Edge-preserving filters smooth an image while preserving edges and their information, reducing edge-blurring artifacts such as halos and phantoms. They are nonlinear in nature. Examples include the bilateral filter, the anisotropic diffusion filter, the guided filter, and the trilateral filter. This family of filters is therefore very useful for reducing noise in an image, making it in high demand for computer vision and computational photography applications such as denoising, video abstraction, demosaicing, optical-flow estimation, stereo matching, tone mapping, style transfer, and relighting. This paper provides a concrete introduction to edge-preserving filters, from the early heat diffusion equation to the present; an overview of their numerous applications; and a mathematical analysis of various efficient and optimized implementations and their interrelationships, with a focus on preserving boundaries, spikes, and canyons in the presence of noise. Furthermore, it outlines practical considerations for efficient implementation, with scope for further acceleration through hardware realization.
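
To make the idea concrete, here is a brute-force bilateral filter, one of the family members named above, in plain NumPy: each output pixel is a weighted mean of its neighbours, where the weight combines spatial closeness and intensity similarity, so large intensity jumps receive near-zero weight and edges survive the smoothing. The parameter values and the grayscale assumption are illustrative only.

```python
import numpy as np

def bilateral_filter(img, radius=3, sigma_s=2.0, sigma_r=0.1):
    """Brute-force bilateral filter for a grayscale image in [0, 1].

    sigma_s controls the spatial Gaussian (closeness), sigma_r the
    range Gaussian (intensity similarity); pixels across a strong edge
    get tiny range weights, which is what preserves the edge.
    """
    H, W = img.shape
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))  # fixed spatial kernel
    padded = np.pad(img, radius, mode='edge')
    out = np.empty_like(img)
    for i in range(H):
        for j in range(W):
            patch = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            range_w = np.exp(-(patch - img[i, j])**2 / (2 * sigma_r**2))
            w = spatial * range_w
            out[i, j] = (w * patch).sum() / w.sum()
    return out

# toy usage: a noisy step edge stays sharp while flat regions are smoothed
img = np.clip(np.tile(np.repeat([0.2, 0.8], 32), (64, 1))
              + np.random.normal(0, 0.05, (64, 64)), 0, 1)
smoothed = bilateral_filter(img)
```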


2021 ◽  
Vol 43 (7) ◽  
pp. 2175-2178
Author(s):  
Yoav Y. Schechner ◽  
Kavita Bala ◽  
Ori Katz ◽  
Kalyan Sunkavalli ◽  
Ko Nishino

Sensors ◽  
2021 ◽  
Vol 21 (5) ◽  
pp. 1815
Author(s):  
Ke Xian ◽  
Juewen Peng ◽  
Chao Zhang ◽  
Hao Lu ◽  
Zhiguo Cao

Shallow depth-of-field (DoF), which focuses on the region of interest by blurring out the rest of the image, is challenging in computer vision and computational photography. It can be achieved either by adjusting the parameters (e.g., aperture and focal length) of a single-lens reflex camera or by computational techniques. In this paper, we investigate the latter, i.e., we explore a computational method to render shallow DoF. Previous methods rely on either portrait segmentation or stereo sensing; the former can only be applied to portrait photos, and the latter requires stereo inputs. To address these issues, we study the problem of rendering shallow DoF from an arbitrary image. In particular, we propose a method that consists of a salient object detection (SOD) module, a monocular depth prediction (MDP) module, and a DoF rendering module. The SOD module determines the focal plane, while the MDP module controls the degree of blur. Specifically, we introduce a label-guided ranking loss for both salient object detection and depth prediction. For salient object detection, the label-guided ranking loss comprises two terms: (i) a heterogeneous ranking loss that encourages sampled salient pixels to differ from background pixels; and (ii) a homogeneous ranking loss that penalizes inconsistency among salient pixels or among background pixels. For depth prediction, the label-guided ranking loss relies mainly on multilevel structural information, i.e., from low-level edge maps to high-level object instance masks. In addition, we introduce an SOD- and depth-aware blur rendering method to generate shallow-DoF images. Comprehensive experiments demonstrate the effectiveness of our proposed method.
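
The following sketch illustrates the overall recipe described above (saliency picks the focal plane, depth controls the blur) with a simple layered Gaussian renderer. It is not the paper's rendering module: the layer count, blur scale, inputs, and the saliency-weighted focal depth are all assumptions made for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def render_shallow_dof(img, depth, saliency, max_sigma=6.0, n_layers=6):
    """Depth-aware blur: saliency sets the focal plane, depth the blur.

    img: (H, W) grayscale image; depth, saliency: (H, W) maps in [0, 1].
    Pixels whose depth is far from the focal depth are taken from
    increasingly blurred copies of the image (a layered approximation).
    """
    focal = (depth * saliency).sum() / max(saliency.sum(), 1e-6)  # focal depth
    sigma = max_sigma * np.abs(depth - focal)                     # per-pixel blur
    levels = np.linspace(0, max_sigma, n_layers)
    blurred = [img if s == 0 else gaussian_filter(img, s) for s in levels]
    idx = np.clip(np.digitize(sigma, levels) - 1, 0, n_layers - 1)
    out = np.empty_like(img)
    for k in range(n_layers):
        mask = idx == k
        out[mask] = blurred[k][mask]
    return out

# toy usage: a depth ramp with a "subject" band kept in focus
H = W = 64
img = np.random.rand(H, W)
depth = np.tile(np.linspace(0, 1, W), (H, 1))
saliency = np.zeros((H, W)); saliency[:, 20:30] = 1.0
dof = render_shallow_dof(img, depth, saliency)
```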


Author(s):  
Michael R. Peres

Author(s):  
Mahesh Manik Kumbhar ◽  
Bhalchandra B. Godbole

The revolution in computational photography and computer vision provides fast and reliable information about scene quality and visual perception, and is increasingly used in fields such as public safety, traffic accident analysis, crime forensics, remote sensing, and military surveillance. We investigate the dehazing of scenes affected by weather phenomena. Dehazing has emerged as a promising technology for recovering a clear image or video from an input hazy scene, significantly enhancing its quality. A scene captured outdoors is degraded by haze such as fog, mist, and dust particles in the atmosphere. We employ a dehazing algorithm to remove this unwanted haze from images, recorded video, and real-time video, using a novel method of video dehazing based on contrast enhancement. Observing that hazy images and videos have low contrast, we estimate a transmission map that maximizes the contrast of the output scene. We use a depth estimation process to identify hidden parameters of the scene and to create a corresponding haze scene with high fidelity. Finally, we reconstruct the scene without changing its original content. This yields dehazing with fewer artifacts and better coding efficiency, and we demonstrate that the proposed algorithm removes haze efficiently and recovers the parameters of the original scene.
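
As a rough illustration of transmission-based restoration, the sketch below estimates the transmission map with the widely used dark-channel prior rather than the paper's contrast-maximization criterion, then inverts the standard haze model I = J·t + A·(1 − t). Both estimators plug into the same restoration formula; all parameter values here are assumptions.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dehaze(img, patch=15, omega=0.95, t0=0.1):
    """Restore J from the haze model I = J*t + A*(1 - t).

    img: (H, W, 3) RGB in [0, 1]. Transmission t is estimated via the
    dark-channel prior; t0 floors t to avoid amplifying noise.
    """
    # dark channel: min over colour channels, then a local minimum filter
    dark = minimum_filter(img.min(axis=2), size=patch)
    # atmospheric light A: mean colour of the ~0.1% haziest pixels
    n = max(1, dark.size // 1000)
    idx = dark.ravel().argsort()[-n:]
    A = img.reshape(-1, 3)[idx].mean(axis=0)
    # transmission estimate and inversion of the haze model
    t = np.clip(1.0 - omega * dark, t0, 1.0)[..., None]
    return np.clip((img - A) / t + A, 0.0, 1.0)

# toy usage: synthesize haze on a random scene, then restore it
J = np.random.rand(64, 64, 3)
t_true = 0.6
I = J * t_true + 0.9 * (1 - t_true)   # hazy observation with grey airlight
restored = dehaze(I)
```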

