Visual saliency detection in image and video data

2013 ◽  
Author(s):  
Ye Luo

2012 ◽  
Vol 48 (25) ◽  
pp. 1591-1593 ◽  
Author(s):  
Di Wu ◽  
Xiudong Sun ◽  
Yongyuan Jiang ◽  
Chunfeng Hou

IEEE Access ◽  
2018 ◽  
Vol 6 ◽  
pp. 71422-71434 ◽  
Author(s):  
Zhenguo Gao ◽  
Naeem Ayoub ◽  
Danjie Chen ◽  
Bingcai Chen ◽  
Zhimao Lu

Author(s):  
Monika Singh ◽  
Anand Singh Jalal ◽  
Ruchira Manke ◽  
Aamir Khan

Saliency detection has always been a challenging and interesting research area. Existing methodologies focus on either the foreground or the background regions of an image by computing low-level features. However, low-level features alone do not produce satisfactory results. In this paper, low-level features, extracted using superpixels, are combined with high-level priors. The background features serve as the low-level prior, exploiting the observation that background areas are similar to the image boundary, interconnected with it, and separated from it by only a small distance. High-level priors such as location, color, and semantic priors are then incorporated with the low-level prior to highlight the salient region of the image. The experimental results illustrate that the proposed approach outperforms state-of-the-art methods.
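The boundary-based background prior described in the abstract can be illustrated with a small sketch. This is a hypothetical pure-Python toy on a grayscale grid, not the paper's method: it scores each pixel by its minimum color distance to the image boundary, so regions resembling the boundary (assumed background) get low saliency. The actual approach operates on superpixels (e.g. SLIC) rather than raw pixels.

```python
# Hypothetical sketch of a boundary (background) prior for saliency.
# Assumption: pixels whose color is close to the image boundary are background.

def boundary_prior_saliency(img):
    """img: 2D list of grayscale values in [0, 255].
    Returns a same-shaped saliency map in [0, 1]: each pixel's minimum
    color distance to any boundary pixel, normalized by 255."""
    h, w = len(img), len(img[0])
    # Collect all boundary pixel values (top, bottom, left, right edges).
    boundary = [img[0][x] for x in range(w)] + [img[h - 1][x] for x in range(w)]
    boundary += [img[y][0] for y in range(1, h - 1)]
    boundary += [img[y][w - 1] for y in range(1, h - 1)]
    # Saliency = smallest color difference to the boundary, normalized.
    return [[min(abs(img[y][x] - b) for b in boundary) / 255.0
             for x in range(w)] for y in range(h)]

# A dark background with one bright central "object": the object pops out.
img = [[10] * 5 for _ in range(5)]
img[2][2] = 200
sal = boundary_prior_saliency(img)  # sal[2][2] is high, boundary pixels are 0
```

In a real pipeline the same scoring would run per superpixel, and the result would be fused with the location, color, and semantic priors mentioned above.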


IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 121330-121343 ◽  
Author(s):  
Alessandro Bruno ◽  
Francesco Gugliuzza ◽  
Roberto Pirrone ◽  
Edoardo Ardizzone

Information ◽  
2019 ◽  
Vol 10 (8) ◽  
pp. 257 ◽  
Author(s):  
Bashir Ghariba ◽  
Mohamed S. Shehata ◽  
Peter McGuire

Human eye movement is one of the most important functions for understanding our surroundings. When the human eye processes a scene, it quickly focuses on its dominant parts, a process commonly known as visual saliency detection or visual attention prediction. Recently, neural networks have been used to predict visual saliency. This paper proposes a deep learning encoder-decoder architecture, based on a transfer learning technique, to predict visual saliency. In the proposed model, visual features are extracted through convolutional layers from raw images. In addition, the model uses the VGG-16 network for semantic segmentation, with a pixel classification layer that predicts the categorical label of every pixel in an input image. The model is evaluated on several datasets, including TORONTO, MIT300, MIT1003, and DUT-OMRON. Its results are compared quantitatively and qualitatively to classic and state-of-the-art deep learning models, achieving a global accuracy of up to 96.22% for visual saliency prediction.
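The encoder-decoder idea behind this kind of saliency predictor can be sketched minimally. This is an illustrative assumption-laden toy, not the paper's VGG-16 model: the encoder downsamples the image into a coarse representation, and the decoder upsamples it back to a full-resolution saliency map. Real models learn both stages with convolutions; here we use fixed 2x2 average pooling and nearest-neighbour upsampling to show the shape of the architecture.

```python
# Toy encoder-decoder: encode() downsamples, decode() restores resolution.
# Learned convolutions are replaced by fixed pooling/upsampling for clarity.

def encode(img):
    """2x2 average pooling: halves each spatial dimension."""
    h, w = len(img), len(img[0])
    return [[(img[2 * y][2 * x] + img[2 * y][2 * x + 1] +
              img[2 * y + 1][2 * x] + img[2 * y + 1][2 * x + 1]) / 4.0
             for x in range(w // 2)] for y in range(h // 2)]

def decode(fmap):
    """Nearest-neighbour upsampling: doubles each spatial dimension."""
    return [[fmap[y // 2][x // 2] for x in range(2 * len(fmap[0]))]
            for y in range(2 * len(fmap))]

img = [[float((x + y) % 2) for x in range(4)] for y in range(4)]
out = decode(encode(img))  # same 4x4 shape as the input
```

A trained network would replace `encode` with pretrained VGG-16 convolutional blocks (the transfer-learning step) and `decode` with learned upsampling layers ending in a per-pixel prediction.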


2010 ◽  
Vol 17 (8) ◽  
pp. 739-742 ◽  
Author(s):  
Junchi Yan ◽  
Mengyuan Zhu ◽  
Huanxi Liu ◽  
Yuncai Liu
