What do adversarial images tell us about human vision?

eLife, 2020, Vol 9
Author(s): Marin Dujmović, Gaurav Malhotra, Jeffrey S. Bowers

Deep convolutional neural networks (DCNNs) are frequently described as the best current models of human and primate vision. An obvious challenge to this claim is the existence of adversarial images that fool DCNNs but are uninterpretable to humans. However, recent research has suggested that there may be similarities in how humans and DCNNs interpret these seemingly nonsense images. We reanalysed data from a high-profile paper and conducted five experiments controlling for different ways in which these images can be generated and selected. We show that human-DCNN agreement is much weaker and more variable than previously reported, and that the weak agreement is contingent on the choice of adversarial images and the design of the experiment. Indeed, we find there are well-known methods of generating images for which humans show no agreement with DCNNs. We conclude that adversarial images still pose a challenge to theorists using DCNNs as models of human vision.
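As context for the "well-known methods of generating images" the authors mention, the sketch below shows one such standard method, the fast gradient sign method (FGSM; Goodfellow et al., 2015), in PyTorch. The pretrained model, input tensor, and epsilon value are illustrative assumptions, not the stimuli or models used in the paper.

```python
# A minimal FGSM sketch: perturb an image in the direction that most
# increases the classifier's loss on its true label.
import torch
import torchvision.models as models

model = models.resnet50(weights="IMAGENET1K_V1").eval()

def fgsm_attack(image, true_label, epsilon=0.03):
    """Return an adversarially perturbed copy of `image`."""
    image = image.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(image), true_label)
    loss.backward()
    # Step along the sign of the gradient, then clamp back to valid pixels.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

# Usage: `x` is a (1, 3, 224, 224) image in [0, 1]; `y` is its ImageNet label.
x = torch.rand(1, 3, 224, 224)
y = torch.tensor([281])  # e.g. "tabby cat" (illustrative)
x_adv = fgsm_attack(x, y)
```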

2020
Author(s): Marin Dujmović, Gaurav Malhotra, Jeffrey Bowers

Deep convolutional neural networks (DCNNs) are frequently described as promising models of human and primate vision. An obvious challenge to this claim is the existence of adversarial images that fool DCNNs but are uninterpretable to humans. However, recent research has suggested that there may be similarities in how humans and DCNNs interpret these seemingly nonsense images. In this study, we reanalysed data from a high-profile paper and conducted four experiments controlling for different ways in which these images can be generated and selected. We show that agreement between humans and DCNNs is much weaker and more variable than previously reported, and that the weak agreement is contingent on the choice of adversarial images and the design of the experiment. Indeed, it is easy to generate images with no agreement. We conclude that adversarial images still challenge the claim that DCNNs constitute promising models of human and primate vision.


2019
Author(s): Marek A. Pedziwiatr, Matthias Kümmerer, Thomas S.A. Wallis, Matthias Bethge, Christoph Teufel

Eye movements are vital for human vision, and it is therefore important to understand how observers decide where to look. Meaning maps (MMs), a technique to capture the distribution of semantic importance across an image, have recently been proposed in support of the hypothesis that meaning rather than image features guides human gaze. MMs have the potential to be an important tool far beyond eye-movement research. Here, we examine central assumptions underlying MMs. First, we compared MMs with saliency models in predicting fixations, showing that DeepGaze II, a deep neural network trained to predict fixations based on high-level features rather than meaning, outperforms MMs. Second, we show that whereas human observers respond to changes in meaning induced by manipulating object-context relationships, MMs and DeepGaze II do not. Together, these findings challenge central assumptions underlying the use of MMs to measure the distribution of meaning in images.
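For readers unfamiliar with how fixation prediction is scored, the sketch below computes one standard metric, normalized scanpath saliency (NSS): z-score a predicted map and average it at fixated pixels. The maps and fixation coordinates here are random stand-ins, not the authors' data or their exact evaluation pipeline.

```python
# NSS: mean z-scored prediction value at fixated pixels (higher is better).
import numpy as np

def nss(prediction, fixation_rows, fixation_cols):
    z = (prediction - prediction.mean()) / prediction.std()
    return z[fixation_rows, fixation_cols].mean()

rng = np.random.default_rng(0)
meaning_map = rng.random((480, 640))   # stand-in for a meaning map
deepgaze_map = rng.random((480, 640))  # stand-in for DeepGaze II output
rows = rng.integers(0, 480, size=50)   # synthetic fixation coordinates
cols = rng.integers(0, 640, size=50)

print("MM NSS:      ", nss(meaning_map, rows, cols))
print("DeepGaze NSS:", nss(deepgaze_map, rows, cols))
```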


2021
Author(s): Jiaqi Huang, Peter Gerhardstein

Multiple theories of human object recognition argue for the importance of semantic parts in the formation of intermediate representations. However, the role of semantic parts in deep convolutional neural networks (DCNNs), which encapsulate the most recent and successful computer vision models, is poorly examined. We extract DCNN representations corresponding to differential performance on stimuli in which different parts of the same exemplar are deleted, and then compare these representations with those of human observers obtained in a behavioral experiment, using representational similarity analysis (RSA). We find that DCNN representations correlate strongly with those of observers, while acknowledging that these DCNN representations may not be part-based, given an equally high correlation between DCNN output and part size. Additionally, the exemplars incorrectly identified by DCNNs tend to have less “human-like” representations, which demonstrates the potential of RSA as a novel method for interpreting errors in the intermediate recognition processes of DCNNs.
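The sketch below illustrates the RSA comparison the abstract describes: build a representational dissimilarity matrix (RDM) for each system, then correlate their upper triangles. The activation and response matrices are random stand-ins, and the 1 − Pearson-r dissimilarity with a Spearman comparison are common defaults, not necessarily the authors' exact choices.

```python
# RSA sketch: RDM per system, then Spearman correlation of upper triangles.
import numpy as np
from scipy.stats import spearmanr

def rdm(activations):
    """Pairwise dissimilarity (1 - Pearson r) between stimulus patterns."""
    return 1.0 - np.corrcoef(activations)

def rsa_score(rdm_a, rdm_b):
    """Spearman correlation between the two RDMs' upper triangles."""
    iu = np.triu_indices_from(rdm_a, k=1)
    return spearmanr(rdm_a[iu], rdm_b[iu]).correlation

rng = np.random.default_rng(1)
dcnn_acts = rng.random((20, 512))   # 20 part-deleted stimuli x 512 DCNN features
human_data = rng.random((20, 30))   # same 20 stimuli x 30 observers' responses
print(rsa_score(rdm(dcnn_acts), rdm(human_data)))
```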


2020, Vol 2020 (10), pp. 28-1–28-7
Author(s): Kazuki Endo, Masayuki Tanaka, Masatoshi Okutomi

Classification of degraded images is very important in practice because images are usually degraded by compression, noise, blurring, etc. Nevertheless, most research on image classification focuses only on clean images without any degradation. Some papers have already proposed deep convolutional neural networks composed of an image-restoration network and a classification network to classify degraded images. This paper proposes an alternative approach in which a degraded image and an additional degradation parameter are used for classification. The proposed classification network has two inputs: the degraded image and the degradation parameter. An estimation network for the degradation parameter is also incorporated when the degradation parameters of the input images are unknown. Experimental results show that the proposed method outperforms a straightforward approach in which the classification network is trained on degraded images alone.
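A minimal PyTorch sketch of the two-input idea described above: the image passes through a CNN backbone, and the scalar degradation parameter is embedded and concatenated with the image features before classification. All layer sizes and choices here are assumptions for illustration, not the authors' architecture.

```python
import torch
import torch.nn as nn

class DegradationAwareClassifier(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.backbone = nn.Sequential(           # small stand-in CNN
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.param_embed = nn.Sequential(        # embed the scalar parameter
            nn.Linear(1, 16), nn.ReLU(),
        )
        self.head = nn.Linear(64 + 16, num_classes)

    def forward(self, image, degradation_param):
        feats = self.backbone(image)
        p = self.param_embed(degradation_param)
        return self.head(torch.cat([feats, p], dim=1))

# Usage: a batch of degraded images plus their (known or estimated) noise level.
model = DegradationAwareClassifier()
x = torch.rand(4, 3, 64, 64)
sigma = torch.tensor([[0.1], [0.2], [0.05], [0.3]])  # degradation parameter
logits = model(x, sigma)
```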


2019, Vol 277, pp. 02024
Author(s): Lincan Li, Tong Jia, Tianqi Meng, Yizhe Liu

In this paper, an accurate two-stage deep learning method is proposed to detect vulnerable plaques in intravascular optical coherence tomography (IVOCT) images of the cardiovascular system. First, a fully convolutional neural network (FCN), U-Net, is used to segment the original IVOCT images; we experiment with different threshold values to find the best threshold for removing noise and background from the original images. Second, a modified Faster R-CNN is adopted for precise detection. The modified Faster R-CNN utilizes six anchor scales (12², 16², 32², 64², 128², 256²) instead of the conventional one-scale or three-scale approaches. We present three problems in cardiovascular vulnerable-plaque diagnosis and then demonstrate how our method solves them. The proposed method applies deep convolutional neural networks to the whole diagnostic procedure. Test results show that the Recall, Precision, IoU (Intersection-over-Union), and Total score are 0.94, 0.885, 0.913, and 0.913 respectively, higher than the first-place team of the CCCV2017 Cardiovascular OCT Vulnerable Plaque Detection Challenge. The AP of the designed Faster R-CNN is 83.4%, higher than conventional approaches using one-scale or three-scale anchors. These results demonstrate the superior performance of the proposed method and the power of deep learning in diagnosing cardiovascular vulnerable plaques.
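The six-scale anchor idea can be illustrated with torchvision's Faster R-CNN building blocks, as in the sketch below. The MobileNetV2 backbone and aspect ratios are illustrative assumptions, not the authors' network; the point is a single anchor generator covering areas 12² through 256² rather than the usual three scales.

```python
import torchvision
from torchvision.models.detection import FasterRCNN
from torchvision.models.detection.anchor_utils import AnchorGenerator

# Stand-in backbone; FasterRCNN needs to know its output channel depth.
backbone = torchvision.models.mobilenet_v2(weights="DEFAULT").features
backbone.out_channels = 1280

# Six anchor areas (12^2 ... 256^2) on a single feature map.
anchor_generator = AnchorGenerator(
    sizes=((12, 16, 32, 64, 128, 256),),
    aspect_ratios=((0.5, 1.0, 2.0),),
)

# RoI pooling over the single feature map produced by the backbone.
roi_pooler = torchvision.ops.MultiScaleRoIAlign(
    featmap_names=["0"], output_size=7, sampling_ratio=2
)

model = FasterRCNN(
    backbone,
    num_classes=2,  # background + vulnerable plaque
    rpn_anchor_generator=anchor_generator,
    box_roi_pool=roi_pooler,
)
```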

