ARTISTIC COMPLEXITY AND SALIENCY: TWO FACES OF THE SAME COIN?

2013 ◽  
Vol 09 (02) ◽  
pp. 1350010 ◽  
Author(s):  
Matteo Cacciola ◽  
Gianluigi Occhiuto ◽  
Francesco Carlo Morabito

Many computer vision problems consist of producing a suitable content description of images, usually aiming to extract the relevant information content. In the case of images representing paintings or artworks, the extracted information is rather subject-dependent, thus escaping any universal quantification. However, we have proposed a measure of complexity for such works that is related to brain processing. Artistic complexity measures the brain's inability to categorize the complex nonsense forms represented in modern art, in a dynamic process of acquisition that mostly involves top-down mechanisms. Here, we compare the quantitative results of our analysis of a wide set of paintings by various artists with the cues extracted by a standard bottom-up approach based on the concept of visual saliency. In every painting inspection, the brain searches for the more informative areas at different scales, then connects them in an attempt to capture the full impact of the information content. Artistic complexity quantifies information that might otherwise be lost during a human observer's viewing, thus identifying the artistic hand. Visual saliency highlights the most salient areas of the paintings, those standing out from their neighbours and grabbing our attention. Nevertheless, we will show that a comparison of the ways the two algorithms act reveals some interesting links, ultimately indicating an interplay between bottom-up and top-down modalities.

2001 ◽  
Vol 39 (2-3) ◽  
pp. 137-150 ◽  
Author(s):  
S Karakaş ◽  
C Başar-Eroğlu ◽  
Ç Özesmi ◽  
H Kafadar ◽  
Ö.Ü Erzengin
Keyword(s):  
Top Down

Author(s):  
Martin V. Butz ◽  
Esther F. Kutter

While bottom-up visual processing is important, the brain integrates this information with top-down, generative expectations from very early on in the visual processing hierarchy. Indeed, our brain should not be viewed as a classification system, but rather as a generative system, which perceives something by integrating sensory evidence with the available, learned, predictive knowledge about that thing. The involved generative models continuously produce expectations over time, across space, and from abstracted encodings to more concrete encodings. Bayesian information processing is the key to understanding how information integration must work computationally, at least approximately, in the brain as well. Bayesian networks in the form of graphical models allow the modularization of information and the factorization of interactions, which can strongly improve the efficiency of generative models. The resulting generative models essentially produce state estimations in the form of probability densities, which are well suited to integrate multiple sources of information, including top-down and bottom-up ones. A hierarchical neural visual processing architecture illustrates this point even further. Finally, some well-known visual illusions are shown and the perceptions are explained by means of generative, information-integrating, perceptual processes, which in all cases combine top-down prior knowledge and expectations about objects and environments with the available, bottom-up visual information.
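The Bayesian integration the abstract describes can be illustrated with a standard toy model: fusing a Gaussian top-down prior with Gaussian bottom-up evidence by weighting each source by its precision. This is a minimal sketch of the general principle, not the chapter's own implementation; the function name and parameters are illustrative.

```python
# Precision-weighted fusion of a top-down prior (expectation) with
# bottom-up sensory evidence, both modeled as Gaussian densities.

def fuse(prior_mean, prior_var, obs_mean, obs_var):
    """Posterior of a Gaussian prior combined with a Gaussian likelihood."""
    # Precisions (inverse variances) weight each information source.
    prior_prec = 1.0 / prior_var
    obs_prec = 1.0 / obs_var
    post_prec = prior_prec + obs_prec
    # Posterior mean is the precision-weighted average of both sources.
    post_mean = (prior_prec * prior_mean + obs_prec * obs_mean) / post_prec
    return post_mean, 1.0 / post_prec

# A confident prior (variance 0.5) pulls a noisy observation
# (variance 2.0) toward the expectation; the posterior is sharper
# (lower variance) than either source alone.
mean, var = fuse(prior_mean=0.0, prior_var=0.5, obs_mean=4.0, obs_var=2.0)
```

The same precision-weighting logic extends to hierarchical generative models, where each level's posterior serves as the prior for the level below.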


Author(s):  
Mariana von Mohr ◽  
Aikaterini Fotopoulou

Pain and pleasant touch have recently been classified as interoceptive modalities. This reclassification lies at the heart of long-standing debates questioning whether these modalities should be defined as sensations on the basis of their neurophysiological specificity at the periphery, or as homeostatic emotions on the basis of top-down convergence and modulation at the spinal and brain levels. Here, we outline the literature on the peripheral and central neurophysiology of pain and pleasant touch. We next recast this literature within a recent Bayesian predictive coding framework, namely active inference. This recasting puts forward a unifying model of the bottom-up and top-down determinants of pain and pleasant touch and the role of social factors in modulating the salience of peripheral signals reaching the brain.


2021 ◽  
Author(s):  
Uziel Jaramillo-Avila ◽  
Jonathan M. Aitken ◽  
Kevin Gurney ◽  
Sean R. Anderson

2019 ◽  
Author(s):  
Pantelis Leptourgos ◽  
Charles-Edouard Notredame ◽  
Marion Eck ◽  
Renaud Jardri ◽  
Sophie Denève

When facing fully ambiguous images, the brain cannot commit to a single percept and instead switches between mutually exclusive interpretations every few seconds, a phenomenon known as bistable perception. Despite years of research, there is still no consensus on whether bistability, and perception in general, is driven primarily by bottom-up or top-down mechanisms. Here, we adopted a Bayesian approach in an effort to reconcile these two theories. Fifty-five healthy participants were exposed to an adaptation of the Necker cube paradigm, in which we manipulated sensory evidence (by shadowing the cube) and prior knowledge (e.g., by varying instructions about what participants should expect to see). We found that manipulations of both sensory evidence and priors significantly affected the way participants perceived the Necker cube. However, we observed an interaction between the effect of the cue and the effect of the instructions, a finding incompatible with Bayes-optimal integration. In contrast, the data were well predicted by a circular inference model. In this model, ambiguous sensory evidence is systematically biased in the direction of current expectations, ultimately resulting in a bistable percept.
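The contrast between Bayes-optimal integration and circular inference can be sketched in log-odds form: optimal integration adds the evidence and prior log-odds once, whereas circular inference effectively re-counts reverberating messages, amplifying both terms. The amplification factors and values below are illustrative, not the authors' fitted model.

```python
import math

def sigmoid(x):
    """Map log-odds to a probability."""
    return 1.0 / (1.0 + math.exp(-x))

def posterior(evidence_llr, prior_lo, amp_evidence=1.0, amp_prior=1.0):
    """Posterior probability of one Necker-cube interpretation.

    amp_* = 1 gives Bayes-optimal integration; amp_* > 1 mimics
    circular inference, where loops re-count the same messages.
    """
    return sigmoid(amp_evidence * evidence_llr + amp_prior * prior_lo)

# Identical weak evidence and prior, with and without amplification.
optimal = posterior(0.5, 0.5)                          # Bayes-optimal
circular = posterior(0.5, 0.5,
                     amp_evidence=3.0, amp_prior=3.0)  # amplified loops
```

The amplified posterior is pushed much closer to certainty than the evidence warrants, which is how such a model can overcommit to a biased interpretation before switching.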


Author(s):  
Benjamin Schuman ◽  
Shlomo Dellal ◽  
Alvar Prönneke ◽  
Robert Machold ◽  
Bernardo Rudy

Many of our daily activities, such as riding a bike to work or reading a book in a noisy cafe, and highly skilled activities, such as a professional playing a tennis match or a violin concerto, depend upon the ability of the brain to quickly make moment-to-moment adjustments to our behavior in response to the results of our actions. In particular, they depend upon the ability of the neocortex to integrate the information provided by the sensory organs (bottom-up information) with internally generated signals such as expectations or attentional signals (top-down information). This integration occurs in pyramidal cells (PCs) and their long apical dendrite, which branches extensively into a dendritic tuft in layer 1 (L1). The outermost layer of the neocortex, L1 is highly conserved across cortical areas and species. Importantly, L1 is the predominant input layer for top-down information, relayed by a rich, dense mesh of long-range projections that provide signals to the tuft branches of the PCs. Here, we discuss recent progress in our understanding of the composition of L1 and review evidence that L1 processing contributes to functions such as sensory perception, cross-modal integration, controlling states of consciousness, attention, and learning. Expected final online publication date for the Annual Review of Neuroscience, Volume 44 is July 2021. Please see http://www.annualreviews.org/page/journal/pubdates for revised estimates.


Author(s):  
Jeremiah D. Still ◽  
Christopher M. Masciocchi

In this chapter, the authors highlight the influence of visual saliency, or local contrast, on users’ searches of interfaces, particularly web pages. Designers have traditionally focused on the importance of goals and expectations (top-down processes) for the navigation of interfaces (Diaper & Stanton, 2004), with little consideration for the influence of saliency (bottom-up processes). The Handbook of Human-Computer Interaction (Sears & Jacko, 2008), for example, does not discuss the influence of bottom-up processing, potentially neglecting an important aspect of interface-based searches. The authors review studies that demonstrate how a user’s attention is rapidly drawn to visually salient locations in a variety of tasks and scenes, including web pages. They then describe an inexpensive, rapid technique that designers can use to identify visually salient locations in web pages, and discuss its advantages over similar methods.
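The "visual saliency as local contrast" idea the chapter builds on can be sketched with a toy centre-surround operator: each location's saliency is how much its value differs from the mean of its neighbourhood. This is a minimal illustration of the concept, not the inexpensive technique the authors describe.

```python
# Toy saliency map: saliency of each cell is the absolute difference
# between its value and the mean of its 3x3 neighbourhood (a crude
# centre-surround local-contrast operator on a grayscale grid).

def saliency_map(img):
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Surround = all valid neighbours in the 3x3 window.
            vals = [img[ny][nx]
                    for ny in range(max(0, y - 1), min(h, y + 2))
                    for nx in range(max(0, x - 1), min(w, x + 2))
                    if (ny, nx) != (y, x)]
            out[y][x] = abs(img[y][x] - sum(vals) / len(vals))
    return out

# A single bright element on a dark background is the most salient
# location, mirroring how attention is drawn to high local contrast.
img = [[0, 0, 0],
       [0, 9, 0],
       [0, 0, 0]]
sal = saliency_map(img)
```

Production saliency models (e.g., Itti-Koch) extend this idea with multiple feature channels and scales, but the core signal is still local contrast against the surround.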


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Victor Pando-Naude ◽  
Agata Patyczek ◽  
Leonardo Bonetti ◽  
Peter Vuust

A remarkable feature of the human brain is its ability to integrate information from the environment with internally generated content. The integration of top-down and bottom-up processes during complex multi-modal human activities, however, is yet to be fully understood. Music provides an excellent model for understanding this, since music listening leads to the urge to move, and music making entails both playing and listening at the same time (i.e., audio-motor coupling). Here, we conducted activation likelihood estimation (ALE) meta-analyses of 130 neuroimaging studies of music perception, production and imagery, with 2660 foci, 139 experiments, and 2516 participants. We found that music perception and production rely on auditory cortices and sensorimotor cortices, while music imagery recruits distinct parietal regions. This indicates that the brain recruits different structures to process similar information depending on whether it is made available through interaction with the environment (i.e., bottom-up) or through internally generated content (i.e., top-down).

