See what you hear – How the brain forms representations across the senses

Neuroforum ◽  
2018 ◽  
Vol 24 (4) ◽  
pp. A169-A181
Author(s):  
Uta Noppeney ◽  
Samuel A. Jones ◽  
Tim Rohe ◽  
Ambra Ferrari

Abstract Our senses are constantly bombarded with a myriad of signals. To make sense of this cacophony, the brain needs to integrate signals emanating from a common source, but segregate signals originating from different sources. Multisensory perception thus relies critically on inferring the world’s causal structure (i.e. one common vs. multiple independent sources). Behavioural research has shown that the brain arbitrates between sensory integration and segregation consistent with the principles of Bayesian Causal Inference. At the neural level, recent functional magnetic resonance imaging (fMRI) and electroencephalography (EEG) studies have shown that the brain accomplishes Bayesian Causal Inference by dynamically encoding multiple perceptual estimates across the sensory processing hierarchies. Only at the top of the hierarchy, in anterior parietal cortices, did the brain form perceptual estimates that take into account the observer’s uncertainty about the world’s causal structure, consistent with Bayesian Causal Inference.
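The computation these abstracts refer to can be made concrete. Below is a minimal sketch of a Bayesian Causal Inference estimator for auditory localization, assuming Gaussian sensory noise, a Gaussian spatial prior, and model averaging as the decision strategy; the function name and all parameter values are illustrative and are not taken from any of the studies collected here:

```python
import numpy as np
from scipy.stats import norm

def bci_estimate(x_a, x_v, sigma_a, sigma_v, mu_p=0.0, sigma_p=10.0, p_common=0.5):
    """Auditory location estimate under Bayesian Causal Inference (model averaging).

    x_a, x_v: noisy auditory and visual measurements (deg);
    sigma_a, sigma_v: sensory noise SDs; mu_p, sigma_p: Gaussian spatial prior;
    p_common: prior probability that both signals share one source.
    Returns (estimate, posterior probability of a common source).
    """
    # Likelihood of the measurement pair under a single common source (C = 1),
    # with the source location integrated out analytically
    var_sum = (sigma_a**2 * sigma_v**2 + sigma_a**2 * sigma_p**2
               + sigma_v**2 * sigma_p**2)
    like_c1 = np.exp(-0.5 * ((x_a - x_v)**2 * sigma_p**2
                             + (x_a - mu_p)**2 * sigma_v**2
                             + (x_v - mu_p)**2 * sigma_a**2) / var_sum) \
        / (2 * np.pi * np.sqrt(var_sum))
    # Likelihood under two independent sources (C = 2)
    like_c2 = (norm.pdf(x_a, mu_p, np.sqrt(sigma_a**2 + sigma_p**2))
               * norm.pdf(x_v, mu_p, np.sqrt(sigma_v**2 + sigma_p**2)))
    # Posterior probability of a common source (Bayes' rule over C)
    post_c1 = like_c1 * p_common / (like_c1 * p_common + like_c2 * (1 - p_common))
    # Conditional estimates: reliability-weighted fusion (C = 1)
    # and the segregated auditory estimate (C = 2)
    s_fused = ((x_a / sigma_a**2 + x_v / sigma_v**2 + mu_p / sigma_p**2)
               / (1 / sigma_a**2 + 1 / sigma_v**2 + 1 / sigma_p**2))
    s_seg_a = ((x_a / sigma_a**2 + mu_p / sigma_p**2)
               / (1 / sigma_a**2 + 1 / sigma_p**2))
    # Model averaging: weight each conditional estimate by its causal posterior
    return post_c1 * s_fused + (1 - post_c1) * s_seg_a, post_c1
```

With coincident, equally reliable signals the posterior favours a common source and the estimate is pulled toward the fused location; with widely discrepant signals the common-source posterior collapses and the auditory estimate is largely segregated, which is the arbitration between integration and segregation described above.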

PLoS Biology ◽  
2021 ◽  
Vol 19 (11) ◽  
pp. e3001465
Author(s):  
Ambra Ferrari ◽  
Uta Noppeney

To form a percept of the multisensory world, the brain needs to integrate signals from common sources weighted by their reliabilities and segregate those from independent sources. Previously, we have shown that anterior parietal cortices combine sensory signals into representations that take into account the signals’ causal structure (i.e., common versus independent sources) and their sensory reliabilities as predicted by Bayesian causal inference. The current study asks to what extent and how attentional mechanisms can actively control how sensory signals are combined for perceptual inference. In a pre- and postcueing paradigm, we presented observers with audiovisual signals at variable spatial disparities. Observers were precued to attend to auditory or visual modalities prior to stimulus presentation and postcued to report their perceived auditory or visual location. Combining psychophysics, functional magnetic resonance imaging (fMRI), and Bayesian modelling, we demonstrate that the brain moulds multisensory inference via 2 distinct mechanisms. Prestimulus attention to vision enhances the reliability and influence of visual inputs on spatial representations in visual and posterior parietal cortices. Poststimulus report determines how parietal cortices flexibly combine sensory estimates into spatial representations consistent with Bayesian causal inference. Our results show that distinct neural mechanisms control how signals are combined for perceptual inference at different levels of the cortical hierarchy.


2018 ◽  
Author(s):  
Máté Aller ◽  
Uta Noppeney

Abstract To form a percept of the environment, the brain needs to solve the binding problem – inferring whether signals come from a common cause and should be integrated, or come from independent causes and should be segregated. Behaviourally, humans solve this problem near-optimally, as predicted by Bayesian Causal Inference, but the neural mechanisms remain unclear. Combining Bayesian modelling, electroencephalography (EEG), and multivariate decoding in an audiovisual spatial localization task, we show that the brain accomplishes Bayesian Causal Inference by dynamically encoding multiple spatial estimates. Initially, auditory and visual signal locations are estimated independently; next, an estimate is formed that combines information from vision and audition. Yet it is only from 200 ms onwards that the brain integrates audiovisual signals, weighted by their bottom-up sensory reliabilities and top-down task-relevance, into spatial priority maps that guide behavioural responses. Critically, as predicted by Bayesian Causal Inference, these spatial priority maps take into account the brain’s uncertainty about the world’s causal structure and flexibly arbitrate between sensory integration and segregation. The dynamic evolution of perceptual estimates thus reflects the hierarchical nature of Bayesian Causal Inference, a statistical computation crucial for effective interactions with the environment.


2018 ◽  
Author(s):  
Samuel A. Jones ◽  
Ulrik Beierholm ◽  
David Meijer ◽  
Uta Noppeney

Abstract Ageing has been shown to impact multisensory perception, but the underlying computational mechanisms are unclear. For effective interactions with the environment, observers should integrate signals that share a common source, weighted by their reliabilities, and segregate those from separate sources. Observers are thought to accumulate evidence about the world’s causal structure over time until a decisional threshold is reached. Combining psychophysics and Bayesian modelling, we investigated how ageing affects audiovisual perception of spatial signals. Older and younger adults were comparable in their final localisation and common-source judgement responses under both speeded and unspeeded conditions, but were disproportionately slower for audiovisually incongruent trials. Bayesian modelling showed that ageing did not affect the ability to arbitrate between integration and segregation under either unspeeded or speeded conditions. However, modelling the within-trial dynamics of evidence accumulation under speeded conditions revealed that older observers accumulate noisier auditory representations for longer, set higher decisional thresholds, and have impaired motor speed. Older observers preserve audiovisual localisation performance, despite noisier sensory representations, by sacrificing response speed.
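The accumulate-to-threshold account in this abstract can be illustrated with a simple bounded drift-diffusion sketch. Nothing below comes from the study itself; the parameter regime in the usage note (noisier evidence plus a higher threshold for older observers) is only a qualitative illustration of its conclusion:

```python
import numpy as np

def accumulate_to_bound(drift, noise_sd, threshold, dt=0.001, max_t=5.0, rng=None):
    """One trial of evidence accumulation to a symmetric decision bound.

    Returns (choice, reaction_time); choice is +1 if the upper bound was
    hit, otherwise -1. A positive drift makes +1 the correct response.
    """
    rng = rng if rng is not None else np.random.default_rng()
    x, t = 0.0, 0.0
    while abs(x) < threshold and t < max_t:
        # Diffusion step: deterministic drift plus Gaussian noise
        x += drift * dt + noise_sd * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if x >= threshold else -1), t
```

Simulating many trials with noisier evidence and a higher threshold (e.g., `noise_sd=1.5, threshold=2.0` versus `noise_sd=1.0, threshold=1.0` at the same drift) yields slower responses at broadly preserved accuracy, the speed-accuracy trade-off the modelling attributes to older observers.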


2021 ◽  
Author(s):  
Guangyao Qi ◽  
Wen Fang ◽  
Shenghao Li ◽  
Junru Li ◽  
Liping Wang

Abstract Natural perception relies inherently on inferring causal structure in the environment. However, the neural mechanisms and functional circuits that are essential for representing and updating the hidden causal structure and corresponding sensory representations during multisensory processing are unknown. To address this, monkeys were trained to infer the probability of a potential common source from visual and proprioceptive signals on the basis of their spatial disparity in a virtual reality system. The proprioceptive drift reported by monkeys demonstrated that they combined historical information and current multisensory signals to estimate the hidden common source and subsequently updated both the causal structure and sensory representation. Single-unit recordings in premotor and parietal cortices revealed that neural activity in premotor cortex represents the core computation of causal inference, characterizing the estimation and update of the likelihood of integrating multiple sensory inputs at a trial-by-trial level. In response to signals from premotor cortex, neural activity in parietal cortex also represents the causal structure and further dynamically updates the sensory representation to maintain consistency with the causal inference structure. Thus, our results indicate how premotor cortex integrates historical information and sensory inputs to infer hidden variables and selectively updates sensory representations in parietal cortex to support behavior. This dynamic loop of frontal-parietal interactions in the causal inference framework may provide the neural mechanism to answer long-standing questions regarding how neural circuits represent hidden structures for body-awareness and agency.


2021 ◽  
Author(s):  
Marie Chancel ◽  
H. Henrik Ehrsson ◽  
Wei Ji Ma

How do we perceive the fundamental distinction between the physical self and the external world, and how do we come to sense that an object in view, for example, a hand, is part of our own body or not? Over the past two decades, many studies have investigated the contributions of vision, touch, and proprioception to the sense of body ownership, i.e., the multisensory perception of limbs and body parts as our own. However, the processes involved in subjectively experienced body ownership have only been qualitatively described, and the computational principles that determine such sensations remain unclear. To address this issue, we developed a detection-like psychophysics task based on the classic rubber hand illusion paradigm where participants were asked to report whether the rubber hand felt like their own (the illusion) or not. We manipulated illusion strength by varying the asynchrony of visual and tactile stimuli delivered to the rubber hand and the hidden real hand under different levels of visual noise. We found that (1) the probability of the emergence of the rubber hand illusion increased with the addition of visual noise and was well predicted by a causal inference model involving the observer computing the probability of the visual and tactile signals coming from a common source; (2) the causal inference model outperformed a non-Bayesian model involving the observer not taking into account sensory uncertainty; and (3) by comparing the causal inference of ownership and visuotactile synchrony detection, we found that the prior probability of inferring a common cause for the two types of multisensory percepts was correlated but greater for ownership, which suggests that individual differences in the rubber hand illusion can be explained at the computational level as differences in how priors are used in the multisensory integration process.
Collectively, these results demonstrate that the sense of body ownership is determined by Bayesian causal inference, which implies that the same statistical principles determine the perception of the bodily self and the external world.


2019 ◽  
Vol 3 (2) ◽  
pp. 367
Author(s):  
Sila Paramita ◽  
Naomi Soetikno ◽  
Florencia Irena

Sensory development is an important developmental domain for individuals. From birth, individuals begin to process sensory information obtained from the environment. Every piece of sensory information received is integrated and processed in the brain to produce an adaptive behavioural response. Sensory integration helps individuals master basic abilities such as language, emotional control, and numeracy skills. Problems in sensory integration are related to problems in processing sensory information, known as Regulatory Sensory Processing Disorder (RSPD). When individuals experience problems in processing sensory information, they face obstacles both in daily functioning and in development. Sensory problems can be recognized early through the behavioural characteristics children display. Therefore, this study aims to describe the behaviour of children with Regulatory Sensory Processing Disorder. This is a qualitative study using a case-study method. The sole participant was a pediatric patient at Growth and Development Clinic X. Data were collected through observation, interviews, and psychological assessment, with information obtained directly from the participant, parents, and therapists. To characterize the participant's sensory functioning, the researchers used the interview-observation checklist in the ICDL-DMIC (2005). The results showed that the participant experienced a sensory processing disorder of the sensory-seeking type: the participant was highly physically active and had difficulty attending to assigned tasks, which affected academic performance and social interactions.


2019 ◽  
Vol 9 (3) ◽  
pp. 68 ◽  
Author(s):  
Emily Kilroy ◽  
Lisa Aziz-Zadeh ◽  
Sharon Cermak

Abnormal sensory-based behaviors are a defining feature of autism spectrum disorders (ASD). Dr. A. Jean Ayres was the first occupational therapist to conceptualize Sensory Integration (SI) theories and therapies to address these deficits. Her work was based on the neurological knowledge of the 1970s. Since then, advancements in neuroimaging techniques have made it possible to better understand the brain areas that may underlie sensory processing deficits in ASD. In this article, we explore the postulates proposed by Ayres (i.e., registration, modulation, motivation) through the current neuroimaging literature. To this end, we review the neural underpinnings of sensory processing and integration in ASD by examining the literature on neurophysiological responses to sensory stimuli in individuals with ASD, as well as structural and network organization, using a variety of neuroimaging techniques. Many aspects of Ayres' hypotheses about the nature of the disorder were found to be highly consistent with the current literature on sensory processing in children with ASD, but there are some discrepancies across various methodological techniques and stages of ASD development. With additional characterization, neurophysiological profiles of sensory processing in ASD may serve as valuable biomarkers for diagnosis and for monitoring therapeutic interventions, such as SI therapy.


2021 ◽  
Author(s):  
Fangfang Hong ◽  
Stephanie Badde ◽  
Michael S. Landy

Abstract To obtain a coherent perception of the world, our senses need to be in alignment. When we encounter misaligned cues from two sensory modalities, the brain must infer which cue is faulty and recalibrate the corresponding sense. We examined whether and how the brain uses cue reliability to identify the miscalibrated sense by measuring the audiovisual ventriloquism aftereffect for stimuli of varying reliability. Visual spatial reliability was smaller than, comparable to, or greater than that of the auditory stimuli. To adjust for modality-specific biases, visual stimulus locations were chosen based on perceived alignment with auditory stimulus locations for each participant. During audiovisual recalibration, participants were presented with bimodal stimuli with a fixed perceptual spatial discrepancy; they localized one modality, cued after stimulus presentation. Unimodal auditory and visual localization was measured before and after the recalibration phase. We compared participants’ behavior to the predictions of three models of recalibration: (a) Reliability-based: each modality is recalibrated based on its relative reliability—less reliable cues are recalibrated more; (b) Fixed-ratio: the degree of recalibration for each modality is fixed; (c) Causal-inference: recalibration is directly determined by the discrepancy between a cue and its final estimate, which in turn depends on the reliability of both cues, and inference about how likely the two cues derive from a common source. Vision was hardly recalibrated by audition. Auditory recalibration by vision changed idiosyncratically as visual reliability decreased: the extent of auditory recalibration either decreased monotonically, first increased and then decreased, or increased monotonically. The latter two patterns cannot be explained by either the reliability-based or fixed-ratio models. Only the causal-inference model of recalibration captures the idiosyncratic influences of cue reliability on recalibration.
We conclude that cue reliability, causal inference, and modality-specific biases guide cross-modal recalibration indirectly by determining the perception of audiovisual stimuli. Author summary: Audiovisual recalibration of spatial perception occurs when we receive audiovisual stimuli with a systematic spatial discrepancy. The brain must determine to what extent each modality should be recalibrated. In this study, we scrutinized the mechanisms the brain employs to do so. To this end, we conducted a classical recalibration task in which participants were adapted to spatially discrepant audiovisual stimuli. The visual component of the bimodal stimulus was either less, equally, or more reliable than the auditory component. We measured the amount of recalibration by computing the difference between participants’ unimodal localization responses before and after the recalibration task. Across participants, the influence of visual reliability on auditory recalibration varied fundamentally. We compared three models of recalibration. Only a causal-inference model of recalibration captured the diverse influences of cue reliability on recalibration found in our study, and this model can replicate contradictory results found in previous studies. In this model, recalibration depends on the discrepancy between a cue and its final estimate. Cue reliability, perceptual biases, and the degree to which participants infer that the two cues come from a common source govern audiovisual perception and therefore audiovisual recalibration.


2021 ◽  
Vol 17 (11) ◽  
pp. e1008877
Author(s):  
Fangfang Hong ◽  
Stephanie Badde ◽  
Michael S. Landy

To obtain a coherent perception of the world, our senses need to be in alignment. When we encounter misaligned cues from two sensory modalities, the brain must infer which cue is faulty and recalibrate the corresponding sense. We examined whether and how the brain uses cue reliability to identify the miscalibrated sense by measuring the audiovisual ventriloquism aftereffect for stimuli of varying visual reliability. To adjust for modality-specific biases, visual stimulus locations were chosen based on perceived alignment with auditory stimulus locations for each participant. During an audiovisual recalibration phase, participants were presented with bimodal stimuli with a fixed perceptual spatial discrepancy; they localized one modality, cued after stimulus presentation. Unimodal auditory and visual localization was measured before and after the audiovisual recalibration phase. We compared participants’ behavior to the predictions of three models of recalibration: (a) Reliability-based: each modality is recalibrated based on its relative reliability—less reliable cues are recalibrated more; (b) Fixed-ratio: the degree of recalibration for each modality is fixed; (c) Causal-inference: recalibration is directly determined by the discrepancy between a cue and its estimate, which in turn depends on the reliability of both cues, and inference about how likely the two cues derive from a common source. Vision was hardly recalibrated by audition. Auditory recalibration by vision changed idiosyncratically as visual reliability decreased: the extent of auditory recalibration either decreased monotonically, peaked at medium visual reliability, or increased monotonically. The latter two patterns cannot be explained by either the reliability-based or fixed-ratio models. Only the causal-inference model of recalibration captures the idiosyncratic influences of cue reliability on recalibration. 
We conclude that cue reliability, causal inference, and modality-specific biases guide cross-modal recalibration indirectly by determining the perception of audiovisual stimuli.
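The causal-inference style of recalibration described in this abstract, in which a sense shifts by a fraction of the discrepancy between its measurement and its final estimate, can be sketched as an iterative update over an adaptation phase. For brevity the final estimate is approximated here by reliability-weighted fusion (i.e., the sketch assumes the common-source inference has resolved in favour of integration); the function name, learning rate, and all parameter values are illustrative:

```python
import numpy as np

def simulate_recalibration(n_trials, disparity, sigma_a, sigma_v, alpha=0.05, rng=None):
    """Accumulated auditory shift over an adaptation phase.

    On every trial the visual stimulus sits `disparity` degrees from the
    auditory one. The final auditory estimate is approximated by
    reliability-weighted fusion; the auditory map then shifts by a
    fraction `alpha` of the estimate-measurement discrepancy.
    """
    rng = rng if rng is not None else np.random.default_rng()
    bias = 0.0                      # current auditory spatial bias (deg)
    s_a, s_v = 0.0, disparity       # true auditory and visual locations
    w_v = (1 / sigma_v**2) / (1 / sigma_a**2 + 1 / sigma_v**2)  # visual weight
    for _ in range(n_trials):
        x_a = s_a + bias + sigma_a * rng.standard_normal()  # auditory measurement
        x_v = s_v + sigma_v * rng.standard_normal()         # visual measurement
        s_hat = (1 - w_v) * x_a + w_v * x_v                 # fused estimate
        bias += alpha * (s_hat - x_a)                       # shift toward estimate
    return bias
```

Under this update rule a more reliable visual cue (smaller `sigma_v`) exerts a larger per-trial pull on the auditory map, so the accumulated auditory shift grows faster, in line with vision dominating auditory recalibration; a full account of the idiosyncratic reliability effects reported above additionally requires the common-source posterior to modulate the estimate trial by trial.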


2018 ◽  
Author(s):  
Yinan Cao ◽  
Christopher Summerfield ◽  
Hame Park ◽  
Bruno L. Giordano ◽  
Christoph Kayser

When combining information across different senses, humans need to flexibly select cues of a common origin whilst avoiding distraction from irrelevant inputs. The brain could solve this challenge using a hierarchical principle: rapidly deriving a fused sensory estimate for computational expediency and, later and if required, filtering out irrelevant signals based on the inferred sensory cause(s). Analysing time- and source-resolved human magnetoencephalographic data, we unveil a systematic spatio-temporal cascade of the relevant computations, starting with early segregated unisensory representations, continuing with sensory fusion in parietal-temporal regions, and culminating as causal inference in the frontal lobe. Our results reconcile previous computational accounts of multisensory perception by showing that prefrontal cortex guides flexible integrative behaviour based on candidate representations established in sensory and association cortices, thereby framing multisensory integration in the generalised context of adaptive behaviour.

