Sensory signals: Recently Published Documents

TOTAL DOCUMENTS: 226 (last five years: 57)
H-INDEX: 33 (last five years: 5)

2022 ◽ Vol 4 ◽ Author(s): Neil Cohn ◽ Joost Schilperoord

Language is typically embedded in multimodal communication, yet models of linguistic competence do not often incorporate this complexity. Meanwhile, speech, gesture, and/or pictures are each considered as indivisible components of multimodal messages. Here, we argue that multimodality should not be characterized by whole interacting behaviors, but by interactions of similar substructures which permeate across expressive behaviors. These structures comprise a unified architecture and align with Jackendoff's Parallel Architecture: a modality, meaning, and grammar. Because this tripartite architecture persists across modalities, interactions can manifest within each of these substructures. Interactions between modalities alone create correspondences in time (e.g., speech with gesture) or space (e.g., writing with pictures) of the sensory signals, while multimodal meaning-making balances how modalities carry “semantic weight” for the gist of the whole expression. Here we focus primarily on interactions between grammars, which contrast across two variables: symmetry, related to the complexity of the grammars, and allocation, related to the relative independence of interacting grammars. While independent allocations keep grammars separate, substitutive allocation inserts expressions from one grammar into those of another. We show that substitution operates in interactions between all three natural modalities (vocal, bodily, graphic), and also in unimodal contexts within and between languages, as in codeswitching. Altogether, we argue that unimodal and multimodal expressions arise as emergent interactive states from a unified cognitive architecture, heralding a reconsideration of the “language faculty” itself.
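
To make the tripartite decomposition concrete, here is a purely illustrative Python sketch (not from the paper; the class, labels, and complexity proxy are hypothetical) of how an expressive behavior can be factored into a modality, a grammar, and a meaning, and how the two interaction variables, symmetry and allocation, classify a multimodal pairing.

```python
# Purely illustrative sketch (not from the paper): factoring an expressive
# behavior into the three substructures of the Parallel Architecture and
# classifying an interaction by the paper's two variables, symmetry and
# allocation. All names and the complexity proxy are hypothetical.
from dataclasses import dataclass
from typing import Literal

Modality = Literal["vocal", "bodily", "graphic"]

@dataclass
class Expression:
    modality: Modality       # sensory channel of the signal
    grammar: str             # combinatorial system organizing the expression
    meaning: str             # gist the expression contributes
    grammar_complexity: int  # crude proxy used only to illustrate (a)symmetry

def classify_interaction(a: Expression, b: Expression, substitutive: bool) -> str:
    symmetry = "symmetric" if a.grammar_complexity == b.grammar_complexity else "asymmetric"
    allocation = "substitutive" if substitutive else "independent"
    return f"{symmetry}, {allocation} ({a.modality} + {b.modality})"

# Example: a gesture substituted into the slot of a spoken clause
# ("the fish was *this* big", with a size-depicting gesture).
speech = Expression("vocal", "English syntax", "clause about the fish", 3)
gesture = Expression("bodily", "single depictive gesture", "size of the fish", 1)
print(classify_interaction(speech, gesture, substitutive=True))
# -> asymmetric, substitutive (vocal + bodily)
```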


2022 ◽ pp. 095679762110326 ◽ Author(s): Christos Bechlivanidis ◽ Marc J. Buehner ◽ Emma C. Tecwyn ◽ David A. Lagnado ◽ Christoph Hoerl ◽ ...

The goal of perception is to infer the most plausible source of sensory stimulation. Unisensory perception of temporal order, however, appears to require no inference, because the order of events can be uniquely determined from the order in which sensory signals arrive. Here, we demonstrate a novel perceptual illusion that casts doubt on this intuition: In three experiments (N = 607), the experienced event timings were determined by causality in real time. Adult participants viewed a simple three-item sequence, ACB, which is typically remembered as ABC in line with principles of causality. When asked to indicate the time at which events B and C occurred, participants’ points of subjective simultaneity shifted so that the assumed cause B appeared earlier and the assumed effect C later, despite participants’ full attention and repeated viewings. This first demonstration of causality reversing perceived temporal order cannot be explained by postperceptual distortion, lapsed attention, or saccades.
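
For readers unfamiliar with the measure, a point of subjective simultaneity is conventionally estimated by fitting a psychometric function to order judgments across stimulus-onset asynchronies. The sketch below is a generic illustration of that standard procedure, not the authors' analysis code; the data values and names are invented.

```python
# Generic illustration of PSS estimation (not the authors' analysis code):
# fit a cumulative Gaussian to the proportion of "X first" judgments across
# stimulus-onset asynchronies; the fitted mean is the point of subjective
# simultaneity (PSS). Data below are invented.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def p_x_first(soa_ms, pss_ms, sigma_ms):
    """P(report 'X first') as a function of SOA = onset(Y) - onset(X)."""
    return norm.cdf(soa_ms, loc=pss_ms, scale=sigma_ms)

soa = np.array([-120., -80., -40., 0., 40., 80., 120.])       # ms
prop = np.array([0.05, 0.15, 0.35, 0.55, 0.80, 0.92, 0.97])   # hypothetical proportions

(pss, sigma), _ = curve_fit(p_x_first, soa, prop, p0=[0.0, 50.0])
print(f"PSS = {pss:.1f} ms, temporal resolution (sigma) = {sigma:.1f} ms")
# A PSS shifted toward negative SOAs means X must physically lag Y to be
# judged simultaneous, i.e. X is perceived as occurring earlier.
```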


2021 ◽ Vol 17 (12) ◽ pp. e1009654 ◽ Author(s): Andrea Ferrario ◽ Andrey Palyanov ◽ Stella Koutsikou ◽ Wenchang Li ◽ Steve Soffe ◽ ...

How does the brain process sensory stimuli, and decide whether to initiate locomotor behaviour? To investigate this question, we develop two whole-body computer models of a tadpole. The “Central Nervous System” (CNS) model uses evidence from whole-cell recording to define 2000 neurons in 12 classes to study how sensory signals from the skin initiate and stop swimming. In response to skin stimulation, it generates realistic sensory pathway spiking and shows how hindbrain sensory memory populations on each side can compete to initiate reticulospinal neuron firing and start swimming. The 3-D “Virtual Tadpole” (VT) biomechanical model with realistic muscle innervation, body flexion, body-water interaction, and movement is then used to evaluate whether motor nerve outputs from the CNS model can produce swimming-like movements in a volume of “water”. We find that the whole tadpole VT model generates reliable and realistic swimming. Combining these two models opens new perspectives for experiments.
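
The competition described here can be caricatured by a much simpler rate model. The sketch below illustrates the idea only; it is not the authors' 2000-neuron spiking model, and every parameter in it is an assumption.

```python
# Toy rate-model sketch of the competition idea only (not the authors'
# spiking model): two "sensory memory" populations, one per body side, are
# seeded by a brief skin stimulus, sustain activity through recurrent
# excitation, and mutually inhibit each other; whichever first crosses a
# firing threshold would recruit reticulospinal neurons and start swimming.
# All parameters are assumptions for illustration.
import numpy as np

dt, T = 1.0, 600.0                       # ms
t = np.arange(0.0, T, dt)
stim = (t > 50) & (t < 70)               # brief touch to the skin

tau = 80.0                               # population time constant (ms)
w_self, w_inh = 1.5, 1.2                 # recurrent excitation, mutual inhibition
threshold = 0.6                          # level taken to trigger reticulospinal firing
rng = np.random.default_rng(1)

rL, rR, winner = 0.0, 0.0, None
for i, ti in enumerate(t):
    drive_L = 1.0 * stim[i] + 0.02 * rng.standard_normal()   # stronger on stimulated side
    drive_R = 0.6 * stim[i] + 0.02 * rng.standard_normal()
    in_L = max(0.0, w_self * rL + drive_L - w_inh * rR)
    in_R = max(0.0, w_self * rR + drive_R - w_inh * rL)
    rL += dt / tau * (-rL + in_L)
    rR += dt / tau * (-rR + in_R)
    if winner is None and max(rL, rR) > threshold:
        winner = ("left" if rL > rR else "right", ti)

print("side and time (ms) of reticulospinal recruitment:", winner)
```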


2021 ◽ Vol 15 ◽ Author(s): Sergio Delle Monache ◽ Iole Indovina ◽ Myrka Zago ◽ Elena Daprati ◽ Francesco Lacquaniti ◽ ...

Gravity is a physical constraint all terrestrial species have adapted to through evolution. Indeed, gravity effects are taken into account in many forms of interaction with the environment, from the seemingly simple task of maintaining balance to the complex motor skills performed by athletes and dancers. Graviceptors, primarily located in the vestibular otolith organs, feed the Central Nervous System with information related to the gravity acceleration vector. This information is integrated with signals from semicircular canals, vision, and proprioception in an ensemble of interconnected brain areas, including the vestibular nuclei, cerebellum, thalamus, insula, retroinsula, parietal operculum, and temporo-parietal junction, in the so-called vestibular network. Classical views consider this stage of multisensory integration as instrumental to sort out conflicting and/or ambiguous information from the incoming sensory signals. However, there is compelling evidence that it also contributes to an internal representation of gravity effects based on prior experience with the environment. This a priori knowledge could be engaged by various types of information, including sensory signals like the visual ones, which lack a direct correspondence with physical gravity. Indeed, the retinal accelerations elicited by gravitational motion in a visual scene are not invariant, but scale with viewing distance. Moreover, the “visual” gravity vector may not be aligned with physical gravity, as when we watch a scene on a tilted monitor or in weightlessness. This review will discuss experimental evidence from behavioral, neuroimaging (connectomics, fMRI, TMS), and patients’ studies, supporting the idea that the internal model estimating the effects of gravity on visual objects is constructed by transforming the vestibular estimates of physical gravity, which are computed in the brainstem and cerebellum, into internalized estimates of virtual gravity, stored in the vestibular cortex. The integration of the internal model of gravity with visual and non-visual signals would take place at multiple levels in the cortex and might involve recurrent connections between early visual areas engaged in the analysis of spatio-temporal features of the visual stimuli and higher visual areas in temporo-parietal-insular regions.
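
The distance scaling mentioned here follows from simple viewing geometry. As an illustration (not a formula reproduced from the review), for an object falling fronto-parallel at viewing distance D, the small-angle approximation gives:

```latex
% Illustrative viewing geometry (not quoted from the review): an object at
% distance D that has fallen a vertical extent x(t) subtends theta(t) ~ x(t)/D,
% so the constant physical acceleration g maps onto a retinal (angular)
% acceleration that shrinks with viewing distance.
\[
\theta(t) \approx \frac{x(t)}{D}
\quad\Longrightarrow\quad
\ddot{\theta}(t) \approx \frac{\ddot{x}(t)}{D} = \frac{g}{D}.
\]
```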


PLoS Biology ◽ 2021 ◽ Vol 19 (11) ◽ pp. e3001465 ◽ Author(s): Ambra Ferrari ◽ Uta Noppeney

To form a percept of the multisensory world, the brain needs to integrate signals from common sources weighted by their reliabilities and segregate those from independent sources. Previously, we have shown that anterior parietal cortices combine sensory signals into representations that take into account the signals’ causal structure (i.e., common versus independent sources) and their sensory reliabilities as predicted by Bayesian causal inference. The current study asks to what extent and how attentional mechanisms can actively control how sensory signals are combined for perceptual inference. In a pre- and postcueing paradigm, we presented observers with audiovisual signals at variable spatial disparities. Observers were precued to attend to auditory or visual modalities prior to stimulus presentation and postcued to report their perceived auditory or visual location. Combining psychophysics, functional magnetic resonance imaging (fMRI), and Bayesian modelling, we demonstrate that the brain moulds multisensory inference via 2 distinct mechanisms. Prestimulus attention to vision enhances the reliability and influence of visual inputs on spatial representations in visual and posterior parietal cortices. Poststimulus report determines how parietal cortices flexibly combine sensory estimates into spatial representations consistent with Bayesian causal inference. Our results show that distinct neural mechanisms control how signals are combined for perceptual inference at different levels of the cortical hierarchy.
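
The two ingredients of the model, reliability weighting and inference over the causal structure, can be condensed into a short numerical sketch. This is a generic Bayesian causal inference model written for illustration, not the authors' analysis code; the noise levels, spatial prior, and prior probability of a common cause are assumed values.

```python
# Generic numerical sketch of Bayesian causal inference for audiovisual
# localisation (illustration only, not the authors' code; all noise levels,
# the spatial prior, and p_common are assumed). Reliability = 1/variance,
# so a smaller sigma_v gives vision more weight in the fused estimate.
import numpy as np

def bci_auditory_estimate(x_a, x_v, sigma_a=8.0, sigma_v=2.0,
                          sigma_p=20.0, p_common=0.5):
    """Model-averaged auditory location estimate (deg) from noisy internal
    measurements x_a, x_v, with a zero-mean Gaussian prior over location."""
    va, vv, vp = sigma_a**2, sigma_v**2, sigma_p**2
    # Reliability-weighted fusion if the signals share one source (C = 1).
    s_fused = (x_a/va + x_v/vv) / (1/va + 1/vv + 1/vp)
    # Auditory-only estimate if the sources are independent (C = 2).
    s_aud = (x_a/va) / (1/va + 1/vp)
    # Marginal likelihood of (x_a, x_v) under each causal structure.
    var_c1 = va*vv + va*vp + vv*vp
    like_c1 = np.exp(-0.5*((x_a - x_v)**2*vp + x_a**2*vv + x_v**2*va)/var_c1) \
              / (2*np.pi*np.sqrt(var_c1))
    like_c2 = np.exp(-0.5*(x_a**2/(va + vp) + x_v**2/(vv + vp))) \
              / (2*np.pi*np.sqrt((va + vp)*(vv + vp)))
    post_c1 = like_c1*p_common / (like_c1*p_common + like_c2*(1 - p_common))
    # Model averaging: blend the two estimates by the posterior over C.
    return post_c1*s_fused + (1 - post_c1)*s_aud

print(bci_auditory_estimate(x_a=5.0, x_v=3.0))    # small disparity: fusion, pulled toward vision
print(bci_auditory_estimate(x_a=5.0, x_v=25.0))   # large disparity: segregation, stays near audition
```

On this sketch, prestimulus attention to vision would correspond to lowering sigma_v (raising visual reliability), while the postcued report would correspond to reading out either the auditory or the visual model-averaged estimate.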


2021 ◽ Vol 12 ◽ Author(s): Marvin Liesner ◽ Wilfried Kunde

Perceptual changes that an agent produces by efferent activity can become part of the agent’s minimal self. Yet, in human agents, efferent activities produce perceptual changes in various sensory modalities and in various temporal and spatial proximities. Some of these changes occur at the “biological” body, and they are to some extent conveyed by “private” sensory signals, whereas other changes occur in the environment of that biological body and are conveyed by “public” sensory signals. We discuss commonalities and differences of these signals for generating selfhood. We argue that despite considerable functional overlap of these sensory signals in generating self-experience, there are reasons to tell them apart in theorizing and empirical research about development of the self.


2021 ◽ Author(s): Jason M Guest ◽ Arco Bast ◽ Rajeevan T Narayanan ◽ Marcel Oberlaender

Perception is causally linked to a calcium-dependent spiking mechanism that is built into the distal dendrites of layer 5 pyramidal tract neurons – the major output cell type of the cerebral cortex. It remains unclear which circuits activate this cellular mechanism upon sensory stimulation. Here we found that the same thalamocortical axons that relay sensory signals to layer 4 also densely target the dendritic domain by which pyramidal tract neurons initiate calcium spikes. Distal dendritic inputs, which normally appear greatly attenuated at the cell body, thereby generate bursts of action potentials in cortical output during sensory processing. Our findings indicate that the thalamus gates an active dendritic mechanism to facilitate the combination of sensory signals with top-down information streams into cortical output. Thus, in addition to being the central hub for sensory signals, the thalamus is also likely to ensure that the signals it relays to cortex are perceived by the animal.
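
The proposed gating can be caricatured with a toy two-compartment threshold model. The sketch below only illustrates the coincidence logic (distal input alone strongly attenuated; coincident somatic and distal drive producing a burst); it is not the authors' model, and all thresholds and weights are invented.

```python
# Toy sketch of the coincidence idea only (not the authors' model): a layer 5
# pyramidal tract neuron reduced to a somatic and a distal ("tuft") compartment.
# Distal input alone is attenuated at the soma; when it coincides with
# feedforward somatic drive, a dendritic Ca2+ event is triggered and the cell
# responds with a burst instead of a single spike. Parameters are assumptions.

def l5pt_output(somatic_drive: float, distal_drive: float) -> str:
    SOMA_THRESHOLD = 1.0       # drive needed for a single action potential
    CA_THRESHOLD = 1.5         # dendritic drive needed for a Ca2+ spike
    ATTENUATION = 0.2          # fraction of distal drive reaching the soma passively
    BACKPROP_BOOST = 1.0       # extra dendritic depolarisation from a backpropagating AP

    soma = somatic_drive + ATTENUATION * distal_drive
    spikes = soma >= SOMA_THRESHOLD
    # A backpropagating action potential adds to the distal depolarisation;
    # if the combined dendritic drive crosses the Ca2+ threshold, the cell bursts.
    ca_spike = (distal_drive + (BACKPROP_BOOST if spikes else 0.0)) >= CA_THRESHOLD
    if ca_spike:
        return "burst of action potentials"
    return "single action potential" if spikes else "no output"

print(l5pt_output(somatic_drive=1.1, distal_drive=0.0))   # feedforward alone -> single spike
print(l5pt_output(somatic_drive=0.0, distal_drive=1.0))   # distal alone -> attenuated, no output
print(l5pt_output(somatic_drive=1.1, distal_drive=1.0))   # coincidence -> burst
```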


2021 ◽ Vol 15 ◽ Author(s): Klaudia Grechuta ◽ Javier De La Torre Costa ◽ Belén Rubio Ballester ◽ Paul Verschure

The unique ability to identify one’s own body and experience it as one’s own is fundamental to goal-oriented behavior and survival. However, the mechanisms underlying so-called body ownership are not yet fully understood. Evidence from Rubber Hand Illusion (RHI) paradigms has demonstrated that body ownership is a product of the reception and integration of self- and externally generated multisensory information, feedforward and feedback processing of sensorimotor signals, and prior knowledge about the body. Crucially, however, these designs commonly involve the processing of proximal modalities, while the contribution of distal sensory signals to the experience of ownership remains elusive. Here we propose that, like any robust percept, body ownership depends on integration and prediction across all sensory modalities, including distal sensory signals pertaining to the environment. To test this hypothesis, we created an embodied goal-oriented Virtual Air Hockey Task in which participants had to hit a virtual puck into a goal. Across two conditions, we manipulated the congruency of distal multisensory cues (auditory and visual) while keeping proximal and action-driven signals entirely predictable. Compared with a fully congruent condition, incongruent distal signals produced a significant decrease on three measures of ownership: the subjective report as well as physiological and kinematic responses to an unexpected threat. Together, these findings support the notion that the way we represent our body is contingent on all sensory stimuli, including distal and action-independent signals. These data extend the current framework of body ownership and may also find application in rehabilitation scenarios.


2021 ◽ Author(s): Silvia García ◽ Paulina Trejo ◽ Alberto García

Improving emergency management for natural disasters is critical, yet catastrophe scenes are almost impossible to reconstruct in real life, and forcing people to experience real hazards violates both law and morality. This research therefore presents a Virtual Reality/Augmented Reality (VR/AR) engine that enhances human capacities for the prevention of, response to, and recovery from the effects of natural phenomena. The selected techniques offer clear advantages for overcoming the shortcomings exposed by the most recent devastating seismic experience in Mexico City, the September 19th, 2017, M7.2 earthquake: the total collapse of more than 230 buildings, the partial collapse of 7,000 houses, 370 people killed, and over 6,000 injured. VR and AR provide researchers, government authorities, and rescue teams with tools for recreating emergencies entirely through computer-generated signals of sight, sound, and touch (VR), or for overlaying sensory signals on the real environment so that users experience a rich juxtaposition of virtual and real worlds simultaneously (AR). The gap between knowledge and action is filled with visual, aural, and kinesthetic immersive experiences, opening the possibility of assisting populations in danger with an efficiency not achieved before.


2021 ◽ Vol 22 (S1) ◽ pp. 69-75 ◽ Author(s): Marko Nardini

Our experience of the world seems to unfold seamlessly in a unitary 3D space. For this to be possible, the brain has to merge many disparate cognitive representations and sensory inputs. How does it do so? I discuss work on two key combination problems: coordinating multiple frames of reference (e.g. egocentric and allocentric), and coordinating multiple sensory signals (e.g. visual and proprioceptive). I focus on two populations whose spatial processing we can observe at a crucial stage of being configured and optimised: children, whose spatial abilities are still developing significantly, and naïve adults learning new spatial skills, such as sensing distance using auditory cues. The work uses a model-based approach to compare participants’ behaviour with the predictions of alternative information processing models. This lets us see when and how—during development, and with experience—the perceptual-cognitive computations underpinning our experiences in space change. I discuss progress on understanding the limits of effective spatial computation for perception and action, and how lessons from the developing spatial cognitive system can inform approaches to augmenting human abilities with new sensory signals provided by technology.
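
Model-based comparisons of this kind are typically made against the textbook ideal-observer benchmark for combining two independent, unbiased cues, stated here for orientation (not quoted from the article):

```latex
% Standard ideal-observer prediction for combining two independent, unbiased
% cues with variances \sigma_1^2 and \sigma_2^2 (textbook form, not quoted
% from the article): reliability-weighted averaging, with a combined variance
% lower than that of either single cue.
\[
\hat{s} = w_1 \hat{s}_1 + w_2 \hat{s}_2, \qquad
w_i = \frac{1/\sigma_i^2}{1/\sigma_1^2 + 1/\sigma_2^2}, \qquad
\sigma_{\mathrm{comb}}^2 = \frac{\sigma_1^2\,\sigma_2^2}{\sigma_1^2 + \sigma_2^2}
\le \min(\sigma_1^2, \sigma_2^2).
\]
```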

