auditory enhancement
Recently Published Documents

TOTAL DOCUMENTS: 45 (last five years: 10)
H-INDEX: 12 (last five years: 1)
Author(s): Tomoki Maezawa, Miho Kiyosawa, Jun I. Kawahara

2021, Vol. 118 (29), pp. e2024794118
Author(s): Anahita H. Mehta, Lei Feng, Andrew J. Oxenham

The perception of sensory events can be enhanced or suppressed by the surrounding spatial and temporal context in ways that facilitate the detection of novel objects and contribute to the perceptual constancy of those objects under variable conditions. In the auditory system, the phenomenon known as auditory enhancement reflects a general principle of contrast enhancement, in which a target sound embedded within a background sound becomes perceptually more salient if the background is presented first by itself. This effect is highly robust, producing an effective enhancement of the target of up to 25 dB (more than two orders of magnitude in intensity), depending on the task. Despite the importance of the effect, neural correlates of auditory contrast enhancement have yet to be identified in humans. Here, we used the auditory steady-state response to probe the neural representation of a target sound under conditions of enhancement. The probe was simultaneously modulated in amplitude with two modulation frequencies to distinguish cortical from subcortical responses. We found robust correlates for neural enhancement in the auditory cortical, but not subcortical, responses. Our findings provide empirical support for a previously unverified theory of auditory enhancement based on neural adaptation of inhibition and point to approaches for improving sensory prostheses for hearing loss, such as hearing aids and cochlear implants.
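The dual-tagging idea in that experiment can be made concrete with a short signal sketch. Below is a minimal Python/NumPy example of a probe amplitude-modulated at two rates simultaneously; the carrier frequency, modulation rates, and depths are illustrative assumptions rather than the study's actual parameters (cortical ASSRs are conventionally tagged near 40 Hz and subcortical ones near 80-100 Hz).

```python
import numpy as np

fs = 48_000                # sampling rate (Hz)
dur = 1.0                  # probe duration (s)
t = np.arange(int(fs * dur)) / fs

f_carrier = 2_000.0        # probe carrier frequency (illustrative)
f_mod_cortical = 40.0      # slow tag: cortex dominates responses near 40 Hz
f_mod_subcortical = 90.0   # fast tag: brainstem dominates near 80-100 Hz
depth = 0.85               # modulation depth (illustrative)

# Impose both modulation rates on the same carrier at once, so the EEG
# response at each modulation frequency tags a different processing stage.
mod_slow = 1.0 + depth * np.sin(2 * np.pi * f_mod_cortical * t)
mod_fast = 1.0 + depth * np.sin(2 * np.pi * f_mod_subcortical * t)
probe = mod_slow * mod_fast * np.sin(2 * np.pi * f_carrier * t)
probe /= np.max(np.abs(probe))   # normalize to avoid clipping
```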


i-Perception, 2021, Vol. 12 (3), pp. 204166952110187
Author(s): Yueying Li, Zimo Li, Aihui Deng, Hewu Zheng, Jianxin Chen, et al.

Although emotional audiovisual integration has been investigated previously, whether it is affected by the spatial allocation of visual attention has remained unknown. To examine this question, a variant of the exogenous spatial cueing paradigm was adopted, in which stimuli varying in facial expression and nonverbal affective prosody expressed six basic emotions (happiness, anger, disgust, sadness, fear, and surprise) via a visual, an auditory, or an audiovisual modality. The emotional stimuli were preceded by a spatially nonpredictive cue used to attract participants' visual attention. The results showed significantly higher accuracy and shorter response times for bimodal audiovisual stimuli than for unimodal visual or auditory stimuli under both valid and invalid cue conditions. The auditory facilitation effect was stronger than the visual facilitation effect under exogenous attention for all six emotions tested. Larger auditory enhancement was induced when the target was presented at the expected location than at the unexpected location. Among the six emotions, happiness showed the largest auditory enhancement. However, the exogenous cueing effect itself appeared to have no influence on emotional perception.
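As a concrete illustration of the facilitation measures compared in this abstract, the following sketch computes auditory facilitation (the gain from adding the voice to the face) and visual facilitation (the gain from adding the face to the voice) under each cue condition. All accuracy values are made-up placeholders, not the study's data.

```python
# Hypothetical mean accuracies (proportion correct) per modality and cue
# validity; placeholder values, not the study's data.
acc = {
    ("A", "valid"): 0.62, ("V", "valid"): 0.71, ("AV", "valid"): 0.85,
    ("A", "invalid"): 0.60, ("V", "invalid"): 0.69, ("AV", "invalid"): 0.80,
}

for cue in ("valid", "invalid"):
    aud_fac = acc[("AV", cue)] - acc[("V", cue)]  # gain from adding the voice
    vis_fac = acc[("AV", cue)] - acc[("A", cue)]  # gain from adding the face
    print(f"{cue:7s} cue: auditory facilitation = {aud_fac:+.2f}, "
          f"visual facilitation = {vis_fac:+.2f}")
```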


2020, Vol. 21 (6), pp. 485-496
Author(s): Axel Ahrens, Suyash Narendra Joshi, Bastian Epp

The auditory system uses interaural time and level differences (ITDs and ILDs) as cues to localize and lateralize sounds. The availability of ITDs and ILDs to the auditory system is limited by neural phase-locking and by head size, respectively. Although these frequency-specific limitations are well known, the relative contributions of ITDs and ILDs in individual frequency bands of broadband stimuli are unknown. To determine these relative contributions, or spectral weights, listeners were asked to lateralize stimuli consisting of eleven simultaneously presented 1-ERB-wide noise bands centered between 442 and 5544 Hz and separated by 1-ERB-wide gaps. Either ITDs or ILDs were varied independently across each noise band, while the other interaural disparity was fixed at 0 dB or 0 μs, respectively. The weights were obtained using a multiple linear regression analysis. In a second experiment, the effect of auditory enhancement on the spectral weights was investigated. Single noise bands were enhanced by presenting the other ten noise bands as preceding and following sounds (pre- and post-cursors, respectively). Listeners were asked to lateralize the stimuli as in the first experiment. Results show that in the absence of pre- and post-cursors, the lowest frequency band received the highest weight for ITDs and the highest frequency band received the highest weight for ILDs. Auditory enhancement significantly increased the weight given to the band omitted from the pre- and post-cursors. This weight enhancement was observed only at low frequencies for ITD cues, but at both low and high frequencies for ILD cues. Hence, the auditory system appears able to change the spectral weighting of binaural information depending on the information content.
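The spectral weights here come from regressing lateralization responses on the per-band interaural disparities. The following sketch simulates that analysis for ITDs with a synthetic listener; the trial count, ITD range, and assumed weights are placeholders, not the study's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_bands = 500, 11

# Each trial assigns an independent ITD (microseconds) to each of the
# eleven noise bands; the listener reports a lateral position.
itds = rng.uniform(-300, 300, size=(n_trials, n_bands))

# Synthetic listener whose percept is dominated by the low bands
# (assumed weights, used for the simulation only).
true_w = np.linspace(1.0, 0.1, n_bands)
true_w /= true_w.sum()
responses = itds @ true_w + rng.normal(0, 30, n_trials)

# Multiple linear regression of responses on per-band ITDs; the
# normalized coefficients estimate the spectral weights.
X = np.column_stack([np.ones(n_trials), itds])
coef, *_ = np.linalg.lstsq(X, responses, rcond=None)
weights = coef[1:] / coef[1:].sum()
print(np.round(weights, 3))
```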


Author(s): Sávia Leticia Menuzzo Quental, Maria Isabel Ramos do Amaral, Christiane Marques do Couto

2020, Vol. 31 (01), pp. 030-039
Author(s): Aaron C. Moberly, Kara J. Vasil, Christin Ray

Adults with cochlear implants (CIs) are believed to rely more heavily on visual cues during speech recognition tasks than their normal-hearing peers. However, the relationship between auditory and visual reliance during audiovisual (AV) speech recognition is unclear and may depend on an individual's auditory proficiency, duration of hearing loss (HL), age, and other factors. The primary purpose of this study was to examine whether visual reliance during AV speech recognition depends on auditory function for adult CI candidates (CICs) and adult experienced CI users (ECIs).

Participants included 44 ECIs and 23 CICs. All participants were postlingually deafened and had met clinical candidacy requirements for cochlear implantation. Participants completed City University of New York sentence recognition testing. Three separate lists of twelve sentences each were presented: the first in the auditory-only (A-only) condition, the second in the visual-only (V-only) condition, and the third in combined AV fashion. Each participant's amounts of "visual enhancement" (VE) and "auditory enhancement" (AE) were computed, that is, the benefit to AV speech recognition of adding visual or auditory information, respectively, relative to what could potentially be gained. The relative reliance on VE versus AE was also computed as a VE/AE ratio.

The VE/AE ratio was inversely predicted by A-only performance. Visual reliance was not significantly different between ECIs and CICs. Duration of HL and age did not account for additional variance in the VE/AE ratio.

A shift toward visual reliance may be driven by poor auditory performance in ECIs and CICs. The restoration of auditory input through a CI does not necessarily facilitate a shift back toward auditory reliance. Findings suggest that individual listeners with HL may rely on both auditory and visual information during AV speech recognition, to varying degrees based on their own performance and experience, to optimize communication performance in real-world listening situations.
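The abstract defines VE and AE only verbally. A common way to express "benefit relative to what could potentially be gained" normalizes the raw AV gain by the headroom above the unimodal score (in the style of Sumby and Pollack); a minimal sketch under that assumption, with hypothetical scores:

```python
def enhancement(av: float, unimodal: float) -> float:
    """Normalized benefit of adding a second modality: the raw AV gain
    divided by the headroom above the unimodal score (proportions in [0, 1])."""
    return (av - unimodal) / (1.0 - unimodal)

# Hypothetical scores for one participant, not data from the study.
a_only, v_only, av = 0.40, 0.25, 0.70

ve = enhancement(av, a_only)   # visual enhancement: gain over A-only
ae = enhancement(av, v_only)   # auditory enhancement: gain over V-only
print(f"VE = {ve:.2f}, AE = {ae:.2f}, VE/AE = {ve / ae:.2f}")
```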


2019, Vol. 145 (3), pp. 1664-1664
Author(s): Anahita H. Mehta, Andrew J. Oxenham
