Lateralized Cerebral Processing of Abstract Linguistic Structure in Clear and Degraded Speech

2020, Vol 31 (1), pp. 591-602
Author(s): Qingqing Meng, Yiwen Li Hegner, Iain Giblin, Catherine McMahon, Blake W Johnson

Abstract: Human cortical activity measured with magnetoencephalography (MEG) has been shown to track the temporal regularity of linguistic information in connected speech. In the current study, we investigate the underlying neural sources of these responses and test the hypothesis that they can be directly modulated by changes in speech intelligibility. MEG responses were measured to natural and spectrally degraded (noise-vocoded) speech in 19 normal-hearing participants. Results showed that cortical coherence to “abstract” linguistic units with no accompanying acoustic cues (phrases and sentences) was lateralized to the left hemisphere and changed parametrically with the intelligibility of speech. In contrast, responses coherent with words/syllables accompanied by acoustic onsets were bilateral and insensitive to intelligibility changes. This dissociation suggests that cerebral responses to linguistic information are directly affected by intelligibility but are also powerfully shaped by physical cues in speech. This explains why previous studies have reported widely inconsistent effects of speech intelligibility on cortical entrainment and, within a single experiment, provides clear support for conclusions about language lateralization derived from a large number of separately conducted neuroimaging studies. Since noise-vocoded speech resembles the signal provided by a cochlear implant device, the current methodology has potential clinical utility for the assessment of cochlear implant performance.
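Several of the studies collected here rely on noise vocoding to degrade speech. As a point of reference, below is a minimal Python sketch of a channel vocoder: band-pass filter the speech, extract each channel's envelope, and re-impose those envelopes on band-limited noise. The channel count, band edges, and filter choices are illustrative assumptions, not the authors' exact pipeline.

```python
# Minimal noise-vocoder sketch (illustrative parameters, assumes fs > 2 * f_hi).
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(speech, fs, n_channels=4, f_lo=100.0, f_hi=8000.0):
    # Log-spaced band edges between f_lo and f_hi (a common, simple choice).
    edges = np.logspace(np.log10(f_lo), np.log10(f_hi), n_channels + 1)
    out = np.zeros_like(speech)
    noise = np.random.randn(len(speech))
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, speech)
        env = np.abs(hilbert(band))        # channel envelope
        carrier = sosfiltfilt(sos, noise)  # band-limited noise carrier
        out += env * carrier
    return out / (np.max(np.abs(out)) + 1e-12)  # normalize to avoid clipping
```

Fewer channels leave less spectral detail, which is why 3-4 channel vocoding is typically unintelligible while 16-channel vocoding remains largely intelligible.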

Author(s):  
Siriporn Dachasilaruk ◽  
Niphat Jantharamin ◽  
Apichai Rungruang

Cochlear implant (CI) listeners encounter difficulties communicating with other people in noisy listening environments. However, most CI research has been carried out in the English language. In this study, single-channel speech enhancement (SE) strategies were investigated as a pre-processing approach for the CI system, in terms of Thai speech intelligibility improvement. Two SE algorithms, namely the multi-band spectral subtraction (MBSS) and Wiener filter (WF) algorithms, were evaluated. Speech signals consisting of monosyllabic and bisyllabic Thai words were degraded by speech-shaped noise and babble noise at SNRs of 0, 5, and 10 dB. The noisy words were then enhanced using the SE algorithms, fed into the CI system to synthesize vocoded speech, and presented to twenty normal-hearing listeners. The results indicated that speech intelligibility was marginally improved by the MBSS algorithm and significantly improved by the WF algorithm in some conditions. The enhanced bisyllabic words showed a noticeably greater intelligibility improvement than the enhanced monosyllabic words in all conditions, particularly in speech-shaped noise. Such outcomes may benefit Thai-speaking CI listeners.
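For orientation, here is a hedged sketch of the Wiener-filter style of single-channel enhancement evaluated here, operating on STFT magnitudes with a noise spectrum estimated from leading noise-only frames. The frame size and the crude a priori SNR estimate are illustrative simplifications, not the study's exact MBSS/WF implementations.

```python
# Hedged sketch of a single-channel Wiener-filter gain in the STFT domain.
# Assumes the first few frames are noise-only; parameters are illustrative.
import numpy as np
from scipy.signal import stft, istft

def wiener_enhance(noisy, fs, noise_frames=10):
    f, t, X = stft(noisy, fs=fs, nperseg=512)
    noise_psd = np.mean(np.abs(X[:, :noise_frames]) ** 2, axis=1, keepdims=True)
    snr_post = np.abs(X) ** 2 / (noise_psd + 1e-12)  # a posteriori SNR
    snr_prio = np.maximum(snr_post - 1.0, 0.0)       # crude a priori SNR estimate
    gain = snr_prio / (snr_prio + 1.0)               # Wiener gain per bin
    _, enhanced = istft(gain * X, fs=fs, nperseg=512)
    return enhanced
```

Spectral subtraction differs mainly in subtracting the noise estimate from the magnitude spectrum directly, with MBSS applying band-specific over-subtraction factors.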


2019, Vol 23, pp. 233121651983786
Author(s): Catherine L. Blackburn, Pádraig T. Kitterick, Gary Jones, Christian J. Sumner, Paula C. Stacey

Perceiving speech in background noise presents a significant challenge to listeners. Intelligibility can be improved by seeing the face of a talker, which is of particular value to hearing-impaired people and users of cochlear implants. It is well known that auditory-only speech understanding depends on factors beyond audibility; how these factors impact the audio-visual integration of speech is poorly understood. We investigated audio-visual integration when either the interfering background speech (Experiment 1) or the intelligibility of the target talkers (Experiment 2) was manipulated. Clear speech was also contrasted with sine-wave vocoded speech to mimic the loss of temporal fine structure with a cochlear implant. Experiment 1 showed that for clear speech, the visual speech benefit was unaffected by the number of background talkers. For vocoded speech, a larger benefit was found when there was only one background talker. Experiment 2 showed that visual speech benefit depended upon the audio intelligibility of the talker and increased as intelligibility decreased. Degrading the speech by vocoding resulted in even greater benefit from visual speech information. A single “independent noise” signal detection theory model predicted the overall visual speech benefit in some conditions but could not predict the different levels of benefit across variations in the background or target talkers. This suggests that, similar to audio-only speech intelligibility, the integration of audio-visual speech cues may be functionally dependent on factors other than audibility and task difficulty, and that clinicians and researchers should carefully consider the characteristics of their stimuli when assessing audio-visual integration.
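One common formalization of an "independent noise" signal detection model, of the general class fitted here, combines independent unimodal sensitivities as a Euclidean sum. The worked example below is a sketch of that model class, not necessarily the authors' exact fit.

```python
# Independent-noise SDT sketch: with independent Gaussian noise on the auditory
# and visual channels, optimally combined sensitivity is the Euclidean sum of
# the unimodal d' values. (A hedged sketch of the model class, not the paper's fit.)
import math

def dprime_av(dprime_a, dprime_v):
    return math.hypot(dprime_a, dprime_v)  # sqrt(d'_A^2 + d'_V^2)

# Example: a weak auditory signal (d' = 1.0) plus visual speech (d' = 1.5)
# predicts d'_AV ~= 1.80, i.e., a visual benefit of ~0.80 d' units.
print(dprime_av(1.0, 1.5))
```

A model of this form predicts a fixed combination rule, which is why benefit that varies with background or target talkers (as found here) argues for integration beyond audibility alone.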


2017, Vol 118 (6), pp. 3144-3151
Author(s): Lucas S. Baltzell, Ramesh Srinivasan, Virginia M. Richards

It has been suggested that cortical entrainment plays an important role in speech perception by helping to parse the acoustic stimulus into discrete linguistic units. However, the question of whether the entrainment response to speech depends on the intelligibility of the stimulus remains open. Studies addressing this question have, for the most part, significantly distorted the acoustic properties of the stimulus to degrade its intelligibility, making it difficult to compare across “intelligible” and “unintelligible” conditions. To avoid these acoustic confounds, we used priming to manipulate the intelligibility of vocoded speech. We used EEG to measure the entrainment response to vocoded target sentences preceded by natural (nonvocoded) prime sentences that were either valid (matching the target) or invalid (not matching the target). For unintelligible speech, valid primes have the effect of restoring intelligibility. We compared the effect of priming on the entrainment response for both 3-channel (unintelligible) and 16-channel (intelligible) speech. We observed a main effect of priming, suggesting that the entrainment response depends on prior knowledge, but no main effect of vocoding (16 channels vs. 3 channels). Furthermore, we found no difference in the effect of priming on the entrainment response to 3-channel and 16-channel vocoded speech, suggesting that for vocoded speech the entrainment response does not depend on intelligibility.

NEW & NOTEWORTHY: Neural oscillations have been implicated in the parsing of speech into discrete, hierarchically organized units. Our data suggest that these oscillations track the acoustic envelope rather than more abstract linguistic properties of the speech stimulus, and that prior experience with the stimulus allows these oscillations to better track the stimulus envelope.
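Entrainment analyses of this kind are often quantified as spectral coherence between the EEG and the speech amplitude envelope. The sketch below illustrates one such computation; the envelope definition, window length, and frequency range of interest are illustrative assumptions rather than this study's exact analysis.

```python
# Hedged sketch: quantify "entrainment" as magnitude-squared coherence between
# one EEG channel and the broadband speech envelope.
import numpy as np
from scipy.signal import butter, coherence, hilbert, sosfiltfilt

def speech_envelope(speech, fs, cutoff=10.0):
    env = np.abs(hilbert(speech))  # broadband amplitude envelope
    sos = butter(4, cutoff, btype="lowpass", fs=fs, output="sos")
    return sosfiltfilt(sos, env)

def entrainment_coherence(eeg, speech, fs):
    # Assumes eeg and speech are time-aligned and resampled to the same rate fs.
    env = speech_envelope(speech, fs)
    f, cxy = coherence(eeg, env, fs=fs, nperseg=int(2 * fs))  # 2 s windows
    return f, cxy  # inspect cxy in the delta/theta range (~1-8 Hz)
```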


Author(s): Qingqing Meng, Yiwen Li Hegner, Iain Giblin, Catherine McMahon, Blake W Johnson

Abstract: Providing a plausible neural substrate of speech processing and language comprehension, cortical activity has been shown to track different levels of linguistic structure in connected speech (syllables, phrases, and sentences), independent of the physical regularities of the acoustic stimulus. In the current study, we investigated the effect of speech intelligibility on this brain activity as well as its underlying neural sources. Using magnetoencephalography (MEG), we measured brain responses to natural speech and noise-vocoded (spectrally degraded) speech in nineteen normal-hearing participants. Results showed that cortical MEG coherence to linguistic structure changed parametrically with the intelligibility of the speech signal. Cortical responses coherent with phrase and sentence structure were left-hemisphere lateralized, whereas responses coherent with syllable/word structure were bilateral. The enhancement of coherence to intelligible compared with unintelligible speech was also left lateralized and localized to the parasylvian cortex. These results demonstrate that cortical responses to higher-level linguistic structures (phrase and sentence level) are sensitive to speech intelligibility. Since the noise-vocoded sentences simulate the auditory input provided by a cochlear implant, such objective neurophysiological measures have potential clinical utility for the assessment of cochlear implant performance.
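The coherence measure here builds on frequency-tagging designs in which isochronous syllables (e.g., at 4 Hz) group into phrases (2 Hz) and sentences (1 Hz), so that responses to each linguistic level can be read out at its presentation rate. Below is a hedged sketch of that readout; the rates and FFT details are illustrative assumptions.

```python
# Hedged sketch of a frequency-tagged readout: with isochronous 4 Hz syllables
# grouped into 2 Hz phrases and 1 Hz sentences, phase-locked response power is
# read out at exactly those rates.
import numpy as np

def tagged_power(meg_trials, fs, rates=(1.0, 2.0, 4.0)):
    # meg_trials: (n_trials, n_samples); averaging keeps phase-locked activity.
    evoked = meg_trials.mean(axis=0)
    power = np.abs(np.fft.rfft(evoked)) ** 2
    freqs = np.fft.rfftfreq(evoked.size, d=1.0 / fs)
    # Power at the bin nearest each rate (sentence, phrase, syllable).
    return {rate: power[np.argmin(np.abs(freqs - rate))] for rate in rates}
```

Because phrase- and sentence-rate peaks have no acoustic counterpart in such stimuli, they index abstract linguistic grouping rather than envelope tracking.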


Author(s): Faizah Mushtaq, Ian M. Wiggins, Pádraig T. Kitterick, Carly A. Anderson, Douglas E. H. Hartley

Abstract: Whilst functional neuroimaging has been used to investigate cortical processing of degraded speech in adults, much less is known about how these signals are processed in children. An enhanced understanding of the cortical correlates of poor speech perception in children would be highly valuable to oral communication applications, including hearing devices. We utilised vocoded speech stimuli to investigate brain responses to degraded speech in 29 normally hearing children aged 6-12 years. The intelligibility of the speech stimuli was altered in two ways: (i) by reducing the number of spectral channels and (ii) by reducing the amplitude-modulation depth of the signal. A total of five noise-vocoded conditions (with zero, partial, or high intelligibility) were presented in an event-related format whilst participants underwent functional near-infrared spectroscopy (fNIRS) neuroimaging. Participants completed a word recognition task during imaging, as well as a separate behavioural speech perception assessment. fNIRS recordings revealed statistically significant sensitivity to stimulus intelligibility across several brain regions. More intelligible stimuli elicited stronger responses in temporal regions, predominantly within the left hemisphere, whereas right inferior parietal regions showed an opposite, negative relationship. Although there was some evidence that partially intelligible stimuli elicited the strongest responses in the left inferior frontal cortex, a region that previous studies have associated with effortful listening in adults, this effect did not reach statistical significance. These results further our understanding of the cortical mechanisms underlying successful speech perception in children. Furthermore, fNIRS holds promise as a clinical technique to help assess speech intelligibility in paediatric populations.
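The second manipulation, reducing amplitude-modulation depth, can be sketched as compressing a vocoder channel's envelope toward its mean; the linear mixing rule below is an assumption, not necessarily the authors' exact method.

```python
# Hedged sketch of amplitude-modulation depth reduction for a vocoder envelope.
import numpy as np

def reduce_mod_depth(env, depth):
    # depth = 1.0 keeps the original envelope; depth = 0.0 flattens it to its
    # mean, removing the modulation entirely (linear rule is an assumption).
    return depth * env + (1.0 - depth) * np.mean(env)

# e.g., half_depth = reduce_mod_depth(env, 0.5) for a partially intelligible condition
```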


Author(s): Martin Chavant, Alexis Hervais-Adelman, Olivier Macherey

Purpose: An increasing number of individuals with residual or even normal contralateral hearing are being considered for cochlear implantation. It remains unknown whether the presence of contralateral hearing is beneficial or detrimental to their perceptual learning of cochlear implant (CI)–processed speech. The aim of this experiment was to provide a first insight into this question using acoustic simulations of CI processing. Method: Sixty normal-hearing listeners took part in an auditory perceptual learning experiment. Each subject was randomly assigned to one of three groups of 20, referred to as NORMAL, LOWPASS, and NOTHING. The experiment consisted of two test phases separated by a training phase. In the test phases, all subjects were tested on recognition of monosyllabic words passed through a six-channel “PSHC” vocoder presented to a single ear. In the training phase, which consisted of listening to a 25-min audio book, all subjects were presented with the same vocoded speech in one ear, but the signal they received in the other ear differed across groups: the NORMAL group was presented with the unprocessed speech signal, the LOWPASS group with a low-pass filtered version of the speech signal, and the NOTHING group with no sound at all. Results: The improvement in speech scores following training was significantly smaller for the NORMAL group than for the LOWPASS and NOTHING groups. Conclusions: This study suggests that the presentation of normal speech in the contralateral ear reduces or slows down perceptual learning of vocoded speech, but that an unintelligible low-pass filtered contralateral signal does not have this effect. Potential implications for the rehabilitation of CI patients with partial or full contralateral hearing are discussed.
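For the LOWPASS condition, a contralateral signal can be rendered largely unintelligible by aggressive low-pass filtering; the sketch below uses an illustrative 250 Hz cutoff, which is an assumption rather than the study's reported value.

```python
# Hedged sketch of a low-pass filtered, largely unintelligible speech signal.
from scipy.signal import butter, sosfiltfilt

def lowpass_speech(speech, fs, cutoff=250.0):
    # A steep low-pass preserves voicing and prosodic cues but removes most of
    # the spectral detail needed for intelligibility (cutoff is illustrative).
    sos = butter(6, cutoff, btype="lowpass", fs=fs, output="sos")
    return sosfiltfilt(sos, speech)
```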


2018, Vol 11 (3), pp. 306-316
Author(s): Fernando Del Mando Lucchesi, Ana Claudia Moreira Almeida-Verdu, Deisy das Graças de Souza

2020
Author(s): Lieber Po-Hung Li, Ji-Yan Han, Wei-Zhong Zheng, Ren-Jie Huang, Ying-Hui Lai

BACKGROUND: Cochlear implant technology is a well-known approach to help deaf patients hear speech again. It can improve speech intelligibility in quiet conditions; however, it still has room for improvement in noisy conditions. More recently, deep learning–based noise reduction (NR), such as noise classification and deep denoising autoencoder (NC+DDAE), has been shown to benefit the intelligibility performance of patients with cochlear implants compared with classical noise reduction algorithms. OBJECTIVE: Following the successful implementation of the NC+DDAE model in our previous study, this study aimed to (1) propose an advanced noise reduction system using knowledge transfer technology, called NC+DDAE_T; (2) examine the proposed NC+DDAE_T noise reduction system using objective evaluations and subjective listening tests; and (3) investigate which layer substitution of the knowledge transfer technology in the NC+DDAE_T noise reduction system provides the best outcome. METHODS: Knowledge transfer technology was adopted to reduce the number of parameters of the NC+DDAE_T compared with the NC+DDAE. We investigated which layer should be substituted using short-time objective intelligibility (STOI) and perceptual evaluation of speech quality (PESQ) scores, as well as t-distributed stochastic neighbor embedding to visualize the features in each model layer. Moreover, we enrolled ten cochlear implant users in listening tests to evaluate the benefits of the newly developed NC+DDAE_T. RESULTS: The experimental results showed that substituting the middle layer (i.e., the second layer in this study) of the noise-independent DDAE (NI-DDAE) model achieved the best performance gain in terms of STOI and PESQ scores. Therefore, the parameters of layer three in the NI-DDAE were chosen to be replaced, thereby establishing the NC+DDAE_T. Both objective and listening test results showed that the proposed NC+DDAE_T noise reduction system achieved performance similar to that of the previous NC+DDAE in several noisy test conditions, while requiring only a quarter of the number of parameters. CONCLUSIONS: This study demonstrated that knowledge transfer technology can help reduce the number of parameters in an NC+DDAE while maintaining similar performance. This suggests that the proposed NC+DDAE_T model may reduce the implementation costs of this noise reduction system and provide more benefits for cochlear implant users.
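The layer-substitution idea can be sketched as copying one trained layer's parameters from a noise-independent DDAE into another DDAE and freezing them, so that the layer need not be retrained or stored separately. The PyTorch sketch below uses illustrative layer sizes and depth; it is not the paper's exact NC+DDAE_T configuration.

```python
# Hedged PyTorch sketch of layer substitution via knowledge transfer.
import torch.nn as nn

def make_ddae():
    # A toy DDAE: noisy spectral frame in, enhanced frame out (shapes illustrative).
    return nn.Sequential(nn.Linear(257, 256), nn.ReLU(),
                         nn.Linear(256, 256), nn.ReLU(),  # middle layer (index 2)
                         nn.Linear(256, 257))

ni_ddae = make_ddae()  # stand-in for a trained noise-independent DDAE
ddae_t = make_ddae()   # model receiving the transferred layer

# Substitute the middle layer's parameters and freeze them so they need not be
# retrained or stored separately for each noise type.
ddae_t[2].load_state_dict(ni_ddae[2].state_dict())
for p in ddae_t[2].parameters():
    p.requires_grad = False
```

Sharing frozen layers across noise-specific models is one way a system of this kind can cut its total parameter count while preserving performance.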


2010, Vol 10, pp. 329-339
Author(s): Torsten Rahne, Michael Ziese, Dorothea Rostalski, Roland Mühler

This paper describes a logatome discrimination test for the assessment of speech perception in cochlear implant (CI) users, based on a multilingual speech database, the Oldenburg Logatome Corpus, which was originally recorded for the comparison of human and automatic speech recognition. The logatome discrimination task is based on the presentation of 100 logatome pairs (i.e., nonsense syllables) with balanced representations of alternating “vowel-replacement” and “consonant-replacement” paradigms in order to assess phoneme confusions. Thirteen adult normal-hearing listeners and eight adult CI users, including both good and poor performers, completed the test after their speech intelligibility abilities had been evaluated with an established sentence test in noise. Furthermore, discrimination abilities were measured electrophysiologically by recording the mismatch negativity (MMN) as a component of auditory event-related potentials. The results show a clear MMN response only for normal-hearing listeners and CI users with good performance, correlating with their logatome discrimination abilities. Discrimination scores were higher for vowel-replacement paradigms than for consonant-replacement paradigms. We conclude that the logatome discrimination test is well suited to monitoring the speech perception skills of CI users. Given the large number of available spoken logatome items, the Oldenburg Logatome Corpus appears to provide a useful and powerful basis for the further development of speech perception tests for CI users.
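The MMN readout used here is conventionally computed as the difference between averaged responses to deviant and standard stimuli. Below is a minimal sketch, assuming epochs time-locked to the point of deviation and an illustrative 100-250 ms analysis window.

```python
# Hedged sketch of an MMN readout from epoched ERP data.
import numpy as np

def mmn_amplitude(standard_trials, deviant_trials, fs, window=(0.100, 0.250)):
    # Trials: (n_trials, n_samples) arrays, epochs aligned to the deviation point.
    diff = deviant_trials.mean(axis=0) - standard_trials.mean(axis=0)
    i0, i1 = (int(t * fs) for t in window)
    return diff, diff[i0:i1].mean()  # difference wave and mean MMN amplitude
```

An absent or reduced difference wave in poor CI performers is what links the electrophysiological measure to the behavioural discrimination scores reported above.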

