Cognitive factors contribute to speech perception in cochlear-implant users and age-matched normal-hearing listeners under vocoded conditions

2019 · Vol. 146 (1) · pp. 195–210
Author(s): Erin R. O'Neill, Heather A. Kreft, Andrew J. Oxenham

2010 · Vol. 10 · pp. 329–339
Author(s): Torsten Rahne, Michael Ziese, Dorothea Rostalski, Roland Mühler

This paper describes a logatome discrimination test for the assessment of speech perception in cochlear implant (CI) users, based on a multilingual speech database, the Oldenburg Logatome Corpus, which was originally recorded for the comparison of human and automatic speech recognition. The discrimination task presents 100 logatome pairs (i.e., nonsense syllables) with balanced representation of alternating "vowel-replacement" and "consonant-replacement" paradigms in order to assess phoneme confusions. Thirteen adult normal-hearing listeners and eight adult CI users, including both good and poor performers, completed the test after their speech intelligibility was evaluated with an established sentence test in noise. Discrimination abilities were also measured electrophysiologically by recording the mismatch negativity (MMN), a component of the auditory event-related potential. A clear MMN response was found only for normal-hearing listeners and for CI users with good performance, correlating with their logatome discrimination abilities. Discrimination scores were higher for vowel-replacement than for consonant-replacement paradigms. We conclude that the logatome discrimination test is well suited to monitoring the speech perception skills of CI users. Given the large number of available spoken logatome items, the Oldenburg Logatome Corpus appears to provide a useful and powerful basis for further development of speech perception tests for CI users.


2015 · Vol. 26 (06) · pp. 572–581
Author(s): Stanley Sheft, Min-Yu Cheng, Valeriy Shafiro

Background: Past work has shown that low-rate frequency modulation (FM) may help preserve signal coherence, aid segmentation at word and syllable boundaries, and benefit speech intelligibility in the presence of a masker. Purpose: This study evaluated whether difficulties in speech perception by cochlear implant (CI) users relate to a deficit in the ability to discriminate among stochastic low-rate patterns of FM. Research Design: This is a correlational study assessing the association between the ability to discriminate stochastic patterns of low-rate FM and the intelligibility of speech in noise. Study Sample: Thirteen postlingually deafened adult CI users participated in this study. Data Collection and Analysis: Using modulators derived from 5-Hz lowpass noise applied to a 1-kHz carrier, discrimination thresholds were measured in terms of frequency excursion (both in quiet and with a speech-babble masker present), stimulus duration, and signal-to-noise ratio in the presence of the speech-babble masker. Speech perception was assessed in the presence of the same speech-babble masker. Relationships were evaluated with Pearson product–moment correlation analysis, corrected for family-wise error, and with commonality analysis to determine the unique and common contributions of the psychoacoustic variables to the association with speech ability. Results: Significant correlations were obtained between masked speech intelligibility and three metrics of FM discrimination involving either signal-to-noise ratio or stimulus duration, with shared variance among the three measures accounting for much of the effect. Compared to past results from young normal-hearing adults and from older adults with either normal hearing or a mild-to-moderate hearing loss, mean FM discrimination thresholds obtained from CI users were higher in all conditions.
Conclusions: The ability to process the pattern of frequency excursions of stochastic FM may, in part, have a common basis with speech perception in noise. Discrimination of differences in the temporally distributed place coding of the stimulus could serve as this common basis for CI users.
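The statistical approach described above, pairwise Pearson correlations screened with a family-wise error correction, can be sketched in a few lines of Python. This is an illustration only: a Bonferroni criterion stands in for whichever family-wise correction the authors used, and no values from the study appear here.

```python
import math

def pearson_r(x, y):
    # Pearson product-moment correlation between two equal-length samples
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def bonferroni_significant(p_values, alpha=0.05):
    # family-wise error control: compare each p to alpha divided by the
    # number of tests in the family
    m = len(p_values)
    return [p < alpha / m for p in p_values]
```

Commonality analysis, which partitions regression variance into unique and shared components across predictors, would be layered on top of correlations like these.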


2011 · Vol. 22 (09) · pp. 623–632
Author(s): René H. Gifford, Amy P. Olund, Melissa DeJong

Background: Current cochlear implant recipients are achieving increasingly high levels of speech recognition; however, background noise continues to significantly degrade speech understanding for even the best performers. Newer-generation Nucleus cochlear implant sound processors can be programmed with SmartSound strategies that have been shown to improve speech understanding in noise for adult cochlear implant recipients. The applicability of these strategies for use in children, however, is not fully understood nor widely accepted. Purpose: To assess speech perception for pediatric cochlear implant recipients in a realistic restaurant simulation generated by an eight-loudspeaker (R-SPACE™) array, in order to determine whether Nucleus sound processor SmartSound strategies yield improved sentence recognition in noise for children who learn language through the implant. Research Design: Single-subject, repeated-measures design. Study Sample: Twenty-two experimental subjects with cochlear implants (mean age 11.1 yr) and 25 control subjects with normal hearing (mean age 9.6 yr) participated in this prospective study. Intervention: Speech reception thresholds (SRTs) in semidiffuse restaurant noise originating from the eight-loudspeaker array were assessed with the experimental subjects' everyday program incorporating Adaptive Dynamic Range Optimization (ADRO), as well as with the addition of Autosensitivity Control (ASC). Data Collection and Analysis: Adaptive SRTs with Hearing in Noise Test (HINT) sentences were obtained for all 22 experimental subjects, and performance (in percent correct) was assessed at a fixed +6 dB SNR (signal-to-noise ratio) for a six-subject subset. A repeated-measures analysis of variance (ANOVA) evaluated the effect of SmartSound setting on the SRT in noise.
Results: The primary findings mirrored those reported previously with adult cochlear implant recipients: the addition of ASC to ADRO significantly improved speech recognition in noise for pediatric cochlear implant recipients. The mean improvement in the SRT with the addition of ASC to ADRO was 3.5 dB, for a mean SRT of 10.9 dB SNR. Thus, although these children acquired auditory/oral speech and language through cochlear implant(s) equipped with ADRO, the addition of ASC significantly improved their ability to recognize speech in high levels of diffuse background noise. The mean SRT for the control subjects with normal hearing was 0.0 dB SNR. Given the experimental group's mean SRT of 10.9 dB SNR, cochlear implants, even with the improvements observed with ASC, still do not completely overcome the speech perception deficit in noisy environments that accompanies a diagnosis of severe-to-profound hearing loss. Conclusion: SmartSound strategies currently available in the latest-generation Nucleus cochlear implant sound processors significantly improve speech understanding in realistic, semidiffuse noise for pediatric cochlear implant recipients. Despite the reluctance of pediatric audiologists to utilize SmartSound settings for regular use, the results of the current study support adding ASC to ADRO in a child's typical everyday program to improve speech perception in everyday listening environments.
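An adaptive SRT track of the kind used with HINT sentences adjusts the signal-to-noise ratio after each trial until it converges on the level yielding roughly 50% intelligibility. The sketch below is a deliberately simplified 1-down/1-up version in Python, not the actual HINT scoring rules or step sizes; `respond_correct` is a hypothetical stand-in for the listener's response.

```python
def adaptive_srt(respond_correct, start_snr=20.0, step=2.0, n_trials=20):
    # 1-down/1-up track: lower the SNR after a correct response, raise it
    # after an incorrect one, converging on ~50% correct
    snr = start_snr
    history = []
    for _ in range(n_trials):
        history.append(snr)
        if respond_correct(snr):
            snr -= step   # correct: make the next trial harder
        else:
            snr += step   # incorrect: make it easier
    # estimate the SRT as the mean SNR over the second half of the track
    tail = history[n_trials // 2:]
    return sum(tail) / len(tail)

# hypothetical deterministic listener: correct whenever SNR >= 10 dB
estimated_srt = adaptive_srt(lambda snr: snr >= 10.0)
```

With a real listener the response at a given SNR is probabilistic, so the track oscillates around the SRT rather than locking onto it; averaging over reversals or the latter half of the track smooths this out.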


2021
Author(s): Joel I. Berger, Phillip E. Gander, Subong Kim, Adam T. Schwalje, Jihwan Woo, ...

Objectives: Understanding speech in noise (SiN) is a complex task that recruits multiple cortical subsystems. Individuals vary in their ability to understand SiN. This variability cannot be explained by simple peripheral hearing profiles, but recent work by our group (Kim et al., 2021, NeuroImage) highlighted central neural factors underlying the variance in SiN ability in normal-hearing (NH) subjects. The current study examined neural predictors of SiN ability in a large cohort of cochlear implant (CI) users, with the long-term goal of developing a simple electrophysiological correlate that could be implemented in clinics. Design: We recorded electroencephalography (EEG) in 114 post-lingually deafened CI users while they completed the California Consonant Test (CCT), a word-in-noise task. In many subjects, data were also collected on two other commonly used clinical measures of speech perception: a word-in-quiet task (Consonant-Nucleus-Consonant [CNC] words) and a sentence-in-noise task (AzBio sentences). Neural activity was assessed at a single vertex electrode (Cz) to maximize generalizability to clinical situations. The N1-P2 complex of event-related potentials (ERPs) at this location was entered into multiple linear regression analyses, along with several demographic and hearing factors, as predictors of SiN performance. Results: In general, there was good agreement among the scores on the three speech perception tasks. ERP amplitudes did not predict AzBio performance, which was instead predicted by duration of device use, low-frequency hearing thresholds, and age. However, ERP amplitudes were strong predictors of performance on both word recognition tasks: the CCT (conducted simultaneously with EEG recording) and the CNC (conducted offline). These correlations held even after accounting for known predictors of performance, including residual low-frequency hearing thresholds.
In CI users, better performance was predicted by an increased cortical response to the target word, in contrast to previous reports in normal-hearing subjects, in whom speech perception ability was accounted for by the ability to suppress noise. Conclusions: These data indicate a neurophysiological correlate of SiN performance that can be captured relatively easily in the clinic, thereby revealing a richer profile of an individual's hearing performance than psychoacoustic measures alone. These results also highlight important differences between sentence and word recognition measures of performance and suggest that individual differences in these measures may be underpinned by different mechanisms. Finally, the contrast with prior reports of NH listeners on the same task suggests that CI users' performance may be explained by a different weighting of neural processes than in NH listeners.
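As a concrete illustration of the kind of single-electrode measure described above, the sketch below extracts a peak-to-peak N1-P2 amplitude from a sampled ERP waveform. The latency windows are generic textbook values, not the ones used in this study, and the waveform is a toy example.

```python
def n1_p2_amplitude(times, erp, n1_window=(0.08, 0.15), p2_window=(0.15, 0.25)):
    # N1 is the most negative point within its latency window, P2 the most
    # positive within its window; the N1-P2 complex is their peak-to-peak span
    n1 = min(v for t, v in zip(times, erp) if n1_window[0] <= t <= n1_window[1])
    p2 = max(v for t, v in zip(times, erp) if p2_window[0] <= t <= p2_window[1])
    return p2 - n1

# toy waveform sampled every 50 ms (times in seconds, amplitudes in microvolts)
times = [0.00, 0.05, 0.10, 0.15, 0.20, 0.25]
erp_uv = [0.0, 0.5, -4.0, 0.0, 6.0, 2.0]
amplitude = n1_p2_amplitude(times, erp_uv)
```

A value like `amplitude` would then enter a multiple regression alongside demographic and audiometric predictors, as the abstract describes.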


Author(s): Yones Lotfi, Jamileh Chupani, Mohanna Javanbakht, Enayatollah Bakhshi

Background and Aim: In most everyday settings, speech is heard in the presence of competing sounds, and speech perception in noise is affected by various factors, including cognitive ones. In this regard, bilingualism is a phenomenon that changes cognitive and behavioral processes as well as the nervous system. This study aimed to evaluate speech perception in noise in Kurd-Persian bilinguals versus Persian monolinguals. Methods: This descriptive-analytic study was performed on 92 students with normal hearing: 46 Kurd-Persian bilinguals with a mean (SD) age of 22.73 (1.92) years and 46 Persian monolinguals with a mean (SD) age of 22.71 (2.28) years. They were examined with the consonant-vowel in noise (CV in noise) test and the quick speech-in-noise (Q-SIN) test. The obtained data were analyzed with SPSS 21. Results: Both tests showed differences between bilingual and monolingual subjects. In both groups, reducing the signal-to-noise ratio led to lower scores, but the decrease in the CV in noise test was smaller for bilinguals than for monolinguals (p < 0.001), whereas in the Q-SIN test the drop in bilinguals' scores was larger than in monolinguals' (p = 0.002). Conclusion: Kurd-Persian bilinguals performed better than Persian monolinguals on the CV in noise test but worse on the Q-SIN test.


2020
Author(s): Chelsea Blankenship, Jareen Meinzen-Derr, Fawen Zhang

Objective: Individual differences in temporal processing contribute strongly to the large variability in speech recognition performance observed among cochlear implant (CI) recipients. Temporal processing is traditionally measured with a behavioral gap detection task, so it can be challenging or infeasible to obtain reliable responses from young children and individuals with disabilities. Within-frequency gap detection (pre- and post-gap markers identical in frequency) is more common, yet across-frequency gap detection (pre- and post-gap markers spectrally distinct) is thought to be more important for speech perception, because the phonemes that precede and follow rapid temporal cues are rarely identical in frequency. However, only limited studies have examined across-frequency temporal processing in CI recipients, none of which included across-frequency cortical auditory evoked potentials (CAEPs) or examined the correlation between across-frequency gap detection and speech perception. The purpose of this study was to evaluate behavioral and electrophysiological measures of across-frequency temporal processing and speech recognition in normal-hearing (NH) individuals and CI recipients. Design: Eleven post-lingually deafened adult CI recipients (n = 15 ears; mean age = 50.4 yr) and eleven age- and gender-matched NH individuals (n = 15 ears; mean age = 49.0 yr) participated. Speech perception was evaluated using the Minimum Speech Test Battery for Adult Cochlear Implant Users (CNC, AzBio, BKB-SIN). Across-frequency behavioral gap detection thresholds (GDTs; 2-kHz pre-gap tone, 1-kHz post-gap tone) were measured using an adaptive, two-alternative forced-choice paradigm. Across-frequency CAEPs were measured in four gap duration conditions: supra-threshold (behavioral GDT × 3), threshold (behavioral GDT), sub-threshold (behavioral GDT / 3), and reference (no gap).
Group differences in behavioral GDTs and in CAEP amplitude and latency were evaluated using multiple mixed-effects models. Bivariate and multivariate canonical correlation analyses were used to evaluate the relationships among CAEP amplitude and latency, behavioral GDTs, and speech perception. Results: No significant effect of participant group was observed for across-frequency GDTs; instead, older participants (> 50 yr) displayed larger GDTs than younger participants. CI recipients displayed increased P1 and N1 latencies compared to NH participants, and older participants displayed delayed N1 and P2 latencies compared to younger adults. Bivariate correlations between behavioral GDTs and speech perception measures were not significant (p > 0.01). Across-frequency canonical correlation analysis showed a significant relationship between the CAEP reference condition and behavioral measures of speech perception and temporal processing. Conclusions: CI recipients showed across-frequency GDTs similar to those of NH participants; however, older participants (> 50 yr) displayed poorer temporal processing (larger GDTs) than younger participants. CI recipients and older participants displayed less efficient neural processing of the acoustic stimulus and slower transmission to the auditory cortex. No effect of gap duration on CAEP amplitude or latency was observed. The canonical correlation analysis suggests that better cortical detection of frequency changes is correlated with better word and sentence understanding in quiet and in noise.
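The adaptive two-alternative forced-choice procedure mentioned above is typically a transformed up-down staircase. The sketch below implements a generic 2-down/1-up rule (converging on roughly 70.7% correct) with multiplicative steps; the starting gap, step factor, and reversal count are illustrative choices, not the study's parameters, and `detects_gap` is a hypothetical stand-in for the listener's response.

```python
def gdt_staircase(detects_gap, start_ms=64.0, factor=2.0, n_reversals=6):
    # transformed 2-down/1-up staircase for a gap detection threshold (GDT):
    # shorten the gap after two consecutive correct trials, lengthen it after
    # each incorrect trial; stop after a fixed number of direction reversals
    gap = start_ms
    correct_in_row = 0
    direction = 0           # -1 = track moving down, +1 = moving up
    reversals = []
    while len(reversals) < n_reversals:
        if detects_gap(gap):
            correct_in_row += 1
            if correct_in_row == 2:     # two correct in a row: go down
                if direction == 1:      # down after up = reversal
                    reversals.append(gap)
                direction = -1
                gap /= factor
                correct_in_row = 0
        else:
            correct_in_row = 0
            if direction == -1:         # up after down = reversal
                reversals.append(gap)
            direction = 1
            gap *= factor
    # threshold: geometric mean of the last four reversal points
    tail = reversals[-4:]
    product = 1.0
    for g in tail:
        product *= g
    return product ** (1.0 / len(tail))

# hypothetical deterministic listener: hears any gap of 8 ms or longer
threshold_ms = gdt_staircase(lambda gap_ms: gap_ms >= 8.0)
```

With this rule the track brackets the listener's limit and the geometric mean of the final reversals estimates the GDT; the supra- and sub-threshold CAEP conditions described above would then be derived by multiplying and dividing that estimate by three.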

