Role of semantic context and talker variability in speech perception of cochlear-implant users and normal-hearing listeners

2021 ◽  
Vol 149 (2) ◽  
pp. 1224-1239
Author(s):  
Erin R. O'Neill ◽  
Morgan N. Parke ◽  
Heather A. Kreft ◽  
Andrew J. Oxenham
2010 ◽  
Vol 10 ◽  
pp. 329-339 ◽  
Author(s):  
Torsten Rahne ◽  
Michael Ziese ◽  
Dorothea Rostalski ◽  
Roland Mühler

This paper describes a logatome discrimination test for the assessment of speech perception in cochlear implant users (CI users), based on a multilingual speech database, the Oldenburg Logatome Corpus, which was originally recorded for the comparison of human and automated speech recognition. The logatome discrimination task is based on the presentation of 100 logatome pairs (i.e., nonsense syllables) with balanced representations of alternating “vowel-replacement” and “consonant-replacement” paradigms in order to assess phoneme confusions. Thirteen adult normal-hearing listeners and eight adult CI users, including both good and poor performers, were included in the study and completed the test after their speech intelligibility abilities were evaluated with an established sentence test in noise. Furthermore, discrimination abilities were measured electrophysiologically by recording the mismatch negativity (MMN) as a component of auditory event-related potentials. The results show a clear MMN response only for normal-hearing listeners and CI users with good performance, correlating with their logatome discrimination abilities. Discrimination scores were higher for vowel-replacement paradigms than for consonant-replacement paradigms. We conclude that the logatome discrimination test is well suited to monitor the speech perception skills of CI users. Given the large number of available spoken logatome items, the Oldenburg Logatome Corpus appears to provide a useful and powerful basis for further development of speech perception tests for CI users.


2015 ◽  
Vol 43 (2) ◽  
pp. 310-337 ◽  
Author(s):  
MARCEL R. GIEZEN ◽  
PAOLA ESCUDERO ◽  
ANNE E. BAKER

Abstract: This study investigates the role of acoustic salience and hearing impairment in learning phonologically minimal pairs. Picture-matching and object-matching tasks were used to investigate the learning of consonant and vowel minimal pairs in five- to six-year-old deaf children with a cochlear implant (CI), and children of the same age with normal hearing (NH). In both tasks, the CI children showed clear difficulties with learning minimal pairs. The NH children also showed some difficulties, however, particularly in the picture-matching task. Vowel minimal pairs were learned more successfully than consonant minimal pairs, particularly in the object-matching task. These results suggest that the ability to encode phonetic detail in novel words is not fully developed at age six and is affected by task demands and acoustic salience. CI children experience persistent difficulties with accurately mapping sound contrasts to novel meanings, but seem to benefit from the relative acoustic salience of vowel sounds.


2015 ◽  
Vol 26 (06) ◽  
pp. 572-581 ◽  
Author(s):  
Stanley Sheft ◽  
Min-Yu Cheng ◽  
Valeriy Shafiro

Background: Past work has shown that low-rate frequency modulation (FM) may help preserve signal coherence, aid segmentation at word and syllable boundaries, and benefit speech intelligibility in the presence of a masker. Purpose: This study evaluated whether difficulties in speech perception by cochlear implant (CI) users relate to a deficit in the ability to discriminate among stochastic low-rate patterns of FM. Research Design: This is a correlational study assessing the association between the ability to discriminate stochastic patterns of low-rate FM and the intelligibility of speech in noise. Study Sample: Thirteen postlingually deafened adult CI users participated in this study. Data Collection and Analysis: Using modulators derived from 5-Hz lowpass noise applied to a 1-kHz carrier, discrimination thresholds were measured in terms of frequency excursion (both in quiet and with a speech-babble masker present), stimulus duration, and signal-to-noise ratio in the presence of a speech-babble masker. Speech perception ability was assessed in the presence of the same speech-babble masker. Relationships were evaluated with Pearson product–moment correlation analysis, with correction for family-wise error, and with commonality analysis to determine the unique and common contributions of the psychoacoustic variables to the association with speech ability. Results: Significant correlations were obtained between masked speech intelligibility and three metrics of FM discrimination involving either signal-to-noise ratio or stimulus duration, with shared variance among the three measures accounting for much of the effect. Compared with past results from young normal-hearing adults and older adults with either normal hearing or a mild-to-moderate hearing loss, mean FM discrimination thresholds obtained from CI users were higher in all conditions.
Conclusions: The ability to process the pattern of frequency excursions of stochastic FM may, in part, have a common basis with speech perception in noise. Discrimination of differences in the temporally distributed place coding of the stimulus could serve as this common basis for CI users.
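The commonality analysis mentioned in the abstract above partitions the variance in a criterion (here, masked speech intelligibility) into portions uniquely attributable to each predictor and a portion they share. A minimal sketch of the two-predictor case, using ordinary least squares and hypothetical variable names (the actual psychoacoustic measures and data are not reproduced here):

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an OLS fit of y on the columns of X (intercept added)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    b, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ b
    ss_res = resid @ resid
    ss_tot = ((y - y.mean()) ** 2).sum()
    return 1.0 - ss_res / ss_tot

def commonality_two_predictors(x1, x2, y):
    """Partition R^2 for two predictors into unique and common parts.

    unique_x1 + unique_x2 + common equals the R^2 of the full model.
    """
    r2_1 = r_squared(x1[:, None], y)            # x1 alone
    r2_2 = r_squared(x2[:, None], y)            # x2 alone
    r2_full = r_squared(np.column_stack([x1, x2]), y)
    return {
        "unique_x1": r2_full - r2_2,            # gain from adding x1 last
        "unique_x2": r2_full - r2_1,            # gain from adding x2 last
        "common": r2_1 + r2_2 - r2_full,        # variance shared by x1 and x2
    }
```

With the study's three correlated FM metrics, the same logic extends to all subsets of predictors (2^k − 1 components); the large "common" component would reflect the shared variance the authors report.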


1990 ◽  
Vol 33 (1) ◽  
pp. 163-173 ◽  
Author(s):  
Brian E. Walden ◽  
Allen A. Montgomery ◽  
Robert A. Prosek ◽  
David B. Hawkins

Intersensory biasing occurs when cues in one sensory modality influence the perception of discrepant cues in another modality. Visual biasing of auditory stop consonant perception was examined in two related experiments in an attempt to clarify the effect of hearing impairment on susceptibility to visual biasing of auditory speech perception. Fourteen computer-generated acoustic approximations of consonant-vowel syllables forming a /ba-da-ga/ continuum were presented for labeling as one of the three exemplars, via audition alone and in synchrony with natural visual articulations of /ba/ and of /ga/. Labeling functions were generated for each test condition showing the percentage of /ba/, /da/, and /ga/ responses to each of the 14 synthetic syllables. The subjects of the first experiment were 15 normal-hearing and 15 hearing-impaired observers. The hearing-impaired subjects demonstrated a greater susceptibility to biasing from visual cues than did the normal-hearing subjects. In the second experiment, the auditory stimuli were presented in a low-level background noise to 15 normal-hearing observers. A comparison of their labeling responses with those from the first experiment suggested that hearing-impaired persons may develop a propensity to rely on visual cues as a result of long-term hearing impairment. The results are discussed in terms of theories of intersensory bias.


2018 ◽  
Vol 57 (11) ◽  
pp. 851-857 ◽  
Author(s):  
Hilal Dincer D’Alessandro ◽  
Patrick J. Boyle ◽  
Deborah Ballantyne ◽  
Marco De Vincentiis ◽  
Patrizia Mancini
