The Impact of Neurocognitive Skills on Recognition of Spectrally Degraded Sentences

2021 ◽  
Vol 32 (08) ◽  
pp. 528-536
Author(s):  
Jessica H. Lewis ◽  
Irina Castellanos ◽  
Aaron C. Moberly

Abstract Background Recent models theorize that neurocognitive resources are deployed differently during speech recognition depending on task demands, such as the severity of degradation of the signal or the modality (auditory vs. audiovisual [AV]). This concept is particularly relevant to the adult cochlear implant (CI) population, considering the large variability among CI users in their spectro-temporal processing abilities. However, disentangling the effects of individual differences in spectro-temporal processing and neurocognitive skills on speech recognition in clinical populations of adult CI users is challenging. Thus, this study investigated the relationship between neurocognitive functions and recognition of spectrally degraded speech in a group of young adult normal-hearing (NH) listeners. Purpose The aim of this study was to manipulate the degree of spectral degradation and the modality of speech presented to young adult NH listeners to determine whether deployment of neurocognitive skills would be affected. Research Design Correlational study design. Study Sample Twenty-one NH college students. Data Collection and Analysis Participants listened to sentences in three spectral-degradation conditions: no degradation (clear sentences); moderate degradation (8-channel noise-vocoded); and high degradation (4-channel noise-vocoded). Thirty sentences were presented in an auditory-only (A-only) fashion and in an AV fashion. Visual assessments from the National Institutes of Health Toolbox Cognition Battery were completed to evaluate working memory, inhibition-concentration, cognitive flexibility, and processing speed. Analyses of variance compared speech recognition performance across spectral-degradation conditions and modalities. Bivariate correlation analyses were performed between speech recognition performance and the neurocognitive measures in the various test conditions. 
Results Main effects on sentence recognition were found for degree of degradation (p < 0.001) and modality (p < 0.001). Inhibition-concentration skills moderately correlated (r = 0.45, p = 0.02) with recognition scores for sentences that were moderately degraded in the A-only condition. No correlations were found between neurocognitive scores and AV speech recognition scores. Conclusions Inhibition-concentration skills are deployed differentially during sentence recognition, depending on the level of signal degradation. Additional studies will be required to examine these relations in clinical populations such as adult CI users.
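The 8- and 4-channel noise-vocoding used above can be sketched generically. The block below is a minimal channel vocoder, not the authors' exact processing chain; the band edges, filter order, and Hilbert-envelope method are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def noise_vocode(signal, fs, n_channels=8, f_lo=100.0, f_hi=7000.0):
    """Generic channel vocoder: split the input into log-spaced bands,
    extract each band's amplitude envelope, and use the envelope to
    modulate band-limited noise. Fewer channels = more degradation."""
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)   # log-spaced band edges
    carrier = np.random.default_rng(0).standard_normal(signal.size)
    out = np.zeros(signal.size)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfilt(sos, signal)
        envelope = np.abs(hilbert(band))               # Hilbert envelope
        out += envelope * sosfilt(sos, carrier)        # envelope-modulated noise
    return out
```

With 8 channels the temporal-envelope cues still support substantial sentence intelligibility for NH listeners; dropping to 4 channels removes spectral detail and makes recognition markedly harder, which is the degradation contrast the study exploits.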

1981 ◽  
Vol 24 (2) ◽  
pp. 207-216 ◽  
Author(s):  
Brian E. Walden ◽  
Sue A. Erdman ◽  
Allen A. Montgomery ◽  
Daniel M. Schwartz ◽  
Robert A. Prosek

The purpose of this research was to determine some of the effects of consonant recognition training on the speech recognition performance of hearing-impaired adults. Two groups of ten subjects each received seven hours of either auditory or visual consonant recognition training, in addition to a standard two-week, group-oriented, inpatient aural rehabilitation program. A third group of fifteen subjects received the standard two-week program but no supplementary individual consonant recognition training. An audiovisual sentence recognition test, as well as tests of auditory and visual consonant recognition, was administered both before and following training. Subjects in all three groups significantly improved their audiovisual sentence recognition performance, but subjects receiving the individual consonant recognition training improved significantly more than subjects receiving only the standard two-week program. A significant increase in consonant recognition performance was observed in the two groups receiving the auditory or visual consonant recognition training. The data are discussed from varying statistical and clinical perspectives.


2019 ◽  
Vol 30 (02) ◽  
pp. 131-144 ◽  
Author(s):  
Erin M. Picou ◽  
Todd A. Ricketts

Abstract People with hearing loss experience difficulty understanding speech in noisy environments. Beamforming microphone arrays in hearing aids can improve the signal-to-noise ratio (SNR) and thus also speech recognition and subjective ratings. Unilateral beamformer arrays, also known as directional microphones, accomplish this improvement using two microphones in one hearing aid. Bilateral beamformer arrays, which combine information across four microphones in a bilateral fitting, further improve the SNR. Early bilateral beamformers were static with fixed attenuation patterns. Recently, adaptive bilateral beamformers have been introduced in commercial hearing aids. The purpose of this article was to evaluate the potential benefits of adaptive unilateral and bilateral beamformers for improving sentence recognition and subjective ratings in a laboratory setting. A secondary purpose was to identify potential participant factors that explain some of the variability in beamformer benefit. Participants were fitted with study hearing aids equipped with commercially available adaptive unilateral and bilateral beamformers. Participants completed sentence recognition testing in background noise using three hearing aid settings (omnidirectional, unilateral beamformer, bilateral beamformer) and two noise source configurations (surround, side). After each condition, participants made subjective ratings of their perceived work, desire to control the situation, willingness to give up, and tiredness. Eighteen adults (50–80 yr, M = 66.2, σ = 8.6) with symmetrical mild sloping to severe hearing loss participated. Sentence recognition scores and subjective ratings were analyzed separately using generalized linear models with two within-subject factors (hearing aid microphone and noise configuration). 
Two benefit scores were calculated: (1) unilateral beamformer benefit (relative to performance with omnidirectional) and (2) additional bilateral beamformer benefit (relative to performance with unilateral beamformer). Hierarchical multiple linear regression was used to determine if beamformer benefit was associated with participant factors (age, degree of hearing loss, unaided speech-in-noise ability, spatial release from masking, and performance in the omnidirectional setting). Sentence recognition and subjective ratings of work, control, and tiredness were better with both types of beamformers relative to the omnidirectional conditions. In addition, the bilateral beamformer offered small additional improvements relative to the unilateral beamformer in terms of sentence recognition and subjective ratings of tiredness. Speech recognition performance and subjective ratings were generally independent of noise configuration. Performance in the omnidirectional setting and pure-tone average were independently related to unilateral beamformer benefits. Those with the lowest performance or the largest degree of hearing loss benefited the most. No factors were significantly related to additional bilateral beamformer benefit. Adaptive bilateral beamformers offer additional advantages over adaptive unilateral beamformers in hearing aids. The small additional advantages with the adaptive beamformer are comparable to those reported in the literature with static beamformers. Although the additional benefits are small, they positively affected subjective ratings of tiredness. These data suggest that adaptive bilateral beamformers have the potential to improve listening in difficult situations for hearing aid users. In addition, patients who struggle the most without beamforming microphones may also benefit the most from the technology.
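The SNR advantage from combining microphones, which underlies both unilateral and bilateral beamformers, can be illustrated with a toy broadside array: a frontal signal arrives in phase at all microphones while the noise is uncorrelated across them, so averaging the channels raises the SNR by roughly 10·log10(M) dB for M microphones. This sketch shows only that static delay-and-sum principle with zero delays; the adaptive commercial beamformers in the study additionally steer attenuation toward noise sources.

```python
import numpy as np

def snr_db(clean, observed):
    """SNR of `observed` in dB, given the known clean component."""
    noise = observed - clean
    return 10 * np.log10(np.sum(clean**2) / np.sum(noise**2))

rng = np.random.default_rng(1)
fs = 16000
t = np.arange(fs) / fs
speech = np.sin(2 * np.pi * 300 * t)     # stand-in for a frontal talker

# Four microphones: identical frontal signal, independent noise at each mic.
mics = [speech + 0.5 * rng.standard_normal(t.size) for _ in range(4)]

single = snr_db(speech, mics[0])                  # one omnidirectional mic
combined = snr_db(speech, np.mean(mics, axis=0))  # delay-and-sum, zero delays
# Expected improvement for 4 mics: about 10*log10(4) ≈ 6 dB.
```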


2011 ◽  
Vol 22 (01) ◽  
pp. 013-022 ◽  
Author(s):  
Ursula M. Findlen ◽  
Christina M. Roup

Background: The effects of stimulus material, lexical content, and response condition on dichotic speech recognition performance characteristics were examined for normal-hearing young adult listeners. No previous investigation has systematically examined the effects of stimulus material with constant phonetic content but varied lexical content across three response conditions typically used to evaluate binaural auditory processing abilities. Purpose: To examine how dichotic speech recognition performance varies for stimulus materials with constant phonetic content but varied lexical content across the free recall, directed recall right, and directed recall left response conditions. Research Design: Dichotic speech recognition was evaluated using consonant-vowel-consonant (CVC) word and nonsense CVC syllable stimuli administered in the free recall, directed right, and directed left response conditions in a repeated measures experimental design. Study Sample: Thirty normal-hearing young adults (15 male, 15 female) served as participants. Participants ranged in age from 18 to 31 yr and were all right-handed. Data Collection and Analysis: Participants engaged in monaural speech recognition and dichotic speech recognition tasks. Percent-correct recognition per ear and the ear advantage for dichotic speech recognition were calculated and evaluated using a repeated measures analysis of variance (ANOVA) statistical procedure. Results: Dichotic speech recognition performance for nonsense CVC syllables was significantly poorer than performance for CVC words, suggesting that lexical content impacts performance on dichotic speech recognition tasks. Performance also varied across response condition, which is consistent with previous studies of dichotic speech recognition. Conclusions: Lexical content of stimulus materials impacts performance characteristics for dichotic speech recognition tasks in the normal-hearing young adult population. 
The use of nonsense CVC syllable material may provide a way to assess dichotic speech recognition performance while potentially lessening the effects of lexical content on performance.
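The ear advantage computed above is typically reported as a laterality index. The abstract does not give its formula, so the version below is one common convention (positive values indicate a right-ear advantage), not necessarily the authors' exact computation.

```python
def ear_advantage(right_correct, left_correct):
    """Laterality index in percent: (R - L) / (R + L) * 100.
    Positive = right-ear advantage; 0 = no asymmetry."""
    total = right_correct + left_correct
    if total == 0:
        return 0.0
    return 100.0 * (right_correct - left_correct) / total

# e.g. 18 right-ear vs. 12 left-ear items correct -> +20 (right-ear advantage)
```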


1993 ◽  
Vol 36 (6) ◽  
pp. 1276-1285 ◽  
Author(s):  
Sandra Gordon-Salant ◽  
Peter J. Fitzgibbons

This study investigated factors that contribute to deficits of elderly listeners in recognizing speech that is degraded by temporal waveform distortion. Young and elderly listeners with normal hearing sensitivity and with mild-to-moderate, sloping sensorineural hearing losses were evaluated. Low-predictability (LP) sentences from the Revised Speech Perception in Noise test (R-SPIN) (Bilger, Nuetzel, Rabinowitz, & Rzeczkowski, 1984) were presented to subjects in undistorted form and in three forms of distortion: time compression, reverberation, and interruption. Percent-correct recognition scores indicated that age and hearing impairment contributed independently to deficits in recognizing all forms of temporally distorted speech. In addition, subjects’ auditory temporal processing abilities were assessed on duration discrimination and gap detection tasks. Canonical correlation procedures showed that some of the suprathreshold temporal processing measures, especially gap duration discrimination, contributed to the ability to recognize reverberant speech. The overall conclusion is that age-related factors other than peripheral hearing loss contribute to diminished speech recognition performance of elderly listeners.
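Of the three temporal distortions used (time compression, reverberation, interruption), periodic interruption is the simplest to sketch: the waveform is gated on and off with a square wave. The gating rate and duty cycle below are illustrative parameters, not those of the study.

```python
import numpy as np

def interrupt(signal, fs, rate_hz=10.0, duty=0.5):
    """Periodically gate a signal on and off (square-wave interruption).
    `rate_hz` is the number of gating cycles per second; `duty` is the
    fraction of each cycle during which the signal is passed through."""
    t = np.arange(signal.size) / fs
    gate = ((t * rate_hz) % 1.0) < duty   # True during the "on" phase
    return signal * gate
```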


PLoS ONE ◽  
2020 ◽  
Vol 15 (12) ◽  
pp. e0244632
Author(s):  
Matthew J. Goupell ◽  
Garrison T. Draves ◽  
Ruth Y. Litovsky

A vocoder is used to simulate cochlear-implant sound processing in normal-hearing listeners. Typically, there is rapid improvement in vocoded speech recognition, but it is unclear if the improvement rate differs across age groups and speech materials. Children (8–10 years) and young adults (18–26 years) were trained and tested over 2 days (4 hours) on recognition of eight-channel noise-vocoded words and sentences, in quiet and in the presence of multi-talker babble at signal-to-noise ratios of 0, +5, and +10 dB. Children achieved poorer performance than adults in all conditions, for both word and sentence recognition. With training, vocoded speech recognition improvement rates were not significantly different between children and adults, suggesting that learning to process speech cues degraded by vocoding does not differ across these age groups or types of speech materials. Furthermore, this result confirms that the acutely measured age difference in vocoded speech recognition persists after extended training.
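Presenting vocoded speech in multi-talker babble at fixed SNRs (0, +5, +10 dB) requires scaling the babble against the speech. A minimal sketch of that mixing step, assuming power-based level matching over the whole stimulus:

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so that speech power / scaled-noise power equals the
    requested SNR in dB, then return the mixture speech + scaled noise."""
    p_speech = np.mean(speech**2)
    p_noise = np.mean(noise**2)
    gain = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + gain * noise
```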


2019 ◽  
Vol 62 (10) ◽  
pp. 3834-3850 ◽  
Author(s):  
Todd A. Ricketts ◽  
Erin M. Picou ◽  
James Shehorn ◽  
Andrew B. Dittberner

Purpose Previous evidence supports benefits of bilateral hearing aids, relative to unilateral hearing aid use, in laboratory environments using audio-only (AO) stimuli and relatively simple tasks. The purpose of this study was to evaluate bilateral hearing aid benefits in ecologically relevant laboratory settings, with and without visual cues. In addition, we evaluated the relationship between bilateral benefit and clinically viable predictive variables. Method Participants included 32 adult listeners with hearing loss ranging from mild–moderate to severe–profound. Test conditions varied by hearing aid fitting type (unilateral, bilateral) and modality (AO, audiovisual). We tested participants in complex environments that evaluated the following domains: sentence recognition, word recognition, behavioral listening effort, gross localization, and subjective ratings of spatialization. Signal-to-noise ratio was adjusted to provide similar unilateral speech recognition performance in both modalities and across procedures. Results Significant and similar bilateral benefits were measured for both modalities on all tasks except listening effort, where bilateral benefits were not identified in either modality. Predictive variables were related to bilateral benefits in some conditions. With audiovisual stimuli, increasing hearing loss, unaided speech recognition in noise, and unaided subjective spatial ability were significantly correlated with increased benefits for many outcomes. With AO stimuli, these same predictive variables were not significantly correlated with outcomes. No predictive variables were correlated with bilateral benefits for sentence recognition in either modality. Conclusions Hearing aid users can expect significant bilateral hearing aid advantages for ecologically relevant, complex laboratory tests. 
Although future confirmatory work is necessary, these data indicate the presence of vision strengthens the relationship between bilateral benefits and degree of hearing loss.


Vision ◽  
2021 ◽  
Vol 5 (2) ◽  
pp. 18
Author(s):  
Olga Lukashova-Sanz ◽  
Siegfried Wahl ◽  
Thomas S. A. Wallis ◽  
Katharina Rifai

With rapidly developing technology, visual cues have become a powerful tool for deliberately guiding attention and affecting human performance. Using cues to manipulate attention introduces a trade-off between increased performance at cued locations and decreased performance at uncued locations. For higher efficacy of visual cues designed to purposely direct the user’s attention, it is important to know how manipulating cue properties affects attention. In this verification study, we addressed how varying cue complexity impacts the allocation of spatial endogenous covert attention in space and time. To gradually vary cue complexity, the discriminability of the cue was systematically modulated using a shape-based design. Performance was compared in attended and unattended locations in an orientation-discrimination task. We evaluated additional temporal costs due to processing of a more complex cue by comparing performance at two different inter-stimulus intervals. In preliminary data, attention scaled with cue discriminability, even at supra-threshold levels of cue discriminability. Furthermore, individual cue processing times partly impacted performance for the most complex, but not for simpler, cues. We conclude, first, that cue complexity as expressed by discriminability modulates endogenous covert attention at supra-threshold discriminability levels, with increasing benefits and decreasing costs; and second, that it is important to consider the temporal processing costs of complex visual cues.

