Evaluating Communicative Abilities of a Highly Unintelligible Preschooler

2002 ◽  
Vol 11 (3) ◽  
pp. 236-242 ◽  
Author(s):  
Barbara W. Hodson ◽  
Julie A. Scherz ◽  
Kathy H. Strattman

Procedures to examine the communication abilities of a highly unintelligible 4-year-old during a 90-minute evaluation session are explained in this article. Phonology, metaphonology, speech rate, stimulability, and receptive language are evaluated formally and informally. A conversational speech sample is used to provide information for assessing intelligibility/understandability, fluency, voice quality, prosody, and mean length of response. Methods for determining treatment goals are discussed in the final section.

2021 ◽  
pp. 1-26
Author(s):  
Teresa Pratt

Abstract This article argues for a focus on affect in sociolinguistic style. I integrate recent scholarship on affective practice (Wetherell 2015) and the circulation of affective value (Ahmed 2004b) in order to situate the linguistic and bodily semiotics of affect as components of stylistic practice. At a Bay Area public arts high school, ideologically distinct affects of chill or high-energy are co-constructed across signs and subjects. I analyze a group of cisgender young men's use of creaky voice quality, speech rate, and bodily hexis in enacting and circulating these affective values. Crucially, affect co-constructs students’ positioning within the high school political economy (as college-bound or not, artistically driven or not), highlighting the ideological motivations of stylistic practice. Building on recent scholarship, I propose that a more thorough consideration of affect can deepen our understanding of meaning-making as it occurs in everyday interaction in institutional settings. (Affect, political economy, embodiment, bricolage, voice quality, speech rate, high school)


2002 ◽  
Vol 45 (4) ◽  
pp. 689-699 ◽  
Author(s):  
Donald G. Jamieson ◽  
Vijay Parsa ◽  
Moneca C. Price ◽  
James Till

We investigated how standard speech coders, currently used in modern communication systems, affect the quality of the speech of persons who have common speech and voice disorders. Three standardized speech coders (GSM 6.10 RPELTP, FS1016 CELP, and FS1015 LPC) and two speech coders based on subband processing were evaluated for their performance. Coder effects were assessed by measuring the quality of speech samples both before and after processing by the speech coders. Speech quality was rated by 10 listeners with normal hearing on 28 different scales representing pitch and loudness changes, speech rate, laryngeal and resonatory dysfunction, and coder-induced distortions. Results showed that (a) nine scale items were consistently and reliably rated by the listeners; (b) all coders degraded speech quality on these nine scales, with the GSM and CELP coders providing the better quality speech; and (c) interactions between coders and individual voices did occur on several voice quality scales.


1997 ◽  
Vol 40 (4) ◽  
pp. 708-722 ◽  
Author(s):  
Lawrence D. Shriberg ◽  
Diane Austin ◽  
Barbara A. Lewis ◽  
Jane L. McSweeny ◽  
David L. Wilson

Research in normal and disordered phonology requires measures of speech production that are biolinguistically appropriate and psychometrically robust. Their conceptual and numeric properties must be well characterized, particularly because speech measures are increasingly appearing in large-scale epidemiologic, genetic, and other descriptive-explanatory database studies. This work provides a rationale for extensions to an articulation competence metric titled the Percentage of Consonants Correct (PCC; Shriberg & Kwiatkowski, 1982; Shriberg, Kwiatkowski, Best, Hengst, & Terselic-Weber, 1986), which is computed from a 5- to 10-minute conversational speech sample. Reliability and standard error of measurement estimates are provided for 9 of a set of 10 speech metrics, including the PCC. Discussion includes rationale for selecting one or more of the 10 metrics for specific clinical and research needs.
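The PCC described above reduces to a simple proportion: correct consonants divided by intended consonants in the sample. A minimal sketch, assuming a hypothetical pre-scored list of intended consonants (the actual transcription and scoring conventions are defined in Shriberg & Kwiatkowski, 1982):

```python
def pcc(consonant_scores):
    """Percentage of Consonants Correct.

    consonant_scores: list of booleans, one per consonant intended in the
    conversational sample, True if the consonant was produced correctly.
    """
    if not consonant_scores:
        raise ValueError("sample contains no scored consonants")
    correct = sum(consonant_scores)
    return 100.0 * correct / len(consonant_scores)

# Example: 45 of 60 intended consonants judged correct
sample = [True] * 45 + [False] * 15
print(round(pcc(sample), 1))  # 75.0
```

The scoring itself (which consonants count as intended, how distortions are judged) carries the real methodological weight; the arithmetic is deliberately trivial.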


2019 ◽  
Vol 23 (4) ◽  
pp. 187-192
Author(s):  
Jeeun Yoo ◽  
Hongyeop Oh ◽  
Seungyeop Jeong ◽  
In-Ki Jin

2021 ◽  
Vol 1 (2) ◽  
pp. 17-33
Author(s):  
Aline Neves Pessoa ◽  
Beatriz Cavalcanti de Albuquerque Caiuby Novaes ◽  
Lilian Kuhn Pereira ◽  
Zuleica Antonia Camargo

Acoustic and auditory-perceptual analysis procedures are clinical tools that support the understanding of the speech features of hearing-impaired children (HIC). Voice quality stems from the overlapping action of the larynx, the supralaryngeal vocal tract, and the level of muscular tension throughout the speech flow, while voice dynamics is characterized by variations in frequency, duration, and intensity. This research investigated acoustic and perceptual correlates of the voice quality and voice dynamics of a hearing-impaired child. The male subject (R), who uses a unilateral cochlear implant (UCI), had his speech production recorded during speech therapy sessions at age 5 (5 samples) and age 6 (5 samples); the two sets were labeled Cut A and Cut B, respectively. The recorded corpus was analyzed acoustically with the SGEXpressionEvaluator script (Barbosa, 2009) running on the free software Praat. The measures extracted automatically by the script are fundamental frequency (f0), the first derivative of f0, intensity, spectral fall, and long-term spectrum. The auditory-perceptual analysis of voice quality was based on the VPAS-PB protocol (Camargo and Madureira, 2008). The perceptual judgments and the acoustic measures were subjected to statistical analysis: first, the perceptual and acoustic data were analyzed separately through hierarchical agglomerative cluster analysis; subsequently, they were examined together through principal component analysis. Results revealed a correspondence between the acoustic and auditory-perceptual data.
In the samples from Cut B (recorded one year after Cut A), greater variability in the acoustic measures of f0 was observed, associated at the perceptual level with laryngeal hyperfunction, silent pauses, and a reduction in speech rate. The integrated acoustic and perceptual analysis made it possible to track the child's oral language development over time. The data analysis revealed several levels of interaction in the vocal tract (adjustments in the extent of lip, tongue, and jaw movement, associated with velopharyngeal adjustments and laryngeal muscular tension), together with elements of speech dynamics (habitual pitch and speech rate), in the speech of a child with a UCI across a one-year period of speech therapy. This information made it possible to characterize the child's evolution, particularly through auditory-perceptual descriptions that are phonetically motivated by the dynamics of speech.



2019 ◽  
Vol 10 (5) ◽  
pp. 68-74
Author(s):  
Phakkharawat Sittiprapaporn

Background: The Token Test, as originally conceived by De Renzi and Vignolo, is a subtle test of receptive language functions. Although it has been employed in numerous clinical studies since 1962, the linguistic properties of its commands have not been studied in Yawi-speaking aphasic patients. Aims and Objectives: The study aimed to describe the development of the Yawi Token Test (YWTT) and to investigate the test performance of normal Yawi-speaking participants before applying the test to Yawi-speaking aphasic patients. Materials and Methods: The YWTT, an adaptation of the Token Test, was administered to one hundred normal Yawi-speaking participants, ranging in age from 18 to 45 years, with a minimal educational level of Prathom 4, who were living in Pattani Province in southern Thailand. Results: Performance on Parts I-V and overall performance are reported. Overall composite scores on the trial version of the YWTT did not differ significantly from those on the final version. The mean YWTT scores of the trial and final versions were 59.40 (SD = 1.29; range: 56-61) and 60.44 (SD = 1.39; range: 56-61), respectively. The mean YWTT score overall (100 participants) was 60.42 (SD = 1.32; range: 56-61). Compared with the trial version, participants made markedly fewer errors on all parts of the final version. Conclusion: The YWTT is applicable to the differential diagnosis of the communicative abilities of Yawi-speaking aphasic patients and will be helpful for assessing auditory language comprehension in this population.


1994 ◽  
Vol 37 (2) ◽  
pp. 254-263 ◽  
Author(s):  
Patricia M. Zebrowski

The purpose of this study was to measure the duration of sound prolongations and sound/syllable repetitions (stutterings) in the conversational speech of school-age children who stutter. The relationships between duration and (a) frequency and type of speech disfluency, (b) number and rate of repeated units per instance of sound/syllable repetition, (c) overall speech rate, and (d) articulatory rate were also examined. Results indicated that for the children in this study the average duration of stuttering was approximately three-quarters of a second, and was not significantly correlated with age, length of post-onset interval, or frequency of speech disfluency. In addition, findings can be taken to suggest that part of the clinical significance of stuttering duration for children who stutter might lie in its relationship to the amount of sound prolongations these children produce, as well as their articulatory rate during fluent speech.
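The abstract distinguishes overall speech rate from articulatory rate. A common way to draw that distinction (an illustrative sketch, not necessarily the study's exact procedure) is that overall rate divides syllables by total sample duration, while articulatory rate excludes pause and disfluency time from the denominator:

```python
def speech_rates(n_syllables, total_sec, pause_sec, disfluency_sec):
    """Return (overall rate, articulatory rate) in syllables per second.

    Overall rate uses the full sample duration; articulatory rate uses
    only the time spent articulating (total minus pauses and disfluencies).
    """
    overall = n_syllables / total_sec
    articulatory = n_syllables / (total_sec - pause_sec - disfluency_sec)
    return overall, articulatory

overall, artic = speech_rates(n_syllables=120, total_sec=60.0,
                              pause_sec=12.0, disfluency_sec=8.0)
print(round(overall, 2), round(artic, 2))  # 2.0 3.0
```

The gap between the two measures widens as disfluency time grows, which is why articulatory rate during fluent stretches can be informative even when overall rate looks unremarkable.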


1978 ◽  
Vol 21 (1) ◽  
pp. 87-111 ◽  
Author(s):  
C. Chevrie-Muller ◽  
N. Seguier ◽  
A. Spira ◽  
M. Dordain

Through the rating of certain predetermined items for a group of 74 psychiatric patients with varying diagnoses and a group of 46 schizophrenic patients, the following areas were studied: (1) psychiatric symptomatology, (2) voice characteristics determined while listening to a recorded interview, and (3) the personality of the patient as perceived by the listener using the same recording. The relationships between the items in the three areas were tested by chi-square analysis, and significant relationships were established. The vocal characteristics of speech rate and melody are linked to the perceived degree of extroversion and dynamism of the subject listened to. Some psychiatric symptoms (impaired motor behaviour, withdrawal syndrome, anxiety, thinking disturbance) are related to certain voice characteristics. Some symptoms of schizophrenia and manic-depressive psychosis are related to personality traits perceived by the listener (passive, unemotional, uncommunicative, depressed, ...).


1970 ◽  
Vol 109 (3) ◽  
pp. 105-108 ◽  
Author(s):  
A. Kajackas ◽  
A. Anskaitis ◽  
D. Gursnys

In this paper, a method for evaluating varying conversational speech quality in wireless communications is proposed. The proposed algorithm evaluates quality degradations using indicators based on the count of lost frames and on voice activity indications. The correctness of the proposed algorithm is verified by comparing its test results with results obtained using the PESQ algorithm under the same conditions. The achieved average correlation coefficient is 0.975, and this result is independent of the frame-loss model and of the percentage of silence in the test sentences. The proposed algorithm can be implemented in mobile stations and used to evaluate speech quality during real conversations. Ill. 3, bibl. 13, tabl. 3 (in English; abstracts in English and Lithuanian). http://dx.doi.org/10.5755/j01.eee.109.3.182
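The validation step reported above compares the estimator's scores against PESQ scores with a correlation coefficient. A hedged sketch of that comparison, using the standard Pearson correlation and invented score arrays (the paper's actual test data and frame-loss conditions are not reproduced here):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

pesq = [4.2, 3.8, 3.1, 2.5, 1.9]       # hypothetical PESQ MOS scores
estimated = [4.1, 3.9, 3.0, 2.6, 2.0]  # hypothetical estimator output
print(round(pearson_r(pesq, estimated), 3))  # 0.994
```

A coefficient near 1 over many degradation conditions is what supports the paper's claim that the lightweight frame-loss indicators track the much costlier PESQ reference.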


PLoS ONE ◽  
2021 ◽  
Vol 16 (5) ◽  
pp. e0250969
Author(s):  
Tirza Biron ◽  
Daniel Baum ◽  
Dominik Freche ◽  
Nadav Matalon ◽  
Netanel Ehrmann ◽  
...  

Automatic speech recognition (ASR) and natural language processing (NLP) are expected to benefit from an effective, simple, and reliable method to automatically parse conversational speech. The ability to parse conversational speech depends crucially on the ability to identify boundaries between prosodic phrases. This is done naturally by the human ear, yet has proved surprisingly difficult to achieve reliably and simply in an automatic manner. Efforts to date have focused on detecting phrase boundaries using a variety of linguistic and acoustic cues. We propose a method which does not require model training and utilizes two prosodic cues that are based on ASR output. Boundaries are identified using discontinuities in speech rate (pre-boundary lengthening and phrase-initial acceleration) and silent pauses. The resulting phrases preserve syntactic validity, exhibit pitch reset, and compare well with manual tagging of prosodic boundaries. Collectively, our findings support the notion of prosodic phrases that represent coherent patterns across textual and acoustic parameters.
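The two cues named above, silent pauses and speech-rate discontinuities (pre-boundary lengthening followed by phrase-initial acceleration), can be sketched over ASR word timings. The thresholds and the word-tuple format below are illustrative assumptions, not the authors' parameters:

```python
def find_boundaries(words, min_pause=0.25, lengthening_ratio=1.8):
    """words: list of (word, start_sec, end_sec) tuples from ASR output.

    Returns indices of words after which a prosodic phrase boundary is
    hypothesized: either a long silent pause follows the word, or the word
    is markedly longer than the next one (lengthening then acceleration).
    """
    boundaries = []
    for i in range(len(words) - 1):
        _, start, end = words[i]
        _, next_start, next_end = words[i + 1]
        pause = next_start - end
        dur, next_dur = end - start, next_end - next_start
        slowed = next_dur > 0 and dur / next_dur >= lengthening_ratio
        if pause >= min_pause or slowed:
            boundaries.append(i)
    return boundaries

words = [("so", 0.00, 0.20), ("we", 0.22, 0.35), ("waited", 0.36, 0.90),
         ("then", 1.40, 1.55), ("left", 1.57, 1.90)]
print(find_boundaries(words))  # [2]: long pause after "waited"
```

Word durations would normally be normalized (e.g., per syllable or per phone) before comparing rates; raw word durations are used here only to keep the sketch short.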

