Speech stream segregation to control an ERP-based auditory BCI

Author(s):  
Francisco Velasco-Álvarez ◽  
Álvaro Fernández-Rodríguez ◽  
M. Teresa Medina-Juliá ◽  
Ricardo Ron-Angevin


Author(s):  
Ana Franco ◽  
Julia Eberlen ◽  
Arnaud Destrebecqz ◽  
Axel Cleeremans ◽  
Julie Bertels

Abstract. The Rapid Serial Visual Presentation procedure is a method widely used in visual perception research. In this paper, we propose an adaptation of this method that can be used with auditory material and enables the assessment of statistical learning in speech segmentation. Adult participants were exposed to an artificial speech stream composed of statistically defined trisyllabic nonsense words. They then performed a detection task on a Rapid Serial Auditory Presentation (RSAP) stream, in which they had to detect a target syllable within a short speech stream. Results showed that reaction times varied as a function of the statistical predictability of the syllable: the second and third syllables of each word were responded to faster than the first syllables. This result suggests that the RSAP procedure provides a reliable and sensitive indirect measure of auditory statistical learning.
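
A minimal sketch of the statistical structure this paradigm relies on (the syllables, word inventory, and stream length below are hypothetical, not the study's materials): trisyllabic nonsense words are concatenated into a continuous stream, so transitions within a word are highly predictable while transitions across word boundaries are not, which is the asymmetry the RSAP detection task probes.

```python
import random
from collections import defaultdict

# Hypothetical nonsense words; the actual stimuli of the study are not given here.
WORDS = [("bu", "pa", "do"), ("ti", "go", "la"), ("ro", "ki", "se"), ("me", "nu", "fa")]

def build_stream(n_words=300, seed=0):
    """Concatenate randomly ordered trisyllabic words, avoiding immediate repeats."""
    rng = random.Random(seed)
    stream, prev = [], None
    for _ in range(n_words):
        word = rng.choice([w for w in WORDS if w is not prev])
        stream.extend(word)
        prev = word
    return stream

def transitional_probabilities(stream):
    """Estimate P(next syllable | current syllable) from adjacent syllable pairs."""
    pair_counts, syll_counts = defaultdict(int), defaultdict(int)
    for a, b in zip(stream, stream[1:]):
        pair_counts[(a, b)] += 1
        syll_counts[a] += 1
    return {pair: count / syll_counts[pair[0]] for pair, count in pair_counts.items()}

stream = build_stream()
tps = transitional_probabilities(stream)
# Within-word transitions (e.g. "bu" -> "pa") come out close to 1.0, while
# across-word transitions (word-final syllable -> next word's first syllable)
# are much lower: the predictability gradient the reaction times track.
print(sorted(tps.items(), key=lambda kv: -kv[1])[:6])
```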


Author(s):  
Viktoriia Sviatchenko

The article provides a thorough account of A. A. Potebnia's views on the systemic nature of language as presented in his works on the historical phonetics of the Eastern Slavic languages, and examines the practical implementation of these ideas. The author emphasizes that Potebnia's understanding of the systemic character of phonetic changes, as a representative of the Kharkiv linguistic school, prompted him to search for their interrelations and to identify homogeneous phonetic laws that share a common cause and operate within a certain period of a language's history. It is noted that Potebnia focused on consonant changes that took place under different conditions. The causes of the phonetic laws discussed in the article cannot be reduced to the interaction of sounds in the speech stream; the material provided by Potebnia shows that they are to be found within the phonetic system itself. The author shares V. A. Glushchenko's view that Potebnia's investigations embrace all phonetic laws in the history of the consonant systems of the Eastern Slavic languages. The article also identifies the continuing relevance of Potebnia's research on the systemic nature of language for the linguistics of the 20th and early 21st centuries.


2021 ◽  
Vol 11 (1) ◽  
pp. 39
Author(s):  
Álvaro Fernández-Rodríguez ◽  
Ricardo Ron-Angevin ◽  
Ernesto J. Sanz-Arigita ◽  
Antoine Parize ◽  
Juliette Esquirol ◽  
...  

Previous studies have analyzed the effect of distractor stimuli on different types of brain–computer interface (BCI). However, the effect of background speech has not been studied for an auditory event-related potential BCI (ERP-BCI), a convenient option when the visual modality cannot be used. Thus, the aim of the present work is to examine the impact of background speech on selection performance and user workload in auditory BCI systems. Eleven participants tested three conditions: (i) auditory BCI control condition, (ii) auditory BCI with a background speech to be ignored (non-attentional condition), and (iii) auditory BCI while the user also had to pay attention to the background speech (attentional condition). The results showed that, despite no significant differences in performance, dividing attention between the auditory BCI and the background speech required a higher cognitive workload. In addition, P300 amplitudes for target stimuli were significantly higher in the non-attentional condition than in the attentional condition at several channels, and the non-attentional condition was the only one that showed a significant difference in P300 amplitude between target and non-target stimuli. The present study indicates that background speech, especially when it is attended to, is an important source of interference that should be avoided while using an auditory BCI.
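
As an illustration of the target versus non-target contrast reported above, the following sketch (using assumed epoch dimensions, sampling rate, and P300 window rather than the study's actual recording parameters or analysis pipeline) averages the amplitude in a post-stimulus window for the two classes of epochs and compares them per channel.

```python
import numpy as np

FS = 256                    # assumed sampling rate (Hz)
P300_WINDOW = (0.25, 0.50)  # assumed post-stimulus window (s)

def p300_amplitude(epochs, fs=FS, window=P300_WINDOW):
    """epochs: array of shape (n_epochs, n_channels, n_samples), stimulus-locked.
    Returns the mean amplitude per channel inside the P300 window."""
    start, stop = (int(t * fs) for t in window)
    return epochs[:, :, start:stop].mean(axis=(0, 2))

# Synthetic stand-in data: 40 target and 160 non-target epochs, 8 channels, 1 s each.
rng = np.random.default_rng(0)
targets = rng.normal(0.0, 1.0, (40, 8, FS)) + 2.0      # crude positive deflection
non_targets = rng.normal(0.0, 1.0, (160, 8, FS))

# A larger difference indicates a clearer P300 response to the attended stimuli.
diff = p300_amplitude(targets) - p300_amplitude(non_targets)
print("Target minus non-target mean amplitude per channel:", np.round(diff, 2))
```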


2014 ◽  
Vol 136 (1) ◽  
pp. 5-8 ◽  
Author(s):  
Marion David ◽  
Mathieu Lavandier ◽  
Nicolas Grimault

1976 ◽  
Vol 42 (3_suppl) ◽  
pp. 1071-1074 ◽  
Author(s):  
Betty Tuller ◽  
James R. Lackner

Primary auditory stream segregation, the perceptual segregation of acoustically related elements within a continuous auditory sequence into distinct spatial streams, prevents subjects from resolving the relative order of the constituents of repeated sequences of tones (Bregman & Campbell, 1971) or of consonant and vowel sounds (Lackner & Goldstein, 1974). To determine why primary auditory stream segregation does not interfere with the resolution of natural speech, eight subjects were required to indicate the degree of stream segregation undergone by 24 repeated sequences of English monosyllables that varied in the degree of syntactic and intonational structure present. All sequences underwent primary auditory stream segregation to some extent, but the amount of apparent spatial separation was smaller when syntactic and intonational structure was present.

