Projection Back onto Filtered Observations for Speech Separation with Distributed Microphone Array

Author(s): Shoko Araki, Nobutaka Ono, Keisuke Kinoshita, Marc Delcroix

Sensors, 2020, Vol 20 (12), pp. 3527
Author(s): Ching-Feng Liu, Wei-Siang Ciou, Peng-Ting Chen, Yi-Chun Du

In the context of assistive hearing, identifying and enhancing non-stationary target speech in various noise environments, such as a cocktail party, is an important issue for real-time speech separation. Previous studies mostly relied on microphone signal processing to perform target speech separation and analysis, for example feature recognition based on large amounts of training data and supervised machine learning. Such methods are suitable for stationary noise suppression, but are relatively limited against non-stationary noise and have difficulty meeting real-time processing requirements. In this study, we propose a real-time speech separation method that combines an optical camera with a microphone array. The method is divided into two stages. Stage 1 uses computer vision with the camera to detect and identify targets of interest and to estimate source angles and distances. Stage 2 uses beamforming with the microphone array to enhance and separate the target speech. An asynchronous update function integrates the beamforming control with the speech processing to reduce the effect of processing delay. Experimental results show noise reductions of 6.1 dB and 5.2 dB in various stationary and non-stationary noise environments, respectively. The response time of the speech processing was less than 10 ms, which meets the requirements of a real-time system. The proposed method has high potential for application in assistive listening systems and machine speech processing such as intelligent personal assistants.
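As an illustration of the Stage 2 processing described in this abstract, the sketch below steers a simple delay-and-sum beamformer toward a source angle supplied by the Stage 1 vision step. The array geometry, function name, sampling rate, and far-field assumption are illustrative choices rather than details from the paper, and the asynchronous update mechanism itself is not shown.

```python
import numpy as np

def delay_and_sum(frame_stft, mic_positions, angle_deg, fs=16000, n_fft=512, c=343.0):
    """Steer a delay-and-sum beamformer toward a camera-estimated azimuth.

    frame_stft    : (n_mics, n_bins) complex STFT of one time frame per mic
    mic_positions : (n_mics, 2) microphone coordinates in metres
    angle_deg     : source azimuth from the vision stage, in degrees
    """
    theta = np.deg2rad(angle_deg)
    direction = np.array([np.cos(theta), np.sin(theta)])
    # Far-field model: per-mic arrival delay relative to the array origin
    delays = mic_positions @ direction / c                 # seconds, shape (n_mics,)
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / fs)             # bin frequencies, (n_bins,)
    # Phase-align every microphone to the steering direction, then average
    align = np.exp(2j * np.pi * np.outer(delays, freqs))   # (n_mics, n_bins)
    return np.mean(align * frame_stft, axis=0)             # enhanced frame, (n_bins,)
```

Running this per STFT frame, with the angle refreshed only when the camera produces a new detection, reflects the abstract's idea of decoupling the beamforming control from the per-frame speech processing path.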


2021, Vol 69 (2), pp. 2705-2716
Author(s): Lin Zhou, Yue Xu, Tianyi Wang, Kun Feng, Jingang Shi

2020, Vol 10 (7), pp. 2593
Author(s): Ke Zhang, Yangjie Wei, Dan Wu, Yi Wang

Voice signals acquired by a microphone array often include considerable noise and mutual interference, seriously degrading the accuracy and speed of speech separation. Traditional beamforming is simple to implement, but its suppression of interfering sources is often inadequate. In contrast, independent component analysis (ICA) can improve separation, but requires an iterative, time-consuming process to calculate the separation matrix. As a supporting method, principal component analysis (PCA) helps reduce the dimensionality, obtain results quickly, and discard false sound sources. Considering the sparsity of frequency components in a mixed signal, we propose an adaptive fast speech separation algorithm that uses multiple sound source localization as preprocessing to select between beamforming and frequency-domain ICA according to the mixing conditions in each frequency bin. First, a fast localization algorithm estimates the maximum number of components per frequency bin of the mixed speech signal to prevent the occurrence of false sound sources. Then, PCA reduces the dimensionality to adaptively adjust the weighting between beamforming and ICA for speech separation. Subsequently, the ICA separation matrix is initialized from the sound source localization results, which notably reduces the iteration time and mitigates permutation ambiguity. Simulation and experimental results verify the effectiveness and speedup of the proposed algorithm.
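The per-bin decision logic described in this abstract can be sketched as follows. This is a minimal illustration under stated assumptions: the helper name `separate_bin`, the natural-gradient update with a fixed step size, and the Laplacian-prior score function are illustrative stand-ins rather than the paper's exact formulation, and the steering vectors are assumed to come from the localization preprocessing.

```python
import numpy as np

def separate_bin(X_bin, steering, n_active, n_iter=50, step=0.1):
    """Separate one frequency bin, choosing beamforming or FD-ICA adaptively.

    X_bin    : (n_mics, n_frames) complex STFT observations in this bin
    steering : (n_mics, n_src) steering vectors from source localization
    n_active : number of active sources estimated for this bin
    """
    if n_active <= 1:
        # Sparse bin dominated by one source: plain beamforming is enough
        w = steering[:, 0] / steering.shape[0]
        return w.conj()[None, :] @ X_bin
    # PCA whitening, keeping only n_active components to cut ICA cost
    cov = X_bin @ X_bin.conj().T / X_bin.shape[1]
    eigval, eigvec = np.linalg.eigh(cov)                   # ascending eigenvalues
    keep = eigvec[:, -n_active:] / np.sqrt(np.maximum(eigval[-n_active:], 1e-12))
    Z = keep.conj().T @ X_bin                              # whitened, reduced data
    # Initialize the demixing matrix from the localized steering vectors,
    # which shortens the iteration and helps mitigate permutation ambiguity
    W = np.linalg.pinv(keep.conj().T @ steering[:, :n_active])
    for _ in range(n_iter):
        Y = W @ Z
        score = Y / np.maximum(np.abs(Y), 1e-9)            # Laplacian-prior score
        grad = np.eye(n_active) - (score @ Y.conj().T) / Y.shape[1]
        W = W + step * grad @ W                            # natural-gradient step
    return W @ Z
```

A caller would loop this over all frequency bins of the mixture STFT and resolve any remaining inter-bin permutations before the inverse STFT.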

