Target conversation extraction: Source separation using turn-taking dynamics

Tuochao Chen*   Qirui Wang*   Bohan Wu*   Malek Itani*

Emre Sefik Eskimez✝   Takuya Yoshioka◊   Shyamnath Gollakota*

*University of Washington, ✝Microsoft, ◊AssemblyAI

25th Interspeech Conference (Interspeech 2024)



[Paper]   [Code]   [Dataset]  

Abstract

Extracting the speech of participants in a conversation amidst interfering speakers and noise presents a challenging problem. In this paper, we introduce the novel task of target conversation extraction, where the goal is to extract the audio of a target conversation based on the speaker embedding of one of its participants. To accomplish this, we propose leveraging the temporal patterns inherent in human conversations, particularly turn-taking dynamics, which uniquely characterize speakers engaged in a conversation and distinguish them from interfering speakers and noise. Using neural networks, we show the feasibility of our approach on English and Mandarin conversation datasets. In the presence of interfering speakers, our results show an 8.19 dB improvement in signal-to-noise ratio for 2-speaker conversations and a 7.92 dB improvement for 2-4-speaker conversations.
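As a rough illustration of the reported numbers, the sketch below computes an SNR improvement as the SNR of the model output minus the SNR of the unprocessed mixture, both measured against the clean target conversation. The exact metric variant used in the paper (e.g., plain SNR vs. scale-invariant SNR) is an assumption here.

```python
import numpy as np

def snr_db(estimate: np.ndarray, reference: np.ndarray) -> float:
    """SNR in dB of an estimate measured against a clean reference signal."""
    noise = estimate - reference
    return 10.0 * np.log10(np.sum(reference**2) / (np.sum(noise**2) + 1e-12))

def snr_improvement_db(mixture: np.ndarray, output: np.ndarray,
                       target: np.ndarray) -> float:
    """Improvement over the unprocessed mixture: SNR(output) - SNR(mixture)."""
    return snr_db(output, target) - snr_db(mixture, target)
```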

In this illustration, the goal of target conversation extraction is as follows: given clean enrollment audio or an embedding for speaker B, we want to extract the audio of the conversation among speakers A, B, and C amidst interference from speaker D.
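To make the task interface concrete, below is a minimal PyTorch sketch, not the authors' architecture: a hypothetical extractor that takes the mixture waveform and the enrolled participant's speaker embedding, and outputs the waveform of the whole target conversation. All module names and sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ConversationExtractor(nn.Module):
    """Hypothetical extractor: mixture waveform + enrollment speaker embedding
    -> waveform of the target conversation (all of its participants)."""

    def __init__(self, emb_dim: int = 256, channels: int = 64):
        super().__init__()
        # Learned filterbank encoder/decoder over raw waveform frames.
        self.encoder = nn.Conv1d(1, channels, kernel_size=16, stride=8, padding=4)
        self.decoder = nn.ConvTranspose1d(channels, 1, kernel_size=16, stride=8, padding=4)
        # FiLM-style conditioning on the enrolled speaker's identity.
        self.film = nn.Linear(emb_dim, 2 * channels)
        # Placeholder separator; the real model would need long temporal
        # context to exploit turn-taking dynamics across the conversation.
        self.separator = nn.Sequential(
            nn.Conv1d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, mixture: torch.Tensor, speaker_emb: torch.Tensor) -> torch.Tensor:
        feats = self.encoder(mixture.unsqueeze(1))              # (B, C, T)
        scale, shift = self.film(speaker_emb).chunk(2, dim=-1)  # (B, C) each
        conditioned = feats * scale.unsqueeze(-1) + shift.unsqueeze(-1)
        mask = torch.sigmoid(self.separator(conditioned))       # mask over mixture features
        return self.decoder(feats * mask).squeeze(1)

model = ConversationExtractor()
mixture = torch.randn(1, 16000)                # 1 s of audio at 16 kHz
enrollment_emb = torch.randn(1, 256)           # embedding of one participant
conversation = model(mixture, enrollment_emb)  # extracted target conversation
```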

Audio Samples

Below are audio samples in which our model extracts the target conversation from a mixture. We demonstrate the model on both Mandarin and English conversations.



Mandarin Conversation (2 speakers)


Input Mixture | Enrollment | Output | Ground Truth (GT)




English Conversation (2-4 speakers)


Input Mixture | Enrollment | Output | Ground Truth (GT)


Keywords: Sound source extraction, Conversation analysis