In this paper we present DAG: a full-band (48 kHz) waveform synthesizer based on diffusion generative modeling. And style transfer comes for free! Check out our demo! This is great work led by Santi.
Preprint: “Adversarial permutation invariant training for universal sound separation”
I’m very proud of our recent work, because by simply improving the loss (keeping the same model and dataset) we obtain an improvement of 1.4 dB SI-SNRi! 1 dB in source separation is a lot, and is perceptually noticeable. This is great work led by Emilian, who worked with us as an intern during the summer of 2022.
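For readers unfamiliar with the metric: SI-SNRi is the improvement in scale-invariant SNR of the separated estimate over the unprocessed mixture. A minimal NumPy sketch of the standard SI-SNR definition (projection of the estimate onto the target; function names are mine, not from the paper):

```python
import numpy as np

def si_snr(estimate, target, eps=1e-8):
    """Scale-invariant SNR in dB: project the (zero-mean) estimate onto
    the target, then compare target component vs. residual energy."""
    estimate = estimate - estimate.mean()
    target = target - target.mean()
    s_target = np.dot(estimate, target) * target / (np.dot(target, target) + eps)
    e_noise = estimate - s_target
    return 10 * np.log10(np.dot(s_target, s_target) / (np.dot(e_noise, e_noise) + eps))

def si_snri(estimate, mixture, target):
    """SI-SNR improvement: estimate's SI-SNR minus the mixture's SI-SNR."""
    return si_snr(estimate, target) - si_snr(mixture, target)
```

Because of the projection, the metric is invariant to rescaling the estimate, which is why it is preferred over plain SNR for separation.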
Interspeech 2022 paper: “PodcastMix: A dataset for separating music and speech in podcasts”
Today we release “PodcastMix”, a dataset for separating music and speech in podcasts.
Preprint: “Universal speech enhancement with score-based diffusion”
In this work we propose to consider the task of speech enhancement as a holistic endeavor, and present a universal speech enhancement system that tackles 55 different distortions at the same time. Our approach consists of a generative model that employs score-based diffusion. We show that this approach significantly outperforms the state of the art in a subjective test performed by expert listeners.
Check out our project website and our paper on arXiv!
ICASSP 2022 paper: “On loss functions and evaluation metrics for music source separation”
During his internship at Dolby, Enric ran an exhaustive evaluation of various loss functions for music source separation. After evaluating those losses objectively and subjectively, we recommend training with the following spectrogram-based losses: L2freq, SISDRfreq, LOGL2freq or LOGL1freq with, potentially, phase-sensitive objectives and adversarial regularizers.
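To make the recommendation concrete, here is a minimal NumPy sketch of one of the recommended loss families, an L1 distance between log-magnitude spectrograms. The exact STFT parameters and normalization in the paper may differ; `stft_mag` and `logl1_freq` are illustrative names of my own:

```python
import numpy as np

def stft_mag(x, n_fft=1024, hop=256):
    """Magnitude spectrogram via a Hann-windowed STFT (minimal sketch,
    no padding or centering)."""
    win = np.hanning(n_fft)
    frames = [x[i:i + n_fft] * win
              for i in range(0, len(x) - n_fft + 1, hop)]
    return np.abs(np.fft.rfft(np.stack(frames), axis=-1))

def logl1_freq(estimate, target, eps=1e-7):
    """L1 distance between log-magnitude spectrograms, in the spirit of
    the LOGL1freq loss (assumed formulation, not the paper's exact one)."""
    s_est, s_ref = stft_mag(estimate), stft_mag(target)
    return np.mean(np.abs(np.log(s_est + eps) - np.log(s_ref + eps)))
```

The log compresses the large dynamic range of audio spectra, so quiet time-frequency bins contribute to the loss instead of being drowned out by loud ones.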