During his internship at Dolby, Enric ran an exhaustive evaluation of various loss functions for music source separation. After evaluating these losses objectively and subjectively, we recommend training with the following spectrogram-based losses: L2freq, SISDRfreq, LOGL2freq, or LOGL1freq, potentially combined with phase-sensitive objectives and adversarial regularizers.
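To give an intuition of what a spectrogram-domain loss like LOGL1freq computes, here is a minimal numpy sketch: the mean L1 distance between log-magnitude spectrograms. The FFT size, hop, window, and epsilon here are illustrative assumptions, not the exact setup from the paper:

```python
import numpy as np

def mag_spectrogram(x, n_fft=512, hop=128):
    # Magnitude spectrogram via a Hann-windowed framed FFT.
    frames = [x[i:i + n_fft] * np.hanning(n_fft)
              for i in range(0, len(x) - n_fft + 1, hop)]
    return np.abs(np.fft.rfft(np.stack(frames), axis=-1))

def logl1_loss(estimate, target, eps=1e-7):
    # LOGL1freq-style loss: mean L1 distance between log-magnitude
    # spectrograms (eps avoids log(0)).
    return np.mean(np.abs(np.log(mag_spectrogram(estimate) + eps)
                          - np.log(mag_spectrogram(target) + eps)))

rng = np.random.default_rng(0)
target = rng.standard_normal(4096)                    # stand-in for a clean source
estimate = target + 0.1 * rng.standard_normal(4096)   # imperfect separation
print(logl1_loss(estimate, target))                   # small but non-zero penalty
```

Operating on log magnitudes compresses the dynamic range, so quiet time-frequency bins contribute to the loss instead of being drowned out by loud ones.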
To consolidate the ideas we introduced in our previous paper, we benchmarked a large set of upsampling layers for music source separation: different transposed and subpixel convolution setups, different interpolation upsamplers (including two novel layers based on stretch and sinc interpolation), and different wavelet-based upsamplers (including a novel learnable wavelet layer).
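To make the two main families concrete, here is a toy numpy sketch contrasting a transposed-convolution upsampler (zero-insertion followed by convolution, with learnable weights in practice) against a stretch-style interpolation upsampler (fixed linear interpolation). The kernel and signal are made-up examples, not the layers benchmarked in the paper:

```python
import numpy as np

def transposed_conv1d(x, kernel, stride=2):
    # Insert (stride - 1) zeros between samples, then convolve.
    # In a network the kernel would be learned.
    up = np.zeros(len(x) * stride)
    up[::stride] = x
    return np.convolve(up, kernel, mode="same")

def stretch_upsample(x, factor=2):
    # Interpolation upsampler: linear interpolation, no learned weights.
    idx = np.arange(len(x) * factor) / factor
    return np.interp(idx, np.arange(len(x)), x)

x = np.array([0.0, 1.0, 0.0, -1.0])
print(transposed_conv1d(x, np.array([0.5, 1.0, 0.5])))
print(stretch_upsample(x))
```

The zero-insertion step is what makes transposed convolutions prone to the checkerboard-like artifacts discussed in our previous paper, whereas interpolation upsamplers avoid them by construction at the cost of less expressive power.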
I had mixed feelings this ISMIR: on one side, I was disappointed to be attending yet another virtual ISMIR – buuuuuut, on the other side, it was nice to meet you all! ISMIR is such a vibrant and enthusiastic community that it is always great to meet each other – even if only virtually! Still… I guess we all agree that ISMIR was much better when we had the chance to jam on a boat! 🙂