This last year I have been collaborating with Francesc Lluís, a master's student in our research group who worked on “A Wavenet for Music Source Separation”. For more information about our research, you can read his thesis or our arXiv paper. Code and some separations are also available for you!
Our accepted ISMIR paper on music auto-tagging at scale is now online – read it on arXiv, and listen to our demo!
TL;DR: 1) Given that enough training data is available: waveform models (sampleCNN) > spectrogram models (musically motivated CNN). 2) But spectrogram models > waveform models when sizable training data are not available. 3) Musically motivated CNNs achieve state-of-the-art results for the MTT & MSD datasets.
A few weeks ago Olga Slizovskaya and I were invited to give a talk at the Centre for Digital Music (C4DM) @ Queen Mary University of London – one of the most renowned music technology research institutions in Europe, and possibly in the world. It’s been an honor and a pleasure to share our thoughts (and some beers) with you!
The talk was centered on our recent work on music audio tagging, which is available on arXiv, where we study how non-trained (randomly weighted) convolutional neural networks perform as feature extractors for (music) audio classification tasks.
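As a rough illustration of the idea, the sketch below (my own, not code from the paper) freezes a randomly initialized CNN and only trains a shallow classifier on top of the extracted features; the architecture, input shapes, and the SVM classifier are all hypothetical choices.

```python
import torch
import torch.nn as nn
from sklearn.svm import SVC

# A randomly weighted (non-trained) CNN used as a fixed feature extractor.
# Layers and shapes are illustrative, not the exact setup from the paper.
torch.manual_seed(0)
random_cnn = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
)

spectrograms = torch.randn(200, 1, 96, 187)   # dummy (log-mel) inputs
labels = torch.randint(0, 2, (200,))          # dummy binary labels

with torch.no_grad():                          # the CNN is never trained
    features = random_cnn(spectrograms).numpy()

# Only the shallow classifier on top of the random features is trained.
classifier = SVC().fit(features, labels.numpy())
print(classifier.score(features, labels.numpy()))
```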
One can divide deep learning models into two parts: the front-end and the back-end – see Figure 1. The front-end is the part of the model that interacts with the input signal in order to map it into a latent space, and the back-end predicts the output given the representation obtained by the front-end.
Figure 1 – Deep learning pipeline.
In the following, we discuss the different front- and back-ends we identified in the audio classification literature.
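To make the split concrete, here is a minimal PyTorch sketch of a tagging model organized as a front-end plus a back-end; the layers, filter sizes, and tag vocabulary are illustrative placeholders rather than any of the architectures discussed in the paper.

```python
import torch
import torch.nn as nn

class TaggingModel(nn.Module):
    """Illustrative front-end / back-end split (shapes are hypothetical)."""

    def __init__(self, n_tags=50):
        super().__init__()
        # Front-end: maps the input signal (here, a log-mel spectrogram)
        # into a latent representation.
        self.front_end = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=(7, 7), padding=3),
            nn.BatchNorm2d(32),
            nn.ReLU(),
            nn.MaxPool2d((2, 2)),
        )
        # Back-end: predicts the output (tags) from the latent representation.
        self.back_end = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(32, n_tags),
        )

    def forward(self, spectrogram):            # (batch, 1, n_mels, time)
        latent = self.front_end(spectrogram)   # latent-space representation
        return self.back_end(latent)           # tag logits


model = TaggingModel()
logits = model(torch.randn(4, 1, 96, 187))     # dummy batch of spectrograms
print(logits.shape)                            # torch.Size([4, 50])
```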
Machine listening is a research area where deep supervised learning is delivering promising advances. However, the lack of data tends to limit the outcomes of deep learning research – especially when dealing with end-to-end learning stacks processing raw data such as waveforms. In this study we train models with musical labels annotated for one million tracks, which provides novel insights into the audio tagging task since the largest commonly used (academic) dataset is composed of ≈ 200k songs. This large amount of data allows us to unrestrictedly explore different deep learning paradigms for the task of auto-tagging: from assumption-free models that use waveforms as input with very small convolutional filters, to models that rely on domain knowledge – log-mel spectrograms processed with a convolutional neural network designed to learn temporal and timbral features. Results suggest that, while spectrogram-based models surpass their waveform-based counterparts, the difference in performance shrinks as more data are employed.
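The two paradigms can be contrasted as two alternative front-ends, roughly as in the sketch below; the filter sizes, strides, and input lengths are hypothetical, chosen only to contrast sample-level 1-D filters on waveforms with tall (timbral) and wide (temporal) 2-D filters on log-mel spectrograms.

```python
import torch
import torch.nn as nn

# Assumption-free front-end: very small 1-D filters applied directly
# to the raw waveform (sample-level CNN style). Sizes are illustrative.
waveform_front_end = nn.Sequential(
    nn.Conv1d(1, 128, kernel_size=3, stride=3),
    nn.ReLU(),
    nn.Conv1d(128, 128, kernel_size=3, stride=3),
    nn.ReLU(),
)

# Domain-knowledge front-end: 2-D filters over a log-mel spectrogram,
# with tall filters spanning frequency (timbral) and wide filters
# spanning time (temporal). Again, sizes are hypothetical.
n_mels = 96
timbral_filters = nn.Conv2d(1, 64, kernel_size=(int(0.9 * n_mels), 7))
temporal_filters = nn.Conv2d(1, 64, kernel_size=(1, 165))

waveform = torch.randn(4, 1, 59049)            # raw audio samples
spectrogram = torch.randn(4, 1, n_mels, 187)   # log-mel spectrogram

print(waveform_front_end(waveform).shape)
print(timbral_filters(spectrogram).shape)
print(temporal_filters(spectrogram).shape)
```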
We also compare our deep learning models with a traditional method based on feature design: the Gradient Boosted Trees (GBT) + features model. Results show that the proposed deep models outperform the traditional method when trained with 1M tracks; however, they underperform the baseline when trained with only 100K tracks. This result aligns with the notion that deep learning models require large datasets to outperform strong (traditional) methods based on feature design.
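For reference, a baseline of this kind can be sketched as one gradient boosted trees classifier per tag trained on pre-computed, hand-designed features; the scikit-learn setup and the data below are made up for illustration and do not reproduce the actual GBT + features baseline.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.multiclass import OneVsRestClassifier

# Hypothetical pre-computed, hand-designed features (e.g. timbre / rhythm
# descriptors) and binary tag annotations; shapes and values are made up.
features = np.random.rand(1000, 40)                  # 1000 tracks, 40 features
tags = (np.random.rand(1000, 5) > 0.8).astype(int)   # 5 binary tags

# One gradient boosted trees model per tag (one-vs-rest), as a stand-in
# for the "GBT + features" baseline described above.
baseline = OneVsRestClassifier(GradientBoostingClassifier(n_estimators=100))
baseline.fit(features, tags)

tag_scores = baseline.predict_proba(features[:10])
print(tag_scores.shape)                              # (10, 5)
```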