Extreme Learning Machines (ELMs) are very controversial and very fast machine learning models that perform very well. Of course, "very" deserves emphasis, because what such a word means can change depending on your background or application field. Still, this sentence gives an idea of what ELMs can deliver, and why they might be interesting for an audio community that rarely uses them.
This post aims to share our experience setting up our deep learning server – thanks, NVIDIA, for the two Titan X Pascal! 🙂 The text is divided into two parts: bringing the pieces together, and installing TensorFlow. Let’s start!
I was invited to give a talk to the Deep Learning for Speech and Language Winter Seminar at the UPC in Barcelona. Since UPC is the university where I did my undergraduate studies, it was a great pleasure to give a talk there!
The talk was centered on my recent work on music audio tagging, which is available on arXiv and is summarized in these previous posts: deep learning architectures for music audio classification, and deep end-to-end learning for music audio tagging at Pandora.
One can divide deep learning models into two parts: front-end and back-end – see Figure 1. The front-end is the part of the model that interacts with the input signal in order to map it into a latent space, and the back-end predicts the output given the representation obtained by the front-end.
In the following, we discuss the different front- and back-ends we identified in the audio classification literature.
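The front-end/back-end split above can be sketched in code. The following is a minimal, hypothetical example assuming PyTorch – the layer sizes, filter shapes, and number of tags are illustrative, not the ones used in the paper:

```python
import torch
import torch.nn as nn

class FrontEnd(nn.Module):
    """Front-end: maps the input (here, a spectrogram) into a latent space."""
    def __init__(self):
        super().__init__()
        # Illustrative filter size; real models pick shapes with care.
        self.conv = nn.Conv2d(1, 32, kernel_size=(3, 3), padding=1)
        self.pool = nn.AdaptiveAvgPool2d((1, 1))  # summarize time and frequency

    def forward(self, x):
        z = torch.relu(self.conv(x))
        return self.pool(z).flatten(1)  # one latent vector per example

class BackEnd(nn.Module):
    """Back-end: predicts the output given the front-end's representation."""
    def __init__(self, num_tags=50):  # num_tags is a placeholder value
        super().__init__()
        self.fc = nn.Linear(32, num_tags)

    def forward(self, z):
        return torch.sigmoid(self.fc(z))  # multi-label tag probabilities

model = nn.Sequential(FrontEnd(), BackEnd())
spectrogram = torch.randn(4, 1, 96, 188)  # (batch, channel, mel bands, frames)
print(model(spectrogram).shape)  # torch.Size([4, 50])
```

The point of the split is modularity: one can swap the front-end (waveform vs. spectrogram) while keeping the same back-end, which is exactly how the literature comparison is organized.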
TL;DR – Summary:
Machine listening is a research area where deep supervised learning is delivering promising advances. However, the lack of data tends to limit the outcomes of deep learning research – especially when dealing with end-to-end learning stacks processing raw data such as waveforms. In this study we train models with musical labels annotated for one million tracks, which provides novel insights into the audio tagging task, since the largest commonly used (academic) dataset is composed of ≈ 200k songs. This large amount of data allows us to freely explore different deep learning paradigms for the task of auto-tagging: from assumption-free models – using waveforms as input with very small convolutional filters; to models that rely on domain knowledge – log-mel spectrograms processed with a convolutional neural network designed to learn temporal and timbral features. Results suggest that, while spectrogram-based models surpass their waveform-based counterparts, the difference in performance shrinks as more data are employed.
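The two paradigms above differ mainly in their front-ends. Below is a rough, hypothetical sketch assuming PyTorch – the filter shapes are only meant to convey the contrast (tiny assumption-free 1-D filters on waveforms vs. domain-informed tall/wide 2-D filters on spectrograms), not the exact architectures from the paper:

```python
import torch
import torch.nn as nn

# Waveform paradigm: assumption-free, very small 1-D filters on raw audio.
waveform_frontend = nn.Sequential(
    nn.Conv1d(1, 64, kernel_size=3, stride=3),  # tiny filters, no DSP priors
    nn.ReLU(),
    nn.Conv1d(64, 64, kernel_size=3, stride=3),
    nn.ReLU(),
)

# Spectrogram paradigm: domain knowledge baked into the filter shapes.
# Tall (vertical) filters span many mel bands to capture timbre;
# wide (horizontal) filters span many frames to capture temporal cues.
timbral = nn.Conv2d(1, 32, kernel_size=(38, 7))
temporal = nn.Conv2d(1, 32, kernel_size=(1, 64))

wave = torch.randn(1, 1, 16000)    # one second of raw audio at 16 kHz
spec = torch.randn(1, 1, 96, 188)  # log-mel spectrogram (bands x frames)
print(waveform_frontend(wave).shape)          # torch.Size([1, 64, 1777])
print(timbral(spec).shape)                    # torch.Size([1, 32, 59, 182])
print(temporal(spec).shape)                   # torch.Size([1, 32, 96, 125])
```

With little data, the hand-designed spectrogram filters give the model a head start; with 1M tracks, the waveform model can learn comparable features from scratch, which is why the gap shrinks.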
We also compare our deep learning models with a traditional method based on feature design, namely the Gradient Boosted Trees (GBT) + features model. Results show that the proposed deep models are capable of outperforming the traditional method when trained with 1M tracks; however, the proposed models under-perform the baseline when trained with only 100k tracks. This result aligns with the notion that deep learning models require large datasets to outperform strong (traditional) methods based on feature design.
Let’s see what our best performing model (a musically motivated convolutional neural network processing spectrograms) yields when fed with a J.S. Bach aria:
Female vocals, triple meter, acoustic, classical music, baroque period, lead vocals, string ensemble, major, compositional dominance of: lead vocals and melody.
Top-10 tags (deep learning):
Acoustic, string ensemble, classical music, baroque period, major, compositional dominance of: the arrangement, form, performance, rhythm and lead vocals.