Our ISMIR 2019 tutorial on ‘Waveform-based music processing with deep learning’ got accepted! We will teach about music generation (Sander Dieleman), music classification (Jongpil Lee), and music source separation (myself)!

This is the first ICASSP where I feel the conference has become a place where influential machine learning papers are presented. I'm happy to see that most of our community is not just applying 'LSTMs to a new dataset'.
This was my second ISMIR, and I am super excited to be part of this amazing, diverse, and inclusive community. It was fun to keep putting faces (and heights, and weights) to the names I respect so much! This ISMIR was very special for me: I was returning to the city where I kicked off my academic career (five years ago I was starting a research internship at IRCAM!), and we won the best student paper award!
Our accepted ISMIR paper on music auto-tagging at scale is now online – read it on arXiv, and listen to our demo!
TL;DR:
1) Given that enough training data is available: waveform models (sampleCNN) > spectrogram models (musically motivated CNN).
2) But spectrogram models > waveform models when sizable training data are not available.
3) Musically motivated CNNs achieve state-of-the-art results for the MTT & MSD datasets.
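To give a feel for what "waveform model" means in (1), here is a toy numpy sketch of the sample-level idea behind sampleCNN-style front ends: small strided filters (length 3, stride 3) applied directly to the raw waveform and stacked until the whole signal collapses into one frame. This is a single-filter, random-weight illustration only; the actual models use many learned filters per layer and further layers on top.

```python
import numpy as np

def conv1d(x, w, stride):
    """Valid 1D convolution of mono signal x with kernel w at the given stride."""
    k = len(w)
    n = (len(x) - k) // stride + 1
    return np.array([np.dot(x[i * stride:i * stride + k], w) for i in range(n)])

# Toy sample-level front end: 9 layers of length-3 filters with stride 3.
# Filter weights are random here (illustrative, not learned).
rng = np.random.default_rng(0)
x = rng.standard_normal(3 ** 9)  # a short waveform; length 3^9 so it divides cleanly
for _ in range(9):
    w = rng.standard_normal(3)
    x = np.maximum(conv1d(x, w, stride=3), 0.0)  # conv + ReLU, shrinks length 3x
print(len(x))  # → 1: each layer maps length 3^n to 3^(n-1), so 9 layers give one frame
```

The point of the stride-3/length-3 choice is that each layer reduces the temporal resolution by exactly 3x, so the receptive field grows exponentially with depth, which is what lets such models work on raw audio without a spectrogram front end.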