Although I’m now a researcher at Dolby Laboratories, I’m still collaborating with some universities in Barcelona, where I’ll keep teaching deep learning for music and audio. In this context, and given the importance of the vanishing/exploding gradient problem in deep neural networks, this week I’ll be teaching recurrent neural networks to the Master in Sound and Music Computing students at Universitat Pompeu Fabra.
As part of my onboarding at Dolby, I had the pleasure of working in San Francisco. To share my recent experiences with my colleagues, I updated these slides and presented some of my recent work at the Dolby and Adobe headquarters.
We present a didactic toolkit to rapidly prototype audio classifiers with pre-trained TensorFlow models and Scikit-learn. The pre-trained TensorFlow models serve as audio feature extractors, and Scikit-learn classifiers are then employed to rapidly prototype competent audio classifiers that can be trained on a CPU.
This material was prepared for teaching TensorFlow, Scikit-learn, and deep learning in general. Moreover, given the simplicity of Scikit-learn, this toolkit can be used to easily build proof-of-concept models with your own data.
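The two-stage recipe described above can be sketched as follows. This is a minimal illustration, not the toolkit's actual code: here synthetic vectors stand in for the embeddings that a pre-trained TensorFlow model would extract from audio, and the embedding dimensionality and class setup are assumptions for the sake of a self-contained example. Only the second stage, fitting a shallow Scikit-learn classifier on the extracted features, is shown, since it is what makes CPU-only training fast.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Stand-in for stage 1 (feature extraction): in the real toolkit these
# 128-dimensional vectors would be embeddings computed by a pre-trained
# TensorFlow model from audio clips. Two synthetic classes are used here
# so the sketch runs without TensorFlow or audio data.
rng = np.random.default_rng(0)
X = np.vstack([
    rng.normal(0.0, 1.0, (100, 128)),  # class 0 "embeddings"
    rng.normal(2.0, 1.0, (100, 128)),  # class 1 "embeddings"
])
y = np.array([0] * 100 + [1] * 100)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

# Stage 2: a shallow Scikit-learn classifier on top of the embeddings
# trains in seconds on a CPU.
clf = SVC(kernel="linear")
clf.fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)
print(f"test accuracy: {accuracy:.2f}")
```

Because the heavy lifting happens once in the frozen feature extractor, swapping the `SVC` for any other Scikit-learn estimator (e.g. logistic regression or a random forest) is a one-line change, which is what makes this setup convenient for quick proofs of concept.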
Our ISMIR 2019 tutorial on ‘Waveform-based music processing with deep learning’ got accepted! We will teach about music generation (Sander Dieleman), music classification (Jongpil Lee), and music source separation (myself)!