The musicnn library (pronounced "musician") uses deep convolutional neural networks to automatically tag songs, and its bundled models achieve the best scores on public auto-tagging benchmarks. These state-of-the-art models are released as an open-source library that is easy to install and use. For example, you can run musicnn on this emblematic song by Muddy Waters, and it will predominantly tag it as blues!
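As a minimal sketch of how that looks in practice, assuming musicnn is installed via pip and using its top_tags helper with the model trained on the MagnaTagATune dataset (the MP3 path below is just a placeholder for any song on disk):

```python
from musicnn.tagger import top_tags

# Tag a local audio file and keep the three highest-scoring tags.
# 'muddy_waters.mp3' is a placeholder path; point it to any song you have.
tags = top_tags('muddy_waters.mp3', model='MTT_musicnn', topN=3)
print(tags)  # for a blues recording, 'blues' should rank among the top tags
```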
As part of my onboarding at Dolby, I had the pleasure of working in San Francisco. To share my recent experiences with my colleagues, I updated these slides and presented some of my recent work at the Dolby and Adobe headquarters.
We present a didactic toolkit to rapidly prototype audio classifiers with pre-trained TensorFlow models and Scikit-learn. Pre-trained TensorFlow models act as audio feature extractors, and Scikit-learn classifiers are trained on top of these features to rapidly prototype competent audio classifiers on a CPU.
This material was prepared for teaching TensorFlow, Scikit-learn, and deep learning in general. Moreover, thanks to the simplicity of Scikit-learn, this toolkit can be used to easily build proof-of-concept models with your own data, as sketched below.
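Here is a minimal sketch of the recipe. In the real toolkit the feature vectors come from a pre-trained TensorFlow model (one fixed-size embedding per audio file); random vectors stand in here so the Scikit-learn side of the pipeline runs end to end, and the dataset size, embedding dimension, and classifier choice are illustrative assumptions:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import SGDClassifier

# Stand-in for embeddings produced by a pre-trained TensorFlow feature extractor.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 128))      # 200 "songs", 128-dimensional embeddings
y = rng.integers(0, 2, size=200)     # binary labels, e.g. blues vs. not-blues

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# A linear classifier on top of frozen deep features trains in seconds on a CPU.
clf = make_pipeline(StandardScaler(), SGDClassifier(max_iter=1000, random_state=0))
clf.fit(X_train, y_train)
print('test accuracy:', clf.score(X_test, y_test))
```

Swapping the random matrix for embeddings extracted from your own audio files is all that is needed to turn this into a quick proof-of-concept classifier.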
Our ISMIR 2019 tutorial on ‘Waveform-based music processing with deep learning’ got accepted! We will teach about music generation (Sander Dieleman), music classification (Jongpil Lee), and music source separation (myself)!
This is the first ICASSP at which I feel the conference has become a place where influential machine learning papers are presented. I'm happy to see that most of our community is not just applying 'LSTMs to a new dataset', but is proposing novel and inspiring machine learning methods. Let's see what happened in Brighton (UK)!