Abstract. The focus of this work is to study how to efficiently tailor Convolutional Neural Networks (CNNs) towards learning timbre representations from log-mel magnitude spectrograms. We first review the trends in CNN architecture design. Through this literature overview we discuss the crucial points to consider for efficiently learning timbre representations with CNNs. From this discussion we propose a design strategy meant to capture the relevant time-frequency contexts for learning timbre, which allows incorporating domain knowledge into the architecture design. In addition, one of our main goals is to design efficient CNN architectures: minimizing the number of parameters reduces the risk of over-fitting. Several architectures based on the proposed design principles are successfully assessed for different research tasks related to timbre: singing voice phoneme classification, musical instrument recognition and music auto-tagging.
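The core idea of the design strategy — first-layer filters whose time-frequency shapes match the context relevant for timbre — can be sketched with a toy numpy example. All sizes and filter names below are illustrative assumptions, not the exact configurations used in the paper:

```python
import numpy as np

def conv2d_valid(x, w):
    """Naive 'valid'-mode 2D cross-correlation of a single-channel input."""
    H, W = x.shape
    h, w_ = w.shape
    out = np.empty((H - h + 1, W - w_ + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + h, j:j + w_] * w)
    return out

# Fake log-mel magnitude spectrogram: 96 mel bands x 128 time frames.
spec = np.random.randn(96, 128)

# Filters of different time-frequency shapes (illustrative):
# - a "tall" filter spanning many mel bands captures spectral (timbral) context;
# - a "wide" filter spanning many frames captures temporal context.
timbral_filter = np.random.randn(48, 1)   # frequency-wide, time-narrow
temporal_filter = np.random.randn(1, 32)  # time-wide, frequency-narrow

timbral_map = conv2d_valid(spec, timbral_filter)
temporal_map = conv2d_valid(spec, temporal_filter)
print(timbral_map.shape)   # (49, 128)
print(temporal_map.shape)  # (96, 97)
```

Note how each filter shape trades resolution in one axis for context in the other; a 48x1 filter "sees" half of the frequency range at once while keeping single-frame time resolution, with far fewer parameters than a square filter covering the same span.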
Reference. Jordi Pons, Olga Slizovskaia, Rong Gong, Emilia Gómez and Xavier Serra. “Timbre Analysis of Music Audio Signals with Convolutional Neural Networks”. arXiv:1703.06697
Code. This paper is the result of an intense collaboration between Rong, Olga and myself. Each of us was responsible for studying the implications of the proposed design strategy for a different use case. The code to reproduce each of the experiments is available online:
- Phoneme classification of jingju singing: github.com/ronggong/EUSIPCO2017
- Musical instrument recognition: github.com/Veleslavia/EUSIPCO2017
- Music auto-tagging: github.com/jordipons/EUSIPCO2017
Datasets. This work was possible thanks to several benchmarks/datasets available for research purposes:
- Jingju a cappella singing dataset: github.com/MTG/jingjuPhonemeAnnotation
- IRMAS, a dataset for instrument recognition in musical audio signals: mtg.upf.edu/download/datasets/irmas
- MagnaTagATune dataset: mirg.city.ac.uk/codeapps/the-magnatagatune-dataset and github.com/keunwoochoi/magnatagatune-list
Acknowledgments. We are grateful for the GPUs donated by NVIDIA. This work is partially supported by: the Maria de Maeztu Units of Excellence Programme (MDM-2015-0502), the CompMusic project (ERC grant agreement 267583) and the CASAS Spanish research project (TIN2015-70816-R). Also infinite thanks to E. Fonseca and S. Oramas for their help.