Tutorial accepted: Waveform-based music processing with deep learning

Our ISMIR 2019 tutorial on ‘Waveform-based music processing with deep learning’ got accepted! We will teach about music generation (Sander Dieleman), music classification (Jongpil Lee), and music source separation (myself)!

Abstract — A common practice when processing music signals with deep learning is to transform the raw waveform input into a time-frequency representation. This pre-processing step yields less variable and more interpretable input signals. However, it can also limit the model’s learning capabilities, since potentially useful information (such as the phase or the high frequencies) is discarded. To overcome these potential limitations, researchers have been exploring waveform-level music processing techniques, and many advances have been made with the recent advent of deep learning.

In this tutorial, we introduce three main research areas where waveform-based music processing can have a substantial impact:

  • Music classification (taught by Jongpil Lee): waveform-based music classifiers have the potential to simplify production and research pipelines.
  • Music source separation (taught by Jordi Pons): waveform-based music source separation would make it possible to overcome some historical challenges associated with discarding the phase.
  • Music generation (taught by Sander Dieleman): waveform-level music generation would enable, for example, directly synthesizing expressive music.
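
To make the pre-processing step from the abstract concrete, here is a minimal sketch of the usual spectrogram pipeline and what it throws away. It is illustrative only, not taken from the tutorial materials; the use of librosa, the example clip, and the STFT parameters are all assumptions.

```python
import numpy as np
import librosa  # assumed; any STFT implementation would do

# Load a mono waveform. librosa resamples to 22.05 kHz by default,
# which already discards content above ~11 kHz.
y, sr = librosa.load(librosa.example("trumpet"))

# The common pre-processing step: short-time Fourier transform,
# then keep only the magnitudes.
stft = librosa.stft(y, n_fft=2048, hop_length=512)  # complex-valued
magnitude = np.abs(stft)  # what spectrogram-based models consume
phase = np.angle(stft)    # ... and what is typically discarded

# A waveform-based model would instead consume `y` directly,
# retaining both the phase and the full bandwidth of the signal.
```

Note that turning a magnitude spectrogram back into audio requires estimating the discarded phase (e.g., with Griffin-Lim), which is one reason waveform-domain approaches are attractive for source separation and generation.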