The first online ISMIR is now over! Congrats to the organizers: everything went smoothly, despite the circumstances. This year I decided to pick only 10 papers, and I forced myself to write a sentence about each of them. I hope you enjoy my selection!

My top 10 papers list (which, in practice, contains 9 papers), in no particular order:
- Unsupervised Disentanglement of Pitch and Timbre for Isolated Musical Instrument Sounds
- Link: https://program.ismir2020.net/poster_5-10.html
- Why I liked it: Unsupervised learning of disentangled representations is important for many applications, and I enjoyed their self-supervised-inspired ideas.
- Few-shot Drum Transcription in Polyphonic Music
- Link: https://program.ismir2020.net/poster_1-14.html
- Why I liked it: IT WORKS! WITH JUST A FEW EXAMPLES! The few-shot learning literature is fascinating, and it's remarkable to see that it can outperform traditional supervised learning approaches.
- Downbeat Tracking with Tempo Invariant Convolutional Neural Networks
- Link: https://program.ismir2020.net/poster_2-07.html
- Why I liked it: Tempo-invariant convolutional neural networks!! This is an important step if one wants to adapt deep learning architectures to music-related problems.
- Ultra-light Deep MIR by Trimming Lottery Tickets
- Link: https://program.ismir2020.net/poster_4-11.html
- Why I liked it: Because I find the lottery ticket hypothesis very interesting, and the authors explore extending it to build more efficient deep learning models for music tech.
- Less Is More: Faster and Better Music Version Identification with Embedding Distillation
- Link: https://program.ismir2020.net/poster_6-15.html
- Why I liked it: In the past, the authors showed that larger embeddings deliver better results for music version identification. Now, they explore ways to reduce the size of these embeddings and find that one can get even better results.
- Hierarchical Musical Instrument Separation
- Link: https://program.ismir2020.net/poster_3-07.html
- Why I liked it: Humans listen to and understand sounds hierarchically, so why not separate sounds that way? They present their findings on music source separation from this perspective.
- Human-AI Co-creation in Songwriting
- Link: https://program.ismir2020.net/poster_5-11.html
- Why I liked it: It summarises the experience of an AI song contest and finds that machine learning-powered music interfaces that are more decomposable, steerable, interpretable, and adaptive can enable artists to more effectively explore how AI can extend their personal expression.
- Deconstruct, Analyse, Reconstruct: How to Improve Tempo, Beat, and Downbeat Estimation
- Link: https://program.ismir2020.net/poster_4-14.html
- Why I liked it: They revisit the task from scratch and find that data augmentation, and embedding more explicit musical knowledge into the network's design decisions, both help. What a surprise!
- DrumGAN: Synthesis of Drum Sounds with Timbral Feature Conditioning Using Generative Adversarial Networks
- Link: https://program.ismir2020.net/poster_4-16.html
- Why I liked it: Besides enjoying the creative angle of their project, they tackle the important problem of controlling neural audio synthesizers. Importantly, their audio samples include a trap example.
For more interesting papers, see the conference program (open! and online!) or check the list of awarded papers.