Actually, what I really need is fewer papers with “all you need” in the title, and to share a (non-virtual) beer with you folks! Here are some of the papers I enjoyed, together with the papers we presented. You’ll see that I don’t include classification/tagging papers; I guess I need a break from my PhD topic 🙂 Enjoy!
On Thursday 13th May, from 17:00 to 19:00 (CET), I’ll be part of the workshop ‘Exploring connections between AI and Music’. The live-streamed event is free to watch, and it marks the presentation of the AI and Music Festival and its first activity (more information here). To prepare for it, I reviewed previous work by music AI artists and researchers. This slide deck summarizes how I perceive the current music AI scene.
Our “Upsampling artifacts in neural audio synthesis” paper now has a GitHub page with code to experiment with its figures. The notebooks provide additional (interactive) material for further understanding our findings.
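As a quick illustration of the kind of artifact the paper studies (this sketch is mine, not taken from the paper’s notebooks, and it assumes NumPy is available): upsampling a signal by zero-insertion, the first step of transposed-convolution upsampling, creates mirrored spectral images of the original tone.

```python
import numpy as np

# A pure tone: 16 cycles over 256 samples.
N = 256
x = np.sin(2 * np.pi * 16 * np.arange(N) / N)

# Upsample x2 by zero-insertion (as transposed-conv layers do internally).
y = np.zeros(2 * N)
y[::2] = x

# The upsampled spectrum contains the original tone AND a mirrored image.
mag = np.abs(np.fft.fft(y))
peaks = sorted(np.argsort(mag)[-4:])
print(peaks)  # bins 16/496 (the tone) and 240/272 (the imaging artifact)
```

Unless the subsequent filter (the learned convolution) attenuates the image at bins 240/272, it survives as an audible high-frequency artifact.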
How can we extract audio objects with deep learning, without explicitly learning to extract them? In our ICASSP paper we propose multichannel-based learning, a technique closely related to self-supervised learning, differentiable digital signal processing, and universal sound separation.