While I never had the chance to attend the Web Audio Conference (WAC), I have followed the recent developments of the web audio API with great interest. But this time I couldn't resist going – since Dolby is the main sponsor of the event, and the conference is organised by friends in my city!
As a personal curiosity: the first WAC (back in 2015!) was organized by IRCAM while I was a research intern there – and I remember having the feeling that audio processing in the browser was going to be THE THING. Five years later, we are starting to see the social impact of the ideas that were introduced back then.
As you can guess from the picture above, we were presenting Dolby.io to the conference attendees. The goal of Dolby.io is to "make it easy to build and deliver high-quality content with just a few lines of code". In other words, to provide easy-to-use APIs for enhancing audio and video files, and for delivering immersive video calls.
The browser: your new instrument
Note that using the web audio API for making music enables programs (living music pieces) to be shared via the browser, which is accessible and convenient – especially for collaborative music creation (see the work by Dannenberg et al.). Other interesting ideas include bringing interfaces similar to Pure Data or Max/MSP to the browser (by Shihong Ren et al.), or creating generative music with the web audio API (by Paul A Paroczai). I also recommend watching the keynote by Anna Huang.
It is also worth checking the performances section of the program to see how different artists used the browser as an instrument. There were a lot of interesting ideas!
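To give a flavour of how approachable the browser is as an instrument, here is a minimal sketch of my own (not taken from any of the presented works) that plays a tone with the web audio API. The `midiToFreq` helper and the default duration are illustrative choices:

```javascript
// Convert a MIDI note number to a frequency in Hz (A4 = note 69 = 440 Hz).
function midiToFreq(note) {
  return 440 * Math.pow(2, (note - 69) / 12);
}

// Play a short sine tone in the browser. Most browsers require a user
// gesture (e.g. a click) before an AudioContext is allowed to produce sound.
function playNote(note, durationSec = 0.5) {
  const ctx = new AudioContext();
  const osc = ctx.createOscillator();
  const gain = ctx.createGain();
  osc.type = "sine";
  osc.frequency.value = midiToFreq(note);
  gain.gain.value = 0.2; // keep the volume gentle
  osc.connect(gain).connect(ctx.destination);
  osc.start();
  osc.stop(ctx.currentTime + durationSec);
}

// Example: wire it to a (hypothetical) button, which also satisfies the
// user-gesture requirement.
// document.querySelector("#play").onclick = () => playNote(69); // A4
```

From here it is a small step to sequencing several oscillators, which is exactly the kind of thing many of the performances built on.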
Essentia.js, recently developed by the Music Technology Group, brings a collection of music/audio analysis algorithms to your web client. For example, they are able to run musicnn in your browser! See the demo and video.
Stöter et al. also presented a web-based platform for music source separation that includes two state-of-the-art models: Open-Unmix and Spleeter.
The keynote by Juan José Montiel was AMAZING: he showed us how blind people interact with technology – essentially, via audio cues. Throughout the presentation, he discussed the opportunities and challenges of using the web audio API to develop accessible interfaces (and experiences, like audiogames) for blind people. Here is the video!
Online listening tests
A nice example of the potential of the web audio API is the well-known webMUSHRA framework for online listening tests. During this WAC, Pauwels et al. discussed modifications to design adaptive online listening tests. These make it possible to adapt the perceptual test to each user, or to use server-based sampling to enforce a certain distribution of conditions across all participants.
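As a toy illustration of the server-based sampling idea (my own sketch, not how webMUSHRA or Pauwels et al. actually implement it), a server could simply hand each new participant the test condition that has been assigned the fewest times so far:

```javascript
// Hypothetical balanced sampler: always assign the least-used condition,
// so the distribution of conditions stays (near-)uniform across participants.
function makeBalancedSampler(conditions) {
  const counts = new Map(conditions.map((c) => [c, 0]));
  return function nextCondition() {
    let best = null;
    for (const [cond, n] of counts) {
      if (best === null || n < counts.get(best)) best = cond;
    }
    counts.set(best, counts.get(best) + 1);
    return best;
  };
}

// Six participants over three conditions: each condition is assigned twice.
const next = makeBalancedSampler(["A", "B", "C"]);
const assignments = Array.from({ length: 6 }, () => next());
```

A real deployment would also have to handle participants who drop out mid-test, but the core idea – let the server, not the client, decide what each participant hears – is the same.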