What is the takeaway of Artificial Intelligence (AI)? This is a recurring conversation topic among AI practitioners, specialized journalists, and brave politicians. Although some simple concepts are clearly conveyed to the general audience, there are others that are not so widely known. In this post I’ll focus on an important topic that is often overlooked: the economics behind AI.
Since AI is impacting our lives through products available in the marketplace, the goal of this post is to analyze what happens to AI systems when they are consumed via the free market. In other words, AI is developed and consumed in a market-driven fashion, and I would like to better understand the consequences of that. Hence, I’ll focus on the economic side of AI to show that, to encourage the main AI actors to behave ethically, we had better act directly on the market.
In this series of posts I have written a couple of articles discussing the pros & cons of spectrogram-based VGG architectures, reflecting on the role of computer vision deep learning architectures in the audio field. Now it’s time to discuss what’s up with waveform-based VGGs!
Many things have happened between the pioneering papers written by Lewis and Todd in the 80s and the current wave of GAN composers. Along that journey, connectionist work was forgotten during the AI winter, very influential names (like Schmidhuber or Ng) contributed seminal publications and, in the meantime, researchers have made tons of awesome progress.
I won’t be going through every single paper in the field of neural networks for music, nor diving into technicalities, but I’ll cover the milestones that helped shape the current state of music AI – this being a nice excuse to give credit to those wild researchers who decided to care about a signal that is nothing else but cool. Let’s start!