Data and music: How is AI helping the music world?
Data has made it possible to evaluate an artist's performance beyond the number of CDs sold, a metric that mostly captured dedicated fans. In this article, let's discover how data is changing the world of music.
Music composition assistants
Finding a balance between technical skills and inspired creative flair is often a crucial goal when trying to launch a career as a composer. If you are too technical, your work may be perceived as soulless. But if your music is too unconventional, too free, and escapes categorization, you will have a much harder time finding listeners.
Built on detailed algorithms, platforms like Aiva, Amper, and Ecrett Music process their users' inputs and produce surprisingly moving music in the blink of an eye. Take Ecrett Music: by selecting options from a series of menus (Scene, Mood, and Genre), the platform efficiently concocts a piece of music that matches those criteria.
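The menu-to-music idea can be illustrated with a toy sketch: the Mood and Genre choices are mapped to musical parameters (a scale and a tempo), and a short note sequence is sampled from them. All the mappings below are invented for illustration; a real platform like Ecrett Music uses far richer generative models.

```python
import random

# Hypothetical menu-to-parameter mappings, invented for this sketch.
MOOD_SCALES = {
    "happy": ["C", "D", "E", "G", "A"],  # major pentatonic
    "sad":   ["A", "C", "D", "E", "G"],  # minor pentatonic
}
GENRE_TEMPOS = {"pop": 120, "ambient": 70}

def generate(mood, genre, length=8, seed=None):
    """Sample a short note sequence from parameters derived from menu choices."""
    rng = random.Random(seed)
    scale = MOOD_SCALES[mood]
    notes = [rng.choice(scale) for _ in range(length)]
    return {"tempo_bpm": GENRE_TEMPOS[genre], "notes": notes}

piece = generate("happy", "pop", seed=42)
print(piece["tempo_bpm"], piece["notes"])
```

The point is not the music itself but the pattern: a handful of user-facing choices is translated into a parameter space, and the generator samples within it.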
Google Magenta: music innovation assistant
While the most tempting option is to press a button and wait for the AI to deliver ready-to-use songs, other approaches let humans and machines interact to go beyond this easy solution. Google's Magenta project, for example, opens the door to genuine musical innovation. In particular, this project gave rise to NSynth, a plugin that creates new sounds by blending pairs of existing sounds provided by users.
Other initiatives, such as the Orb Producer Suite plugins, strive to help you develop your own ideas by suggesting, for example, musically sound chord progressions or paths.
On the user side
There are two main cases. On the one hand, applications like Apple Music, Spotify, and Deezer recommend music to users, helping them discover new tracks similar or close to their usual genres. They also build playlists that mix the tracks users usually listen to with suggested new ones. To do this, these companies collect behavioral data: which tracks are played and how often, pauses, repeats, and listening time, so that they can learn from your habits and be as relevant as possible.
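The core of such a recommender can be sketched with user-based collaborative filtering: represent each user as a vector of play counts, find the most similar listener by cosine similarity, and suggest tracks that listener plays but you haven't heard. The users, tracks, and counts below are invented; real services combine many more signals and far larger models.

```python
import math
from collections import Counter

# Hypothetical play counts per user (track -> number of plays).
plays = {
    "alice": Counter({"track_a": 12, "track_b": 5, "track_c": 1}),
    "bob":   Counter({"track_a": 10, "track_b": 7, "track_d": 3}),
    "carol": Counter({"track_e": 9, "track_f": 4}),
}

def cosine(u, v):
    """Cosine similarity between two sparse play-count vectors."""
    dot = sum(u[t] * v[t] for t in u if t in v)
    norm = math.sqrt(sum(c * c for c in u.values())) \
         * math.sqrt(sum(c * c for c in v.values()))
    return dot / norm if norm else 0.0

def recommend(user, k=2):
    """Suggest up to k tracks the most similar user plays that `user` hasn't heard."""
    others = [(cosine(plays[user], plays[o]), o) for o in plays if o != user]
    _, nearest = max(others)
    unheard = plays[nearest].keys() - plays[user].keys()
    return sorted(unheard, key=lambda t: -plays[nearest][t])[:k]

print(recommend("alice"))  # → ['track_d'] (bob is the most similar listener)
```

Bob shares alice's top tracks, so his remaining favorites become her suggestions, while carol's completely disjoint history contributes nothing.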
On the other hand, with applications like Shazam, when users hear music that interests them, they can easily identify it, save it for later, and discover new tracks from there.
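Identification in the style popularized by Shazam rests on audio fingerprinting: prominent spectrogram peaks are paired into landmark hashes of the form (freq1, freq2, time-delta), and an unknown snippet is matched by counting shared hashes. The peak lists below are invented for illustration; a real system extracts them from an FFT-based spectrogram and matches against millions of songs.

```python
def fingerprint(peaks, fan_out=3):
    """Turn (time, freq) peaks into landmark hashes: (f1, f2, dt) -> anchor time."""
    hashes = {}
    for i, (t1, f1) in enumerate(peaks):
        # Anchor each peak to a few of the peaks that follow it.
        for t2, f2 in peaks[i + 1 : i + 1 + fan_out]:
            hashes[(f1, f2, t2 - t1)] = t1
    return hashes

# A tiny, invented "database" of two songs' fingerprints.
song_db = {
    "song_x": fingerprint([(0, 440), (1, 660), (2, 550), (3, 880)]),
    "song_y": fingerprint([(0, 330), (1, 330), (2, 990), (3, 220)]),
}

def identify(snippet_peaks):
    """Return the song whose fingerprint shares the most hashes with the snippet."""
    query = fingerprint(snippet_peaks)
    scores = {name: len(query.keys() & h.keys()) for name, h in song_db.items()}
    return max(scores, key=scores.get)

# A snippet of song_x shifted 10 seconds later: hashes encode time *deltas*,
# so the absolute offset does not matter.
print(identify([(10, 440), (11, 660), (12, 550), (13, 880)]))  # → song_x
```

Using time deltas rather than absolute times is what makes the match robust to hearing the song from any starting point.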
For these players in the music world, data analysis is a key skill. If you've tried the various music platforms, you'll have noticed that the quality of their recommendation systems is uneven.
Go further: Predicting success
This is one of the big questions the music industry is asking itself: can we predict musical success before release? Can we predict future hits thanks to data? In 2013, Spotify tried to predict some of the Grammy Awards based on its users' listening data. The result: 4 correct predictions out of 6. Shazam did better, with 11 correct predictions out of 16, or about a 69% success rate.
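A heavily simplified version of this idea is to rank candidate tracks by early-engagement signals from the behavioral data described above. The sketch below combines two invented metrics, completion rate and repeat-listen rate, into a weighted score; the weights, tracks, and numbers are all illustrative stand-ins for the far more complex models the industry actually trains.

```python
# Hypothetical early-engagement statistics per track (all numbers invented).
candidates = {
    "track_a": {"completion": 0.92, "repeats": 0.31},
    "track_b": {"completion": 0.55, "repeats": 0.05},
    "track_c": {"completion": 0.80, "repeats": 0.22},
}

def hit_score(stats, w_completion=0.6, w_repeats=0.4):
    """Weighted blend of engagement signals; weights are arbitrary here."""
    return w_completion * stats["completion"] + w_repeats * stats["repeats"]

# Rank tracks from most to least promising according to the score.
ranked = sorted(candidates, key=lambda t: hit_score(candidates[t]), reverse=True)
print(ranked)  # → ['track_a', 'track_c', 'track_b']
```

Even this toy ranking captures the business intuition: tracks that listeners finish and replay early on are the ones worth backing first.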
But what the industry is really trying to do is push this analysis further upstream than the Grammy Awards, to know which titles to invest in as soon as they are released. And to go even further, why not use data and algorithms to discover who will be the next big star?
However, does the unpredictable nature of music, and of art more generally, make this technically possible at all? Time will tell!