
Always striking the right chord

Photo: © Jens Thomas
A sleek AI duo: Eric Eitel and Dr. Julia Schneider.

Digitalisation is completely transforming the music market and music culture, with all the pros and cons that come with it. Algorithms are already creating danceable sounds and already influence what we want to hear. Artificial Intelligence (AI) is an abbreviation, a saviour and a terrifying prospect all at once. Is progress multiplying the alternatives, or is it just the beginning of new monocultures? And in the end, can anyone earn money from it?

 

By ERIC EITEL and Dr. JULIA SCHNEIDER (Eric Eitel curates technology, art and culture projects and is a founding member of Music Pool Berlin. Dr. Julia Schneider is an independent consultant for Artificial Intelligence (AI) and a member of the scientific committee of the VDEI, Verband der Exoskelettindustrie e.V.)

 

The good news first: the cassette is still alive and kicking. In the US, 174,000 cassettes were sold in 2017 – a fivefold increase over 2010. The CD, on the other hand, has suffered a landslide; in ten years’ time we will probably hardly remember this glittering mini-Frisbee, which first saw the light of day in 1982. What’s more, digitalisation has transformed the music market significantly. Audio streaming now accounts for almost 50 per cent of total revenue in the music industry. Music creators increasingly earn their money from live gigs – and, as we know, are paid only marginally by the major streaming platforms and record labels. Beyond these changes in the way we consume music, digitalisation creates one thing above all else: new music formats, new compositions, more gimmicks. More and more music makers are now also experimenting with AI.

Not a dark future, but reality

And this begs the question: what exactly is going on here? The discourse about AI-generated music raises, first of all, the question of whether humans will still be needed in music production at all. What legitimacy will I have as a musician when AI systems can produce more, and more successful, songs than I can? Already, AI systems such as Flow Machines by the Sony CSL Research Laboratory can devise complex compositions – in this case, one can genuinely speak of songs composed by an AI. In 2017, the supposedly first AI-composed pop song in the world hit the media – “Daddy’s Car”, a song reminiscent of the Beatles and Oasis. The following year, based on the same tech platform, “Hello World” by Skygge aka Benoit Carré followed – probably the first AI-produced pop album in music history.

This shows that functional music in particular – music for film, TV and games – as well as entertainment music could in future largely be produced by AI systems. If you’re looking for background music for a video today, you’ll have no trouble finding it at Jukedeck. For some years now there has been a generator there into which you can enter a mood and a length – for example “pop”, “melancholic” and “15 seconds” – and an AI delivers everything you need within a few seconds. No more trouble with copyright, and all that for just a few cents. The same applies to “adaptive music”. Imagine a computer game with a basic musical theme and many variations. With systems from vendors like Melodrive, “immersive” soundtracks can be produced both originally and in real time. Immersive means that users can immerse themselves in a virtual environment – visually and acoustically. Although this technology is still in its infancy, it is developing rapidly.
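To make this concrete, here is a minimal sketch in Python of what such a mood-and-length request could look like. The service URL, endpoint and parameter names are hypothetical illustrations for this article – they are not Jukedeck’s actual API.

import requests

def generate_track(genre: str, mood: str, seconds: int) -> bytes:
    """Ask a (hypothetical) generative-music service for a short track."""
    response = requests.post(
        "https://api.example-music-ai.com/v1/tracks",  # placeholder URL, not a real service
        json={"genre": genre, "mood": mood, "duration_seconds": seconds},
        timeout=30,
    )
    response.raise_for_status()
    return response.content  # e.g. an MP3 byte stream

# The example from the text: 15 seconds of melancholic pop.
audio = generate_track(genre="pop", mood="melancholic", seconds=15)
with open("background.mp3", "wb") as f:
    f.write(audio)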

On the one hand, the music industry’s original revenue models have been shaken by the digital upheaval of recent years; personal responsibility is increasing for music creators, but so is the joy of experimenting, thanks to the wealth of new digital options. On the other hand, uniformity is just about to increase

Creativity, cash, AI 

But does all that bring more diversity? Is progress multiplying the alternatives, or is it just the beginning of new monocultures? And who is making money off it? On the one hand, the music industry’s original revenue models have been shaken by the digital upheaval of recent years; personal responsibility is increasing for music creators, but so is the joy of experimenting, thanks to the wealth of new digital options. On the other hand, uniformity is growing – and that has much to do with the digitised market. In streaming, the first 30 seconds already decide success or failure. The large streaming providers are therefore currently developing many small radio stations for their listeners; Spotify alone currently has 4,500 curated playlists. Yet alternative or lesser-known artists are usually overlooked in digital searches – and this is precisely why such artists hardly earn anything. However, it is not just the streaming services and the record companies, with their poor remuneration of artists, that are at fault. We music consumers are also to blame. The economists Nils Wlömert and Dominik Papies discovered in a study that Spotify users spend less money on CDs and downloads as soon as they subscribe to Spotify – whoever takes out a Spotify premium subscription for 9.99 euros per month spends almost a quarter less on albums, singles and individual songs. Digitalisation also means that not everyone benefits from it and that monocultures can initially establish themselves. More and more music creators and managers are already adapting music to online listening habits and user behaviour. Spotify itself is also investing heavily in AI, which could lead to platform operators increasingly integrating AI music into our playlists in the future. By implication, existing copyright could thus become an important promoter of artificially produced music: AI-generated tracks come without royalty obligations, so the cost of licensing human-made music gives platforms an incentive to fill playlists with their own AI output.

 

This could also be the AI E.T., but in fact these are screenshots from the video “Magic Man” from the album “Hello World” by Skygge aka Benoit Carré, probably the first AI-produced pop album in music history. Directed by Jean-François Robert

So what does all this mean? Is creative personal achievement ultimately being sacrificed on the altar of AI-generated music? Yes and no. Because what is really promising at the moment is “deep learning” – and here we are only at the beginning. As a sub-form of machine learning, it works with artificial neural networks, which recognise structures themselves, evaluate the results and improve themselves over many cycles while the application is running – “learning by themselves”, without human intervention. For music production, this means that artificial knowledge can today be generated from old songs. That knowledge can be certain data points of a song, or its structure. It can in turn be generalised and used for new songs. From the point of view of music culture, the widespread use of AI could on the one hand lead to more uniform, AI-generated, “generic” music, but on the other hand it could also mean a renaissance of experimental music, which for the time being could remain a human domain. For even if a music AI were to happen to compose something like twelve-tone music, it could not contribute the socio-cultural context that would be necessary to convince other people to accept it as art. For now, humans will remain the gatekeepers of what other people accept as art.
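What “learning from old songs” can mean is easiest to see in a deliberately tiny sketch: a small neural network in Python (PyTorch) is trained to predict the next note of existing melodies and then samples a short new sequence. The toy corpus, model size and training loop are illustrative assumptions – not how Flow Machines or any commercial system actually works.

import torch
import torch.nn as nn

# Toy "old songs": melodies encoded as sequences of MIDI pitches.
corpus = [
    [60, 62, 64, 65, 67, 65, 64, 62],  # a simple C-major run
    [60, 64, 67, 72, 67, 64, 60, 64],  # an arpeggio figure
]

class NextNoteModel(nn.Module):
    """Predicts the next pitch from the previous ones (a tiny LSTM)."""
    def __init__(self, vocab=128, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, 32)
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab)

    def forward(self, x):
        h, _ = self.lstm(self.embed(x))
        return self.head(h)

model = NextNoteModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

# Training: the network "recognises structure" by predicting each note
# from the notes before it.
for _ in range(200):
    for seq in corpus:
        x = torch.tensor(seq[:-1]).unsqueeze(0)   # input notes
        y = torch.tensor(seq[1:]).unsqueeze(0)    # next-note targets
        logits = model(x)
        loss = loss_fn(logits.transpose(1, 2), y)
        opt.zero_grad(); loss.backward(); opt.step()

# "New song": start from one note and sample a continuation.
notes = [60]
for _ in range(8):
    logits = model(torch.tensor(notes).unsqueeze(0))[0, -1]
    notes.append(int(torch.multinomial(logits.softmax(-1), 1)))
print(notes)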

Digitalisation also means that not everyone benefits from it and that monocultures can initially establish themselves. More and more music creators and managers are already adapting music to the listening habits and user behaviour online. Because what is really promising at the moment is “deep learning” – and in this respect we’re only at the beginning

Other technologies will revolutionise music as a holistic sensory experience. The field of music performance in particular will benefit massively in the near future from the development and use of new human-machine interfaces, under the umbrella term “gestural music”. Especially exciting are new wearables equipped with motion sensors that control and manipulate electronic music through body movements or gestures. The “Mi.Mu Gloves”, for example, are complex and highly sensitive sensor gloves with which the Dutch musician Chagall distorts her own voice in live performances. Anyone who has ever witnessed an electronic live act and been annoyed by the anti-show of brooding knob-turners immediately understands the potential benefits of this technology for future stage shows. In the next development step, brain-computer interfaces (BCI) will revolutionise the control of devices and effects, and not only in the area of performance. That would indeed be a future of more than just experimental listening pleasure; it would broaden the spectrum. In terms of accessibility, it would also create the opportunity for people with severe physical limitations to compose and perform music using brainwaves. Creating more inclusion through digitalisation would be a real game changer (for a change).
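As a rough illustration of the principle – a gesture becomes a control signal – here is a minimal Python sketch that maps a hand-tilt sensor reading onto a MIDI control change, which a synthesiser could interpret as, say, filter cutoff. The sensor-reading function is a hypothetical stand-in; this is not how the Mi.Mu gloves work internally.

import time
import mido

def read_hand_tilt() -> float:
    """Hypothetical sensor read: hand tilt normalised to 0.0..1.0."""
    # A real wearable would poll an accelerometer/IMU here; we fake a value.
    return 0.5

out = mido.open_output()  # first available MIDI output port

while True:
    tilt = read_hand_tilt()
    cc_value = int(tilt * 127)  # scale the gesture to the MIDI range 0..127
    # CC 74 is commonly mapped to filter cutoff / brightness on synths.
    out.send(mido.Message("control_change", control=74, value=cc_value))
    time.sleep(0.02)  # roughly 50 updates per second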

 
