Digitalisation is completely transforming the music market and music culture, with all the pros and cons that come with it. Algorithms already create danceable sounds and already influence what we want to hear. Artificial intelligence (AI) is at once an abbreviation, a saviour and a bogeyman. Is progress multiplying the alternatives, or is it just the beginning of new monocultures? And can anyone actually earn money from it?
By ERIC EITEL and Dr. JULIA SCHNEIDER (Eric Eitel curates technology, art and cultural projects and is a founding member of Music Pool Berlin. Dr. Julia Schneider is an independent consultant for Artificial Intelligence (AI) and member of the scientific committee of the VDEI, Verband der Exoskelettindustrie e.V.)
The good news first: the cassette is still alive and kicking. In the US, 174,000 cassettes were sold in 2017 – a fivefold increase over 2010. The CD, on the other hand, has suffered a landslide; in ten years’ time we will probably hardly remember this glittering mini frisbee, which first saw the light of day in 1982. More importantly, digitalisation has transformed the music market itself: audio streaming now accounts for almost 50 per cent of the music industry’s total revenue, and music creators increasingly earn their money from live gigs – since, as we know, the major streaming platforms and record labels pay them only marginally. Beyond these changes in how we consume music, digitalisation creates one thing above all else: new music formats, new compositions, more gimmicks. More and more music makers are now also experimenting with AI.
No dark future, but rather reality
And this begs the question: what exactly is going on here? The discourse about AI-generated music raises, first of all, the question of whether people will still be needed in music production in the future. What legitimacy will I have as a musician when AI systems can produce more songs, and more successful ones, than I can? Already, AI systems such as Flow Machines by the Sony CSL Research Laboratory can devise complex compositions – in this case, one can genuinely speak of songs composed by an AI. In 2017, the supposedly first AI-composed pop song in the world hit the media: “Daddy’s Car”, a song reminiscent of the Beatles and Oasis. The following year, “Hello World” by Skygge aka Benoit Carré followed on the same technology platform – probably the first AI-produced pop album in music history.
This suggests that functional music in particular – music for film, TV and games – as well as entertainment music could in future largely be produced by AI systems. If you are looking for background music for a video today, you will have no trouble finding it at Jukedeck. For some years now, the service has let you enter a genre, a mood and a length – for example “pop”, “melancholic” and “15 seconds” – and an AI delivers everything you need within seconds. No more trouble with copyright, and all that for just a few cents. The same applies to “adaptive music”. Imagine a computer game with a basic musical theme and many variations. With systems from vendors like Melodrive, “immersive” soundtracks can be produced in an original way and in real time. Immersive means that users can immerse themselves in a virtual environment – visually and acoustically. This technology is still in its infancy, but it is developing rapidly.
Creativity, cash, AI
But does all that bring more diversity? Is progress multiplying the alternatives, or is it just the beginning of new monocultures? And who is making money from it? On the one hand, the original revenue models of the music industry have been shaken by the digital upheaval of recent years; personal responsibility is increasing for music creators, but so is the joy of experimenting, thanks to the broad emergence of digital options. On the other hand, uniformity keeps growing – and that has a lot to do with the digitised market. In streaming, the first 30 seconds already decide success or failure. The large streaming providers are therefore building many small radio stations for their listeners; Spotify alone currently offers 4,500 curated playlists. Yet alternative or lesser-known artists are usually overlooked in digital searches – and this is precisely why such artists hardly earn anything.

However, it is not only the streaming services and record companies, with their poor remuneration of artists, that are at fault. We music consumers are also to blame. The economists Nils Wlömert and Dominik Papies found in a study that Spotify users spend less money on CDs and downloads as soon as they subscribe: whoever takes out a Spotify premium subscription for 9.99 euros per month spends almost a quarter less on albums, singles and individual songs. Digitalisation also means that not everyone benefits from it and that monocultures can initially establish themselves. More and more music creators and managers are already adapting their music to online listening habits and user behaviour. Spotify itself is also investing heavily in AI, which could lead platform operators to integrate ever more AI music into our playlists in the future. Paradoxically, existing copyright – and the royalties it entails – could thus become an important promoter of artificially produced music.
So what does all this mean? Is creative personal achievement ultimately being sacrificed on the altar of AI-generated music? Yes and no. What is really promising at the moment is “deep learning” – and here we are only at the beginning. As a subfield of machine learning, it works with artificial neural networks that recognise structures on their own, evaluate the results and improve themselves over many cycles while the application is running – “learning by themselves”, without human intervention. For music production, this means that knowledge can today be extracted from old songs: certain data points of a song, or its structure. This knowledge, in turn, can be generalised and used for new songs. From a music-cultural point of view, the widespread use of AI could lead to more uniform, “generic” AI-generated music on the one hand, but it could also mean a renaissance of experimental music, which for the time being may remain a human domain. For even if a music AI happened to compose something like twelve-tone music, it could not contribute the sociocultural context needed to convince other people to accept it as art. For now, humans will remain the gatekeepers of what other people accept as art.
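To make the basic idea of “learning structure from old songs and reusing it for new ones” tangible, here is a deliberately tiny Python sketch. It uses a simple Markov chain rather than a deep neural network, and the “songs” are invented note sequences – a toy stand-in for the far more complex systems described above, not an implementation of them.

```python
import random

# Toy "old songs": invented note sequences standing in for a training corpus.
old_songs = [
    ["C", "E", "G", "E", "C", "G", "C"],
    ["C", "G", "A", "G", "E", "C"],
]

# Learn structure: count which note tends to follow which.
transitions = {}
for song in old_songs:
    for current, following in zip(song, song[1:]):
        transitions.setdefault(current, []).append(following)

def generate(start="C", length=8, seed=42):
    """Generate a 'new song' by walking the learned transitions."""
    rng = random.Random(seed)
    song = [start]
    for _ in range(length - 1):
        song.append(rng.choice(transitions[song[-1]]))
    return song

print(generate())
```

The generated sequence never contains a note-to-note step that did not occur somewhere in the “old songs” – which also hints at the article’s uniformity worry: a system trained only on existing material can recombine it, but not step outside it.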
Other technologies will revolutionise music as a holistic sensory experience. The field of music performance in particular will benefit massively in the near future from the development and use of new human-machine interfaces, under the umbrella term “gestural music”. Especially exciting are new wearables equipped with motion sensors that control and manipulate electronic music through body movements or gestures. The “Mi.Mu Gloves”, for example, are complex and highly sensitive sensor gloves with which the Dutch musician Chagall distorts her own voice in live performances. Anyone who has ever witnessed an electronic live act and been annoyed by the anti-show of brooding knob-turners will immediately understand the potential of this technology for future stage shows. In the next development step, brain-computer interfaces (BCIs) will revolutionise the control of devices and effects, and not only in performance. That would be more than just a future of experimental listening pleasure; it would broaden the spectrum. In terms of accessibility, it would also give people with severe physical limitations the opportunity to compose and perform music using brainwaves. Creating more inclusion through digitalisation would be a real game changer (for a change).