
The machine-driven composer: Facts and fictions about AI and music

Nov 11 2021


Marie Clara Fancello


The notion of computers creating music, instead of musicians, often draws people into the tired “humans vs. robots” debate.

The uncertainty we feel about the role of AI in music production, and the creative process, shouldn’t be ignored. Instead, it is a conversation worth taking the time to have.

Before going into how AI is used in music, and how it can contribute (and already has contributed) to the creative process, here are some common concerns related to its use:

  • Copyright. AI uses existing music to generate new music. Does the resulting work belong to the owners of the music that the AI analysed? Does it belong to the person who gathered the data? Or does it belong to the person who coded the AI to get the final song? What about the band that performs it? Copyright laws are not currently designed to address these issues.
  • Inequality. Using AI can help artists write more songs, and focus on genres the public most enjoys. What room remains for artists who do not want to delegate that creative process, and may thus produce music more slowly, or make riskier choices? This can exacerbate existing inequalities between artists who produce many songs with little meaning, and those who take more time but produce more personal, arguably powerful work.
  • AI cannot “understand” feelings and undertones. Artists using AI may be tempted to opt for quantity over quality, if “quality” is defined by emotional resonance and a personal touch. That choice is likely to vary by artist style and by audience.
  • What about lyrics? Lyrics are one way people recognise themselves in the music they love. An AI that generates lyrics at random could diminish the importance of word choice.
  • This topic often invites a larger question about what it means to be a musician. Some would say it's about writing your own music; Michael Jackson, David Bowie, and Queen wrote their own songs. When an AI replaces that creative process, what is the difference between being a songwriter and being a cover artist? 
  • A song's message is often essential to its resonance. Being a musician used to mean—and, for many (like myself), still means—being able to convey an emotion or truth, or to fight for social and political causes. Believing in what you compose may be key to the emotional quality of a good song. But if artists stop composing, does the music retain its purpose?

How the technology works

Typically, music-driven AI analyses a database of existing samples, songs and lyrics, then creates new work from those component parts. Most music-related AI systems also incorporate music theory. Depending on the software you use (Amper AI, Ecrett Music and MuseNet are some examples), it is possible to take pre-existing musical samples and combine them into a new piece, selecting the style and type of music you want generated.
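
To make the idea concrete, here is a minimal Python sketch of that recombination step, assuming a small, made-up catalogue of tagged samples. Real tools such as Amper AI or MuseNet work at far greater scale and render actual audio, but the underlying move—selecting and recombining existing material by style—is similar.

```python
import random

# Toy catalogue of pre-existing samples, tagged by style.
# File names and tags are invented for illustration only.
SAMPLE_LIBRARY = {
    "lo-fi": ["lofi_drums_01.wav", "lofi_keys_02.wav", "lofi_bass_03.wav"],
    "house": ["house_kick_01.wav", "house_synth_04.wav", "house_bass_02.wav"],
    "cinematic": ["strings_swell_01.wav", "piano_motif_05.wav"],
}

def compose_from_samples(style, sections=4, seed=None):
    """Return an ordered list of samples forming a new arrangement in the requested style."""
    rng = random.Random(seed)
    pool = SAMPLE_LIBRARY[style]
    # Recombine existing material: each section reuses one sample from the pool.
    return [rng.choice(pool) for _ in range(sections)]

if __name__ == "__main__":
    print(compose_from_samples("lo-fi", sections=6, seed=42))
```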

In terms of lyrics, AI will usually generate phrases by subject, based on analysis of other phrases on the same subject. Thus you can write a love song drawn from thousands of other songs about love. But love can feel different from one person to another, and love songs are often rife with metaphors. Currently, AI cannot meaningfully describe love’s complexity as a reflection of human experience. Michael Kiwanuka’s “Love and Hate,” for example, isn't just about love; it is about toxic relationships, and how love and hate can interweave.
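
As a rough illustration of that approach, the sketch below builds a toy word-transition (Markov-style) model from a handful of invented lyric lines tagged by subject, then chains words together into a new line. Production systems rely on far larger corpora and more sophisticated language models, but the principle of generating phrases from other phrases on the same subject is the same.

```python
import random
from collections import defaultdict

# Tiny, made-up corpus of lyric lines tagged by subject (real systems train on thousands).
CORPUS = {
    "love": [
        "my heart beats only for you",
        "only you can hold my heart tonight",
        "you and my heart against the night",
    ],
    "politics": [
        "raise your voice against the wall",
        "the wall will fall when we raise our voice",
    ],
}

def build_bigram_model(lines):
    """Map each word to the words observed to follow it in the corpus."""
    model = defaultdict(list)
    for line in lines:
        words = line.split()
        for current_word, next_word in zip(words, words[1:]):
            model[current_word].append(next_word)
    return model

def generate_line(subject, length=7, seed=None):
    """Generate one lyric line on a subject by chaining observed word transitions."""
    rng = random.Random(seed)
    model = build_bigram_model(CORPUS[subject])
    word = rng.choice(list(model))  # start from any word seen in the corpus
    line = [word]
    for _ in range(length - 1):
        followers = model.get(word)
        if not followers:  # dead end: no observed continuation
            break
        word = rng.choice(followers)
        line.append(word)
    return " ".join(line)

print(generate_line("love", seed=1))
```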

Kiwanuka’s music is layered with deep poetry and subtext; for the moment, no AI is capable of this (at least not intentionally). Perhaps once affective computing becomes mainstream, AI-generated lyrics could evoke such sensitivity, but the risk of losing some aspect of the intangible remains.

AI in music is not a novelty

On Sean Carroll’s Mindscape podcast, the musician Grimes declared that, once artificial general intelligence exists, it will be better at making art than artists. Yet use of AI in music production is not as new as we believe. It even precedes the creation of data-related professions. 

Consider the greatest artists from the 1970s. Pink Floyd created 15 albums, Led Zeppelin made eight, AC/DC made 17, Queen 15, Michael Jackson 10. David Bowie alone released 27 albums and 128 singles. Bowie was able to produce so prolifically partly by using an early, analogue precursor of today’s algorithmic assistance: he’d write random words on slips of paper, put them in a hat, then pick a few and write a song that combined them.

In the ‘90s, he worked with a collaborator to develop an app that automated this process. The “Verbasizer,” as he called it, followed the same idea: analysing sentences, cutting them up and randomly redistributing the pieces to create a song. It enabled him to write uniquely creative music while boosting his productivity. Unlike Taylor Swift or Adele, known for writing songs about their lived relationships, Bowie simply drew from whatever came up on his computer screen.
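
For readers curious what such a cut-up process looks like in code, here is a short Python imitation of the idea—not the original Verbasizer—that splits source sentences into words, shuffles them, and deals them back out as new lines.

```python
import random

def cutup_lines(sentences, words_per_line=6, lines=4, seed=None):
    """Cut source sentences into words, shuffle them, and deal them out as new lines.
    A rough imitation of the cut-up idea behind the Verbasizer, not the original program."""
    rng = random.Random(seed)
    words = [word for sentence in sentences for word in sentence.split()]
    rng.shuffle(words)
    return [
        " ".join(words[i:i + words_per_line])
        for i in range(0, min(len(words), words_per_line * lines), words_per_line)
    ]

# Invented source material for demonstration.
source = [
    "The city sleeps beneath a neon rain",
    "I sold my shadow to the morning news",
    "Strangers trade their futures on the train",
]
for line in cutup_lines(source, seed=7):
    print(line)
```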

So for Bowie, this kind of algorithmic assistance was a fresh source of inspiration, and it also proved useful for production and creativity. In his case, the fear of losing a song’s message, sensitivity and emotion to the technology was unfounded; it provided helpful support but did not replace the artist himself.

One of the first albums created using AI is attributed to Taryn Southern—who, again, uses AI as a tool, not a replacement for the complete creative process. This technology is especially useful for artists with little knowledge of music theory or production. 

How AI can be leveraged

Southern used Amper AI to create her music. Amper AI allows artists to select a length, beats per minute (BPM), mood, and instruments, then combines those preferences to create a musical basis.
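
As a purely hypothetical illustration (not Amper AI’s actual interface or logic), the sketch below shows how a preference set of length, BPM, mood and instruments could be turned into a bar-by-bar chord plan—a “musical basis” a backing track could follow.

```python
import random

# Hypothetical preference set, loosely mirroring the kinds of inputs such tools expose.
preferences = {
    "length_seconds": 60,
    "bpm": 90,
    "mood": "uplifting",
    "instruments": ["piano", "strings", "drums"],
}

# Toy mapping from mood to a chord palette; an assumption for illustration only.
MOOD_TO_CHORDS = {
    "uplifting": ["C", "G", "Am", "F"],
    "melancholic": ["Am", "F", "C", "E"],
}

def musical_basis(prefs, seed=None):
    """Turn a preference set into a bar-by-bar chord plan."""
    rng = random.Random(seed)
    beats_total = prefs["bpm"] * prefs["length_seconds"] / 60  # beats in the whole piece
    bars = int(beats_total // 4)                               # assume 4/4 time
    palette = MOOD_TO_CHORDS[prefs["mood"]]
    return [
        {"bar": bar + 1, "chord": rng.choice(palette), "instruments": prefs["instruments"]}
        for bar in range(bars)
    ]

plan = musical_basis(preferences, seed=3)
print(plan[:4])
```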

Using AI this way can also help people with little training or "talent" in music to engage in the creative process, broadening the range of new jobs and perspectives in the music industry.

For example, it’s possible to use machine learning to analyse the genre, rhythm, and length of the songs most appreciated by the public, and to create music based on those characteristics. It is also possible to study lyrics to see which themes work best: romance or politics, for example. And AI can be coded to process, analyse and generate both lyrics and melody.
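
A minimal sketch of that kind of analysis might look like the following, using invented listening data and a simple linear model; a real study would draw on large streaming datasets and far richer features.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

# Hypothetical data: genre, tempo (BPM), length (seconds), and a popularity score.
songs = pd.DataFrame({
    "genre":      ["pop", "rock", "pop", "hip-hop", "rock", "hip-hop"],
    "bpm":        [118, 140, 102, 95, 128, 88],
    "length_s":   [200, 255, 190, 210, 240, 185],
    "popularity": [78, 55, 82, 74, 49, 70],
})

model = Pipeline([
    ("encode", ColumnTransformer(
        [("genre", OneHotEncoder(), ["genre"])],  # one-hot encode the genre column
        remainder="passthrough",                  # keep BPM and length as numeric features
    )),
    ("regress", LinearRegression()),
])
model.fit(songs[["genre", "bpm", "length_s"]], songs["popularity"])

# Estimate how a new, hypothetical track profile might score with the same audience.
candidate = pd.DataFrame({"genre": ["pop"], "bpm": [110], "length_s": [195]})
print(model.predict(candidate))
```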

Not all artists use, or will use, AI in the same way. Some may use it to write lyrics, or find a melody. Others may leverage it to generate new ideas, and still others could combine multiple functions for a unique mix. With less time lost on the aspects of creation that are most challenging for them, people can ultimately create more music, and explore more genres. 

Finally, AI can also help new artists get discovered. Platforms that analyse social media and streaming data can flag high-potential “unknown” artists to record labels. Streaming services likewise use AI to build personalised playlists, helping listeners discover new artists in the process. There are even ways to “game” Spotify’s algorithms to further aid new artists.
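
The detection side can be surprisingly simple in principle. Here is a toy sketch, with invented listener counts, that ranks artists by month-over-month streaming growth—one of the many signals such platforms combine.

```python
# Invented monthly listener counts: three months ago, two months ago, last month.
monthly_listeners = {
    "artist_a": [1_000, 2_600, 7_100],
    "artist_b": [40_000, 41_000, 41_500],
    "artist_c": [300, 900, 3_200],
}

def growth_score(history):
    """Average month-over-month growth rate; high values flag fast-rising artists."""
    rates = [(new - old) / old for old, new in zip(history, history[1:])]
    return sum(rates) / len(rates)

ranked = sorted(monthly_listeners, key=lambda a: growth_score(monthly_listeners[a]), reverse=True)
for artist in ranked:
    print(artist, round(growth_score(monthly_listeners[artist]), 2))
```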

All this aside, the use of artificial intelligence in artistic professions remains a touchy subject, and it is precisely for that reason that it deserves nuanced discussion. Some questions remain unanswered, and copyright law needs to catch up with changing norms.

But perhaps most importantly, AI brings with it a strong social need to redefine what art means, what value we attribute to human creativity, and what role AI can play in contributing to, rather than replacing, that creativity.

The artist Joy Buolamwini is a perfect example of how AI can help artists fight the technology’s own downsides. Using AI in her activist art, she founded the Algorithmic Justice League to fight what she calls the “coded gaze”: human proof that AI and activism, or even creativity, are not mutually exclusive.

The organisation Over the Bridge helps people in the music industry who struggle with their mental health; these professionals attempt suicide at twice the rate of the general population (a point of particular note, since the Covid-19 pandemic has made artistic life more precarious). To raise public visibility, Over the Bridge created “The Lost Tapes of the 27 Club,” a project that uses new songs, generated in the style of artists from the 27 Club, to highlight the mental health issues faced by music industry workers.


The project uses two different types of AI to generate music and lyrics designed to resemble what those artists created while alive. While the technique raises questions about copyright, it demonstrates what AI really is: a means to an end, just another tool.


Illustrations by Macha Pulcini.

Marie Clara Fancello

L'Atelier TechStream Intern

Marie Clara supports L'Atelier's research on new opportunities and emerging technologies through qualitative and quantitative analysis as an intern. Originally from France, she holds a master's degree in economics and social sciences from the University of Paris Nanterre, and a master's degree in econometrics and statistics from the University of Paris 1 Panthéon Sorbonne.

