
AI in music production – A statement.

It’s time to talk about something serious: artificial intelligence in music. I have made up my mind about the extent to which I will use, tolerate or reject machine learning algorithms in composition and music production. A deep dive into the future of music.

How I see artificial intelligence in music

After the first lawsuits and social media hypes, the music scene is debating the future of the music business. I am sure that the rise of AI is a step as big as moving from a 4-track recorder to a digital audio workstation. Young producers take the chance to “work” with the vocalist of their choice, create new beat types or circumvent creative blocks by generating parts. On the other hand, well-known artists fear being copied or being put into the wrong context. I understand both views and have made up my mind about where I see artificial intelligence as a chance and where as a threat.

I could profit from:

  • Amp simulation, reverb calculation and other effects
  • Algorithms for pitch or tempo correction
  • Algorithms for cleaning up recordings
  • Novel sound design or synthesizer designs
  • Linguistic tools for rhymes, metre and synonyms
  • Analytical tools for frequency, dynamics or loudness analysis

I am against:

  • Generation of full songs or backing tracks
  • Generation of lyrics or song parts
  • Style adaptation, remixes or mash-ups
  • Extraction of individual stems from recorded material
  • Vocal replacement
  • AI vocalists
  • Automatic mixing and mastering
  • Using my work as training data for algorithms

A clear statement

All music on my social media channels will be performed, recorded, edited and mixed by humans. I may use software for sound design and drum programming and check my mixes with algorithms, but I won’t generate content with artificial intelligence. As a composer, it is important to me to write my own music, so I don’t use sample libraries for song parts, chord sequences or melodies. The same applies to artificial intelligence. I don’t mind flaws and imperfections, because my work ethic is about sharing creativity and supporting small independent artists. Human-recorded stems will *always* be prioritized!

However, it feels wrong for me to support or collaborate with people who use AI to generate creative content. Of course, writing interesting prompts and optimizing parameters requires a certain diligence and knowledge, but it’s not comparable to the creativity and craft of conventional art. I am happy if you have found a way to express your creativity in this novel form, but please don’t ask me to comment on, repost or buy AI-generated content. It’s good practice to credit everything properly, so people can decide for themselves.

Why is AI both a threat and a chance for music?

Technical knowledge becomes obsolete

I’ve spent twenty years of my life learning how to compose, analyse, arrange, structure and mix music. I have deep knowledge of instrumental ranges, playing techniques, vocal techniques and frequency ranges. My studies helped me understand the mathematics and physics behind the knobs of your compressor, spectrometers and reverb machines. Most of my knowledge and skills will become obsolete if I let algorithms generate melodies or decide about dynamics and instrumental choices. What is the point in learning all this if you can train an algorithm to generate an AC/DC guitar riff from scratch, turn your lousy vocal performance into Freddie Mercury or mix your song like Steve Albini? We’re standing on the shoulders of giants instead of understanding WHY their work is so valuable. We cannot yet do everything with one knob, but it’s just a question of time. I have a problem with this mindset!

You could argue that using virtual instruments, step sequencers, sound presets, rhyme dictionaries, samples or musical function theory does the same – but I see them as low-level helpers. These tools help us to learn and grow, and they give us the freedom to create new things once we understand how they work. You cannot program a realistic drum groove if you haven’t tried to play drums yourself or at least watched months of video footage. If I buy MIDI or drum samples, I can draft a song pretty well, but I skip a big part of the learning process and thus forget to appreciate all the people who learned to play drums.

Lack of transparency and explainability

The type of machine learning algorithms used to create ready-to-use content and deepfakes does not require knowledge of physics, music theory or instruments. They also do not show us how they arrived at the result. At their core they are nothing but calculations of probabilities and similarities in multidimensional spaces, with the big risk that we don’t even know where the input data comes from or which parameters are tweaked. Further, machine learning algorithms can be biased or skewed, which means they favour certain outcomes over others.
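To give a rough idea of what “similarities in multidimensional spaces” means, here is a tiny, purely illustrative Python sketch. The vectors, labels and scores are invented for this example and do not describe any particular model or product:

```python
import numpy as np

# Toy "embedding space": every musical snippet is just a point in R^4.
# These vectors and labels are invented for illustration only.
catalog = {
    "blues_riff": np.array([0.9, 0.1, 0.2, 0.0]),
    "synth_pad":  np.array([0.1, 0.8, 0.7, 0.2]),
    "metal_riff": np.array([0.8, 0.0, 0.1, 0.6]),
}
query = np.array([0.85, 0.05, 0.15, 0.3])  # "give me something riff-like"

def cosine(a, b):
    # Similarity between two points in the multidimensional space.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Turn similarity scores into a probability distribution (softmax).
names = list(catalog)
scores = np.array([cosine(query, catalog[n]) for n in names])
probs = np.exp(scores) / np.exp(scores).sum()

for name, p in zip(names, probs):
    print(f"{name}: {p:.2f}")
# The output is picked according to these probabilities; nothing in the
# process explains WHY a riff works, and nothing reveals where the
# catalog vectors (the training data) came from.
```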

We will have a hard time figuring out whether AI technology has been used to create music, so the market will be flooded with fakes, bootlegs and interpolations. How would you feel if you heard a song that sounds exactly like you, but you never wrote, sang or performed it? How would you feel if somebody else got paid for the use of your voice? I have a problem with that – especially without consent, or post-mortem. There is no copyright for voices or styles, so the courts will see a lot of lawsuits concerning interpolations and trademarks. This is not about cover versions and tribute bands.

Music becomes replaceable

Music is in danger of becoming replaceable, arbitrary and fast-changing – and thus of decreasing in importance. Well-trained musicians’ ears can still hear artefacts, but the algorithms will get better and flood the market with superb audio quality and mediocre songwriting. Soon everybody will be able to generate a 2-minute piece of AI music that sounds exactly like the stuff on the radio. I bet some people will embrace it, while others will visit live concerts to support human-performed music. I am very concerned about bedroom producers and songwriters who learned everything from scratch, don’t perform live and have to compete against a mass of music.

Optimizing workflows and sound quality

I am not against AI; I just have to make a distinction between generative and analytical tools. Working with novel amp simulations or synthesizers can simplify the workflow. What about using one effect pedal instead of a dozen? Getting rid of noise or artefacts in a ten-year-old recording? Finding issues in the mix when your ears are tired? These are the only chances I currently see for artificial intelligence in music.
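To illustrate the analytical side, here is a minimal Python sketch of the kind of sanity check I mean: plain signal statistics with no machine learning involved, and a synthetic test tone standing in for a real mix.

```python
import numpy as np

# Illustrative only: a synthetic "mix" instead of a real recording.
sr = 44100
t = np.linspace(0, 2.0, 2 * sr, endpoint=False)
mix = 0.7 * np.sin(2 * np.pi * 220 * t) + 0.02  # tone plus a small DC offset

peak = np.max(np.abs(mix))
rms = np.sqrt(np.mean(mix ** 2))
dc_offset = np.mean(mix)
crest_db = 20 * np.log10(peak / rms)

print(f"peak: {peak:.3f}, RMS: {rms:.3f}")
print(f"DC offset: {dc_offset:.4f}")
print(f"crest factor: {crest_db:.1f} dB")
if peak >= 1.0:
    print("warning: clipping likely")
```

These numbers don’t make creative decisions; they only point tired ears at problems worth checking.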

Side effects on cover artwork and music videos

You may have noticed a huge increase in AI-generated music videos and cover images. I understand that some musicians don’t have the means to shoot them on their own, but I have to point out that most tools use training data that has not been approved by the artists. Due to these ethical problems, I will avoid or even reject such tools. I love to draw and design, and I want to support photographers and illustrators. I have *absolutely no* understanding for well-known bands who work with these algorithms, since they can afford graphic artists. I don’t want small and medium-sized freelancers to have to shut down their businesses.

Wrap Up

As you can clearly see from the length of this article, artificial intelligence in music is a topic that concerns me as a composer, producer, artist and scientist. The AI revolution cannot be stopped, but we have to discuss regulation and crediting before the music market is saturated. I am all in for transparent, trustworthy and fair use. I don’t support generated content, but I do support its use for impulse responses and analytics.
