
Remixing Grimes with Deep Learning

During this strange time, I was blessed to stumble upon a Metapop-sponsored remix contest from one of my favorite musicians, Grimes. I quickly decided that I wanted to integrate some deep learning (aka artificial intelligence) into the mix, as it's a field I've been studying for the past year and also an interest of Grimes. It's also a good first project for my latest alias, Demon Flex Council (check this space for more releases soon). Many of the synth parts in this remix come from MIDI extracted from Grimes' recently released “You'll Miss Me When I’m Not Around” with Anthem Score, an automatic transcription tool that displays the extracted MIDI notes in light blue over a multi-colored spectrogram:

[Image: Anthem Score output showing extracted MIDI notes over a spectrogram]
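Anthem Score exports standard MIDI files, so you can sanity-check the transcription programmatically before it goes anywhere near a DAW. Here's a minimal sketch using the pretty_midi library (the filename is a placeholder, not from my actual session):

```python
# Quick look at Anthem Score's exported MIDI with pretty_midi.
# 'vocal_stem_extracted.mid' is a placeholder filename.
import pretty_midi

midi = pretty_midi.PrettyMIDI('vocal_stem_extracted.mid')
for inst in midi.instruments:
    print(f'{inst.name or "unnamed"}: {len(inst.notes)} notes')
    for note in inst.notes[:5]:  # peek at the first few detected notes
        name = pretty_midi.note_number_to_name(note.pitch)
        print(f'  {name}  {note.start:.2f}s-{note.end:.2f}s  vel={note.velocity}')
```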

I then imported Anthem's MIDI files into Magenta Studio’s Interpolate, which is built on MusicVAE: it encodes two input MIDI clips into a learned latent space and decodes points between them, recombining the two elegantly. The violin-like intro theme, for example, is an interpolation of the MIDI extracted from the guitar and ad lib vocal stems. The interpolated MIDI looks like this:

[Image: piano-roll view of the interpolated MIDI]
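The same blend can be reproduced with Magenta's Python API rather than the Studio plug-in. A rough sketch, with the checkpoint and file paths as placeholders (the pretrained 2-bar melody model expects short monophonic clips):

```python
# Sketch of Interpolate's underlying model (MusicVAE) via Magenta's Python API.
# Paths are placeholders; 'cat-mel_2bar_big' is a pretrained 2-bar melody model.
import note_seq
from magenta.models.music_vae import configs
from magenta.models.music_vae.trained_model import TrainedModel

config = configs.CONFIG_MAP['cat-mel_2bar_big']
model = TrainedModel(config, batch_size=4,
                     checkpoint_dir_or_path='/path/to/cat-mel_2bar_big.ckpt')

guitar = note_seq.midi_file_to_note_sequence('guitar_extracted.mid')
vocal = note_seq.midi_file_to_note_sequence('adlib_vocal_extracted.mid')

# Encode both clips, walk the latent space between them, decode each step.
blend = model.interpolate(guitar, vocal, num_steps=5, length=32)

# The middle sequence is the most even mix of the two inputs.
note_seq.sequence_proto_to_midi_file(blend[2], 'interpolated_theme.mid')
```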

I also used TensorFlow to train a neural network on the vocals. The model is GANSynth, coded by Google (Engel et al., 2019), and it’s built on the principle of one “art forger” network trying to outwit another “art detective” network. Like all GANs, GANSynth produces novel output every time the generation program is run, since each sample starts from a fresh random latent vector. GANs have had great success in the visual realm, with algorithms capable of producing nearly perfect and totally unique, human-looking faces (GANs by Generated Media, Inc.):

[Images: GAN-generated human faces (Generated Media, Inc.)]
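If you're curious what the forger-vs-detective setup actually looks like in code, here's a toy TensorFlow sketch of the adversarial training step. The dense layers and shapes are illustrative only, not GANSynth's actual progressive architecture:

```python
import tensorflow as tf

# Toy GAN: a generator ("forger") maps random noise to fake data, and a
# discriminator ("detective") scores real vs. fake. They train against each other.
generator = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation='relu', input_shape=(64,)),
    tf.keras.layers.Dense(256, activation='tanh'),  # a fake "sample"
])
discriminator = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation='relu', input_shape=(256,)),
    tf.keras.layers.Dense(1),  # real/fake logit
])
bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
g_opt = tf.keras.optimizers.Adam(1e-4)
d_opt = tf.keras.optimizers.Adam(1e-4)

@tf.function
def train_step(real_batch):
    noise = tf.random.normal([tf.shape(real_batch)[0], 64])
    with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
        fake = generator(noise, training=True)
        real_logits = discriminator(real_batch, training=True)
        fake_logits = discriminator(fake, training=True)
        # Detective: label real as 1, fake as 0.
        d_loss = (bce(tf.ones_like(real_logits), real_logits) +
                  bce(tf.zeros_like(fake_logits), fake_logits))
        # Forger: fool the detective into calling fakes real.
        g_loss = bce(tf.ones_like(fake_logits), fake_logits)
    d_opt.apply_gradients(zip(d_tape.gradient(d_loss, discriminator.trainable_variables),
                              discriminator.trainable_variables))
    g_opt.apply_gradients(zip(g_tape.gradient(g_loss, generator.trainable_variables),
                              generator.trainable_variables))
    return g_loss, d_loss
```

The push and pull between the two losses is the whole trick: as the detective gets better at spotting fakes, the forger is forced to produce more convincing ones.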

GANSynth relies on pitch information for training stability and overall accuracy, so I cropped out all the sustained, consistently pitched monophonic vocal parts from the stems. Only eight small sections (averaging about half a second each) met these criteria. I then pitch-shifted all these sections to fit in the range of B3 to F#4, enough room to create a melody. After pitch-shifting I had a dataset totaling 64 members, very much on the small side considering that the GANSynth baseline model is trained on the nearly 300k-member NSynth dataset. I trained for 983k epochs on a custom stereo version of GANSynth with self-attention, and this was enough for the model to learn pitch. I then fed the model some simple MIDI and it generated an AI-esque take on Grimes' voice. It definitely has its own character, kind of a lo-fi alien vibe. All the "syllables" are generalized vowel sounds, creating a nonsense language when strung together. It ended up working best as a high-pitched, stuttered "ahh" before the second drop. The unprocessed, generated waveforms look like this:

[Images: unprocessed waveforms generated by the trained GANSynth model]
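The pitch-shifting prep is simple to script. Here's a hedged sketch with librosa, assuming hypothetical filenames and that each crop is first tuned to B3 so eight semitone shifts land on B3 through F#4; eight sections times eight shifts is one way to arrive at the 64 examples:

```python
# Hedged sketch of the dataset prep (filenames hypothetical). Assumes each of
# the eight cropped vocal sections was first tuned to B3; shifting up 0-7
# semitones then covers B3 (MIDI 59) through F#4 (MIDI 66): 8 x 8 = 64 examples.
import os
import librosa
import soundfile as sf

os.makedirs('dataset', exist_ok=True)
for i in range(8):  # the eight cropped monophonic sections
    y, sr = librosa.load(f'vocal_crop_{i}.wav', sr=16000)  # NSynth-style 16 kHz
    for n_steps in range(8):  # 0..7 semitones above B3
        shifted = librosa.effects.pitch_shift(y, sr=sr, n_steps=n_steps)
        sf.write(f'dataset/crop{i}_pitch{59 + n_steps}.wav', shifted, sr)
```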

Overall, my mix is fairly faithful to the original, with the biggest change being the bed of string-type synths. They're nearly all made in Massive. I left the MIDI unedited, and the result was that all the transcription errors added a strangeness and complexity that matched the atonal, modern-classical feel of the vocals. Most of the synths’ sonic nuance comes from plug-ins, particularly Soundtoys "Crystallizer" and "EchoBoy." In fact, Soundtoys plug-ins are all over this mix. "Radiator" sounds ridiculously great on everything, and I had to restrain myself from using it too much.

Percussion-wise, I decided to feature the hi-hats because they created such a tonal shift every time I placed them into the mix. I used all of the original percussion except for the tom, and I added some ancillary percussion from two Ableton kits--Peace and Battu. Both of these kits took MIDI from Magenta's Drumify and Generate programs. For atmosphere, I made a few iPad sounds as well--Noisemusick for some blips during the intro and iVCS3 for turning an extremely time-stretched vocal into a synth. It’s deep in the mix, but it helps lead into the first drop.

The key to this remix is Grimes' extremely funky bassline, which anchors both drops. I added a few more bass parts with Massive, again using the Anthem-extracted MIDI. The mix took a big leap forward when I cut most of the instruments on the line, “Hurt myself again today...” It was a reminder to focus on the song and what the singer is conveying--it’s such a harrowing line and I wanted to expose it as much as possible. It also gave the mix somewhere to go, moving from this point of low density to the higher density of the second drop. My favorite part also foreshadows the second drop: it's played on a nyckelharpa, a centuries-old Swedish keyed fiddle I saw on an episode of Creative Cribs.

[Image: a nyckelharpa]

I managed to find a free sample library online, plugged it into Sforzando, and voilà, I had a nyckelharpa! I couldn't resist drowning it in Soundtoys FX, so it ends up sounding like a distant vocal choir. Thanks so much to Grimes, 4AD, Metapop and everyone involved with this project. I had a great time remixing this track! Here is the finished version and all the source code:

[Link: source code on GitHub]