Once More With Feeling

That AI-generated pop song sounds terrible, but it reflects a long legacy of letting machines manipulate or even generate our musical output.

By Ernie Smith

If that viral AI singer-songwriter were any better, Mary Spender would have to sue for theft of vibe.

Fortunately for everyone involved, she (and every other singer-songwriter out there) has nothing to worry about. But we can still learn something from it.

Recently, Anna Indiana, an “AI-generated” musician, surfaced on the birdsite and immediately stank up the room. The backing track was like LaserDisc karaoke. The lyrics seemed to suggest impending revolution over some unspecified slight. And Indiana’s motionless, generative-AI-doll appearance did not exactly inspire confidence. Part of the issue is that, well, the song is trash, and if she had feelings, she would feel deep shame for what she created. AI-generated music leads down some uncomfortable roads; this is just another example.

However, as terrible as “Betrayed By This Town” is (it sounds far more like Sarah Brand’s “Red Dress” than like a song you would put on by choice), I would like to posit that the real problem with the song is that it forces us to do something we don’t generally like to do: judge AI in purely human terms. Judged on other terms, it might be slightly better appreciated.

Here’s what I mean. In the mid-20th century, a compositional technique called musique concrète began drawing serious interest from composers. It essentially involved capturing sounds with machines, then manipulating those recordings into music. The original Doctor Who theme is a famous example of the approach.


This eventually evolved into something called process music, in which machines were used to generate or shape sounds. A couple of famous examples are Steve Reich’s landmark 1965 composition “It’s Gonna Rain” and William Basinski’s 2001 piece “The Disintegration Loops 1.1.” (A quick trigger warning: Yes, the latter is the piece that often gets played over footage of 9/11 shot by the artist, as the recording had been finished that day. It’s a fascinating work, perhaps one of the greatest instrumental compositions ever produced, but it must be taken in small doses.) Both cases involve audio samples of human activity, transformed into something new by machine processing.

As truly electronic music-production techniques began to emerge, it became more common to use tools that generated harsh, machine-made sounds and made no attempt to disguise them as anything else.

These sounds were often the canvas upon which human songwriters performed, and they were often reflective of their time. Silver Apples’ 1968 song “Oscillations,” for example, sounds like a garage-rock song that leverages the random hisses of primitive noise-squeezers. Meanwhile, Nine Inch Nails’ 1989 breakthrough “Head Like A Hole” channels lessons from the synth-pop era into a sound that drastically departs from it. What makes these songs great, despite their reliance on sound-generating machines, is that they have been infused with humanity.

Later on, Brian Eno, the guy who created the Windows 95 startup chime and would probably hate that I described him that way, coined the term “generative music,” which is sort of the bridge between process music and artificial intelligence: a computer unpredictably makes music based on rules the software gives it. As he said about generative music in a 1996 speech:

Generative forms, in general, are multi-centered. There’s not a single chain of command which runs from the top of the pyramid to the rank and file below. There are many, many, many web-like modes which become more or less active. You might notice the resemblance here to the difference between broadcasting and the Internet, for example.

You never know who made it. With this generative music that I played you, am I the composer? Are you, if you buy the system, the composer? Is Jim Coles and his brother, who wrote the software, the composer? Who actually composes music like this? Can you describe it as composition exactly when you don’t know what it’s going to be?

Why does an idea like this grab my attention so much? I said at the beginning that what I thought was important about this idea was that it keeps opening out. This notion of a self-generating system, or organisms, keeps becoming a richer and richer idea for me. I see it happening in more and more places.
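
To make that idea a little more concrete, here is a rough, hypothetical sketch, in Python, of the kind of system Eno is describing: a few short loops of different lengths, layered so the combination almost never lines up the same way twice. The notes, the loop lengths, and the pinch of randomness are all illustrative assumptions on my part, not a recreation of any actual Eno system.

import random

# Three short loops of coprime lengths (3, 5, and 7 steps). Because the
# lengths share no common factor, the layered pattern only realigns every
# 3 * 5 * 7 = 105 steps. (These notes are placeholders, not a real piece.)
loops = [
    ["C4", "E4", "G4"],
    ["A3", None, "D4", None, "F4"],
    ["G3", None, None, "B3", None, None, None],
]

def notes_at(step):
    """Return whichever notes the loops happen to land on at this step."""
    return [loop[step % len(loop)] for loop in loops if loop[step % len(loop)]]

# Run the system forward. A small dose of randomness occasionally mutes a
# note, so even the 105-step cycle never repeats exactly.
for step in range(16):
    sounding = [note for note in notes_at(step) if random.random() > 0.15]
    print(step, sounding)

Trivial as it is, the sketch captures the point of Eno’s question about authorship: once the rules are set loose, nobody, including the person who wrote them, knows exactly what will come out.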

If “Betrayed By This Town” were seen as a wrinkle in the generative-music revolution, perhaps it would come across as more innovative and interesting. Just one problem: “Anna Indiana,” based on her appearance and style, wants to be taken seriously as a human songwriter. She’s out here putting on her best Taylor Swift impression, when in reality, she has much more in common with a Steve Reich process-music composition than with anything on 1989.

In a way, all music is generated by tools. Those tools might include our voice, or our hands as we clap. We may use an instrument such as a guitar or a drum kit to extend what our bodies can do, or we may rely on electronics to process and shape that music. As many have noted, most music is imbued with a human spirit. But I do think it is possible to feel things from a work of art created mostly by a machine, and not necessarily the feeling that “this machine is going to steal my job someday.”

“The Disintegration Loops 1.1” is one of the deepest, most emotional experiences I’ve ever had with a musical composition, and it’s literally just a horn loop, recorded 20 years earlier, played back on decaying analog tape, losing scraps of magnetic material each time it runs through the machine. The reason is the context and the story behind it, which give the composition its power.

Anna Indiana’s story is that she’s an AI algorithm, generated wholesale to produce bad music, singing about firebombing a town she’s never seen for a reason she’s not even sure about. That’s not a compelling story. If it were presented as a new technical form of process music, rather than as a cheesy bedroom composer with an old Casio spitting out MIDI chords, it would arguably be more compelling.

But let’s face it—that was never the goal of this exercise. It was to go viral. In that way, it was a success.

Real Links

This story of a man losing his citizenship just because he tried to renew his passport is screwed up.

Here’s an example of music created by humans. This Ben Folds Five song, written by drummer Darren Jessee, is such a beautiful, graceful composition.

My post last week about reply guys on Mastodon had a bit of a delayed reaction, and ended up driving some discussion there over the weekend. (Some were upset about the implications being made. I’ll admit, I knew it would rattle some cages.) I just want to acknowledge that the Mastodon team is taking steps to implement safeguards in its Android app, announced just after I posted my piece. They’re a small team, but they absolutely get it.

--

Find this one an interesting read? Share it with a pal! And may Anna Indiana stop releasing songs.



Your time was just wasted by Ernie Smith

Ernie Smith is the editor of Tedium, and an active internet snarker. Between his many internet side projects, he finds time to hang out with his wife Cat, who's funnier than he is.
