
AI and Music: Noteworthy?


People use AI for many things, so why not compose music in collaboration with an algorithm? Good music will always be in the ear of the beholder.


There are an increasing number of organizations developing and/or using music composed in part or in full by AI technologies. In the beginning, many of those efforts were academic in nature, but a growing number of groups are attempting to make a business model of composing music. And while there are more people doing it now than there were a couple decades ago, the idea of computer-composed music is reasonably old. One of the first computer compositions I'm aware of was in 1957.

The majority of these systems are structured just as you would expect: a large collection of compositions representing a genre or an artist is used as training data to build some sort of generative/predictive model. In the very earliest approaches, simple Markov chains were derived from the compositions.
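To make that concrete, here's a rough TypeScript sketch (not any particular project's code) of how a first-order transition model might be counted up from a handful of training melodies. Pitches are MIDI note numbers; a real system would also model durations, chords, and more.

// Count how often each pitch follows each other pitch in the training corpus.
type TransitionTable = Map<number, Map<number, number>>;

function trainMarkov(melodies: number[][]): TransitionTable {
  const table: TransitionTable = new Map();
  for (const melody of melodies) {
    for (let i = 0; i + 1 < melody.length; i++) {
      const from = melody[i];
      const to = melody[i + 1];
      if (!table.has(from)) table.set(from, new Map());
      const row = table.get(from)!;
      row.set(to, (row.get(to) ?? 0) + 1);
    }
  }
  return table;
}

// Two tiny "training pieces" (C-major noodling) stand in for a real corpus.
const corpus = [
  [60, 62, 64, 65, 67, 65, 64, 62, 60],
  [60, 64, 67, 64, 62, 60],
];
const model = trainMarkov(corpus);
console.log(model.get(64)); // which pitches followed an E, and how often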

In more recent attempts, various types of deep neural nets have been used, but the mode of operation is pretty much the same. Once you have a trained system, you "seed" the model with a note or string of notes, which a human might select or which might be randomly generated. Once the system is started, it appends notes (or chords) to that input string, then uses the ever-growing string to pick the most probable next note. The melody wanders off along a vaguely style-consistent path, albeit somewhat aimlessly. Here's the first movement of the 1957 Illiac Suite:


Technically this was a human-computer collaboration, and since it was 1957, it relied heavily on hand-written rules (based on a pool of human compositions). You can learn quite a bit more about the details of this project here.
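Hand-written rules aside, the seed-and-extend loop described above is easy to sketch. Here's a toy TypeScript version; the hard-coded transition counts stand in for whatever a trained model (Markov chain or neural net) would supply.

// Start from a seed note, then repeatedly sample the next note from the
// transition counts, always conditioning on the most recent note.
const transitions: Record<number, Record<number, number>> = {
  60: { 62: 3, 64: 2 }, // after C: usually D, sometimes E
  62: { 60: 2, 64: 3 },
  64: { 62: 2, 65: 2, 67: 1 },
  65: { 64: 3, 67: 2 },
  67: { 65: 3, 64: 1 },
};

function sampleNext(pitch: number): number {
  const row = transitions[pitch] ?? { 60: 1 }; // fall back to C if we get stuck
  const total = Object.values(row).reduce((a, b) => a + b, 0);
  let r = Math.random() * total;
  for (const [next, count] of Object.entries(row)) {
    r -= count;
    if (r <= 0) return Number(next);
  }
  return Number(Object.keys(row)[0]);
}

function generate(seed: number, length: number): number[] {
  const melody = [seed];
  while (melody.length < length) {
    melody.push(sampleNext(melody[melody.length - 1]));
  }
  return melody;
}

console.log(generate(60, 16)); // wanders around C major, vaguely in style, somewhat aimlessly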

Just to be clear, the kinds of systems I'm talking about do not listen to music, nor do they play music — rather, they read musical scores and write musical scores. The resulting compositions are often played by human instrumentalists. Since we've had relatively inexpensive high-quality MIDI instruments for the last couple decades, however, you'll often hear synthesized versions.

Jumping forward to the near-present, most of the projects are far more interactive and also do a much better job of making music that sounds more musical. One example is FlowComposer, which was created by Flow Machines. Here is a short video showing how a composer works with it:


It's essentially a CAD/CAM tool for music. Tools like this do not autonomously write completed pieces. Instead, they let the composer assemble the music in conceptual chunks and modify sections of it in very abstract ways. If you watched the video above, you saw the human using a "style brush," much as you would in a word processor or spreadsheet. There are lots more examples to explore on their website. Another similar effort is Jukedeck.

FlowComposer and Jukedeck are interesting commercial products that make sense if you're a professional in the business of making music. But there are some other options that offer just as much fun in the open-source arena. And thanks to the introduction of TensorFlow.js, you can have that fun in the comfort of your home, in your very own browser! One such project, Magenta, attempts to augment musical and visual art with AI. (I'll skip over the visual art aspect here, because it's an entire topic on its own.)

All of the Magenta code is freely available, and many of the demos run directly from the website in your browser. Be a little patient the first time you load them, since TensorFlow.js and the other helper libraries need to download. There are many examples to play with, and some of them are a tad mesmerizing. (Remember, if you experience a large chunk of "lost time," you have been warned.) I personally got stuck on "the incredible musical spinners." You can find all the demos here.
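If you'd rather drive the library directly than click around the demos, a minimal browser sketch using the @magenta/music package might look something like this. The checkpoint URL, the MusicRNN/continueSequence calls, and the parameter choices reflect my reading of the project's documentation, so treat them as assumptions and check the current docs before relying on them.

import * as mm from '@magenta/music';

// Pre-trained melody model hosted by the Magenta project (assumed checkpoint URL).
const model = new mm.MusicRNN(
  'https://storage.googleapis.com/magentadata/js/checkpoints/music_rnn/basic_rnn'
);

// A two-note seed (C then E) expressed as a NoteSequence.
const seed = {
  notes: [
    { pitch: 60, startTime: 0.0, endTime: 0.5 },
    { pitch: 64, startTime: 0.5, endTime: 1.0 },
  ],
  totalTime: 1.0,
};

async function continueMelody() {
  await model.initialize();
  const quantized = mm.sequences.quantizeNoteSequence(seed, 4); // 4 steps per quarter note
  const result = await model.continueSequence(quantized, 32, 1.1); // 32 steps, temperature 1.1
  new mm.Player().start(result); // play the generated continuation in the browser
}

continueMelody();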

Beat Blender is a lot of fun too, so I thought I would leave it here as clickbait.



You may have wondered exactly what the input notation is for training music composition. Many projects come up with their own notation, or use something based on MIDI notation. One group working on generating realistic Irish folksongs uses a database of tunes from many cultures that have already been transcribed into a relatively simple notation called abc. It looks like this:

X:1
T:Speed the Plough
M:4/4
C:Trad.
K:G
|:GABc dedB|dedB dedB|c2ec B2dB|c2A2 A2BA|
GABc dedB|dedB dedB|c2ec B2dB|A2F2 G4:|
|:g2gf gdBd|g2f2 e2d2|c2ec B2dB|c2A2 A2df|
g2gf g2Bd|g2f2 e2d2|c2ec B2dB|A2F2 G4:|


It's well suited to the machine learning task, but it can also be readily rendered into standard, human-readable musical notation.


Many folk tunes have been crowdsourced into abc at The Session. A description of the abc language is here.
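Part of what makes abc convenient for this kind of work is that a tune body is already a compact string of symbols, so turning it into a token sequence a model can train on takes very little code. Here's an illustrative TypeScript sketch; the regular expression covers the common cases in tunes like the one above, not the full abc grammar.

// Split an abc tune body into note tokens (with optional accidentals, octave
// marks, and durations), bar lines, and repeat marks.
function tokenizeAbcBody(body: string): string[] {
  const pattern = /[_^=]?[A-Ga-gz][,']*\d*|\|:|:\||\|/g;
  return body.match(pattern) ?? [];
}

const phrase = '|:GABc dedB|dedB dedB|c2ec B2dB|c2A2 A2BA|';
console.log(tokenizeAbcBody(phrase));
// -> [ '|:', 'G', 'A', 'B', 'c', 'd', 'e', 'd', 'B', '|', ... ]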

One thing that all of these generative systems have in common — whether they are generating small sections of music or complete pieces — is that they seem to lack the spark of originality we expect from music. The result is not quite stale, but it's not fresh, either. A big part of the joy of music is that within a reasonably well-defined stylistic landscape, we are presented with subtle (and sometimes not-so-subtle) violations of the style: changing to a minor chord for one bar, introducing an unexpected time signature, handing off the melody to a "voice" in a different register, and so on. These systems can be trained on music that already contains these "surprises." But even then, the surprises seem a little formulaic. As an example, take a pop song generated with an AI assist. The melody and other elements were suggested by the human partner, but the music was rendered in the style of the Beatles. It definitely sounds Beatle-esque, but I think most of us would agree that it's missing something. We're not quite sure exactly what is missing, but we're pretty sure that John, Paul, George, and Ringo would not have put it on one of their albums.

In my opinion, great human musical composers are safe for now...at least until the end of the decade...probably...maybe? 
