When I play piano in my spare time, I usually find myself playing some jazzy chords and improvising a melody to go along with them. Recently, I wondered how it is that I know what notes to play next. I feel like I just know what will sound good, but I never really broke it down, and I wondered if I could. So I did, and then I did the obvious thing to do once one has broken down a process into a set of logical rules: reproduce that process using a computer.
So I wrote a little computer program. Let’s call it Johann Sebastian Bot (JSB for short). JSB follows a few simple rules:
– Play a chord at the beginning of every bar. Alternate between a major 7th chord and a minor 7th chord.
– Play a melody note every half beat. But 50% of the time, rest and don’t play a note.
– Select each melody note from the pentatonic scale only.
– To decide the next note in the melody, choose randomly among three options: the note one scale step below the note just played, the note one step above it, or the same note again.
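The melody rules above amount to a random walk over the pentatonic scale. Here's a minimal sketch of that walk in Python; the specific key (C major pentatonic), the MIDI note numbers, and the starting position are my assumptions, not necessarily what JSB uses:

```python
import random

# C major pentatonic across two octaves, as MIDI note numbers
# (the key and range are assumptions for illustration).
PENTATONIC = [60, 62, 64, 67, 69, 72, 74, 76, 79, 81]

def generate_melody(num_half_beats, rest_prob=0.5, seed=None):
    """Random-walk melody over the pentatonic scale.

    Each half beat: rest with probability rest_prob; otherwise play a
    note one scale step below, one step above, or the same as the
    previous note. Returns MIDI note numbers, with None meaning rest.
    """
    rng = random.Random(seed)
    melody = []
    index = len(PENTATONIC) // 2  # start somewhere mid-scale
    for _ in range(num_half_beats):
        if rng.random() < rest_prob:
            melody.append(None)  # rest 50% of the time
            continue
        # step down, stay, or step up, clamped to the scale's range
        index += rng.choice([-1, 0, 1])
        index = max(0, min(index, len(PENTATONIC) - 1))
        melody.append(PENTATONIC[index])
    return melody
```

Because each note depends only on the previous one, this is a simple Markov chain, which is part of why the output wanders pleasantly without ever leaping somewhere jarring.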
That’s it. When I hit run, JSB goes through the logic and outputs a file with all the notes he played. Then I feed that file into my favourite piano sample to translate the notes in the file to sound. What results is a very listenable, perhaps even beautiful, song. This is one of JSB’s first songs (it repeats three times):
I generated a few songs and was pretty happy with the outcome. I found myself getting JSB’s songs stuck in my head, and now I steal some of his riffs to use when I’m playing.
I think it would be fun to take this idea further and explore how many rules JSB would need to pump out something like a top 40 hit. People say pop songs these days are formulaic (turns out jazz might be too), and if that’s the case, we should be able to get computers to write them for us.
The prospect of bot-composed music is interesting, but so are the bigger-picture questions that came to mind as I was doing this. I’ll leave you with some things to think about:
1) Can we learn something about our brains by breaking music down into rules like this? It seems like resting about half the time sounds good, but resting much more or much less doesn’t. Playing notes close to the last note played sounds good, but jumping around too much doesn’t. Are our brains tuned to certain levels of regularity and unpredictability, and can we uncover what these levels are?
2) What is creativity, and can a computer be creative? When I improvise a song like this, I’d say I’m being creative. When JSB does it, is he being creative? Or is he just replicating creativity? Is there a difference?
APPENDIX: The code
I used the Python library MIDIUtil to generate the MIDI file for the song. Props to Mark Conway Wirt for making such an easy-to-use library.
The structure of the program is just a loop through each half beat of the song. When the loop index satisfies one of the rules above, the program writes the corresponding notes to the MIDI file. Then I just import that file into a Reason project and run it through a piano sample.
The code is really short and easy to follow, so check it out here, and feel free to download it, play around with it, and make your own songs.