She reached out to me to see if I had any ideas for putting a soundtrack to her video. My idea was to map the distances between semicolons onto notes on a scale. One of the problems with generative music is that it usually doesn't sound very musical at all, so I decided to constrain myself to a scale: that way the piece would sound as musical as possible while still having a generative, data-driven basis.
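To make the mapping concrete, here is a minimal sketch of how a distance could be folded onto a scale. The scale choice (C major pentatonic), the modulo mapping, and the octave spread are my assumptions, not necessarily what the final piece used:

```js
// Hypothetical mapping from semicolon distances to scale notes.
const SCALE = [60, 62, 64, 67, 69]; // MIDI notes: C4 D4 E4 G4 A4

function distanceToNote(distance) {
  const degree = distance % SCALE.length;                 // which scale step
  const octave = Math.floor(distance / SCALE.length) % 3; // spread over 3 octaves
  return SCALE[degree] + 12 * octave;
}

console.log([5, 23, 142].map(distanceToNote)); // [ 72, 79, 76 ]
```

Because every distance lands on a scale degree, even arbitrary text produces consonant intervals.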
The first step was to analyse the source material. Thankfully, a text file of the work is available from Project Gutenberg, which made processing the text very easy. I created a Node.js parser that essentially splits the work into a list of the passages between semicolons. I then took the length of each passage and created a JSON file with some metadata.
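A rough sketch of that parser, assuming the Gutenberg plain-text file is saved locally; the file name and the JSON shape here are mine:

```js
const fs = require('fs');

const text = fs.readFileSync('work.txt', 'utf8');

// Passages are simply the stretches of text between semicolons.
const passages = text.split(';');

const data = {
  source: 'Project Gutenberg',
  passageCount: passages.length,
  lengths: passages.map(p => p.trim().length), // distance to the next semicolon
};

fs.writeFileSync('passages.json', JSON.stringify(data, null, 2));
```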
Once that was done, I moved on to creating a MIDI instrument with Node.js. I have experience making music with Ableton Live, but I can't afford Max for Live, and I'm not that comfortable with GUI-based programming languages anyway. So I created a rudimentary MIDI controller in Node.js, using Justin Latimer's node-midi package. It creates a MIDI input and output and is controlled from Ableton, so you can play, stop, and so on. Unfortunately, the MIDI messages don't carry the BPM set in Ableton, which made that part a bit hard to work with; I might want to use OSC for that. Anyway.
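For reference, receiving Ableton's transport messages and sending notes back with node-midi looks roughly like this; the virtual port names and the note-off timing are my own choices:

```js
const midi = require('midi');

const input = new midi.Input();
const output = new midi.Output();

// node-midi filters out sysex, timing and active-sensing messages
// by default; stop ignoring timing so transport messages come through.
input.ignoreTypes(true, false, true);

input.on('message', (deltaTime, message) => {
  const [status] = message;
  if (status === 0xfa) console.log('Ableton: play');  // MIDI Start
  if (status === 0xfc) console.log('Ableton: stop');  // MIDI Stop
});

// Virtual ports show up in Ableton's MIDI preferences.
input.openVirtualPort('Text Instrument In');
output.openVirtualPort('Text Instrument Out');

// Play middle C: note-on, then note-off half a second later.
output.sendMessage([0x90, 60, 100]);
setTimeout(() => output.sendMessage([0x80, 60, 0]), 500);
```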