Some words about audiovisuals and stuff
While I was in college back in 2010, I attended an event put on by Robert Henke, one of the developers of Ableton Live, and Tarik Barri, creator of the powerful Versum audiovisual composition software, to view their collaborative piece called Monolake. The exhibition (or I guess you could call it a concert) involved spatial sound design (think surround sound audio) paired with a visual representation of the soundscape Henke created, projected on all sides of the studio we were in. The environment they created made it feel as if one was drifting through a world governed by sound. I had seen visualizers before, but it definitely made me recognize and admire the almost jazz-like freeform quality of audio visualization in a live concert.
While you definitely can’t play with your visuals as fast and loose as Coltrane on a saxophone, it does allow for considerable improvisation, making each experience unique and different from the next. Most large concerts rely on lights and interesting visuals that have for the most part been pre-rendered (Deadmau5 comes to mind), but no one I’ve seen (aside from Robert Henke) is touring or has toured with someone who visualizes their audio on-the-fly (feel free to point some out, though!). I feel like a lot of people think the buck stops at programs like XBMC or iTunes, where you can “visualize” your music library, but it’s actually fairly easy to stream audio or MIDI into a visualizer using the right programs or language.
The first visualizers were made for DOS, so the idea is neither new nor difficult on early PCs, but basic visualizers like Cthugha or MilkDrop didn’t allow for the human element or entropy that made the visualization of music as orchestrated as it was in Monolake. More recently, artists have been using programs like Max/MSP with Processing and Ableton (which are the tools Barri and Henke incorporated) to produce sound and visuals together, and at the other end of the spectrum you have guys like Dr. Bleep creating hardware dedicated to audiovisual synthesis. Seeing it in everyday concerts, though? I still believe it’s a ways off from being labeled anything other than “experimental”.
Making MIDI into pretty
So, I’ve mentioned MIDI a few times, but what is MIDI? MIDI stands for Musical Instrument Digital Interface – it’s been around since the early ’80s and has become the standard protocol for most (if not all) electronic music. The MIDI protocol allows a broad spectrum of commands and variables to be passed through, such as note, pitch, velocity, volume, vibrato, and cues. How can we use this in audio visualization? Well, a few ways, though it really depends on what you want to see happen. Let’s assume you have a window (on your computer, silly) and in that window is a sphere. A lot of simple visualizers rely on single MIDI variables (for instance, playing a certain note might change the sphere’s size or color), while more complex visualizers might take into account more than one variable. Let’s say you play a “sad” chord on an electric guitar plugged into your computer – the program might recognize that you’ve played all those notes in the chord at the same time, and the sphere will turn a “sad” blue. This is an ideal time to introduce you to the hack I created for gTar, but first, allow me to deviate a little to tell you a bit about the software I used.
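To make that “sad chord” idea concrete, here’s a minimal sketch (my own illustration, not code from any particular visualizer) of how a handful of MIDI note numbers could be mapped to a color – a minor triad gets the “sad” blue, and anything else stays neutral:

```javascript
// Hypothetical sketch: map a set of MIDI note numbers to a sphere color.
// A minor triad (root, root + 3 semitones, root + 7 semitones) reads as
// "sad", so it returns a blue; anything else gets a neutral white.
function chordColor(midiNotes) {
  // Reduce the notes to intervals above the lowest note (pitch classes 0-11)
  const root = Math.min(...midiNotes);
  const intervals = [...new Set(midiNotes.map(n => (n - root) % 12))]
    .sort((a, b) => a - b);
  const isMinorTriad = intervals.length === 3 &&
    intervals[0] === 0 && intervals[1] === 3 && intervals[2] === 7;
  return isMinorTriad ? '#2244aa' : '#ffffff'; // "sad" blue vs. neutral
}

// A minor chord: A(57), C(60), E(64)
console.log(chordColor([57, 60, 64])); // '#2244aa'
// C major chord: C(60), E(64), G(67)
console.log(chordColor([60, 64, 67])); // '#ffffff'
```

A real visualizer would obviously get fancier (chord inversions, sustain, fading between colors), but the core idea is just this kind of lookup from note combinations to visual properties.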
Three JS & Chrome
WebGL and Canvas are becoming more and more important tools for web developers as computers with internet access get faster.
The hack, dubbed jsTar, can be found here (and can be seen in full screen by adding /show to the URL. Link for the lazy). I started this hack about a week and a half ago with limited experience in WebGL (I hadn’t taken a computer graphics course in years) and some basic MIDI knowledge, and within that time I was able to put together a working model of my hack in JSFiddle, a wonderful service for testing out web code before launching it into the public. The only caveat is that since MIDI is not yet fully supported in browsers, I had to include the Jazz-Soft plugin, which allows MIDI to be played in almost any browser. Being a hack, this idea is far from complete, as is most of the code I used to create it, which was a library provided by the Incident Technologies folks behind the gTar along with some other MIDI bridging code. The program works by receiving MIDI input from a device – specifically a note played by the gTar or a piano, though it can be edited to respond to control commands – and then either A) creating particles that take their color and size from note and velocity, respectively, or B) dynamically changing the size and rotation speed of mesh cubes by the same two variables.
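As a rough sketch of that first mode (the names and mappings here are my own simplification, not jsTar’s actual code), a MIDI note-on message is just three bytes – status, note, and velocity – which you can translate directly into particle parameters:

```javascript
// Hypothetical sketch: turn a raw MIDI message [status, note, velocity]
// into particle parameters. The note number picks the hue and the
// velocity picks the size, as described above.
function particleFromMidi(bytes) {
  const [status, note, velocity] = bytes;
  // 0x90-0x9F is a note-on; a note-on with velocity 0 acts as a note-off
  const isNoteOn = (status & 0xf0) === 0x90 && velocity > 0;
  if (!isNoteOn) return null; // ignore note-offs and other messages

  return {
    hue: (note % 12) * 30,  // pitch class (0-11) spread around a color wheel
    size: velocity / 127,   // 0..1 – the harder you pick, the bigger it gets
  };
}

// Note-on for middle C (60) at velocity 100 on channel 1
console.log(particleFromMidi([0x90, 60, 100]));
```

From there it’s just a matter of feeding those values into whatever Three.js particle system or mesh you’ve set up.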
While this code is very basic, so was (and still is) my knowledge of Three.js and WebGL, but I hope this library sets you and others on your way to doing some awesome MIDI and 3D visualization stuff from within your browsers. Perhaps you’ll even build something like the Monolake project I saw back in college! Good luck, and feel free to drop me a line at nick[at]grandst.com