Web Audio Sequencer

It all started on a Friday night: the thought, the idea, to build something using Web Audio.

I’ve been interested in the technical side of music for some time now, and being a front-end developer, Web Audio was something I couldn’t resist trying out. Having some basic knowledge of audio routing and music synthesis made it easier for me to dive straight into Web Audio.

I thought it was a good opportunity to implement some of the synthesis techniques I’d learnt. So the first thing I built was http://webaudio-synth.appspot.com/#/synth

This is a single-octave synth with an ADSR (Attack, Decay, Sustain, Release) envelope, four oscillator types, a low-pass filter (cutoff frequency and resonance Q), and a canvas to visualise the selected wave.

The audio routing for this looks like: Oscillator -> ADSR -> Filter -> DAC
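For readers unfamiliar with the API, here is a minimal sketch of what that chain might look like in Web Audio. The actual synth’s code may differ; `playNote` and `adsrPoints` are illustrative names, and the filter settings are just example values. The ADSR stage is simply a GainNode whose gain is automated over time.

```javascript
// Compute the (time, gain) breakpoints of an ADSR envelope starting at t0.
// sustain is a level (0..1); attack/decay/release are durations in seconds.
function adsrPoints(t0, attack, decay, sustain, release, noteLength) {
  return [
    [t0, 0],                          // start silent
    [t0 + attack, 1],                 // attack: ramp to full level
    [t0 + attack + decay, sustain],   // decay: fall to sustain level
    [t0 + noteLength, sustain],       // hold sustain until note ends
    [t0 + noteLength + release, 0],   // release: fade back to silence
  ];
}

// Wire up Oscillator -> ADSR (gain) -> Filter -> DAC and play one note
// (browser only; `ctx` is an AudioContext).
function playNote(ctx, freq, attack, decay, sustain, release, noteLength) {
  const osc = ctx.createOscillator();
  const env = ctx.createGain();
  const filter = ctx.createBiquadFilter();

  osc.type = 'sawtooth';
  osc.frequency.value = freq;
  filter.type = 'lowpass';
  filter.frequency.value = 2000; // cutoff
  filter.Q.value = 5;            // resonance

  osc.connect(env);
  env.connect(filter);
  filter.connect(ctx.destination); // the DAC

  // Apply the envelope as linear ramps on the gain AudioParam.
  const points = adsrPoints(ctx.currentTime, attack, decay, sustain, release, noteLength);
  env.gain.setValueAtTime(0, points[0][0]);
  for (const [time, gain] of points) {
    env.gain.linearRampToValueAtTime(gain, time);
  }

  osc.start(points[0][0]);
  osc.stop(points[points.length - 1][0]);
}
```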

I’d like to write a separate blog post on this whole routing thing, but if you want a good overall understanding I’d recommend reading http://chimera.labs.oreilly.com/books/1234000001552 . It is quite informative and will give you a good foundation in Web Audio.

I’d built this almost a year ago! I haven’t done much with Web Audio since, as I’ve been spending all my time on Ableton.

I love the web, and programming audio is fun. So I decided to build a music sequencer, and here it is – http://codepen.io/subtraktive/pen/ByvVzr

It is simple in function: there are 16 steps that can be used to create a pattern, a range input for BPM, and a play/pause button. You choose a pattern and a BPM, and the pattern plays endlessly. You can also change the pattern and the BPM on the go!

The only complex thing about this is the scheduling! Web Audio doesn’t provide a scheduler; you need to build it yourself. JavaScript timers can’t be relied on alone, because they run on the main thread (JS is single-threaded!) and can be delayed. Timing is one of the most important aspects of audio and you can’t screw it up.

To get a better understanding of schedulers I read http://www.html5rocks.com/en/tutorials/audio/scheduling/ a couple of times. Honestly, I still haven’t completely understood some of what he says, but I got an idea of how it works and how to implement it.

I used the same technique to schedule audio events; the only change I made was to turn it into a sequencer. The idea is very simple: you push the selected boxes into an array, and every time the scheduler runs, it checks whether the beat number currently playing is present in the array, and if so it plays it. I’ve commented the code, so it should be easy to understand. If not, please leave a comment and I can help.
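The lookahead approach from that article, adapted to a 16-step pattern, can be sketched roughly like this. The names (`startSequencer`, `trigger`) and the constants are illustrative, not the pen’s actual code; `pattern` holds the indices of the active steps, mirroring the “push selected boxes into an array” idea above.

```javascript
const STEPS = 16;
const LOOKAHEAD_MS = 25;    // how often the scheduler timer wakes up
const SCHEDULE_AHEAD = 0.1; // how far ahead (seconds) to schedule audio

// Seconds per step, treating each of the 16 steps as a 16th note.
function stepDuration(bpm) {
  return 60 / bpm / 4;
}

// ctx: AudioContext, pattern: array of active step indices (0..15),
// trigger(step, time): schedules the actual sound at a precise audio time.
function startSequencer(ctx, pattern, bpm, trigger) {
  let step = 0;
  let nextTime = ctx.currentTime;

  return setInterval(() => {
    // A coarse JS timer fires often, but the precise timing comes from
    // scheduling every step that falls inside the lookahead window
    // against the audio clock (ctx.currentTime), not the JS clock.
    while (nextTime < ctx.currentTime + SCHEDULE_AHEAD) {
      if (pattern.includes(step)) trigger(step, nextTime);
      nextTime += stepDuration(bpm);
      step = (step + 1) % STEPS;
    }
  }, LOOKAHEAD_MS);
}
```

Because `pattern` and `bpm` are only read when each step is scheduled, changing them while the interval is running is what lets the pattern and tempo be edited on the go.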

There is so much fun in building these small components. Looking forward to building more; I hope it won’t take me another year!

Ableton Live 9 Intro And MPK Mini II

I finally got my copy of Ableton Live 9 Intro from a friend who was coming down to India for the holidays. I also bought a new controller!

I had been using an M-Audio Keystation 32, which was decent, but it lacked other controls such as knobs and pads. After I started using Live, I realised that a lot could be done with a controller. MIDI mapping in Live is super easy, and there was very little I could do with the Keystation.

So, I decided to buy a controller that offered me more than just keys. My first choice was the Akai LPD8, which I thought would complement the Keystation by providing all the missing features I was interested in. I would have bought it if I hadn’t come across the MPK Mini II, which packed in more features. This device combines the features of both of the above controllers, and has a pitch/modulation joystick and an arpeggiator, which are kind of cool. It has fewer keys than the Keystation, which I’m fine with, considering how little I use both hands to play the keys.

Coming back to Live, I have started using Intro and it is simply awesome. The best part about it is the limitations! I’m a beginner, and I guess it is sufficient to work with 12 tracks, 8 scenes, 26 audio effects and 3 instruments. I don’t think I’m even eligible to use the word ‘limitation’ at this point, as I’m yet to learn and experiment with what I have. I need to reach the point where I experience the limitations, work within them, and move forward.

As I’ve mentioned in previous posts, the possibilities for combining sounds are just infinite. I have a feeling that working within constraints can help us master the little things and achieve goals in parts.

Now that I’ve spoken great things about the two, you can imagine how much more fun it can be when the Mini is plugged into Live.

I had no trouble setting up the Mini with Live. All I had to do was go to Preferences in Live and update the MIDI Sync tab: select the MPK control surface and the corresponding input and output, then turn on Track, Sync and Remote for the input MIDI ports, and you’re good to go.

I loaded some samples and it was fun playing them on the pads. Even more exciting is how you can map the knobs to various parameters like cutoff, Q, LFO, etc., and change them as you play.

You already begin to feel like an electronic musician! Such is the power of Live.

The only thing some of you might like to change is the channels on which the pads and the keys operate. The default MPK editor setup has channel 1 set for both the pads and the keys. The problem with this is that if your keys cover a certain octave range (say C1, D1, etc.) and you have a pad assigned to C1 (or any note in that range), then both the key and the pad will trigger the same sound. This might not be desirable when you want the two to work independently; say you are playing a solo and don’t want to hear a drum or trigger some other operation.

The solution is to update the channels: the pads and the keys must send their signals on different channels so that the values do not overlap. This can be changed in the MPK editor, which can be downloaded for free from the website.
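For the curious, the reason separate channels fix the clash is that a MIDI note-on message encodes its channel in the low nibble of the status byte, so the same note number on different channels is a different event. A tiny sketch (the `parseNoteOn` helper is illustrative, not part of any setup here):

```javascript
// Parse a 3-byte MIDI note-on message [status, note, velocity].
// Status 0x90 is note-on on channel 1, 0x91 on channel 2, and so on;
// a pad and a key sending the same note on different channels
// therefore produce distinct messages and can be mapped independently.
function parseNoteOn(bytes) {
  const [status, note, velocity] = bytes;
  if ((status & 0xf0) !== 0x90) return null; // not a note-on message
  return { channel: (status & 0x0f) + 1, note, velocity };
}
```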

If you are using the Mini II along with Ableton Live and want this setup, you might find the two links below extremely useful.

I feel excited to have started a new blog and to be writing my first post.

I had a blog before, but it had no particular goal. This one, on the other hand, will mostly be aimed at sharing some of the things I learn in the areas of Audio Production and Frontend Development.

I’ve been working as a Frontend Engineer for close to 3 years now and it’s been quite an exciting journey. The job involves understanding design and writing code, and this intersection of art and technology brings in some joy.

Music has been my main interest since my engineering days, but it had been limited to exploring and listening to new sounds. Things changed when I began to learn an audio programming language called ChucK, which introduced me to the concept of music synthesis. I got deeply interested in synthesis, and it led me to dive into the area of audio production.

I’m very new to this area and have been learning through online courses, tutorials, articles, etc. I have a system running Ubuntu 14.04, and it took me a while to understand and get the JACK audio setup working.

I want to keep learning and share what I learn. I hope I get to do that through this blog.