From Node to Diode

I have a confession to make. I'm an LED addict. If it involves making some blinkenlights I'm your man. In 2010 I helped build the Illuminatrix, a 4ft x 4ft wall of LEDs nestled inside ping pong balls that displayed hundreds of animations from people all around the world. This was my first big electronics project, besides a few kits I'd built and some awful attempts at implementing various audio synthesizer circuits I'd found online. But I officially caught the hacker bug back then, and that project inspired a lot of my work over the last few years.

This year we decided to build another project for the Burning Man festival. We're calling it the Diodome, and it's an 18ft geodesic dome containing hundreds of LEDs. Obviously.

I'll be detailing the build on my blog as we go, but the first step is to figure out how we're going to control all these LEDs!

How we did this before...

One of the things I did when I built the Illuminatrix was to build a web-based animation editor that anyone could use to easily create animations for the project. In it you could write JavaScript code that we used to generate keyframe data for the animations. We dumped that keyframe data onto an SD card, and a PIC microcontroller on custom circuit boards read it back and drove the LEDs directly. This worked well enough, but it was a lot of work, and it wasn't very stable or flexible. We could only use keyframed data, and we couldn't modify the animations on the fly.
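To make the keyframe idea concrete, here's a minimal sketch of the kind of sampling the playback side has to do. The data layout and names here are my own illustration, not the actual Illuminatrix code:

```javascript
// Sketch of keyframe sampling, the sort of data the old Illuminatrix
// pipeline baked onto the SD card. Layout and names are hypothetical.

// Each keyframe: a time (ms) and an [r, g, b] colour per LED.
var keyframes = [
    { t: 0,    colors: [[255, 0, 0]] },  // LED 0 starts red...
    { t: 1000, colors: [[0, 0, 255]] }   // ...and fades to blue over 1s
];

// Linearly interpolate the colour of one LED at time t.
function sample(frames, led, t) {
    for (var i = 0; i < frames.length - 1; i++) {
        var a = frames[i], b = frames[i + 1];
        if (t >= a.t && t <= b.t) {
            var f = (t - a.t) / (b.t - a.t);
            return a.colors[led].map(function (c, ch) {
                return Math.round(c + (b.colors[led][ch] - c) * f);
            });
        }
    }
    return frames[frames.length - 1].colors[led];
}

console.log(sample(keyframes, 0, 500)); // halfway between red and blue: [ 128, 0, 128 ]
```

Bake a sample out for every LED at a fixed frame rate and you have exactly the kind of static keyframe dump we stored on the SD card — which is also why we couldn't change anything on the fly.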

Since we want to make the Diodome a lot more interactive and extensible than that, we're going to take another approach. I still want a web-based animation editor, purely because I can't have an 18ft dome set up in my living room! So how can we take the JavaScript code from the editor and use it to control the LEDs without having to rewrite every animation's code by hand to run on the hardware?

Enter Node.js and the Raspberry Pi. The Pi is a wonderfully capable little machine, and thankfully Node compiles very well on it. So could we take the JavaScript animations from the editor, run them on the Raspberry Pi, and then use the Pi to control the LED strings?

Installing Node.js on Raspberry Pi

To get started we first had to compile Node for the Pi, which turned out to be pretty simple:

wget http://nodejs.org/dist/v0.10.9/node-v0.10.9.tar.gz
tar xvzf node-v0.10.9.tar.gz
cd node-v0.10.9
./configure && make && sudo make install

This will take a long, long time...for me, almost 2 full hours. It should, however, go without a hitch. You can check it's working by doing:

node --version
npm --version

Now that Node is running on the Pi, we need to figure out how to control the LEDs from Node...

Timing is everything

The LEDs we're using for the Diodome are the popular and inexpensive WS2811-based strings you can get cheaply on eBay, or in our case direct from China. These allow you to individually control hundreds of LEDs from a single GPIO pin, but they require very precise timing, in the region of 0.5 microseconds. That's an accuracy I'm just not sure could ever be achieved from JavaScript, and even the Raspberry Pi itself might struggle to bitbang pins that precisely. Thankfully there's a simple way around this which makes all our lives a lot easier: use an Arduino.
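For a sense of scale, the bandwidth budget works out fine on paper. Assuming the 800kHz bitstream mode and 24 bits per LED (and ignoring the short reset pulse between frames), a quick back-of-envelope calculation:

```javascript
// Back-of-envelope: how fast can we refresh a WS2811 string?
// Assumes an 800kHz bitstream and 24 bits (3 bytes) per LED; the
// short reset pulse between frames is ignored for simplicity.
var BIT_RATE = 800000;   // bits per second
var BITS_PER_LED = 24;   // one byte each for R, G, B

function maxFps(numLeds) {
    var secondsPerFrame = (numLeds * BITS_PER_LED) / BIT_RATE;
    return 1 / secondsPerFrame;
}

console.log(Math.floor(maxFps(100))); // 100 LEDs -> 333 frames/sec
console.log(Math.floor(maxFps(500))); // 500 LEDs -> 66 frames/sec
```

So even a few hundred LEDs can be refreshed far faster than the eye can see; the hard part is purely the sub-microsecond bit timing, which is where the Arduino comes in.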

Obviously an Arduino isn't powerful enough to run our animation code in Node, so we'll use the Raspberry Pi and talk to the Arduino over USB serial. All we need is to be able to write to the serial port from Node, which is quite simple using the node-serialport library. Here's a simple bit of code that opens a serial port to the Arduino and sends an array of color values for 5 LEDs.

var serialport = require("serialport"),
    SerialPort = serialport.SerialPort;

// first list the serial ports available so we can figure out which is the arduino
serialport.list(function (err, ports) {
    var port = null;
    ports.forEach(function(p) {
        // this should work on windows and maybe osx
        if (p.manufacturer && p.manufacturer.indexOf('Arduino') !== -1) {
            port = p.comName;
        } else {
            // this will work on raspberry pi / linux
            if (p.hasOwnProperty('pnpId')) {
                // FTDI captures the duemilanove //
                // Arduino captures the leonardo //
                if (p.pnpId.indexOf('FTDI') !== -1 || p.pnpId.indexOf('Arduino') !== -1) {
                    port = p.comName;
                }
            }
        }
    });

    // port should now contain a string for the com port
    // open the port
    var serialPort = new SerialPort(port, {
        baudrate: 115200
    });

    // hook up open event
    serialPort.on("open", function () {
        // port is open
        console.log('port ' + port + ' opened');

        // hook up data listener to echo out data as its received
        serialPort.on('data', function(data) {
            console.log('data received: ' + data);
        });

        // here's an array of LED color values, 3 bytes per LED.
        // first LED red, last LED blue, the rest off
        var LEDS = [255, 0, 0,
                    0, 0, 0,
                    0, 0, 0,
                    0, 0, 0,
                    0, 0, 255];

        // create a Buffer object to hold the data
        var buffer = new Buffer(LEDS);

        // write it on the port
        serialPort.write(buffer, function(err, results) {
            if (err) {
                console.log('err ' + err);
            }
            console.log('wrote bytes : ' + results);
        });
    });
});

We can then write a simple sketch on the Arduino to read from the serial port and set the color of the LEDs using the excellent NeoPixel library from Adafruit. This library has some rather excellent assembly code that can manage the accurate timing needed for the WS2811 drivers, and it can even drive the LEDs at 800kHz, which gives us just enough time to update all our LEDs without any flicker. Here's the sketch I'm using on my Arduino Micro:

// include the neo pixel library
#include <Adafruit_NeoPixel.h>

// how many leds in our string?
static const int NUM_LEDS = 5;

// Parameter 1 = number of pixels in strip
// Parameter 2 = pin number (most are valid)
// Parameter 3 = pixel type flags, add together as needed:
//   NEO_RGB     Pixels are wired for RGB bitstream
//   NEO_GRB     Pixels are wired for GRB bitstream
//   NEO_KHZ400  400 KHz bitstream (e.g. FLORA pixels)
//   NEO_KHZ800  800 KHz bitstream (e.g. High Density LED strip)
Adafruit_NeoPixel strip = Adafruit_NeoPixel(NUM_LEDS, 6, NEO_GRB + NEO_KHZ400);

// buffer to hold colors of our LEDs
char colorValues[NUM_LEDS*3];

void setup() {

  // initialize to black (off)
  for (int i=0; i < NUM_LEDS*3; i++) {
    colorValues[i] = 0;
  }

  // start the strip up
  strip.begin();

  // initialize the strip to the current values
  for (int i=0; i<NUM_LEDS; i++) {
    int d = i*3;
    uint32_t c = strip.Color(colorValues[d], colorValues[d+1], colorValues[d+2]);
    strip.setPixelColor(i, c);
  }
  // update the strip;

  // Initialize serial and wait for port to open:
  Serial.begin(115200);
  while (!Serial) {
    ; // wait for port

void loop() {
  // wait for bytes on serial port
  if (Serial.available() > 0) {
    // read 3 bytes per LED from serial port
    int bytesRead = Serial.readBytes(colorValues, NUM_LEDS*3);
    // check we got a full complement of bytes
    if (bytesRead < NUM_LEDS*3) {
      // something went wrong, abandon this loop
      return;
    }
    // feed the data to the leds
    for (int i=0; i<NUM_LEDS; i++) {
      int d = i*3;
      uint32_t c = strip.Color(colorValues[d], colorValues[d+1], colorValues[d+2]);
      strip.setPixelColor(i, c);
    }
    // update the strip;

And there you have it. Simply program your Arduino with this sketch, plug it into your Raspberry Pi, then run the Node script with node app and your LEDs should change to the colors specified (in the example above: first LED red, last LED blue, all others off).
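From here, an animation is just a matter of building a fresh buffer every tick. Here's a sketch of a frame builder for a simple chasing-dot effect — the function is my own illustration, not part of the project code; you'd hand its output to serialPort.write as above:

```javascript
// Build one animation frame: a single red dot stepping along the
// string, one LED per tick. Returns the flat [r,g,b, r,g,b, ...]
// array you'd wrap in a Buffer and hand to serialPort.write().
// (Illustrative sketch, not the actual Diodome animation code.)
function buildFrame(tick, numLeds) {
    var frame = [];
    for (var i = 0; i < numLeds; i++) {
        if (i === tick % numLeds) {
            frame.push(255, 0, 0);  // the moving dot, full red
        } else {
            frame.push(0, 0, 0);    // everything else off
        }
    }
    return frame;
}

// e.g. inside the 'open' handler, roughly 30 frames per second:
// var tick = 0;
// setInterval(function () {
//     serialPort.write(new Buffer(buildFrame(tick++, 5)));
// }, 33);

console.log(buildFrame(1, 3)); // [ 0, 0, 0, 255, 0, 0, 0, 0, 0 ]
```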

Next time I'll be detailing how I set up the animation editor, and how we got the animations running on Node on the Pi using the Tween.js library. If you find this post useful, or are just feeling generous, we're fundraising for the Diodome project now.

Elastic Beanstalk and VPC Fun

Last time I blogged about getting libcouchbase installed on an Elastic Beanstalk deployment. That was a major headscratching session. After we got that working I had to move on to getting our EB configuration to deploy within a Virtual Private Cloud. This was a bit of a nightmare...

Looks easy at first

At first glance this looks like it should be relatively easy. We just add a configuration file to our .ebextensions folder that configures the VPC settings; I'll name it 01_environment.config:

option_settings:
  # Use our keypair so we can SSH into the EC2 instances
  - namespace: aws:autoscaling:launchconfiguration
    option_name: EC2KeyName
    value: ec2keypair
  # Now setup the VPC that our EC2 instances should use
  - namespace: aws:ec2:vpc
    option_name: VPCId
    value: vpc-un1que1d
  # Now we setup a subnet for our VPC ec2 instances to use
  - namespace: aws:ec2:vpc
    option_name: Subnets
    value: subnet-un1que1d
  # And another subnet for our ELB
  - namespace: aws:ec2:vpc
    option_name: ELBSubnets
    value: subnet-un1que1d2
  # Now set the instance type we want to use for autoscaling
  - namespace: aws:autoscaling:launchconfiguration
    option_name: InstanceType
    value: m1.small
  # And setup a security group for NAT
  - namespace: aws:autoscaling:launchconfiguration
    option_name: SecurityGroups
    value: sg-un1qu1d3

This is all mostly straight out of the documentation for EB, which you can find here. Of course, it's not as easy as all that. Once all this is in place and you do a new deployment with EB, you can see that this doesn't actually work! It appears that the security group deployed on the Elastic Load Balancer isn't configured to allow traffic from port 80 on the load balancer through to port 8080 inside the VPC. This means that while all the servers come up inside the VPC correctly, there's no external access to them, so our web services can't be reached from the outside world. This is probably a sensible default for a VPC, since it's supposed to be private after all, but it's no good for our services.

Lacking documentation

So I set out to find some documentation that would tell me how to configure the security group for the ELB that Elastic Beanstalk creates for me. Unfortunately it appears that you can't do this...there's nothing in the documentation for Elastic Beanstalk about configuring the ELB or very much about how to configure any of the resources that EB brings up for you. However...all is not as it seems!

EB uses the same configuration management and resource setup as the CloudFormation service. So while it's not actually documented, pretty much all the configuration you can do in CloudFormation templates can be done with files in your .ebextensions folder. So let's set up a SecurityGroup using this documentation and configure our LoadBalancer using this documentation. This gives us a file, which I'll name 02_load_balancer.config, that looks a little like this:

"Resources" : {
  "AWSEBLoadBalancerSecurityGroup": {
    "Type" : "AWS::EC2::SecurityGroup",
    "Properties" : {
      "GroupDescription" : "Enable 80 inbound and 8080 outbound",
      "VpcId": "vpc-un1que1d",
      "SecurityGroupIngress" : [ {
        "IpProtocol" : "tcp",
        "FromPort" : "80",
        "ToPort" : "80",
        "CidrIp" : ""
      "SecurityGroupEgress": [ {
        "IpProtocol" : "tcp",
        "FromPort" : "8080",
        "ToPort" : "8080",
        "CidrIp" : ""
      } ]
  "AWSEBLoadBalancer" : {
    "Type" : "AWS::ElasticLoadBalancing::LoadBalancer",
    "Properties" : {
      "Subnets": ["subnet-un1que1d2"],
      "Listeners" : [ {
        "LoadBalancerPort" : "80",
        "InstancePort" : "8080",
        "Protocol" : "HTTP"
      } ]

Just ensure that your VpcId matches the one in your earlier configuration, and that the Subnets configuration matches the one in your ELBSubnets setting.

Now when you deploy you should see it create this new security group for you and automatically assign the load balancer to it, thus allowing you to access your web services. It's great that Amazon provides all these services, but there's a lot of hidden knowledge locked up in the documentation that isn't obvious anywhere. The ability to use all the CloudFormation configuration options in your Elastic Beanstalk .ebextensions files is frankly awesome, opening up a huge range of possibilities for Elastic Beanstalk deployments.

Deploying libcouchbase with AWS Elastic Beanstalk

Recently we've begun using Couchbase as a backend datastore for some of our projects at my day job with SupplyFrame. Since my projects now involve Node.js, I needed to make use of the couchnode package. Unfortunately this depends on libcouchbase being installed on the platform before the package is installed.

We're using Elastic Beanstalk for our deployments, so I needed to figure out how to get the libcouchbase libraries installed before npm install was run so that the package would install correctly. Thankfully Amazon anticipated this sort of problem and provides a rather handy mechanism for configuring your AWS environments under Elastic Beanstalk.

.ebextensions configurations

If you include a folder called .ebextensions in the root of your source package (in our case a git repo), then you can add configuration files (with the extension .config) there that will be executed in alphabetical order during environment start. Full documentation for the .config files can be found here.
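One consequence of the alphabetical ordering is that numeric prefixes (01_, 02_, ...) are the easy way to make execution order explicit. A trivial illustration:

```javascript
// .ebextensions .config files run in alphabetical order, so numeric
// prefixes pin down exactly when each one executes. A plain sort
// shows the order Elastic Beanstalk will use:
var files = ['10_app.config', '02_load_balancer.config', '01_environment.config'];
var order = files.slice().sort();
console.log(order);
// [ '01_environment.config', '02_load_balancer.config', '10_app.config' ]
```

Note that the sort is lexicographic, so zero-pad the prefixes: '10_' sorts after '02_' but before '2_'.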

If you're using one of the standard platforms and the packages you need are available in the standard repositories, then you can probably make use of the packages key to automatically install what you need. This would look something like this inside your 01_myapp.config file (example below taken from the docs):

packages:
  yum:
    libmemcached: []
    ruby-devel: []
    gcc: []
  rubygems:
    chef: '0.10.2'
  apt:
    mysql-client: []

Unfortunately libcouchbase is not in the standard repositories, and to make matters worse the Amazon Linux AMI we're using doesn't even have some of the dependencies for this package installed. So I had to get a little more creative.

Using .ebextensions commands

There's another key called commands in the .config specification that allows you to execute arbitrary shell commands as part of the environment setup. This took a bit of fiddling to get going, but after much headscratching I finally came up with the following which seems to do the job and get libcouchbase and all its dependencies installed:

# Errors get logged to /var/log/cfn-init.log. See Also /var/log/eb-tools.log
commands:
    01-command:
        command: wget -O/etc/yum.repos.d/couchbase.repo

    02-command:
        command: yum install -y epel-release
        ignoreErrors: true

    03-command:
        command: yum check-update
        ignoreErrors: true

    04-command:
        command: yum install -y --enablerepo=epel libev

    05-command:
        command: yum install -y libcouchbase2-libevent libcouchbase2 libcouchbase-devel

What this does is mostly described in the libcouchbase installation instructions, with a couple of extra additions. 02-command ensures that our AMI has the EPEL repository installed in yum so that we can gain access to the libev package installed by 04-command. Finally, we modified the main install command to include the libcouchbase2-libevent package, which appears to have been missed from the dependency hierarchy for libcouchbase2.


This should have been really straightforward to get going; unfortunately, as always, there were a few hiccups that made it all take much longer to figure out. Firstly, these .config files are supposed to be YAML, but they're very sensitive to syntax issues, and to make matters worse, if you get the syntax slightly wrong you'll probably see your Elastic Beanstalk environment just lock up for about 40 minutes before reverting to the last working version. Secondly, it can be pretty hard to debug what's gone wrong, especially if the instance reverts to the last good version, since quite often the process of reverting will wipe out most of the log messages you need to see to figure out what's wrong. The log messages you do get aren't always the best either, so it can be pretty tough to figure out what's going on.
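One cheap defence worth having: YAML forbids tab indentation, and a stray tab is a classic way to trip the parser. A tiny pre-deploy tripwire you can run over your .config files — my own sketch, not a real YAML validator:

```javascript
// A crude pre-deploy sanity check for .ebextensions files: YAML
// forbids tab indentation, and a stray tab is an easy way to lock
// an environment up for 40 minutes. This is just a tripwire for
// that one common mistake, not a real YAML parser.
function findTabIndents(text) {
    var bad = [];
    text.split('\n').forEach(function (line, i) {
        if (/^[ ]*\t/.test(line)) {
            bad.push(i + 1);  // 1-based line numbers
        }
    });
    return bad;
}

console.log(findTabIndents('option_settings:\n\t- namespace: x')); // [ 2 ]
console.log(findTabIndents('option_settings:\n  - namespace: x')); // []
```

Run it over each file with fs.readFileSync before you push a deployment and you'll catch the worst offender in seconds instead of 40 minutes.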

However, I have to say I'm quite happy with the setup now, I can easily push my local git repository into Elastic Beanstalk and have all my dependencies setup in one easy command. Great stuff. Next time I'll detail some of the perils of trying to do all this inside a VPC!

Setting up a Raspberry Pi media center

I finally got round to ordering a Raspberry Pi this week, and on Friday it arrived. So this weekend I've been busy setting it up to act as a media center for our living room. What I want ideally is an XBMC setup capable of playing 1080p media with DTS output; I also want it to play Spotify headless, without having to use my TV as the screen. So really I want an Android client on my phone controlling Spotify on the Raspberry Pi, and XBMC for when I'm watching media on the TV.

Xbian and XBMC Setup

I decided to use Xbian, since it comes as a nice minimalist pre-built image with XBMC and HDMI CEC support already installed. Getting it installed was as easy as downloading the installer and letting it do its thing to my 2GB SD card. Simple. Within 10 minutes I had my XBMC installation up and running.


Thankfully, since Pulse Eight produced the libCEC library, HDMI CEC control just works out of the box. So I immediately had my LG Magic Remote working with XBMC and was a very happy bunny. However, I have to say that after some testing this really isn't the ideal way of controlling XBMC. Don't get me wrong, HDMI CEC is very cool, and being able to start up the TV and 5.1 receiver with XBMC is great, but the simple fact is that the buttons on the LG Magic Remote aren't ideal for this type of interface. If someone could get the pointer interface to control a mouse over HDMI then we'd be talking, but right now it's too limited.

So back to my trusty Android devices to install the Official XBMC Remote app. This app is great: a breeze to set up, fast and snappy, and it gives you good integration with your media center. The remote interface is a lot more comfortable to use than my LG Magic Remote, and the on-device browser for media is lovely (if a little slow with the images served from the Raspberry Pi).

Media Library

One thing that caught me out at first was that XBMC will not apply its library functions to media shares over UPnP, which is a bit of a shame, so I've had to add the shares on my NAS to XBMC as SMB shares. Once I did this, though, it happily chugged away indexing all my media and I was soon up and running.

Be warned: when XBMC is updating its library it's basically unusable, with performance dropping to multi-second responses. So if you've just added some new media to your shares, go get a cuppa or something and wait for it to do its thing.

Spotify with Spotimc

Next up I wanted music in my living room, and I had two options here. The first was Spotimc, a plugin for XBMC that was incredibly easy to set up thanks to the work of welly_59 from the stmlabs forum: just grab this file, put it somewhere XBMC can reach it on your network, then go to System -> Addons -> Install from Zip and browse to the file. It won't show you much happening, but a short while later you'll get a little popup telling you the plugin was added. If you then go to Music -> Addons from the main menu you'll see Spotimc. On first start it will do some more installation and setup, which will require a reboot; after that, though, you should be greeted with a login screen and be up and running.

A better way - Mopidy

That's all well and good, but I don't really want to have to put my TV on just to play music.

So I spent this morning looking into other alternatives, the best of which appears to be Mopidy. Mopidy is a music server that will play Spotify streams as well as search and play local media. Best of all, there's a variety of mobile clients that will control it, since it's MPD-compatible. This means that once it's set up you can control it from the comfort of your mobile and play music without needing to use your television! There's a great client called MPDroid available on the Play Store; it lets you play back your Spotify playlists and search for tracks, artists, and albums.
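MPD compatibility just means Mopidy speaks MPD's plain-text protocol: you send a command like status over TCP (port 6600 by default) and get back key: value lines terminated by an OK line, which is why generic clients like MPDroid work unchanged. A sketch of what a client sees — the parser below is my own illustration:

```javascript
// Mopidy's MPD compatibility means it answers MPD's plain-text
// protocol on TCP port 6600: send "status\n", get back "key: value"
// lines terminated by "OK". A minimal response parser (my own
// sketch, not MPDroid's or Mopidy's code):
function parseMpdResponse(text) {
    var result = {};
    text.split('\n').forEach(function (line) {
        var m = line.match(/^([^:]+): (.*)$/);
        if (m) result[m[1]] = m[2];
    });
    return result;
}

// To try it against a live Mopidy you'd do something like
// (hostname is whatever your Pi answers to):
// var socket = require('net').connect(6600, 'raspberrypi.local');
// socket.on('data', function (d) { console.log(parseMpdResponse(String(d))); });
// socket.write('status\n');

var sample = 'volume: 80\nstate: play\nOK';
console.log(parseMpdResponse(sample)); // { volume: '80', state: 'play' }
```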

Getting it set up was a little trickier than I had hoped, though. First off, there are no installation instructions for Xbian available, so I followed the rough outline of the instructions for Debian Wheezy.


First, install Mopidy and its dependencies from the Mopidy APT repository, as described in the Wheezy installation documentation. In short:

    wget -q -O - https://apt.mopidy.com/mopidy.gpg | sudo apt-key add -
    sudo wget -q -O /etc/apt/sources.list.d/mopidy.list https://apt.mopidy.com/mopidy.list
    sudo apt-get update
    sudo apt-get install mopidy

Now install jackd1, since the newer jackd2 apparently causes issues:

    sudo apt-get install jackd1

Say 'yes' to allow installation with a realtime configuration.

Finally, we want to ensure that all the relevant drivers are up and running, so let's modify our /etc/modules file to include the Pi's sound driver:

    snd_bcm2835

Running on boot

This is where the basic installation instructions end and things got a little murky. I wanted Mopidy to run automatically on startup. Most instructions tell you to use upstart, but Xbian doesn't use that by default; it uses old-school init.d scripts. So let's set up a script to start Mopidy, plus the configuration for our Spotify account.

First, let's add a user account for mopidy to run under:

    sudo adduser --system mopidy
    sudo adduser mopidy audio

Next we should create the required config files in the /home/mopidy folder for our new user:

    sudo mkdir /home/mopidy/.config
    sudo mkdir /home/mopidy/.config/mopidy

    sudo vi /home/mopidy/.config/mopidy/

Add something like this to your file:

    MIXER = u'pulsemixer'
    SPOTIFY_USERNAME = u'yourspotifyuser'
    SPOTIFY_PASSWORD = u'yourspotifypass'

Notice that I've modified the default mixer configuration for Mopidy to use pulsemixer. This is because I found my HDMI audio to be highly corrupted, with lots of distortion and clicking and popping sounds. So I quickly installed PulseAudio to fix this:

    sudo apt-get install pulseaudio

Finally we need to setup our init.d script to run Mopidy on startup:

    sudo vi /etc/init.d/mopidy

Then add something like this to the file:

    #!/bin/sh
    # mopidy daemon
    # chkconfig: 345 20 80
    # description: mopidy daemon
    # processname: mopidy
    ### BEGIN INIT INFO
    # Provides:          mopidy
    # Required-Start:    $remote_fs $syslog $network
    # Required-Stop:     $remote_fs $syslog $network
    # Default-Start:     2 3 4 5
    # Default-Stop:      0 1 6
    # Short-Description: Start mopidy daemon at boot time
    # Description:       Enable mopidy music server
    ### END INIT INFO

    DESC="My mopidy init script"
    PIDFILE=/var/run/

    case "$1" in
      start)
            echo "Starting Mopidy Daemon"
            start-stop-daemon --start --chuid mopidy --background --exec /usr/bin/mopidy \
                    --pidfile $PIDFILE --make-pidfile \
                    -- 2>/var/log/mopidy.log
            ;;
      stop)
            echo "Stopping Mopidy Daemon"
            start-stop-daemon --stop --exec /usr/bin/mopidy --pidfile $PIDFILE
            ;;
      restart)
            $0 stop
            $0 start
            ;;
      *)
            echo "Usage: $0 {start|stop|restart}"
            exit 1
            ;;
    esac

    exit 0

This script makes use of the really handy start-stop-daemon to manage the starting and stopping of the Mopidy process.

Finally we need to make the script executable, then tell the system when to start it by setting up the defaults:

    sudo chmod +x /etc/init.d/mopidy
    sudo update-rc.d mopidy defaults

Now we should be able to reboot our box and have Mopidy up and running automatically. However...

XBMC is now broken! Oh noes!

Dun dun duuun! It seems XBMC will no longer boot, it just sits at the loading screen.

It turns out that Mopidy uses a version of a dependency called libtag that is incompatible with the one XBMC needs. This is really unfortunate and had me scratching my head for ages. But don't worry, there's an easy fix: we just need to get XBMC to look in /usr/local/lib for its dependencies first. So we modify the XBMC startup script:

    sudo vi /etc/init.d/xbmc

After the comments at the top of the file, add the following line:

    export LD_LIBRARY_PATH=/usr/local/lib

Now restart XBMC with sudo service xbmc restart and you should be good to go!


That was a long one. Took a whole day of messing around to figure out how to get all these things to work nicely together, but I'm quite happy with the result. I can now control my XBMC from my Android devices, and play music without having to use the TV which is lovely.

Sadly the Raspberry Pi is still a little underpowered for all this. I can't help but think that if it just had a little more RAM and a few hundred more MHz it would be the ideal media device. As it stands it's usable, and ridiculously good value for the $40 it costs to get up and running.


I had to consult an awful lot of the internet to get this running, so hopefully having it all collected in one place will come in useful if this information gets out of date.

Installing node-canvas for Windows

I've recently been experimenting with node.js for a possible new side project. So far my experience has been great, there's a lot of fun to be had working with Node and there's a massive raft of libraries available to make your life really easy.

CSS Awesome

I wanted to use Stylus in my application to compile and normalize my CSS. I cannot overstate just how great using Stylus as part of your CSS workflow really is; it's saving me loads of time and my CSS feels clean and tidy. Anyway, one addition you can make to Stylus to further improve its awesomeness is Nib, which allows for really cool stuff like cross-browser CSS3 extensions, various shorthands that make life easy, and most of all automatic generation of gradient images for IE and older browsers that don't support gradients.
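Those IE fallback gradients are, under the hood, just a thin image with colours interpolated between the stops. A rough sketch of the interpolation involved — my own illustration, not Nib's actual code:

```javascript
// What a fallback gradient image boils down to: a 1px-wide column of
// pixels, each row interpolated between the start and end colours.
// (Illustrative sketch of the idea, not Nib's or node-canvas's code.)
function gradientColumn(from, to, height) {
    var rows = [];
    for (var y = 0; y < height; y++) {
        var f = height === 1 ? 0 : y / (height - 1);
        rows.push(from.map(function (c, i) {
            return Math.round(c + (to[i] - c) * f);
        }));
    }
    return rows;
}

console.log(gradientColumn([255, 255, 255], [0, 0, 0], 3));
// [ [ 255, 255, 255 ], [ 128, 128, 128 ], [ 0, 0, 0 ] ]
```

Nib hands the real rendering to node-canvas, which is why the native module below has to be installed even though your own code never touches it directly.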

However, all this awesomeness comes at a price: you must install node-canvas into your app so that it can render these gradient images on the server side. While Node is proving to be largely platform agnostic, some modules such as this one require native code compilation. While that's usually pretty straightforward on *nix-based operating systems, it's a bit of a headache on Windows. I've now had to get this running on a couple of different boxes and keep forgetting where to get all the relevant pieces, so I thought it was time for a blog post so I have this stuff easily to hand, and maybe it will help some of you out too.

There's an article here that documents how to install node-canvas on Windows, but I found it's missing a few details, so I've outlined these below.

node-gyp setup

Native compilation on Node is achieved with the node-gyp module. This tidies up all the little bits of pain we usually associate with compiling things on various platforms. In order for it to work on Windows we'll need to install some of its dependencies which are listed here.

  1. Install Python 2.7.3. Only 2.7.3; nothing else works quite right!

    If you use Cygwin (as I do) ensure you don't have Python installed in Cygwin setup as there will be some confusion about what version to use.

  2. Install Visual Studio C++ 2010 Express

  3. (64-bit only) Install Windows 7 64-bit SDK

    If like me, the SDK fails to install with the following error:

    Installation of the “Microsoft Windows SDK for Windows 7” product has reported the following error: Please refer to Samples\Setup\HTML\ConfigDetails.htm document for further information.

    Then you should follow the instructions here to get the installation to work again.

node-canvas installation

The node-canvas module relies on Cairo, which ships with the GTK binaries, to do its rendering on the server side, so you'll need to install a copy of the GTK binaries on your machine. You'll want the 'All in one' binary package:

Download the appropriate package and unzip it to the C:\GTK folder; any other folder and you'll have to make configuration changes to node-gyp, so it's probably best to just put the library there.

Now that you have node-gyp setup properly you can install node-canvas like you would any other module, either globally with:

npm install -g canvas

Or locally in your application using a dependency in your package.json file.

Now hopefully your canvas installation will work perfectly from here on out. However, you might still encounter errors. If you do, you could try one last nasty hack that really isn't ideal but did solve some final dependency issues for me. It appears that on some installations the .dll files for GTK do not get found by the native code running for node-canvas. In order to make this work simply copy all the dll files from the C:\GTK\bin folder to your node_modules/canvas/build/Release folder. That should resolve any niggling dependency issues.

Node + Windows

While I am beginning to love developing with Node, I am really hating the niggling issues it still has with Windows. While I could switch to OS X or a flavour of Linux, I'm really disinclined to at this stage as the Node issues are the only thing that really grates in my development process right now and I'm very proficient with my current choice of OS.

Perhaps things will improve, I'm certainly going to try and stick with it for now and where possible contribute back to the Node community to get things more polished.