This recipe is an adaptation of How to cook perfect banana bread by Felicity Cloake. The original recipe makes great banana bread, but some of the quantities and instructions irk me. This is my slightly edited version so I can refer back to it in the future.

I've changed the amount of butter to a weight rather than tablespoons, and I've given the number of bananas as well as the peeled weight. With the instructions I've shuffled the order around a bit and removed the sifting step, as I find it doesn't make much difference. Enjoy!

  • 160g soft light brown sugar
  • 2 eggs, beaten
  • 60g butter, melted and slightly cooled, plus extra to grease the tin
  • 4 ripe bananas (roughly 350g peeled weight)
  • 180g plain flour, plus extra for the tin
  • 2.5 tsp baking powder
  • 1 tsp salt
  • 50g walnuts, roughly chopped (optional)
  1. Preheat the oven to 170C. Grease and lightly flour a loaf tin about 21x9x7cm.

  2. Put two-thirds of the peeled bananas into a bowl and mash until smooth. Roughly mash the remainder and gently stir it into the smooth mash.

  3. Put the sugar, eggs and melted butter in a large bowl and whisk them until pale and slightly increased in volume.

  4. Fold the bananas, flour, baking powder and salt into the sugar and egg mixture until you can see no more flour, then fold in the walnuts.

  5. Spoon into the tin and bake for about an hour until a skewer inserted into the middle comes out clean. Cool in the tin for 10 minutes before turning out on to a rack to cool completely.

A couple of weeks ago we brewed our first ever batch of beer. Rather than start with something simple like an extract brew, I decided to create my own all-grain American IPA recipe. The beer is ready to be bottled now; then it will need another 2 weeks in the bottles for conditioning. By mid-January we will finally be able to taste it, and hopefully it will at least resemble drinkable beer.

Go is a great language for building network-based applications. It comes with some excellent tools for creating web apps out of the box.

I often want to create a "simple http server" to serve up the current directory. Usually I reach for python -m SimpleHTTPServer, but in the spirit of re-inventing the wheel I decided to see how Go could handle this task.

It turned out to be remarkably simple. Go comes with a static file server as part of the net/http package. In this example I've added a couple of flags that allow specifying the port and the root filesystem path for the process.

// httpserver.go
package main

import (
    "flag"
    "net/http"
)

var port = flag.String("port", "8080", "Define what TCP port to bind to")
var root = flag.String("root", ".", "Define the root filesystem path")

func main() {
    flag.Parse()
    panic(http.ListenAndServe(":"+*port, http.FileServer(http.Dir(*root))))
}

The actual meat of the program is the second line inside the main function. http.ListenAndServe accepts an address to listen on as its first argument, and an object that implements the http.Handler interface as its second, in this case the handler returned by http.FileServer. If ListenAndServe returns an error (most likely because another process is using the desired port) then the process will panic and exit.

If you've got Go installed then this can be run directly.

$ go run httpserver.go

Or you can compile it to a standalone binary.

$ go build httpserver.go
$ ./httpserver

The file server implementation that Go provides even handles serving index.html from a directory if no file is specified, and provides a directory listing if there is no index.html present.

For more details check out Go's implementation of http.FileServer.

The code shown in this article is available on GitHub.

Today I've been playing with the Raspberry Pi starter kit I got for Christmas. It comes with a clear plastic case to mount the Pi on, but as I've already got a case I'm just using the breadboard and the components that were supplied with the kit. As well as the breadboard the kit includes the following components:

  • 12 × LEDs (4 each of Red, Yellow and Green)
  • 12 × 330Ω resistors (for the LEDs)
  • 2 × 10kΩ resistors
  • 2 × mini push buttons
  • 10 × male to female jumper wires

The example I was following is for a simple traffic lights system. The hardware components are wired into the Raspberry Pi board using its GPIO pins. These General Purpose Input Output pins can be controlled using software, so they provide a simple way to connect external hardware to the Pi.

"In addition to the familiar USB, Ethernet and HDMI ports, the R-Pi offers lower-level interfaces intended to connect more directly with chips and subsystem modules." (RPi Low-level peripherals introduction)

Once I got the traffic lights wired up and working I started hacking on the software side of things. The example Python code didn't clean up the GPIO when I pressed ctrl-c, so the lights remained in the position they were in when I interrupted the process. To fix this I installed a signal handler to run the GPIO cleanup code and exit cleanly.

import signal
import sys

import RPi.GPIO as GPIO

def cleanup_gpio(signal, frame):
    print  # print a newline so the prompt isn't left dangling on the ^C line
    GPIO.cleanup()
    sys.exit(0)

# Install signal handler to cleanup GPIO when user sends SIGINT (ctrl-c)
signal.signal(signal.SIGINT, cleanup_gpio)

The code I've been tinkering with is on GitHub. This includes the example C code, which is pretty much untouched other than adding a limit to the number of loops so that the GPIO cleanup code gets run. There's also the example Python code, which is what I've mainly been playing with.

There is a disco.py file in the repository that flashes the lights in sequence; it's essentially a modified version of the traffic lights script.

For my first attempt at Raspberry Pi hacking this was very successful; it's a real buzz to see hardware and software coming together in something of your own making.

The next stage of this project will involve linking this traffic light system into a web service to create a status indicator.

For a while now my shell has been taking a very long time to start up. It wasn't so noticeable on more powerful machines, but on my late 2010 MacBook Air with "only" 2GB of RAM, it was very noticeable.

Last weekend I decided to finally sit down and figure out what was causing the slow startup times. I stepped through the startup process in my dotfiles, commenting out lines until it was fast again, and it didn't take long to find the offending line in my zshrc.

for module ($DOTFILES/**/*.zsh) source "$module"

This line uses shell globbing to load up all the files that end in .zsh in the dotfiles repo. It's pretty central to the way the dotfiles work; without it no custom zsh scripts are loaded. So how could this line be causing such a slow startup?

The way globbing works is that it expands the pattern into a list of files. In this case, because of the recursive ** pattern, it checks every file under the dotfiles directory to see if it ends with .zsh.

This shouldn't be a problem, but I had configured my vim plugins to be installed into a directory within the dotfiles repo, with its contents gitignored (you can see the commit on GitHub). This meant I could install vim plugins without having to track them in the repo. So the shell globbing was checking all of the files in the vim plugins as well as the dotfiles.

To solve this problem I spent an hour removing my vim configuration from my dotfiles and putting it into its own repository. I'd wanted to do this for a while anyway because I found myself changing my vim configuration a lot more than anything else in the dotfiles.

I made this change on my faster work laptop, which doesn't suffer as much from the slow startup. When I got round to pulling the changes down to my Air the difference was very noticeable: new shells pop into existence in under a second, like they should.

It's been 6 months since I made the commit which began the slowdown. I've been putting up with slow shell startups for all that time, when actually it only took a couple of hours in total to identify and fix the problem.

So don't be like me and ignore performance problems when you know they're there; take the time to fix them.

"If we wish to count lines of code, we should not regard them as "lines produced" but as "lines spent": the current conventional wisdom is so foolish as to book that count on the wrong side of the ledger." (Edsger W. Dijkstra)

I take great pleasure in reducing the size of a codebase.

When you first start a project there is a lot of code churn. As your understanding of a problem changes, the code changes and grows with it.

As time goes on you need to remove code that is no longer used. One strategy that I often see is commenting the offending code out, but this adds a lot of noise to the code when you're reading it. A much better strategy is to just remove the unused code; if you need it back you can ask your VCS.

When you don't take the time to remove old code and features, adding new ones later becomes more confusing than it needs to be.

Recently I found myself wanting to access the resque-web UI on a live application. I had considered just running resque-web as a separate process, but after reading this article I realised that I could mount resque directly in the router. Awesome!

However, the application doesn't use devise for authentication, so I wanted an easy way to restrict resque-web to admins.

Using the Rails 3 router's advanced constraints you can pass a :constraints option with an object that responds to matches?, which receives the current request as an argument.

Since the current user's id is stored in the session, we can simply retrieve the user and check if they're an admin.

class AdminRestriction
  def self.matches?(request)
    user_id = request.env['rack.session'][:user_id]
    user = User.find_by_id(user_id)
    return user && user.admin?
  end
end

MyApplication::Application.routes.draw do
  mount Resque::Server => '/resque', :constraints => AdminRestriction
  # Other application routes.
end

The AdminRestriction class performs the actual checks; in the router it is simply passed as a constraint.

First we pull the user_id out of the session, then we attempt to get the user from the database. Finally we check that we've found a user and that they are an admin.

If the user tries to access /resque and they are not an admin, they simply get a 404 error.

This technique can be used with any Rack application, or indeed with any regular route: just pass a :constraints option (see the match method docs). The constraints that you apply can use any part of the request to decide if it matches, so you can restrict access by IP address, or do routing based on the request's geolocation.
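As a rough sketch of the IP address idea (the OfficeOnly class, the route and the address range here are made up for illustration, not taken from the original example), the same pattern applies:

class OfficeOnly
  def self.matches?(request)
    # request.remote_ip is the address the request came from
    request.remote_ip.start_with?('192.168.')
  end
end

MyApplication::Application.routes.draw do
  # Only route to the reports page for requests from the internal network
  match 'admin/reports' => 'reports#index', :constraints => OfficeOnly
end

Requests from any other address fall through the constraint and end up with a 404, just like the resque-web example above.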

The possibilities are endless.

Start at the top level with a user requirement. This will ensure that you are trying to solve the right problem in the first place.

Cucumber / Steak

Write a simple high-level requirement in Cucumber or Steak. For example, suppose you were writing an API and you wanted to allow people to post activity.

First you would write out the requirement in plain English.

Feature: Creating activity
  As an activity producer
  I want to post activity to the api
  In order to share it with the world

  Scenario: POST to /activity
    Given I am an authorized activity producer
    When I post some activity
    Then I should receive a 200 success status

This defines the high-level picture of what the application aims to achieve. From this point the next step is to write some step definitions to properly test that the feature is working.
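Those step definitions might look something like this rough sketch (the create_producer_token helper, the /activity endpoint and the Rack::Test style post/last_response helpers are assumptions for illustration, not part of the original feature):

# features/step_definitions/activity_steps.rb

Given /^I am an authorized activity producer$/ do
  # Assume a test helper that sets up credentials for a producer
  @token = create_producer_token
end

When /^I post some activity$/ do
  post '/activity', { :body => 'Did something interesting' },
       { 'HTTP_AUTHORIZATION' => "Token #{@token}" }
end

Then /^I should receive a 200 success status$/ do
  last_response.status.should == 200
end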

Obviously the features won't be working yet, because that's not how test/behaviour/real-world driven development works. If you start writing code before you really know what you are building, then it is likely that you will at some point, perhaps without realising, implement the wrong feature. More likely though, being the smart cookie that you are, you will implement the correct feature, but unwittingly leave a bug in it. No harm, you say, bugs happen in software and it's probably only one line of code that needs changing. But if the bug doesn't manifest itself for a few days, weeks or months, then it will take an ever-increasing amount of time for you to track it down. Then once you have isolated it, you have to ensure it won't happen again.

This all comes down to testing. If you can easily run a bank of tests that confirm that your application is behaving correctly, you will, in fact, sleep better at night. For some this is reason enough to write tests, but wait, there's more. If you have a comprehensive collection of tests you can use to verify that your application is working correctly, then you are in a very good position to do some refactoring. No more worrying if you've broken some seemingly unrelated piece of functionality when you make a change to the application.

Controller tests

The next layer down the stack is the controller. This makes sense, since this is the layer that manages incoming requests. So after writing the initial acceptance test in a high-level language, then implementing the steps necessary to test this logic in a slightly lower-level language, you now drop down to a relatively granular level. This is where you start to stub out the main functionality of the application. So in the case of our activity example above, you would at this stage be stubbing out the functionality of the models, and just dealing with the way that controllers handle requests.
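A rough sketch of what that could look like as an RSpec controller spec (ActivitiesController, the Activity model and the parameters are assumptions for illustration):

# spec/controllers/activities_controller_spec.rb
require 'spec_helper'

describe ActivitiesController do
  describe 'POST create' do
    it 'responds with success' do
      # Stub out the model layer so only the controller behaviour is exercised
      activity = mock_model(Activity, :save => true)
      Activity.stub(:new).and_return(activity)

      post :create, :activity => { :body => 'Did something interesting' }

      response.should be_success
    end
  end
end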

This layer may be preceded by another, slightly higher-level one in which the routing for the application is set up; however, this is quite a web-application-specific area.

Model tests

The model tests are the core of the testing universe; they are very small, low-level tests of the various methods that a model provides.

The model tests are the most important tests, as they contain the business logic for your application. But they are often also the easiest tests to write, because a lot of the time you will be dealing with primitive data types, with no external services to worry about. This is a good reason to push as much logic into the model as possible: it is far easier to test it there than it is to put it in your controller and then try to stub it out.
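For example, a minimal sketch of a model spec (the Activity model and its validation are assumptions carried over from the earlier example, not from the original post):

# spec/models/activity_spec.rb
require 'spec_helper'

describe Activity do
  it 'is invalid without a body' do
    Activity.new(:body => nil).should_not be_valid
  end

  it 'is valid with a body' do
    Activity.new(:body => 'Did something interesting').should be_valid
  end
end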

Other tests

All of this testing malarkey is great and everything, but sometimes you have got to just use the product to uncover random bugs. Don't worry, that's not to say we can't still use testing to help us. Instead of jumping right in and fixing that bug like a good boy scout hacker, write a failing test that reproduces the bug. This test may exist at (m)any layer(s) of the test stack. Often you will need to write a test in the acceptance layer to perform the same interaction that caused the bug; then, once that has caught the bug, go down the layers and write tests for the code paths that the bug touches. These failing tests will then (all going to plan) be green once you have fixed the bug, and as a billy bonus you've got yourself another test or two that will ensure many good nights' sleep in the future.
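As a quick sketch of that workflow (the bug here is invented purely for illustration), a lower-level regression test might end up looking like this:

# Suppose an acceptance test caught that activity with a whitespace-only body
# was being accepted. Pin the bug down with a failing model test, then fix it.
describe Activity do
  it 'is not valid when the body is only whitespace' do
    Activity.new(:body => "   \n").should_not be_valid
  end
end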

Final thoughts

Testing as a pattern of development is still not the default technique. Many (most) tutorials you find out there focus on the functionality they are trying to teach you, but a vital part of implementing any functionality is to ensure it has a long lifespan by testing it thoroughly.

Testing will seem like a chore at first, but once you get the hang of it the whole process becomes a pattern that, followed with a small bit of discipline, will lead you to enlightenment: it allows you to model your ideas with more structure using the tools available.

In many ways it's like learning a new language: with more languages you see more ways of doing things, and with testing you'll see new ways to express your application's logic at a high level.

This blog now lives at a new address, hecticjeff.net. If you're reading this then you are already here! I'm still using Heroku to host the blog; it's quite simply the best (only?) way to host Ruby apps in the cloud. If you want to get in touch you can now email me at self at hecticjeff dot net.

The blog is going to get some love soon. Comments will be the first thing to activate; as I'm using the excellent toto from cloudhead I will most likely opt for the wonderful Disqus comment system. That's priority 1. Priority 0 is to write some more posts, but I've been settling into a new job (with the awesome Simpleweb crew) in a new city (Bristol) recently, so this blog has been neglected to say the least. But its time has come.

Update: The node.js API is in constant flux until it reaches a 1.0 release. The code in this post doesn't work with the latest releases of node (it was written for the 0.1.3x series), however the concept is still the same. The best place to start is the node.js API docs.

I've only just got round to doing some coding with Ryan Dahl's great node.js project, even though it has been around for about a year now. I've put together this short introduction to give you a taste of what node is about.

The project is under heavy development and the API is still changing quite regularly, but don't let that scare you off: node is stable and usable now.

For those that don't know, node.js is a project that brings JavaScript to the server and into the realm of PHP and Ruby. However with node there is one big difference: it is based around the concept of events. Just like in the browser, JavaScript waits for events to be fired, then invokes the function that has been attached to the event.

Node uses this to its full advantage. Most operations that you perform in node have a callback associated with them, which means that node doesn't have to wait while it completes some time-consuming task; it can go about serving other requests, then when the event is fired it can go back and execute the function attached to the event. This leads to very fast response times when using node as a web server, as it can handle thousands of requests per second.

The program can be installed easily on OS X (using Homebrew or compiling from source) and Linux (compile from source); Windows support is planned for the future, but is currently non-existent. Once installed you have access to a command line utility, node, which runs .js files through the node interpreter.

A basic node hello world program could look like this:

var sys = require('sys')

function sayHello (name) {
  return 'Hello ' + name + '!'
}

sys.puts(sayHello('World'))

Save this into a file and then run it from the command line like so:

$ node hello.js
Hello World!

As you can see it's just JavaScript syntax, just like you would use in a browser, but the first difference the astute reader will have noticed is the require() statement. This is built into node and conforms to the CommonJS Module specification. This version of require is slightly different from the one you might see in Ruby or PHP: it actually returns an object, which you can assign to a variable or use directly:

require('sys').puts('Hello World!')

Each module is a self-contained unit of code. It has its own private scope, so it can define functions to use internally, then expose public functions to be used externally via the exports object. Here's an example of a simple module:

var calculator = {
  add: function (a, b) {
    return a + b
  },
  subtract: function (a, b) {
    return a - b
  },
  multiply: function (a, b) {
    return a * b
  },
  divide: function (a, b) {
    return a / b
  }
}

process.mixin(exports, calculator)

The last line may seem a little strange, but all it does is extend the exports object with the properties defined in the calculator object; if that line wasn't there the module wouldn't return anything, as it wouldn't export anything. To use our new (rather basic) module we'd do something like this:

// Assuming that calc.js is in the same dir as this file
var calc = require('./calc')

// Now we can use the calc object to do calculations, joy!
var ten = calc.add(3, 7)
var two = calc.subtract(5, 3)

This overview barely scratches the surface of what is possible with node.js. The API documentation contains all you need to know about node, and it is all on one (albeit rather long) page, so once you've read through it you should have a pretty good idea how you'd go about coding for it.

There are already some very interesting projects using node available to try. A quick glance at the node wiki will give you a glimpse of what is possible with this fantastic new technology, as well as some more background information on the project.