The best programming book I’ve ever read: Beautiful Code

Beautiful Code is probably the best programming book I’ve read (and I’ve read a lot of them!).

Bookshops are filled with books about the mechanics of programming (“Learn X in N days”), but there are precious few which manage to capture the thought processes and the wisdom of good programmers. Reading “Beautiful Code” is like spending several hours in the pub in deep conversation with a bunch of really sharp programmers. If you care about programming, it’s a must-read.

When you write code, you make lots of decisions. Some are small scale, like whether to split a long conditional across two lines. Others are larger scale, like how to model concurrency in a large system. Ideally, each decision you make should be an informed one, where you are aware of the trade-offs involved. However, that thought process is rarely captured in source code. When another programmer reads your source in six months, they are probably going to wonder why you did it that particular way instead of a different way.

So that’s the great thing about “Beautiful Code”. There are 33 articles, all written by programmers about systems they’ve built and care deeply about. The articles talk about the trade-offs and choices they made. Some articles focus on the design of small bits of software, such as Python’s dictionary implementation. Others focus on cool new techniques, such as Simon Peyton Jones’ article on transactional memory. There’s an amazing article about designing software for Stephen Hawking’s computer system (he can only operate a single input button). These are not trivial toy examples; they are real world systems with history and design.

There’s both a breadth and a depth to this book which really impresses me.



Unix tools, and how I use them

I love the unix philosophy: carry around a small set of tools and use them to build bigger custom tools to solve problems. Over the last few years, I’ve added the following programs to my ‘must have’ list:

strace – trace system calls

strace tells you which system calls a process is making. It gives loads of information about errant processes – is it blocked on network or file I/O? Is it stuck in a loop? I used this recently to find out why ssh logins were slow on one of my machines. I used “ps ax | grep sshd” to find the pid of sshd, then ran “sudo strace -f -t -p PID”. The “-f” means to also trace any child processes, and “-t” gives timestamps. This showed that sshd was doing a reverse DNS lookup when I logged in (the DNS was set up wrongly) and also that the default ubuntu .bashrc takes a good while to run.
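If you want to get a feel for the output before attaching to a daemon, you can trace a one-shot command instead. A minimal sketch, assuming strace is installed (the /tmp path is just an example):

```shell
# Trace a short-lived command rather than attaching to a running daemon.
# -f follows forked children, -t timestamps every call, and -o writes the
# trace to a file instead of cluttering stderr.
strace -f -t -o /tmp/true.trace true

# Each line starts with a timestamp, then the syscall and its result:
head -n 3 /tmp/true.trace

# Attaching to a live sshd works the same way, but needs root:
#   sudo strace -f -t -p "$(pgrep -o -x sshd)"
```

As an aside, “pgrep -o -x sshd” (oldest process with that exact name) is a tidier way to find the pid than piping ps through grep.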

lsof – list open files

lsof is useful in conjunction with strace; strace will show you that a process is reading on file descriptor 7, but what is that used for? Running “lsof -p PID” will tell you what each file descriptor is connected to.
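On Linux you can cross-check what lsof reports against /proc, which exposes the same descriptor-to-file mapping directly. A quick sketch:

```shell
# Each open file descriptor of a process appears as a symlink under
# /proc/PID/fd; here $$ is the pid of the current shell.
ls -l /proc/$$/fd

# Descriptors 0, 1 and 2 (stdin, stdout, stderr) usually point at your
# terminal; anything else is a file, pipe or socket the process holds.
```

lsof presents the same data more readably, and works across unixes that lack /proc.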

cstream – filter, monitor and bandwidth-limit stream

cstream is great for monitoring long running jobs. I use this often to monitor the progress of mysql imports from mysqldump files. E.g. “cat dump.sql | cstream -l -T1 | mysql DATABASE” lets me know how much of the file has been processed so far.

You can also use cstream to bandwidth-limit a stream, but I tend to do my bandwidth limiting via rsync (--bwlimit) or scp (-l).

socat – like netcat, but better

socat is like netcat, but generalised to processes, files, sockets and more. It also doesn’t buffer output in the same annoying way that netcat does, which makes it more useful for creating mock servers. For example, I recently used it to create a dummy HTTP server for testing erlang’s inets library:

Create a script called “reply-204” containing

sleep 1
echo -ne 'HTTP/1.1 204 No Content\r\nSomeHeader: foo\r\n\r\n'
sleep 100

.. then run “socat tcp-listen:9999,reuseaddr exec:./reply-204”.

watch – run a command repeatedly

I used to write this loop a lot: “while true; do ls -l somefile; sleep 1; done”. Now I just use watch, for example “watch -n1 ‘ls -l somefile’”. The “-d” flag is also useful – it highlights the differences between runs.

Commands which run other commands are the happiest commands in the world.

iftop – what traffic is going where?

iftop is like top, but for network traffic. Great for getting a quick overview of why your network connection has suddenly slowed to a crawl. Also good for noticing weird connections (as in: why is my machine sending traffic there?).


tcpflow – capture and reassemble TCP streams

Like tcpdump, tcpflow captures network packets. Additionally, it stores each “conversation” in a separate file, which makes it easy to analyze further.

Whilst running tcpflow a minute ago, my browser happened to request this page and tcpflow let me see that it returns this header: “Server: Modified Atari-ST”. Do you think it’s true?

iperf – how fast can my network go?

iperf is a simple end-to-end network performance tool. It answers the question: what’s the maximum transfer rate between two machines? I recently moved all my photos and videos onto a separate media server box, and loading up big jpegs was taking a few seconds. I used iperf to check my actual network speed, but the performance was already pretty close to the theoretical maximum. Sadly, moving lots of bits just takes a while!

Not forgetting

  • My favourite grep flags: “-o” to only show the matching text, and “-P” to get perl regexps (e.g. non-greedy quantifiers)
  • My favourite cat flags: “-T” to show tabs as “^I” … useful for eyeballing tab-separated files
  • My favourite less flags: “-S” truncates long lines rather than wrapping them.
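A quick demonstration of those grep flags (the “-P” one assumes GNU grep):

```shell
# -o prints each match on its own line instead of the whole matching line:
printf 'foo=1 bar=2\n' | grep -o '[a-z]*=[0-9]*'
# → foo=1
# → bar=2

# -P switches to Perl regexps, which adds non-greedy quantifiers like .*?
# (plain grep would greedily match the whole '<a><b>' here):
printf '<a><b>\n' | grep -oP '<.*?>'
# → <a>
# → <b>
```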

Red ruby pills

It’s a bad night’s coding when your new app crashes the ruby VM … 🙁

The app in question is a flexible, actor-style process manager. It’s great. Only it crashes the VM. Sigh – it’ll have to wait for another day.

deadlock 0xb7cb8484: sleep:-  - /usr/lib/ruby/1.8/monitor.rb:285
deadlock 0xb7cb8628: sleep:-  - /usr/lib/ruby/1.8/monitor.rb:285
deadlock 0xb7cdc708: sleep:J(0xb7cb8088) (main) - ./actor.rb:34
deadlock 0xb7cb8088: sleep:-  - /usr/lib/ruby/1.8/monitor.rb:285
deadlock 0xb7cb81f0: sleep:-  - /usr/lib/ruby/1.8/monitor.rb:285
deadlock 0xb7cb8308: sleep:-  - /usr/lib/ruby/1.8/monitor.rb:285
/usr/lib/ruby/1.8/monitor.rb:285: [BUG] Segmentation fault
ruby 1.8.6 (2007-06-07) [i486-linux]

Aborted (core dumped)

Erlang and Amazon’s S3 Storage Service

Hurrah, Erlang OTP R12B-0 has been released! The new version includes my fix to the http client. This is good news because it means that my erlang bindings for Amazon’s S3 storage service are now usable without having to patch OTP. Check out the retro distribution page for s3erl. Instructions in the README.

At the moment, the s3 library allows you to
– create, list and delete buckets
– read, list, delete and write objects

Things which I’d like to add if I get the time
– Improved error handling
– Support for EU-based S3 buckets. (not too hard)
– Support for streaming objects to/from disk (inets supports streaming to disk, but not from disk just now) (harder)

UPDATE: You can now get the source from the public mercurial repository; “hg clone”.