I like discovering the story behind words. A few months ago, I realised that “mer” is Greek for “parts”, so “polymer” and “monomer” just mean “many parts” and “one part”. Every word – whether borrowed from another language or “native” – must ultimately have been made up by someone at some point. The problem is that the birth of most words is not recorded. You might be able to guess why a word was chosen (like “anteater”), but I think it’s unusual to be able to pinpoint exactly when a word first entered the language.

I’ve been reading a recent biography of Michael Faraday, who did a lot of important early work in electromagnetism, and from this book I learned the following historical nugget. By 1831, a lot had been discovered about the nature of electromagnetism, but the language used to describe the phenomena hadn’t caught up. People were often using analogies to water – they talked of “electrical fluid” – but this analogy could be confusing. Faraday started a correspondence with Revd William Whewell at Cambridge Uni, which I’ll paraphrase:

FARADAY: “I think we need some names for the terminals of a battery. I’ve come up with exode, zetode and zetexode, but I also think westode and eastode are pretty catchy too”

WHEWELL: “Hmm, that’s a bit of a mouthful. How about anode and cathode? Pretty solid Greek background for those words”

FARADAY: “Cheers Will, but the guys down at the pub weren’t very convinced by anode and cathode. They laughed at my poor use of Greek”

[ FARADAY goes away and writes a paper using the terms DEXIODE and SKAIODE instead ]

WHEWELL: “Let me give you a quick crash course in greek, and you can tell your mates where to stick their criticisms”

FARADAY: “Y’know Will, I think you might just be right after all”

And that’s how the words “anode” and “cathode” first entered the language of electricity. That’s why we have “cathode ray tubes”. Not content with that, Faraday and Whewell went on to add the words “ion”, “dielectric”, “diamagnetic” and “paramagnetic” to the language, all terms which are still used today when describing electricity and magnetism.


The Future

I always liked Technetcast and now I’m listening to its spiritual descendant, IT Conversations. It’s a collection of presentations and lectures from various conferences around the world, all in mp3 format. So I copy them onto my mp3 player, and I get to listen to Steve Wozniak telling his life story while I’m at the gym, or Stephen Wolfram talking about physics as I cycle somewhere.

This is the kind of thing that The Future always promised when I was younger. I can have video chats with my family over the internet. I can listen to audio lectures anywhere I want on a tiny portable device. I can check my email using GPRS halfway down a mountain biking trail. I can watch a degree level course on electromagnetism from MIT whenever I want, from the comfort of my own flat.

Okay, so we don’t have teleporters yet. And hard AI didn’t do so well. But, all things considered, the Future is doing pretty well.


Extreme Extreme

A while ago, I had the idea of always giving your customers a specially instrumented version of your product. The application would know which lines of code had been executed during your testing phase, and whenever the user ran a bit of code which hadn’t been tested pre-release, a dialog box would pop up saying “Hey, you’re running code which we didn’t bother to test!”. How would you feel about doing that for your application?
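The idea above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration – the function names and the `TESTED_FUNCTIONS` set are made up, and a real build would generate that set from actual coverage data rather than hard-coding it:

```python
import functools
import warnings

# Hypothetical: in a real product this set would be generated automatically
# from coverage data collected during the pre-release testing phase.
TESTED_FUNCTIONS = {"load_document", "save_document"}

def warn_if_untested(func):
    """Wrap a function so the user is warned when they run code
    that was never executed during the testing phase."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        if func.__name__ not in TESTED_FUNCTIONS:
            warnings.warn(
                f"Hey, you're running code we didn't bother to test: "
                f"{func.__name__}!"
            )
        return func(*args, **kwargs)
    return wrapper

@warn_if_untested
def export_to_pdf(doc):
    # A made-up feature that (in this sketch) never ran under test.
    return f"exported {doc}"
```

Real coverage tools work at line granularity rather than per-function, but the shape of the trick is the same: ship the coverage map with the product and compare against it at runtime.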

Anyway, I stumbled across Guantanamo today, which takes that one step further. It runs your unit tests, finds any lines of code which haven’t been executed during the tests *and deletes them* from your source code. If they’re not being tested, they’re not allowed to exist. Heh, how cool is that?


Method & Creativity

I find performance work quite relaxing and satisfying. I think there are several reasons for this. Firstly, it is very methodical work. It goes like this:

1. Establish which particular aspects of your application are too slow.
2. Establish what would constitute “fast enough”.
3. Create easy-to-run test cases for each area that needs speeding up (eg. application startup time).
4. Run a profiler to see what your application is actually doing during that time.
5. Based on that evidence, make changes to the application.
6. Repeat until things are fast enough.
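Steps 3 and 4 can be sketched with Python’s built-in `cProfile` and `pstats` modules. The `application_startup` function here is a made-up stand-in for whichever slow area you are investigating:

```python
import cProfile
import io
import pstats

def application_startup():
    # Hypothetical stand-in for the slow operation under test,
    # e.g. loading configuration and building caches.
    total = 0
    for i in range(100_000):
        total += i * i
    return total

# Step 3: an easy-to-run test case wrapping the slow area.
# Step 4: profile it to see what the application is actually doing.
profiler = cProfile.Profile()
profiler.enable()
application_startup()
profiler.disable()

# Print the five entries that account for the most cumulative time.
stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream)
stats.sort_stats("cumulative").print_stats(5)
report = stream.getvalue()
```

A dedicated tool like VTune gives far richer data (hardware counters, per-thread views), but the workflow is the same: wrap the slow area in something repeatable, then measure rather than guess.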

It is very methodical, but enjoyable too. When you are identifying the performance bottlenecks, you have powerful tools (such as Intel’s VTune) at your disposal and they do most of the hard work for you. It is always very satisfying to use good tools.

These tools produce a large amount of data, and you have to put on your “investigation” hat to interpret the raw data. I enjoy this phase, partly because I know that all the relevant information is available to me, and partly because it lets you see your application from a different angle. It’s like exploring a landscape, building up a map of an area that you only vaguely knew before. I am always very familiar with the static structure of the applications I work on (what each bit of the code does, and how they fit together), but it’s only when I am doing performance work that I look at the big-picture dynamic structure of the application.

During this investigation phase, some of the facts which reveal themselves are expected and familiar – for example, you would expect a game to spend a lot of its time drawing graphics. These points act as signposts to me – known markers in the large mass of information. But then when you continue drilling down, you often hit surprises. Beforehand, you probably had suspicions as to where the time was being spent. I have to say that my suspicions have always been consistently wrong – to the point that I no longer bother thinking about them too much – I just run the profiler. During performance work, I find myself alternating between thinking “yup, that all seems reasonable” and “oh gosh, what’s going on there?”. It’s like some kind of crazy jigsaw puzzle. Sometimes weird stuff shows up; people accidentally leave bits of code in the application which shouldn’t be there. Mostly however, the code is basically fine, but with your performance-tuning goggles on (and the benefit of hindsight) you can see how things can be restructured to make them more efficient.

Once you’ve explored around for a while (a fine opportunity to drink some tea) you end up with probably a few areas which you think could be improved. Now comes the creative side of performance work, because at this stage you have lots of choices. You could make the program do the same amount of work, but faster. You could make the program avoid having to do so much work. You could delay the work until later, waiting for a time when a delay wouldn’t be so important, and possibly never having to do the work at all! There are other angles to approach this from. It is often not the absolute performance of your application which is critical – it is the user’s _perception_ of the performance which matters. This is why an application will display a progress bar during a long operation. This is why we have splash screens during application startup. This is why lift engineers put mirrors in lobbies near the lifts. People hate having to wait, and they perceive an application which makes them wait as being slow.
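The “avoid the work” and “delay the work” options both have a classic shape: compute lazily on first use and remember the result. Here is a minimal sketch, with made-up names (`Document`, `preview`) standing in for whatever expensive thing your application builds:

```python
import functools

@functools.lru_cache(maxsize=None)
def expensive_report(month):
    # "Avoid the work": this only ever runs once per distinct month,
    # and not at all if the user never asks for that report.
    return sum(i * i for i in range(10_000)) + month

class Document:
    """Delay building the preview until someone actually looks at it."""

    def __init__(self, text):
        self.text = text
        self._preview = None  # nothing computed at load time

    @property
    def preview(self):
        if self._preview is None:           # do the work on first access...
            self._preview = self.text[:10].upper()
        return self._preview                # ...and reuse it afterwards
```

If the user closes the document without ever opening the preview, that work simply never happens – the best optimization of all.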

So, the “make things appear less slow” stage can involve a huge range of techniques, from low-level assembler optimizations, through design changes, right up to HCI tricks. You have a real opportunity to pull out your toolbox and go to work. But at all stages, you can always go back and test how well you’ve solved the problem. It’s a lot easier than, for example, choosing which new features to add to an application.

Right, that’s nearly the end. This turned out not to be so much about profiling itself, but more about my reaction to the task. I’m not sure why I’ve recently turned my attention away from technology towards my reaction to technology. Partly it is triggered by the title of a famous paper by Dijkstra, “Programming Considered as a Human Activity”.

Yes, I am still strongly interested in technology and I think there are great gains still to be made by improving our technology. But at the same time, it is we human beings who use the technology, and we who are affected by its results. Software projects are carried out by humans who get bored sometimes, excited sometimes, and are most definitely not robots. A software methodology which treats team members as interchangeable automata is doomed to failure, because it would crush the spirit of each team member. But on the flip side, you need some structure in order to harness the team’s energy and coordinate their efforts. I think the best gains are to be made by adopting methods which amplify and harmonize the efforts of individuals, rather than focusing on process and expecting individuals to execute that process. I think a useful first step is to be aware of your own rational, emotional and physical reactions to the work you do, and try to avoid the nasty “distraction field” which seems to operate around computers. Oh look, another superficially interesting article on slashdot! There goes another few minutes of my life!