Node.js and a changing world

When node.js was released in 2009, I didn’t understand why it got so much hype. I understood what it did, but since it didn’t let me do anything I couldn’t do before, it just wasn’t very interesting.

But since then I’ve put a few thoughts together:

1) Javascript *is* the ubiquitous client-side ‘platform’ in today’s web world. You might not have chosen for things to be that way, but we could be a lot worse off. So, in today’s world, everyone has to know javascript.

2) Javascript is today what BASIC was in the ’80s. You get immediate feedback without compile/deploy cycles. You can futz around without any infrastructure. You don’t type ‘RUN’ any more, you just hit refresh in your browser. I learned BASIC then Z80 assembler as my first languages. Today, you’d learn javascript.

3) Not everyone knows C#/Java/C++/ruby or similar “server side” languages. I do, but only because of my career track to date. Everyone in the world has a different background and different experiences. If you already know javascript, how attractive is it, really, to have to learn a second language just to do server-side programming?

4) Even if you did choose a ‘real’ server-side language, you’d end up with two sets of every library – a javascript version for the client, and a ruby/java/etc version for the server. Two sets of collections, two different network libraries, etc, etc. This is boring. Why not pick one language and use it on both server and client? Oops – except your hand is forced, because history has fixed the client-side language.

5) Event-based programming is an old, old idea. Desktop GUIs have always been event-based; threads only arrived in Windows with, IIRC, Win95. But node.js may well have been many web programmers’ first exposure to the idea. There are upsides and downsides to the event-based approach compared to threads, but if your language runtime is single-threaded, you never even had a choice. If you’re a good salesman, you sell the upsides regardless. Ruby chose multi-process concurrency, whereas node.js chose event-based concurrency (augmented by processes) .. as did windows ..

6) Lots of well-funded people have a vested interest in making javascript run better. Speedups for client-side javascript translate into speedups for server-side javascript. Moore’s law delivered us into an era where even dumb interpreters were “fast enough”. Ruby and Javascript are both “fast enough” for most of the hobby webapps I’ll ever write.

7) The old split of “clever server code” and “dumb client code” is fading anyway. Client code is doing more. Webapps on mobile devices are able to better handle flaky networks. Client side code quickly gets to the stage of needing all the design patterns which ‘real apps’ have used – mvc, events systems, stuff that backbone provides.

So, javascript on the server makes some kind of sense – with or without hype. It’s not a slam dunk though; I still like my advanced static type systems thankyouverymuch. Quite possibly, javascript is more like the assembly language of the internet – something for compilers to target, whether at a superficial level like coffeescript or a deeper level like links.

(Thanks to Cameron for interesting conversations on this topic!)

there is a great difference

For some reason it occurred to me tonight to find the longest common substring in various classical novels.

There’s a bunch of interesting algorithmics involved here. The naive O(n³) approach is slower than a slow snail on a slow day. Dynamic programming gets you to an easy-to-understand O(n²) solution, but that’s still impractically slow for story-length strings. Things start getting really clever with Ukkonen’s suffix-tree algorithm, and now we’re into practical runtime territory. You could get a constant-factor speedup by working on sequences of intern’d words rather than chars, but the suffix-tree version runs in under a minute as-is, so I didn’t bother.
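For the curious, the easy-to-understand dynamic-programming version can be sketched like this (javascript, in keeping with the previous post – this is the O(n²) approach, not the suffix-tree one I actually used):

```javascript
// O(n*m) dynamic-programming longest common substring.
// dp[i][j] = length of the common suffix of a[..i] and b[..j];
// only the previous row is kept, so memory is O(m).
function longestCommonSubstring(a, b) {
  let best = '';
  let prev = new Array(b.length + 1).fill(0);
  for (let i = 1; i <= a.length; i++) {
    const curr = new Array(b.length + 1).fill(0);
    for (let j = 1; j <= b.length; j++) {
      if (a[i - 1] === b[j - 1]) {
        curr[j] = prev[j - 1] + 1;
        // remember the longest match seen so far
        if (curr[j] > best.length) best = a.slice(i - curr[j], i);
      }
    }
    prev = curr;
  }
  return best;
}

console.log(longestCommonSubstring(
  'there is a great difference between',
  'indeed there is a great difference'
)); // → "there is a great difference"
```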

I downloaded and normalised some books from Project Gutenberg and ran my longest-common-substring on them.

  • Emma vs Pride and Prejudice: “when the ladies returned to the drawing room”
  • Dracula vs Frankenstein: “between two and three in the morning”
  • Dracula vs Pride and Prejudice: “but there was much to be talked of”
  • Dracula vs Fiat Money Inflation in France: “the difficulties and dangers of”
  • Treasure Island vs Dracula: “threw himself on his knees and held”
  • Treasure Island vs Jekyll & Hyde: “and to make assurance doubly sure”
  • US Constitution vs Dracula: “to the best of my ability”

.. and last, but most self-referentially not least …

  • Emma vs Frankenstein: “there is a great difference”.

High precision wheels

This evening I was truing one of my bicycle wheels. I started out with a truing stand – essentially, a metal bolt that identifies high spots on the rim by rubbing against them. I managed to get the wheel pretty true. But then I remembered that I also have a DTI (dial test indicator), accurate to 0.01mm. It’s like a red rag to a perfectionist. After a bit of futzing with that, I got the wheel true to within ~0.1mm and stopped there. The first time I hit a bump it’ll blow that precision away! But it got me thinking … how could you automate this (rather laborious) wheel-truing task?

After a bit of thinking, I realised you can utilise the fact that the wheel rim is made of conductive aluminium, and can therefore act as one plate of a capacitor. Sure enough, wikipedia agrees – capacitive displacement sensors are a real product. Unfortunately, they’re quite pricey – $120 – so I doubt I’ll ever get one.

Could I make a homebrew version? I think it’d be hard. Given how small a wheel rim is, you’d probably only manage a plate of 10x10mm. If you got the wheel true to within 1mm, your plate separation would be at most 1mm. That gives a capacitance of about 0.88pF, which sounds pretty small to my ears. Charged up to 5V and draining through a 1M resistor, it’d fall to 1/e (about 37%) of its voltage in a mere 0.88 microseconds – one RC time constant.
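Spelling out that arithmetic (parallel-plate formula C = ε₀A/d, air gap, ignoring fringing fields):

```javascript
// parallel-plate capacitance: C = e0 * A / d
const E0 = 8.854e-12;        // permittivity of free space, F/m

const area = 0.01 * 0.01;    // 10mm x 10mm plate, in m^2
const gap = 0.001;           // 1mm plate separation, in m

const C = E0 * area / gap;   // farads
console.log((C * 1e12).toFixed(2) + ' pF');     // ~0.89 pF

// RC discharge: V(t) = V0 * exp(-t / RC)
const R = 1e6;                       // 1M drain resistor, ohms
const tau = R * C;                   // one time constant, seconds
const tHalf = tau * Math.log(2);     // time to fall to half voltage
console.log((tau * 1e6).toFixed(2) + ' us');    // ~0.89 us (to 1/e)
console.log((tHalf * 1e6).toFixed(2) + ' us');  // ~0.61 us (to half)
```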

The typical low-tech way to measure an unknown capacitance is to stick it into the timing circuit of a 555 astable and measure the resulting pulse frequency. Fine in theory, but I don’t own a frequency counter .. and the frequency would be up in the megahertz range anyway.
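Plugging the numbers into the standard 555 astable formula, f = 1.44 / ((R1 + 2·R2)·C), shows why – the resistor values here are my own arbitrary choice, just to get an order of magnitude:

```javascript
// 555 astable output frequency: f = 1.44 / ((R1 + 2*R2) * C)
const C = 0.88e-12;   // the wheel-rim capacitance estimated above, F
const R1 = 100e3;     // ohms -- arbitrary but plausible values
const R2 = 100e3;     // ohms

const f = 1.44 / ((R1 + 2 * R2) * C);
console.log((f / 1e6).toFixed(1) + ' MHz');  // ≈ 5.5 MHz
```

Even with 1M resistors you’d only get the frequency down to the hundreds of kilohertz.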

Microchip have a nice technote describing how to use a PIC microcontroller to measure small capacitances – and when they say small, they mean sub-pF. I’m still digesting that.

I’m tempted to skip all the tricky stuff and just build a larger-scale version to test the concept … just to see if I can make the basic method work at all. Whilst I started off thinking about measuring distances, it occurs to me that a large-scale variable-capacitance controlled oscillator is … a theremin!