Twees

Wow, people can make climbing trees into a commercial opportunity.

Here’s a classic software engineering story – Anthony does the intro and DaveM delivers the punchline. Next time you’re chasing down a bug which “can’t possibly be happening”, bear this one in mind. Of course, every other time it’ll be a bug in the code. But this once, the code was fine. Crazy stuff!

JQuery is a pretty nice bit of software. It allows you to query your Java codebase using Prolog-like questions, and it integrates into Eclipse to provide dynamic views across your code base.

I remain undecided whether a bad language with good tools is preferable to a good language with bad tools. People who get paid to code in Smalltalk/Lisp can sit back and be smug.

3 replies on “Twees”

Dave and I were talking about the bug as part of an informal project retrospective today. Something I found interesting: the fact that it turned out to be bad hardware didn’t really change the attitude of the user/site who were beta-testing the software. They still perceived this particular test experience as one of having to run a buggy version of software that crashed too often and made their work unpleasant.

It’s an understandable reaction. Our product was the only software being run on that machine that pushed it hard enough for the memory errors to manifest. The OS and standard tools weren’t having any problems, because they didn’t need GBs of RAM. The computer was only broken when it ran our software – so practically speaking, our software was broken.

It makes us wonder whether building in some kind of memory test, to diagnose and report such a problem, is a reasonable thing to expect of software like ours.

That’s interesting. I guess there’s a whole spectrum of “how much do you trust the platform” and the correct answer is going to be different for a word processor, for the control software on a space probe, and for a medical application. There’s that risk/effort balance again. Next time, it’ll be a dodgy hard-drive rather than memory though.

Extrapolating a bit, if an application is running with gigabyte datasets for long enough, surely at some point alpha particles or power cuts have to feature in your product plans. As you rightly point out, customers are buying “solutions” not “software”.

That reminds me of the quote along the lines of “there is no such thing as an infinite loop – the computer will crash long before that”.

We experience exactly the same problems with shipping PC games. How much do you trust the user’s (video) drivers? Glitchy games always get interpreted as buggy coding even though 90% of the time the problem is related to either:

(a) The user still having five year old drivers installed when they bought their system OR
(b) Lunatic-fringe buggy beta drivers downloaded from whatever l33t site the user prefers.

With older games we used to enable hardware acceleration only for cards in a recognised database. This had the interesting side effect that upgrading your graphics card caused FIFA 2000 to look crap, because your new card wasn’t recognised and therefore you got software rendering.
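The logic behind that side effect is easy to see in a sketch. This is not EA’s actual code – the card names and function are invented – but it shows how any whitelist approach penalises hardware newer than the list:

```c
/* Hypothetical sketch of the whitelist approach described above:
   hardware acceleration only for cards we recognise, software
   rendering for everything else. Names are invented. */
#include <string.h>

static const char *known_cards[] = {
    "Voodoo3", "RIVA TNT2", "Rage 128",   /* illustrative entries */
};

enum renderer { RENDER_SOFTWARE, RENDER_HARDWARE };

enum renderer pick_renderer(const char *card_name)
{
    size_t n = sizeof known_cards / sizeof known_cards[0];
    for (size_t i = 0; i < n; i++)
        if (strcmp(card_name, known_cards[i]) == 0)
            return RENDER_HARDWARE;
    /* An unrecognised card -- including any card released after this
       list was compiled -- falls back to software rendering, which is
       exactly the "new card looks worse" effect. */
    return RENDER_SOFTWARE;
}
```

So a brand-new GeForce would miss the lookup and drop to the software path, while a three-year-old Voodoo sailed through accelerated.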

If you play any EA Sports PC game, you’ll notice that we ship with the default graphics quality set to “utter crap”.

I’m not sure that in your case shipping a test suite will solve the problem – how often is this likely to happen? Is it the case that shipping a test suite could result in false positives, merely causing the user to believe your test suite is buggy? How much of the system are you going to test – what if the memory’s fine but the hard drive gives wrong data occasionally? Would it be better to ship a 3rd-party test suite, so that at least you disassociate your brand from the problem?

Fixed hardware consoles solved this problem for us, but with the proliferation of online gaming, the can of worms is about to be reopened…
