Categories
Programming

Would the engineer please stand up?

I occasionally play with molten metal .. just to fill the time, y’know. I’m about to design and build an electric furnace, because I can use it indoors (and living in Scotland during winter months, this is important).

But, you say, this is meant to be a software engineering blog! Ah yes, I’m going to try to throw some evidence into the “is software engineering really engineering?” debate. I don’t personally hold a strong position on this matter (there’s plenty of more interesting things to talk about) but I think it is useful to draw parallels between traditional engineering and software engineering to see what can be learned.

My electric furnace is going to involve mains electricity, so I’m being really quite careful with it. It’s not a complicated project by any means, and I’ve built a non-electric furnace before, but since this is my first big electric project, I am being very cautious and doing a lot of careful reading and thinking before I start. Gosh, sounds like how you’d approach a software project!

After writing down a rough design, I decided that it would be prudent to think of all the things which might go wrong with the furnace, on the basis that I’d rather not be surprised by any of them occurring later! I brainstormed everything which could possibly go wrong (including some highly unlikely ones like “total structural collapse of the furnace”) and wrote them all down. At this point, I realised that I was really just doing a formal Hazards Analysis. I wasn’t doing it in a perfunctory way, or because someone told me I had to do it. I was doing it because it seemed like a pretty sensible idea, and I didn’t want to electrocute myself later! Later, it turned out that even the unlikely “total structural failure” hazard led me to change the design slightly (by adding an extra power switch at a safe distance from the furnace).

Now, if your company follows certified processes (such as ISO9001) you may be familiar with doing hazards analysis for software. Software often has the potential to cause harm to humans, and this is reflected in the ACM code of ethics with its Asimov-like “avoid harm to others” imperative.

But traditional engineers go much further than this kind of “codifying common sense” which is familiar to their software counterparts. For example, when I am choosing which wire I should use in the furnace, once I have established the maximum current it needs to carry, I can look up a standard table to find out which wire I should use. Well, it doesn’t tell you exactly which product you should use. It tells you which “British Standard” the wire should meet, and you can go to the shops and ask which wire meets that particular standard. So, it’s a bit like interfaces and implementations in software.
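The standard/product analogy maps quite directly onto code. Here’s a minimal Python sketch of the idea; the standard name and the property list are made up for illustration, not taken from any real British Standard:

```python
from abc import ABC, abstractmethod

class WireStandard(ABC):
    """A hypothetical 'British Standard' for wire: the interface.
    Any conforming product must declare these properties."""
    @abstractmethod
    def max_current_amps(self) -> float: ...

    @abstractmethod
    def max_temp_celsius(self) -> float: ...

class BrandXFlexCable(WireStandard):
    """One vendor's product: an implementation of the standard."""
    def max_current_amps(self) -> float:
        return 13.0

    def max_temp_celsius(self) -> float:
        return 70.0

def suitable_wires(wires, required_amps):
    """Choose by the standard's declared properties, not by 'seems to work'."""
    return [w for w in wires if w.max_current_amps() >= required_amps]
```

The engineer shops against `WireStandard`; which vendor’s cable actually turns up is an implementation detail, exactly as with interfaces.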

The guys who wrote the “British Standard” have done a lot of thinking about what properties the wire should have. For example, if you’re looking to wire up a fire alarm system, there’s probably a BS1234 standard which ensures that the wire isn’t going to burn up in a typical house fire.

An electrical engineer would never just pick up a random bit of wire and say “this’ll probably do”. An engineer would appraise the situation, look up the appropriate standard and choose their material based on that information. Later on, they could justify their decision by referring to their appraisal and the standards.

Now consider what a typical “software engineer” does. “Uhh, we need a bit of code which does a linked list. Hmm, here’s one here. Seems to work. Okay, let’s move on”. This doesn’t bear much similarity to other kinds of engineering. “Seems to work” is what is keeping the software industry at its current stage of buggy programs and unpredictable delivery times. “Seems to work” just isn’t good enough.

Now it’s much harder for software bunnies, or so the argument goes, because we produce new and innovative programs. We’re never just “building another bridge” so it’s hard to distill relevant experiences down into tables or whatever. Partly, I agree with this. But I can also see that software writers often waltz by opportunities where a more strict engineering approach could be usefully applied.

Engineering standards and tables of engineering data are essentially “distilled experience”. There are two different examples of “distilled experience” common in software: design patterns and libraries.

Design Patterns can guide you to choose appropriate design elements for your software, and highlight possible difficulties with related approaches. They certainly are an example of “distilled experience”. But currently, they’re a process which you apply to structure your source code. Often once you’ve used a design pattern, it disappears into your codebase – possibly you leave a comment saying “I’ve used the blah pattern here” to note its passing. We should demand languages where the patterns are one of the building blocks we use to compose our programs. If you use a pattern, you should be able to point at it in your codebase, and it should stand out distinctly as “the pattern” rather than being blended in with whatever code is surrounding it.
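You can approximate “pointable patterns” today by packaging a pattern as a named component rather than dissolving it into the surrounding code. A small sketch, using the Observer pattern and an invented `Furnace` example to stay on theme:

```python
class Observable:
    """The Observer pattern packaged as a visible building block."""
    def __init__(self):
        self._observers = []

    def subscribe(self, callback):
        self._observers.append(callback)

    def notify(self, event):
        for callback in self._observers:
            callback(event)

class Furnace:
    def __init__(self):
        # The pattern is pointable at its point of use,
        # instead of being blended into ad-hoc callback plumbing.
        self.overheated = Observable()

    def check_temperature(self, temp_c):
        if temp_c > 1200:
            self.overheated.notify(temp_c)
```

Here “I am using Observer” is a line you can point at, not a comment that drifts out of date.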

The other example of “distilled experience” is libraries. If you want, say, a linked list you’d never consider writing one from scratch since there are many high-quality implementations already written. We should be standing on the shoulders of giants as much as possible. But, then again, with our “engineer’s hat” on we should ask “what does high-quality mean?”. An engineer would check all of the relevant properties of a particular kind of wire before using it – the resistance, the density, how the resistance changes with temperature, maximum operating temperature etc. How many properties would you check for a third-party library? The “it seems to work” property? Does it provide O(n) or O(lg n) access to data? How much memory will it use? Is it thread-safe? If so, what effect does the locking policy have on performance? Does it safely interact with DLLs? All of these questions are ones which have bitten me in the past. What do you know about the people who wrote the library – how much testing has the library had? To what standards was it written? Did they do code reviews? Are the results of the reviews available? Did they track any code quality metrics? Did they provide a list of “common misuses of this library”?

To the software engineer, this might seem like overkill. You just gotta trust that the libraries do what they claim to do, yeah? Well, no, that’s not good enough. Engineers don’t trust. They measure, they quantify, they document what they did and why they did it. There is traceability and there is responsibility.
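Some of those properties can actually be measured cheaply rather than trusted. A rough Python sketch (the function name and the property list are mine, and real benchmarking needs far more care than this):

```python
import sys
import timeit

def profile_container(make_container, n=10_000):
    """Measure a candidate container instead of trusting 'seems to work':
    construction time, mid-element access time, and memory footprint."""
    build_s = timeit.timeit(lambda: make_container(range(n)), number=10)
    container = make_container(range(n))
    access_s = timeit.timeit(lambda: container[n // 2], number=10_000)
    return {"build_s": build_s,
            "access_s": access_s,
            "bytes": sys.getsizeof(container)}
```

Run it against, say, `list` and `collections.deque` and the O(1) versus O(n) indexing difference stops being folklore and becomes a number in your design notes.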

Having worked on a big software project for several years, I’ve learned to respect the required level of “process” and accountability which you must have above the “coding layer” in order to capture and learn from experience (community experience and your own). That level of formality never seems to get carried on down to the level of actual coding, except possibly if you are writing code for the Space Shuttle.

I guess this goes full-circle back to my interest in languages and tools. We can improve the situation by critically examining our tools and practices, and improving them to capture our knowledge and experience. Tools, particularly, shouldn’t be viewed as immutable – they should be flexible and adaptable. Too often, I see examples of primitive tools and ad-hoc rules being used in building software, and they make me wince.

There’s a quote I once read by, I think, Marvin Minsky which went along the lines of “computing is still such a young field – you have to remember that 99% of what you have learned will turn out to be total rubbish”. I feel that way about our current crop of tools.

(Incidentally, I keep starting to write blog entries, and then give up on them once they reach about 1000 words and have become out-of-control essays, sprawling out across multiple topics in an unstructured way. I find that writing a blog has required me to reexamine my own writing skills, and how I think through things. Which is all good for me, but I’m no longer convinced that I can easily produce blog-sized entries without a lot of effort. Oh well, I originally started this blog as a means of keeping track of my computer-related interests .. since otherwise I get to the end of the year and think “what did I do this year?”. It still fulfills that purpose at least).


Back to Basics

I keep getting dragged down to the roots of computer science and other disciplines. If you set out to write a Better Development Environment, you’ll pretty quickly slide away from code-related problems into general problems such as “information visualization” and “cognitive studies”. Playing with programming languages, it’s amazing how quickly everything interesting reduces to theorem proving. In a way, it’s disappointing because it means there’s less virgin territory out there than you’d expect. It means that you’re often better off heading for a library and spending a few hours reading rather than hacking “something new”. Sometimes I feel that I spend more time “reading” than “doing”, because so much of what I could write has already been done before in a research project somewhere.


Creole

Creole is a fusion of Shrimp (a hierarchical visualization technique) with Java. I like reading about environments which are solving problems like this – problems I didn’t even realise I had until they were pointed out. There is a conceptual wall which divides “the tool” from “how you use the tool”. If your only navigation tools are class-hierarchy views and call-graph views, then you spend a lot of time “using” these tools in order to answer the questions which you are asking of the code. Wouldn’t it be better to use more powerful tools, so that you can actually ask the question directly? That’s what JQuery is all about too. There are lots of cool UI ideas and metaphors in these two projects.

I’ve often “watched myself” using a development environment and realised that I have a strong sense of a “current working set”. I think that the default “everything is visible” notion which most development environments present isn’t very useful. I would like to start with an empty canvas, and drag a few methods, classes or JQuery-style dynamic queries onto my canvas. At any one time, I’d be working with a canvas which contains only information relevant to my current train of thought. I would probably switch between canvases as time progresses and I start working on different parts of the system. In fact, it may be quite useful to share canvases between developers, perhaps as a way of guiding new developers .. “Bob, here’s the set of methods which you should look at before making your change”.

As an aside (of interest to Voxar folks), there is a “filmstrip” metaphor used in Shrimp which is really similar to Live Images in Voxar3D. If I knew more about formal HCI, I’d know where to look for more “GUI design patterns” such as this. I think this particular one is quite useful and I’d like to see more.


It’s MetaTurtles, all the way down

(This is a postscript to my previous entry about “compiler compilers”)

A baby version of English, like the one understood by SHRDLU can be used to construct sentences about colored blocks. It is a language and as such, it has syntax rules (which tell you how to distinguish a well-formed sentence from gobbledygook) and it has semantic rules (which tell you what a well-formed sentence actually means). This particular baby language is pretty much restricted to talking about blocks, positions and colors.

Java is a baby language too. It lets you construct sentences about computations – such as “do something five times”. It’s a language, and like all languages it has syntax rules and semantic rules.

There are languages such as BNF which allow you to describe syntax rules, in the same way that “baby english” allowed you to talk about shapes. So, a sentence in BNF could describe the syntax of the Java language. Like any language, BNF has syntax rules and semantic rules which tell you what a valid BNF sentence looks like, and what it means. Let’s just clarify the previous-but-one sentence – a syntactically valid BNF sentence can be “interpreted” (according to the semantics of BNF) to describe the syntax of the Java language.
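To make this concrete, here is what a “sentence in BNF about Java” looks like – a deliberately simplified fragment I’ve written for illustration, not the official Java grammar:

```bnf
<statement>      ::= <while-stmt> | <if-stmt> | <block> | <expr-stmt>
<while-stmt>     ::= "while" "(" <expression> ")" <statement>
<if-stmt>        ::= "if" "(" <expression> ")" <statement>
                   | "if" "(" <expression> ")" <statement> "else" <statement>
<block>          ::= "{" <statement-list> "}"
<statement-list> ::= <statement> <statement-list> | <empty>
```

Each line is a BNF sentence whose meaning (under BNF’s semantics) is a rule about which strings count as well-formed Java.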

Now, for the first time, we have a chain; sentences in one language (BNF) are describing the syntax rules of another (Java). And those syntax rules just tell us which sentences make up valid Java programs. When we have a chain, some people start using the word meta – i.e. BNF is a meta-language.

(Question 1: What language do you use to describe the syntax of BNF?)

There are also languages such as OpenJava. They let you construct sentences about Java programs – for example, “add a new method foo() to class Bar()”. OpenJava is certainly a language – it has well-defined syntax rules and semantic rules. However, we’ve also got a chain here – sentences in the OpenJava language are referring to sentences in the Java language. It’s another case of a meta-language.

Earlier, we were looking at “operational semantics” and we noted that it was just a language too. It has syntax rules and semantic rules, like any other language. These rules are what you’ve understood when you can “read” operational semantics and know what they “mean”. A sentence in the “operational semantics” language describes the “meaning” of a programming language. Note that the “operational semantics for Java” sentence doesn’t describe the meaning of a particular Java program. It describes a /mapping/ from syntactically correct Java programs to programs which run on a “hypothetical machine”.

And, of course, these “hypothetical machine programs” are just sentences in another language – one which has its own syntax and semantics. It’s turtles all the way down!

(Question 2: What language do you use to describe the semantics of “operational semantics”?)

Now let’s leave programming languages for a moment and look at Proper Grown-up English. It’s a pretty crazy language. The syntax rules are mixed up (and gradually changing), and the semantics are subtle, ambiguous and take years to grasp. But one of its cooler properties is that sentences in the English language can refer to other sentences in the English language. We can say things like ‘”nearly finished now” has three words’. We can also get really self-referential and have sentences which describe themselves, such as “multisyllabic”. Our dictionaries define the meaning (i.e. the semantics) of English words by using English! There’s clearly a bootstrapping problem here – you couldn’t learn English from a dictionary unless you already knew some English. But English is clearly a self-describing language, at least to some extent.

So, let’s return to the two questions we left above. One obvious answer to the questions would be to use “English” to describe the syntax of BNF, and semantics of “operational semantics”. That’s the approach that a CS lecturer would take.

But, you can actually use BNF to describe the syntax of BNF. The expressive power of the language is enough, and the structure of the language is simple enough, that it can describe itself. BNF can be used to describe the syntax of any context-free language. BNF itself is a context-free language. Therefore it can describe its own syntax.
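Here is one possible self-description (mine, and deliberately incomplete – `<name>` and `<characters>` are left undefined to keep it short):

```bnf
<grammar>      ::= <rule> | <rule> <grammar>
<rule>         ::= <nonterminal> "::=" <alternatives>
<alternatives> ::= <sequence> | <sequence> "|" <alternatives>
<sequence>     ::= <term> | <term> <sequence>
<term>         ::= <nonterminal> | <terminal>
<nonterminal>  ::= "<" <name> ">"
<terminal>     ::= '"' <characters> '"'
```

Every line of this grammar is itself a `<rule>` according to the grammar – the snake eats its own tail.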

Before we try to answer the second question, we should look more closely at what we mean by “semantics”. A few paragraphs ago, we noted that the operational semantics for Java defined a /mapping/ from syntactically valid Java programs to programs which run on some hypothetical machine. Haven’t we just moved the goalposts? We’ve not actually established some Platonic “meaning” for each Java program. We’ve just restated the problem in terms of some “hypothetical machine”.

No, we’ve not cheated. This is an intrinsic property of semantics. You can’t attain a “Platonic ideal” of the semantics of Java. You can only restate the problem in some other form. Operational semantics tells you how to interpret (syntactically correct) Java programs as programs on a hypothetical machine. Denotational semantics tell you how to interpret them as mathematical domains. But it’s as if you are translating a story between English, French and German, hoping to get closer to the true meaning.

At some point, you just have to “get” one of the languages, whether that’s English or some other language, and be content that you understand the version of the problem restated in that language. You didn’t learn English (or whatever your first language is) by studying grammar books and reading dictionaries. You learned it through some “side channel” – behavioural reinforcement as an infant.

So, let’s finally return to the one outstanding question. Can “operational semantics” be used to define the semantics of “operational semantics”? Hmm, I should leave this as an exercise for the reader. I myself will return to the question once my head has stopped hurting!


Yet another Compiler Compiler

If you are ever foolish enough to wake one day and think “I’m going to write a compiler for the FooBar language”, you will soon find yourself well acquainted with the “FooBar Language Specification” document. In theory, this document tells you exactly what a program can look like, and how all the bits work. When you start writing the compiler, you’ll spend a lot of time checking little details in this document.

Boring, huh? You have the Holy Grail of Software Engineering (a complete and accurate specification!) but yet you’ll still need to read it, chew over it for a while, and then spend a good few months whacking keys like a little code monkey. If you watch someone write a compiler, they’ll read a bit of the specification, make some changes to the compiler, then repeat ad nauseam. What thought processes are going on in their head to convert this specification document into a compiler? Can’t we get the computer to do all the hard work, so we can spend our time doing more fun things?

Let’s look at our raw materials. The definition of a language is quite a hard thing to specify. It has to be very precise, so that compiler writers have a clear understanding of how the language works. Compiler writers have a hard job, especially dealing with all the corner cases and edge conditions. One of these days, someone using their compiler is going to try to write a program which uses a hellish mixture of overloading, inheritance, template specialization and inline assembly — and they’ll (quite reasonably) expect that the compiler is going to handle it and keep on smiling.

A precise specification is not necessarily a good specification though. Just think of the instructions you get with flat-pack furniture. They do tell you exactly what to do – “Insert bolt (1) through part (2), using nut (3)” – but they don’t convey the spirit of the procedure. It reminds me of the discussion in Zen and the Art of Motorcycle Maintenance where the author describes an analytical description of a motorbike (“a motorbike consists of a power assembly and a running assembly. The power assembly consists of .. yada, yada”). Such a description manages to tell you what a motorbike is, without ever conveying what a motorbike is. Hmm, just go read the book – it had a big effect on my life.

Uhh, let’s get back on course. Languages like “C” were created for pragmatic reasons – to get the job done (writing Unix). Later, when the language got more popular it became important to create a formal specification, so that you could guarantee that each “C compiler” on the market will behave in the same way. So, while there is a language specification document for C, it is somewhat of a retrofit. Other languages, such as ML, have progressed along with their specification from early days.

If you read through the specification document for Java or C or C++ you’ll quickly notice that it’s written in English. Certainly, it’s a technobabble version of what you or I speak, more reminiscent of the ZATAOMM quote above than anything you’ll hear in the pub. But, it’s English nonetheless. And, as anyone with a longterm partner will know, English is an ambiguous language. This is just asking for trouble. Compiler writers lose sleep over phrases like “should not” and “must not”. Even worse, when you try to use English in a very precise way, you risk making it a complete impediment to understanding.

Besides, our original plan was to get a computer to write a compiler for us. Computers aren’t very good at understanding English, so we’re unlikely to have much success with languages where the specification is written in engrish.

Fortunately, some clever language designers have thought to use something better to describe their language. For example, the creators of Standard ML use a formal notation called “natural semantics” to describe the meaning of ML programs. Cunningly, you have to buy their book to see it (or live near to a Uni library), but this notation enables them to elegantly and concisely describe the whole of the language in a single slim booklet.

This particular notation (“natural semantics”) is really just another language – one suited to describing the behaviour of programs. We rarely try to use english to discuss calculus or algebra (unless there’s no blackboard nearby) because mathematical notation is much more precise and concise. It’s the same situation with discussing programming languages. Sure, you have to invest some time in learning the language, but once you’ve done that you can communicate effectively and precisely about the meaning of programming languages. Once you’ve learned to read natural semantics, you can spend your winter evenings reading your way through “natural semantics for Java”, “natural semantics for C++”, and so on …

Natural semantics isn’t the only such “language” for describing the meaning of programming languages. There’s also operational semantics, denotational semantics, action semantics and probably lots more. So you could have “denotational semantics for Java” and “operational semantics for Java” and they’d both tell you what Java is.

Given that you can describe the behaviour of Java using any of these notations, you might wonder why you’d pick one over the other. The difference is that one notation is particularly good if you’re building a compiler, while another notation might be particularly good if you’re trying to prove properties of a program (like, “does it do what I hope it does?!”).

A “denotational semantics for C” describes the behaviour of C by mapping each part of the C language onto a mathematical object called a domain. Don’t worry about the details – just note that once you’ve got such a mapping set up, you’ve got a huge toolbox of mathematical techniques available to probe your language with. For example, you probably have an intuitive notion that the code “i++; i++;” is pretty much identical to “i += 2;”. Denotational semantics is the ideal tool for putting these intuitive notions on a more formal footing. Unfortunately, having all these mathematical objects and theorems floating around isn’t getting you much closer to having a compiler for the language.
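The “i++; i++;” versus “i += 2;” intuition can be made concrete by treating each statement’s denotation as a function from program states to program states. A toy Python sketch (my own modelling choices: states are dicts, sequencing is function composition; real denotational semantics uses domains, not dicts):

```python
def denote_inc(var):
    """Denotation of 'var++;' as a state-to-state function."""
    return lambda state: {**state, var: state[var] + 1}

def denote_add(var, k):
    """Denotation of 'var += k;'."""
    return lambda state: {**state, var: state[var] + k}

def seq(first, second):
    """Sequencing 'S1; S2' denotes composition: run first, then second."""
    return lambda state: second(first(state))

prog_a = seq(denote_inc("i"), denote_inc("i"))  # denotes  i++; i++;
prog_b = denote_add("i", 2)                     # denotes  i += 2;
```

The two programs are “identical” precisely when their denotations agree on every state – which is a claim you can now state, test, and (in the real mathematical setting) prove.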

In contrast, an “operational semantics for C” would describe the behaviour of C programs on some sort of hypothetical computer – probably a fairly simple one. For each construct in the C language, it would describe the transition from the initial state of the machine to the resulting state of the machine. This is a pretty reasonable way of defining a language, especially given that these are programming languages and mostly we’re interested in writing compilers for them. It’s important to choose the “hypothetical computer” carefully. You probably don’t want to choose a Real World computer, since that would make it hard to build compilers for other platforms. But you also don’t want to make the hypothetical computer too abstract (like a Turing machine, or lambda calculus) because your description wouldn’t convey the spirit of the language very effectively.
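Here’s what such a hypothetical machine might look like, shrunk down to a Python sketch. The machine and its three instructions are entirely made up; the point is the shape – a `step` function giving one state transition at a time:

```python
# State of the hypothetical machine: (program counter, variable store).
def step(program, state):
    """One transition of a toy machine with 'set', 'add' and 'jnz'."""
    pc, env = state
    op = program[pc]
    if op[0] == "set":            # ("set", var, value)
        _, var, val = op
        return pc + 1, {**env, var: val}
    if op[0] == "add":            # ("add", var, delta)
        _, var, delta = op
        return pc + 1, {**env, var: env[var] + delta}
    if op[0] == "jnz":            # ("jnz", var, target): jump if var != 0
        _, var, target = op
        return (target if env[var] != 0 else pc + 1), env
    raise ValueError(f"unknown instruction: {op}")

def run(program, env=None):
    """Iterate transitions until the program counter falls off the end."""
    state = (0, dict(env or {}))
    while state[0] < len(program):
        state = step(program, state)
    return state[1]
```

A loop that adds 2 three times, for example, is just a short instruction list fed to `run` – and the `step` function is, in miniature, exactly what an operational-semantics definition writes down rule by rule.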

If you were writing a compiler for a language, you’d probably find the “operational semantics” quite helpful. Basically, you’d just need to decide how to efficiently implement the operations of the “hypothetical computer” on your target machine, and the rest is easy!

But wait! We don’t want to write the compiler by hand. Let’s code up that knowledge (the mapping from “hypothetical computer” instructions to “target computer” instructions) into a program, and have it slurp in the “operational semantics” definition of the Java language (which is a mapping from “Java source code” to “hypothetical computer” instructions). Hey presto, we’ve got a Java compiler/interpreter!
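In miniature, the trick is to make the semantics a piece of data that a generic driver consumes. A toy Python sketch (the table format and names are mine; a real semantics-directed compiler generator is vastly more involved):

```python
def make_interpreter(semantics):
    """A toy 'compiler compiler': given a language's semantics as a table
    (construct name -> state-transition function), produce an interpreter
    for that language."""
    def interpret(program, env=None):
        state = dict(env or {})
        for op, *args in program:
            state = semantics[op](state, *args)
        return state
    return interpret

# The 'operational semantics' of a tiny language, written down as data:
tiny_semantics = {
    "set": lambda state, var, val: {**state, var: val},
    "add": lambda state, var, delta: {**state, var: state[var] + delta},
}

run_tiny = make_interpreter(tiny_semantics)
```

Feed `make_interpreter` a different semantics table and you get an interpreter for a different language – the same move, scaled up enormously, is what feeding it the “operational semantics for C++” would mean.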

What’s more, we can take that same program (a “compiler compiler”) and feed it the “operational semantics for C++”. Hey presto, instant C++ compiler!

This sounds great. I’m describing a world where language designers write a formal (accurate and complete) description of their language, and we can instantly build a compiler for it. Surely it’s too good to be true? Yeah, of course it is. This kind of thing does actually work to a certain extent (shock, horror) but it’s got big difficulties. For a start, a naive translation will result in a hideously inefficient compiler – it’ll generate correct code, but that code will run very slowly. And, regarding correctness, we’ve moved the goalposts – our compiler compiler had now better be bug free, or we’ll have big problems. That’s just the tip of the iceberg. There’s a lot more work required in this field before you can throw away your copy of gcc.

So, that’s the end of this minor epic. To summarize: we can describe the “meaning” of programming languages using a variety of notations – english, operational semantics, denotational semantics etc. Each of these flavours of notation is suited to a particular task. Some make it almost possible to generate a compiler for the language directly from the specification, eliminating the costly and bug-ridden process of having humans write compilers.