I was lying on the grass in Holyrood Park this evening, having returned to my hobby of finding new ways to fall off mountain bikes (by cycling backwards, or doing no-handed trackstands). The first thing I noticed was that I usually think of the sky as “that thing above the horizon”. That’s very two dimensional of me. When you’re lying on your back, the sky becomes this big circle all around you. I then spent a while trying to change my point of view from “I’m lying on the ground and the sky is above me” to something more like “I’m stuck to the earth by gravity and ‘up’ and ‘down’ are just convenient notation rather than being the whole truth”. And then I got onto software engineering …
In the Extreme Programming book, Kent Beck describes the philosophy of “if X is good, let’s do X all the time”. Well, if “objects” are so great, let’s do “objects” all the time. I find the idea quite compelling, partly because I’ve been reading about the GUI for the Self programming system recently. It models the world as a collection of interacting objects, but its designers notably put a lot of effort into maintaining a consistent, uniform and tangible view of this world. That might sound like eye candy at first, but I think it’s really quite significant. The argument from the Self/Smalltalk camp is that human beings have plenty of experience working with “objects” in the real world, and we can take advantage of that when designing programming systems. I find that my C++ background makes me think of an “object” in a bottom-up fashion — as a collection of data and some associated methods. I very rarely think of C++ objects as real-world “objects”. However, when using Smalltalk and Self I definitely think of them as real-world “objects” first and foremost. Why do I have such different views of two “object oriented” languages? It’s caused by the runtime environment. In Smalltalk, I can pick up an object (well, open an Inspector on it) and prod it to make it do things. Once I’ve seen how it reacts, I can choose to prod it in a different way. There’s a cycle of investigation and discovery going on, which matches how I interact with objects in the Real World. My “mountainbike” object responds well to the “pedal” message, so maybe now I’ll try the “brake” message. In the C++ world, you have to specify all of your interactions in advance. It’s a bit like writing a tutorial on how to ride a bike, but then having to stay indoors whilst someone else goes outside to try it out. Maybe they’ll come back and tell you how they got on, but you might be left wondering if there really ever was a “mountainbike” object involved at all. Maybe I’m an object empiricist after all!
(I would appear to be the only one, according to Google).
I’m interested in programming systems, so let’s do an “object makeover” there. Normally, programming systems revolve around ASCII source code. Yuck, that’s totally un-object! Let’s just get rid of that entirely. We’ll replace source code with objects like “Loop”, “Conditional” and “Variable” which we can connect together to express algorithms. If we adopt that world view, we can see traditional source code as nothing more than a convenient notation for expressing the rich structure of programs. Now it becomes obvious that we can switch between different notations, depending on the problem domain, to express our intentions more closely. That’s one part of Intentional Programming. Now that our “code” is made of “objects”, we can use all our powers as programmers to manipulate and investigate it. We can ask a “Loop” object which variables it reads and writes. We can ask a “Method” object which classes it knows about.
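To make the “ask a Loop which variables it reads and writes” idea concrete, here is a rough sketch in Python. All the class names are hypothetical; a real intentional-programming system would have a much richer node vocabulary, but the shape of the idea is the same: the program is a graph of queryable objects, not a string of characters.

```python
# Hypothetical sketch: a program represented as meta-objects
# instead of ASCII text, so we can ask it questions directly.

class Variable:
    def __init__(self, name):
        self.name = name

class Assignment:
    def __init__(self, target, sources):
        self.target = target      # the Variable being written
        self.sources = sources    # the Variables being read

class Loop:
    def __init__(self, body):
        self.body = body          # a list of Assignment nodes

    def variables_read(self):
        return {v.name for stmt in self.body for v in stmt.sources}

    def variables_written(self):
        return {stmt.target.name for stmt in self.body}

# Model "total = total + x" sitting inside a loop:
total, x = Variable("total"), Variable("x")
loop = Loop([Assignment(total, [total, x])])

print(sorted(loop.variables_read()))     # ['total', 'x']
print(sorted(loop.variables_written()))  # ['total']
```

The point is that `variables_read` is a question you put to the program itself, with no parsing step in sight.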
As I said earlier, you get a much more engaging experience if your objects are tangible and manipulable. The traditional presentation of source code as plain old text might be familiar, but it’s hardly earth-shattering. Squeak has an alternative (“tiles”) rendering mode for source code which displays the structure geometrically. When you realise that the contents of your ASCII text file are just one particular representation of the “Platonic Ideal” version of your code, you start to wonder if it’s a good representation, or if it’s a load of rubbish.
Creating and viewing “source code” objects is only part of the story. The power of an “object” world view comes from being able to interact with objects, discovering things about them and finding out what they can do. What questions would you want to ask of your programs? If the most efficient way to change a method implementation is to edit its ASCII representation then, great, do it that way. But if the most efficient way is to reach in and manipulate an ‘Expression’ node to refer to a particular ‘Variable’, then do it that way.
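Here is a minimal sketch of what “reach in and manipulate an ‘Expression’ node” might look like, continuing the hypothetical meta-object style from above. The edit is a change to an object reference, and every rendering of the program updates for free, because the text was never the source of truth.

```python
# Hypothetical sketch: editing a program by repointing an object,
# rather than by retyping characters in a text file.

class Variable:
    def __init__(self, name):
        self.name = name

class Expression:
    """A node in a method body that refers to some Variable."""
    def __init__(self, variable):
        self.variable = variable

    def render(self):
        # One possible textual view of this node; others could exist.
        return self.variable.name

speed, velocity = Variable("speed"), Variable("velocity")
expr = Expression(speed)
print(expr.render())      # speed

# The "rename" refactoring is just an object manipulation:
expr.variable = velocity
print(expr.render())      # velocity
```

A text-based rename has to find every occurrence by searching strings; here there is exactly one reference, and it is the object itself.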
Time for a change of tack. The terminology gets confusing when you have run-time objects (i.e. normal C++ or Smalltalk objects) and also “source code” objects (commonly called meta-objects). I’ll do my best to distinguish the two.
The reason you’re creating these collections of “Method” and “Loop” meta-objects is so that you can control the actions of your computer. How exactly does that work? Well, in one direction, the meta-objects define the behaviour of the runtime objects by generating the code which is run on the CPU. That’s what compiler backends do for you. However, there can be a useful flow of information in the other direction too — the runtime objects can provide concrete data about how their little lives progressed. One example of this is profiling. A “Method” meta-object might be quite interested to know how many times its runtime compatriot was called during the execution of the program. Most profilers dump that information out to a database. It seems to me like the meta-object is a more sensible place for that information to go. Furthermore, the meta-object can use that profiling information to change how it generates code in the future. If I were a meta-object, I’d be very interested to read the autobiography of my run-time compatriot.
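The profiling-feedback loop above can be sketched in a few lines of Python. Everything here is hypothetical (the class name, the `is_hot` threshold); the point is only that the call count lives on the meta-object, where future code-generation decisions could read it, instead of in a separate profiler database.

```python
import functools

class MethodMetaObject:
    """Hypothetical meta-object that accumulates its run-time
    compatriot's profile data instead of dumping it elsewhere."""

    def __init__(self, name):
        self.name = name
        self.call_count = 0

    def instrument(self, fn):
        # Wrap the runtime function so each call reports back here.
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            self.call_count += 1
            return fn(*args, **kwargs)
        return wrapper

    def is_hot(self, threshold=100):
        # A code generator could consult this to decide, say,
        # whether the method is worth optimising aggressively.
        return self.call_count > threshold

meta = MethodMetaObject("pedal")

@meta.instrument
def pedal():
    return "whirr"

for _ in range(3):
    pedal()

print(meta.call_count)  # 3
print(meta.is_hot())    # False
```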
Time for another change of tack. In the Real World, I can learn about objects by interacting with them and observing the consequences of my actions. I flick the gear-change lever on my bike, and then I can watch the chain skip over to a new gear cog. Similarly, when I’m learning a new object-oriented program, I want to watch the objects interact so I can figure out what’s going on. In C++, you have bare-minimum tool support for watching this happen. I think the “step over” and “step into” debugger commands are incredibly primitive caveman tools. I find that the best way to learn how the system works is to find someone who already knows about it, and have them explain it to me. They wave their hands around, and gesticulate toward imaginary “objects” and speak in anthropomorphic terms, and suddenly I understand how the program works. Why can’t the computer give me this kind of explanation? I can sit down and draw UML sequence diagrams by hand, but I’d much rather have the computer generate them for me by executing the program. Meta-object systems let you investigate the static “recipe” view of a system, but what about peering into the whirring guts of a running program? When I took a six month sabbatical to go travelling, I found a book about software visualization which had some great ideas in this field. For example, create a graph where the nodes are objects and the edges indicate an interaction between the objects. Then, lay out the graph so that the objects which call each other most are close together. Where are the tools like that? I need to get another copy of that book.
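The interaction-graph idea can be sketched quickly: record every message send as an edge, weighted by how often it happens, and a layout algorithm would then pull the heaviest edges tight. This is a toy (the class and message names are made up, and a real tracer would hook the runtime rather than require an explicit `send`), but it shows how little data you need before the graph becomes useful.

```python
from collections import Counter

class TracedObject:
    """Hypothetical sketch: log every message send so we can
    build an object-interaction graph after the run."""

    calls = Counter()  # (caller, callee) -> number of interactions

    def __init__(self, name):
        self.name = name

    def send(self, other, message):
        TracedObject.calls[(self.name, other.name)] += 1
        # ... a real system would dispatch `message` on `other` here ...

rider = TracedObject("rider")
bike = TracedObject("bike")
bell = TracedObject("bell")

for _ in range(5):
    rider.send(bike, "pedal")
rider.send(bell, "ring")

# The heaviest edge tells a layout algorithm which nodes belong closest:
(caller, callee), weight = TracedObject.calls.most_common(1)[0]
print(caller, callee, weight)  # rider bike 5
```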
In fact, why are the commonly-used developer tools so primitive? Is it because Microsoft and Borland dominated the C/C++ market to such an extent that smaller tool companies didn’t get a look-in? I used to think that it was a language problem. C++ is a pain to parse and it has complicated semantics, so creating tools which process it is hard. There seem to be many more tools for Java these days, and Smalltalk and Lisp have had pretty good tools for decades.