Optimization

This is an idea which has been growing on me for a few years. I’ve finally managed to nail some of the concepts down a bit. I’m dreaming of a world where programs are elegant, concise and obviously correct. In particular, I’m asking:

“Why do programs have to get more complicated when you try to speed them up?”

If you want to speed up your program, you usually end up rewriting parts of it to make them go faster. Typically, you use better algorithms and data structures, add caching, or maybe rewrite an inner loop in assembler. The resulting code is probably more complicated, harder to understand, harder to modify and more likely to contain bugs. You’ve made a performance gain, but you’ve suffered a heavy loss in code quality.

It’s depressing to start with a clean elegant piece of code and then desecrate it in the name of performance. Aesthetics aside, optimization also makes code brittle. It’s harder to fix bugs in “optimized” code because it’s less clear what’s going on.

In my ideal world, I would be able to write an elegant, clear version of the code, and then separately specify a transformation which can be applied to my elegant version to produce a faster version. This would allow me to express my intentions much better. I can say “look, this is what I’m really doing” and, separately, I can say “here is how I make it run fast enough”.

If I were writing code to draw a straight line on a bitmap, my clean version would calculate the position of each pixel independently using “y = mx + c”. My “program transform” would describe the neat trick which Bresenham used in his famous algorithm, transforming line drawing into an incremental algorithm.
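To make that concrete, here’s the kind of thing I mean, sketched in C++. The function names and set_pixel are made up, and I’ve restricted it to gentle, left-to-right slopes to keep the sketch short:

    #include <cstdio>

    // Stand-in for the real bitmap access.
    void set_pixel(int x, int y) { std::printf("(%d,%d)\n", x, y); }

    // Clean version: every pixel computed independently from y = mx + c.
    // (Assumes x0 < x1 and a slope between 0 and 1, to keep it short.)
    void draw_line_clean(int x0, int y0, int x1, int y1) {
        double m = double(y1 - y0) / double(x1 - x0);
        double c = y0 - m * x0;
        for (int x = x0; x <= x1; ++x)
            set_pixel(x, int(m * x + c + 0.5));
    }

    // Transformed version: the same line computed incrementally with an
    // integer error term, in the style of Bresenham (same restrictions).
    void draw_line_incremental(int x0, int y0, int x1, int y1) {
        int dx = x1 - x0, dy = y1 - y0;
        int err = 2 * dy - dx;
        int y = y0;
        for (int x = x0; x <= x1; ++x) {
            set_pixel(x, y);
            if (err > 0) { ++y; err -= 2 * dx; }
            err += 2 * dy;
        }
    }

    int main() {
        draw_line_clean(0, 0, 10, 4);
        draw_line_incremental(0, 0, 10, 4);
        return 0;
    }

The first version says what a line *is*; the second says how to compute it cheaply. Today, the second version is usually the only one that survives in the source.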

Web designers do this kind of thing all the time. It’s very uncool these days to write HTML pages which contain layout and formatting. The cool kids all now write their content in XML, and then produce a separate “presentation transform” which is applied to make the version you actually look at. This allows the content and presentation to be controlled independently, rather than mixing the two together into a tangled mess.

It is not so easy to do this with code!

Do we not already have separate optimizing transforms? Your compiler certainly knows about many potential speed-improving transformations, and it can apply them appropriately to your code. Thus, your compiler will do its best to generate fast code, but it doesn’t give any guarantees.

Consequently, you may be tempted to apply these same kinds of transformations manually to your source code to ensure that they really are getting done. For example, you may manually inline a function in your source code, because the “inline” keyword is technically only a suggestion to the compiler. After all, if inlining is critical to the performance of your application, you want to be absolutely sure it’s happening. You don’t want someone to upgrade the compiler on the build machine and suddenly find that your application crawls at a snail’s pace because the new compiler has decided not to inline your key function.

Manually inlining isn’t nice. It’s a bit like writing old-school HTML, resplendent with “font” tags and “color” attributes. The resulting code is a tangled mixture which is harder to understand.

Now, if you *know* for sure that inlining is going to be applied, you can make the source code clearer by splitting big lumps of code out into functions. You’re gaining code clarity by using functions, but you aren’t suffering any speed loss because you *know* the functions are flattened out before code generation.

I sometimes think about this back-to-front. Imagine that you *know* that the compiler will honour an “inline” annotation before generating code (eg. some variation on __force_inline). In that case, it is safe for you to apply the opposite transformation (eg. splitting out into functions) to your source code. This means you don’t suffer a performance cost, but you gain code-clarity points. If “inline” is a performance-boosting transformation, then the inverse of “inline” is a clarity-boosting transformation. What’s the inverse of “inline”? It’s the popular “extract method” refactoring, with all the well-known benefits it brings.
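Something like this, say, using MSVC’s __forceinline (gcc spells it __attribute__((always_inline))) as a stand-in for the guaranteed annotation; the checksum itself is just an invented example:

    // A portable spelling of a "you MUST inline this" annotation.
    #if defined(_MSC_VER)
    #  define FORCE_INLINE __forceinline
    #else
    #  define FORCE_INLINE inline __attribute__((always_inline))
    #endif

    // The "optimized" shape: one big lump with the rotate written out by hand.
    int checksum_lump(const unsigned char* data, int len) {
        int sum = 0;
        for (int i = 0; i < len; ++i) {
            int v = data[i];
            sum += ((v << 1) | (v >> 7)) & 0xFF;   // rotate-left-by-one, inlined by hand
        }
        return sum & 0xFFFF;
    }

    // The clear shape: the rotate split out into a named function.  Given the
    // guarantee, the generated code should match the lump above.
    FORCE_INLINE int rotl1(int v) { return ((v << 1) | (v >> 7)) & 0xFF; }

    int checksum_clear(const unsigned char* data, int len) {
        int sum = 0;
        for (int i = 0; i < len; ++i)
            sum += rotl1(data[i]);
        return sum & 0xFFFF;
    }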

(As an aside, this makes you wonder what a “function” really is. Usually, a function is like a subroutine. You expect to see a CALL instruction, and a RET instruction somewhere. When you start applying transformations to your code, it becomes clear that functions are more like named scopes).

So, if we have a cast-iron guarantee from the compiler that it will honour “inline” we can simplify our code by applying the anti-transform to our source. We can utilise the clarity of functions without any of the cost.

Getting the required guarantee is the difficult bit today. Your language would really need annotations something like “compiler_must(inline)”. Actually, I guess that could be “optimizer_must(inline)” to underline the fact that the “compiler” is really two black boxes – a program optimizer, and a code generator. Either way, this is a much stronger assertion than merely having a comment in your source code which says “// I checked, and the compiler is definitely inlining this function today”.

Are compiler-based optimizations the whole story? No, of course not. A compiler has a very limited understanding of what your program does. It does not realise, for example, that you may be using matrices. It does not know that matrices have lots of interesting properties which could be exploited to make the code run faster. If you were just doing integer arithmetic, then the compiler could help. Integer arithmetic is part of the language, and consequently the compiler probably knows facts like “a * b = b * a”. But once you move away from core language features, the compiler is limited by a lack of domain-specific knowledge.

Clearly, we need some means of capturing relevant domain-specific information (such as “M * I = M, where I is the identity matrix”) and then describing domain-specific optimizations which use these properties. Our optimizer in the compiler could then work at a much higher level. Furthermore, now that we have been freed from the boring work of producing optimal code by hand, we can program at a very high level and leave that task to the optimizer. Hmm, we’ve arrived at the notion of DSLs (domain-specific languages) through a very performance-driven route. Normally I think about DSLs because they make programs simpler and easier to understand. I guess that’s just a reflection of the “optimizations are anti-clarity transforms” idea.
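I’m hand-waving about where such a rule would live. Even pushing it into a toy library shows the flavour, though – this isn’t a real transform language, just the “M * I = M” fact encoded somewhere the compiler can’t see it:

    #include <vector>

    // A toy matrix type carrying one piece of domain knowledge: whether it is
    // the identity.  operator* uses the algebraic fact M * I = I * M = M to
    // skip the O(n^3) multiply entirely; no general-purpose compiler optimizer
    // could deduce that for us.
    struct Matrix {
        int n;
        bool is_identity;
        std::vector<double> a;   // row-major, n*n elements

        static Matrix identity(int size) {
            Matrix m = { size, true, std::vector<double>(size * size, 0.0) };
            for (int i = 0; i < size; ++i) m.a[i * size + i] = 1.0;
            return m;
        }
    };

    Matrix operator*(const Matrix& x, const Matrix& y) {
        if (y.is_identity) return x;   // M * I = M
        if (x.is_identity) return y;   // I * M = M
        Matrix r = { x.n, false, std::vector<double>(x.n * x.n, 0.0) };
        for (int i = 0; i < x.n; ++i)
            for (int k = 0; k < x.n; ++k)
                for (int j = 0; j < x.n; ++j)
                    r.a[i * x.n + j] += x.a[i * x.n + k] * y.a[k * x.n + j];
        return r;
    }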

I’ve glossed over the difficulties of actually writing one of these transforms. What language are you going to express it in? Returning to the HTML/XML example, there are a myriad of general-purpose and domain-specific languages for transforming XML. For low-level source code transforms, we would need to operate in a world of abstract syntax trees and dataflow analysis. For higher-level transforms, we will be working in terms of domain concepts, like matrices.

Regardless of what language you use, you need to be very sure that the transformation is correct. It can contain bugs, just like any other program. There are probably many lines of code within your compiler dedicated to the “inline” transform, and yet more for the “loop unrolling” transform. Imagine the effect which a bug in your compiler’s optimizer would have on your product. If we are going to write our own transformations too, they need to be totally bullet-proof.

Actually, that’s not a totally fair comparison. Compilers have to decide, based on limited information, whether it is safe to apply a particular transformation. If the preconditions are met, then the implementation of the transformation must work correctly in all possible circumstances. That means it has to assume the worst-case scenario. For example, if your C++ code uses pointers at all then lots of optimizations aren’t possible – since writing through the pointer could conceivably change any of the local/class/global variables in the program (the pointer aliasing problem). The compiler, by itself, doesn’t have any more information about how the pointer is actually used. Most compilers have a command-line option which is equivalent to “you can assume that pointers don’t alias”.
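Here is a small example of the worst-case assumption in action (the names are made up, but the problem is real):

    // The compiler cannot keep 'count' (or the running total) in a register
    // here: for all it knows, 'out' points at the very object 'count' refers
    // to, so every write through 'out' could change the loop bound.
    void accumulate(int* out, const int* in, const int& count) {
        for (int i = 0; i < count; ++i)   // 'count' must be re-read each iteration...
            *out += in[i];                // ...because this store might alias it
    }

    // The per-parameter version of "assume pointers don't alias": the
    // (non-standard but widely supported) __restrict extension.
    void accumulate_fast(int* __restrict out, const int* __restrict in, const int& count) {
        for (int i = 0; i < count; ++i)
            *out += in[i];
    }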

However, a transformation in your own code doesn’t have to be fully general. You have to take on the responsibility for ensuring that it’s valid for the cases which you need it to be. Going back to the HTML/XML example, you often find that people create stylesheets for fixed-position layouts, and they check that the result looks okay. Then they go back and add a new paragraph to the content. As a result, the paragraph overlaps one of the pictures and the layout looks bad. Was the original stylesheet buggy? Not particularly, since it did the job fine before the new paragraph was added. It just wasn’t general enough to remain correct in the face of content changes.

Finally, we need to think about the effect which optimizers have on the rest of the development environment. Debuggers, coverage tools and profilers would all be affected by such a scheme. Stepping through compiler-optimized code in the debugger is often “exciting” as it is, and adding a whole new layer of domain-specific optimizations will make things much worse.

As is traditional when I have a new idea, someone has already got there before me! It’s fun thinking through these things for a while before you read the literature. I wrote all of the above stuff, then went to the web and discovered Todd Veldhuizen’s web page. He does research on this very topic. I’ll have to forgive him for being so keen on C++ template metaprogramming (which is the subject of a whole separate rant) because there’s loads of interesting information on his site. I was happy to see that my ramblings above bear some resemblance to the stuff he talks about.

[ If I offered a free pint of beer to anyone who made it through the whole article, would I get any takers? 😉 ]

Ocaml assembly output

Having spent a lot of time recently looking at the assembly generated by the MS C++ compiler and Corman Lisp, I was about to start investigating what style of assembly the ocaml compiler generates for typical constructs. Fortunately for me, someone else has already got there first. It’s a pretty informative article, and goes a long way towards explaining why ocaml performs so well. Well, actually, the real reason is that Xavier is one clever cookie.

I read one of his early reports on the ZINC system when I was travelling around the world with iPAQ in tow, and the major performance gain at that point, if I remember correctly, concerned avoiding unnecessary construction of closures for curried functions. So, even though function calls in ocaml are curried (ie. a “two argument function” is really a single argument function, which returns another single argument function) you don’t actually need to build up all the scaffolding for the intermediate function if you’re just going to immediately apply it to another value. This stops you building lots of intermediate closures on the heap. This was an innovation at the time, I imagine (in 1990).
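Roughly, in C++ terms (this is just an analogy to show what an “intermediate closure” is; it isn’t how ocaml implements any of this):

    #include <functional>
    #include <cstdio>

    // Curried style: "add" takes one argument and returns another one-argument
    // function.  Written naively, applying it to both arguments still builds an
    // intermediate closure (the std::function below) just to throw it away.
    std::function<int(int)> add_curried(int x) {
        return [x](int y) { return x + y; };
    }

    // Direct style: no intermediate closure at all.  ZINC's trick is, roughly,
    // to make the first call behave like the second whenever both arguments
    // are supplied at once.
    int add_direct(int x, int y) { return x + y; }

    int main() {
        std::printf("%d %d\n", add_curried(1)(2), add_direct(1, 2));
        return 0;
    }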

The article also describes the boxing scheme used in ocaml, which uses the bottom bit to indicate whether a bit of data is an integer or a pointer to a heap block. If all your heap blocks are word aligned, the bottom bit is redundant anyway, so this is a neat, efficient trick. [I know a few other unnamed people who should recognise such bit-twiddling tomfoolery too 😉 ]
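The convention looks something like this if you write it out by hand (a sketch of the idea, not ocaml’s actual runtime code):

    #include <cassert>
    #include <stdint.h>

    // The low bit distinguishes immediate integers from word-aligned heap
    // pointers.  ocaml uses 1 for integers and 0 for pointers, which is why
    // ocaml ints are 31/63 bits wide.
    typedef uintptr_t value;

    value    from_int(intptr_t n) { return (uintptr_t(n) << 1) | 1; }
    intptr_t to_int(value v)      { return intptr_t(v) >> 1; }
    bool     is_int(value v)      { return (v & 1) != 0; }
    bool     is_pointer(value v)  { return (v & 1) == 0; }  // aligned blocks end in 0

    int main() {
        assert(is_int(from_int(21)) && to_int(from_int(21)) == 21);
        static int heap_block[2];   // any word-aligned address will do
        assert(is_pointer(value(reinterpret_cast<uintptr_t>(heap_block))));
        return 0;
    }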

Last week at work, I started rewriting a small bit of code because I knew that there was a more elegant (and therefore more likely to be correct and maintainable) way to express what it was doing. Unfortunately, when I started editing the code I realised that I was thinking in ocaml! The “elegant way” required ocaml-style variants and pattern matching, but I was coding in C++. Eep! The closest C++ equivalent is almost a joke in contrast to the ocaml version.
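For the sake of illustration (a made-up shape type, not my actual work code), the contrast looks something like this:

    // What I wanted to write (ocaml):
    //
    //   type shape = Circle of float | Rect of float * float
    //
    //   let area = function
    //     | Circle r    -> 3.14159 *. r *. r
    //     | Rect (w, h) -> w *. h
    //
    // The closest C++ equivalent: a hand-rolled tag plus a switch, with nothing
    // stopping you reading fields that are meaningless for the tag in hand, and
    // no compiler check that the switch covers every case.
    struct Shape {
        enum Tag { Circle, Rect } tag;
        double r;      // meaningful only when tag == Circle
        double w, h;   // meaningful only when tag == Rect
    };

    double area(const Shape& s) {
        switch (s.tag) {
        case Shape::Circle: return 3.14159 * s.r * s.r;
        case Shape::Rect:   return s.w * s.h;
        }
        return 0.0;   // "unreachable", but we have to write it anyway
    }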

Finally, to end this ocaml praise session, the internals of the compiler are elegant and clean. The various sections of the compiler are split out using ocaml’s powerful module facilities, and the functional style of programming (ie. minimal use of state) makes understanding the code a lot easier. By contrast, the internals of gcc are a hideous mess. Actually no, the internals of gcc *are* a hideous mess, period.

Mu, I’m somewhat paralysed with indecision regarding where I want to go with computer tools. I have so many ideas and things I want to try out, and I’ve seen so many great ideas consigned to the historical bit bucket. And also, despite the impression which this programming-only-blog might convey, there’s a million and one other non-computer things which I want to spend my time on. I think at some point I need to lower my ambition and focus on improving one particular thing, rather than riding along on a cascade of new ideas. But it’s annoying, because every day I see tools which I consider to be primitive and backwards compared to what is possible. I want to “do an Alan Kay” and burn the disks – look around, take what is good, and throw the rest away.

C++ Refactoring Tools

Well, I tried to kickstart this year’s attempt to write a C++ refactoring tool by asking EDG if I could somehow get a cheap license for their C++ front end. In the past, I’ve got bogged down in the difficulty of parsing C++ and always given up in disgust, vowing never to waste my time on such a primitive language. This time, I thought I could avoid doing the work myself and buy in a third party solution. Unfortunately, EDG either want a big up-front sum, or some concrete guarantee that I’ll sell lots of copies of the (expensive) resulting software. That’s understandable, since they need to make money, but it makes this approach unfeasible for me.

A few days later, I notice that the Xrefactory bods have apparently been taking the same route recently. Clever people!

Of course, having a parsed-and-semantically-analysed representation of a C++ program is only half the battle. In an ideal world, you’d throw Bill Opdyke’s thesis at the problem and you’d be done. But in the real world, you have a preprocessor to deal with. Since C and C++ have a preprocessor which works at the lexical level, it can perform horrendously arbitrary transformations on the program. The preprocessor munges text, and has no awareness of the syntactic/semantic structures which it is manipulating.
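A contrived (but perfectly legal) example of the sort of thing that ruins an AST-based tool’s day:

    #include <cstdio>
    #include <ctime>

    // Two macros that open a scope in one place and close it in another.
    // Neither expansion is a complete syntactic unit, so no parser-level tool
    // can see BEGIN_TIMED ... END_TIMED as a "block" at all.
    #define BEGIN_TIMED { std::clock_t t0 = std::clock();
    #define END_TIMED   std::printf("%ld ticks\n", long(std::clock() - t0)); }

    void work() {
        BEGIN_TIMED
            for (volatile int i = 0; i < 1000000; ++i) {}
        END_TIMED
    }

    int main() { work(); return 0; }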

Refactoring in the presence of the preprocessor is hard. Fortunately, some clever people have been worrying about this recently and published a pile of papers on the subject.

I shall read these papers soon, become disgusted by the inelegance of C++ and give up on my “C++ refactoring tool” project for the fifth time.

Optimizing programs

Simple program gets complex when you make it faster.
Want to encapsulate “optimizations” for clarity, explanation and reuse.
In the compiler? Don’t know if it gets used. Can’t rely on it. Hidden. Someone will “optimize the source” later.
By hand, in the source? Original version lost. Confusing now.
Separately, in macros? Can’t flip between the transformed versions easily. Hard to debug?

Eg. Drawing a gradient. Incremental computations.
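Sketch (made-up names, one colour channel, assumes width >= 2):

    #include <vector>

    // Clear version: each pixel's value computed independently from its position.
    std::vector<unsigned char> gradient_clear(int width, int from, int to) {
        std::vector<unsigned char> row(width);
        for (int x = 0; x < width; ++x)
            row[x] = (unsigned char)(from + (to - from) * x / (width - 1));
        return row;
    }

    // Transformed version: a 16.16 fixed-point accumulator and one addition per
    // pixel, instead of a multiply and divide per pixel.
    std::vector<unsigned char> gradient_incremental(int width, int from, int to) {
        std::vector<unsigned char> row(width);
        int step = ((to - from) * 65536) / (width - 1);
        int acc  = from * 65536;
        for (int x = 0; x < width; ++x, acc += step)
            row[x] = (unsigned char)(acc / 65536);
        return row;
    }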

Eg. writing xml and a stylesheet, rather than flat html

Debug Win32 Heap Internals

Having recently started a new job, I’ve been deliberately taking some time to build up my development environment to be just right. Over the past six or so years I’ve figured out what’s the easiest way to do X, or the fastest way to do Y, and I’ve carried these things from job to job. I’ve spent a while tuning my dot-emacs file so that most of my common tasks are automated, and I’ve turned into a “three strikes and you automate” bunny.

Anyhow, I’ve been doing a lot of heap related work, and the MSDN docs aren’t great. So, I finally got round to writing the article which I wish I’d written years ago …

It’s obscure! It’s of interest to about 3 people in the world! It’s the … Win32 Debug CRT Heap Internals guide!

(but it means I’ll always know where to find the information, so I’m happy. And it’s always fun indulging in a bit of low-level hackery rather than always using high-level languages which protect you from this stuff)

Next week, I’ll write another article on how to pinpoint heap corruption (writes through dangling pointers etc) using only DevStudio (no Purify or Boundchecker required).
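That article isn’t written yet, but as a taste, here’s a minimal sketch of the kind of thing the debug CRT heap makes possible, using the documented _CrtSetDbgFlag / _CrtCheckMemory hooks:

    // Build against the debug CRT (eg. cl /MTd overrun.cpp).  The debug heap
    // surrounds every allocation with 0xFD "no man's land" guard bytes; asking
    // it to verify the heap on every allocation/free turns a silent overrun
    // into a report close to the guilty write.
    #define _CRTDBG_MAP_ALLOC
    #include <stdlib.h>
    #include <crtdbg.h>

    int main() {
        _CrtSetDbgFlag(_CrtSetDbgFlag(_CRTDBG_REPORT_FLAG) | _CRTDBG_CHECK_ALWAYS_DF);

        char* p = (char*)malloc(16);
        p[16] = '!';            // one-byte overrun into the guard bytes
        free(p);                // the damage is reported here (or at the next check)

        _CrtCheckMemory();      // you can also ask for a check explicitly
        return 0;
    }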

I can’t over-emphasise how good nxml-mode is. I’m using it for all my xml/xhtml stuff now. I couldn’t be without it.

On another track completely, I’ve been doing a lot of thinking about physics recently. I’m trying to relearn electromagnetism from the ground up. I remember reading that Richard Feynman once did this exercise – taking your knowledge to bits, examining all the parts, and carefully putting it back together. I am really appalled at how poor my understanding of this subject is. I did physics up to 2nd year at Uni, and always got top grades. Yet I have ended up with a facade of understanding. This page convinced me that I wasn’t crazy after all. I think it should be possible to write a simple electromagnetism “simulator” which operates at the level of individual electrons and only knows a small number of rules (Maxwell’s laws), yet is able to recreate pretty much all electromagnetic phenomena. Sure, it’ll be slow, but I hope it will still manage to demonstrate them.