ICFP day 1

Day one of the ICFP functional programming conference. It got off to a great start (for me, at least) with Guy Steele making a call to abandon outdated programming practices built around inherently sequential data structures such as linked lists, in favour of structures with more opportunities for divide-and-conquer parallelism, such as balanced trees (conc trees, in lisp terms). Both can be used to represent ordered collections, but the latter gives you nice ways to split off subproblems which can be computed in parallel. And, since the same kinds of ‘recombine partial results’ strategies crop up again and again, they can be factored out and, in the Fortress language, given a very concise notation. Guy described one common ‘useful trick’ for avoiding data-dependencies across the sub-problems: accumulate some ‘extra information’ upward as you recombine partial results – this reminded me a lot of Merkle trees.
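To make that concrete, here’s a rough Haskell sketch of the idea – my own names and types, not Fortress notation or anything Guy actually showed. A conc-tree-ish collection makes the two halves of each node independent subproblems, so their partial results can be computed in parallel and then recombined:

import Control.Parallel (par, pseq)  -- from the 'parallel' package

-- A conc-tree-style ordered collection: leaves hold elements, and
-- internal nodes give us somewhere to split for divide-and-conquer.
data Conc a = Empty | Leaf a | Conc (Conc a) (Conc a)

-- Recombine partial results with 'op'; the two subtrees are
-- independent, so one can be sparked off in parallel.
reduceConc :: (b -> b -> b) -> b -> (a -> b) -> Conc a -> b
reduceConc _  z _ Empty      = z
reduceConc _  _ f (Leaf x)   = f x
reduceConc op z f (Conc l r) =
  let lr = reduceConc op z f l
      rr = reduceConc op z f r
  in lr `par` (rr `pseq` (lr `op` rr))

-- e.g. a parallel sum is just: reduceConc (+) 0 id
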
Of course, along with the idea of ‘factor out the strategies’ there were also many mentions of mapreduce – quote, “Map/reduce is a big deal” – which made me pleased I’d talked on this very topic at SPLS earlier this year. An enjoyable and pretty straightforward talk (but then, I think Guy Steele can make complex things sound simple in much the same way that many other speakers do the reverse).

The other talk I was most interested in was the formally proved microkernel; however, learning that an ~8kloc C program had required ~120kloc of computer-assisted proofs was somewhat sobering. The talk was very high-level and didn’t include much in the way of gruesome details. Happily, a different talk featured a live theorem-proving session using Coq – I’ve just started learning this myself, so it was a lot more meaningful than it might’ve otherwise been.

At these conferences, I’m never sure what fraction of the talks will be meaningful to me, given that this is effectively my hobby interest. However, I’m pretty pleased with my hit rate after today. Only one talk was content-free for me (note to self: never do a talk like that). But given that I still remember the first time I went along to SPLS, years ago, when I didn’t really grok monads, it was a nice contrast to be able to follow a talk about the interaction of laziness and non-determinism in monadic style without difficulty – although it’s not exactly going to be immediately useful to anything I’m writing today. It’s nice to feel that my understanding has progressed a bit over the years. And it was useful to have played with arrows recently, both in the HXT xml toolkit and by reading the original Hughes paper – they came up several times.

I’m playing a “spot the kool-aid” game: trying to see how many people play the “functional => good” card without further justification. I mean, “functional” is such a broad brush – pure functional like Haskell, impure like ocaml, dynamically typed like lisp/scheme – each with their own strengths and weaknesses, and many ‘functional’ ideas have been integrated into more mainstream languages already. I can’t help imagining that if there were a ruby/c++/prolog conference next door, they would be cheering their heroes and booing their villains in identical ways. Hmm, that’s probably just being harsh; in practice, everyone I have talked to so far at ICFP has seemed pretty clued up, but I do think it’s useful to apply some level of cynicism to what you see and hear. I jokingly think that a “devil’s advocate” panel would be fun … where experts in an area would try their hardest to convince the audience that their area is pointless/futile/flawed. Wouldn’t that be fun?

Learning to program, again

Finally, I’ve found a tutorial for Coq which is aimed at people who primarily grok functional programming and only have the vaguest clue what assisted theorem proving might be.

Initially, you can think of Coq as being another functional programming language. It’s got all the usual suspects – algebraic datatypes, pattern matching and suchlike. You can write a ‘quicksort’ function and evaluate it pretty much as you’d do in haskell/ocaml.
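For instance, here’s the Haskell version that the Coq definition would closely resemble (the Coq one needs a little extra ceremony to satisfy the termination checker, but the shape is the same):

-- Plain Haskell quicksort, for comparison with the Coq equivalent.
quicksort :: Ord a => [a] -> [a]
quicksort []     = []
quicksort (p:xs) = quicksort smaller ++ [p] ++ quicksort larger
  where smaller = [x | x <- xs, x <  p]
        larger  = [x | x <- xs, x >= p]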

You can then go on to make a bold claim such as “forall (l : list nat), length (quicksort l) = (length l)”. As yet, that’s no more impressive than seeing a similar claim in some javadoc comment – maybe it’s correct and maybe it’s wrong. In java-land, you might even write a unit test which asserts that it’s true for a few hand-picked cases. But in Coq, you can then go on to prove that it’s correct for all possible executions!
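(The nearest thing in Haskell-land is a QuickCheck property – better than hand-picked cases because it checks lots of random ones, but still sampling rather than proving. A quick sketch, reusing the quicksort above:)

import Test.QuickCheck

-- Checked on 100 random lists by default: evidence, not proof.
prop_lengthPreserved :: [Int] -> Bool
prop_lengthPreserved l = length (quicksort l) == length l

-- ghci> quickCheck prop_lengthPreserved
-- +++ OK, passed 100 tests.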

(fx: screen goes all wavy)

Back when I did maths at Uni, I often used to make mistakes when “showing my working” during problem solving. I knew that there were some things you were “allowed to do” – for example, “you are allowed to add one to both sides of an equality”. And so if I was faced with an equation like “y - 1 = 2”, I could invoke that rule to end up with “y - 1 + 1 = 2 + 1” and then eventually “y = 3”. It seemed a bit strange that my lecturers rarely wanted me to write down which “rules” I was invoking. Instead, I just had to write down the result of applying the rule, and keep doing that until I got to the end. The problem, of course, is that sometimes I wrote down an x when I meant to write y. Or sometimes I’d apply a rule which wasn’t a “real” rule – for example, dividing both sides of an equation by ‘x’ when x might be zero.

It occurred to me at the time that a computer would be far better at this kind of tedious bookkeeping than I would ever be. I had visions of telling the computer “please apply the ‘plus one’ rule” and the computer would produce the next line for me. Or “please apply the ‘divide by x’ rule”, whereupon the computer would say “dude, what about if x is zero, huh?”. I didn’t want the computer to solve the maths puzzles for me; I only wanted it to do the bookkeeping.

(fx: return from wavy effect!)

And so this is roughly what Coq does, in addition to being a programming language. It has all the usual builtin types and functions, like booleans/integers/lists and not/plus/concatenate. But you also get a whole load of proved ‘facts’ about these things. Some of them seem noddy, like “true and false are not equal” (but it’s pretty nice to know it’s provably true in this language). But some are more meaty; for example, (take n l) ++ (drop n l) = l (ie. taking the first n items, then all-but-the-first-n items, gives you the entirety of the list).
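(Transliterated into Haskell as a checkable property – which QuickCheck can only test on random inputs, whereas in Coq it comes proved for every n and l:)

-- take n l ++ drop n l == l, stated as a testable property.
prop_takeDrop :: Int -> [Int] -> Bool
prop_takeDrop n l = take n l ++ drop n l == l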

And so when you write your own code, you can claim and then prove that it has certain properties. To do this, you can use the proved properties of the builtin types and functions. When you use your own data types, you typically end up doing structural induction – similar to high-school induction over the integers, except that your base cases are usually the no-argument constructors (like the empty list), and in the induction step you show that the property holds for a compound constructor, assuming that it held for the constituent parts it was combining.
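Here’s the shape of that, sketched in Haskell with the proof steps written as comments (in Coq you’d define the same type and discharge each step with tactics – the names here are my own):

-- A home-made list type, much as you'd define it in Coq.
data MyList a = Nil | Cons a (MyList a)

myLength :: MyList a -> Int
myLength Nil         = 0
myLength (Cons _ xs) = 1 + myLength xs

myAppend :: MyList a -> MyList a -> MyList a
myAppend Nil         ys = ys
myAppend (Cons x xs) ys = Cons x (myAppend xs ys)

-- Claim: myLength (myAppend xs ys) == myLength xs + myLength ys
-- Structural induction on xs:
--   base case (Nil):
--     myLength (myAppend Nil ys) == myLength ys == 0 + myLength ys
--   inductive step (Cons x xs'), assuming the claim for xs':
--     myLength (myAppend (Cons x xs') ys)
--       == 1 + myLength (myAppend xs' ys)     -- by the definitions
--       == 1 + (myLength xs' + myLength ys)   -- induction hypothesis
--       == myLength (Cons x xs') + myLength ys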

Why do I find this interesting? Well, the maths anecdote above is definitely a factor. But also, I’m frustrated that, in industry, “unit tests” are about the state of the art as far as widely adopted practices go. I don’t want to just check one or two cases for my functions – I want to check them all! I might have a good understanding of the rules of java/haskell in my head, much like I had a good understanding of the rules of maths. But that doesn’t stop me making typos or think-o’s.

And I think, coming at it from that angle, I’m probably immune to some of the flames which attract the academic-only moths. At the end of the day, I’m learning this because I want to write better code – not because of the “beauty” of any of these systems. I don’t care about completely specifying the behaviour of my programs; I’d be happy with just showing that a few important properties held. If that halved my bug rate, I could live with that (if the overhead wasn’t too high).

I have been forewarned that, typically, you plunge off a cliff of complexity once you get onto non-toy examples. I don’t think I’ve hit that cliff yet. But I do feel like a complete n00b all over again. I submit code snippets to Coq without knowing what they’ll do, and then I peer at the screen to see what the heck happened. At first, it was all completely alien (which, in itself, was an alien feeling these days .. I must be stuck in a comfort zone). But gradually I’m beginning to find my way around a little bit. It reminds me of learning to program for the first time, on a 48k Spectrum twenty-odd years ago. I’ve learned many languages since then, and spent plenty of time dismantling then reassembling my world view to encompass new ideas. But Coq feels like a bigger perspective shift than anything I can remember .. apart from that first “nonprogrammer becomes programmer” change. Pretty awesome stuff – can’t ask for much more than that!

I recently read a paper by Benjamin C. Pierce in which he described his attempts to teach an undergrad CS course using a proof assistant. That paper was what rekindled my interest, and what led me to find the tutorial I linked above. In a spooky coincidence, I noticed for the first time that he is actually presenting said paper at ICFP here in Edinburgh in a week’s time! Win! 🙂

Take that, Euclid

import Data.List (intersect)

-- Brute-force gcd: the largest d which divides both x and y, found by
-- intersecting the two divisor lists (e.g. gcd' 12 18 == 6).
gcd' x y = maximum $ allDivisorsOf x `intersect` allDivisorsOf y
           where allDivisorsOf n = [ d | d <- [1..n], n `rem` d == 0 ]

Reducing vs decomposing

A small distinction which I was reminded of today – it cropped up in the SICP videos I watched a while ago. If a problem A can be reduced to a simpler problem B, then once you’ve made that step you can forget about problem A entirely, safe in the knowledge that the answer to B will do just as well. In contrast, a problem which can be decomposed into sub-problems might yet require the sub-answers to be combined in some way to get the final result. You therefore need to do more bookkeeping in order to track the results of the sub-problems.

To a programmer, this is just the difference between a tail call and a non-tail call. But the insight pertains to how some programmers use the phrases “reduces to ..” and “decomposes into ..” to mean two subtly different things.
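A small Haskell illustration of the two (my own examples, not the ones from the videos):

-- "Reduces to": Euclid's gcd. Each recursive call *is* the answer to
-- the original problem, so it's a tail call with nothing left to remember.
euclid :: Int -> Int -> Int
euclid x 0 = x
euclid x y = euclid y (x `rem` y)

-- "Decomposes into": naive length. Each sub-answer still has to be
-- combined (the pending "1 +"), so the call is not in tail position.
len :: [a] -> Int
len []     = 0
len (_:xs) = 1 + len xs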