
Static checking Java

As a distraction from learning about denotational semantics, I took a look at ESCJava2, an extended static checker for Java. ESCJava2 lets you add annotations to your source code describing invariants and pre/post-conditions that you expect to hold. Then it goes away and tries to verify that they are true. Often, you have to go back and add extra annotations elsewhere (anyone who has const-ified a C++ program knows this feeling). But eventually you get to a point where you can be fairly confident that your program is (statically) free from all sorts of badness – eg. you’ll never try to pop from an empty stack, and you’ll never get a null pointer exception.

This is all done declaratively, which makes it much tastier than unit tests.

Does it work? Well, I got out the source code for my trusty Java raytracer, which was literally the first Java program I ever wrote .. back in 1996 or something. At about 1,000 lines of code, it’s “real world” enough to act as a decent test case, yet simple enough to grok quickly.

Initially, I had to spend a while adding ‘non_null’ annotations everywhere – every field in every object had that property. Dull, but useful to have it all checked.

Next came various bounds-checking stuff. I had an image class with width/height members and a 2d array of pixels. I ‘knew’ that the array was of the right size, but by asserting some invariants to that effect, I could get ESCJava to statically bounds-check all the code that reads/writes pixels. The invariants were moderately complex things like “forall i < height; pixels[i].length == width”.

These “forall” quantifiers are what makes this approach tastier than unit tests. With unit tests, you have to pick “representative” data points and then trust that you chose well. In the Haskell world, QuickCheck is a big step above that (there’s a sketch at the end of this post for comparison). Perhaps ESCJava brings some hint of universal tastiness to Java?

Well, except that it didn’t work in practice. My key ‘raytrace all the pixels’ method couldn’t be processed by ESCJava. It gave an error which effectively said “the theorem which I’d have to prove to be sure that the code is safe is just too complex”. Disappointing, considering that it was little more than some nested loops plus a bit of logic.

ESCJava had a few other nice features though. Since my code is old, it uses the plain old Vector class. ESC can add a ‘phantom’ elementType field to your Vector, which gives you 1.5-like static type checking on pre-1.5 collections. But in the end, the final nail in the coffin was that ESCJava does not support Java 1.5+ features, so it doesn’t understand generics. Oh well, no use for the real code that I work with.

I’d love to have the time to understand this area more deeply. I like this kind of “unsound and incomplete but useful in practice” part of the statically-checked spectrum.
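For comparison, here’s roughly what my pixel invariant looks like as a QuickCheck property – just a sketch, with an invented Image type and constructor (nothing here comes from the raytracer itself):

    import Test.QuickCheck

    -- A toy image type and constructor, invented for illustration.
    data Image = Image { width :: Int, height :: Int, pixels :: [[Int]] }

    blankImage :: Int -> Int -> Image
    blankImage w h = Image w h (replicate h (replicate w 0))

    -- "forall i < height; pixels[i].length == width", as a testable property:
    -- does the constructor maintain the invariant for random sizes?
    prop_rowsMatchWidth :: Property
    prop_rowsMatchWidth =
      forAll (choose (0, 50)) $ \w ->
      forAll (choose (0, 50)) $ \h ->
        let img = blankImage w h
        in  all (\row -> length row == width img) (pixels img)

    main :: IO ()
    main = quickCheck prop_rowsMatchWidth

Note that QuickCheck still only samples random points from the space, though – ESCJava’s promise, when it works, is a proof over all of them.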


Hacking with HAppS – what type, handler?

(These posts are prettified versions of my notes that I made whilst stumbling around HAppS for the first time. I hope they’ll be of use to future HAppS explorers!)

Note 1 .. In which we figure out what ServerPartT and WebT really are.

When we start HAppS running with ..

   simpleHTTP config list_of_handlers :: IO ()

… what type do the handlers have? Let’s explore some possible universes!

In a simple world, a handler could have type

    Request -> Maybe Response

This allows handlers to be selective about which Requests they handle, which is useful. But it also means that handlers must be pure functions, so there’d be no way of maintaining any ‘application state’ between requests. Not very useful!

Let’s try to remedy that. If the handlers had type:

   Request -> State S Response

.. then we’d be able to retain and modify some application state (of type S) between handler invocations. But that’s all we’d be able to do. We still couldn’t write logs to disk, read file contents or talk to databases.
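To make that concrete, here’s a minimal sketch of a hit-counter handler living in the State world (toy Request/Response types of my own, not HAppS’s):

    import Control.Monad.State

    -- Toy stand-ins for the real types, just for illustration.
    type Request  = String
    type Response = String

    -- This handler can read and update an Int of application state,
    -- but it cannot do any IO.
    countHits :: Request -> State Int Response
    countHits _req = do
      n <- get
      put (n + 1)
      return ("You are visitor number " ++ show (n + 1))

    -- Threading one piece of state through a request:
    demo :: (Response, Int)
    demo = runState (countHits "/index.html") 0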

To allow handlers to do that, they would need to have type ..

  Request -> IO Response

(As an aside, note that whilst the call to simpleHTTP has type “IO ()”, it’s up to simpleHTTP whether or not it exposes the full power of the IO monad to the handlers. However, as we’ve just seen, not doing so would pretty much cripple the handlers.)

So is this really what HAppS gives you? Let’s see. In the real world HAppS handlers have type “ServerPartT IO a” which is defined to be …

newtype ServerPartT m a = ServerPartT { unServerPartT :: Request -> WebT m a }

In other words, if you have a value of type “Request -> WebT m a”, you can tag it with the ServerPartT constructor to say “this ain’t just any function of that type, it’s part of a web server, dogammit!”.

But, apart from the tag, our handlers really just have type “Request -> WebT m a”. So what’s a WebT?

newtype WebT m a = WebT { unWebT :: m (Result a) }

So, ignoring the tag, it looks kinda like our handlers can produce any monadic computation that, when run, yields a “Result a” (the “Result a” type lets us say “I handled this request, here’s my answer” or “I didn’t handle this request”).
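I won’t reproduce the exact definition of “Result a” here, but conceptually it’s a Maybe-like type – something along these lines (a simplification, not the real constructors):

    -- Simplified stand-in for HAppS's Result type: either the handler
    -- declined the request, or it produced an answer.
    data Result a = NoHandle
                  | Handled a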

Hmm, there’s something not right there. If that were the case, it would be possible to use, for example, both “Request -> IO (Result a)” and “Request -> State S (Result a)” as handlers. But how would simpleHTTP ‘run’ such monadic computations? To run a State computation you use “evalState”. To run an IO computation, you have to return it from main. There’s no single way to run all possible monads.

Did I miss something? Yes, sort of – I just blindly assumed ‘m’ was a monad because of the way it was used. But there’s no “Monad m =>” constraint anywhere. And, in fact, if we go back to the start ..

simpleHTTP :: ToMessage a => Conf -> [ServerPartT IO a] -> IO ()

.. we can see that, right from the top, simpleHTTP only deals with ServerPartT’s and WebT’s where the ‘m’ type parameter is IO.

So, after all that, HAppS handlers can be anything of type “Request -> IO (Result a)”. The ‘a’ type needs to be something that HAppS can figure out a content-type for and turn into an HTTP message (the ToMessage type class). The “IO (Result a)” part gets dressed up as a “WebT” (think: computation that produces the response), and the whole thing gets dressed up as a “ServerPartT” (think: request handler).

What does each bit do? The IO part gives you free rein to do side-effecting computations, which get sequentially executed by the runtime. The Result part allows you to say ‘I handled this’ or ‘I didn’t handle it’, and the ‘a’ part can be your XML or HTML data type (so long as it’s an instance of ToMessage).
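Putting that together as a self-contained toy (using the same simplified stand-ins as above – the real HAppS types differ in detail):

    -- Toy reconstruction of the plumbing described above.
    type Request  = String
    type Response = String

    data Result a = NoHandle | Handled a

    newtype WebT m a = WebT { unWebT :: m (Result a) }

    newtype ServerPartT m a =
      ServerPartT { unServerPartT :: Request -> WebT m a }

    -- A handler: inspect the request, do some IO, maybe answer.
    helloHandler :: ServerPartT IO Response
    helloHandler = ServerPartT $ \req -> WebT $
      if req == "/hello"
        then do putStrLn "log: handling /hello"     -- free rein to do IO
                return (Handled "<h1>Hello!</h1>")
        else return NoHandle                        -- let another handler try

    main :: IO ()
    main = do
      result <- unWebT (unServerPartT helloHandler "/hello")
      case result of
        Handled body -> putStrLn ("200 OK: " ++ body)
        NoHandle     -> putStrLn "404: no handler matched"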

So that’s it.

Man, that was a lot of complexity for such a simple end result. I don’t understand why there’s so much generality in the types of ServerPartT and WebT – I’d love to know!


Hacking with HAppS

I’m always on the lookout for more fun ways of programming – I hate doing dull boilerplate work. So I’ve been trying out HAppS, a webserver framework for Haskell. It’s interestingly different from most web frameworks, and I want to capture my early observations before I forget them. To give some context, the “toy app” I’m building is an “IMDB for Computer Scientists” type of website which gives structured information about people (eg. Guy Steele), what papers they’ve published (eg. Growing a Language), and what events they’ve participated in (eg. a talk about “Growing a Language” at OOPSLA, available via Google video).

So the biggest thing about HAppS is that it doesn’t have to use a database to persist state. It takes the Prevayler approach, which means all your data lives in RAM as normal in-memory data structures. When the data is about to be modified, a delta is first persisted to a fast log on disk (giving durability) and then the in-memory version is updated. Periodically, checkpoints are taken.
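Here’s the core trick in miniature (my own sketch of the idea – HAppS’s real implementation adds serialization machinery, checkpoints and more):

    import Data.IORef
    import System.IO

    -- State lives in RAM; modifications are described by events.
    data Event = AddPaper String
      deriving (Show, Read)

    applyEvent :: Event -> [String] -> [String]
    applyEvent (AddPaper title) papers = title : papers

    -- Persist the delta *before* touching the in-memory state,
    -- so a crash can always be recovered by replaying the log.
    update :: Handle -> IORef [String] -> Event -> IO ()
    update logH st ev = do
      hPutStrLn logH (show ev)
      hFlush logH
      modifyIORef' st (applyEvent ev)

    -- On startup, rebuild the state by replaying the log in order.
    replay :: FilePath -> IO [String]
    replay path = do
      contents <- readFile path
      let evs = map read (lines contents) :: [Event]
      return (foldl (flip applyEvent) [] evs)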

This is a huge win because it means you avoid the whole tedious ‘impedance mismatch’ involved with persisting to a database – O/R mappings, mismatches between DB types and language types. It means that you can create new datatypes easily, and get on with the real business of adding functionality rather than futzing with databases.

Of course, there are downsides to this approach which I will discuss next. But first, the big upside: I can develop features quicker. Life being what it is, I’m almost certainly making lots of mistakes whilst building my app. Without a DB layer to burn time on, I can find out that I’m (inevitably) building the Wrong Thing sooner and make a course correction.

Downside #1: you normally rely on a database to get fast indexed lookups. With HAppS, you do this by constructing an in-memory hash table to act as an index. Actually, that’d be quite boring to do by hand, so HAppS uses metaprogramming (Template Haskell) to build the index data structures for you. They are moderately smart, with lazy update strategies.
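Hand-rolled, the kind of index it generates might look like this (a sketch using Data.Map and a made-up record type; the generated code’s strategy differs):

    import qualified Data.Map as Map

    -- Hypothetical record for the "IMDB for Computer Scientists" app.
    data Person = Person { personName :: String, papers :: [String] }

    -- An in-memory index from name to person, for fast lookups.
    type NameIndex = Map.Map String Person

    buildIndex :: [Person] -> NameIndex
    buildIndex ps = Map.fromList [ (personName p, p) | p <- ps ]

    lookupPerson :: String -> NameIndex -> Maybe Person
    lookupPerson = Map.lookup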

Downside #2: scalability and availability. A big benefit of the usual “stateless web frontend, stateful database” split is that you can trivially scale the web fleet (no state) and fairly easily scale the database (replication/partitioning). With the HAppS approach it sounds like you can only have a single webserver/state-store machine – which means no horizontal scalability and bad availability. Actually, that’s not the whole story – the HAppS team are working on a system whereby the state gets shared across multiple machines (permitting scalable reads) and sharded (each machine is the master for some subset of the state).

That should be a killer downside, right? The argument looks like this: “all successful webapps are popular, and popular apps must scale”. However, I think that to become a successful webapp, you have to first be a webapp. My “projects” directory is littered with half-finished projects where I got bored before finishing. Quite frankly, if using HAppS means that I can avoid a lot of tedious work and actually finish a project, then I’ll happily embrace the ghost of future scaling pain. Also, not every successful website gets as much traffic as Flickr/Twitter – my toy app is an “IMDB for Computer Scientists” and there just aren’t that many computer scientists in the world. Not every website needs to be super-scalable from day one. I’m not being naive here – my dayjob is scalability central – but it’s a question of finding the most appropriate hammer for the particular nail you’re trying to hit.

Having said all that, let me qualify it a bit: I would ideally like to create a perfectly scalable webapp from day one if there were no overhead in doing so. Most of the industry is focused on scalability-via-database, and attempts to minimize the tedious work in various ways (eg. metaprogramming and reflection in ActiveRecord). However, these invariably end up being painfully ‘leaky’ abstractions. I’ve seen this many times, and it’s given me the motivation to look for other approaches which can avoid the persistence pain. So perhaps HAppS is the answer, especially if my question is “can I write a reasonable webapp quickly?” rather than “can I write a super-scalable website slowly?”.

Downside #3: data migrations. This is really a red herring. I don’t believe that data migrations are an advantage of having your data in a database. If you are using an O/R mapping, you surely want to migrate old objects to new objects, not old relations to new relations. The approach which HAppS takes is to version your datatypes (mostly done via metaprogramming) and have you write Haskell code to migrate from old versions.
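Schematically, something like this (hypothetical types; HAppS generates the versioning scaffolding for you):

    -- Version 1 of the datatype, as originally persisted.
    data PersonV1 = PersonV1 { nameV1 :: String }

    -- Version 2 adds a field.
    data PersonV2 = PersonV2 { nameV2 :: String, papersV2 :: [String] }

    -- Old values are upgraded in code, not with SQL ALTER TABLE.
    migrate :: PersonV1 -> PersonV2
    migrate (PersonV1 n) = PersonV2 { nameV2 = n, papersV2 = [] }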

Downside #4: language dependence. If you use a database, you can access your data from any language. You can produce reports, do ad-hoc SQL queries and such like. These are indeed useful; I do this a lot. However, language-independence comes at a cost (impedance mismatch between database/language data types, business rules usually not applied at the database layer). Often, language-independent access can be better provided by service layers which wrap the database layer, providing insulation from schema changes and allowing fine-grained access control and throttling. I’ve never been able to query Flickr’s database directly, but I’ve accessed their data via their API many times. HAppS provides more metaprogramming support for exposing data types via XML.

So that’s why I’m investing the time into learning about HAppS. It’s good to challenge your assumptions about how to do things. In the next post, I’ll write a bit about how HAppS works under the hood.


Duct Tape In Space

ObRandom: A rather detailed view of how duct tape was used during the Apollo moon landings.


Structure and Interpretation of Computer Programs

Over the last year, I’ve got a lot more picky about how I spend my ‘free’ time – I think all new fathers encounter this! I used to read reddit and follow all the latest k3wl happenings in the programming world. Occasionally, I’d read something that was really insightful but mostly it was just here-today-gone-tomorrow fluff .. “my way is more awesomer than yrs” kinda stuff. So I decided to try and ignore the fluff by asking “will this be interesting a year from now?” before I start reading. And pretty soon afterwards, I decided to commit time to watching the videos of the SICP lecture series.

SICP (see amazon.com) was the basis of the introductory CS course at MIT for many years. That makes it sound like it’ll be a noddy course for noobs. It absolutely is not. It covers a massive range of material – stuff which I’ve been learning gradually since leaving uni – in a really pretty accessible way. I’m interested in programming languages in general, and I now wish I’d watched these years ago. Perhaps it wouldn’t have made so much sense back then, but it would’ve equipped me with a useful (albeit non-mainstream) way of looking at the world.

The course explores “programming languages” as a concept, not “programming as an activity”. It’s a lesson in how to make tools, not just how to use them. So, you won’t learn how to build a big object-oriented application. But you will learn different ways to implement support for “objects” in a language – eg. different ways to do method dispatch. If you know C++ or Java, this kind of thing is baked into the language. SICP will show you ways to mix the cookie dough, rather than just describing the cookies. And it’ll also tell you how to make a mixing bowl out of cookie dough too!

The course uses the Scheme language. The choice of language isn’t shockingly important – it’s just a vehicle for making the concepts discussed in the course concrete. The real concepts – abstraction, combination and naming – are the fundamental ideas and transcend languages. Scheme is just a convenient way of talking about them, and the language (it’s pretty simple) is explained as you go along anyway.

The videos were made in 1986, so they provide an interesting snapshot in time. The vast majority of the content is still 100% relevant today. The lectures on logic programming and streams (lazy evaluation) are interesting because in 1986 it wasn’t so clear how these ideas would turn out – eg. Miranda is mentioned, but Haskell was still to come. For example, the lecturer motivates the introduction of mutable state by showing how threading state between pure functions breaks encapsulation – whereas today you could use monads to make this more manageable.

With the benefit of 20 years of hindsight, there are a few things that come across as clunky. There’s a lot of interesting discussion about the relationship between car/cdr/cons and how you could realize those concepts. However, when we get on to more implementation-heavy stuff, car/cadr/etc. are used purely as the way to pluck data out of lists. Coming from an ML/Haskell background, I was crying out for pattern matching. Ironically, pattern matching is covered in lecture 4, but it’s never added to the standard vocabulary. Instead the lecturer keeps saying “to find the foo, we need to take the car of the cadr of the list”. It made me miss pattern matching a lot!
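For instance, in a language with pattern matching (Haskell here, with a made-up list of rows), “the car of the cadr” has a self-describing alternative:

    -- A list of (name, papers) rows, just for illustration.
    rows :: [(String, [String])]
    rows = [ ("Guy Steele",     ["Growing a Language"])
           , ("Gerald Sussman", ["SICP"]) ]

    -- Scheme-ish: take the car of the cadr (fst of the second element).
    secondName :: String
    secondName = fst (head (tail rows))

    -- With pattern matching, the shape of the data is spelled out:
    secondName' :: String
    secondName' = case rows of
      _ : (name, _) : _ -> name
      _                 -> error "need at least two rows"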

The video series is also a reflection of the MIT research interests of the time. If you’ve only ever used one language, it’s difficult to understand how that language affects the way you express and understand things. In the early days, it wasn’t clear what the ‘best way to program computers’ was. It probably still isn’t clear today. But in the 70’s and 80’s MIT was busy churning out language after language to try to better solve the “difficult AI problems” of the day. People were trying lots of different ideas – procedural style, rule-based systems, logic systems, objects, lazy evaluation, etc. – and consequently there was a rich ecosystem of programming languages which acted as vehicles to try these ideas out. This is the culture in which Scheme was developed, and the choice of topics covered in SICP reflects this period.

So what are the good bits of the lecture series? Higher-order functions are introduced early as a basic building block and thereafter taken for granted. I think that somewhat sets the tone as far as this ‘introductory’ course goes! The benefits of decoupling interfaces/contracts from concrete implementations are covered repeatedly via examples. And the lectures are peppered with “shocking” examples – eg. implementing car/cdr/cons just using lambda – which serve to illustrate the current topic but also tie together the deeper overall themes of the course.
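That car/cdr/cons-from-lambda trick translates directly out of Scheme; here’s a sketch of the same idea in Haskell:

    -- A pair is just a function that hands its two components to a consumer.
    cons :: a -> b -> ((a -> b -> c) -> c)
    cons x y = \f -> f x y

    car :: ((a -> b -> a) -> a) -> a
    car p = p (\x _ -> x)

    cdr :: ((a -> b -> b) -> b) -> b
    cdr p = p (\_ y -> y)

    main :: IO ()
    main = do
      let p = cons (1 :: Int) (2 :: Int)
      print (car p)  -- 1
      print (cdr p)  -- 2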

I enjoyed the wide range of topics covered, and the way that the main themes (abstraction and combination) continually recur throughout the lectures. I feel that this was a well-constructed course, carefully designed to lead students on a journey by showing them many facets of programming, whilst continually explaining how they interrelate. Additionally, I think that it all feels very genuine – it’s a reflection of what Sussman and Abelson were working on at the time. They often stress that the examples they use aren’t just toy examples. The metacircular evaluator isn’t there just for teaching purposes – it’s how these guys experimented with possible language features (say, call-by-name vs call-by-value).

The questions at the end of the lectures are often the best bit – many amount to “uhh, I didn’t understand this bit”, but the answers frequently shed new light on an area, so I’m glad they were asked!

Any downsides? The final lectures on interpretation and compilation get somewhat bogged down in prolonged hand-cranked examples. In an introductory course, it’s good to illustrate how abstract language concepts are realized on real hardware. But even allowing for that, I think they overstretched.

I was sad when I finished watching the last video. These videos may be twenty years old, but I enjoyed watching them and learned a lot. Mr Sussman and Mr Abelson: thank you for sharing your knowledge and understanding so effectively! 🙂