This is the first of several posts on the topic of Haskell’s laziness. After several weeks of playing, I’m coming to the conclusion that laziness-by-default is a hindrance rather than a virtue. Let’s start at the start though by trying to add some numbers together.
-- Non tail recursive; 5Mb of live objects at end.
mysum []     = 0
mysum (x:xs) = x + mysum xs
main = putStrLn $ show $ mysum $ take 100000 $ [1..]
As the comment says, this is a dumb version. It consumes 5Mb of memory because it’s not tail recursive.
Incidentally, after causing my machine to thrash several times during my experiments, I found it useful to use ‘ulimit’ to restrict the maximum heap size available to the process. Also, you can pass extra args to your Haskell app to get it to report real-time memory stats, like this:
ghc sum.hs && /bin/bash -c 'ulimit -Sv 100000; ./a.out +RTS -Sstderr'
Anyhow, the memory blowup is easy to fix; just pass an ‘accumulator’ parameter when you do the recursive call:
-- Tail recursive, but 3.5Mb of live objects at end.
mysuma acc []     = acc
mysuma acc (x:xs) = mysuma (acc+x) xs
main = putStrLn $ show $ mysuma 0 $ take 100000 $ [1..]
Hmm, it’s now tail recursive but it still consumes 3.5Mb? This is where Haskell’s laziness makes things quite different from OCaml and other strict languages. When we pass the accumulated value, Haskell does not actually evaluate the addition prior to making the recursive call. It will delay the computation until its value is actually required. So, on each recursive call, the accumulator looks like an unevaluated “1+2” and then “1+2+3” etc.
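A sketch of the reduction (using the mysuma above, with an explicit type signature added) makes the growth visible; each step carries the whole unevaluated chain along:

```haskell
-- The tail-recursive but lazy accumulator from above.
mysuma :: Int -> [Int] -> Int
mysuma acc []     = acc
mysuma acc (x:xs) = mysuma (acc + x) xs

-- mysuma 0 [1,2,3] reduces roughly as:
--   mysuma (0+1) [2,3]
--   mysuma ((0+1)+2) [3]
--   mysuma (((0+1)+2)+3) []
--   ((0+1)+2)+3        -- the whole thunk chain is forced only here
main :: IO ()
main = print (mysuma 0 (take 100000 [1..]))
```

The chain of thunks is as long as the list, which is where the O(n) heap usage comes from.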
We can fix this by explicitly telling Haskell to evaluate the addition prior to making the call:
-- Tail recursive, with 'seq' to force immediate evaluation of addition.
-- 40k of live objects at end.
mysumas acc []     = acc
mysumas acc (x:xs) = (acc+x) `seq` mysumas (acc+x) xs
main = putStrLn $ show $ mysumas 0 $ take 100000 $ [1..]
Finally we have a program which only consumes a tiny amount of heap space. But it took a surprising amount of effort. There’s lots more information about this situation on the haskell wiki.
15 replies on “Haskell is possibly too lazy for me”
I tend to prefer the OCaml route: make it reasonably easy to choose laziness, but don’t make it the default.
Did you try compiling with -O3? I would have thought GHC would optimize quite well in a simple case like this.
I agree that seems annoying. I like the idea of lazy evaluation, but I’d expect the compiler to optimize it – in this case surely deferring the computation must be more expensive, seeing as it’ll need to represent the computation somehow (presumably as a closure?)… Static analysis at compile time ought to be able to pick up on the most common cases of this.
The challenge of course is that some computation in Haskell depends on the laziness to work (infinite series etc.), but without knowing the language too well I’d hazard the guess that a compiler could definitively guarantee that a pretty large subset of computation won’t change in character by carrying it out immediately. I can’t see any case where computation without side effects and with only constant inputs would be affected, for example – in which case deciding whether to do it right away or defer calculation should be a pure time/space optimization issue.
For what it’s worth, I’d have written this as:
main = print . sum . take 100000000 $ [1..]
Note that “print = putStrLn . show”, and “sum” is in the Prelude. It happily runs in constant space as N grows.
Converting to an accumulating loop, since this recursion isn’t guarded under a constructor, I’d use bang patterns, and a worker/wrapper:
sum' xs = go 0 xs
  where go !n []     = n
        go n (x:xs) = go (n + x) xs
main = print . sum' . take 100000000 $ [1..]
which is the same thing, but cheaper. Usually, it’s best to use a precanned loop though:
import Data.List (foldl')
sum' xs = foldl' (+) 0 xs
main = print . sum' . take 100000000 $ [1..]
The real joy occurs once you start describing data structures whose components are only allocated on demand. This makes for a very flexible style of programming, as I guess you could expect.
Compile with -O2. That will also solve your problem. Being aware of space leaks and how to solve them is a great thing, but in practice, it’s only necessary when the code you write could behave differently due to laziness (i.e., when you very well may want laziness anyway). Otherwise, GHC’s strictness analysis does a fantastic job of making this change for you.
All you needed to do was tell GHC that you care how fast the code runs.
Did you try compiling with optimization so the strictness analyzer is used? Your first accumulator version takes up 84 bytes on the heap when I run it.
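For reference, a sketch of that workflow (the file name sum.hs is an assumption; -s asks the runtime for a heap and GC summary at exit):

```shell
# Compile with optimization so GHC's strictness analyser can kick in,
# then print a memory/GC summary when the program exits.
ghc -O2 sum.hs -o sum
./sum +RTS -s
```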
I also commented on the needless duplication of work in the accumulator here: http://reddit.com/r/programming/info/6cuf8/comments/c03hz2k
mysumas acc []     = acc
mysumas acc (x:xs) = n `seq` mysumas n xs
  where n = acc+x
Which leads to the bang pattern version. Just as you sometimes need to be explicitly lazy in a strict language, so you sometimes need to be explicitly strict in a lazy language.
One word: “foldl”.
But I agree – I also don’t like laziness by default.
In practice, laziness doesn’t make coding difficult. I think it actually makes coding easier. It does, however, take time to get the hang of, probably more than a few weeks of playing.
In the same way that most experienced Haskell programmers use folds instead of writing explicitly recursive code, they use strict versions of the corresponding combinators when strict behavior is what makes sense. There’s rarely a need to pepper code with lots of seq and $! annotations; let combinators do the work.
So if laziness seems hard to deal with, keep trying. Eventually it becomes second nature.
I just stumbled across the language Disciple today, which is a strict-by-default Haskell plus some interesting features, such as effect typing so that monads are not necessary. I haven’t tried it yet but if, apart from the laziness, you like Haskell, it seems like it might be an interesting thing to check out.
You’re perfectly correct that it can be tough even for the experts to evaluate the time and space usage of a given bit of code. It’s an issue. However, you can learn a few combinators which solve the problem 90% of the time.
For example, when you see “I want to compute a value from a list, strictly”, you should think “foldl'”:
Prelude Data.List> foldl' (+) 0 $ take 1000000 [1..]
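Spelled out as a complete program (a sketch; foldl' comes from Data.List):

```haskell
import Data.List (foldl')

-- foldl' forces the accumulator at every step, so no thunk chain builds up
-- and the sum runs in constant space.
main :: IO ()
main = print (foldl' (+) 0 (take 1000000 [1..]))
```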
I would simply use:
module Main where
mysumas !acc []     = acc
mysumas !acc (x:xs) = mysumas (acc+x) xs
main = putStrLn $ show $ mysumas 0 $ take 100000 $ [1..]
And compile with ‘ghc -fbang-patterns --make Main.hs’.
Some additional notes that led to this solution:
1) It is clear that this requires a tail-recursive loop as it produces non-lazy data. (There is some better term for ‘non-lazy’ but I’m sure an expert can help you there)
2) When doing tail-recursion, you want to make your accumulator strict. This is easily done with the ! bit
3) Normally I would not even define a function that traverses lists in such a trivial manner by using pattern-matching. I would simply use fold. And in this case the correct fold (namely for non-lazy data) is foldl’. This is a reasonably quick-learned fact if one spends time in the Haskell community (be it the mailing-list or the IRC channel). In fact, it would probably be the first answer to anyone asking how to make this code better, by most any Haskeller.
One additional note. It is unfair to claim this is an issue with Haskell. I see that you are on the O’Caml planet-list. I am 100% sure that you would write this as a tail-recursive loop in O’Caml as well. So the first sample you show is not even a fair objection.
foldl' (+) 0
Be sure to explore both sides of the laziness coin. As your post shows, some programs you write in O’Caml aren’t written the same way in Haskell (the corrections are really minor, though). But there are also Haskell programs you can’t write that way in O’Caml, these are where laziness shines.
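For a taste of the other side, here is the classic example (a sketch, not from the post): an infinite list defined in terms of itself, which laziness evaluates only as far as demanded:

```haskell
-- An infinite Fibonacci stream; only the prefix we demand is ever computed.
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

main :: IO ()
main = print (take 10 fibs)  -- [0,1,1,2,3,5,8,13,21,34]
```

In a strict language the same idea needs explicit thunks, generators, or a lazy-stream library; here it is just an ordinary list.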